[no-problem/9812/astro-ph9812210.html | ar5iv | text]
# A Simultaneous ASCA and RXTE Long Look at the Seyfert 1 Galaxy MCG–6-30-15
## 1 Introduction
The current paradigm for AGN is a central engine consisting of an accretion disk surrounding a supermassive black hole (e.g. see review by Rees 1984). The main source of power is the release of gravitational potential energy as matter falls towards the central black hole. Much of this energy is released in the form of X-rays, some fraction of which are reprocessed by matter in the AGN.
Careful study of X-ray reprocessing mechanisms can give much information about the immediate environment of the accreting black hole. The effects of reprocessing can often be observed in the form of emission and absorption features in the X-ray spectra of AGNs. In Seyfert 1 nuclei, approximately half of the X-rays are reflected off the inner regions of the accretion disk. The reflected spectrum is complicated, with features arising from photoabsorption, iron fluorescence and Compton scattering (George & Fabian 1991). The strength, shape and broadening of the features of the reflected spectrum are diagnostics of the geometry, ionization state, and iron abundance of the accretion disk.
MCG–6-30-15 is a Seyfert 1 galaxy that is both bright and nearby ($`z=0.008`$). The spectral features above a few keV can be modeled well by a power-law continuum plus reflection component that encompasses the effects of reflection of this continuum by the inner regions of the accretion disk. The principal observables of this reflection component are a strong iron fluorescence line at $`6.4`$ keV and a Compton ‘hump’ peaking at 20–30 keV. The iron line together with the reflection component are important diagnostics for the geometry and physics of the X-ray continuum source. The strength of the emission line relative to the reflection hump depends largely on the abundance of iron relative to hydrogen in the disk. Disentangling the abundance from the absolute normalization of the reflection component is an important step in constraining physical models of AGN central regions.
Previous studies of GINGA spectra have suggested that the reflection component is present in many Seyfert 1 AGNs (Nandra & Pounds 1994). Our recent observations using the Rossi X-ray Timing Explorer (RXTE) have clearly confirmed the presence of reflection in MCG–6-30-15.
## 2 Observations
MCG–6-30-15 was observed by RXTE for 400 ks over the period from 1997 August 4 to 1997 August 12 by both the Proportional Counter Array (PCA) and High-Energy X-ray Timing Experiment (HEXTE) instruments. It was simultaneously observed by the Advanced Satellite for Cosmology and Astrophysics (ASCA) Solid-state Imaging Spectrometers (SIS) for 200 ks over the period 1997 August 3 to 1997 August 10 with a half-day gap in the middle. We concentrate primarily on the RXTE observation.
PCA light curves and spectra are extracted from only the top Xenon layer using the ftools v.4.0 software. We use only combined data from PCUs 0, 1, and 2, since PCUs 3 and 4 are sometimes turned off due to occasional problems with discharge. Good time intervals were selected to exclude any Earth occultations or South Atlantic Anomaly (SAA) passages, and to ensure stable pointing.
We generate background data using pcabackest v2.0c in order to estimate the internal background caused by interactions between the radiation/particles and the detector/spacecraft at the time of observation. This is done by matching the conditions of observations with those in various model files. The model files that we chose were constructed using the VLE rate (one of the rates in PCA Standard 2 science array data that is defined to be the rate of events which saturate the analog electronics) as the tracer of the particle background.
The PCA response matrix for the RXTE data set was provided by the RXTE Guest Observer Facility (GOF) at Goddard Space Flight Center. Background models and response matrices are representative of the most up-to-date PCA calibrations.
The net HEXTE spectra were generated by subtracting spectra taken at the off-source positions from the on-source data. Time intervals were chosen to exclude 30 seconds prior to and following SAA passages. This avoids periods when the internal background is changing rapidly. We use response matrices provided by the HEXTE team at the University of California, San Diego. The relative normalizations of the PCA and the two HEXTE clusters are allowed to vary, due to uncertainties (less than about 5 per cent) in the HEXTE deadtime measurement.
ASCA data reduction was carried out using ftools versions 4.0 and 4.1 with the standard calibration provided by the ASCA GOF. Detected SIS events with a grade of 0, 2, 3 or 4 are used for the analysis. One of the standard data selection criteria, br earth (the elevation angle of the source above the bright Earth rim), is found to have little effect on the soft X-ray data from the SIS. We thus use approximately 231 ks of SIS data from each detector for spectral analysis. The source counts are collected from a region centred on the X-ray peak, within $`\sim `$4 arcmin for the SIS and 5 arcmin for the GIS. The background data are taken from a (nearly) source-free region in the same detector over the same observing time.
Figure 1 shows the ASCA S0 160–2700 pha-channel ($`\sim `$0.6–10 keV) and the RXTE PCA 1–129 pha-channel ($`\sim `$2–60 keV) background-subtracted light curves. There is a gap of $`\sim `$60 ks in the ASCA light curve, during which the satellite observed IC4329A while MCG–6-30-15 underwent a large flare observed by RXTE. Significant variability can be seen in both light curves on short and long timescales. Flares and minima are seen to correlate temporally in both light curves.
## 3 Spectral Fits
We restrict the ASCA and PCA data analysis to 3–10 keV and 3–20 keV respectively (the PCA on-source spectrum for MCG–6-30-15 is largely background dominated above 20 keV). The lower energy bound of 3 keV is chosen to remove the need to model photoelectric absorption due to Galactic ISM material, or due to the warm absorber that is known to be present in this object (Reynolds et al. 1995). HEXTE data are restricted to 16–50 keV in order that we may adequately model the reflection hump. We also add 0.5 per cent systematic errors to the PCA data.
### 3.1 Spectral Features
A nominal fit using a simple power law confirms the clear existence of a redshifted broad iron line at $`\sim `$6.0 keV and a reflection component above 10 keV, as shown in Lee et al. (1998, in press). A plot of the ratio of data to continuum, shown in figure 2, demonstrates the significance of these features.
As further evidence for the existence of the reflection component and the good agreement between ASCA and RXTE, figure 3 shows that the residuals are essentially flat when all three data sets (i.e. ASCA, RXTE PCA + HEXTE) are fit with a multicomponent model. This model consists of a power-law plus reflection component (Magdziarz & Zdziarski 1995) to model the primary and reflected continuum, with an additional Gaussian component to represent the iron $`\mathrm{K}\alpha `$ emission. Fitting the RXTE data in the energy range 3–50 keV, we obtain a best-fit power-law slope $`\mathrm{\Gamma }=2.05_{-0.06}^{+0.13}`$. The line energy is $`5.94_{-0.19}^{+0.15}`$ keV, with line width $`\sigma =0.62_{-0.25}^{+0.20}`$ keV. The equivalent width is EW = $`305_{-19}^{+6}`$ eV, and the reduced $`\chi ^2`$ for the overall fit is 0.51 for 48 degrees of freedom.
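As a cross-check of the quoted equivalent width, one can integrate the line-to-continuum ratio directly. The short sketch below is purely illustrative: the normalisations `A` and `K` are hypothetical placeholder values, not numbers from the fit; only the shape parameters are taken from the text.

```python
import numpy as np

# Shape parameters from the fit quoted above; A and K are HYPOTHETICAL
# normalisations (photons/cm^2/s/keV at 1 keV; total line photons/cm^2/s).
gamma, e_line, sig = 2.05, 5.94, 0.62
A, K = 1.0e-2, 7.0e-5

e = np.linspace(3.0, 10.0, 4001)          # energy grid, keV
cont = A * e ** (-gamma)                  # power-law continuum
line = K * np.exp(-0.5 * ((e - e_line) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

ew = np.trapz(line / cont, e)             # equivalent width in keV
print(f"EW = {1e3 * ew:.0f} eV")          # ~ K / (A * e_line**-gamma) for a narrow line
```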
The improved S/N (as compared to GINGA), along with the broad waveband coverage afforded by RXTE, has allowed us not only to detect features associated with Compton reflection but also to set bounds on the abundances as a function of reflective fraction. Figure 4 shows the 95 per cent confidence contours obtained by fitting in the energy range 3–50 keV; the best-fit values for the reflective fraction and for the elemental abundances (with the abundances of the lighter elements set equal to that of iron) are $`0.78\pm 0.31`$ and $`0.76_{-0.29}^{+0.33}`$ solar, respectively. The reflective fraction is defined such that a value of unity implies that the X-ray source is subtending $`2\pi `$ sr of the sky (i.e. $`\frac{\mathrm{\Omega }}{2\pi }=1`$).
In order to test the consistency of our results, we perform fits similar to those above on a $`\sim `$187 ks subset (from the first portion) of the RXTE observation, and find that the results are nearly identical. This gives us confidence that our derived parameters have physical meaning. Implications of the large EW of the iron line will be addressed in future publications.
## 4 Summary
Previous studies of the reflection of X-rays from optically thick cold matter in the central regions of AGNs have concluded that a reflected spectrum exists and is observed above the primary continuum. However, most studies in the past have only been able to consider the iron line alone, due in part to a lack of adequate waveband coverage and/or good spectral resolution.
The improved S/N (as compared to GINGA), along with the high-energy coverage afforded by RXTE coupled with simultaneous ASCA observations, has allowed us not only to detect such features in our observations of MCG–6-30-15 but also to set bounds on the abundances as a function of reflective fraction (figure 4) at the 95 per cent confidence level. This will have important consequences for understanding the geometry of, and constraining the processes in, AGN central regions.
###### Acknowledgements.
We thank all the members of the RXTE GOF for answering our inquiries in such a timely manner, with special thanks to William Heindl and the HEXTE team for help with HEXTE data reduction. We also thank Keith Jahoda for explanations of PCA calibration issues. JCL thanks the Isaac Newton Trust, the Overseas Research Studentship programme (ORS) and the Cambridge Commonwealth Trust for support. ACF thanks the Royal Society for support. CSR thanks the National Science Foundation for support under grant AST9529175, and NASA for support under the Long Term Space Astrophysics grant NASA-NAG-6337. KI and WNB thank PPARC and NASA RXTE grant NAG5-6852 for support, respectively.
[no-problem/9812/nucl-th9812037.html | ar5iv | text]
# Rigidity and Normal Modes in Random Matrix Spectra
## 1 Introduction
Random matrices have been used to describe level correlations in many different areas of physics, from nuclear levels to acoustic resonances to QCD. See, e.g., the review. The most important feature distinguishing the eigenvalues of a random matrix from a sequence of uncorrelated levels is the existence of strong repulsion between neighbouring eigenvalues. This repulsion is conveniently pictured by the Coulomb gas model adopted by Dyson in 1962. Dyson exploited the equivalence between the Gaussian ensembles of random matrices and a classical problem in electrostatics in which identical uniformly charged parallel wires are placed on a line in a harmonic confining external field. The eigenvalues of the random matrix experience a repulsion which is analogous to the Coulomb repulsion between the line charges. This level repulsion leads to the rigidity of random matrix spectra, which is most easily illustrated by the fact that the variance of the number of eigenvalues in an (unfolded) interval of length $`L`$ grows logarithmically with $`L`$. This is in contrast to the linear behaviour of the number variance for a sequence of uncorrelated levels.
All spectral properties of random matrices can be calculated from the joint probability distribution of the eigenvalues. Given the Coulomb analogy, it is natural to consider the independent “normal modes” of the joint probability density, which describe the correlated motion of eigenvalues about their most probable values. Evidently, these normal modes have a simple interpretation in the Coulomb analogy where they describe the independent oscillations of charges on a lattice about their equilibrium positions. An eigenvalue can be associated with each normal mode, and these eigenvalues form the normal mode spectrum. In the present case, soft (hard) modes correspond to large (small) amplitude fluctuations in the random matrix spectrum. These collective degrees of freedom prove to be useful in determining the long-range spectral fluctuation measures of random matrices including the number variance mentioned above.
The purpose of this paper is to calculate the normal modes of the Gaussian random matrix ensembles and to use them for a determination of the number variance. For comparison, we will also determine the corresponding normal modes and the number variance for a sequence of uncorrelated levels. We emphasise that the results of this exercise are neither new nor exact. Our intention is rather to offer a new way to regard the fluctuations in random matrix spectra and to emphasise the value of thinking about the correlated motion of eigenvalues.
## 2 Normal Modes for the Gaussian Ensembles
The Gaussian ensembles of $`N\times N`$ matrices have a joint probability distribution of the eigenvalues $`x_1,x_2,\ldots,x_N`$ which is
$$P_{N\beta }(x_1,x_2,\ldots,x_N)=C_{N\beta }\prod_{1\le i<j\le N}|x_i-x_j|^\beta \exp \left(-\frac{\beta }{2}N\sum_{i=1}^{N}x_i^2\right),$$
(1)
where $`\beta =1,2,4`$ corresponds to the Gaussian orthogonal, unitary, and symplectic ensembles, respectively. This distribution leads to an average level density $`\rho (x)`$ which is independent of $`\beta `$. In the large-$`N`$ limit:
$$\rho (x)=\frac{N}{\pi }\sqrt{2-x^2}.$$
(2)
The maxima of $`P_{N\beta }`$ correspond to the most probable locations of the eigenvalues. It is sufficient to consider the single maximum with $`x_i<x_{i+1}`$, and we denote the maximum value of $`P_{N\beta }`$ by $`P_{N\beta }^0`$. One immediately obtains the following set of equations which determine the equilibrium positions of the eigenvalues
$$\sum_{j\ne i}\frac{1}{x_i-x_j}-Nx_i=0.$$
(3)
In the vicinity of the maximum, we approximate the logarithm of $`P_{N\beta }`$ by
$$\ln P_{N\beta }=\ln P_{N\beta }^0-\frac{1}{2}\beta \sum_{i,j}\delta x_iC_{ij}\delta x_j.$$
(4)
The matrix $`C`$ is defined as
$$C_{ij}=-\frac{1}{\beta }\frac{\partial ^2}{\partial x_i\partial x_j}\ln P_{N\beta }$$
(5)
evaluated at the maximum. The elements of $`C`$ are
$`C_{ii}`$ $`=`$ $`{\displaystyle \sum_{j\ne i}}{\displaystyle \frac{1}{(x_i-x_j)^2}}+N`$ (6)
$`C_{ij}`$ $`=`$ $`-{\displaystyle \frac{1}{(x_i-x_j)^2}},\quad i\ne j.`$ (7)
The eigenvectors of $`C`$ are the normal modes of the random matrix spectrum. Each of these eigenvectors describes a statistically independent mode of correlated motion of the eigenvalues of the random matrix. Clearly, they provide the natural basis in which to describe the behaviour of $`P_{N\beta }`$ in the vicinity of its maximum.
The solution to (3) for an $`N\times N`$ matrix is given by the zeros of the Hermite polynomial, $`H_N`$
$$H_N(\sqrt{N}x_i)=0.$$
(8)
This result was obtained by Stieltjes (see appendix A.6 in ref. ). It follows from the observation that Hermite’s differential equation,
$$H_N^{\prime \prime }(x)-2xH_N^{\prime }(x)+2NH_N(x)=0,$$
(9)
reduces to (3) at the zeros of $`H_N`$.
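As a quick numerical illustration (our addition, not part of the original argument), one can verify that the scaled Hermite zeros indeed satisfy the equilibrium condition (3):

```python
import numpy as np

N = 40
t, _ = np.polynomial.hermite.hermgauss(N)   # Gauss points = zeros of H_N
x = t / np.sqrt(N)                          # so that H_N(sqrt(N) x_i) = 0, eq. (8)

diff = x[:, None] - x[None, :]
np.fill_diagonal(diff, np.inf)              # drop the j = i term from the sum
residual = np.sum(1.0 / diff, axis=1) - N * x   # left-hand side of eq. (3)
print(np.max(np.abs(residual)))             # tiny: the zeros are the equilibrium points
```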
The results for the eigenvalues and eigenvectors of the matrix $`C`$ are readily stated. We prove these results in sections 3 and 4. The eigenvalues $`\lambda _k`$ of $`C`$, which satisfy
$$\sum_{j=1}^{N}C_{ij}\delta y_j^{(k)}=\lambda _k\delta y_i^{(k)},$$
(10)
are remarkably simple. Namely,
$$\lambda _k=kN,$$
(11)
where $`k`$ runs from 1 to $`N`$. The $`i`$-th component of the corresponding eigenvector is a polynomial of order $`k-1`$ evaluated at the equilibrium value of $`x_i`$. The first four normalised eigenvectors can be written as
$`\delta y_i^{(1)}`$ $`=`$ $`{\displaystyle \frac{1}{N^{1/2}}}`$ (12)
$`\delta y_i^{(2)}`$ $`=`$ $`\left({\displaystyle \frac{2}{N-1}}\right)^{1/2}x_i`$ (13)
$`\delta y_i^{(3)}`$ $`=`$ $`\left({\displaystyle \frac{N-1}{N(N-2)}}\right)^{1/2}\left(1-{\displaystyle \frac{2N}{N-1}}x_i^2\right)`$ (14)
$`\delta y_i^{(4)}`$ $`=`$ $`\left({\displaystyle \frac{2(2N-3)^2}{(N-1)(N-2)(N-3)}}\right)^{1/2}\left(x_i-{\displaystyle \frac{2N}{2N-3}}x_i^3\right).`$ (15)
In the large-$`N`$ limit, the eigenvector component $`\delta y_i^{(k)}`$ is given by the value of the Chebyshev polynomial of the second kind, $`U_{k-1}(x)`$, evaluated at the point $`x=x_i/\sqrt{2}`$. One finds that (up to corrections of order $`1/N`$)
$$\delta y_i^{(k)}=N^{-1/2}U_{k-1}\left(\frac{x_i}{\sqrt{2}}\right).$$
(16)
This result is natural when one notices that the identity
$$\sum_{i=1}^{N}\delta y_i^{(k)}\delta y_i^{(l)}=\delta _{kl}$$
(17)
can be approximated for large $`N`$ by
$$\int_{-\sqrt{2}}^{\sqrt{2}}dx\,\rho (x)\,\delta y^{(k)}(x)\,\delta y^{(l)}(x)=\delta _{kl}$$
(18)
with $`\rho (x)`$ given by (2). Eqn. (18) is seen to be the orthogonality relation for the Chebyshev polynomials of the second kind.
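These statements are straightforward to check numerically. The sketch below (a verification aid, not from the paper) builds $`C`$ from the scaled Hermite zeros and confirms the linear spectrum (11) and the exact second eigenvector (13):

```python
import numpy as np

N = 40
t, _ = np.polynomial.hermite.hermgauss(N)
x = t / np.sqrt(N)

diff = x[:, None] - x[None, :]
np.fill_diagonal(diff, 1.0)                 # placeholder; diagonal set below
C = -1.0 / diff ** 2                        # off-diagonal elements, eq. (7)
np.fill_diagonal(C, 0.0)
np.fill_diagonal(C, -C.sum(axis=1) + N)     # eq. (6)

lam, vecs = np.linalg.eigh(C)               # eigenvalues in ascending order
print(np.allclose(lam, N * np.arange(1, N + 1)))       # eq. (11): lambda_k = k N

v2 = vecs[:, 1] * np.sign(vecs[-1, 1])      # fix the arbitrary overall sign
print(np.allclose(v2, np.sqrt(2.0 / (N - 1)) * x))     # eq. (13) holds exactly
```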
## 3 Eigenvalues and Eigenvectors of $`C`$ for Finite $`N`$
To prove relations (11) to (15), begin by looking at the definition of $`C`$ stated in (6) and (7), and let $`C`$ act on a power of $`x_i`$. We obtain
$`{\displaystyle \sum_{j=1}^{N}}C_{ij}x_j^k`$ $`=`$ $`-{\displaystyle \sum_{j\ne i}}{\displaystyle \frac{x_j^k-x_i^k}{(x_j-x_i)^2}}+Nx_i^k`$
$`=`$ $`-{\displaystyle \sum_{l=0}^{k-2}}(l+1)x_i^l{\displaystyle \sum_{j\ne i}}x_j^{k-l-2}-kx_i^{k-1}{\displaystyle \sum_{j\ne i}}{\displaystyle \frac{1}{x_j-x_i}}+Nx_i^k`$
$`=`$ $`-{\displaystyle \sum_{l=0}^{k-2}}(l+1)\sigma _{k-l-2}x_i^l+{\displaystyle \frac{1}{2}}k(k-1)x_i^{k-2}+(k+1)Nx_i^k.`$ (19)
Here it is understood that the sums over $`l`$ vanish for $`k\le 1`$ and that
$$\sigma _m\equiv \sum_{j=1}^{N}x_j^m.$$
(20)
The second identity in (19) can be proved by induction, and eqn. (3) can be used to obtain the third identity. It follows from (19) that the $`i`$-th component of the $`k`$-th eigenvector, $`\delta y_i^{(k)}`$, is a polynomial of order $`k-1`$ in $`x_i`$ and that the corresponding eigenvalue is the coefficient, $`kN`$, of the highest order term, $`x_i^{k-1}`$.
The sums $`\sigma _m`$ can be determined using the relation
$$\sum_{i=1}^{N}\sum_{j=1}^{N}C_{ij}x_j^k=N\sum_{i=1}^{N}x_i^k=N\sigma _k,$$
(21)
which, together with (19), leads to the recursion relation
$$Nk\sigma _k=\sum_{l=0}^{k-2}(l+1)\sigma _{k-l-2}\sigma _l-\frac{1}{2}k(k-1)\sigma _{k-2}.$$
(22)
One observes directly that $`\sigma _0=N`$. Since the $`x_i`$ come in pairs $`\pm x`$ (or are $`0`$), it follows that $`\sigma _{2k-1}=0`$ for all $`k\ge 1`$. This fact and the recursion relation above permit the determination of all of the sums $`\sigma _k`$.
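For reference, a minimal implementation of this determination (our sketch), checked against direct power sums over the scaled Hermite zeros:

```python
import numpy as np

def sigma_finite(N, kmax):
    """Even power sums from the recursion (22); sigma_0 = N, odd sums vanish."""
    sig = [float(N)] + [0.0] * kmax
    for k in range(2, kmax + 1, 2):
        s = sum((l + 1) * sig[k - l - 2] * sig[l] for l in range(k - 1))
        sig[k] = (s - 0.5 * k * (k - 1) * sig[k - 2]) / (N * k)
    return sig

N = 30
t, _ = np.polynomial.hermite.hermgauss(N)
x = t / np.sqrt(N)
sig = sigma_finite(N, 8)
for k in range(0, 9, 2):                    # compare with direct power sums
    print(k, sig[k], np.sum(x ** k))
```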
Knowing the values of the sums $`\sigma _k`$, it is now possible to obtain all eigenvectors of $`C`$ using (19) and to verify directly that the vectors (12) to (15) are indeed eigenvectors of $`C`$ with the eigenvalues stated in (11).
## 4 The Large-$`N`$ Limit
In this section we show that, in the large-$`N`$ limit, the eigenvectors of $`C`$ are related to the Chebyshev polynomials of the second kind as indicated in equation (16). Define a generating function $`G`$ by the equation
$$G(x)\equiv \sum_{k=0}^{\infty }\sigma _{2k}x^{2k},$$
(23)
where the $`\sigma _k`$ are now defined by the large-$`N`$ form of equation (22) which reads
$$Nk\sigma _k=\sum_{l=0}^{k-2}(l+1)\sigma _{k-l-2}\sigma _l.$$
(24)
Multiplication by $`x^{k-1}`$ and subsequent summation over $`k`$ leads to an equation which is equivalent to the following differential equation for $`G`$:
$$N\frac{\mathrm{d}}{\mathrm{d}x}G=(xG)\frac{\mathrm{d}}{\mathrm{d}x}(xG).$$
(25)
Since $`\sigma _0=N`$, it follows that $`G(0)=N`$, and that
$$G(x)=N\frac{1-\sqrt{1-2x^2}}{x^2}.$$
(26)
From the definition of $`G`$ in expression (23), one observes that the $`\sigma _{2k}`$ are simply
$$\sigma _{2k}=N\frac{(2k-1)!!}{(k+1)!}.$$
(27)
These values of $`\sigma _{2k}`$ can also be obtained from the integral
$$\sigma _{2k}=\int_{-\sqrt{2}}^{\sqrt{2}}dx\,x^{2k}\rho (x),$$
(28)
where $`\rho (x)`$ is the Wigner semi-circle describing the average level density (2).
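A small consistency check (ours, not the paper's): iterating the large-$`N`$ recursion (24) reproduces the closed form (27).

```python
from math import factorial

def dfac(m):                                # double factorial, with (-1)!! = 1
    return 1 if m <= 0 else m * dfac(m - 2)

def sigma_large_N(N, kmax):
    """Even power sums from the large-N recursion, eq. (24)."""
    sig = [float(N)] + [0.0] * kmax
    for k in range(2, kmax + 1, 2):
        sig[k] = sum((l + 1) * sig[k - l - 2] * sig[l]
                     for l in range(k - 1)) / (N * k)
    return sig

N = 1000.0
sig = sigma_large_N(N, 10)
for k in range(6):
    print(k, N * dfac(2 * k - 1) / factorial(k + 1), sig[2 * k])   # eq. (27)
```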
With these expressions for the sums $`\sigma _{2k}`$, one can construct the eigenvectors of $`C`$ using a Gram-Schmidt orthogonalisation procedure starting with a non-orthogonal basis of vectors with elements $`x_i^k`$. The coefficients to be determined in the Gram-Schmidt procedure are
$$a_{kj}=\sum_{i=1}^{N}x_i^{k-1}\delta y_i^{(j)}=\int_{-\sqrt{2}}^{\sqrt{2}}dx\,x^{k-1}\delta y^{(j)}(x)\rho (x).$$
(29)
The structure of the integrals implies that the $`\delta y^{(k)}(x)`$ are simply the polynomials orthogonal on the interval $`[-\sqrt{2},\sqrt{2}]`$ with weight $`\rho (x)`$. These polynomials are recognised as the Chebyshev polynomials of the second kind, appropriately scaled. This shows that the elements of the eigenvectors $`\delta y_i^{(k)}`$ are given by (16) in the large-$`N`$ limit.
## 5 The Number Variance for the Gaussian Ensembles
The small amplitude, quadratic approximation to $`P_{N\beta }`$ in terms of its normal modes is useful in calculations of long range spectral fluctuation measures. We illustrate this by determining the asymptotic form of the number variance for the Gaussian ensembles. The number variance, $`\mathrm{\Sigma }^2(L)`$, is defined as the variance of the number of eigenvalues in an interval of length $`L`$. (Here, it is assumed that the spectrum has been “unfolded” so that the average spacing between adjacent eigenvalues is $`1`$.) It is well-known that the exact number variance $`\mathrm{\Sigma }_\beta ^2(L)`$ for the Gaussian ensembles has the form
$$\mathrm{\Sigma }_\beta ^2(L)=\frac{2}{\beta \pi ^2}\ln L+K_\beta +O(L^{-1}),$$
(30)
where $`K_\beta `$ is a constant. The leading logarithmic term in this expression is indicative of a rigid sequence of numbers. This is to be contrasted with the linear $`L`$-dependence which characterises the number variance for a sequence of uncorrelated levels.
We wish to reproduce the logarithmic term in the expression for the number variance using the normal modes. Consider an interval of the unfolded eigenvalue spectrum from $`-L/2`$ to $`L/2`$. For sufficiently large $`N`$, the level density of the original spectrum corresponding to this part of the unfolded spectrum has the constant value $`\rho (0)=\sqrt{2}N/\pi `$. Within this interval, the equilibrium position of the $`k`$-th eigenvalue is therefore
$$x_k^{(0)}=\frac{\pi k}{\sqrt{2}N}.$$
(31)
Fluctuations in the eigenvalue spectrum will move the $`k`$-th eigenvalue to a new position which can be written as
$$x_k=x_k^{(0)}+\sum_{n=1}^{N}\alpha _n\delta y_k^{(n)}=\frac{\pi k}{\sqrt{2}N}+\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\alpha _nU_{n-1}\left(\frac{\pi k}{2N}\right).$$
(32)
This means that, when fluctuations are present, the eigenvalues at the (unfolded) energies $`\pm L/2`$ will have eigenvalue numbers
$$k_\pm =\pm \frac{L}{2}-\frac{\sqrt{2N}}{\pi }\sum_{n=1}^{N}\alpha _nU_{n-1}\left(\pm \frac{\pi L}{4N}\right).$$
(33)
Since the number of levels in the interval is now $`k_+-k_-`$, it follows that the number variance can be approximated by the ensemble averages
$`\mathrm{\Sigma }_\beta ^2(L)`$ $`\approx `$ $`\langle (k_+-k_-)^2\rangle -\langle k_+-k_-\rangle ^2`$
$`=`$ $`{\displaystyle \frac{8N}{\pi ^2}}{\displaystyle \sum_{n=1}^{[\frac{N}{2}]}}\langle \alpha _{2n}^2\rangle U_{2n-1}^2\left({\displaystyle \frac{\pi L}{4N}}\right)`$
$`=`$ $`{\displaystyle \frac{4}{\beta \pi ^2}}{\displaystyle \sum_{n=1}^{[\frac{N}{2}]}}{\displaystyle \frac{1}{n}}U_{2n-1}^2\left({\displaystyle \frac{\pi L}{4N}}\right).`$ (34)
Here, we have made use of the fact that terms involving Chebyshev polynomials of even order cancel and that the averages over the coefficients are
$$\langle \alpha _i\alpha _j\rangle =\frac{1}{j\beta N}\delta _{ij}.$$
(35)
This last result can be understood by expanding the $`\delta x_i`$ in the eigenvectors of $`C`$,
$$\delta x_i=\sum_{k=1}^{N}\alpha _k\delta y_i^{(k)}.$$
(36)
The joint probability distribution can now be interpreted as the distribution of the $`\alpha _k`$. In the approximation (4), this distribution becomes
$`P_{N\beta }(\alpha _1,\alpha _2,\ldots,\alpha _N)`$ $`=`$ $`P_{N\beta }^0e^{-\frac{1}{2}\beta \sum_{i,j}\delta x_iC_{ij}\delta x_j}`$ (37)
$`=`$ $`P_{N\beta }^0e^{-\frac{1}{2}\beta \sum_k\lambda _k\alpha _k^2}`$
$`=`$ $`P_{N\beta }^0{\displaystyle \prod_k}e^{-\frac{1}{2}\beta \lambda _k\alpha _k^2},`$
from which it is clear that the $`\alpha _k`$ are independent and Gaussian distributed with variance (35).
To calculate the sum over the Chebyshev polynomials, we introduce a new variable $`\theta `$ defined by $`\mathrm{cos}\theta =\pi L/4N`$ for $`L<4N`$ and set $`K\equiv [N/2]`$. The terms in the sum can be rewritten as integrals, and after an interchange of integration and summation we obtain
$`{\displaystyle \sum_{n=1}^{K}}{\displaystyle \frac{1}{n}}U_{2n-1}^2\left({\displaystyle \frac{\pi L}{4N}}\right)`$ $`=`$ $`{\displaystyle \frac{2}{\sin ^2\theta }}{\displaystyle \int_0^\theta }d\theta ^{\prime }{\displaystyle \frac{\sin 2K\theta ^{\prime }\,\sin 2(K+1)\theta ^{\prime }}{\sin 2\theta ^{\prime }}}.`$ (38)
The value $`L=0`$ corresponds to $`\theta =\pi /2`$. For this value, the integral equals zero. We can now express the approximate number variance as
$$\mathrm{\Sigma }_\beta ^2(L)\approx \frac{8}{\beta \pi ^2\sin ^2\theta }\int_0^{\frac{\pi }{2}-\theta }d\theta ^{\prime }\frac{\sin 2K\theta ^{\prime }\,\sin 2(K+1)\theta ^{\prime }}{\sin 2\theta ^{\prime }}.$$
(39)
For fixed $`L`$ and large $`K`$, we see that
$$\frac{\pi }{2}-\theta =\frac{\pi L}{4(2K)}.$$
(40)
In this limit, our approximation to the number variance can be written as
$$\mathrm{\Sigma }_\beta ^2(L)\approx \frac{4}{\beta \pi ^2}\int_0^{\frac{\pi L}{4}}dx\,\frac{\sin ^2x}{x}=\frac{2}{\beta \pi ^2}\left[\ln L-\mathrm{Ci}\left(\frac{\pi L}{2}\right)+\ln \frac{\pi }{2}+\gamma \right],$$
(41)
where $`\mathrm{Ci}\left(\pi L/2\right)`$ is the cosine integral and $`\gamma `$ is Euler's constant. For large values of $`L`$, this function can be approximated by
$$\mathrm{\Sigma }_\beta ^2(L)\approx \frac{2}{\beta \pi ^2}\left(\ln L+\ln \frac{\pi }{2}+\gamma \right).$$
(42)
This expression contains a logarithmic term identical to that in (30). The leading term in the number variance for large $`L`$ is thus obtained correctly by the Gaussian approximation to $`P_{N\beta }`$. The Gaussian approximation is not sufficient to reproduce the constant term in (30).
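The quality of this approximation is easy to probe numerically. The sketch below (our illustration, valid under the stated assumptions $`L\ll N`$; we take $`\beta =1`$) evaluates the mode sum (34) and compares it with the asymptotic form (42):

```python
import numpy as np
from scipy.special import eval_chebyu

def sigma2_modes(L, N, beta=1):
    """Normal-mode sum for the number variance, eq. (34)."""
    n = np.arange(1, N // 2 + 1)
    u = eval_chebyu(2 * n - 1, np.pi * L / (4.0 * N))
    return 4.0 / (beta * np.pi ** 2) * np.sum(u ** 2 / n)

def sigma2_asympt(L, beta=1):
    """Asymptotic logarithmic form, eq. (42)."""
    return 2.0 / (beta * np.pi ** 2) * (np.log(L) + np.log(np.pi / 2) + np.euler_gamma)

for L in (5, 20, 80):
    print(L, sigma2_modes(L, N=4000), sigma2_asympt(L))
```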
## 6 A Sequence of Uncorrelated Levels
It is useful to find a similar description of the normal modes for a sequence of uncorrelated levels. In this case, it is easiest to proceed by considering the correlation matrix
$$D_{ij}=\langle (x_i-\langle x_i\rangle )(x_j-\langle x_j\rangle )\rangle =\langle x_ix_j\rangle -\langle x_i\rangle \langle x_j\rangle ,$$
(43)
where $`\langle x_k\rangle `$ is the ensemble average of eigenvalue $`k`$. This approach is slightly different from that adopted above when we considered a quadratic approximation to $`\ln P_{N\beta }`$. To the extent that this approximation is exact, the two approaches lead to identical eigenvectors and to eigenvalues which are negative reciprocals of one another.
A spectrum of $`N`$ uncorrelated levels with unit mean level density has a Poisson distribution for the level spacings. The joint probability distribution for the levels can thus be written as a product of Poisson distributions and step functions:
$`P_N(x_1,x_2,\ldots,x_N)`$ $`=`$ $`e^{-x_1}\theta (x_1){\displaystyle \prod_{i=1}^{N-1}}e^{-(x_{i+1}-x_i)}\theta (x_{i+1}-x_i)`$ (44)
$`=`$ $`e^{-x_N}\theta (x_1){\displaystyle \prod_{i=1}^{N-1}}\theta (x_{i+1}-x_i).`$
This form of the joint probability distribution leads immediately to the ensemble averages $`\langle x_i\rangle `$ and $`\langle x_ix_j\rangle `$, from which the elements of the matrix $`D`$ follow:
$$D_{ij}=\mathrm{min}\{i,j\}.$$
(45)
The eigenvalues, $`\omega _k`$, and eigenvectors, $`\psi _i^{(k)}`$ , of $`D`$ can be obtained by exploiting the special structure of $`D`$ and expressing the eigenvalue problem in the following suggestive manner:
$`\psi _1^{(k)}`$ $`=`$ $`-\omega _k(\psi _2^{(k)}-2\psi _1^{(k)})`$ (46)
$`\psi _i^{(k)}`$ $`=`$ $`-\omega _k(\psi _{i-1}^{(k)}-2\psi _i^{(k)}+\psi _{i+1}^{(k)})`$ (47)
$`\psi _N^{(k)}`$ $`=`$ $`\omega _k(\psi _N^{(k)}-\psi _{N-1}^{(k)}),`$ (48)
where $`2\le i\le N-1`$. Introducing the definition $`\varphi _k\equiv (2k-1)\pi /(2N+1)`$, it is readily verified that the normalised eigenvectors can be written as
$$\psi _j^{(k)}=\frac{2}{\sqrt{2N+1}}\mathrm{sin}(j\varphi _k).$$
(49)
The corresponding eigenvalues are
$$\omega _k=\frac{1}{4}\sin ^{-2}\left(\frac{\varphi _k}{2}\right).$$
(50)
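Equations (45), (49) and (50) can be confirmed directly; the following numerical check is our addition:

```python
import numpy as np

N = 200
idx = np.arange(1, N + 1)
D = np.minimum.outer(idx, idx).astype(float)       # eq. (45)

phi = (2 * idx - 1) * np.pi / (2 * N + 1)
omega = 0.25 / np.sin(phi / 2) ** 2                # eq. (50)
print(np.allclose(np.sort(omega), np.linalg.eigvalsh(D)))

psi = 2.0 / np.sqrt(2 * N + 1) * np.sin(np.outer(idx, phi))   # eq. (49)
print(np.max(np.abs(D @ psi - psi * omega)))       # small: D psi_k = omega_k psi_k
```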
To facilitate comparison with the random matrix results obtained above, it is useful to multiply the $`\omega _k`$ by $`[\rho (0)]^{-2}=\pi ^2/2N^2`$ to establish identical scales. We then see that the hardest normal mode eigenvalue for the uncorrelated levels is of order $`2N^2/(\pi ^2\omega _N)=(8/\pi ^2)N^2`$, which is comparable to the hardest eigenvalue of $`\lambda _N=N^2`$ obtained for the random matrix ensembles. (The difference is not significant and is largely due to the limitations of the quadratic approximation for the joint probability density.) Significant differences are found in the nature of the soft spectrum. For small values of $`k`$, the uncorrelated levels reveal a quadratic spectrum, $`2k^2`$, which is in sharp contrast to the linear spectrum, $`kN`$, of the random matrix ensembles seen in expression (11).
The number variance for a sequence of uncorrelated levels is well known
$$\mathrm{\Sigma }^2(L)=L.$$
(51)
Here, we wish to proceed in the spirit of the previous section and reproduce this result using the normal modes just obtained. Considering the interval $`[0,L]`$ and repeating the arguments which led to equation (34), we arrive at the expression
$$\mathrm{\Sigma }^2(L)\approx \frac{1}{2N+1}\sum_{k=1}^{N}\frac{\sin ^2(L\varphi _k)}{\sin ^2(\varphi _k/2)}.$$
(52)
This sum can be performed exactly when $`L`$ is an integer or a half-integer. For these cases, we find that the number variance is given as
$$\mathrm{\Sigma }^2(L)\approx \{\begin{array}{cc}L\hfill & \text{for integer }L\hfill \\ L-\frac{1}{2(2N+1)}\hfill & \text{for half-integer }L\hfill \end{array}$$
(53)
for physically interesting values $`L<N`$. This result agrees with (51) in the large-$`N`$ limit. For other values of $`L`$, it is useful to approximate the sum in (52) by an integral, as in eqn. (41):
$$\mathrm{\Sigma }^2(L)\approx \frac{1}{\pi }\int_0^{\pi /2}dx\,\frac{\sin ^2(2Lx)}{x^2}=\frac{1}{\pi ^2}\left[2\pi L\,\mathrm{Si}(2\pi L)-1+\cos (2\pi L)\right],$$
(54)
where $`\mathrm{Si}(2\pi L)`$ is the sine integral. For large values of $`L`$, this leads to the approximation
$$\mathrm{\Sigma }^2(L)\approx L-\frac{1}{2\pi ^3L}\sin (2\pi L)-\frac{1}{\pi ^2},$$
(55)
where we note that the final term of $`-1/\pi ^2`$ is an artifact of having replaced $`\sin ^2x`$ by $`x^2`$ in the denominator of (54). Evidently, eqns. (54) and (55) invite comparison with the results of eqns. (41) and (42) obtained for the Gaussian ensembles. We see that the quadratic approximation to the joint probability density also provides a reliable description of the number variance in the large-$`L`$ limit for uncorrelated levels.
## 7 Discussion and Conclusions
We have considered the small amplitude normal modes describing the fluctuations of the eigenvalues of random matrices about their equilibrium positions. In the limit of large matrices, these modes are essentially plane waves. (Recall that the Chebyshev polynomials have the approximate forms $`U_{2n}(x)\approx \cos ((2n+1)x)`$ and $`U_{2n+1}(x)\approx \sin ((2n+2)x)`$ in the limit of large $`n`$ and fixed $`x`$.) The mean square amplitude for each mode is inversely proportional to its associated eigenvalue, see eqn. (37). Since these eigenvalues grow monotonically with wave number for both the Gaussian ensembles and for uncorrelated levels, longer wave length fluctuations have a larger amplitude. Thus, the most probable fluctuation in a random matrix spectrum corresponds to a common shift of all eigenvalues with no change in their relative separation, see (12). The next most probable fluctuation is a simple “breathing mode” of the spectrum, see (13).
The properties of the normal mode spectrum provide us with some insight regarding the qualitative behaviour of long-range spectral measures. For uncorrelated levels, the long wave length spectrum is particularly soft with a quadratic spectrum. As a result, $`\mathrm{\Sigma }^2(L)`$ is completely dominated by soft modes when $`L`$ is large. This is seen most easily from eqn. (54). The linear asymptotic behaviour of the number variance for uncorrelated levels is thus seen to be a direct consequence of the quadratic dispersion relation obeyed by the soft modes. The situation is qualitatively different in the case of the Gaussian ensembles where all modes obey an exact linear dispersion relation. From eqn. (41) we see that this linearity ensures that $`\mathrm{\Sigma }^2(L)`$ must grow logarithmically for large $`L`$ and that all normal modes contribute democratically to this asymptotic behaviour. Similar linear dispersion relations characterise perfectly elastic solids, and it seems useful to regard the spectral rigidity of random matrices as a consequence of the physical rigidity of a classical one-dimensional array of line charges. Dyson was led to the same conclusion . Although aware of the somewhat arbitrary nature of distinctions between phases in one dimension, he felt it appropriate to call the Coulomb gas (and hence the spectrum of a random matrix) a “crystal”. The considerations presented here provide additional support for this designation. In the same spirit, the quadratic dispersion relation for the normal modes of uncorrelated levels leads us to regard them as a gas.
We have emphasised that fluctuations in random matrix spectra are most naturally described as the highly correlated motion of individual eigenvalues, i.e., the normal modes. Given the product form of eqn. (37), it is also clear that these normal modes are statistically independent. If the matrix $`D`$ of eqn. (43) is calculated in numerical simulations, the statistical errors associated with its eigenvalues will become uncorrelated as the sample size increases. Soft modes (with large amplitudes) and their eigenvalues can thus be determined accurately with relative ease. This fact can be useful in numerical simulations of ensembles which do not readily permit analytic analysis.
## Acknowledgements
We appreciate useful comments on the manuscript by J. Christiansen and K. Splittorff.
[no-problem/9812/math9812144.html | ar5iv | text]
# Noise and chaotic disturbance on self-similar sets
## 1 Introduction
Many dynamical systems have an attractor or a repellor that has in some way a self-similar structure. In reality, fractal growth is always under the influence of the environment. In order to simulate realistic fractal growth, it is important to incorporate disturbances or noise into the iteration procedure.
The effect of noise on chaotic dynamical systems is of great interest and has been studied by many authors. The early work on this problem was carried out by Crutchfield et al$`^{\text{[1]}}`$, who studied the effect of noise on period doubling in a discrete system. Additional work was carried out by Svensmark and Samuelson$`^{\text{[2]}}`$ on the Josephson junction. Wiesenfeld and McNamara$`^{\text{[3]}}`$ have studied the amplification of a small resonant periodic perturbation in the presence of noise near the period doubling threshold. Arecchi et al$`^{\text{[4]}}`$ have studied the effect of noise on the forced Duffing oscillator in the region of parameter space where different chaotic attractors coexist. Kautz$`^{\text{[5]}}`$ has investigated the problem of thermally induced escape from the basin of attraction in a dc-biased Josephson junction. Lastly, Kapitaniak$`^{\text{[6]}}`$ has studied the behavior of the probability density function of a driven nonlinear system; his results imply that noise may introduce a degree of order into a chaotic system, and that the exponent is a random number with a corresponding probability density function.
Some authors have studied the effects of noise on discrete dynamical systems. For example, Crutchfield and Packard$`^{\text{[7]}}`$ have studied the symbolic dynamics of chaotic maps when they are perturbed by a noise term. Carlson and Schieve$`^{\text{[8]}}`$ added noise to the standard shift map. García-Pelayo and Schieve$`^{\text{[9]}}`$ introduced noise to the affine contractive iterated function system. Cole and Schieve$`^{\text{[10]}}`$ studied the effect of noise on the triadic Cantor set. More recently, Chia-Chu Chen$`^{\text{[11]}}`$ studied the effect of chaotic disturbance on the triadic Cantor set, and pointed out that it would be interesting to show that truncation of a fractal set under the influence of chaotic noise is a general phenomenon. Building on the ideas of ref., in this paper we study the effect of noise and chaotic disturbance on a class of more general fractal sets, i.e. self-similar sets.
## 2 Fractal construction of self-similar sets and noise
Denote the $`d`$-dimensional Euclidean space by $`𝐑^d`$, and fix $`K<\infty `$. Let
$$S_j(x)=\xi _jR_jx+b_j,\quad 0<\xi _j<1,\ R_j\text{ orthogonal},\ b_j\in 𝐑^d,\ j\in J=\{1,2,\ldots,K\}$$
be contractive similarity maps on a compact set $`E_0`$ which satisfies $`\overline{\mathrm{Int}(E_0)}=E_0`$, and assume that $`\mathrm{Int}(S_j(E_0))\cap \mathrm{Int}(S_i(E_0))=\emptyset `$ for $`j\ne i`$. For each natural number $`n`$, let
$`E_{j_1j_2\ldots j_n}`$ $`=`$ $`S_{j_1}\circ \cdots \circ S_{j_n}(E_0)`$
$`E(n)`$ $`=`$ $`{\displaystyle \bigcup_{j_i\in J}}E_{j_1\ldots j_n}.`$
It is obvious that
$$E_{j_1\ldots j_n}\subset E_{j_1\ldots j_{n-1}},\quad E(n)\subset E(n-1).$$
Then
$$E=\bigcap_{n=1}^{\infty }E(n)$$
is called a self-similar set, and $`E_{j_1\ldots j_n}`$ is called a generating set of the $`n`$th stage. From ref., the fractal dimension $`s`$ of $`E`$ is the solution of the equation $`\sum_{j=1}^{K}\xi _j^s=1`$.
Examples: the Cantor set, Cantor’s $`k`$-bars, the von Koch snowflake and the Sierpinski gasket are self-similar sets.
We may assume $`|E_0|=1`$, where $`|E_0|`$ denotes the diameter of $`E_0`$. It is easy to see that
$$L_0=|E_0|=1,\quad L_{j_1\ldots j_n}=|E_{j_1\ldots j_n}|=\xi _{j_1}\xi _{j_2}\cdots \xi _{j_n}.$$
(1)
The noise is introduced into the above rules by the addition of $`K`$ independent stochastic variables, $`\delta _1,\ldots,\delta _K`$. Under this correction, we have
$$L_0=|E_0|=1,\quad L_{j_1\ldots j_n}=\xi _{j_n}(L_{j_1\ldots j_{n-1}}+\delta _{j_n}).$$
(2)
The generating set $`E_{j_1\ldots j_n}`$ is said to ‘collapse’ when $`L_{j_1\ldots j_n}`$ becomes less than or equal to zero. We terminate the iteration of a generating set only when it collapses; generating sets that are not subsets of the collapsed one continue to be iterated.
## 3 Distribution function
We want to obtain
###### Theorem 1
The distribution function describing the probability that a generating set will collapse on the $`n`$th iteration can be given explicitly for two different cases.
For case 1, our ‘noises’ $`\delta _i`$ $`(i=1,2,\ldots,K)`$ are stochastic variables that take the values $`-\ell _i`$, $`0`$, $`\ell _i`$, each with probability $`1/3`$, where $`\ell _i\in (0,1)`$. For case 2, our ‘noises’ $`\delta _i`$ $`(i=1,2,\ldots,K)`$ are stochastic variables with arbitrary normalized probability density functions.
For any generating set $`E_{j_1\mathrm{}j_n}`$, from (2) we have
$$L_{j_1\ldots j_n}=\xi _{j_1}\cdots \xi _{j_n}+N_{j_1\ldots j_n},$$
(3)
where
$$N_{j_1\ldots j_n}=\xi _{j_n}\delta _{j_n}+\xi _{j_n}\xi _{j_{n-1}}\delta _{j_{n-1}}+\cdots +\xi _{j_n}\cdots \xi _{j_1}\delta _{j_1}$$
(4)
denotes the noise term. From (2) we also have
$$N_{j_1\ldots j_n}=\xi _{j_n}(N_{j_1\ldots j_{n-1}}+\delta _{j_n}).$$
(5)
From (3), the generating set collapses if and only if $`N_{j_1\ldots j_n}\le -\xi _{j_n}\cdots \xi _{j_1}`$.
We denote by $`C_{j_1\ldots j_n}`$ the probability that the generating set $`E_{j_1\ldots j_n}`$ collapses, by $`NT_{j_1\ldots j_n}`$ the probability that the previous $`n-1`$ generating sets $`\{E_{j_1},\ldots,E_{j_1\ldots j_{n-1}}\}`$ do not collapse, by $`LE_{j_1\ldots j_n}`$ the probability that the noise term is less than or equal to $`-\xi _{j_n}\cdots \xi _{j_1}`$, and by $`GE_{j_1\ldots j_n}`$ the probability that the noise term is greater than or equal to $`\xi _{j_n}\cdots \xi _{j_1}`$. Then
$$C_{j_1\ldots j_n}=NT_{j_1\ldots j_n}LE_{j_1\ldots j_n}.$$
(6)
By the symmetry of the $`\delta _i`$, it is easy to see that $`GE_{j_1\ldots j_n}=LE_{j_1\ldots j_n}`$.
### 3.1 Distribution function for case 1.
First, we determine $`LE_{j_1\mathrm{}j_n}`$.
We denote $`\xi =\max_{1\le i\le K}\xi _i`$, $`\ell =\max_{1\le i\le K}\ell _i`$, $`\ell ^{\prime }=\min_{1\le i\le K}\ell _i`$. Since $`\xi <1`$, we have $`\sum_{n=1}^{\infty }\xi ^n=\frac{\xi }{1-\xi }<\infty `$. For $`\xi `$ and $`\ell ,\ell ^{\prime }`$, we assume
$$\xi \le \frac{\ell ^{\prime }}{2\ell +\ell ^{\prime }}.$$
(7)
From (4), we have
$$-\frac{\xi \ell }{1-\xi }\le N_{j_1\ldots j_n}\le \frac{\xi \ell }{1-\xi }.$$
(8)
For any $`n`$, one of the following three cases must hold.
If $`\xi _{j_1}\cdots \xi _{j_n}>\frac{\xi \ell }{1-\xi }`$, then from (8) the noise term $`N_{j_1\ldots j_n}`$ cannot reach $`-\xi _{j_1}\cdots \xi _{j_n}`$, hence $`LE_{j_1\ldots j_n}=0`$.
If $`\xi _{j_1}\cdots \xi _{j_{n-1}}>\frac{\xi \ell }{1-\xi }`$ and $`\xi _{j_1}\cdots \xi _{j_n}\le \frac{\xi \ell }{1-\xi }`$: since the possible values of $`N_{j_1\ldots j_n}`$ are evenly spaced, from ref. the number of these values in a given range is proportional to the length of the range. The points of the numerator are confined to the region $`[-\frac{\xi \ell }{1-\xi },-\xi _{j_1}\cdots \xi _{j_n}]`$, while the points of the denominator lie in $`[-\frac{\xi \ell }{1-\xi },\frac{\xi \ell }{1-\xi }]`$, hence
$`LE`$ $`={\displaystyle \frac{\text{number of possible values of }N_{j_1\ldots j_n}\text{ less than or equal to }-\xi _{j_1}\cdots \xi _{j_n}}{\text{total number of possible values of }N_{j_1\ldots j_n}}}`$
$`\approx {\displaystyle \frac{\frac{\xi \ell }{1-\xi }-\xi _{j_1}\cdots \xi _{j_n}}{2\frac{\xi \ell }{1-\xi }}}={\displaystyle \frac{\xi \ell -(1-\xi )\,\xi _{j_1}\cdots \xi _{j_n}}{2\xi \ell }}.`$
Our approximation becomes very good for $`\ell \ll 1`$.
If $`\xi _{j_1}\cdots \xi _{j_{n-1}}\le \frac{\xi \ell }{1-\xi }`$, we have $`\xi _{j_1}\cdots \xi _{j_n}\le \xi _{j_n}\frac{\xi \ell }{1-\xi }`$. When $`\delta _{j_n}=-\ell _{j_n}`$, from (5) and (8) we have $`N_{j_1\ldots j_n}\le \xi _{j_n}(\frac{\xi \ell }{1-\xi }-\ell _{j_n})`$. From (7), we have $`\xi _{j_n}(\frac{\xi \ell }{1-\xi }-\ell _{j_n})\le -\xi _{j_n}\frac{\xi \ell }{1-\xi }`$. Hence $`N_{j_1\ldots j_n}\le -\xi _{j_1}\cdots \xi _{j_n}`$. This means that if the last step was negative then the generating set must collapse. But when $`\delta _{j_n}\ge 0`$, since $`N_{j_1\ldots j_{n-1}}>-\xi _{j_1}\cdots \xi _{j_{n-1}}`$ and
$$N_{j_1\ldots j_n}=\xi _{j_n}N_{j_1\ldots j_{n-1}}+\xi _{j_n}\delta _{j_n},$$
we have $`N_{j_1\ldots j_n}>-\xi _{j_1}\cdots \xi _{j_n}`$. Hence the generating set can collapse only if $`\delta _{j_n}`$ was negative, and therefore
$$LE_{j_1\ldots j_n}=(\text{probability that }\delta _{j_n}=-\ell _{j_n})=1/3.$$
Second, we determine $`NT_{j_1\mathrm{}j_n}`$.
We find that
$$NT_{j_1\ldots j_n}=1-(\text{probability that one of the generating sets }E_{j_1},\ldots,E_{j_1\ldots j_{n-1}}\text{ collapses}),$$
(9)
hence
$`NT_{j_1\ldots j_n}`$ $`=`$ $`1-{\displaystyle \sum_{i=1}^{n-1}}C_{j_1\ldots j_i}`$ (10)
$`=`$ $`1-{\displaystyle \sum_{i=1}^{n-1}}NT_{j_1\ldots j_i}LE_{j_1\ldots j_i}.`$
From $`NT_{j_1}=1`$ and (10), we can determine $`NT_{j_1\ldots j_n}`$. Then from (6) we can determine $`C_{j_1\ldots j_n}`$ for any generating set $`E_{j_1\ldots j_n}`$.
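A Monte Carlo sketch of case 1 (our illustration; for simplicity all contraction ratios are set equal, $`\xi _j=\xi `$, and all $`\ell _j=\ell `$, so condition (7) reads $`\xi \le 1/3`$); all names here are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_collapse_stage(xi, ell, n_max=500):
    """Follow one branch of eq. (2): L_n = xi (L_{n-1} + delta_n), where
    delta_n is -ell, 0 or +ell, each with probability 1/3 (case 1)."""
    L = 1.0
    for n in range(1, n_max + 1):
        L = xi * (L + ell * rng.integers(-1, 2))
        if L <= 0.0:
            return n
    return None                                  # no collapse within n_max stages

xi, ell = 1.0 / 3.0, 0.05                        # condition (7) holds: xi <= 1/3
stages = [first_collapse_stage(xi, ell) for _ in range(100_000)]
collapsed = [s for s in stages if s is not None]
print(len(collapsed) / len(stages))              # fraction of branches that collapse
print(np.bincount(collapsed)[:12])               # empirical collapse-stage counts
```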
### 3.2 Distribution functions for case 2.
We assume that the density functions of the $`\delta _i`$ $`(i=1,2,\ldots,K)`$ are $`f_i(x)`$. Denote by $`F_{j_1\ldots j_n}(x)`$ the density function of the noise term $`N_{j_1\ldots j_n}`$. Since the density function of $`\xi _{j_1}\delta _{j_1}`$ is
$$F_{j_1}(x)=\frac{1}{\xi _{j_1}}f_{j_1}(\frac{x}{\xi _{j_1}}),$$
we have
$$LE_{j_1}=\int_{-\infty }^{-\xi _{j_1}}\frac{1}{\xi _{j_1}}f_{j_1}\left(\frac{x}{\xi _{j_1}}\right)dx,$$
and $`NT_{j_1}=1`$, so $`C_{j_1}=LE_{j_1}`$. For the next stage, since $`N_{j_1j_2}=\xi _{j_2}N_{j_1}+\xi _{j_2}\delta _{j_2}`$, $`N_{j_1j_2}`$ will have a density function $`F_{j_1j_2}(x)`$ that is the convolution of the density functions of $`\xi _{j_2}N_{j_1}`$ and $`\xi _{j_2}\delta _{j_2}`$. If the iteration of $`E_{j_1}`$ is not terminated, we know that $`N_{j_1}`$ must be greater than $`-\xi _{j_1}`$; this means the density function of $`N_{j_1}`$ must equal zero on $`(-\infty ,-\xi _{j_1})`$. Hence, when we take the convolution we should use
$$\widehat{F}_{j_1}(x)=\frac{I(x+\xi _{j_1})F_{j_1}(x)}{1-C_{j_1}}$$
for the density function of $`N_{j_1}`$, where
$$I(x)=\{\begin{array}{cc}0,\hfill & (x\le 0)\hfill \\ 1,\hfill & (x>0)\hfill \end{array}$$
We divide by $`1-C_{j_1}`$ in order to normalize $`\widehat{F}_{j_1}(x)`$. Hence
$$F_{j_1j_2}(x)=\int_{-\infty }^{\infty }\frac{1}{\xi _{j_2}^2}\widehat{F}_{j_1}\left(\frac{t}{\xi _{j_2}}\right)f_{j_2}\left(\frac{x-t}{\xi _{j_2}}\right)dt.$$
$$LE_{j_1j_2}=\int_{-\infty }^{-\xi _{j_1}\xi _{j_2}}F_{j_1j_2}(x)dx,$$
and $`NT_{j_1j_2}=1-C_{j_1}`$, and hence $`C_{j_1j_2}=NT_{j_1j_2}LE_{j_1j_2}`$ is obtained. For any generating set $`E_{j_1\ldots j_n}`$, from (5) we will have
$$LE_{j_1\ldots j_n}=\int_{-\infty }^{-\xi _{j_1}\cdots \xi _{j_n}}F_{j_1\ldots j_n}(x)dx,$$
where
$$F_{j_1\ldots j_n}(x)=\int_{-\infty }^{\infty }\frac{1}{\xi _{j_n}^2}\widehat{F}_{j_1\ldots j_{n-1}}\left(\frac{t}{\xi _{j_n}}\right)f_{j_n}\left(\frac{x-t}{\xi _{j_n}}\right)dt,$$
and
$$\widehat{F}_{j_1\ldots j_{n-1}}(x)=\frac{I(x+\xi _{j_1}\cdots \xi _{j_{n-1}})F_{j_1\ldots j_{n-1}}(x)}{1-C_{j_1\ldots j_{n-1}}}.$$
Then $`NT_{j_1\ldots j_n}=1-\sum_{i=1}^{n-1}C_{j_1\ldots j_i}`$, and hence we can obtain $`C_{j_1\ldots j_n}`$.
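The case-2 recursion lends itself to a grid-based evaluation. The sketch below is our numerical illustration under simplifying assumptions: equal contraction ratios $`\xi _j=\xi `$ (so the stage-$`n`$ collapse threshold is $`\xi ^n`$), a wide symmetric grid, and a discrete approximation of the convolution; it is not the authors' code.

```python
import numpy as np

def collapse_probs(xi, f, n_stages, X=2.0, M=8001):
    """Propagate the density F_n of the noise term N_n on a grid, truncating
    below the collapse threshold -xi**n and renormalising after each
    surviving stage (the hat-F step in the text)."""
    x = np.linspace(-X, X, M)
    dx = x[1] - x[0]
    g = f(x / xi) / xi                           # density of xi * delta
    F, NT, probs = g.copy(), 1.0, []             # stage 1: N_1 = xi * delta_1
    for n in range(1, n_stages + 1):
        thresh = xi ** n
        C = NT * np.sum(F[x <= -thresh]) * dx    # C_n = NT_n * LE_n, eq. (6)
        probs.append(C)
        NT -= C
        F = np.where(x > -thresh, F, 0.0)        # survivors: N_n > -xi^n
        F /= np.sum(F) * dx                      # renormalise
        FxiN = np.interp(x / xi, x, F, left=0.0, right=0.0) / xi  # xi * N_n
        F = np.convolve(FxiN, g, mode="same") * dx                # add xi * delta
    return probs

uniform = lambda t: np.where(np.abs(t) <= 1.0, 0.5, 0.0)          # delta ~ U(-1, 1)
print(collapse_probs(0.4, uniform, 6))
```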
## 4 Chaotic disturbance on self-similar sets.
In this section, for any infinite sequence $`\{j_i\in J\}_{i=1}^{\infty }`$, the $`\delta _{j_n}`$ in (2) are assigned by a chaotic map. We will first concentrate on the case where the chaotic map is the tent map, which is given by
$$x_{n+1}=\{\begin{array}{cc}2x_n,& x_n<1/2\\ 2(1-x_n),& x_n\ge 1/2.\end{array}$$
(11)
This map is iterated together with the rule given by (2). The sequence begins at a position $`x_0`$ chosen arbitrarily. A sequence $`\{x_n\}`$ is then generated according to (11), and $`\delta _{j_n}`$ is assigned by
$$\delta _{j_n}=\{\begin{array}{cc}-ϵ,& x_n<1/2\\ ϵ,& x_n\ge 1/2,\end{array}$$
(12)
where $`ϵ`$ is a positive constant less than $`1`$. For $`\xi `$ and $`ϵ`$, we assume that
$$\xi +\frac{ϵ}{1-\xi }<1.$$
(13)
In this section, for any $`x_0\in [0,1]`$ and any infinite sequence $`\{j_i\in J\}_{i=1}^{\infty }`$, we will show that
###### Theorem 2
Under condition (13), the iteration with the rules given by (2) and (12) along the infinite sequence must terminate at a finite order, i.e. there is a generating set at a finite stage that collapses. From the arbitrariness of the infinite sequence, we can see that the self-similar structure is truncated at a finite order of iteration.
First we establish the fact that for $`\delta _{j_n}`$ given by (12), there exist an interval $`(0,a)`$ and an integer $`n_0`$ such that for $`x_0\in (0,a)`$, $`N_{j_1\ldots j_{n_0}}<-\xi _{j_1}\cdots \xi _{j_{n_0}}`$. If $`\delta _{j_i}=-ϵ`$ $`(i=1,2,\ldots,n_0)`$, from (4), $`N_{j_1\ldots j_{n_0}}<-\xi _{j_1}\cdots \xi _{j_{n_0}}`$ implies
$$1+\frac{1}{\xi _{j_1}}+\frac{1}{\xi _{j_1}\xi _{j_2}}+\cdots +\frac{1}{\xi _{j_1}\cdots \xi _{j_{n_0-1}}}>\frac{1}{ϵ}.$$
It is sufficient that
$$1+\frac{1}{\xi }+\cdots +\left(\frac{1}{\xi }\right)^{n_0-1}>\frac{1}{ϵ},$$
which becomes
$$\left(\frac{1}{\xi }\right)^{n_0}>\frac{\frac{1}{\xi }-1}{ϵ}+1,$$
hence
$$n_0>\mathrm{log}\left(1+\frac{\frac{1}{\xi }-1}{ϵ}\right)/\mathrm{log}\left(\frac{1}{\xi }\right),$$
then it is sufficient to take
$$n_0=\left[\mathrm{log}\left(1+\frac{\frac{1}{\xi }-1}{ϵ}\right)/\mathrm{log}\left(\frac{1}{\xi }\right)\right]+1,$$
where $`[A]`$ means the integer part of $`A`$. Knowing $`n_0`$, for $`\delta _{j_i}=-ϵ`$ $`(i=1,2,\ldots,n_0)`$ we have $`2^{n_0}a=1/2`$, which implies $`a=1/2^{n_0+1}`$.
For any $`x_0\in (0,1)`$, by the ergodicity of the tent map$`^{\text{[13]}}`$, the orbit falls into the interval $`(0,a)`$ after a finite number of iterations. Suppose it takes $`k`$ iterations to move into $`(0,a)`$; we then have $`x_n\in (0,1/2)`$ for $`k\le n\le k+n_0`$. The noise term generated by $`x_n`$ is
$$N_{j_1\ldots j_n}=-\xi _{j_n}ϵ-\cdots -\xi _{j_n}\cdots \xi _{j_{k+1}}ϵ+\xi _{j_n}\cdots \xi _{j_{k+1}}N_{j_1\ldots j_k},$$
(14)
where $`N_{j_1\ldots j_k}`$ can be either positive or negative (when $`N_{j_1\ldots j_k}<0`$, we can assume $`N_{j_1\ldots j_k}>-\xi _{j_1}\cdots \xi _{j_k}`$; otherwise, we have truncated already). We want to find an integer $`l<n_0`$ such that
$$N_{j_1\ldots j_{k+l}}<-\xi _{j_1}\cdots \xi _{j_{k+l}}.$$
(15)
From (14), (15) implies
$$\left(\frac{1}{\xi _{j_{k+l-1}}\cdots \xi _{j_1}}+\cdots +\frac{1}{\xi _{j_k}\cdots \xi _{j_1}}\right)ϵ-\frac{N_{j_1\ldots j_k}}{\xi _{j_k}\cdots \xi _{j_1}}>1.$$
It is sufficient that
$$\frac{ϵ}{\xi _{j_1}\cdots \xi _{j_k}}\left(1+\frac{1}{\xi }+\cdots +\left(\frac{1}{\xi }\right)^{l-1}\right)-\frac{N_{j_1\ldots j_k}}{\xi _{j_1}\cdots \xi _{j_k}}>1,$$
i.e.
$$l>\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}(\xi _{j_1}\cdots \xi _{j_k}+N_{j_1\ldots j_k})\right)/\mathrm{log}(1/\xi )>0.$$
Then it is sufficient to take
$$l=\left[\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}(\xi _{j_1}\cdots \xi _{j_k}+N_{j_1\ldots j_k})\right)/\mathrm{log}(1/\xi )\right]+1.$$
We must show that $`l<n_0`$. When $`N_{j_1\ldots j_k}\le 0`$, since $`0<\xi _{j_1}\cdots \xi _{j_k}-|N_{j_1\ldots j_k}|<1`$, then
$$\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}(\xi _{j_1}\cdots \xi _{j_k}-|N_{j_1\ldots j_k}|)\right)/\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}\right)<1,$$
and we obtain $`l<n_0`$. When $`N_{j_1\ldots j_k}>0`$, from (13), since
$`\xi _{j_1}\cdots \xi _{j_k}+N_{j_1\ldots j_k}`$ $`<\xi _{j_1}\cdots \xi _{j_k}+{\displaystyle \frac{1-\xi ^{k+1}}{1-\xi }}ϵ`$
$`<\xi ^k+{\displaystyle \frac{ϵ}{1-\xi }}<\xi +{\displaystyle \frac{ϵ}{1-\xi }}<1,`$
then
$$\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}(\xi _{j_1}\cdots \xi _{j_k}+N_{j_1\ldots j_k})\right)/\mathrm{log}\left(1+\frac{1/\xi -1}{ϵ}\right)<1,$$
and we again have $`l<n_0`$. Thus for the chaotic sequence generated by the tent map, our conclusion holds.
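Theorem 2 can be watched in action numerically. The sketch below (our illustration, again with equal ratios $`\xi _j=\xi `$) iterates (2) with the tent-map rule (12); note that double-precision arithmetic degrades a tent-map orbit after roughly 50 steps, which is harmless here since collapse typically occurs much earlier:

```python
import numpy as np

def truncation_stage(xi, eps, x0, n_max=10_000):
    """Iterate one branch of eq. (2) with delta assigned by the tent map
    through eq. (12); returns the stage at which the branch collapses."""
    assert xi + eps / (1.0 - xi) < 1.0           # condition (13)
    x, L = x0, 1.0
    for n in range(1, n_max + 1):
        delta = -eps if x < 0.5 else eps         # eq. (12)
        L = xi * (L + delta)
        if L <= 0.0:
            return n                             # collapsed: structure truncated
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)   # tent map, eq. (11)
    return None

print([truncation_stage(0.5, 0.2, x0) for x0 in np.linspace(0.01, 0.99, 9)])
```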
Remark: If we define a generating set $`E_{j_1\ldots j_n}`$ to ‘merge’ when $`\xi _{j_1}\cdots \xi _{j_n}\le N_{j_1\ldots j_n}`$, and we change (12) to
$$\delta _{j_n}=\{\begin{array}{cc}ϵ,& x_n<1/2\\ -ϵ,& x_n\ge 1/2,\end{array}$$
then similarly, for any $`x_0\in (0,1)`$, there exists a generating set that merges at a finite stage.
Generalized case. From the above discussion, for any chaotic map that is ergodic and for which there exists an interval $`I_0`$ such that the same negative value of $`\delta _{j_n}`$ is assigned to every $`x\in I_0`$, the self-similar structure is truncated at a finite order of iteration.
[no-problem/9812/gr-qc9812075.html | ar5iv | text]
# How to interpret black hole entropy?
## I Introduction
One of the most interesting branches of modern theoretical physics is black hole thermodynamics. The origin of this fascinating area of research can be traced back to the early 70’s when it was observed that there are certain striking similarities between the laws of black hole mechanics and the laws of thermodynamics. The similarities were mostly considered artificial until Hawking convincingly found – by taking into account quantum mechanical effects – that the exterior region of a black hole produces thermal radiation. Ever since, the thermodynamical properties, like entropy, of black holes have been studied seriously, but many unsolved problems are still waiting for solutions. One of the most interesting issues is the question of the underlying microstates of the hole itself. The unknown microstates determine the average values of the thermodynamical quantities of the hole, and it is very likely that the solution of the problem of the underlying microstates of the hole will give us valuable clues to the self-consistent quantum theory of gravity.
By now we have all been convinced that black holes bear an entropy $`S=\frac{1}{4}A`$. This result originates from Bekenstein’s and Hawking’s work, but it has been reproduced by many authors since then. The original calculation yielding the entropy was based on semiclassical gravity, where spacetime was considered a classical object, whereas matter fields were quantized in this classical but curved background spacetime. Some years later, Hawking was able to recover the same result by means of a Euclidean path-integral approach to quantum gravity. Those approaches, however, failed to explain the black hole entropy at the fundamental level. More precisely, they did not provide a solution to the problem of the underlying microstates of the hole: since the black hole entropy is $`\frac{1}{4}A`$, one might expect that there are $`\mathrm{exp}(\frac{1}{4}A)`$ microstates corresponding to the same macrostate of the hole, and the problem is to identify these microstates. The search for the microstates has been going on for almost thirty years, and only recently has the string-theoretical work of Strominger and Vafa been able to give explicitly the number of these microstates. In this paper, however, we shall investigate the black hole entropy by means of canonical methods.
The classical no-hair theorem states that after the collapse, when a black hole has settled down to a stationary state, its properties are determined by very few parameters observed far from the hole: these parameters are the mass $`M`$, the charge $`Q`$ and the angular momentum $`\stackrel{}{J}`$ of the hole. Thus, from the classical point of view, black holes have only three degrees of freedom. What has happened to the enormous number of degrees of freedom of the collapsing matter? The no-hair theorem prompts one to believe that these degrees of freedom, and the information contained in them, are lost in the collapse, and that the entropy of a black hole may be understood as a measure of the information lost during the gravitational collapse, because between entropy and information there is a well-known relationship given by Brillouin: a decrease in information increases the entropy. This viewpoint is purely quantum-mechanical. According to quantum mechanics, not all the information from the collapsing star is able to reach an observer exterior to the newly formed event horizon. In other words, not all the microstates of the collapsing star can be measured by the external observer. This results in an increase of the entropy $`S`$.
The question now arises: After the collapse of matter, are the degrees of freedom contained in the matter fields somehow encoded into the quantum states of the black hole spacetime itself, or have they vanished altogether, leaving no trace whatsoever? Of course, it is natural to claim that they are encoded into the quantum states of spacetime itself such that there is a vast $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the quantum states of the hole. This leads us to a conclusion that the total number of unknown quantum states of the black hole must be enormous, too. Thus, from a quantum-mechanical point of view, the number of the physical degrees of freedom of the hole is not limited to just few parameters. The contradiction between quantum and classical black holes is obvious: The number of physical degrees of freedom of the classical hole is three, whereas the number of physical degrees of freedom of the quantum black hole is enormous. The problem with this contradiction is that it is not quite clear how, starting from general relativity, quantization itself might bring along a huge number of additional degrees of freedom.
The purpose of this paper is to investigate the possibility that the entropy of a black hole is reproducible from the point of view of an external observer even if the observer takes into account the classical degrees of freedom only, and quantizes all the classically observed quantities, like mass, charge and angular momentum, without assuming any degeneracy in the eigenstates of these quantities. In other words, we shall consider the possibility that all but the classical degrees of freedom associated with the collapsing matter fields have vanished altogether. This point of view might provide a solution to the apparent contradiction related to the number of degrees of freedom of quantum and classical black holes. The key point in this paper is that we investigate the statistical mechanics of the exterior region of the black hole spacetime. This kind of choice may be considered justified on the grounds that the interior region of the black hole is separated from the exterior region by a horizon. Hence, an external observer cannot make any observations on the interior region, and one is justified in taking the point of view that, for such an observer, the physics of a black hole is the physics of its exterior region. For the sake of convenience, we shall consider static vacuum black holes only, but an analogous treatment could be performed for static electrovacuum black holes as well. The uniqueness theorem for nonrotating vacuum black holes states that the Schwarzschild metric, with the mass parameter $`M`$, represents the only static and asymptotically flat black hole solution. We shall see that the Bekenstein–Hawking entropy of a black hole is reproducible from the statistical mechanics of the exterior region of the Schwarzschild black hole spacetime, even if we assume that there is no degeneracy in the mass eigenstates of the hole. We shall also see that the Bekenstein–Hawking entropy can be obtained for the whole spacetime as well, but in that case we must assume, a priori, an $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the mass eigenstates.
The analysis performed in this paper is based on the so-called Hamiltonian thermodynamics of black holes. This branch of physics is an outgrowth of the analysis of the Hamiltonian dynamics of the Schwarzschild spacetimes performed by Kuchař, and was initiated, among others, by Louko, Whiting, and Winters-Hilt. We want to emphasize that the whole analysis in this paper is performed in Lorentzian spacetime, without Euclideanizing either the Hamiltonian or the action. The reason for performing the analysis in Lorentzian spacetime is that the interior of the Schwarzschild black hole is then included in the analysis, too. In contrast, when one performs the Euclideanization of the Schwarzschild spacetime action, the black hole interior is reduced to one point, and thus it is somewhat questionable to talk about the quantum states of the hole. In our investigations the interior, as well as the exterior region of the hole, plays an essential role.
This paper is organized in the following way: In Sec. II we describe very briefly the Hamiltonian formulation of Schwarzschild spacetimes and present the Hamiltonian produced by Louko and Whiting for the exterior region of the Schwarzschild black hole. In Sec. III we write two Lorentzian partition functions for the Schwarzschild black hole. The first of these partition functions describes the whole Kruskal spacetime, and the second the exterior region of the hole from the point of view of an observer at rest relative to the hole at the right-hand-side asymptotic infinity. These two partition functions turn out to give identical partition functions for the radiation emitted by the hole, if we use Bekenstein’s proposal for a discrete area spectrum, and assume, in addition, that all the energy and the entropy of the hole is exactly converted into the energy and the entropy of the radiation.
The point we try to emphasize is that in order to obtain the partition function describing the whole spacetime, the observer must accept an $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the energy eigenstates of the hole, whereas no degeneracy needs to be assumed when one writes the partition function describing the exterior region of the hole. This will be the main result of this paper, and it has an interesting consequence: If one takes a view that, for an external observer, only the physical properties of the exterior region of the hole are relevant, then it is not necessary to consider the possible internal degrees of freedom of the hole itself, but it is sufficient to take into account only the classical physical degree of freedom of the Schwarzschild black hole, namely the mass $`M`$, to obtain the Bekenstein-Hawking entropy. This result is in harmony with the no-hair theorem and with the semiclassical results. Unless otherwise stated, we shall use natural units where $`c=G=\mathrm{}=k_B=1`$.
## II Hamiltonian Theory
In this section we shall give a brief introduction to the classical Hamiltonian theory of spacetimes containing a Schwarzschild black hole. We do not aim at a technically detailed review of the subject; for more information, we recommend that the reader consult the papers written by Kuchař, and by Louko and Whiting. The classical Hamiltonian theory presented in this section is based on those papers.
The first successful Hamiltonian formulation of general relativity was the so-called ADM formalism, which was discovered by Arnowitt, Deser and Misner. The basic idea of the ADM formalism is to foliate the spacetime manifold into spacelike hypersurfaces where the time $`t=constant`$, and to use the components of the induced three-metric tensor $`q_{ab}`$ as the coordinates of the configuration space. It is clear that the formalism depends heavily on the foliability of the spacetime manifold.
The ADM formalism of general relativity has four constraints per spacetime point, namely the Hamiltonian constraint and three diffeomorphism constraints. The three diffeomorphism constraints imply an invariance of general relativity under spacelike diffeomorphisms, and the remaining Hamiltonian constraint implies an invariance under time reparametrizations. In addition to these four constraints, the formalism has, of course, the Hamiltonian equations of motion. These equations plus the constraints of the Hamiltonian theory are equivalent to Einstein’s field equations of general relativity.
When quantizing gravity canonically, we have to choose between two different possibilities: we either solve the constraints at the classical level, identify the physical degrees of freedom of the system and quantize the theory in the physical phase space, or we solve the quantum counterparts of the classical constraints. The former quantization method is known as the reduced phase space quantization, whereas the latter is known as the Dirac quantization. In this paper we shall use the results based on the reduced phase space formalism. The quantization of the physical degrees of freedom of the system will not be performed explicitly. Quantum theories of the Schwarzschild black hole in the reduced phase space formalism have been constructed, among others, by Kuchař and by Louko and Mäkelä.
The classical constraints for spherically symmetric, asymptotically flat vacuum spacetimes have been solved, among others, by Kuchař, and by Thiemann and Kastrup. The only spherically symmetric, asymptotically flat vacuum solution to Einstein’s field equations is the Schwarzschild solution. When the spacelike hypersurfaces, where $`t=constant`$, were chosen to go from the left to the right asymptotic infinities in the Kruskal diagram, crossing both the horizons, and the constraints were solved, Kuchař found that only two canonical degrees of freedom are left. If these two degrees of freedom are chosen to be the Schwarzschild mass $`m`$, and its conjugate momentum $`p_m`$, the classical action of the system is
$$S_\mathrm{K}=\int 𝑑t\left[p_m\dot{m}-m\left(N_++N_-\right)\right],$$
(1)
where $`N_+`$ and $`N_-`$, respectively, are the lapse functions at the right and at the left asymptotic infinities in the Kruskal diagram. The classical Hamiltonian of the whole maximally extended Schwarzschild black hole spacetime found by Kuchař can therefore be written in terms of the two physical phase space coordinates $`m`$ and $`p_m`$ as:
$$H_{\mathrm{whole}}=m\left(N_++N_-\right).$$
(2)
The classical Hamiltonian theory of the right-hand-side exterior region of the Schwarzschild black hole was investigated by Louko and Whiting. It follows from the analysis performed by those authors that, in the reduced phase space formalism, the classical Hamiltonian describing such a region of black hole spacetime can be written in terms of the Schwarzschild mass $`m`$ and its conjugate momentum $`p_m`$ as:
$$H_{\mathrm{ext}}=mN_+-\frac{1}{2}R_h^2N_0,$$
(3)
where $`R_h=2m`$ is the Schwarzschild radius, $`N_0`$ is a function of the global time $`t`$ at the bifurcation two-sphere such that
$$\mathrm{\Theta }:=\int _{t_1}^{t_2}𝑑tN_0(t)$$
(4)
is the boost parameter elapsed at the bifurcation two-sphere during the time interval $`[t_1,t_2]`$, and, as before, $`N_+`$ is the lapse function at the right-hand-side asymptotic infinity. We shall now give a brief review on the analysis performed by Louko and Whiting to produce the Hamiltonian (3).
Louko and Whiting considered a spacetime foliation where the spacelike hypersurfaces begin from the bifurcation two-sphere, and end at a right-hand-side timelike three-surface, i.e. at a “box wall” in the Kruskal diagram. With this choice, the spatial slices are entirely contained within the right-hand-side exterior region of the Kruskal spacetime. One of the main observations was that such foliations bring an additional boundary term into the classical action. Hence, the Louko-Whiting boundary action $`S_\mathrm{\Sigma }`$ consists of terms resulting from the initial and the final boundary surfaces, that is, from the bifurcation two-sphere and from the “box wall”. After solving the classical constraints, Louko and Whiting found that when the physical degrees of freedom are identified, the true Hamiltonian action is
$$S_{\mathrm{LW}}=\int 𝑑t\left(p_m\dot{m}-h(t)\right),$$
(5)
where $`h(t)`$ is the reduced Hamiltonian such that, when the radius of the initial boundary two-sphere does not change in time $`t`$, the Hamiltonian $`h(t)`$ is defined as
$$h(t):=\left(1-\sqrt{1-\frac{2m}{R}}\right)R\sqrt{g_{tt}}-2N_0(t)m^2,$$
(6)
where $`R`$ is the time-independent value of the radial coordinate of a general spherically symmetric, asymptotically flat vacuum spacetime at the final timelike boundary, i.e. at the “box wall”, and $`g_{tt}`$ is the $`tt`$-component of the metric tensor, expressed as a function of the canonical variables (after performing a canonical transformation) and of Lagrange’s multipliers. Details can be found in Ref. . It is easy to see that if one transfers the “box wall” to the asymptotic infinity by taking the limit $`R\to \mathrm{\infty }`$, the Hamiltonian $`h(t)`$ of Eq. (6) reduces to the Hamiltonian $`H_{\mathrm{ext}}`$ of Eq. (3).
## III Hamiltonian Thermodynamics
If $`\widehat{H}`$ is the Lorentzian Hamiltonian operator of a system, the partition function of the system is
$$Z=Tr\mathrm{exp}(-\beta \widehat{H}),$$
(7)
where $`\beta =(k_BT)^{-1}`$, $`k_B`$ is Boltzmann’s constant and $`T`$ is the temperature of the system in thermal equilibrium. The partition function (7) corresponds to the canonical ensemble and describes the thermodynamics of the system in thermal equilibrium. Black holes can be considered as thermodynamical objects in a heat bath of temperature $`T`$. Therefore, if the system under consideration is the whole maximally extended Schwarzschild spacetime, its Lorentzian Hamiltonian operator $`\widehat{H}`$ would yield, via Eq. (7), a non-Euclideanized thermodynamical description of the whole black hole spacetime, and if the system under consideration is the exterior region of the Schwarzschild black hole only, the Lorentzian $`\widehat{H}`$ would yield a non-Euclideanized partition function corresponding to the thermodynamical properties of the exterior region of the black hole spacetime. In practice, when one calculates the partition function (7) one needs to know, or assume, the density of the energy states of the system. We shall come to this crucial point later in this section.
We first obtain the partition function corresponding to the whole maximally extended Schwarzschild spacetime. Classically, $`H_{\mathrm{whole}}`$ may be understood as the total energy of the whole spacetime. To choose a specific observer, who measures the energy of the gravitational field, we fix the values of the lapse functions at the asymptotic infinities. From the point of view of an observer at the right-hand-side infinity at rest with respect to the hole, we can set $`N_-=0`$ and $`N_+=1`$. In other words, we have chosen the time coordinate at the right infinity to be the proper time of our observer and we have “frozen” the time evolution at the left infinity. The physical justification for such a choice is that our observer can make observations at just one asymptotic infinity. On the other hand, one may view the Schwarzschild mass $`m`$ as the total energy of the Schwarzschild spacetime, measured by the distant observer. Hence, we may write $`H_{\mathrm{whole}}=m`$.
To obtain the partition function for the Kruskal spacetime, we have to replace the operator $`\widehat{H}`$ in Eq. (7) by an operator counterpart $`\widehat{H}_{\mathrm{whole}}`$ of the Hamiltonian $`H_{\mathrm{whole}}`$. Hence, we get:
$$Z_{\mathrm{whole}}(\beta )=Tr\mathrm{exp}(-\beta \widehat{H}_{\mathrm{whole}}).$$
(8)
During recent years there has been increasing evidence that the mass spectrum of the black hole spacetime might be discrete. If we denote the discrete eigenvalues of the mass operator $`\widehat{m}=\widehat{H}_{\mathrm{whole}}`$ by $`m_n`$ $`(n=0,1,2,\mathrm{\dots })`$ and the corresponding eigenvectors by $`|m_n\rangle `$, we obtain the eigenvalue equation
$$\widehat{H}_{\mathrm{whole}}|m_n\rangle =\widehat{m}|m_n\rangle =m_n|m_n\rangle .$$
(9)
When the discrete energy spectrum is employed, the partition function (8) becomes
$$Z_{\mathrm{whole}}(\beta )=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\langle m_n|\mathrm{exp}(-\beta \widehat{m})|m_n\rangle =\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}(-\beta m_n).$$
(10)
Since the Bekenstein-Hawking entropy of black holes is
$$S_{\mathrm{BH}}=\frac{1}{4}A,$$
(11)
where $`A`$ is the area of the event horizon, it is natural to assume an $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the possible mass eigenvalues $`m_n`$ of the hole. This assumption of degeneracy is justified because entropy, in general, can be understood as a logarithm of the number of microstates corresponding to the same macrostate. Since for a Schwarzschild black hole with mass $`m`$, $`A=16\pi m^2`$, we are prompted to define $`g(m_n)`$ as the number of degenerate states corresponding to the same mass eigenvalue $`m_n`$ such that
$$\mathrm{g}(m_n)=\mathrm{exp}(4\pi m_n^2).$$
(12)
Hence, when the summation is performed over different mass eigenvalues only, we get for the whole maximally extended Schwarzschild spacetime the partition function which takes the following form:
$`Z_{\mathrm{whole}}(\beta )`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{g}(m_n)\mathrm{exp}(-\beta m_n)`$ (13)
$`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{exp}(-\beta m_n+4\pi m_n^2).`$ (14)
Before investigating the partition function (14) any further, let us, at this point, turn our attention to the partition function corresponding to the exterior region of the Schwarzschild black hole spacetime.
Classically, the Hamiltonian $`H_{\mathrm{ext}}`$ of the exterior region of the Schwarzschild black hole spacetime may be understood, in a certain foliation, as the total energy of the exterior region of the hole, although according to Bose et al. $`H_{\mathrm{ext}}`$ is the free energy of the whole black hole spacetime. To obtain the corresponding partition function for the exterior region, we replace, as before, the operator $`\widehat{H}`$ of Eq. (7) by an operator counterpart $`\widehat{H}_{\mathrm{ext}}`$ of $`H_{\mathrm{ext}}`$ and we require, as before, that the mass spectrum is discrete. In contrast to our discussion concerning the partition function of the whole spacetime, however, we assume the mass eigenstates to be non-degenerate. As a consequence, we get for the exterior region the partition function
$`Z_{\mathrm{ext}}(\beta )`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\langle m_n|\mathrm{exp}[-\beta (\widehat{m}N_+-2\widehat{m}^2N_0)]|m_n\rangle `$ (15)
$`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{exp}[-\beta (m_nN_+-2m_n^2N_0)].`$ (16)
Note that the partition function (16) is observer-dependent. To choose the same observer at the asymptotic infinity as in Eq. (14) we must, again, fix the value of the lapse function at the right-hand-side spatial infinity such that $`N_+\equiv 1`$. From the point of view of such an observer, the partition function of the exterior region of the Schwarzschild black hole therefore takes the form:
$`Z_{\mathrm{ext}}(\beta )={\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{exp}[-\beta (m_n-2m_n^2N_0)].`$ (17)
To calculate the partition functions (14) and (17), we must assume, in addition, a specific spectrum for the mass eigenvalues $`m_n`$ of the hole. In 1974 J. Bekenstein made a proposal, since then revived by several authors, that the possible eigenvalues of the area of the event horizon of the black hole are of the form:
$$A_n=\gamma nl_{\mathrm{Pl}}^2,$$
(18)
where $`\gamma `$ is a pure number of order one, $`n`$ ranges over all non-negative integers, and $`l_{\mathrm{Pl}}:=(\mathrm{\hbar }G/c^3)^{1/2}`$ is the Planck length. When imposing this proposal, we find that the partition function of the whole Schwarzschild black hole spacetime is
$$Z_{\mathrm{whole}}(\beta )=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}\left(-\frac{\beta }{4}\sqrt{\frac{\gamma n}{\pi }}+\frac{\gamma n}{4}\right),$$
(19)
and the partition function of the exterior region of the Schwarzschild black hole is
$$Z_{\mathrm{ext}}(\beta )=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}\left[-\beta \left(\frac{1}{4}\sqrt{\frac{\gamma n}{\pi }}-\frac{N_0}{8}\frac{\gamma n}{\pi }\right)\right],$$
(20)
both of which diverge very badly.
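The divergence is easy to exhibit numerically. The following sketch (our illustration only; the values of $`\gamma `$ and $`\beta `$ are merely representative and are not fixed by the analysis) evaluates individual terms of the sum (19):

```python
# A minimal numerical sketch of the divergence of Eq. (19); gamma and beta
# are illustrative values, not quantities fixed by the analysis.
import math

gamma = 4.0 * math.log(2.0)  # a number of order one, as in Bekenstein's proposal
beta = 50.0                  # an illustrative inverse temperature (natural units)

def term_eq19(n):
    """n-th term of Eq. (19): exp(-(beta/4)*sqrt(gamma*n/pi) + gamma*n/4)."""
    return math.exp(-(beta / 4.0) * math.sqrt(gamma * n / math.pi) + gamma * n / 4.0)

for n in (0, 10, 100, 1000):
    print(n, term_eq19(n))
# The +gamma*n/4 term in the exponent grows linearly in n and eventually
# dominates the -sqrt(n) term, so the terms -- and hence the sum -- blow up.
```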
To actually calculate the partition functions (19) and (20), we have to deal with the problem of diverging partition functions. Kastrup has suggested some very original and interesting solutions to the divergence problem. Our solution to the problem of a diverging partition function in the case of the whole maximally extended Schwarzschild spacetime is to study not the partition function of the whole spacetime itself but, instead, the partition function of the radiation emitted by the hole. When obtaining the partition function for the radiation, we assume that the evaporation of the hole is a reversible process. In other words, we assume that the entropy of the hole is converted exactly into the entropy of the radiation. The validity of this assumption has been investigated by Zurek. His conclusion was that if the temperature of the heat bath is the same as that of the hole, then black hole evaporation is a reversible process.
First, we choose the zero point of the energy emitted by the hole. This could be done in many ways, but we choose the total energy of the radiation emitted to be zero when the hole has evaporated completely leaving nothing but radiation. With this choice of the zero point of the total energy of the radiation, we find that the relationship between the energy $`E^{\mathrm{rad}}`$ emitted by the hole and the mass $`m`$ of the Schwarzschild black hole measured at the asymptotic right-hand-side infinity is
$$E^{\mathrm{rad}}=-m.$$
(21)
If all the entropy of the hole is converted into the entropy of the radiation by means of transitions between degenerate black hole energy eigenstates, then the radiated energy spectrum is degenerate, too, and the number of the degenerate states corresponding to the same total energy emitted by the hole since its formation up to the point where the Schwarzschild mass has achieved the value $`m_n`$, is given by a function $`g^{\mathrm{rad}}(m_n)`$. It is fairly obvious that $`g^{\mathrm{rad}}(m_n)`$ increases when $`m_n`$ decreases. In an ideal case, all the entropy of the hole is exactly converted to the entropy of the radiation. In that case we may choose
$$g^{\mathrm{rad}}(m_n)=\mathrm{exp}(\frac{1}{4}A_0-4\pi m_n^2),$$
(22)
where $`A_0`$ is the initial surface area of the black hole horizon, measured just before the hole has begun its evaporation. In other words, the decrease of the black hole entropy from $`\frac{1}{4}A`$ to $`\frac{1}{4}(A-dA)`$ increases the number of degenerate states of the radiation emitted by the hole by a factor $`\mathrm{exp}(\frac{1}{4}dA)`$. This choice reflects the fact that just after the hole has been formed, and has not yet radiated, the entropy of the radiation is zero, whereas the entropy is $`\frac{1}{4}A_0`$ after the hole has evaporated completely.
Now, since $`E^{\mathrm{rad}}=-m`$ and $`H_{\mathrm{whole}}=m`$, we argue that
$$H_{\mathrm{whole}}^{\mathrm{rad}}=-m.$$
(23)
To obtain the partition function for the radiation of the whole Schwarzschild spacetime, we use Eqs. (7), (9), (22) and (23), which yield:
$`Z_{\mathrm{whole}}^{\mathrm{rad}}(\beta )`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{g}^{\mathrm{rad}}(m_n)\mathrm{exp}(\beta m_n)`$ (24)
$`=`$ $`\mathrm{exp}\left({\displaystyle \frac{1}{4}}A_0\right){\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{exp}(\beta m_n-4\pi m_n^2).`$ (25)
When Bekenstein’s proposal (18) is used, we get a partition function
$$Z_{\mathrm{whole}}^{\mathrm{rad}}(\beta )=\mathrm{exp}\left(\frac{1}{4}A_0\right)\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}\left(\frac{\beta }{4}\sqrt{\frac{\gamma n}{\pi }}-\frac{\gamma n}{4}\right)$$
(26)
describing the radiation emitted by the Schwarzschild black hole. It is easy to see that $`Z_{\mathrm{whole}}^{\mathrm{rad}}`$ converges very nicely.
In comparison, let us obtain, by means of the same procedure as above, the partition function of the radiation emitted by the spacetime exterior to the Schwarzschild black hole. We choose the zero point of the energy of the radiation emitted by the external spacetime in the same way as before. This radiated energy should be understood as arising from transitions between the energy states of the gravitational field corresponding to the exterior region of the Schwarzschild black hole spacetime. The zero points of the energy emitted by the Kruskal and the exterior spacetimes can be chosen to coincide because the distant observer outside the hole observes the same energy $`E^{\mathrm{rad}}`$.
Now, since all the energy of the exterior region is assumed to be converted into the energy of the radiation, the Hamiltonian of the radiation of the exterior region may then be taken to be
$$H_{\mathrm{ext}}^{\mathrm{rad}}=-H_{\mathrm{ext}}.$$
(27)
To obtain the partition function $`Z_{\mathrm{ext}}^{\mathrm{rad}}`$ one uses Eqs. (7), (9) and (27). These equations give a partition function
$$Z_{\mathrm{ext}}^{\mathrm{rad}}(\beta )=\mathrm{exp}\left(\frac{1}{4}A_0\right)\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}[\beta (m_n-2N_0m_n^2)],$$
(28)
where we chose an appropriate normalization constant for the partition function. This is allowed, since the normalization does not have any effect on the measurable thermodynamical quantities, like the temperature, of the system.
Applying, again, Bekenstein’s proposal (18) to Eq. (28), we get
$$Z_{\mathrm{ext}}^{\mathrm{rad}}(\beta )=\mathrm{exp}\left(\frac{1}{4}A_0\right)\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\mathrm{exp}\left[\frac{\beta }{4}\left(\sqrt{\frac{\gamma n}{\pi }}-\frac{N_0}{2}\frac{\gamma n}{\pi }\right)\right],$$
(29)
which, when keeping $`N_0`$ fixed, converges, too.
Let us next calculate the converging partition functions (26) and (29). Assuming that $`\beta `$ is very large, we may approximate the sums (26) and (29) by integrals:
$`Z_{\mathrm{whole}}^{\mathrm{rad}}(\beta )`$ $`\approx `$ $`\mathrm{exp}\left({\displaystyle \frac{1}{4}}A_0\right){\displaystyle \int _0^{\mathrm{\infty }}}𝑑n\mathrm{exp}\left({\displaystyle \frac{\beta }{4}}\sqrt{{\displaystyle \frac{\gamma n}{\pi }}}-{\displaystyle \frac{\gamma n}{4}}\right)`$ (31)
$`=`$ $`\mathrm{exp}\left({\displaystyle \frac{1}{4}}A_0\right)\left[{\displaystyle \frac{4}{\gamma }}+{\displaystyle \frac{\beta }{\gamma }}\left[1+\mathrm{erf}\left({\displaystyle \frac{\beta }{4\sqrt{\pi }}}\right)\right]\mathrm{exp}\left({\displaystyle \frac{\beta ^2}{16\pi }}\right)\right],`$ (32)
$`Z_{\mathrm{ext}}^{\mathrm{rad}}(\beta )`$ $`\approx `$ $`\mathrm{exp}\left({\displaystyle \frac{1}{4}}A_0\right){\displaystyle \int _0^{\mathrm{\infty }}}𝑑n\mathrm{exp}\left[{\displaystyle \frac{\beta }{4}}\left(\sqrt{{\displaystyle \frac{\gamma n}{\pi }}}-{\displaystyle \frac{N_0}{2}}{\displaystyle \frac{\gamma n}{\pi }}\right)\right]`$ (33)
$`=`$ $`\mathrm{exp}\left({\displaystyle \frac{1}{4}}A_0\right)\left[{\displaystyle \frac{8\pi }{\gamma \beta N_0}}+{\displaystyle \frac{4}{\gamma }}\sqrt{{\displaystyle \frac{2}{\beta }}}\left({\displaystyle \frac{\pi }{N_0}}\right)^{3/2}\left[{\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{2}}\mathrm{erf}\left(\left({\displaystyle \frac{\beta }{8N_0}}\right)^{1/2}\right)\right]\mathrm{exp}\left({\displaystyle \frac{\beta }{8N_0}}\right)\right],`$ (34)
where $`\mathrm{erf}(x)`$ is the error function.
If we now choose
$$N_0=\frac{2\pi }{\beta },$$
(35)
then
$$Z_{\mathrm{whole}}^{\mathrm{rad}}=Z_{\mathrm{ext}}^{\mathrm{rad}}:=Z^{\mathrm{rad}}.$$
(36)
This is the main result of this paper. It should be noted that this result is not just an artefact of the approximation of a sum by an integral, but it holds even for the exact expressions (26) and (29). We shall discuss the consequences of our result at the end of this section. Let us, in the meantime, try to justify Eq. (35).
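With the choice (35), the equality can be seen directly at the level of the exact sums: substituting $`N_0=2\pi /\beta `$ into the exponent of (29) gives $`\frac{\beta }{4}\sqrt{\frac{\gamma n}{\pi }}-\frac{\gamma n}{4}`$, which is precisely the exponent of (26), term by term. The following numerical sketch (our illustration, with representative values of $`\gamma `$ and $`\beta `$) confirms this:

```python
# A small numerical check (not from the original derivation) that the sums
# (26) and (29) agree term by term once N_0 = 2*pi/beta; gamma and beta are
# illustrative values only.
import math

gamma = 4.0 * math.log(2.0)
beta = 50.0
N0 = 2.0 * math.pi / beta   # the choice of Eq. (35)

def term_eq26(n):
    """n-th term of Eq. (26), without the common prefactor exp(A_0/4)."""
    return math.exp((beta / 4.0) * math.sqrt(gamma * n / math.pi) - gamma * n / 4.0)

def term_eq29(n):
    """n-th term of Eq. (29), without the common prefactor exp(A_0/4)."""
    x = math.sqrt(gamma * n / math.pi) - (N0 / 2.0) * (gamma * n / math.pi)
    return math.exp((beta / 4.0) * x)

Z_whole = sum(term_eq26(n) for n in range(2000))
Z_ext = sum(term_eq29(n) for n in range(2000))
print(Z_whole, Z_ext)   # the two partial sums coincide to machine precision
```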
If Eq. (35) holds, then Eqs. (III) give the semiclassical partition function of the radiation observed by an external observer at asymptotic infinity:
$$Z^{\mathrm{rad}}(\beta )\mathrm{exp}\left(\frac{1}{4}A_0\right)\frac{2\beta }{\gamma }\mathrm{exp}\left(\frac{\beta ^2}{16\pi }\right).$$
(37)
It is easy to show that the upper bound for the absolute error made when replacing the sums (26) and (29) by the integrals (III) is, in the leading-order approximation, $`\mathrm{exp}(\frac{1}{4}A_0+\frac{\beta ^2}{16\pi })`$. If one compares the result (37) to the absolute error made when replacing the sums by integrals, one notices that, for very large $`\beta `$, the fractional error is much smaller than unity. Hence, to leading order, the resulting partition function (37) approximates the sums (26) and (29) very well and, most importantly, the effect of the error bars on the thermodynamical quantities is negligibly small.
We now require that the energy expectation value of the radiation is:
$$E^{\mathrm{rad}}:=-\frac{\partial }{\partial \beta }\mathrm{ln}Z_{\mathrm{ext}}^{\mathrm{rad}}(\beta )=-m.$$
(38)
When $`\beta `$ and $`m`$ are taken to be very large, we get from (38):
$$\frac{\beta }{8\pi }+𝒪(\beta ^{-1})=m,$$
(39)
which, in turn, is the same as
$$\beta \approx 8\pi m.$$
(40)
This, on the other hand, corresponds to the choice
$$N_0\approx \frac{1}{4m}.$$
(41)
It was noted by Bose et al. that when Einstein’s field equations are satisfied, the quantity $`N_0`$ can be expressed as $`N_0=\kappa \frac{dT}{dt}`$, where $`T`$ is the Schwarzschild time coordinate, i.e. the Killing time, $`t`$ is the global time coordinate, and $`\kappa =\frac{1}{4m}`$ is the surface gravity of the black hole. Now, Eq. (41) implies that, in the semiclassical limit, $`\frac{dT}{dt}=1`$, which states that the time coordinate $`t`$ coincides with the Schwarzschild time $`T`$. In other words, the meaning of the choice (35) is that the spacetime foliation near the horizon of the Schwarzschild black hole is determined by the Schwarzschild time coordinate $`T`$. Since the Schwarzschild time coordinate is just the time coordinate used by our external observer at rest when he makes observations on the spacetime properties, one may regard the choice (35) as justified on the grounds of our aim to describe the black hole thermodynamics from the point of view of a faraway observer at rest. On the other hand, if one requires that, in the leading approximation, $`N_0\approx \frac{1}{4m}`$, and that $`-\frac{\partial }{\partial \beta }\mathrm{ln}Z_{\mathrm{ext}}^{\mathrm{rad}}(\beta )=-m`$, then – as noted in Ref. – one gets $`\beta \approx 4Cm`$, which gives $`N_0\approx \frac{C}{\beta }`$, where the constant $`C`$ can be chosen to be $`2\pi `$. Hence, if we use a Schwarzschild-type foliation right from the beginning, we can obtain, up to a constant, the choice (35).
It is well known that the entropy $`S`$ of any thermodynamical system, described by a partition function $`Z`$, can be calculated from an expression
$$S=\mathrm{ln}Z-\beta \frac{\partial }{\partial \beta }\mathrm{ln}Z.$$
(42)
When substituting $`Z^{\mathrm{rad}}`$ into Eq. (42), one gets an approximation to the entropy of the black hole radiation:
$$S^{\mathrm{rad}}=\frac{1}{4}(A_0-A)+\frac{1}{2}\mathrm{ln}A+\mathrm{ln}\left(\frac{4\sqrt{\pi }}{\gamma }\right)-1+𝒪\left(A^{-1/2}\right)+𝒪\left(\mathrm{exp}\left(-\frac{1}{4}A\right)\right).$$
(43)
Hence, when the area of the black hole has shrunk from $`A_0`$ to $`A`$, the entropy carried away by the radiation is, in the leading-order approximation, $`\frac{1}{4}(A_0-A)`$. Under the assumption that black hole radiation is a reversible process, this result is compatible with the Bekenstein-Hawking expression for black hole entropy: A decrease of the area by an amount $`A_0-A`$ decreases the entropy of the hole by an amount $`\frac{1}{4}(A_0-A)`$. The error made when approximating the sum by an integral causes an error in the entropy which is of order $`𝒪\left(A^{-1/2}\right)`$.
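The leading-order behaviour of Eq. (43) can also be checked numerically from the approximate partition function (37). In the sketch below (our illustration; the initial mass and $`\gamma `$ are representative values in natural units) the entropy is computed from Eq. (42) with $`\beta =8\pi m`$ and compared with $`\frac{1}{4}(A_0-A)`$:

```python
# A numerical sketch (not from the original derivation) of the leading-order
# entropy (43): we evaluate S = ln Z - beta * d(ln Z)/d(beta) from the
# approximate partition function (37), with beta = 8*pi*m.  The initial mass
# m0 and gamma are illustrative values in natural units.
import math

gamma = 4.0 * math.log(2.0)
m0 = 10.0                       # illustrative initial mass
A0 = 16.0 * math.pi * m0 ** 2   # initial horizon area

def ln_Z(beta):
    """ln of Eq. (37): A0/4 + ln(2*beta/gamma) + beta**2/(16*pi)."""
    return A0 / 4.0 + math.log(2.0 * beta / gamma) + beta ** 2 / (16.0 * math.pi)

def entropy(m):
    """Eq. (42) evaluated at beta = 8*pi*m, via a central difference."""
    beta, h = 8.0 * math.pi * m, 1.0e-5
    dlnZ = (ln_Z(beta + h) - ln_Z(beta - h)) / (2.0 * h)
    return ln_Z(beta) - beta * dlnZ

for m in (8.0, 5.0, 2.0):
    A = 16.0 * math.pi * m ** 2
    print(m, entropy(m), (A0 - A) / 4.0)
# The entropy of the radiation tracks (A0 - A)/4, up to the small
# logarithmic corrections displayed in Eq. (43).
```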
We have obtained two partition functions $`Z_{\mathrm{whole}}^{\mathrm{rad}}`$ and $`Z_{\mathrm{ext}}^{\mathrm{rad}}`$. When obtaining the partition function $`Z_{\mathrm{ext}}^{\mathrm{rad}}`$ for the radiation emitted by the exterior region of the hole, the mass eigenstates were assumed to be discrete – as proposed by Bekenstein – and non-degenerate. When obtaining the partition function $`Z_{\mathrm{whole}}^{\mathrm{rad}}`$ of the radiation emitted by the whole Kruskal spacetime, however, we had to make an ad hoc assumption of an $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the discrete mass eigenstates of the hole to get the correct black hole entropy. Still, the two partition functions turned out to be exactly the same from the point of view of a distant observer. This is a very interesting result. Does it bear any implications relevant to the question of the nature of the black hole entropy?
Our investigation suggests two possible interpretations to the black hole entropy. The first interpretation is that the entropy of the hole is simply caused by the fact that an external observer cannot make any observations on the interior region of the black hole. As a consequence, the physics of a black hole is physics of its external region for such an observer, and it is sufficient to consider the statistical mechanics of that external region only. This interpretation is supported by our straightforward calculation which gives correctly the Bekenstein-Hawking entropy, without assuming any degeneracy in the mass eigenstates.
Another interpretation is more conservative: The entropy of the hole is interpreted as a huge degeneracy in the mass eigenstates of the whole black hole spacetime – including the interior region of the hole. When using this interpretation to obtain the Bekenstein-Hawking entropy one must make an ad hoc assumption about a vast $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the mass eigenstates.
What, then, are the pros and cons of the two viewpoints? From the first point of view, let us call it the external point of view, the degrees of freedom of the collapsing matter, except the mass, are completely lost. Thus, the external point of view indicates that the information contained in the collapsing matter is not just hovering at some place, but completely and totally lost, whereas the conventional viewpoint somehow allows one to include the information about the degrees of freedom of the collapsing matter into the microstates of the hole itself. The loss of information, as is well known, leads to severe fundamental problems. These problems are discussed, for example, in Ref. . On the other hand, the external view makes it possible to consider Schwarzschild black holes as objects having one physical degree of freedom only. This feature of the external point of view makes it appealing to us, as it – unlike the conventional point of view – is in perfect harmony with the no-hair theorem. Hence, one does not necessarily need to be concerned with how quantization itself might bring along a vast number of additional degrees of freedom.
## IV Conclusion
In this paper we have obtained the partition function of the Schwarzschild black hole by means of two different Hamiltonians $`H_{\mathrm{whole}}`$ and $`H_{\mathrm{ext}}`$. These Hamiltonians describe, respectively, the whole maximally extended Schwarzschild spacetime, and the exterior region of the Schwarzschild black hole. The whole Hamiltonian thermodynamics was considered in Lorentzian spacetime. The main reason for not producing a Euclideanized partition function of the Schwarzschild black hole was that we wanted to include the interior of the black hole in the analysis. After writing the Hamiltonians, we obtained the corresponding partition functions, which can be viewed, respectively, as the partition functions of the whole maximally extended Schwarzschild spacetime, and of the spacetime region exterior to the black hole, from the point of view of a faraway observer at rest.
We found that these two partition functions coincide. To obtain this result, however, we were compelled to assume an $`\mathrm{exp}(\frac{1}{4}A)`$-fold degeneracy in the mass eigenstates when calculating the partition function of the whole spacetime, whereas no degeneracy needed to be assumed when calculating the partition function for the exterior region. In addition, we chose the spacetime foliation near the horizon of the Schwarzschild black hole to be determined by the Schwarzschild time coordinate $`T`$, which fixed, up to a constant, the quantity $`N_0`$.
To check the correctness of our partition functions, we used Bekenstein’s proposal for a discrete area spectrum of black holes to calculate the Bekenstein-Hawking entropy. Unfortunately, the partition functions of the whole black hole spacetime and of the spacetime region exterior to the hole were found to diverge; we managed to solve the divergence problem, however, by turning our attention to the radiation emitted by the hole. More precisely, we obtained the partition functions of the radiation emitted when either the whole black hole spacetime or its exterior region is assumed to perform transitions from one state to another. When obtaining the partition functions of the radiation we assumed that the evaporation of the hole is a reversible process and that all the energy and the entropy of the hole are exactly converted into the energy and the entropy of the radiation. The resulting partition functions for the radiation were found to converge very nicely, producing, in the leading-order approximation, the Bekenstein-Hawking entropy of black holes.
Our investigation suggested that the black hole entropy can be interpreted in two possible ways. First, there is the conservative view that the entropy of black holes may be understood as a result of a huge degeneracy in the mass eigenstates of the whole black hole spacetime. The degeneracy of the eigenstates might somehow, in a still unexplained manner, allow one to include the degrees of freedom of the collapsed matter, but the view is in contradiction with the no-hair theorem. The second view – called the external point of view – is that the entropy of black holes is, quite simply, caused by the fact that the interior region of black hole spacetime is separated from its exterior region by a horizon. Because of that, one might be justified to take a view that black hole statistical mechanics is, for an external observer, statistical mechanics of its exterior region. This point of view allows one to obtain the Bekenstein-Hawking entropy without assuming any degeneracy in the mass eigenstates of the hole. The result is in harmony with the no-hair theorem, but allows a complete loss of information, since the degrees of freedom of the matter, except the total mass $`M`$, have vanished. We have thus two complementary points of view to the interpretation of black hole entropy, of which neither is quite completely satisfactory: The conservative view is in conflict with the no-hair theorem, whereas the external point of view, although it is physically appealing and in harmony with the no-hair theorem, implies a tremendous loss of information. It remains to be seen whether these two possible interpretations could somehow be unified into a single, consistent description of black holes.
###### Acknowledgements.
We are grateful to Jorma Louko and Markku Lehto for their constructive criticism during the preparation of this paper. P. R. was supported by the Finnish Cultural Foundation, Wihuri Foundation and Nyyssönen Foundation.
# Collapsing shells of radiation in anti-de Sitter spacetimes and the hoop and cosmic censorship conjectures

gr-qc/9812078
## Abstract
Gravitational collapse of radiation in an anti-de Sitter background is studied. For the spherical case, the collapse proceeds in much the same way as in the Minkowski background, i.e., massless naked singularities may form for a highly inhomogeneous collapse, violating the cosmic censorship, but not the hoop conjecture. The toroidal, cylindrical and planar collapses can be treated together. In these cases no naked singularity ever forms, in accordance with the cosmic censorship. However, since the collapse proceeds to form toroidal, cylindrical or planar black holes, the hoop conjecture in an anti-de Sitter spacetime is violated.
PACS numbers: 04.20.Jb, 97.60.Lf.
1. Introduction
The cosmic censorship conjecture forbids the existence of naked singularities, singularities not surrounded by an event horizon. The hoop conjecture states that black holes form when and only when a mass $`M`$ gets compacted into a region whose circumference in every direction is less than its Schwarzschild circumference $`4\pi M`$ ($`G=c=1`$). The collapse of spherical matter in the form of dust or radiation forms massless shell-focusing naked singularities, violating the cosmic censorship.
It is also violated for matter with cylindrical symmetry, where the collapse proceeds to form a line singularity. The hoop conjecture has not suffered from such counter-examples. Cylinders, for instance, do not get compacted, and do not form black holes. Spindles form black holes only when all collapsing directions are sufficiently compactified, in accord with the hoop conjecture. These results follow from analyses in asymptotically flat spacetimes.
In this work we want to study how these features might get modified in an anti-de Sitter background. We study imploding radiation in an anti-de Sitter background both in spherically symmetric and toroidally, cylindrical or plane symmetric spaces, to test the appearance of naked singularities against the cosmic censorship conjecture and the formation of toroidal, cylindrical or planar black holes against the hoop conjecture.
The Vaidya metric, describing a spherically symmetric spacetime with radiation, is an exact solution of Einstein’s field equations which has been used to generate shell-focusing naked singularities. Its straightforward generalization to spherically symmetric spacetimes with negative cosmological constant is called the Vaidya-anti-de Sitter metric. In addition, to describe collapsing radiation in spacetimes with toroidal, cylindrical or planar symmetry and a negative cosmological constant, there is an appropriately modified Vaidya metric, which, of course, satisfies Einstein’s field equations.
The spherically symmetric case with zero cosmological constant has been thoroughly studied (see Ref. for a review). It is usually admitted that in spherical symmetry the effects of adding a negative cosmological constant $`\mathrm{\Lambda }`$ do not radically alter the description. The main difference is that the exterior spacetime should be described by the Vaidya-anti-de Sitter metric. As we shall see, the situation can change drastically for a collapse with non-spherical topology. Indeed, we find that (i) in spherical symmetry with negative $`\mathrm{\Lambda }`$, massless naked singularities form for a sufficiently inhomogeneous collapse, similarly to the $`\mathrm{\Lambda }=0`$ case, and (ii) toroidal, cylindrical or plane symmetric collapse with negative $`\mathrm{\Lambda }`$ does not produce naked singularities; instead black holes will form, giving an explicit counter-example to the hoop conjecture.
2. Spherical Collapse
The Einstein field equations are
$$G_{ab}+\mathrm{\Lambda }g_{ab}=8\pi T_{ab},$$
(1)
where $`G_{ab}`$, $`g_{ab}`$, $`T_{ab}`$ are the Einstein, the metric and the energy-momentum tensors, respectively, and $`\mathrm{\Lambda }`$ is the cosmological constant ($`G=c=1`$). In the spherically symmetric case the equations admit the Vaidya solution
$`ds^2=-\left(1+\alpha ^2r^2-{\displaystyle \frac{2m(v)}{r}}\right)dv^2+2dvdr+`$ (2)
$`+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2),`$ (3)
for an energy-momentum tensor given by
$`T_{ab}={\displaystyle \frac{1}{4\pi r^2}}{\displaystyle \frac{dm(v)}{dv}}k_ak_b,`$ (4)
$`k_a=\delta _a^v,k_ak^a=0.`$ (5)
Here $`\alpha \equiv \sqrt{-\mathrm{\Lambda }/3}`$, $`v`$ is a null coordinate, called the advanced time, with $`-\mathrm{\infty }<v<\mathrm{\infty }`$, $`r`$ is the radial coordinate with $`0<r<\mathrm{\infty }`$, and $`\theta ,\varphi `$ are the coordinates which describe the two-dimensional spherical surface. The Vaidya metric (3) describes the gravitational field of a spherical flow of unpolarized radiation in the geometrical optics approximation. It represents a spherical null fluid. Noting that the energy-density of the radiation is $`ϵ=\frac{2}{4\pi r^2}\frac{dm}{dv}`$, one sees that the weak energy condition for the radiation is satisfied whenever $`\frac{dm}{dv}\ge 0`$, i.e., the radiation is imploding. The function $`m(v)`$ represents a mass and is thus a non-negative increasing function of $`v`$.
The physical situation we want to represent is that of radiation injected with the velocity of light radially into an anti-de Sitter spacetime from infinity towards the center. For $`v<0`$ the spacetime is anti-de Sitter with $`m(v)=0`$. At $`v=V`$, say, the radiation is turned off. For $`v>V`$ the exterior spacetime settles into a Schwarzschild-anti-de Sitter spacetime. The first spherical ray arrives at the center when $`r=0`$ and $`v=0`$, forming a singularity. We will now test whether future-directed null geodesics terminate at the singularity. If they do, the singularity is naked.
In these coordinates, lines with $`v=`$constant represent incoming radial null vectors whose generator vectors have the form $`k^a=(0,1,0,0)`$, with $`k_a=(1,0,0,0)`$ (see equation (5)). The generators $`l^a`$ of outgoing null lines are then given by $`l^a=(1,\frac{1}{2}(1+\alpha ^2r^2-\frac{2m(v)}{r}),0,0)`$, with $`l_al^a=0`$ and $`l_ak^a=1`$. The equation for outgoing radial null geodesics $`v(r)`$ is then
$$\frac{dv}{dr}=\frac{l^v}{l^r}=\frac{2r}{r+\alpha ^2r^3-2m(v)}.$$
(6)
Equation (6), for future-directed null geodesics, has a singular point at $`r=0`$ and $`v=0`$. The nature of the singular point can now be analysed. Following Ref., we write
$$2m(v)=\lambda v+f(v).$$
(7)
where $`\lambda `$ is a constant and $`f(v)=\mathrm{o}(v)`$ as $`v\to 0`$. Following standard techniques one can see that the singularity is an unstable node when $`0<\lambda \le \frac{1}{8}`$. Since $`\mu \equiv 1/\lambda `$ gives a degree of inhomogeneity of the collapse (see Ref.), we see that for a sufficiently inhomogeneous collapse naked singularities develop. This is the same result as for collapsing radiation in a Minkowski background, as should be expected, since when $`r\to 0`$ the cosmological constant term $`\alpha ^2r^2`$ is negligible. The global nakedness of the singularity can then be seen by making a junction onto the Schwarzschild-anti-de Sitter spacetime.
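One way to see where the bound $`\lambda \le \frac{1}{8}`$ comes from is to look for outgoing geodesics of the form $`v\approx cr`$ near the singular point: substituting $`2m(v)=\lambda v`$ into Eq. (6) and letting $`r\to 0`$ yields the consistency condition $`\lambda c^2-c+2=0`$, which has real roots only if $`1-8\lambda \ge 0`$. The following sketch (our illustration, not part of the original analysis) computes these tangent directions:

```python
# A small sketch (our illustration) of the tangent directions v = c*r of
# outgoing null geodesics at the singular point r = 0, v = 0, for
# 2m(v) = lambda*v.  Real roots of lambda*c**2 - c + 2 = 0 exist only for
# lambda <= 1/8, which is when the singularity can be naked.
import math

def tangent_roots(lam):
    disc = 1.0 - 8.0 * lam
    if disc < 0.0:
        return None                      # no real tangents: no escaping geodesics
    return ((1.0 - math.sqrt(disc)) / (2.0 * lam),
            (1.0 + math.sqrt(disc)) / (2.0 * lam))

for lam in (0.05, 0.125, 0.2):
    print(lam, tangent_roots(lam))
# lambda = 0.05 and 0.125 admit real outgoing tangents; lambda = 0.2 does not.
```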
The Kretschmann scalar is given by $`K=R^{abcd}R_{abcd}=a\frac{\lambda ^2}{r^4}`$ for some number $`a`$. Thus as $`r\to 0`$ the collapse forms a polynomial curvature singularity. To see if, in addition, the naked singularity is strong we examine the scalar $`\psi \equiv \mathrm{lim}_{l\to 0}l^2G_{ab}l^al^b`$, where $`l^a`$ is the null geodesic along the Cauchy horizon parametrized by $`l`$. This gives $`\psi =4\lambda `$, showing that the singularity is strong.
In Ref., another solution for outgoing light rays is presented. This solution is also valid here for sufficiently small radii. Indeed, it is found that if $`2m(v)=\beta v^\sigma \left(1-2\sigma \beta v^{\sigma -1}\right)`$, with $`\sigma >1`$ and $`\beta >0`$ constants, then light rays with $`r=v^\sigma `$ emanate from a strong naked singularity.
Thus, for spherical symmetry, the cosmic censorship is violated for a sufficiently inhomogeneous collapse. On the other hand, the hoop conjecture is validated since a black hole forms whenever the spherical surface of matter passes its own gravitational radius.
3. Toroidal, Cylindrical or Planar Collapse
We now study the phenomenon of gravitational collapse of toroidal, cylindrical or planar shells of radiation to verify whether naked singularities can occur.
Einstein’s field equations (1) also admit the solution
$$ds^2=-\left(\alpha ^2r^2-\frac{qm(v)}{r}\right)dv^2+2dvdr+r^2(d\theta ^2+d\varphi ^2),$$
(8)
for an energy-momentum tensor given by
$`T_{ab}={\displaystyle \frac{q}{8\pi r^2}}{\displaystyle \frac{dm(v)}{dv}}k_ak_b,`$ (9)
$`k_a=\delta _a^v,k_ak^a=0,`$ (10)
where now $`\theta ,\varphi `$ are the coordinates which describe the two-dimensional zero-curvature space generated by the two-dimensional commutative Lie group $`G_2`$ of isometries. The topologies of this two-dimensional space can be (i) $`T^2=S^1\times S^1`$, the flat torus model $`[G_2=U(1)\times U(1)]`$, (ii) $`R\times S^1`$, the cylindrically symmetric model $`[G_2=R\times U(1)]`$, and (iii) $`R^2`$, the planar model $`[G_2=E_2]`$. In the toroidal case we choose $`0\le \theta <2\pi `$, $`0\le \varphi <2\pi `$, in the cylindrical case $`-\mathrm{\infty }<\theta <\mathrm{\infty }`$, $`0\le \varphi <2\pi `$, and in the planar case $`-\mathrm{\infty }<\theta <\mathrm{\infty }`$, $`-\mathrm{\infty }<\varphi <\mathrm{\infty }`$. The parameter $`q`$ has different values depending on the topology of the two-dimensional space. For the torus $`q=\frac{2}{\pi }`$ and $`m(v)`$ is a mass, for the cylinder $`q=\frac{4}{\alpha }`$ and $`m(v)`$ is a mass per unit length, and for the plane $`q=\frac{2}{\alpha ^2}`$ and $`m(v)`$ is a mass per unit area. The values of the parameter $`q`$ given above were taken from the ADM masses of the corresponding static black holes found in Ref.. Setting $`m(v)=`$const and $`\alpha =0`$ one obtains the Taub metric. In this case, when $`\alpha =0`$, there are no black hole solutions.
As in the spherical symmetric case, metric (8) describes the gravitational field of a toroidal, cylindrical or planar flow of unpolarized radiation in the geometrical optics approximation. In the same way, for imploding radiation, $`\frac{dm}{dv}0`$, one has $`ϵ=\frac{q}{8\pi r^2}\frac{dm}{dv}>0`$, and the mass $`m(v)`$ is a non-negative increasing function of $`v`$.
One now injects radiation radially towards the center. If one wishes, one turns off the radiation at $`v=V`$, say, and then makes a junction with the exterior background spacetime, sometimes called the Riemann-anti-de Sitter spacetime. The first incoming ray arrives at the center at $`r=0`$, $`v=0`$, forming a singularity.
Lines with $`v=`$constant represent incoming radial null geodesics, with generators $`k^a=(0,1,0,0)`$. The generators $`l^a`$ of outgoing null lines are $`l^a=(1,\frac{1}{2}(\alpha ^2r^2-\frac{qm(v)}{r}),0,0)`$. The equation for outgoing radial null geodesics $`v(r)`$ is now
$$\frac{dv}{dr}=\frac{l^v}{l^r}=\frac{2r}{\alpha ^2r^3-qm(v)}.$$
(11)
Again, equation (11) for future null geodesics has a singular point at $`r=0`$ and $`v=0`$, and we write as in (7)
$$qm(v)=\lambda v+\mathrm{o}(v).$$
(12)
Following standard techniques one finds that the singularity is a center for all $`\lambda `$, and no radial future-directed null geodesics terminate at the singularity. This is in contrast with the spherically symmetric collapse. In turn, the collapse proceeds to form toroidal, cylindrical or planar black holes, once the exterior event horizon condition is achieved, i.e., $`r=\left(\frac{qM}{\alpha ^2}\right)^{1/3}`$, where $`M`$ is the total mass of the collapsing radiation (see also Ref.).
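For concreteness, the horizon condition can be evaluated for the three topologies with the values of $`q`$ quoted above; the sketch below is our illustration, with representative values of $`M`$ and $`\alpha `$:

```python
# A small sketch (illustrative values only) of the event-horizon radius
# r = (q*M/alpha**2)**(1/3) for the three horizon topologies, with the
# values of q quoted in the text.
import math

alpha = 0.5   # illustrative value of alpha = sqrt(-Lambda/3)
M = 1.0       # illustrative total mass (per unit length/area where applicable)

q_values = {
    "toroidal":    2.0 / math.pi,
    "cylindrical": 4.0 / alpha,
    "planar":      2.0 / alpha ** 2,
}

for topology, q in q_values.items():
    r_h = (q * M / alpha ** 2) ** (1.0 / 3.0)
    print(topology, r_h)
# Once the collapsing radiation is compacted inside r_h, a black hole forms,
# irrespective of the (non-spherical) horizon topology.
```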
One can also try Joshi–Dwivedi-type solutions. However, these give for the mass $`qm(v)=\beta \alpha ^2v^3-2\beta ^{2/3}v`$, for some arbitrary constant $`\beta `$. They are unphysical since $`\frac{dm}{dv}|_{v=0}<0`$.
Thus, for toroidal, cylindrical or planar collapse of radiation, naked singularities do not form, in accord with the cosmic censorship conjecture. On the other hand, cylindrical black holes as well as toroidal and planar ones do form, giving an explicit example of violation of the hoop conjecture.
Note that these conclusions are valid as long as $`\alpha >0`$. For $`\alpha =0`$ the collapse proceeds to form a naked singularity with toroidal, cylindrical, or planar symmetry, and the Taub solution is the final background spacetime. In such a case the cosmic censorship is violated, whereas the hoop conjecture is not.
4. Conclusions
The Vaidya metric has been extensively used to study the formation of naked singularities in spherical gravitational collapse. We have extended this study here to include a negative cosmological constant, and found that locally, in the vicinity of the singularity, the same results prevail. On the other hand, we have found that collapse of toroidal, cylindrical or planar radiation in a spacetime with negative cosmological constant does not yield naked singularities, but always black holes. This shows that non-spherical collapse in a negative cosmological constant background may violate the hoop but not the cosmic censorship conjecture.
# Forgetting Exceptions is Harmful in Language Learning

This is a preprint version of an article that will appear in Machine Learning, 11:1–3, pp. 11–42.
## 1 Introduction
Memory-based reasoning \[Stanfill and Waltz,1986\] is founded on the hypothesis that performance in real-world tasks (in our case language processing) is based on reasoning on the basis of similarity of new situations to stored representations of earlier experiences, rather than on the application of mental rules abstracted from earlier experiences as in rule-based processing. The type of learning associated with such an approach is called lazy learning \[Aha,1997\]. The approach has surfaced in different contexts using a variety of alternative names such as example-based, exemplar-based, analogical, case-based, instance-based, locally weighted, and memory-based \[Stanfill and Waltz,1986, Cost and Salzberg,1993, Kolodner,1993, Aha, Kibler, and Albert,1991, Atkeson, Moore, and Schaal,1997\]. Historically, lazy learning algorithms are descendants of the $`k`$-nearest neighbor (henceforth $`k`$-nn) classifier \[Cover and Hart,1967, Devijver and Kittler,1982, Aha, Kibler, and Albert,1991\].
Memory-based learning is ‘lazy’ as it involves adding training examples (feature-value vectors with associated categories) to memory without abstraction or restructuring. During classification, a previously unseen test example is presented to the system. Its similarity to all examples in memory is computed using a similarity metric, and the category of the most similar example(s) is used as a basis for extrapolating the category of the test example. A key feature of memory-based learning is that, normally, all examples are stored in memory and no attempt is made to simplify the model by eliminating noise, low frequency events, or exceptions. Although it is clear that noise in the training data can harm accurate generalization, this work focuses on the problem that, for language learning tasks, it is very difficult to discriminate between noise on the one hand, and valid exceptions and sub-regularities that are important for reaching good accuracy on the other hand.
The goal of this paper is to provide empirical evidence that for a range of language learning tasks, memory-based learning methods tend to achieve better generalization accuracies than (i) memory-based methods combined with training set editing techniques in which exceptions are explicitly forgotten, i.e. removed from memory, and (ii) decision-tree learning in which some of the information from the training data is either forgotten (by pruning) or made inaccessible (by the eager construction of a model). We explain these results in terms of the data characteristics of the tasks, and the properties of memory-based learning. In our experiments we compare ib1-ig \[Daelemans and Van den Bosch,1992, Daelemans, Van den Bosch, and Weijters,1997\], a memory-based learning algorithm, with (i) edited versions of ib1-ig, and (ii) decision-tree learning in c5.0 \[Quinlan,1993\] and in igtree \[Daelemans, Van den Bosch, and Weijters,1997\]. These learning methods are described in Section 2. The compared algorithms are applied to a selection of four natural language processing (nlp) tasks (described in Section 3). These tasks present a varied sample of the complete domain of nlp as they relate to phonology and morphology (grapheme-to-phoneme conversion); morphology and syntax (part of speech tagging, base noun phrase chunking); and syntax and lexical semantics (prepositional-phrase attachment).
First, we show in Section 4 that two criteria for editing instances in memory-based learning, viz. low typicality and low class prediction strength, are generally responsible for a decrease in generalization accuracy.
Second, memory-based learning is demonstrated in Section 5 to be mostly at an advantage, and sometimes on a par with decision-tree learning as far as generalization accuracy is concerned. The advantage is puzzling at first sight, as ib1-ig, c5.0 and igtree are based on similar principles: (i) classification of test instances on the basis of their similarity to training instances (in the form of the instances themselves in ib1-ig, or in the form of hyper-rectangles containing subsets of partly-similar training instances in c5.0 and igtree), and (ii) use of information entropy as a heuristic to constrain the space of possible generalizations (as a feature weighting method in ib1-ig, and as a split criterion in c5.0 and igtree).
Our hypothesis is that both effects are due to the fact that ib1-ig keeps all training instances as possible sources for classification, whereas both the edited versions of ib1-ig and the decision-tree learning algorithms c5.0 and igtree make abstractions from irregular and low-frequency events. In language learning tasks, where sub-regularities and (small families of) exceptions typically abound, the latter is detrimental to generalization performance. Our results suggest that forgetting exceptional training instances is harmful to generalization accuracy for a wide range of language-learning tasks. This finding contrasts with a consensus in supervised machine learning that forgetting exceptions by pruning boosts generalization accuracy \[Quinlan,1993\], and with studies emphasizing the role of forgetting in learning \[Markovitch and Scott,1988, Salganicoff,1993\].
Section 6 places our results in a broader machine learning and language learning context, and attempts to describe the properties of language data and memory-based learning that are responsible for the ‘forgetting exceptions is harmful’ effect. For our data sets, the abstraction and pruning techniques studied do not succeed in reliably distinguishing noise from productive exceptions, an effect we attribute to a special property of language learning tasks: the presence of many exceptions that tend to occur in groups or pockets in instance space, together with noise introduced by corpus coding methods. In such a situation, the best strategy is to keep all training data to generalize from.
## 2 Learning methods
In this Section, we describe the three algorithms we used in our experiments. ib1-ig is used for studying the effect of editing exceptional training instances, and in a comparison to the decision tree methods c5.0 and igtree.
### 2.1 IB1-IG
ib1-ig \[Daelemans and Van den Bosch,1992, Daelemans, Van den Bosch, and Weijters,1997\] is a memory-based (lazy) learning algorithm that builds a data base of instances (the instance base) during learning. An instance consists of a fixed-length vector of $`n`$ feature-value pairs, and a field containing the classification of that particular feature-value vector. After the instance base is built, new (test) instances are classified by matching them to all instances in the instance base, and by calculating, for each match, the distance between the new instance $`X`$ and the stored instance $`Y`$.
The most basic metric for instances with symbolic features is the overlap metric given in Equations 1 and 2, where $`\mathrm{\Delta }(X,Y)`$ is the distance between instances $`X`$ and $`Y`$, represented by $`n`$ features, $`w_i`$ is a weight for feature $`i`$, and $`\delta `$ is the distance per feature. The $`k`$-nn algorithm with this metric, and equal weighting for all features, is, for example, implemented in ib1 \[Aha, Kibler, and Albert,1991\]. Usually $`k`$ is set to 1.
$$\mathrm{\Delta }(X,Y)=\underset{i=1}{\overset{n}{}}w_i\delta (x_i,y_i)$$
(1)
where:
$$\delta (x_i,y_i)=0\text{ if }x_i=y_i,\text{ else }1$$
(2)
We have made two additions to the original algorithm in our version of ib1. First, in the case of nearest neighbor sets larger than one instance ($`k>1`$ or ties), our version of ib1 selects the classification with the highest frequency in the class distribution of the nearest neighbor set. Second, if a tie cannot be resolved in this way because of equal frequency of classes among the nearest neighbors, the classification is selected with the highest overall occurrence in the training set.
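For concreteness, the classification procedure just described, including both tie-breaking additions, can be sketched as follows (our illustrative reimplementation, not the original ib1 software):

```python
# A minimal sketch (our own illustration, not the original ib1 software) of
# nearest-neighbor classification with the overlap metric of Eqs. (1)-(2)
# and the two tie-breaking additions described above.
from collections import Counter

def distance(x, y, weights):
    """Weighted overlap distance, Eqs. (1)-(2)."""
    return sum(w * (xi != yi) for w, xi, yi in zip(weights, x, y))

def classify(test, train, weights, class_freq):
    """k = 1 with ties: the nearest-neighbor set holds all instances at the
    minimal distance; class_freq holds the overall class frequencies in the
    training set and acts as the final tie-breaker."""
    dists = [(distance(test, x, weights), c) for x, c in train]
    dmin = min(d for d, _ in dists)
    votes = Counter(c for d, c in dists if d == dmin)
    top = max(votes.values())
    tied = [c for c, n in votes.items() if n == top]
    # first tie-breaker: majority class in the nearest-neighbor set (votes);
    # second tie-breaker: highest overall training-set frequency.
    return max(tied, key=lambda c: class_freq[c])

train = [(("a", "b"), "X"), (("a", "c"), "Y"), (("d", "b"), "X")]
freq = Counter(c for _, c in train)
print(classify(("a", "b"), train, [1.0, 1.0], freq))  # prints "X"
```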
The distance metric in Equation 2 simply counts the number of (mis)matching feature values in both instances. In the absence of information about feature relevance, this is a reasonable choice. Otherwise, we can add linguistic bias to weight or select different features \[Cardie,1996\] or look at the behavior of features in the set of examples used for training. We can compute statistics about the relevance of features by looking at which features are good predictors of the class labels. Information theory gives us a useful tool for measuring feature relevance in this way \[Quinlan,1986, Quinlan,1993\].
Information gain (IG) weighting looks at each feature in isolation, and measures how much information it contributes to our knowledge of the correct class label. The information gain of feature $`f`$ is measured by computing the difference in uncertainty (i.e. entropy) between the situations without and with knowledge of the value of that feature (Equation 3).
$$w_f=\frac{H(C)-\sum _{v\in V_f}P(v)H(C|v)}{si(f)}$$
(3)
$$si(f)=-\sum _{v\in V_f}P(v)\mathrm{log}_2P(v)$$
(4)
where $`C`$ is the set of class labels, $`V_f`$ is the set of values for feature $`f`$, and $`H(C)=-\sum _{c\in C}P(c)\mathrm{log}_2P(c)`$ is the entropy of the class label probability distribution. The probabilities are estimated from relative frequencies in the training set. The normalizing factor $`si(f)`$ (split info) is included to avoid a bias in favor of features with more values. It represents the amount of information needed to represent all values of the feature (Equation 4). The resulting IG values can then be used as weights in Equation 1.
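For illustration, the weights of Equations 3 and 4 can be computed from a training set as in the sketch below (Python; the helper names are ours, and instances are assumed to be tuples of symbolic feature values).

```python
import math
from collections import Counter

def entropy(labels):
    """H(C) = -sum_c P(c) log2 P(c), with probabilities estimated
    from relative frequencies."""
    n = len(labels)
    return -sum((f / n) * math.log2(f / n) for f in Counter(labels).values())

def ig_weight(instances, classes, f):
    """Equations 3 and 4: the information gain of feature f, divided
    by the split info of the feature's value distribution."""
    n = len(classes)
    by_value = {}
    for inst, cls in zip(instances, classes):
        by_value.setdefault(inst[f], []).append(cls)
    gain = entropy(classes) - sum(
        (len(sub) / n) * entropy(sub) for sub in by_value.values())
    split_info = -sum(
        (len(sub) / n) * math.log2(len(sub) / n) for sub in by_value.values())
    return gain / split_info if split_info else 0.0
```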
The possibility of automatically determining the relevance of features implies that many different and possibly irrelevant features can be added to the feature set. This is a very convenient methodology if theory does not constrain the choice enough beforehand, or if we wish to measure the importance of various information sources experimentally. A limitation is its insensitivity to feature redundancy: a feature may be assigned a high information gain weight even when it is redundant given another feature. Nevertheless, the advantages far outweigh the limitations for our data sets, and ib1-ig consistently outperforms ib1.
### 2.2 C5.0
c5.0, a commercial version of c4.5 \[Quinlan,1993\], performs top-down induction of decision trees (tdidt). On the basis of an instance base of examples, c5.0 constructs a decision tree which compresses the classification information in the instance base by exploiting differences in relative importance of different features. Instances are stored in the tree as paths of connected nodes ending in leaves which contain classification information. Nodes are connected via arcs denoting feature values. Feature information gain (Equation 3) is used dynamically in c5.0 to determine the order in which features are employed as tests at all levels of the tree \[Quinlan,1993\].
c5.0 can be tuned by several parameters. In our experiments, we chose to vary the pruning confidence level (the $`c`$ parameter), and the minimal number of instances represented at any branch of any feature-value test (the $`m`$ parameter). The two parameters directly affect the degree of ‘forgetting’ of individual instances by c5.0:
* The $`c`$ parameter denotes the pruning confidence level, which ranges between 0% and 100%. This parameter is used in a heuristic function that estimates the predicted number of misclassifications of unseen instances at leaf nodes, by computing the binomial probability (i.e, the confidence limits for the binomial distribution) of misclassifications within the set of instances represented at that node \[Quinlan,1993\]. When the presence of a leaf node leads to a higher predicted number of errors than when it would be absent, it is pruned from the tree. By default, $`c=25\%`$; set at 100%, no pruning occurs. The more pruning is performed, the less information about the individual examples is remembered in the abstracted decision tree.
* The $`m`$ parameter governs the minimum number of instances represented by a node. By setting $`m>1`$, c5.0 can avoid the creation of long paths disambiguating single-instance minorities that possibly represent noise \[Quinlan,1993\]. By default, $`m=2`$. With $`m=1`$, c5.0 builds a path for every single instance not yet disambiguated. Higher values of $`m`$ lead to an increasing amount of abstraction and therefore to less recoverable information about individual instances.
Moreover, we chose to set the subsetting of values ($`s`$) parameter at the non-default value ‘on’. The $`s`$ parameter is a flag determining whether different values of the same feature are grouped on the same arc in the decision tree when they lead to identical or highly similar subtrees. We used value grouping in all experiments, both for reasons of computational complexity on the pos, pp, and np data sets, and because that setting yields higher generalization accuracy for the gs data set. A sketch of the pruning heuristic behind the $`c`$ parameter follows below.
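The sketch below reconstructs the pessimistic error estimate behind the $`c`$ parameter. It is our reading of the heuristic, not c5.0's internal code, and it assumes scipy is available for the binomial confidence limit.

```python
from scipy.stats import beta

def pessimistic_errors(n, e, cf=0.25):
    """Predicted errors at a leaf covering n training instances with e
    misclassifications: n times the upper confidence limit U_CF(e, n) of
    the binomial error rate. Smaller cf values inflate the estimate and
    so cause more pruning; as cf approaches 1 the estimate approaches
    zero and no pruning occurs."""
    if e >= n:
        return float(n)
    # Clopper-Pearson-style upper bound: solve P(X <= e | n, p) = cf for p.
    return n * beta.ppf(1.0 - cf, e + 1, n - e)
```

Following the description above, a node is pruned when keeping it leads to more predicted errors than removing it; lowering $`c`$ makes the estimate more pessimistic and the tree smaller.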
### 2.3 IGTREE
The igtree algorithm was originally developed as a method to compress and index case bases in memory-based learning \[Daelemans, Van den Bosch, and Weijters,1997\]. It performs tdidt in a way similar to that of c5.0, but with two important differences. First, it builds oblivious decision trees, i.e., feature ordering is computed only at the root node and is kept constant during tdidt, instead of being recomputed at every new node. Second, igtree does not prune exceptional instances; it is only allowed to disregard information redundant for the classification of the instances presented during training.
Instances are stored as paths of connected nodes and leaves in a decision tree. Nodes are connected via arcs denoting feature values. The global information gain of the features is used to determine the order in which instance feature values are added as arcs to the tree. The reasoning behind this compression is that when the computation of information gain points to one feature clearly being the most important in classification, search can be restricted to matching a test instance to those memory instances that have the same feature value as the test instance at that feature. Instead of indexing all memory instances only once on this feature, the instance memory can then be optimized further by examining the second most important feature, followed by the third most important feature, etc. A considerable compression is obtained as similar instances share partial paths.
The tree structure is compressed even more by restricting the paths to those input feature values that disambiguate the classification from all other instances in the training material. The idea is that it is not necessary to fully store an instance as a path when only a few feature values of the instance make the instance classification unique. This implies that feature values that do not contribute to the disambiguation of the instance (i.e., the values of the features with lower information gain values than the lowest information gain value of the disambiguating features) are not stored in the tree.
Apart from compressing all training instances in the tree structure, the igtree algorithm also stores with each non-terminal node information concerning the most probable or default classification given the path thus far, according to the bookkeeping information maintained by the tree construction algorithm. This extra information is essential when processing unknown test instances. Processing an unknown input involves traversing the tree (i.e., matching all feature-values of the test instance with arcs in the order of the overall feature information gain), and either retrieving a classification when a leaf is reached (i.e., an exact match was found), or using the default classification on the last matching non-terminal node if an exact match fails.
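The sketch below illustrates the two distinctive ingredients of igtree in Python: a feature order fixed once at the root, and a default class stored at every node for back-off. It is a simplification under our own naming; in particular, the real algorithm also leaves out leaves whose class equals their parent's default, a compression step omitted here.

```python
from collections import Counter

def build_igtree(instances, classes, feature_order):
    """Build one node of an oblivious tree: store the default (most
    frequent) class, then partition on the next feature in the fixed
    information-gain order; stop expanding once a node is unambiguous.
    feature_order lists feature indices by descending IG weight."""
    node = {"default": Counter(classes).most_common(1)[0][0], "arcs": {}}
    if len(set(classes)) == 1 or not feature_order:
        return node
    f, rest = feature_order[0], feature_order[1:]
    partition = {}
    for inst, cls in zip(instances, classes):
        partition.setdefault(inst[f], []).append((inst, cls))
    for value, subset in partition.items():
        sub_insts, sub_classes = zip(*subset)
        node["arcs"][value] = build_igtree(list(sub_insts),
                                           list(sub_classes), rest)
    return node

def igtree_classify(node, instance, feature_order):
    """Match feature values in the fixed order; return the class found
    on an exact match, or the default of the last matching node when a
    value is unseen (the back-off described above)."""
    for f in feature_order:
        child = node["arcs"].get(instance[f])
        if child is None:
            break
        node = child
    return node["default"]
```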
In sum, in the trade-off between computation during learning and computation during classification, the igtree approach chooses to invest more time in organizing the instance base than ib1-ig, but less than c5.0, because the order of the features needs to be computed only once for the whole data set.
## 3 Benchmark language learning tasks
We investigate four language learning tasks that jointly represent a wide range of different types of tasks in the nlp domain: (1) grapheme-phoneme conversion (henceforth referred to as gs), (2) part-of-speech tagging (pos), (3) prepositional-phrase attachment (pp), and (4) base noun phrase chunking (np). In this section, we introduce each of the four tasks, and describe for each task the data collected and employed in our study. First, properties of the four data sets are listed in Table 1, and examples of instances for each of the tasks are displayed in Table 2.
### 3.1 GS: grapheme-phoneme conversion with stress assignment
Converting written words to stressed phonemic transcription, i.e., word pronunciation, is a well-known benchmark task in machine learning \[Sejnowski and Rosenberg,1987, Stanfill and Waltz,1986, Stanfill,1987, Lehnert,1987, Wolpert,1989, Shavlik, Mooney, and Towell,1991, Dietterich, Hild, and Bakiri,1995\]. We define the task as the conversion of fixed-sized instances representing parts of words to a class representing the phoneme and the stress marker of the instance’s middle letter. We henceforth refer to the task as gs, an acronym of grapheme-phoneme conversion and stress assignment. To generate the instances, windowing is used \[Sejnowski and Rosenberg,1987\]. Table 2 (top) displays four example instances and their classifications. Classifications, i.e., phonemes with stress markers, are denoted by composite labels. For example, the first instance in Table 2, \_hearts, maps to class label 0A:, denoting an elongated short ‘a’-sound which is not the first phoneme of a syllable receiving primary stress. In this study, we chose a fixed window width of seven letters, which offers sufficient context information for adequate performance (in terms of the upper bound on error demanded by applications in speech technology).
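As an illustration of the windowing procedure, the sketch below generates fixed-width instances from a single word; the helper name is ours, and ‘\_’ is used as the padding symbol, matching the example instances in Table 2.

```python
def letter_windows(word, labels, width=7):
    """Slide a fixed-width window over a word padded with '_'; each
    window becomes one instance, classified with the phoneme-plus-stress
    label of its middle letter (labels[i] for the i-th letter)."""
    half = width // 2
    padded = "_" * half + word + "_" * half
    return [(padded[i:i + width], labels[i]) for i in range(len(word))]

# e.g. letter_windows("hearts", labels)[2] yields ("_hearts", "0A:"),
# given that labels[2], the class of the middle letter 'a', is "0A:".
```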
From celex \[Baayen, Piepenbrock, and van Rijn,1993\] we extracted, on the basis of the standard word base of 77,565 words with their corresponding transcription, a data base containing 675,745 instances. The number of classes (i.e., all possible combinations of phonemes and stress markers) occurring in this data base is 159.
### 3.2 POS: Part-of-speech tagging of word forms in context
Many words in a text are ambiguous with respect to their morphosyntactic category (part-of-speech). Each word has a set of lexical possibilities, and the local context of the word can be used to select the most likely category from this set \[Church,1988\]. For example, in the sentence “they can can a can”, the word can is tagged as modal verb, main verb, and noun respectively. We assume a tagger architecture that processes a sentence from left to right by classifying instances representing words in their contexts (as described in \[Daelemans et al.,1996\]). The word’s already tagged left context is represented by the disambiguated categories of the two words to the left; the word itself and its ambiguous right context are represented by categories which denote ambiguity classes (e.g. verb-or-noun).
The data set for the part-of-speech tagging task, henceforth referred to as the pos task, was extracted from the LOB corpus (available from icame, the International Computer Archive of Modern and Medieval English; consult http://www.hd.uib.no/icame.html for more information). The full data set contains 1,046,152 instances. The “lexicon” of ambiguity classes was constructed from the first 90% of the corpus only, and hence the data contains unknown words. To avoid a complicated architecture, we treat unknown words the same as known words, i.e., their ambiguous category is simply “unknown”, and they can only be classified on the basis of their context. (In our full pos tagger we have a separate classifier for unknown words, which takes into account features such as suffix and prefix letters, digits, and hyphens.)
### 3.3 PP: Disambiguating verb/noun attachment of prepositional phrases
As an example of a semantic-syntactic disambiguation task we consider a simplified version of the task of Prepositional Phrase (henceforth pp) attachment: the attachment of a PP in the sequence VP NP PP (VP $`=`$ verb phrase, NP $`=`$ noun phrase, PP $`=`$ prepositional phrase). The data consists of four-tuples of words, extracted from the Wall Street Journal Treebank \[Marcus, Santorini, and Marcinkiewicz,1993\] by a group at ibm \[Ratnaparkhi, Reynar, and Roukos,1994\] (the data set is available from ftp://ftp.cis.upenn.edu/pub/adwait/PPattachData/; we would like to thank Michael Collins for pointing this benchmark out to us). They took all sentences that contained the pattern VP NP PP and extracted the head words from the constituents, yielding a V N1 P N2 pattern (V $`=`$ verb, N $`=`$ noun, P $`=`$ preposition). For each pattern they recorded whether the PP was attached to the verb or to the noun in the treebank parse. For example, the sentence “he eats pizza with a fork” would yield the pattern:
> eats, pizza, with, fork, verb.
because here the PP is an instrumental modifier of the verb. A contrasting sentence would be “he eats pizza with anchovies”, where the PP modifies the noun phrase pizza.
> eats, pizza, with, anchovies, noun.
From the original data set, used in statistical disambiguation methods by \[Ratnaparkhi, Reynar, and Roukos,1994\] and \[Collins and Brooks,1995\], we took the train and test set together to form a new data set of 23,898 instances.
Due to the large number of possible word combinations and the comparatively small training set size, this data set can be considered very sparse. Of the 2390 test instances in the first fold of the 10-fold cross-validation (CV) partitioning, only 121 (5.1%) occurred in the training set; 619 (25.9%) instances had 1 mismatching word with any instance in the training set; 1492 (62.4%) instances had 2 mismatches; and 158 (6.6%) instances had 3 mismatches. Moreover, the test set contains many words that are not present in any of the instances in the training set.
The pp data set is also known to be noisy. \[Ratnaparkhi, Reynar, and Roukos,1994\] performed a study with three human subjects, all experienced treebank annotators, who were given a small random sample of the test sentences (either as four-tuples or as full sentences), and who had to make the same binary attachment decision. When given only the four-tuple, the humans gave the same answer as the Treebank parse 88.2% of the time; when given the whole sentence, 93.2% of the time.
### 3.4 NP: Base noun phrase chunking
Phrase chunking is defined as the detection of boundaries between phrases (e.g., noun phrases or verb phrases) in sentences. Chunking can be seen as a ‘light’ form of parsing. In NP chunking, sentences are segmented into non-recursive NPs, so-called baseNPs \[Abney,1991\]. NP chunking can, for example, be used to reduce the complexity of subsequent parsing, or to identify named entities for information retrieval. To perform this task, we used the baseNP tag set as presented in \[Ramshaw and Marcus,1995\]: $`I`$ for inside a baseNP, $`O`$ for outside a baseNP, and $`B`$ for the first word in a baseNP following another baseNP. As an example, the IOB-tagged sentence “The/I postman/I gave/O the/I man/I a/B letter/I ./O” results in the following baseNP-bracketed sentence: “\[The postman\] gave \[the man\] \[a letter\].” The data we used are based on the same material as \[Ramshaw and Marcus,1995\], which is extracted from the Wall Street Journal text in the parsed Penn Treebank \[Marcus, Santorini, and Marcinkiewicz,1993\]. Our NP chunker consists of two stages, and in this paper we have used instances from the second stage. An instance (constructed for each focus word) consists of features referring to the words, POS tags, and IOB tags (predicted by the first stage) of the focus word and the two immediately adjacent words. The data set contains a total of 251,124 instances.
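To make the tag set concrete, the following sketch converts a sequence of (word, IOB-tag) pairs into the bracketed notation of the example above; it is a minimal illustration with our own helper name.

```python
def iob_to_brackets(tagged):
    """Turn (word, tag) pairs with I/O/B tags into a baseNP-bracketed
    string (brackets are space-separated for readability)."""
    out, open_np = [], False
    for word, tag in tagged:
        if tag == "O":
            if open_np:
                out.append("]")
                open_np = False
        elif tag == "B" and open_np:
            out.extend(["]", "["])   # close the previous NP, open a new one
        elif not open_np:            # an I or B tag starting a new NP
            out.append("[")
            open_np = True
        out.append(word)
    if open_np:
        out.append("]")
    return " ".join(out)

# iob_to_brackets([("The", "I"), ("postman", "I"), ("gave", "O"),
#                  ("the", "I"), ("man", "I"), ("a", "B"),
#                  ("letter", "I"), (".", "O")])
# -> "[ The postman ] gave [ the man ] [ a letter ] ."
```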
### 3.5 Experimental method
We used 10-fold CV \[Weiss and Kulikowski,1991\] in all experiments comparing classifiers (Section 5). In this approach, the initial data set (at the level of instances) is partitioned into ten subsets. Each subset is taken in turn as a test set, and the remaining nine are combined to form the training set. Means are reported, as well as standard deviations from the mean. In the editing experiments (Section 4), the first train-test partition of the 10-fold CV was used for comparing the effect on test set accuracy of applying different editing schemes to the training set.
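A minimal sketch of this partitioning scheme follows (the helper is ours, not the actual experimental scripts; whether subsets are drawn interleaved, as here, or contiguously is an implementation choice we assume).

```python
def ten_fold_cv(data, k=10):
    """Yield (train, test) pairs: each of k interleaved subsets is held
    out once as the test set, and the other k-1 form the training set."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, folds[i]
```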
Having introduced the machine learning methods and data sets that we focus on in this paper, and the experimental method we used, the next Section describes empirical results from a first set of experiments aimed at getting more insight into the effect of editing exceptional instances in memory-based learning.
## 4 Editing exceptions in memory-based learning is harmful
The editing of instances from memory in memory-based learning or the $`k`$-nn classifier \[Hart,1968, Wilson,1972, Devijver and Kittler,1980\] serves two objectives: to minimize the number of instances in memory for reasons of speed or storage, and to minimize generalization error by removing noisy instances that are likely to cause generalization errors. Two basic types of editing, corresponding to these goals, can be found in the literature:
* Editing superfluous regular instances: deleting instances whose deletion does not harm the classification accuracy of their own class in the training set \[Hart,1968\].
* Editing unproductive exceptions: deleting instances that are incorrectly classified by their neighborhood in the training set \[Wilson,1972\], or roughly vice-versa, deleting instances that are bad class predictors for their neighborhood in the training set \[Aha, Kibler, and Albert,1991\].
We present experiments in which both types of editing are employed within the ib1-ig algorithm (Subsection 2.1). The two types of editing are performed on the basis of two criteria that estimate the exceptionality of instances: typicality \[Zhang,1992\] and class prediction strength \[Salzberg,1990\] (henceforth referred to as cps). Unproductive exceptions are edited by taking the instances with the lowest typicality or cps, and superfluous regular instances are edited by taking the instances with the highest typicality or cps. Both criteria are described in Subsection 4.1. Experiments are performed using the ib1-ig implementation of the TiMBL software package \[Daelemans et al.,1998\]; TiMBL, which incorporates ib1-ig and igtree as well as additional weighting metrics and search optimizations, can be downloaded from http://ilk.kub.nl/. We present the results of the editing experiments in Subsection 4.2.
### 4.1 Two editing criteria
We investigate two methods for estimating the (degree of) exceptionality of instance types: typicality and class prediction strength (cps).
#### 4.1.1 Typicality
In its common meaning, “typicality” denotes roughly the opposite of exceptionality; atypicality can be said to be a synonym of exceptionality. We adopt a definition from \[Zhang,1992\], who proposes a typicality function. Zhang computes typicalities of instance types by taking the notions of intra-concept similarity and inter-concept similarity \[Rosch and Mervis,1975\] into account. First, Zhang introduces a distance function which extends Equation 1; it normalizes the distance between two instances $`X`$ and $`Y`$ by dividing the summed squared distance by $`n`$, the number of features. The normalized distance function used by Zhang is given in Equation 5.
$$\mathrm{\Delta }(X,Y)=\sqrt{\frac{1}{n}\sum _{i=1}^{n}(\delta (x_i,y_i))^2}$$
(5)
The intra-concept similarity of instance $`X`$ with classification $`C`$ is its similarity (i.e., $`1-\mathrm{distance}`$) with all instances in the data set with the same classification $`C`$: this subset is referred to as $`X`$’s family, $`Fam(X)`$. Equation 6 gives the intra-concept similarity function $`Intra(X)`$ ($`|Fam(X)|`$ being the number of instances in $`X`$’s family, and $`Fam(X)_i`$ the $`i`$th instance in that family).
$$Intra(X)=\frac{1}{|Fam(X)|}\sum _{i=1}^{|Fam(X)|}\left(1.0-\mathrm{\Delta }(X,Fam(X)_i)\right)$$
(6)
All remaining instances belong to the subset of unrelated instances, $`Unr(X)`$. The inter-concept similarity of an instance $`X`$, $`Inter(X)`$, is given in Equation 7 (with $`|Unr(X)|`$ being the number of instances unrelated to $`X`$, and $`Unr(X)_i`$ the $`i`$th instance in that subset).
$$Inter(X)=\frac{1}{|Unr(X)|}\sum _{i=1}^{|Unr(X)|}\left(1.0-\mathrm{\Delta }(X,Unr(X)_i)\right)$$
(7)
The typicality of an instance $`X`$, $`Typ(X)`$, is $`X`$’s intra-concept similarity divided by $`X`$’s inter-concept similarity, as given in Equation 8.
$$Typ(X)=\frac{Intra(X)}{Inter(X)}$$
(8)
An instance type is typical when its intra-concept similarity is larger than its inter-concept similarity, which results in a typicality larger than 1. An instance type is atypical when its intra-concept similarity is smaller than its inter-concept similarity, which results in a typicality between 0 and 1. Around typicality value 1, instances cannot be sensibly called typical or atypical; \[Zhang,1992\] refers to such instances as boundary instances.
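Equations 5–8 translate into a few lines of Python, as the sketch below shows (our own naming; we exclude the instance itself from its family, and assume that both the family and the unrelated set are non-empty).

```python
def typicality(i, instances, classes, n_features):
    """Equations 5-8: the intra-concept similarity of instance i divided
    by its inter-concept similarity, using normalized overlap distances."""
    def dist(x, y):
        # Equation 5; for 0/1 deltas the squared delta equals the delta
        return (sum(a != b for a, b in zip(x, y)) / n_features) ** 0.5
    x, cx = instances[i], classes[i]
    fam = [y for j, y in enumerate(instances) if classes[j] == cx and j != i]
    unr = [y for j, y in enumerate(instances) if classes[j] != cx]
    intra = sum(1.0 - dist(x, y) for y in fam) / len(fam)   # Equation 6
    inter = sum(1.0 - dist(x, y) for y in unr) / len(unr)   # Equation 7
    return intra / inter                                    # Equation 8
```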
We adopt typicality as an editing criterion here, and use it for editing instances with low typicality as well as instances with high typicality. Low-typical instances can be seen as exceptions, or bad representatives of their own class and could therefore be pruned from memory, as one can argue that they cannot support productive generalizations. This approach has been advocated by \[Ting,1994a\] as a method to achieve significant improvements in some domains. Editing atypical instances would, in this line of reasoning, not be harmful to generalization, and chances are that generalization would even improve under certain conditions \[Aha, Kibler, and Albert,1991\]. High-typical instances, on the other hand, may be good predictors for their own class, but there may be enough of them in memory, so that a few may also be edited without harmful effects to generalization.
Table 3 provides examples of low-typical (for each task, the top three) and high-typical (bottom three) instances of all four tasks. The gs examples show that loan words such as czech introduce peculiar spelling-pronunciation relations; particularly foreign spellings turn out to be low-typical. High-typical instances are parts of words of which the focus letter is always pronounced the same way. Low-typical pos instances tend to involve inconsistent or noisy associations between an unambiguous word class of the focus word and a different word class as classification: such inconsistencies can be largely attributed to corpus annotation errors. Focus tags of high-typical pos instances are already unambiguous. The examples of low-typical pp instances represent minority exceptions or noisy instances in which it is questionable whether the chosen classification is right (recall that human annotators agree on only 88% of the instances in the data set, cf. Subsection 3.3), while the high-typical pp examples have the preposition ‘of’ in focus position, which typically attaches to the noun. Low-typical np instances seem to be partly noisy, and otherwise difficult to interpret. High-typical np instances are clear-cut cases in which a noun occurring between a determiner and a finite verb is correctly classified as being inside an NP.
#### 4.1.2 Class-prediction strength
A second estimate of exceptionality is to measure how well an instance type predicts the class of all other instance types within the training set. Several functions for computing class-prediction strength have been proposed, e.g., as a criterion for removing instances in memory-based ($`k`$-nn) learning algorithms, such as ib3 \[Aha, Kibler, and Albert,1991\] (cf. earlier work on edited $`k`$-nn \[Hart,1968, Wilson,1972, Devijver and Kittler,1980, Voisin and Devijver,1987\]); or for weighting instances in the Each algorithm \[Salzberg,1990\]. We use the class-prediction strength function as proposed by \[Salzberg,1990\]: the ratio of the number of times the instance type is a nearest neighbor of another instance with the same class to the number of times the instance type is the nearest neighbor of another instance type regardless of class. An instance type with class-prediction strength 1.0 is a perfect predictor of its own class; a class-prediction strength of 0.0 indicates that the instance type is a bad predictor of classes of other instances, presumably indicating that the instance type is exceptional. Even more than with typicality, one might argue that bad class predictors can be edited from the instance base. Likewise, one could also argue that instances with a maximal cps could be edited to some degree without harming generalization: strong class predictors may be abundant, and some may be safely forgotten since other instance types may be strong enough to support the class predictions of the edited instance type.
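In sketch form (our own implementation of Salzberg's ratio; distance ties between candidate nearest neighbors are broken arbitrarily here, and the quadratic loop is merely illustrative):

```python
def class_prediction_strength(instances, classes):
    """For each stored instance, the number of instances for which it is
    the nearest neighbor and shares their class, divided by the number
    of instances for which it is the nearest neighbor at all."""
    n = len(instances)
    used, right = [0] * n, [0] * n
    for i in range(n):
        # nearest neighbor of i among all other instances (overlap metric)
        j = min((m for m in range(n) if m != i),
                key=lambda m: sum(a != b for a, b in
                                  zip(instances[i], instances[m])))
        used[j] += 1
        right[j] += classes[i] == classes[j]
    return [r / u if u else None for r, u in zip(right, used)]
```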
In Table 4, examples from the four tasks of instances with low (top three) and high (bottom three) cps are displayed. Many instances with low cps are minority ambiguities. For instance, the gs examples represent instances which are completely ambiguous and of which the classification is the minority. For example, there are more words beginning with algo that have primary stress (class ‘1ae’) than secondary stress (class ‘2ae’), which makes the instance ‘\___algo 2ae’ a minority ambiguity.
To test the utility of these measures as criteria for justifying forgetting of specific training instances, we performed a series of experiments in which ib1-ig is applied to the four data sets, systematically edited according to each of four tested criteria. We performed the editing experiments on the first fold of the 10-fold CV partitioning of the four data sets. For each editing criterion (i.e., low and high typicality, and low and high cps), we created eight edited instance bases by removing 1%, 2%, 5%, 10%, 20%, 30%, 40%, and 50% of the instance tokens (rounded off so as to remove a whole number of instance types) according to the criterion from a single training set (the training set of the first 10-fold CV partition). ib1-ig was then trained on each of the edited training sets, and tested on the original unedited test set (of the first 10-fold CV partition).
To measure to what degree the two criteria are indeed different measures of exceptionality, the percentage of overlap between the removed types was measured for each data set. As can be seen in Figure 1, the two measures mostly show little overlap, certainly for editing below 10%. The reason for this is that typicality is based on global properties of the data set, whereas class prediction strength is based only on the local neighborhood of each instance. Only for the pp attachment and pos tagging tasks do the sets of edited exceptional instances overlap up to 70% when editing 10%.
### 4.2 Editing exceptions: Results
The general trend we observe in the editing experiments is that editing on the basis of typicality or class-prediction strength, whether low or high, is not beneficial, and is ultimately harmful to generalization accuracy. More specifically, we observe that editing instance types with high typicality or high cps is less harmful than editing instance types with low typicality or low class prediction strength – again, with some exceptions. The results are summarized in Figure 2. They show that, at least for our data sets, editing serves neither of its original goals. If the goal is a decrease of speed and memory requirements, editing criteria should allow editing of 50% or more without a serious decrease in generalization accuracy. Instead, we see disastrous effects on generalization accuracy at much lower editing rates, sometimes even at 1%. When the goal is improving generalization accuracy by removing noise, the focus of the editing experiments in this paper, none of the studied criteria turns out to be useful.
To compute the statistical significance of the effect of editing, the output for each criterion was compared to the correct classification and the output of the unedited classifier. The resulting cross-tabulation of hits and misses was subjected to McNemar’s $`\chi ^2`$ test \[Dietterich,1998 in press\]. Differences with $`p<0.05`$ are reported as significant.
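A minimal sketch of the statistic on paired correctness vectors, with the usual continuity correction (the helper name is ours):

```python
def mcnemar_chi2(correct_a, correct_b):
    """McNemar's chi-squared for two classifiers on the same test items:
    only the discordant pairs (one right, the other wrong) matter."""
    n01 = sum(x and not y for x, y in zip(correct_a, correct_b))
    n10 = sum((not x) and y for x, y in zip(correct_a, correct_b))
    d = n01 + n10
    return (abs(n01 - n10) - 1) ** 2 / d if d else 0.0
```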
A detailed look at the results per data set shows the following. Editing experiments on the gs task (top left of Figure 2) show significant decreases in generalization accuracy with all editing criteria and all amounts (even 1% is harmful); editing on the basis of low and high cps is particularly harmful, and all criteria except low typicality show a dramatic drop in accuracy at high levels of editing.
The editing results on the pos task (top right of Figure 2) indicate that editing on the basis of either low typicality or low class prediction strength leads to significant decreases in generalization accuracy even with the smallest amount (1%) of edited instance types. Editing on the basis of high typicality and high cps can be performed up to 10% and 5% respectively without significant performance loss. For this data set, the drop in performance is radical only for low typicality.
Editing on the pp task (bottom left of Figure 2) results in significant decreases of generalization accuracy with respectively 5% and 10% of edited instance tokens of low typicality and low cps. Editing with high typicality and high cps can be performed up to 20% and 10% respectively, without significant performance loss, but accuracies drop dramatically when 30% or more of high-typical or high-cps instance types are edited.
Finally, editing on the np data (bottom right of Figure 2) can be done without significant generalization accuracy loss with either the low or the high cps criterion, up to respectively 30% and 10%. Editing with low or high typicality, however, is harmful to generalization immediately from editing 1% of the instance tokens.
In sum, the experiments with editing on the basis of criteria estimating the exceptionality of instances show that forgetting of exceptional instances in memory-based learning while safeguarding generalization accuracy can only be performed to a very limited degree by (i) replacing instance tokens by instance types with frequency information (which is trivial and is done by default in ib1-ig), and (ii) removing small amounts of minority ambiguities with low (0.0) cps. None of the editing criteria studied is able to reliably filter out noisy instances. It seems that for the linguistic tasks we study, methods filtering out noise tend to also intercept at least some (small families of) productive instances. Our experiments show that there is little reason to believe that such editing will lead to accuracy improvement. When looking at editing from the perspective of reducing storage requirements, we find that the amount of editing possible without a significant decrease in generalization accuracy is limited to around 10%. Whichever perspective is taken, there does not seem to be a clear pattern across the data sets favoring either the typicality or class prediction strength criterion, which is somewhat surprising given their different basis (i.e., as a measure of global or local exceptionality).
## 5 Forgetting by decision-tree learning can be harmful in language learning
Another way to study the influence of exceptional instances on generalization accuracy is to compare ib1-ig, without editing, to inductive algorithms that abstract from exceptional instances by means of pruning or other devices. c5.0 and igtree, introduced in Section 2, are decision-tree learning methods that abstract in various ways from exceptional instances. We compared the three algorithms on all data sets using 10-fold CV. In this Section, we discuss the results of this comparison, and the influence of some pruning parameters of c5.0 on generalization accuracy.
### 5.1 Results
Ordered on a continuum representing how exceptional instances are handled, ib1-ig is at one end, keeping all training data, and c5.0 with default settings ($`c=25`$, $`m=2`$, value grouping on) is at the other end, abstracting from exceptional (noisy) instances by pruning, constructing features (by grouping subsets of values of a feature), and enforcing a minimal number of instances at each node. In between is igtree, which collapses instances that have the same class and the same values for the most relevant features into one node.
Table 5 displays the generalization accuracies, measured in percentages of correctly classified test instances, for ib1-ig, igtree, and c5.0 on the four tasks. We were unfortunately unable to finish the c5.0 experiment on the np data set for memory reasons (running on a SUN Sparc 5 with 160 Mb internal memory and 386 Mb swap space). The statistical significance of the differences between the algorithms is summarized in Table 6. We performed a one-tailed paired t-test between the results of the 10 CV runs.
As the results in these Tables show, ib1-ig has significantly better generalization accuracy than igtree for all data sets. In two of the three data sets where the comparison is feasible, ib1-ig performs significantly better than c5.0. For the pos data set, c5.0 outperforms ib1-ig with a small but statistically significant difference.
#### 5.1.1 Abstraction in C5.0
We performed additional experiments with c5.0 with increasing values for the $`c`$ and $`m`$ parameters, to gain more insight into the effect of explicitly forgetting feature-value information through pruning ($`c`$) or blocking the disambiguation of small amounts of instances ($`m`$). The following space of parameters was explored for each data set on the first fold of the 10 CV partitioning.
1. $`m=1`$ and $`c=100,75,50,40,35,30,25,20,15,10,5,2,1`$ to visualize the gradual increase of pruning, and
2. $`c=100`$ and $`m=1,2,3,4,5,6,8,10,15,20,30,50`$ to visualize the gradual decrease in the level of instance granularity at feature tests.
Figure 3 displays the effect on generalization accuracy of varying the $`c`$ parameter from 1 to 100 (left) and the $`m`$ parameter from 1 to 50 (right). Performance of c5.0 on the pos and pp tasks is only slightly sensitive to the setting of both parameters, while the performance on the gs task is seriously harmed when $`c`$ is too small (i.e., when pruning is high), or when $`m`$ is larger than 1 (i.e., when single instances to be disambiguated are ignored). The direct effect of changing both parameters is shown in Figure 4; small values of $`c`$ lead to smaller trees, as do large values of $`m`$. For the pos, and pp tasks, it is interesting to note that the performance of c5.0, although usually lower than that of ib1-ig, is maintained even with a small number of nodes: with $`m=50`$ and $`c=100`$, c5.0 needs 1324 nodes for the pos task and 34 nodes for the pp task. However, nodes in these trees contain a lot of information since grouping of feature values was used.
Table 7 compares c5.0 with default settings (c5.0def) to c5.0 with ‘lazy’ parameter setting $`c=100`$ and $`m=1`$ (c5.0lazy). The differences are significant at the $`p<0.05`$ level for the gs and pos data sets, but not for the pp data set.
These parameter tuning results indicate that decision-tree pruning is not beneficial to generalization accuracy, but neither is it generally harmful. Only on the gs task are strong decreases in generalization accuracy found with decreasing $`c`$. Likewise, small decreases in performance are witnessed with increasing $`m`$ for the pos and pp tasks, while a strong accuracy decrease is found with increasing $`m`$ for the gs task.
#### 5.1.2 Efficiency
In addition to generalization accuracy, which is the focus of our attention in this research, efficiency, measured in terms of training and testing speed and in terms of memory requirements, is also an important criterion to evaluate learning algorithms. For training, ib1-ig is fastest as it reduces to storing instances and computing information gain (although in the implementation we used, various indexing strategies are used), and c5.0, because of the computation involved in recursively partitioning the training set, value grouping, and pruning, is the slowest. igtree occupies a place in between, similar to ib1-ig in training time. Memory requirements are, in theory, highest in ib1-ig and lowest for c5.0 with default parameter settings. Again, igtree is in between, similar to c5.0 in memory usage. However, in practice, the implementations of c5.0 and igtree store the entire data set during training and hence take up more space than ib1-ig. Finally, for testing speed, the most important efficiency measurement, igtree and c5.0 are on a par, and both are some 2 orders of magnitude faster than ib1-ig. In \[Daelemans, Van den Bosch, and Weijters,1997\], the asymptotic complexity of ib1-ig and igtree is described. Illustrative timing results on the first partition of each of the data sets are provided in Table 8. See \[Daelemans et al.,1998\] for the details of the effects of various optimizations in the TiMBL package.
In this Section, we have shown that when comparing the generalization accuracy of ib1-ig to that of decision tree methods, we see the same results as in our experiments on editing: different types of abstraction (some of them explicitly aimed at removing exceptional instances) do not succeed in general in providing a better generalization accuracy than ib1-ig. However, for some data sets, if a lower generalization accuracy is acceptable, the pruning and abstraction methods of c5.0 are able to induce compact decision trees without a significant loss in initial generalization accuracy.
## 6 Why forgetting exceptions is harmful
In this section we explain why forgetting exceptional instances, either by editing them from memory or by pruning them from decision trees, is harmful to generalization accuracy for the language processing tasks studied. We explain this effect on the basis of the properties of this type of task and the properties of the learning algorithms used. Our approach of studying data set properties, to find an explanation for why one type of inductive algorithm rather than another is better suited for learning a type of task, is in the spirit of \[Aha,1992\] and \[Michie, Spiegelhalter, and Taylor,1994\].
### 6.1 Properties of language processing tasks
Language processing tasks are usually described as complex mappings between representations: from spelling to sound, from strings of words to parse trees, from parse trees to semantic formulas, etc. These mappings can be approximated by (cascades of) classification tasks \[Ratnaparkhi,1997, Daelemans,1996, Cardie,1996, Magerman,1994\], which makes them amenable to machine learning approaches. One of the most salient characteristics of natural language processing mappings is that they are noisy and complex. Apart from some regularities, they also contain many sub-regularities and (pockets of) exceptions. In other words, apart from a core of generalizable regularities, there is a relatively large periphery of irregularities \[Daelemans,1996\]. In rule-based nlp, this problem has to be solved using mechanisms such as rule ordering, subsumption, inheritance, or default reasoning (in linguistics this type of “priority to the most specific” mechanism is called the elsewhere condition). In the feature-vector-based classification approximations of these complex language processing mappings, this property is reflected in the high degree of disjunctivity of the instance space: classes exhibit a high degree of polymorphism. Another issue we study in this Section is the usefulness of exceptional as opposed to more regular instances in classification.
#### 6.1.1 Degree of polymorphism
Several quantitative measures can be used to show the degree of polymorphism: the number of clusters (i.e., groups of nearest-neighbor instances belonging to the same class), the number of disjunct clusters per class (i.e., the numbers of separate clusters per class), or the numbers of prototypes per class \[Aha,1992\]. We approach the issue by looking at the average number of friendly neighbors per instance in a leave-one-out experiment \[Weiss and Kulikowski,1991\]. For each instance in the four data sets a distance ranking of the 50 nearest neighbors to an instance was produced. In case of ties in distance, nearest neighbors with an identical class as the left-out instance are placed higher in rank than instances with a different class. Within this ranked list we count the ranking of the nearest neighbor of a different class. This rank number minus one is then taken as the cluster size surrounding the left-out instance. If, for example, a left-out instance is surrounded by three instances of the same class at distance 0.0 (i.e., no mismatching feature values), followed by a fourth nearest-neighbor instance of a different class at distance 0.3, the left-out instance is said to be in a cluster of size three. The results of the four leave-one-out experiments are displayed graphically in Figure 5. The $`x`$-axis of Figure 5 denotes the numbers of friendly neighbors found surrounding instances; the $`y`$-axis denotes the cumulative percentage of occurrences of friendly-neighbor clusters of particular sizes.
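The counting procedure can be sketched as follows (our own code; the cap of 50 ranked neighbors used in the actual experiment is omitted for brevity).

```python
def friendly_cluster_size(i, instances, classes):
    """Rank all other instances by overlap distance, breaking distance
    ties in favor of the same class, and return how many same-class
    neighbors precede the first different-class neighbor."""
    x, cx = instances[i], classes[i]
    ranked = sorted(
        (sum(a != b for a, b in zip(x, instances[j])), classes[j] != cx)
        for j in range(len(instances)) if j != i)
    size = 0
    for _, different in ranked:
        if different:
            break
        size += 1
    return size
```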
The cumulative percentage graphs in Figure 5 show that for the gs task, many instances have only a handful of friendly neighbors: 59.9% of the gs instances have five friendly neighbors or fewer, while 35.8% have no friendly neighbors at all. For the pp task, the number of friendly neighbors is larger; 50.1% of the pp instances have 40 or fewer friendly neighbors. Instances of the pos and np tasks tend to have even more friendly neighbors surrounding them. In sum, the gs task appears to display high disjunctivity (i.e., a high degree of polymorphism) of its 159 classes; for the other three tasks, disjunctivity appears to be slightly lower, but still the classes are scattered across many unconnected clusters in the instance space.
In sum, we find indications of high disjunctivity, or polymorphism, in the language data sets investigated in this study. Other studies in which machine learning algorithms are applied to language data, and in which special attention is paid to learning exceptions, mention similar indications (e.g., \[Mooney and Califf,1995, Van den Bosch et al.,1995\]). However, the question whether language data in general exhibit a higher degree of disjunctivity or polymorphism than comparable data sets of non-linguistic origin remains open, and will be a focal point in future research.
#### 6.1.2 Usefulness of exceptional instances
Having established a fairly high degree of disjunctivity for our data sets, an indication is needed that fully retaining this disjunctivity is indeed beneficial. With this in mind, we can return to our editing experiments and examine why even instances with low typicality or low prediction strength cannot be removed from the training data. For this purpose, we have looked at the instances that are actually used in the memory-based classification process to classify the test instances. We call the nearest neighbors that were used to classify test instances the support set. The distribution of both typicality and cps over the support set can be seen in Figure 6. The support set can be divided into support for correct decisions (Right) and errors (Wrong). The average number of neighbors for correct decisions is approximately the same as for errors. The figures clearly show that even instances with respectively low typicality (below 1.0) or low cps (below 0.5) are more often used to support correct decisions than errors. Although this does not present a proof of the detrimental effects of their removal, it does show that exceptional events can be beneficial for accurate generalization. The small disjunctive clusters are productive for classifying new instances.
### 6.2 Properties of learning algorithms
If we classify instance $`X`$ by looking at its nearest neighbors, we are in fact estimating the probability $`P(class|X)`$, by looking at the relative frequency of the class in the set defined by $`sim_k(X)`$, where $`sim_k(X)`$ is a function from $`X`$ to the set of most similar instances present in the training data. The $`sim_k(X)`$ function given by the overlap metric groups varying numbers of instances into buckets of equal similarity. A bucket is defined by a particular number of mismatches with respect to instance $`X`$. Each bucket can further be decomposed into a number of schemata characterized by the position of the mismatch.
The search for the nearest neighbors results in the use of the most similar instantiated schema or bucket for extrapolation. In statistical language modeling this is known as backed-off estimation \[Collins and Brooks,1995, Katz,1987\]. The distance metric defines a specific-to-general ordering ($`X\prec Y`$: read $`X`$ is more specific than $`Y`$; see also \[Zavrel and Daelemans,1997\]), where the most specific schema is the schema with zero mismatches (i.e., an identical instance in memory), and the most general schema has a mismatch on every feature, which corresponds to the entire memory being retrieved.
If information gain weights are used in combination with the overlap metric, individual schemata instead of buckets become the steps of the back-off sequence (unless two schemata are exactly tied in their IG values). The $``$ ordering becomes slightly more complicated now, as it depends on the number of wild-cards and on the magnitude of the weights attached to those wild-cards. Let $`S`$ be the most specific (zero mismatches) schema. We can then define the $``$ ordering between schemata in the following equation, where $`\mathrm{\Delta }(X,Y)`$ is the distance as defined in Equation 1.
$$S^{\prime }\prec S^{\prime \prime }\iff \mathrm{\Delta }(S,S^{\prime })<\mathrm{\Delta }(S,S^{\prime \prime })$$
(9)
This approach represents a type of implicit parallelism. The importance of all of the $`2^F`$ schemata is specified using only $`F`$ parameters (i.e., the IG weights), where $`F`$ is the number of features. Moreover, using the schemata keeps the information from all training instances available for extrapolation in those cases where more specific information is not available.
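To illustrate the implicit parallelism, the sketch below enumerates all $`2^F`$ schemata together with their distances from the fully specific schema; sorting these distances yields exactly the specific-to-general back-off order of Equation 9 (the code and naming are ours).

```python
from itertools import combinations

def schema_order(ig_weights):
    """Each schema is a set of wildcarded (mismatching) feature
    positions; its distance from the fully specific schema is the summed
    IG weight of those positions. Sorting gives the back-off order."""
    n = len(ig_weights)
    schemata = [(sum(ig_weights[f] for f in wild), wild)
                for r in range(n + 1)
                for wild in combinations(range(n), r)]
    return sorted(schemata)
```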
Decision trees can also be described as backed-off estimators of the class probability conditioned on the combination of feature values. However, here some schemata are not available for extrapolation. Even in a decision tree without any pruning, such abstraction takes place. Once a test instance matches an arc with a certain value for a particular feature, the set of schemata from which it can receive a classification is restricted to those for which that feature matches. This means that other schemata, which are more specific when judged by the ordering of Equation 9, are unavailable. If pruning is applied, even more schemata are blocked.
Figure 7 shows why this elimination of schemata can be harmful. In this figure the percentage correct for our data sets is plotted as a function of specificity. The decrease of the accuracy seen in the graph clearly confirms the intuition that an extrapolation from a more specific support set is more likely to be correct. Reasoning in the other direction, it suggests that any forgetting of specific information from the training set will push at least some test instances in the direction of a less specific support set, and thus of lower accuracy.
A more direct illustration of this matter can be given for the limited accessibility of schemata in igtree. As the ordering of features is constant throughout the tree, the schemata that are accessible at any given node in the tree are limited to those that match all features with a higher ig weight. The depth of the igtree node at which classification was performed can directly be translated into a distance between the test pattern and the branch of the tree, using the ig weights. To make the comparison fair, we have used an unpruned igtree. Table 9 shows the average distances at which classifications were made for the four tasks at hand. igtree consistently classifies at a larger average distance than ib1-ig. Moreover, through analysis of those test instances that were misclassified by igtree but classified correctly by ib1-ig (i.e., TF in Table 9), we found that for a majority (69% for gs, 90% for pos, 55% for pp, and 100% for np) of these instances the classification distance was larger for igtree than for ib1-ig. This means that in all these cases a closer neighbor was available to support a correct classification, but was not used because its schema was not accessible.
#### 6.2.1 Increasing $`k`$
As an aside, we note that we have reported solely on experiments with ib1-ig with $`k=1`$. Although it is not directly related to “forgetting”, taking a larger value of $`k`$ can also be considered a type of abstraction, because the class is estimated from a somewhat smoothed region of the instance space. On the basis of the results described so far alone, we cannot claim that $`k=1`$ is the optimal setting for our experiments. The results discussed above suggest that the average ‘$`k`$’ actually surrounding an instance is larger than $`1`$, although many instances have only one or no friendly neighbor, especially in the case of the gs task. The latter suggests that a considerable amount of ambiguity is found in instances that are highly similar; matching with $`k>1`$ may fail to detect those cases in which an instance has one best-matching friendly neighbor and many next-best-matching instances of a different class.
We performed experiments with ib1-ig on the four tasks with $`k=2`$, $`k=3`$, and $`k=5`$, and mostly found a decrease in generalization accuracy. Table 10 lists the effects of the higher values of $`k`$. For all tasks except np, setting $`k>1`$ leads to a harmful abstraction from the best-matching instance(s) to a more smoothed best matching group of instances.
In this Section, we have tried to interpret our empirical results in terms of properties of the data and of the learning algorithms used. A salient characteristic of our language learning tasks, shown most clearly in the gs data set but also present in the other data sets, is the presence of a high degree of class polymorphism (high disjunctivity). In many cases, these small disjuncts constitute productive (pockets of) exceptions which are useful in producing accurate extrapolations to new data. ib1-ig, through its implicit parallelism and its feature relevance weighting, is better suited than decision tree methods to make available the most specific relevant patterns in memory to extrapolate from.
## 7 Related research
\[Daelemans,1995\] provides an overview of memory-based learning work on phonological and morphological tasks (grapheme-to-phoneme conversion, syllabification, hyphenation, morphological synthesis, word stress assignment) at Tilburg University and the University of Antwerp in the early nineties. The present paper directly builds on the results obtained in that research. More recently, the approach has been applied to part-of-speech tagging (morphosyntactic disambiguation), morphological analysis, and the resolution of structural ambiguity (prepositional-phrase attachment) \[Daelemans and Van den Bosch,1996, Van den Bosch, Daelemans, and Weijters,1996, Zavrel, Daelemans, and Veenstra,1997\]. Whenever these studies involve a comparison of memory-based learning to more eager methods, a clear advantage of memory-based learning is reported.
Cardie \[Cardie,1993, Cardie,1994\] suggests a memory-based learning approach for both (morpho)syntactic and semantic disambiguation and shows excellent results compared to alternative approaches. \[Ng and Lee,1996\] report results superior to previous statistical methods when applying a memory-based learning method to word sense disambiguation. In reaction to \[Mooney,1996\] where it was shown that naive Bayes performed better than memory-based learning, \[Ng,1997\] showed that with higher values of $`k`$, memory-based learning obtained the same results as naive Bayes.
The exemplar-based reasoning aspects of memory-based learning are also prominent in the large literature on example-based machine translation (cf. \[Jones,1996\] for an overview), although systematic comparisons to eager approaches seem to be lacking in that field.
In the recent literature on statistical language learning, which currently still largely adheres to the hypothesis that what is exceptional (improbable) is unimportant, results similar to those discussed here for machine learning have been reported. In \[Bod,1995\], a data-oriented approach to parsing is described in which a treebank is used as a ‘memory’ and in which the parse of a new sentence is computed by reconstruction from subtrees present in the treebank. It is shown that removing all hapaxes (unique subtrees) from memory degrades generalization performance from 96% to 92%. Bod notes that “this seems to contradict the fact that probabilities based on sparse data are not reliable.” (\[Bod,1995\], p.68). In the same vein, \[Collins and Brooks,1995\] show that when applying the back-off estimation technique \[Katz,1987\] to learning prepositional-phrase attachment, removing all events with a frequency of less than 5 degrades generalization performance from 84.1% to 81.6%. In \[Dagan, Lee, and Pereira,1997\], finally, a similarity-based estimation method is compared to back-off and maximum-likelihood estimation on a pseudo-word sense disambiguation task. Again, a positive effect of events with frequency 1 in the training set on generalization accuracy is noted.
In the context of statistical language learning, it is also relevant to note that as far as comparable results are available, statistical techniques, which also abstract from exceptional events, never obtain a higher generalization accuracy than ib1-ig \[Daelemans,1995, Zavrel and Daelemans,1997, Zavrel, Daelemans, and Veenstra,1997\]. Reliable comparisons (in the sense of methods being compared on the same train and test data) with the empirical results reported here cannot be made, however.
In the machine learning literature, the problem of small disjuncts in concept learning has been studied before by \[Quinlan,1991\], who proposed more accurate error estimation methods for small disjuncts, and by \[Holte, Acker, and Porter,1989\]. The latter define a small disjunct as one that has small coverage (i.e., a small number of training items are correctly classified by it). This definition differs from ours, in which small disjuncts are those that have few neighbors with the same category. Nevertheless, similar phenomena are noted: sometimes small disjuncts constitute a significant portion of an induced definition, and it is hard to distinguish productive small disjuncts from noise (see also \[Danyluk and Provost,1993\]). A maximum-specificity bias for small disjuncts is proposed to make small disjuncts less error-prone. Memory-based learning is of course a good way of implementing this remedy (as noted, e.g., in \[Aha,1992\]). This prompted \[Ting,1994b\] to propose a composite learner with an instance-based component for small disjuncts, and a decision tree component for large disjuncts. This hybrid learner improves upon the c4.5 baseline for several definitions of ‘small disjunct’ for most of the data sets studied. Similar results have recently been reported by \[Domingos,1996\], where rise, a unification of rule induction (c4.5) and instance-based learning (pebls) is proposed. In an empirical study, rise turned out to be better than alternative approaches, including its two ‘parent’ algorithms. The fact that rule induction in rise is specific-to-general (starting by collapsing instances) rather than general-to-specific (as in the decision tree methods used in this paper), may make it a useful approach for our language data as well.
## 8 Conclusion and future research
We have provided empirical evidence for the hypothesis that forgetting exceptional instances, either by editing them away according to some exceptionality criterion in memory-based learning or by abstracting from them in decision-tree learning, is harmful to generalization accuracy in language learning. Although we found some exceptions to this hypothesis – cases where editing or abstraction was not significantly harmful – our experiments consistently show that abstraction and editing are never beneficial to generalization accuracy.
Data sets representing nlp tasks show a high degree of polymorphism: categories are represented in instance space as small regions with the same category, separated by instances with a different category (the categories are highly disjunctive). This was shown empirically by looking at the average number of friendly neighbors per instance, an indirect measure of the average size of the homogeneous regions in instance space (a sketch of this measure is given below). This analysis showed that for our nlp tasks, classes are scattered across many disjunctive clusters in instance space. This turned out to be the case especially for the gs data set, the only task presented here which has been studied extensively in the ML literature before (through the similar nettalk data set). It will be necessary to investigate polymorphism further using more language data sets and more ways of operationalizing the concept of ‘small disjuncts’.
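As an illustration only, the sketch below counts, for each instance, how many of its nearest neighbors carry the same class, using a plain overlap distance; the actual experiments additionally weight features by information gain, and all names here are ours.

```python
import numpy as np

def friendly_neighbors(X, y, k=1):
    """For each instance, count how many of its k nearest neighbors
    (overlap distance on symbolic features) carry the same class."""
    counts = np.zeros(len(X), dtype=int)
    for i in range(len(X)):
        d = np.sum(X != X[i], axis=1)   # number of mismatching feature values
        d[i] = X.shape[1] + 1           # exclude the instance itself
        nn = np.argsort(d, kind="stable")[:k]
        counts[i] = np.sum(y[nn] == y[i])
    return counts

# A low average count signals a highly disjunctive (polymorphic) task
X = np.array([[0, 1], [0, 2], [1, 0], [1, 2]])
y = np.array([0, 1, 0, 1])
print(friendly_neighbors(X, y).mean())  # 0.25 for this toy data
```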
The high disjunctivity explains why editing the training set in memory-based learning using typicality and cps criteria does not improve generalization accuracy, and even tends to decrease it. The instances used for correct classification (what we called the support set) are as likely to be low-typicality or low-class-prediction-strength (thus exceptional) instances as high-typicality or high-class-prediction-strength instances. The editing that we find most harmless (although never beneficial) to generalization accuracy is editing away up to about 20% of the high-typicality and high-class-prediction-strength instances. Nevertheless, these results leave room for combining memory-based learning and specific-to-general rule learning of the kind presented in \[Domingos,1996\]. It would be interesting further research to test his approach on our data.
The fact that the generalization accuracies of the decision-tree learning algorithms c5.0 and igtree are mostly worse than those of ib1-ig on this type of data set can be further explained by their properties. Interpreted as statistical backed-off estimators of the class probability given the feature-value vector, decision trees render some schemata (sets of partially matching instances) inaccessible for extrapolation, due to the way the information-theoretic splitting criterion works. Given the high disjunctivity of categories in language learning, abstracting away from these schemata and not using them for extrapolation is harmful. This type of abstraction takes place even when no pruning is used. Apparently, the assumption in decision tree learning that differences in relative importance of features can always be exploited is, for the tasks studied, untrue. Memory-based learning, on the other hand, because it implicitly keeps all schemata available for extrapolation, can use the advantages of information-theoretic feature relevance weighting without the disadvantage of losing relevant information. We plan to expand on the encouraging results on other data sets using tribl, a hybrid of igtree and ib1-ig that leaves schemata accessible when there is no clear feature-relevance distinction \[Daelemans, Van den Bosch, and Zavrel,1997\].
When decision trees are pruned, implying further abstraction from the training data, low-frequency instances with deviating classifications constitute the first information to be removed from memory. When the data representing a task is highly disjunctive, and low-frequency instances do not represent noise but may (and do) reoccur in test data, as is especially the case with the gs task, pruning is harmful to generalization. The first reason for decision-tree learning to be harmful (accessibility of schemata) is the more serious one, since it suggests that there is no parameter setting that may help c5.0 and similar algorithms in surpassing or equaling the performance of ib1-ig on these tasks. The second reason (pruning), less important than the first, only applies to data sets with low noise. However, there exist variations of decision tree learning that may not suffer from these problems (e.g., the lazy decision trees of \[Friedman, Kohavi, and Yun,1996\]) and that remain to be investigated in the context of our data.
Taken together, the empirical results of our research strongly suggest that keeping full memory of all training instances is at all times a good idea in language learning.
### Acknowledgements
This research was done in the context of the “Induction of Linguistic Knowledge” research programme, supported partially by the Foundation for Language Speech and Logic (TSL), which is funded by the Netherlands Organization for Scientific Research (NWO). AvdB performed part of his work at the Department of Computer Science of the Universiteit Maastricht. The authors wish to thank Jorn Veenstra for his earlier work on the PP attachment and NP chunking data sets, and the other members of the Tilburg ILK group, Ton Weijters, Eric Postma, Jaap van den Herik, and the MLJ reviewers for valuable discussions and comments.
# A Look At Three Different Scenarios for Bulge Formation
## 1 Introduction
A number of different mechanisms have been proposed for the formation of bulges: primordial collapse (Eggen, Lynden-Bell, & Sandage 1962), hierarchical galaxy formation models (Kauffmann & White 1993, Baugh et al. 1996), infall of satellite galaxies, and the secular evolution of galaxy disks. Numerous arguments have been put forward that secular evolution of disks has occurred in at least some galaxies, particularly in late-type galaxies (Kormendy 1992; Courteau 1996). However, for some galaxies, notably those with a massive bulge, simple energy arguments show that not all galaxies could have formed in this way. Such galaxies would have necessarily formed by primordial collapse, major mergers at high redshifts, or infall of satellite galaxies (Pfenniger 1992). In summary, it appears that many mechanisms have been at work in forming bulges over the history of the universe, and so the question is no longer which mechanisms were effective in forming bulges but in what fractions.
In this paper, we shall broadly classify these bulge formation scenarios into three types: secular evolution in which bulges form relatively late by a series of bar-induced starbursts, one in which bulges form simultaneously with disks, and an early bulge formation model in which bulges form earlier than disks. Adjusting the three models to produce optimal agreement with $`z=0`$ observations, we compare their high-redshift predictions with present observations, in particular, with data compiled in various studies based on the CFRS (Schade et al. 1995; Schade et al. 1996; Lilly et al. 1998) and the HDF (Abraham et al. 1998).
We begin by presenting the samples used to constrain the models (§2), follow with a description of the models (§3), provide a brief description of our computational method (§4), move on to our high-redshift predictions and comparisons with available observations (§5), and finally summarize the implications of our analysis (§6). Hereinafter, we use $`H_o=50`$ km/s/Mpc.
## 2 Local $`z=0`$ Samples
For the purposes of normalizing our models, we examine two local $`z=0`$ samples: the de Jong sample (de Jong & van der Kruit 1994; de Jong 1995, 1996; hereinafter, DJ) and the Peletier & Balcells sample of galaxies (Peletier et al. 1994; Peletier & Balcells 1996; hereinafter, PB). The DJ sample is selected from $`12.5\%`$ of the sky and considers only relatively face-on ($`b>0.625`$) galaxies (37.5% of all orientations). For simplicity, we shall treat this sample as selected over $`12.5\%\times 37.5\%\approx 4.7\%`$ of the sky (i.e., 0.59 steradians). Following de Jong (1996), we also take it to be diameter-limited in $`R`$ to galaxies larger than $`2^{\prime }`$ at 24.7 R-mag $`\text{arcsec}^{-2}`$.
The PB sample is similarly diameter-limited: the $`B`$-band diameter in terms of its $`25`$ mag/arcsec<sup>2</sup> isophote was restricted to the range $`90^{\prime \prime }`$ to $`150^{\prime \prime }`$. However, in contrast to the relatively face-on ($`b>0.625`$) DJ sample, the PB sample considers galaxies of all orientations, and this was our principal reason for including it in our comparisons. Unfortunately, the PB sample is more restricted than the DJ sample in the Hubble types included (3.0 to 6.5) and in the surface brightness range $`(20.5<\mu _0^{b_J}<21.5)`$.
In the model comparisons which follow, we select our local $`z=0`$ subsample from the local samples using the DJ selection criteria, since the PB sample is roughly a subset of the DJ sample strictly in terms of the selection criteria. We normalize the PB sample relative to the DJ sample so that it contains 31% of the number in the de Jong sample, since $`752`$ out of 1207 galaxies (62%) in the ESO-LF catalogue down to $`1^{\prime }`$ (Lauberts & Valentijn 1989) were of type 3.0 to 6.5 (Sbcd) and roughly 50% of the DJ sample was in the PB surface brightness selection range ($`0.62\times 0.50\approx 0.31`$). In principle, then, the PB sample should be a simple subset of the DJ sample. Unfortunately, a simple look at the relative colour and $`B/T`$ distributions for the DJ and PB samples indicates that there are more galaxies in the PB sample with large $`B/T`$ ratios and relatively blue bulges than in the DJ sample. Many of these differences can be attributed to the fact that the properties of the DJ sample were measured from face-on galaxies while the PB sample covered a range of inclination angles. Edge-on disks in the PB sample are simply redder and less prominent relative to the bulges due to the greater path length the light must traverse through the dusty disks.
In all of the model comparisons which follow, due to the various complications associated with the exact meaning of UGC $`R`$ diameter, the relative fraction of low surface brightness galaxies, and the influence of disk orientation on selection, we simply adjust the surface brightness threshold at which the UGC diameters (from which the DJ sample was taken) were measured to obtain rough agreement with the number of galaxies obtained in the DJ sample.
## 3 Models
Starting with the local properties of disks and a reasonable distribution of formation times, we construct a fiducial disk evolution model, to which we add three different models for bulge formation, the principal difference being simply the time at which the bulges form relative to that of their associated disks. Since it is simply our intent to examine the extent to which current observations allow us to discriminate the order in which bulges and disks form, we intentionally do not consider a more complex model (e.g., Kauffmann et al. 1993; Baugh et al. 1996; Molla & Ferrini 1995), nor do we attempt to model the internal dynamics or structure of spirals (e.g., Friedli & Benz 1995).
We assume the Sabc and Sdm luminosity functions (LFs) for disk galaxies given by Binggeli, Sandage, and Tammann (1988). We adjust the bulge-to-total ($`B/T`$) distributions of these galaxy types to obtain fair agreement with those distributions measured in the DJ and PB samples (see Figure 4). We evolve these galaxies backwards in time in luminosity according to their individual star formation histories without number evolution, presuming that significant evolution in number occurs only at redshifts above those examined in the present study ($`0<z<1`$). For this reason, we do not make predictions above $`z\sim 1`$.
We take the formation times of these galaxies to be distributed identically to that given by the procedure outlined in Section 2.5.2 of Lacey & Cole (1993), except that we take the halo formation time to be the time by which 0.25 of the final halo mass has been assembled. For the purposes of calculating halo formation times corresponding to galaxies of a given luminosity, we assume a constant mass-to-light ratio in which a $`M_{b_J}=-21.1`$ galaxy has $`4\times 10^{12}M_{\odot }`$, and we adopt the CDM matter power spectrum given in White & Frenk (1991):
$$P(k)=\frac{1.94\times 10^4\,b^{-2}\,k}{(1+6.8k+72k^{3/2}+16k^2)^2}\text{Mpc}^3$$
(1)
For $`b=1`$, this expression yields $`\sigma _8=1.`$
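For reference, this normalization corresponds to the standard top-hat definition of the rms mass fluctuation, which for $`H_o=50`$ km/s/Mpc is evaluated at $`R=8h^{-1}`$ Mpc $`=16`$ Mpc:

$$\sigma _R^2=\frac{1}{2\pi ^2}\int _0^{\infty }k^2P(k)\left[\frac{3(\mathrm{sin}kR-kR\mathrm{cos}kR)}{(kR)^3}\right]^2dk$$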
Just as we choose to take the halo formation time to be the time over which 0.25 of the final halo mass is assembled instead of the 0.5 used by Lacey & Cole (1993), we choose $`\mathrm{\Omega }=0.15`$ to push the epoch of large-scale merging to high enough redshift that the observed number of stars is able to build up in these galaxies without being destroyed by the merging events prevalent at earlier epochs. Since we have observational constraints on the star formation history of the universe, there is a certain epoch after which disks must remain largely undisturbed. Of course, if we had assumed that some fraction of the stars in the disk were added by minor mergers, we could push the halo formation time, and consequently the formation of disks and bulges, to lower redshift by raising the value of $`\mathrm{\Omega }`$. We illustrate the distribution of halo formation times in Figure 1 for several different luminosity ranges.
We take star formation in the disk to commence at the halo formation time, with an e-folding time that depends on the $`z=0`$ galaxy luminosity, i.e., $`\tau =(3\text{Gyr})10^{0.4(M_{b_J}+20)}`$, to roughly fit the $`z=0`$ colour-magnitude relationship (see Figure 2), so that the star formation rate in the disk of a galaxy with absolute magnitude $`M_{b_J}`$ and halo formation time $`t_{HF}`$ (with $`t`$ and $`t_{HF}`$ measured as lookback times) can be expressed:
$$SFR_{disk}\propto \{\begin{array}{cc}e^{-(t_{HF}-t)/\tau }\hfill & t<t_{HF},\hfill \\ 0\hfill & t\geq t_{HF}.\hfill \end{array}$$
(2)
We adopt the standard equations for the evolution of metallicity to $`z=0`$ (Tinsley 1980) and, since we are not trying to develop a universal model for chemical evolution in disks, we simply tune the yields for each luminosity separately to reproduce the $`z=0`$ disk metallicities given by $`[Fe/H]=-0.17(M_Z+20)-0.28`$ (Zaritsky, Kennicutt, & Huchra 1994).
Using the Calzetti (1997) extinction prescription and a screen model for the dust, we take the optical depth $`\tau `$ of dust in the $`B`$ band to equal $`0.7\times 10^{-0.17(1.3)(M_B+19.72)}`$, consistent with the values given in Peletier et al. (1995). We assume exponential profiles for the disks, with a $`b_J`$ central surface brightness given for simplicity by $`21.65+0.2(M_B+21)`$ mag/arcsec<sup>2</sup>, where this expression accounts for the observed correlation between surface brightness and luminosity (e.g., de Jong 1996; McGaugh & de Blok 1997). We compute spectra for the purposes of determining colours and magnitudes using the Bruzual & Charlot instantaneous-burst, metallicity-dependent spectral synthesis tables as compiled in Leitherer et al. (1996). For metallicities in between those compiled there, we have interpolated between the provided tables in units of $`\mathrm{log}Z`$.
To calibrate our fiducial disk evolution models, we compare the model predictions to both the colour-magnitude relationship of disks in spirals and the cosmic history of luminosity density. Firstly, with regard to the colour-magnitude relationship, we note that we produce good agreement with the colour-magnitude relationship given in the DJ and PB samples, both in terms of their slopes and overall distributions (Figure 2). Given our relatively reasonable assumptions about the quantity of dust and metals in these galaxies, matching these distributions gives us a basic constraint on the star formation history in disk galaxies of different luminosities. Secondly, all models, for which bulge, disk, and E/S0 contributions have been considered, produce fair agreement with the luminosity density of the universe at all redshifts for which observable constraints are available (Lilly et al. 1996; Madau et al. 1996; Connolly et al. 1997), though the observed luminosity density is slightly lower at lower redshifts (Figure 3). Resolving this discrepancy requires pushing the formation of disk galaxies to higher redshifts, i.e., lowering $`\mathrm{\Omega }`$. Discrepancies in the ultraviolet luminosity density could be easily removed by introducing moderate dust extinction at high $`z`$, as motivated by many recent analyses (Sawicki & Yee 1998; Calzetti 1997; Meurer et al. 1997). Note that the similarities of the models at high redshift follow from the dominant and identical E/S0 contribution. Having described our fiducial disk model, we now describe the three basic models for bulge formation that we will be comparing.
Secular Evolution Model: In the secular evolution scenario, bulges form after disks. In this scenario, gas accretion onto the disk triggers the formation of a bar, gas-inflow into the center, and then star formation in the galaxy center (Friedli & Benz 1995; Norman, Sellwood, & Hasan 1996). The build-up of a central mass destroys the bar and inhibits gas inflow, consequently stopping star formation in the bulge until enough gas accretes onto the galaxy to trigger the formation of a second bar, gas inflow into the center, and finally a second central starburst. Somewhat arbitrarily, we suppose that the first central starburst occurs some 2 Gyr after disk formation in our fiducial model, that central starbursts last 0.1 Gyr, a time-scale matching those found in the detailed simulations by Friedli & Benz (1995), and that 2.4 Gyr separates central starbursts, numbers used just to illustrate the general effect of a late secular evolution model for the bulge. We repeat this cycle indefinitely and assume that the star formation rate follows an envelope with an e-folding time equivalent to the history of disk star formation:
$$SFR_{bulge}\propto \{\begin{array}{cc}e^{-(t_{HF}-2\,\text{Gyr}-t)/\tau }\hfill & \frac{t_{HF}-2\,\text{Gyr}-t}{2.5\,\text{Gyr}}-\left\lfloor \frac{t_{HF}-2\,\text{Gyr}-t}{2.5\,\text{Gyr}}\right\rfloor <0.04,\hfill \\ 0\hfill & \frac{t_{HF}-2\,\text{Gyr}-t}{2.5\,\text{Gyr}}-\left\lfloor \frac{t_{HF}-2\,\text{Gyr}-t}{2.5\,\text{Gyr}}\right\rfloor \geq 0.04,\hfill \\ 0\hfill & t\geq t_{HF}-2\,\text{Gyr}\hfill \end{array}$$
(3)
where $`\lfloor \cdot \rfloor `$ is the greatest integer (floor) function. We thereby force star formation in the disk and the bulge to follow very similar time scales, given the extent to which they are both driven by gas infall processes. Of course, bulge growth over the history of the universe should affect these time-scales, but given the already large uncertainties in both gas accretion and star formation, we have decided to ignore this. For all bulge models, we adopt the slope of the approximate luminosity-metallicity relationship $`[Fe/H]=-(0.02/0.135)M_R-3.1852`$ (González & Gorgas 1996; Jablonka et al. 1996; Buzzoni et al. 1992). For the secular evolution model we fix the metallicity at the $`z=0`$ value, somewhat crudely accounting for the fact that this gas would already have been polluted by stars which formed in the disk.
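To make the reconstructed duty cycle explicit, the sketch below evaluates Eq. (3) numerically; it assumes $`t`$ and $`t_{HF}`$ are lookback times in Gyr, uses an arbitrary normalization, and the function name is ours.

```python
import math

def sfr_bulge_secular(t, t_hf, tau):
    """Bursty bulge SFR of Eq. (3): 0.1 Gyr bursts every 2.5 Gyr,
    beginning 2 Gyr after disk formation; t, t_hf are lookback times (Gyr)."""
    x = t_hf - 2.0 - t                     # time elapsed since the first burst
    if x < 0.0:                            # before the first central starburst
        return 0.0
    phase = x / 2.5 - math.floor(x / 2.5)  # fractional part of the 2.5 Gyr cycle
    if phase >= 0.04:                      # duty cycle: 0.1 Gyr / 2.5 Gyr = 0.04
        return 0.0
    return math.exp(-x / tau)              # exponential star formation envelope

# For t_hf = 10 Gyr, bursts switch on at lookback times 8, 5.5, 3, 0.5 Gyr
print([round(sfr_bulge_secular(t, 10.0, 3.0), 3) for t in (8.0, 7.0, 5.5, 5.4)])
```

The printed values, \[1.0, 0.0, 0.435, 0.0\], show a burst switching on at the expected lookback times and off again 0.1 Gyr later.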
Simultaneous Formation Model: We assume for our second model that star formation in the bulge commences at the formation time of disks in our fiducial model. In this model, high angular momentum gas forms the disk while the low angular momentum gas simultaneously forms the bulge. As in the secular evolution model, we suppose that the star formation in the bulge lasts $`\tau _{burst}`$ = 0.1 Gyr so that
$$SFR_{bulge}\propto \{\begin{array}{cc}e^{-(t_{HF}-t)/\tau _{burst}}\hfill & t<t_{HF},\hfill \\ 0\hfill & t\geq t_{HF}.\hfill \end{array}$$
(4)
To obtain distributions of bulge colours for both the simultaneous and the early bulge formation models that match the data, we systematically decrease the metallicity of bulges by 0.2 dex relative to the relationship preferred by Jablonka et al. (1996). As in the disk, we assume evolution in the metallicity of the gas that forms the bulge.
Early Bulge Formation Model: In models where bulges form through the merging of disk galaxies, the formation of the stars found in bulges is expected to precede the formation of stars in the disks which form out of gas which accretes around the spheroid (e.g., Kauffmann & White 1993; Frenk et al. 1996). For simplicity, we commence star formation in the bulge 4 Gyr prior to the formation of disks in our fiducial model and suppose that it lasts $`\tau _{burst}=0.1`$ Gyr as in our other models so that
$$SFR_{bulge}\propto \{\begin{array}{cc}e^{-(t_{HF}+4\,\text{Gyr}-t)/\tau _{burst}}\hfill & t<t_{HF}+4\,\text{Gyr},\hfill \\ 0\hfill & t\geq t_{HF}+4\,\text{Gyr}.\hfill \end{array}$$
(5)
Finally, to these models, we add a simple model for E/S0 galaxies to aid with the interpretation of observed high redshift, high $`B/T`$ systems. We adopt the E/S0 luminosity function given in Pozzetti et al. (1996), but with a $`20\%`$ higher normalization to somewhat better fit the observed evolution in luminosity densities. We somewhat arbitrarily assume that the distribution of formation redshifts for the $`E/S0`$ population is scaled to exactly twice the distribution of formation redshifts obtained for spirals of the same $`b_J`$-band luminosity, so that if the median formation redshift for a spiral of some luminosity is 1, the median formation redshift for an E/S0 of the same luminosity is 2. We take the e-folding time for star formation to be 0.5 Gyr. Since it is likely that the stars in ellipticals were assembled from other galactic fragments through mergers, this scenario is only intended to be representative of when the stars in elliptical galaxies formed, rather than where they formed. We assume that the $`E/S0`$ population has $`B/T`$ ratios distributed between 0.5 and 1, with a scatter intended to represent both the intrinsic uncertainty in the relative local mix of $`E`$ and $`S0`$ galaxies and the realistic scatter in $`B/T`$ values extracted in typical bulge-to-total luminosity decompositions (e.g., Ratnatunga et al. 1998). We further assume that all the stars in our E/S0 population are of solar metallicity.
## 4 Computational Method
We perform the calculations by considering four different morphological types, dividing each type into 3 different luminosity classes (where the width of each class in absolute magnitude is 2), and allowing each of the luminosity classes for a specific type to form at 20 different discrete redshift intervals from $`z=0`$ to $`z=2.5`$, the relative proportion being determined by the distribution of formation times for galaxies of a specific luminosity (as discussed in §3). Then, for each of these $`4\times 3\times 20=240`$ distinct galaxy evolution histories, we determine how the gas, metallicity, star formation rate, and luminosity evolves. Finally, Monte Carlo catalogues are constructed using the quoted selection criteria, the given number densities, and these computed luminosity histories.
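A minimal sketch of this bookkeeping follows; the type labels and the weight, observables, and selection functions are placeholders for the luminosity functions, formation-time distributions, spectral synthesis, and CFRS-style cuts described in the text.

```python
import itertools, random

TYPES = ["type1", "type2", "type3", "type4"]         # 4 morphological types
LUM_CLASSES = [-22.0, -20.0, -18.0]                  # class centers, 2 mag wide
Z_FORM = [2.5 * (i + 0.5) / 20 for i in range(20)]   # 20 formation-z intervals

histories = list(itertools.product(TYPES, LUM_CLASSES, Z_FORM))
assert len(histories) == 240                         # 4 x 3 x 20 histories

def mock_catalogue(n, weight, observables, selection):
    """Draw n galaxies among the evolution histories with probability
    'weight' (number density times formation-time distribution), compute
    their observables (magnitudes, B/T, colours), and apply survey cuts."""
    w = [weight(h) for h in histories]
    drawn = random.choices(histories, weights=w, k=n)
    return [g for g in map(observables, drawn) if selection(g)]
```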
## 5 High Redshift Comparisons
As already mentioned, all our models have been constrained to reproduce the bulge-to-total distribution for the DJ and PB samples, as can be seen in Figure 4. Comparisons with bulge $`B-R`$ colours and differences between bulge and disk $`B-R`$ colours are presented in Figure 5. Both our early bulge formation model and simultaneous bulge formation model produce good fits to the bulge colours and relative bulge-to-total colours at low redshift. Clearly, in the secular evolution model, not only are the bulges of local galaxies too blue relative to the disks, but there is more dispersion in both the bulge colours and bulge-to-disk relative colours than there is in low redshift samples. If necessary, inclusion of a small amount of reddening in the models would give better agreement with the low redshift data. It is also possible that the irregular morphology and/or potential AGN activity might cause these blue-bulged galaxies to be removed from the local samples.
We now examine the predictions of these different scenarios in terms of the higher redshift ($`z1`$) observations. Since the essential difference between the models is the formation time of bulges relative to the fiducial formation time of disks, we shall focus on the observables directly contrasting the bulge and disk properties: in particular, the high-redshift bulge-to-disk ratios and the high-redshift bulge-to-disk colours in our comparisons (see Figures 4-6). We begin by comparing our models to the bulge-to-total ratios of high redshift galaxies in the CFRS sample in Figure 4. We examine both the ground-based sample of Schade et al. (1996) and the HST-selected large disk ($`r>4h_{50}`$ kpc) subsample of Lilly et al. (1998) in three different redshift intervals. To compare our models with the observations, we have generated Monte-Carlo catalogues of galaxies with bulge-to-total ratios in the observed $`F814W`$ band, applying the CFRS selection criteria ($`I_{AB}<22.5`$ and a central $`I_{AB}`$ surface brightness $`<24.5`$) and the size cut ($`h>4`$ kpc) to compare specifically with the Lilly et al. (1998) large disk subsample.
We subdivide the samples into the redshift bins (0.2, 0.5), (0.5, 0.75), and (0.75, 1.00). While the differences between the models are quite small at low $`z`$, interesting differences begin to arise at $`z\sim 1`$. Unfortunately, at $`z\sim 1`$, the observed $`F814W`$ band is approximately probing rest-frame $`B`$ light and hence is quite sensitive to active star formation. Consequently, the ordering of Models I, II, and III in terms of the number of galaxies with large B/T ratios is not the same as the order in which bulges form in these three models. The secular evolution model (Model I), with late bulge formation, has a paucity of large B/T objects relative to the other models. The simultaneous bulge formation model (Model II) has a large number of such galaxies simply because a large number of bulges were forming at this time, while the early bulge formation model (Model III) has a slightly lower value due to the fact that bulges in this model had long been in place within their spiral hosts. Presumably, high resolution infrared images such as will be available with NICMOS should be a more powerful discriminant between these models, since they are more sensitive to total stellar mass than to current star formation. Unfortunately, both the lack of data and the uncertainties in these data ($`\pm 0.2`$ in B/T) (Schade et al. 1996) are too large to permit any strong statements. It does appear, nevertheless, that there are too many large B/T systems observed (Lilly et al. 1998) relative to the models, and therefore there may be a lack of high B/T galaxies in our model of high luminosity, large disks.
We now look at the bulge colours and relative bulge-disk colours of high redshift galaxies. For our first sample (32 galaxies), for which HST images of CFRS-selected galaxies were available, we utilize the colours and bulge-to-total ratios compiled in Table 1 and Figure 5 of Schade et al. (1996). Following Schade et al. (1996) in the use of the best-fitting CWW SED templates for the purposes of k-corrections and colour conversions, we calculate the colours for the bulge-disk components from the total integrated $`(U-V)_{0,AB}`$ colours, the tabulated bulge-to-disk ratios given in the rest-frame $`B`$ band, and the $`(U-V)_{0,AB}`$ colours of the indicated component. For our second sample (27 galaxies), we consider the bulge-to-disk colours compiled by Abraham et al. (1998) from a subsample of the Bouwens, Broadhurst, & Silk (1998a) sample, for which both $`z>0.3`$ and fits to the bulge-to-total ratio were available (Ratnatunga et al. 1998). Note that the bulge (disk) colours compiled by Abraham et al. (1998) are determined from the light inside (outside) a 3-pixel aperture and are not determined from a proper bulge-disk decomposition. Using the best-fit CWW SED templates, we convert the Abraham et al. (1998) colours to their rest-frame values. We plot the data separately for these two samples due to potentially different systematics. Because of the larger uncertainties involved in determining the relative bulge-disk colours for galaxies dominated by a bulge ($`B/T>0.55`$) or disk component ($`B/T<0.1`$), we have excluded these galaxies from our comparisons, due to the potentially large errors in the determination of the disk and bulge colours separately. For both data sets, we again compare with Monte-Carlo catalogues generated using the CFRS selection criteria, due to its close similarity with the Bouwens et al. (1998a) selection criteria ($`I_{F814W,AB}<22.33`$). We present histograms of the bulge colours and relative bulge-to-disk colours in Figure 5, and a scatter plot of the bulge-to-total ratios both versus the bulge colours and versus the relative bulge-to-disk colours in Figure 6. For both Figures 5-6, we subdivide the galaxies into the redshift bins (0.3, 0.5), (0.5, 0.75), and (0.75, 1.0).
As expected, in all redshift bins, bulges are slightly bluer in the late bulge formation models than are the disks (Figure 5). A blue tail may be marginally detectable in the Schade et al. data in the highest redshift bins. Unfortunately, given the extremely limited amount of data and uncertainties therein, little can be said about the comparison of the models in all three redshift bins, except that the range of bulge and relative bulge-to-disk colours found in the data appears to be consistent with that found in the models.
Figure 6 shows that the scatter in the data can readily be reproduced at both low and high redshift for the various models. Clearly, the secular evolution and other bulge formation models separate out in this diagram, late bulge formation models always yielding bluer bulges for a given B/T ratio. Unfortunately, the observational data set is sufficiently small and contains enough uncertainties (an estimated $`\pm 0.1`$ in the B/T ratio and $`\pm 0.3`$ in relative bulge-disk colours) that it is difficult to verify whether there is a paucity of blue bulges at high redshift relative to the predictions of the secular evolution model, though there appear to be several bluer bulges in the redshift interval (0.5, 0.75).
## 6 Summary
We have developed three representative models for bulge formation and evolution. While consistent with currently available data, our models are schematic and are intended to illustrate the observable predictions that will eventually be made when improved data sets are available in the near future. Our models are (i) secular evolution, in which disks form first, (ii) simultaneous formation of bulge and disk, as might be expected in a monolithic model, and (iii) early bulge formation, in which bulges form first. We normalize to two local $`z=0`$ samples which provide template bulge and disk luminosity ratios and colours. We make predictions for these bulge and disk parameters to $`z\sim 1`$ for comparison with observed samples.
Admittedly, our models are still quite crude, assuming among other things that the effects of number evolution on the present population of disks can be ignored to $`z\sim 1`$, as suggested, for example, in Lilly et al. (1998). One recent analysis (Mao, Mo, & White 1998) has, however, argued that observations favor the interpretation that a non-negligible amount of merging has taken place in the disk population between $`z=1`$ and $`z=0`$. Under this interpretation, it is not clear to us how all the present stellar mass in disks could have built up, given the constraints on the cosmic star formation history, if disks were continually destroyed by merging down to low $`z`$.
We have also not considered the environmental dependencies that are sure to be important in the generation of the Hubble sequence. We plan on addressing these shortcomings in future work (Bouwens et al. 1998b) in the context of a semi-analytical hierarchical clustering model where we consider the formation of bulges by both secular and hierarchical evolution.
We acknowledge useful discussions with Roberto Abraham, Marc Balcells, Francoise Combes, Roelof de Jong, David Friedli, Kavan Ratnatunga, and David Schade. This document has also been improved based upon suggestions by the scientific editor Steven Shore and an anonymous referee. RJB is grateful for support from an NSF graduate fellowship and JS from NSF and NASA grants as well as support as from the Chaire Blaise Pascal. LC thanks the Astronomy Department and the CfPA (Berkeley) as well as the IAP (Paris) for their hospitality during her stays in those institutes. LC acknowledges support from the Spanish DGES PB95-0041. The Medium Deep Survey catalog is based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The Medium-Deep Survey was funded by STScI grant GO2684.
# ASTROPHYSICS WITH HESSI
## Abstract
In the summer of the year 2000, a NASA Small Explorer satellite, the High Energy Solar Spectroscopic Imager (HESSI), will be launched. It will consist of nine large, coaxial germanium detectors viewing the Sun through a set of Rotation Modulation Collimators and will accomplish high-resolution imaging and spectroscopy of solar flares in the x-ray and gamma-ray bands. Here we describe some of the astrophysical observations HESSI will also perform in addition to its solar mission.
1) Space Sciences Laboratory, University of California, Berkeley, USA
KEYWORDS: Instrumentation, Nuclear Spectroscopy
1. What is HESSI?

The High Energy Solar Spectroscopic Imager (HESSI) is a NASA Small Explorer mission being built at the University of California at Berkeley (Prof. Robert P. Lin, Principal Investigator), the NASA Goddard Space Flight Center, the Paul Scherrer Institut in Switzerland, and Spectrum Astro, Inc., with participation by a number of other institutions. It is scheduled for launch into low-Earth orbit in July 2000.
HESSI’s primary science goals are imaging spectroscopy (3 keV to several MeV), and high-resolution nuclear spectroscopy of solar flares during the next solar maximum. The instrument consists of 9 large germanium detectors (cooled to 75 K by a mechanical cooler) which cover the full energy range. The detectors sit below a Rotation Modulation Collimator (RMC) system for high resolution imaging capability (2 arcsec at hard x-ray energies). The rotation is provided by spinning the whole spacecraft at about 15 rpm.
Each germanium detector is a closed-end coaxial cylinder with a volume of over 300 cm<sup>3</sup>, electronically segmented into a thin front segment and a thick rear segment. The rear segments view nearly half the sky through side walls of only 4 mm of aluminum, giving them an effective energy range of 20 keV to 15 MeV. The front segments shield the rear segments from solar photons below 100 keV and view the Sun through beryllium windows and a small amount of insulating blankets, giving them a useful energy range down to about 3 keV.
2. HESSI Astrophysics

Although HESSI is primarily a solar mission, the HESSI team is committed to making sure its capabilities for extra-solar astrophysical observations are fully exploited. All HESSI data and analysis software will be public, with no proprietary period.
The astrophysics program the HESSI team is planning to pursue combines aspects of what has been done with the CGRO/BATSE and Wind/TGRS instruments, as well as techniques unique to HESSI. In addition to the topics discussed below, high-resolution spectroscopy of gamma-ray bursts and soft gamma repeater (SGR) events will be accomplished without the need for a burst trigger: every photon is always telemetered with its arrival time.
3. RMC Imaging of the Crab Nebula
The Crab nebula and pulsar (ecliptic latitude −1.3°) will drift into HESSI’s imaged field of view once per year, so we will automatically produce images from about 3 keV to 100 keV. Only one image above a few keV has ever been produced (Pelling et al. 1987), with a resolution of about 15 arcsec, as compared to HESSI’s 2 arcsec. The ROSAT soft x-ray image of the nebula (Hester et al. 1995) shows features at this scale, as do the radio wisps, so the hard x-ray image should be very informative. Our simulations indicate the statistics will be good enough to see the relevant details. In addition, the radio wisps evolve on the scale of a year, so we may be able to observe annual changes in the hard x-ray image.
4. Galactic Gamma-Ray Lines
By using the Earth as an occulter, we can produce background-subtracted spectra of the Galactic Center region. In this analysis, the whole HESSI array will be treated as a single detector. Spectra will be summed over a time on the order of a minute (several revolutions), and background will be constructed from data taken during other orbits when the Galactic Center was blocked by the Earth. A similar technique has been used to measure the Galactic 511 keV line with BATSE to the highest precision of any experiment. (Smith et al. 1998; see also the other poster by D. M. Smith et al. at this meeting).
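The on/off accumulation can be sketched as follows; this illustrative version treats the array as a single detector and ignores the orbit-matching needed for a stable background model, and all names and rates are invented for the example.

```python
import numpy as np

def occultation_spectrum(spectra, gc_visible, exposure):
    """Background-subtracted Galactic Center spectrum: 'spectra' holds
    (n_intervals, n_channels) summed counts, 'gc_visible' flags intervals
    when the GC is above the Earth's limb, 'exposure' is in seconds."""
    src = spectra[gc_visible].sum(axis=0) / exposure[gc_visible].sum()
    bkg = spectra[~gc_visible].sum(axis=0) / exposure[~gc_visible].sum()
    return src - bkg                    # counts/s per channel

# Toy usage: 100 one-minute intervals, 512 channels, 0.5 c/s/channel GC excess
rng = np.random.default_rng(0)
vis = rng.random(100) < 0.5
rate = 5.0 + 0.5 * vis[:, None]         # 5 c/s/channel instrumental background
spec = rng.poisson(rate * 60.0, size=(100, 512))
print(occultation_spectrum(spec, vis, np.full(100, 60.0)).mean())  # ~0.5
```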
Figure 1 shows HESSI’s 3$`\sigma `$ sensitivity to narrow lines in one year of observations. The sensitivity is not as good as the INTEGRAL SPI, since HESSI is unshielded, but HESSI also observes a much larger portion of the sky at once, and will therefore receive a larger (albeit unimaged) signal in the diffuse Galactic lines. This will make HESSI a good complement to INTEGRAL; subtracting the fluxes in INTEGRAL line maps from HESSI fluxes will allow us to find large scale, low-surface-brightness components in the 511 keV and 1809 keV lines.
Important results awaiting confirmation include:
* The small (or zero) amount of Galactic 511 keV flux which is in the relatively broad, 6.4 keV FWHM component expected from annihilation in flight after charge exchange with neutral hydrogen. This result, from Harris et al. 1998, implies that Galactic positrons are mostly magnetically excluded from cold cloud cores.
* The large width (5.4 (+1.4,-1.3) keV FWHM) measured for the integrated Galactic 1809 keV line by the GRIS balloon instrument (Naya et al. 1996). This unexpectedly high width means that <sup>26</sup>Al ejected in supernovae maintains high velocities long after it would be expected to slow down in the ISM.
* The low upper limit on <sup>60</sup>Fe, also from GRIS, constraining models of supernova nucleosynthesis when compared to <sup>26</sup>Al (and assuming most of the <sup>26</sup>Al is produced in supernovae).
5. Pulsar Period Folding
By folding the rear-segment data on the period of known accreting pulsars, we will produce some of the best high-resolution spectra of the pulsed emission from these objects. Figure 2 shows our expected spectrum from the pulsed emission of Her X-1 in one month of observation. The upper curve was generated under the assumption that the cyclotron absorption line is of the same width as the resolution of the scintillators which have generally observed it. The lower curve, divided by 10 for clarity, shows the spectrum we would observe if the absorption line were narrower than the resolution of HESSI’s germanium detectors.
In addition, pulsar period folding will allow us to follow the period evolution of the sources, a project which has been extremely fruitful for BATSE (Bildsten et al. 1997). Although HESSI will have less than 10% of BATSE’s effective area for these observations, there are still a number of known sources which will be bright enough to follow. Furthermore, since every photon will be recovered with a time tag, HESSI will not have the limitation of BATSE’s normal operating mode, which samples rates every 2 seconds. We will therefore be able to do a long-term survey of the undersampled range of periods $`<`$ 4 sec.
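Since every photon carries a time tag, period folding amounts to histogramming arrival phases; below is a minimal sketch (constant period, no barycentric or orbital corrections) with invented numbers.

```python
import numpy as np

def fold_profile(times, period, nbins=32):
    """Epoch-fold photon arrival times (s) on a constant period (s)."""
    phase = (times % period) / period
    counts, edges = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return counts, 0.5 * (edges[:-1] + edges[1:])   # counts and bin centers

# Toy usage: a 1.24 s pulsar (Her X-1-like) with a 30% pulsed modulation
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1000.0, 20000))
keep = rng.random(t.size) < 0.7 + 0.3 * np.sin(2 * np.pi * t / 1.24)
counts, centers = fold_profile(t[keep], period=1.24)
print(centers[counts.argmax()])                     # phase of the pulse peak
```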
6. Spin Period Folding
By folding the rear-segment data on the spin period of the spacecraft, we will observe bright Galactic point sources by analyzing the repeated occultation of one detector by another with respect to the source. BATSE’s success with occultation by the Earth is well known (Harmon et al. 1992; Zhang et al. 1993). Although HESSI is much smaller than BATSE, we have the advantage of gaining many more occultations per orbit: about 750 detector/detector occultations due to spin per orbit in addition to the 2 Earth occultations. We expect to monitor transients and persistent sources above a few hundred mCrab.
Although the detectors are not completely opaque at 511 keV, we may be able to obtain some spatial information on the 511 keV line by detector/detector occultation, in a way similar to the analysis done by Harris et al. (1998) for Wind/TGRS, but taking advantage of HESSI’s extra order of magnitude of germanium volume.
REFERENCES
Bildsten, L. et al. 1997, ApJS, 113, 367
Harmon, B. A. et al. 1992, Proc. Compton Observatory Workshop, p. 69
Harris, M. J. et al. 1998, ApJ, 501, 55
Hester, J. J. et al. 1995, ApJ, 448, 240
Naya, J. E. et al. 1996, Nature, 384, 44
Pelling, R. M. et al. 1987, ApJ 319, 416
Smith, D. M. et al. 1998, Proc. of the 4th Compton Symposium, AIP Conf. Proc. #410, p. 1012
Zhang, S. N. et al. 1993, Nature 366, 245
## 1 Introduction
In recent years, physics has been successful in providing several models to describe collective behaviors in both human societies and social organisations. In particular, new light has been shed on democratic voting biases, decision making processes, the outbreak of cooperation, social impact, and power genesis in groups.
However, such an approach to social behavior is still at an early stage. More work is needed, as well as more connection with social data. At this stage, these are really the first ingredients of what could become, in the near future, a new field of research in itself. At least, that is our challenge.
It is also worth stating a few words of caution, since dealing with social reality can often interfere with that reality itself via biases in actual social representations. One contribution of this “sociophysics” would indeed be to move social studies away from political or philosophical beliefs, placing them in a more model-oriented frame, free of any religion-like attitude.
In this paper we address the question of coalition forming in the framework of country military alliances, using some basic concepts from the physics of spin-glass systems. Along this line, an earlier attempt from political science used the physical concept of minimum energy. However, this work was misleading since it was based on a confusion between two physically different spin-glass models, those of Mattis and of Edwards-Anderson. The model presented here indeed involves the interplay between these two models.
The remainder of the paper is organised as follows. The second part contains the presentation of our model. Several features in the dynamics of bimodal coalitions are obtained. Within this framework, the one-country viewpoint is studied in Section 3, showing the limits within which local cooperation can turn to conflict, or the opposite, while still preserving membership of the same coalition. The setting up of worldwide alliances is discussed in Section 4. The cold-war situation is then analysed in Section 5. A new explanation is given in Section 6 for eastern Europe's instabilities following the Warsaw Pact dissolution, as well as for western Europe's stability. Some hints are also obtained within the model on how to stabilize these eastern European instabilities, given the still-existing NATO organisation. The model is then applied in Section 7 to describe the Chinese situation. The concept of “risky actor” is briefly introduced in Section 8. The last section contains some concluding remarks.
## 2 Presentation of the model
We now address the problem of alignment between a group of $`N`$ countries. From historical, cultural and economic frames, there exist bilateral propensities $`J_{i,j}`$ between any pair of countries $`i`$ and $`j`$ towards either cooperation $`(J_{i,j}>0)`$, conflict $`(J_{i,j}<0)`$ or ignorance $`(J_{i,j}=0)`$. Each propensity $`J_{i,j}`$ depends solely on the pair $`(i,j)`$ itself. Propensities $`J_{i,j}`$ are somehow local since they don't account for any global organization or network. Their intensities vary from pair to pair to account for the various military and economic powers of the two actors involved. They are assumed to be symmetric, i.e., $`J_{ij}=J_{ji}`$.
From the well known saying “the enemy of an enemy is a friend”, we postulate the existence of only two competing coalitions, like for instance the western and eastern blocks during the so-called cold war. They are denoted respectively by A and B.
Each actor then has the choice to be in either one of the two coalitions. A variable $`\eta _i`$ associated with each actor, where the index $`i`$ runs from 1 to $`N`$, states its actual belonging. It is $`\eta _i=+1`$ if actor $`i`$ belongs to alliance A, while $`\eta _i=-1`$ in case it is part of alliance B. From symmetry, all A-members can turn to coalition B with a simultaneous flip of all B-members to coalition A.
Given a pair of actors $`(i,j)`$, their respective alignment is readily expressed through the product $`\eta _i\eta _j`$. The product is $`+1`$ when $`i`$ and $`j`$ belong to the same coalition and $`-1`$ otherwise. The “cost” of exchange between a pair of countries is then measured by the quantity $`J_{ij}\eta _i\eta _j`$.
Here factorisation over $`i`$ and $`j`$ is not possible. Indeed, we are dealing with competing given bonds or links. This is equivalent to random-bond spin glasses, as opposed to Mattis random-site spin glasses.
Given a configuration $`X`$ of actors, for each nation $`i`$ we can measure the overall degree of conflict and cooperation with all the other $`N-1`$ countries, with the quantity,
$$E_i=\sum_{j=1}^{n}J_{ij}\eta _j,$$
(1)
where the summation is taken over all other countries, including $`i`$ itself with $`J_{ii}\equiv 0`$. The product $`\eta _iE_i`$ then evaluates the local “cost” associated with country $`i`$'s choice. It is positive if $`i`$ goes along the tendency produced by $`E_i`$ and negative otherwise. For a given configuration $`X`$, all the country local “costs” sum up to a total “cost”,
$$E(X)=\frac{1}{2}\sum_{i}\eta _iE_i,$$
(2)
where the $`1/2`$ accounts for the double counting of pairs. This “cost” indeed measures the level of satisfaction of each country's alliance choice. It can be recast as,
$$E(X)=\frac{1}{2}\sum_{i\neq j}J_{ij}\eta _i\eta _j,$$
(3)
where the sum runs over the $`n(n-1)`$ ordered pairs $`(i,j)`$. Eq. (3) is indeed the Hamiltonian of an Ising random-bond magnetic system.
### 2.1 THE CHOSEN DYNAMICS
At this stage we postulate that the actual configuration is the one which minimizes the cost of each country's choice. In order to favor two cooperating countries $`(J_{i,j}>0)`$ being in the same alliance, we put a minus sign in front of the expression of Eq. (3), to get,
$$H=-\frac{1}{2}\sum_{i\neq j}J_{ij}\eta _i\eta _j.$$
(4)
There exist by symmetry $`2^n/2=2^{n-1}`$ distinct sets of alliances, since each country has 2 choices of coalition and configurations related by a global exchange of A and B are identical.
Starting from any initial configuration, the dynamics of the system is implemented by single actor coalition flips. An actor turns to the competing coalition only if the flip decreases its local cost. The system has reached its stable state once no more flip occurs. Given $`\{J_{ij}\}`$, the $`\{\eta _i\}`$ are thus obtained minimizing Eq. (4).
Since here the system's stable configuration minimizes the “energy”, we are, from the physical viewpoint, at temperature $`T=0`$. Otherwise, when $`T\neq 0`$, it is the free energy which has to be minimized. In practice, for a finite system, the theory can tell which coalitions are possible and how many of them exist. But when several coalitions have the same energy, it is not possible to predict which one will be the actual one.
### 2.2 Frustration effect
The physical concept of frustration is embodied in the model. For instance, in the case of three conflicting nations such as Israel, Syria and Iraq, any possible alliance configuration always leaves someone unsatisfied.
To define this feature more precisely, let us attach respectively the labels 1, 2, 3 to each one of the three countries. In case we have equal and negative exchange interactions $`J_{12}=J_{13}=J_{23}=-J`$ with $`J>0`$, the associated minimum of the energy (Eq. (4)) is equal to $`-J`$. However, this value of the minimum is realized for several possible and equivalent coalitions. Namely, for countries (1, 2, 3) we can have respectively the alignments (A, B, A), (B, A, A), (A, A, B), (B, A, B), (A, B, B), and (B, B, A). The first 3 are identical to the last 3 by symmetry, since here what matters is which countries are together within the same coalition. The peculiar property is that the system never gets stable in just one configuration, since it costs no energy to switch from one to another. This case is an archetype of frustration. It means in particular the existence of several ground states with exactly the same energy.
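The single-flip dynamics of Section 2.1 and the degeneracy just described are easy to sketch numerically; the code below is an illustrative implementation of the minimization of Eq. (4), applied to the three-country example.

```python
import numpy as np

def minimize_alliances(J, eta, max_sweeps=100):
    """T=0 single-flip dynamics for H = -1/2 sum_{i!=j} J_ij eta_i eta_j:
    actor i flips only if that strictly lowers its local cost -eta_i*h_i."""
    for _ in range(max_sweeps):
        flipped = False
        for i in range(len(eta)):
            h = J[i] @ eta              # local field h_i (with J_ii = 0)
            if eta[i] * h < 0:          # flipping strictly lowers the energy
                eta[i] = -eta[i]
                flipped = True
        if not flipped:
            break                       # stable set of alliances reached
    return eta

# Three mutually hostile countries: J_ij = -J for all pairs (J = 1)
J = -(np.ones((3, 3)) - np.eye(3))
print(minimize_alliances(J, np.array([1, 1, 1])))   # -> [-1  1  1]
```

Other initial configurations end in other two-against-one splits, all with the same energy $`-J`$, which is precisely the degeneracy discussed above.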
Otherwise, for unequal interactions the system has one stable minimum and no frustration occurs in the physical sense defined above. The fact that some interactions are not satisfied does not automatically imply frustration in the above sense of multiple equivalent sets of alliances.
## 3 A one country viewpoint
We now make this point more quantitative within the present formalism. Consider a given site $`i`$. Interactions with all other sites can be represented by a field,
$$h_i=\sum_{j=1}^{n}J_{ij}\eta _j$$
(5)
resulting in an energy contribution
$$E_i=-\eta _ih_i,$$
(6)
to the Hamiltonian $`H=\frac{1}{2}\sum_{i=1}^nE_i`$. Eq. (6) is minimum for $`\eta _i`$ and $`h_i`$ having the same sign. For a given $`h_i`$ there always exists a well defined coalition, except for $`h_i=0`$. In this case site $`i`$ is “neutral”, since both coalitions are then identical with respect to its local “energy”, which stays equal to zero. A neutral site will flip with probability $`\frac{1}{2}`$.
### 3.1 SHIFTING COALITION
The couplings $`\{J_{ij}\}`$ are given. Let us then assume there exists only one minimum. Once the system reaches its stable equilibrium it gets trapped and the energy is minimal. At the minimum, the field $`h_i`$ can be calculated for each site $`i`$, since $`\{J_{ij}\}`$ are known as well as $`\{\eta _i\}`$.
First consider all sites which have the value -1. The existence of a unique non-degenerate minimum makes associated fields also negative. We then take one of these sites, e.g. $`k`$, and shift its value from -1 to +1 by simultaneously changing the sign of all its interactions $`\{J_{kl}\}`$ where $`l`$ runs from 1 to $`n`$ ($`J_{kk}=0`$). This transformation gives,
$$\eta _k=+1\quad \mathrm{and}\quad h_k>0,$$
(7)
instead of,
$$\eta _k=-1\quad \mathrm{and}\quad h_k<0,$$
(8)
which means that actor $`k`$ has shifted from one coalition into the other one.
It is worth emphasizing that such a systematic shift of the propensities of actor $`k`$ has no effect on the other actors. Taking for instance actor $`l`$, its unique interaction with actor $`k`$ is through $`J_{kl}`$, which changed sign in the transformation. However, as actor $`k`$ has also turned to the other coalition, the associated contribution $`J_{kl}\eta _k`$ to the field $`h_l`$ of actor $`l`$ is unchanged.
The shift process is then repeated for each member of actor $`k`$'s former coalition. Once all shifts are completed, there exists only one coalition: everyone is cooperating with all the others. The value of the energy minimum is unchanged in the process.
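This invariance can be checked directly: flipping $`\eta _k`$ together with the signs of row and column $`k`$ of $`\{J_{ij}\}`$ leaves every term of Eq. (4) unchanged, as in the following illustrative snippet.

```python
import numpy as np

def energy(J, eta):
    return -0.5 * eta @ J @ eta          # Eq. (4), with J_ii = 0

rng = np.random.default_rng(2)
n = 6
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
eta = rng.choice([-1, 1], size=n)

k = 3                                    # shift actor k to the other coalition
J2, eta2 = J.copy(), eta.copy()
J2[k, :] *= -1; J2[:, k] *= -1           # flip all its propensities J_kl
eta2[k] *= -1
print(np.isclose(energy(J, eta), energy(J2, eta2)))  # True
```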
The above transformation demonstrates that the $`\{J_{ij}\}`$ determine the stable configuration. It shows in particular that, given any site configuration, there always exists a set of $`\{J_{ij}\}`$ which makes that configuration the unique minimum of the associated energy. At this stage, what matters are the propensity values.
The above gauge transformation shows that what matters is the sign of the fields $`\{h_i\}`$ and not any given $`J_{ij}`$ value. A given set of field signs, positive and negative, may be realized through an extremely large spectrum of $`\{J_{ij}\}`$.
This fact opens a way to explore possible deviations from a national policy. For instance, given the state of cooperation and conflict of a group of actors, it is possible to find the limits within which local pair propensities can be modified without inducing a coalition shift. A country can turn from cooperation to conflict, or the opposite, without changing its alliance, as long as the associated field sign is unchanged. It means that a given country could become hostile to some former allies while still staying in the same overall coalition. One illustration is the German recognition of Croatia against the will of other European partners like France and England, without putting at stake its membership of the European community. The Falklands war between England and Argentina is another example, since both countries have strong American partnerships.
## 4 Setting up coalitions
From the above analysis, countries were found to belong to some alliance without any a priori macro-analysis at the regional or world level. Each country adjusts to its best interest with respect to the countries with which it interacts. However, the setting up of global coalitions, which are aimed at spreading and organizing economic and military exchanges, introduces an additional ingredient into each country's choice.
Still staying within the two-coalition scheme, each country has an a priori natural choice. To account for this fact we introduce for each actor $`i`$ a variable $`ϵ_i`$. It is $`ϵ_i=+1`$ if the actor would like to be in $`A`$, $`ϵ_i=-1`$ in $`B`$, and $`ϵ_i=0`$ for no a priori preference. Such natural belonging is induced by cultural and political history.
Moreover, we measure the exchanges produced by these coalitions through a set of additional pairwise propensities $`\{C_{i,j}\}`$. They are always positive since sharing resources, information and weapons is basically profitable. Nevertheless, a pair $`(i,j)`$ propensity to cooperation, conflict or ignorance is $`A_{i,j}\equiv ϵ_iϵ_jC_{i,j}`$, which can be positive, negative or zero. Now we do have a Mattis random-site spin glass.
Including both local and macro exchanges results in an overall pair propensity
$$O_{i,j}\equiv J_{i,j}+ϵ_iϵ_jC_{i,j},$$
(9)
between two countries $`i`$ and $`j`$, where always $`C_{i,j}>0`$.
An additional variable $`\beta _i=\pm 1`$ is introduced to account for the benefit from the economic and military pressure attached to a given alignment. It is again $`\beta _i=+1`$ in favor of $`A`$, $`\beta _i=-1`$ for $`B`$ and $`\beta _i=0`$ for no belonging. The amplitude of this economic and military interest is measured by a local positive field $`b_i`$, which also accounts for the country's size and importance. At this stage, the sets $`\{ϵ_i\}`$ and $`\{\beta _i\}`$ are independent.
Actual actor choices to cooperate or to conflict result from the given set of the above quantities. The associated total cost is,
$$H=-\frac{1}{2}\sum_{i\neq j}\{J_{i,j}+ϵ_iϵ_jC_{ij}\}\eta _i\eta _j-\sum_{i=1}^{n}\beta _ib_i\eta _i.$$
(10)
## 5 Cold war scenario
The cold-war scenario means that the two existing world-level coalitions generate much stronger couplings than the purely bilateral ones, i.e., $`|J_{i,j}|<C_{i,j}`$, since belonging to a world-level coalition produces more advantages than purely bilateral relationships. In other words, local propensities were deactivated, being overwhelmed by the two-block trend. The overall system was very stable. We can thus take $`J_{i,j}=0`$. Moreover, each actor must belong to a coalition, i.e., $`ϵ_i\neq 0`$ and $`\beta _i\neq 0`$. In this situation, local propensities to cooperate or to conflict between two interacting countries result from their respective macro-level coalition belongings. The cold-war energy is,
$$H_{CW}=-\frac{1}{2}\sum_{i\neq j}ϵ_iϵ_jC_{ij}\eta _i\eta _j-\sum_{i=1}^{n}\beta _ib_i\eta _i.$$
(11)
### 5.1 COHERENT TENDENCIES
We consider first the coherent tendency case, in which cultural and economical trends go along the same coalition, i.e., $`\beta _i=ϵ_i`$. Then from Eq. (11) the minimum of $`H_{CW}`$ is unique, with all country propensities satisfied. Each country chooses its coalition according to its natural belonging, i.e., $`\eta _i=ϵ_i`$. This result is readily proven via the variable change $`\tau _i\equiv ϵ_i\eta _i`$, which turns the energy into,
$$H_{CW1}=-\frac{1}{2}\sum_{i\neq j}C_{ij}\tau _i\tau _j-\sum_{i=1}^{n}b_i\tau _i,$$
(12)
where the $`C_{i,j}>0`$ are positive constants. Eq. (12) is a ferromagnetic Ising Hamiltonian in positive symmetry-breaking fields $`b_i`$. Indeed, it has one unique minimum, with all $`\tau _i=+1`$.
The remarkable result here is that the existence of two a priori world-level coalitions is identical to the case of a unique coalition with every actor in it. It sheds light on the stability of the cold-war situation, where each actor satisfies its proper relationships. Differences and conflicts appear to be part of an overall cooperation within this scenario. Both dynamics are exactly the same, since what matters is the existence of a well defined stable configuration. There exists, however, a difference which is not relevant at this stage of the model since we assumed $`J_{i,j}=0`$: in reality $`J_{i,j}\neq 0`$, making the existence of two coalitions produce a lower “energy” than a unique coalition, since more $`J_{i,j}`$ can then be satisfied.
It is worth noticing that the field terms $b_iϵ_i\eta _i$ account for the difference in the energy cost of breaking a pair proper relationship for, respectively, a large and a small country. Consider for instance two countries $i$ and $j$ with $b_i=2b_j=2b_0$. The associated pair energy is
$$H_{ij}\equiv -C_{ij}ϵ_i\eta _iϵ_j\eta _j-2b_0ϵ_i\eta _i-b_0ϵ_j\eta _j.$$
(13)
The conditions $\eta _i=ϵ_i$ and $\eta _j=ϵ_j$ give the minimum energy,
$$H_{ij}^m=-C_{ij}-2b_0-b_0.$$
(14)
From Eq. (14) it is easily seen that if $j$ breaks proper alignment, shifting to $\eta _j=-ϵ_j$, the cost in energy is $2C_{ij}+2b_0$. In parallel, when $i$ shifts to $\eta _i=-ϵ_i$ the cost is higher, at $2C_{ij}+4b_0$. Therefore the cost in energy is lower for a breaking of proper alignment by the small country ($b_j=b_0$) than by the large country ($b_i=2b_0$). In the real world, it is clearly not the same for the US to stand against Argentina as for Argentina to stand against the US.
### 5.2 INCOHERENT TENDENCIES
We now consider the incoherent tendency case, in which cultural and economic trends may go along opposite coalitions, i.e., $\beta _i\neq ϵ_i$ for some actors. Using the above variable change $\tau _i\equiv ϵ_i\eta _i$, the Hamiltonian becomes,
$$H_{CW2}=-\frac{1}{2}\sum _{i>j}^{n}C_{ij}\tau _i\tau _j-\sum _{i}^{n}\delta _ib_i\tau _i,$$
(15)
where $\delta _i\equiv \beta _iϵ_i$ is given and equal to $\pm 1$. $H_{CW2}$ is formally identical to a ferromagnetic Ising Hamiltonian in random fields $\pm b_i$, although here the fields are not actually random.
The local field term $\delta _ib_i\tau _i$ modifies the country field $h_i$ of Eq. (5) to $h_i+\delta _ib_i$, which can now happen to be zero. This change is qualitative, since there now exists the possibility of “neutrality”, i.e., a zero local effective field coupled to the individual choice. Switzerland's attitude during World War II may result from such a situation. Moreover, countries which have opposite cultural and economic trends may now follow their economic interest against their cultural interest, or vice versa. Two qualitatively different situations may occur.
* Unbalanced economic power: in this case we have $\sum _i^n\delta _ib_i\neq 0$.
The symmetry is now broken in favor of one of the coalitions, but there still exists only one minimum.
* Balanced economic power: in this case we have $\sum _i^n\delta _ib_i=0$.
Symmetry is preserved, and $H_{CW2}$ is identical to the ferromagnetic Ising Hamiltonian in random fields, which has one unique minimum.
## 6 Unique world leader
We now consider the current world situation, in which the eastern block has disappeared. However, it is worth emphasizing that within this model the western block is still as active as before. Within our notations, denoting the western alignment by $A$, we still have $ϵ_i=+1$ for countries which had $ϵ_i=+1$. In contrast, countries which had $ϵ_i=-1$ have now turned to either $ϵ_i=+1$ or $ϵ_i=0$.
Therefore the above $J_{i,j}=0$ assumption, based on the inequality $|J_{i,j}|<|ϵ_iϵ_j|C_{i,j}$, no longer holds for every pair of countries. In particular the pair propensity $O_{i,j}$ reduces to $J_{i,j}$ in the respective cases $ϵ_i=0$, $ϵ_j=0$ and $ϵ_i=ϵ_j=0$.
A new distribution of actors results from the collapse of one block. On the one hand, $A$-coalition countries still determine their actual choices according to the $C_{i,j}$. On the other hand, former $B$-coalition countries are now found to determine their choices according to the competing links $J_{i,j}$, which do not automatically agree with the former $C_{i,j}$. This subset of countries has turned from a Mattis random-site spin glass without frustration into a random-bond spin glass with frustration. In other words, the former $B$-coalition subset has jumped from one stable minimum to a highly degenerate, unstable landscape with many local minima. This property could be related to the fragmentation process in which ethnic minorities and states rapidly shift allegiances back and forth, although they were part of a stable structure just a few years ago.
While the $B$-coalition world organization has disappeared, the $A$-coalition world organization did not change and is still active. This keeps $|J_{i,j}|<C_{i,j}$ valid for $A$ countries with $ϵ_iϵ_j=+1$. The associated countries thus maintain a stable relationship and avoid a fragmentation process. This result supports a posteriori the arguments against the dissolution of NATO once the Warsaw Pact was dissolved.
The above situation could also shed some light on the European debate. It would mean that European stability results in particular from the existence of European structures with economic substance. These structures produce associated propensities $C_{i,j}$ that are much stronger than the local competing propensities $J_{i,j}$, which are still there. In other words, European stability would indeed result from $C_{i,j}>|J_{i,j}|$ and not from either all $J_{i,j}>0$ or all $J_{i,j}=0$. A possible setback of the European construction ($ϵ_iϵ_jC_{i,j}=0$) would then automatically yield a fragmentation process, with the activation of ancestral bilateral oppositions.
In this model, once a unique economic as well as military world-level organisation exists, it becomes each country's interest to be part of it. We thus have $\beta _i=+1$ for each actor. There may be some exceptions, like Cuba staying almost alone in the former $B$ alignment, but such cases will not be considered here. The associated Hamiltonian for the subset of actors with $ϵ_i=0$ is,
$$H_{UL}=-\frac{1}{2}\sum _{i>j}^{n}J_{ij}\eta _i\eta _j-\sum _{i}^{n}b_i\eta _i,$$
(16)
which is formally equivalent to a random-bond Hamiltonian in a field. At this stage $\eta _i=+1$ means being part of the $A$ coalition, which is an international structure. Conversely, $\eta _i=-1$ means being in a non-existing $B$ coalition, which really means being outside of $A$.
For fields that are small with respect to the interactions, the system may still exhibit physical-like frustration, depending on the various $J_{i,j}$. In this case the system has many minima with the same energy. Perpetual instabilities thus occur, in a desperate search for an impossible stability. Actors will flip continuously from one local alliance to the other. The dynamics we are referring to is an individual flip each time it decreases the energy; we also allow a flip with probability $\frac{1}{2}$ when the local energy is unchanged.
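This flip dynamics is simple to simulate. The sketch below is our illustration: random $\{-1,0,+1\}$ couplings stand in for the competing bilateral propensities $J_{i,j}$ of the former $B$ subset, an actor is flipped whenever that lowers its local energy, and with probability $\frac{1}{2}$ when the energy is unchanged. The greedy part of the dynamics stalls in one of the many local minima, while actors left with a zero local field keep flipping forever, a toy version of the perpetual instability described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sweep(J, b, eta):
    """One pass of the T=0 dynamics: flip eta_i if that lowers the local
    energy -eta_i h_i, and with probability 1/2 when it is unchanged."""
    flips = 0
    for i in rng.permutation(len(eta)):
        h = J[i] @ eta + b[i]              # local field (J has zero diagonal)
        dE = 2 * eta[i] * h                # energy change under eta_i -> -eta_i
        if dE < 0 or (dE == 0 and rng.random() < 0.5):
            eta[i] = -eta[i]
            flips += 1
    return flips

n = 30
J = rng.choice([-1, 0, 1], size=(n, n))    # conflict / ignorance / cooperation
J = np.triu(J, 1)
J = J + J.T                                # symmetric, zero diagonal
b = np.zeros(n)                            # no external field
eta = rng.choice([-1, 1], size=n)
print([sweep(J, b, eta) for _ in range(10)])   # flip counts per sweep
```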
It is worth pointing out that only strong local fields may lift the fragmentation, by putting every actor in the $A$ coalition. This can be achieved through economic help, as for instance in Ukraine. Another way is military enforcement by $A$, as for instance in former Yugoslavia.
Our results point out that the current debate over integrating the former eastern countries within NATO is indeed relevant to opposing the current fragmentation processes. Moreover, they indicate that such an integration would suppress the present instabilities by lifting frustration.
## 7 The case of China
China is a huge country built up from several very large states, whose typical sizes are of the order of, or much larger than, those of most other countries in the world. It is therefore interesting to analyse China's stability within our model, since it combines features of both the cold war and the unique world leader scenarios.
There exist $n$ states which are all part of a unique coalition, the Chinese central state. Then all $ϵ_i=+1$ but $\beta _i=\pm 1$, since some states keep an economic and military interest in the “union” $(\beta _i=+1)$ while advanced, capitalist-oriented rich states contribute more than their share to the “union” $(\beta _i=-1)$. The associated Hamiltonian is,
$$H=-\frac{1}{2}\sum _{i>j}^{n}\left\{C_{i,j}+G_{ij}\right\}\eta _i\eta _j-\sum _{i}^{n}\beta _ib_i\eta _i,$$
(17)
where $C_{i,j}>0$ and $G_{ij}$ is positive or negative depending on each pair of states $(i,j)$.
At this point China is one unified country, which means in particular that $C_{i,j}>|G_{ij}|$ for all pairs of states with negative $G_{ij}$. Therefore $\eta _i=+1$ for each state. Moreover it also implies $b_i<q_iC_{i,j}$, where $q_i$ is the number of states that state $i$ interacts with. Within this model, three possible scenarios can be outlined with respect to China's stability.
1. China's unity is preserved.
Rich states will continue their present economic growth, with the central power turning into a capitalist-oriented, federation-like structure. This means turning all $ϵ_i$ to $-1$, with then $\eta _i=ϵ_i$. In parallel, additional development of the poor states is required in order to maintain the condition $C_{i,j}>|G_{ij}|$ where some $G_{ij}$ are negative.
2. Some rich states break unity.
The central power stays unchanged, with the same political and economic orientation imposing heavier limitations on the development of the rich states. At some point the condition $b_i>q_iC_{i,j}$ may be reached for these states. These very states will then obtain a lower “energy” by breaking away from Chinese unity. They will shift to $\eta _i=-1$, against the rest of China, which keeps $\eta _j=+1$.
3. China's unity is lost through a fragmentation phenomenon.
In this case, the opposition among the various states becomes stronger than the central organisational cooperation, with now $C_{i,j}<|G_{ij}|$ for some negative $G_{ij}$. The situation would become spin-glass-like, and China would then undergo a fragmentation process. Former China would become a highly unstable part of the world.
## 8 The risky actor driven dynamics
In principle actors are expected to follow their proper relationships, i.e., to minimize their local “energy”. In other words, actors follow normal and usual patterns of decision. But it is well known that in real life these expectations are sometimes violated. New situations are then created, with reversals of ongoing policies.
To account for such situations we introduce the risky actor. It is an actor who goes against his well-defined interest. It differs from the frustrated actor, who does not have a well-defined interest. Up to now everything was done at “$T=0$”. A risky actor, however, chooses the coalition associated with $\eta _i=-1$ although its local field $h_i$ is positive. Therefore the existence of risky actors requires a $T\neq 0$ situation. The case of Romania, which maintained its own independent foreign policy within the former Warsaw Pact, may be an illustration of risky-actor behavior. Greece and Turkey in the Cyprus conflict may be another example.
Once $T\neq 0$, it is no longer the energy which has to be minimized but the free energy,
$$F=UTS,$$
(18)
where $U$ is the internal energy, now different from the Hamiltonian and equal to its thermal average, and $S$ is the entropy. Minimizing the free energy means that the stability of a group of countries depends on the respective sizes of the coalitions, but not on which actors are actually in them. At a fixed “temperature” we can thus expect simultaneous shifts of alliances by several countries, as long as the size of each coalition is unchanged, without any modification in the relative strengths. Egypt quitting the Soviet camp in the seventies and Afghanistan joining it may illustrate such non-destabilizing shifts.
Within the coalition frame, temperature could be viewed as a way to account for some risky trend. It is not possible to know which particular actor will take a chance, but it is reasonable to assume the existence of some number of risky actors. Temperature would thus be a way to account for a global level of risk-taking.
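A concrete way to implement this, our illustrative choice rather than a prescription of the model, is a standard Metropolis rule on top of the $T=0$ flip dynamics of Sec. 6: an energy-raising flip, the signature of a risky actor, is accepted with Boltzmann probability $e^{-\mathrm{\Delta }E/T}$, so a single parameter $T$ sets the global level of risk-taking without singling out any particular actor.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(J, b, eta, T):
    """One sweep at 'temperature' T; J is symmetric with zero diagonal.
    Risky (energy-raising) flips occur with probability exp(-dE/T)."""
    for i in rng.permutation(len(eta)):
        h = J[i] @ eta + b[i]
        dE = 2.0 * eta[i] * h
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            eta[i] = -eta[i]
    return eta
```

As $T\to 0$ this reduces to the greedy dynamics above, while increasing $T$ raises the frequency of interest-violating flips.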
Along the lines of ideas developed elsewhere, we can assume that a certain level of risky behavior is profitable for the system as a whole. It produces surprises which induce a reconsideration of some aspects of the coalitions themselves. The recent Danish refusal to sign the Maastricht agreement on closer European unity may be viewed as an illustration of a risky actor. Indeed, the net effect has been to turn what seemed a trivial and apathetic administrative agreement into a deep and passionate debate among European countries with respect to the European construction.
The above discussion shows that implementing $T\neq 0$ within the present approach to coalitions should be rather fruitful. More elaboration is left for future work.
## 9 Conclusion
In this paper we have proposed a new way to understand alliance-forming phenomena. In particular it was shown that, within our model, the cold war stability was not the result of two opposite alliances as such, but rather of the existence of alliance-induced exchanges which neutralize the conflicting interactions among allies. It means that having two alliances or just one is qualitatively the same with respect to stability.
From this viewpoint, the strong instabilities which resulted from the Warsaw Pact dissolution are given a simple explanation. Simultaneously, some hints are obtained about possible policies to stabilize world nation relationships. Along this line, the importance of the European construction was also underlined.
We have also given some grounds for introducing non-rational behavior into country behavior, especially with the notions of “risky”, “frustrated” and “neutral” actors. A “risky” actor acts against his well-defined interest, while a “frustrated” actor acts randomly, since he does not have a well-defined interest.
At this stage, our model remains rather primitive. However, it opens some new roads for exploring and forecasting international policies.
### Acknowledgments
I am indebted to D. Stauffer for numerous comments and critical discussions on the manuscript.
# Extremal Optimization of Graph Partitioning at the Percolation Threshold
## I Introduction
The optimization of systems with many degrees of freedom with respect to some cost function is a frequently encountered task in physics and beyond . In cases where the relation between individual components of the system is frustrated , such a cost function often exhibits a complex “landscape” over the space of all configurations. For growing system size, the cost function may exhibit an exponentially increasing number of unrelated local extrema separated by sizable barriers which makes the search for the exact, optimal solution usually unreasonably costly. Thus, it is of great importance to develop fast and reliable methods to find near-optimal solutions for such problems.
The observation of certain physical processes, in particular the annealing of disordered materials, has led to general-purpose optimization methods such as “simulated annealing” (SA) . SA applies the formalism of equilibrium statistical mechanics and in principle only requires the cost function as input. Thus, it is applicable to a variety of problems. But the performance of SA is hard to assess in general, even when limited to the standard combinatorial optimization problems. Aside from a multitude of adjustable parameters that crucially determine the quality of SA’s performance in a particular context, typical combinatorial optimization problems themselves possess various parameters that may change the landscape and SA’s behavior drastically .
In this paper we will explore the properties of a new general-purpose method, called Extremal Optimization (EO) , in comparison with SA. In contrast to SA, EO is based on ideas from non-equilibrium physics. As the basis for comparison we will use the graph partitioning problem (GPP), a standard NP-hard combinatorial optimization problem with similarities to disordered spin systems. We find that the GPP has a critical point as a function of the connectivity of graphs, with a less complex phase at lower connectivities. This critical point is related to the percolation transition of the graphs. Near this critical point, the performance of SA markedly deteriorates while EO produces only small errors.
This paper is organized as follows: In the next section we describe the philosophy behind the EO method, in Sec. III we introduce the graph partitioning problem, and in Sec. IV we present the algorithms and the results obtained in our numerical comparison of SA and EO, followed by conclusions in Sec. V.
## II Extremal Optimization
EO provides an entirely new approach to optimization , based on the non-equilibrium dynamics of systems exhibiting self-organized criticality (SOC). SOC often emerges when a system is dominated by the evolution of extremely atypical degrees of freedom .
A simple example of such a dynamical system, which inspired the development of EO, is the Bak-Sneppen model . There, species, located on the sites of a lattice, are each represented by a number between 0 and 1 that indicates their “fitness.” At each update, the smallest number (representing the worst adapted species) is discarded and replaced with a new number drawn from a uniform distribution on $[0,1]$. Without any interactions, all the numbers in the system would eventually become 1. But obvious interdependencies between species provide constraints for balancing the system’s fitness with that of each species: The change of fitness in one species impacts the fitness of an interrelated species. In the Bak-Sneppen model, the fitness values on all sites neighboring the smallest number at that time step are simply replaced with new random numbers as well . After a certain number of such updates, the system organizes itself into a highly correlated state known as self-organized criticality (SOC) .
In the SOC state, almost all species have reached a fitness above a certain threshold. But these species merely possess what is referred to as punctuated equilibrium , because the co-evolutionary activity is bound to return in a chain reaction where a weakened neighbor can undermine one’s own fitness. Fluctuations that rearrange the fitness of many species abound and can rise to the size of the system itself, making any possible configuration accessible. Hence, such non-equilibrium systems provide a high degree of adaptation for most entities in the system without limiting the scale of change towards even better states.
EO attempts to utilize this phenomenology to obtain near-optimal solutions for optimization problems . For instance, in a spin glass system we may consider as fitness for each spin its contribution to the total energy of the system. EO would search for ground state configurations by perturbing preferentially spins with large contributions. As in the Bak-Sneppen model, such perturbations would be local, random rearrangements of those poorly adapted spins, allowing for better as well as for worse outcomes at each update. In the same way as systems exhibiting SOC get driven recurrently towards a small subset of attractor states through a sequence of “avalanches” , EO can fluctuate widely to escape local optima, while the extremal selection process ensures recurrent approaches to many near-optimal configurations. Especially in exploring low temperature properties of disordered spin systems, those qualities may help to avoid the extremely slow relaxation behavior faced by heat bath based approaches . In that respect, EO provides an alternative – and apparently equally capable – approach to Genetic Algorithms, which are often the only means to illuminate those important properties . The partitioning of sparse graphs as discussed here is particularly pertinent in preparation for similar studies on actual spin glasses.
It has been observed that many optimization problems exhibit critical points that separate off phases containing simple cases of a generally hard problem . Near such a critical point, finding solutions becomes particularly difficult for local search methods, which proceed by exploring some neighborhood in configuration space around an existing solution. There, near-optimal solutions become widely separated, with diverging barrier heights between them. It is not surprising that search methods based on heat-bath techniques like SA are not particularly successful in this highly correlated state . In contrast, the driven dynamics of EO does not possess any temperature control parameter to successively limit the scale of its fluctuations. Our numerical results in Sec. IV show that EO’s performance does not diminish near such a critical point. A non-equilibrium approach like EO may thus provide a general-purpose optimization method that is complementary to SA: While SA has the advantage far from this critical point, EO appears to work well “where the really hard problems are” .
## III Graph Partitioning
To illustrate the properties of EO and its differences with SA, we focus in this paper on the well-studied graph partitioning problem (GPP). In particular, we will consider the GPP near a phase transition where the optimization problem becomes especially difficult and possesses many similarities with physical systems.
### A Formulation of the Problem
The graph (bi-)partitioning problem is easy to formulate: Take $N$ points, where $N$ is an even number, let any pair of points be connected by an edge with a certain probability, and divide the points into two sets of equal size $N/2$ such that the number of edges connecting both sets, the “cutsize” $m$, is minimal: $m=m_{\mathrm{opt}}$. The global constraint of an equal division of the points between the sets places this problem generally among the hardest problems in combinatorial optimization, requiring a computational effort that would grow faster than any power of $N$ to determine the exact solution with certainty . The two physically motivated optimization methods, SA and EO, which we focus on here, usually obtain approximate solutions in polynomial time.
For random graphs, the GPP depends on the probability $`p`$ with which any two points in the system are connected. Thus, $`p`$ determines the total number of edges in an instance, $`L=pN(N1)/2`$ on average, and its mean connectivity per point, $`\alpha =p(N1)`$ on average. Alternatively, we can formulate a “geometric” GPP by specifying $`N`$ randomly distributed points in the $`2`$-dimensional unit square which are connected with each other if they are located within a distance $`d`$ of one another. Then, the average expected connectivity $`\alpha `$ of such a graph is given by $`\alpha =N\pi d^2`$. This form of the GPP has the advantage of a simple graphical representation, shown in Fig. 1.
It is known that geometric graphs are harder to optimize than random graphs . The characteristics of the GPP for random and geometric graphs at low connectivity appear to be very different, due to the dominance of long loops and short loops, resp., and we present results for both types of graphs here. In fact, in the case of random graphs the structure is locally tree-like, which allows for a mean-field treatment that yields exact results . In turn, the geometric case corresponds to continuum percolation of “soft” (overlapping) circles, for which precise numerical results exist . Finally, we also try to determine the average ground state energy of a dilute ferromagnetic system on a cubic lattice at fixed (zero) magnetization, which amounts to the equal partitioning of “up” and “down” spins while minimizing the interface between both types . Here, each vertex of the lattice holds a $\pm $-spin, and any two nearest-neighbor spins either possess a ferromagnetic coupling of unit strength or are unconnected. The probability that a coupling exists is fixed such that the average connectivity of the system is $\alpha $.
### B Graph Partitioning and Percolation
Like many other optimization problems, the GPP exhibits a critical point as a function of its parameters . In the case of the GPP we observe this critical point as a function of the connectivity $\alpha $ of graphs, with the cutsize $m_{\mathrm{opt}}$ as the order parameter. In fact, the critical point of partitioning is closely linked to the percolation threshold of graphs. In our numerical simulations we proceed by averaging over many instances of a class of graphs and try to reproduce well-known results from the corresponding percolation problem. Of course, using stochastic optimization methods (instead of cluster enumeration) is neither an efficient nor a precise means to determine percolation thresholds. But in turn we also obtain some valuable information about the scaling behavior of the average cost $<m_{\mathrm{opt}}>$ for optimal partitions near the threshold that goes beyond the percolating properties of these graphs.
We note, in accordance with Ref. , that the critical point separates hard cases from easy-to-solve cases of the GPP. The transition is related to the corresponding percolation problem for the graphs in the following manner: If the mean connectivity $\alpha $ is very small, the graph of $N$ points consists mainly of disconnected, small clusters or isolated points, which can be enumerated and sorted into two equal partitions in polynomial time with no edges between them ($m_{\mathrm{opt}}=0$). If $\alpha $ is large and the probability that any two points are connected is $p=\mathrm{O}(1)$, almost all points are connected into one giant cluster with $m_{\mathrm{opt}}=\mathrm{O}(N^2)$, and almost any partition leads to an acceptable solution. But when $p=\mathrm{O}(1/N)$, i. e. $\alpha =\mathrm{O}(1)$, the distribution of cluster sizes is broad, and the partitioning problem becomes nontrivial. Obviously, as soon as a cluster of size $>N/2$ appears, $m_{\mathrm{opt}}$ must be positive. In this sense, we observe for $N\to \mathrm{\infty }$ a sharp, percolation-like transition at an $\alpha _{\mathrm{crit}}$, with the cutsize $m_{\mathrm{opt}}$ as the order parameter.
For random graphs it is known that a cluster of size $\sim N$ exists for $\alpha >1$ , but only for $\alpha >\alpha _\mathrm{c}=2\mathrm{ln}2\approx 1.386$ do we find a cluster of size $>N/2$ . Geometric graphs in $D=2$ are known to percolate at about $\alpha =4.5$ , and we would expect $\alpha _\mathrm{c}$ for the GPP to be slightly larger than that. Also, the dilute ferromagnet should exhibit a non-trivial energy when the fraction of occupied bonds reaches slightly beyond the critical point $p_\mathrm{c}\approx 0.2488$ for bond percolation on a cubic ($D=3$) lattice , i. e. for connectivities $\alpha >2Dp_\mathrm{c}\approx 1.493$.
## IV Numerical Experiments
### A Simulated Annealing Algorithm
In SA , we try to minimize a global cost function given by $f=m+\mu (P_1-P_2)^2$, where $P_1$ and $P_2$ are the number of points in the respective sets. Allowing the size of the sets to fluctuate is required to improve SA’s performance in outcome and computational time, at the cost of an arbitrary parameter $\mu $ to be determined. Then, starting at a “temperature” $T_0$, the annealing schedule proceeds with $lN$ trial Monte-Carlo steps on $f$, tentatively moving a randomly chosen point from one set to the other (which changes $m$) to equilibrate the system. A move is accepted if $f$ improves or if the Boltzmann factor $\mathrm{exp}[(f_{\mathrm{old}}-f_{\mathrm{new}})/T]$ is larger than a randomly drawn number between $0$ and $1$. Otherwise the move is rejected and the process continues with another randomly chosen point. After that, we set $T_i=T_{i-1}(1-ϵ)$, equilibrate again for $lN$ trials, and so on, until the MC acceptance rate drops below $A_{\mathrm{stop}}$ for $K$ consecutive temperature levels. At this point the optimization process can be considered “frozen” and the configuration should be near-optimal, $m\approx m_{\mathrm{opt}}$ (and balanced, $P_1=P_2$). While SA is intuitive, controlled, and of very general applicability, its performance in practice depends strongly on the multitude of parameters, which have to be arduously tuned. For us it is thus expedient (and most unbiased!) to rely on an extensive study of SA for graph partitioning which determined $\mu =0.05$, $T_0=2.5$, $ϵ=0.04$, $A_{\mathrm{stop}}=2\%$, and $K=5$. Ref. set $l=16$, but performance improved noticeably for our choice, $l=64$.
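For concreteness, the full schedule condenses into the short Python sketch below. It is our paraphrase of the procedure just described (with the parameter values quoted above); the neighbor-list graph representation and the incremental bookkeeping of the cutsize and the imbalance $P_1-P_2$ are implementation choices of ours.

```python
import math
import random

def sa_partition(adj, mu=0.05, T0=2.5, eps=0.04, l=64, A_stop=0.02, K=5, seed=0):
    """Anneal f = m + mu*(P1-P2)^2; `adj` is a list of neighbor lists."""
    rng = random.Random(seed)
    n = len(adj)
    side = [1] * (n // 2) + [-1] * (n - n // 2)   # +/-1 set labels
    rng.shuffle(side)
    m = sum(side[i] != side[j] for i in range(n) for j in adj[i] if j > i)
    imbal = sum(side)                              # P1 - P2
    best, T, frozen = m, T0, 0
    while frozen < K:
        accepted = 0
        for _ in range(l * n):
            i = rng.randrange(n)
            # change in cutsize if point i switches sides
            dm = sum(1 if side[j] == side[i] else -1 for j in adj[i])
            df = dm + mu * ((imbal - 2 * side[i]) ** 2 - imbal ** 2)
            if df <= 0 or rng.random() < math.exp(-df / T):
                imbal -= 2 * side[i]
                side[i] = -side[i]
                m += dm
                accepted += 1
                if imbal == 0 and m < best:
                    best = m                       # best balanced cutsize
        frozen = frozen + 1 if accepted < A_stop * l * n else 0
        T *= 1.0 - eps
    return best
```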
### B Extremal Optimization Algorithm
In EO , each point $i$ obtains a “fitness” $\lambda _i=g_i/(g_i+b_i)$, where $g_i$ and $b_i$ are the number of “good” and “bad” edges that connect that point within its set and across the partition, resp. (We fix $\lambda _i=1$ for isolated points.) Of course, point $i$ has an individual connectivity of $\alpha _i=g_i+b_i$, while the overall mean connectivity of a graph is given by $\alpha =\sum _i\alpha _i/N$. The current cutsize is given by $m=\sum _ib_i/2$. At all times, an ordered list $\lambda _1\leq \lambda _2\leq \mathrm{\cdots }\leq \lambda _N$ is maintained, where $\lambda _n$ is the fitness of the point with the $n$-th rank in the list.
At each update we draw two numbers, $1\leq n_1,n_2\leq N$, from a probability distribution
$P(n)\propto n^{-\tau }.$ (1)
Then we pick the points which are elements $n_1$ and $n_2$ of the rank-ordered list of fitnesses. (We repeat the drawing of $n_2$ until we obtain a point from the set opposite to that of $n_1$.) These two points swap sets no matter what the resulting new cutsize $m$ may be, in notable distinction to the (temperature-)scale-dependent Monte Carlo update in SA. Then these two points, and all points they are connected to ($2\alpha $ on average), reevaluate their fitness $\lambda $. Finally, the ranked list of $\lambda $’s is reordered using a “heap” at a computational cost of $\sim \alpha \mathrm{ln}N$, and the process is started again. We repeat this process for a number of update steps per run that rises linearly with system size, and we store the best result generated along the way. Note that no scales are introduced into the process, since the selection follows a scale-free power-law distribution $P(n)$ and since – unlike in a heat bath – all moves are accepted, allowing for fluctuations on all scales. Instead of a global cost function, the rank-ordered list of fitnesses provides the information about optimal configurations. This information emerges in a self-organized manner, merely by selecting with a bias against badly adapted points, instead of “breeding” better ones .
There is merely one parameter, the exponent $`\tau `$ in the probability distribution in Eq. (1), that controls the selection process and optimizes the performance of EO. In initial studies, we determined $`\tau =1.4`$ as the optimal value for all graphs. It is intuitive that such an optimal value of $`\tau `$ should exist: If $`\tau `$ is too small, points would be picked purely at random with no gradient towards a good partition, while if $`\tau `$ is too large, only a small number of points with particularly bad fitness would be chosen over and over again, confining the system to a poor local optimum. It is a surprising numerical result that this value of $`\tau `$ appears to be rather universal, independent of $`N`$, $`\alpha `$, and the type of graph considered.
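The update rule requires very little code. The sketch below is ours: it re-sorts the fitness list at every update for brevity (the heap mentioned above is the efficient version) and draws the rank $n$ from $P(n)\propto n^{-\tau }$ by inverse-transform sampling of the continuous approximation to Eq. (1).

```python
import random

def eo_partition(adj, steps, tau=1.4, seed=0):
    """Extremal Optimization for the GPP; returns the best cutsize seen.
    Every selected swap is accepted unconditionally."""
    rng = random.Random(seed)
    n = len(adj)
    side = [1] * (n // 2) + [-1] * (n - n // 2)
    rng.shuffle(side)

    def fitness(i):
        if not adj[i]:
            return 1.0                            # lambda_i = 1 if isolated
        g = sum(side[j] == side[i] for j in adj[i])
        return g / len(adj[i])                    # lambda_i = g_i/(g_i+b_i)

    def rank():                                   # P(n) ~ n^(-tau), n = 1..N
        u = rng.random()
        return int((1.0 + u * (n ** (1.0 - tau) - 1.0)) ** (1.0 / (1.0 - tau)))

    best = None
    for _ in range(steps):
        order = sorted(range(n), key=fitness)     # rank 1 = worst fitness
        i = order[rank() - 1]
        j = order[rank() - 1]
        while side[j] == side[i]:                 # redraw until opposite set
            j = order[rank() - 1]
        side[i], side[j] = side[j], side[i]       # unconditional swap
        cut = sum(side[a] != side[c] for a in range(n) for c in adj[a] if c > a)
        if best is None or cut < best:
            best = cut
    return best
```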
### C Testbed of Graphs
In our numerical simulations we have generated random and $2D$ geometric graphs of varying connectivity by choosing $p$ or $d$, resp. For any instance of a graph labeled by a “connectivity $\alpha $”, not only does the actual connectivity vary from point to point, but the mean connectivity of such graphs also follows a normal distribution. (In particular for geometric graphs it is shifted to lower values due to the loss of connectivity at the boundaries.) For $N=500$, 1000, 2000, 4000, 8000, and 16000, we varied the connectivity between $\alpha =1.25$ and $\alpha =5$ for random graphs, and $\alpha =4$ and $\alpha =10$ for geometric graphs. Then, for each $\alpha $ we generated 16 different instances of graphs, identical for SA and EO. On each instance, we performed 8 (32) optimization runs for random (geometric) graphs, both for EO and SA. For each run we used a new random seed to establish an initial partition of the points. SA’s runs terminate when the system freezes. We terminated EO runs after $200N$ updates, leading to a comparable runtime for both methods.
For the dilute ferromagnet, we fixed the number of couplings to obtain a specific average connectivity $\alpha $. Those couplings were then placed on random links between nearest-neighbor spins to generate an instance. We used 16 instances, and 16 runs for each, at connectivities $1.6\leq \alpha \leq 4$. Here, we only used $100N$ updates for EO, and the temperature length of $16N$ recommended in Ref. but with a higher starting temperature for SA, to optimize performance at a comparable runtime for both methods, as shown in Fig. 2.
### D Evaluation of Results
#### 1 Comparison of EO and SA
We evaluate the performance of SA and EO separately. For each method, we only take its best result for each instance and average those best results at any given connectivity $\alpha $ to obtain the mean cutsize for that method as a function of $\alpha $ and $N$. To compare EO and SA, we determine the relative error of SA with respect to the best result found by either method for $\alpha \geq \alpha _\mathrm{c}$. Figs. 3a-c show how the error of SA diverges with increasing $N$ near $\alpha _\mathrm{c}$ for each class of graphs.
Depending on the type of graph under consideration, the quality of the SA results may vary. The data for random graphs in Fig. 3a only shows a relatively weak deficit in SA’s performance relative to EO. Near $\alpha _\mathrm{c}=2\mathrm{ln}2\approx 1.386$, SA’s relative error remains modest, and only grows very weakly with increasing $N$. For large connectivities $\alpha $, SA quickly becomes the superior method for random graphs, which may be due to their increasingly homogeneous structure (i. e. low barriers between optima) that does not favor EO’s large fluctuations. On the other hand, the averages obtained by EO appear to be very smooth (see the scaling in Fig. 4a), whereas the apparent noise in Fig. 3a indicates large variations between instances for the SA results.
The very rugged structure of geometric graphs near the percolation threshold (see Fig. 1a), $\alpha _\mathrm{c}\approx 4.5$, is most problematic for SA, leading to huge errors which appear to increase linearly with $N$. Barriers between optima are high within each graph, now favoring EO’s propensity for large fluctuations. On the scale of Fig. 3b, error bars attached to the data (which we have generally omitted) would hardly be significant. But experience shows that both methods exhibit large variations in results between instances, which is in large part due to actual variations in the structure between geometric graphs.
The results for the dilute ferromagnet exhibit a mix of the two previous cases. Since the points are arranged on a $D=3$ lattice, the structure of these graphs is definitely geometrical, but local connectivities are limited to the $2D=6$ nearest neighbors that each point possesses. Again, SA’s error is huge and appears to diverge about linearly near the threshold, $\alpha _\mathrm{c}\approx 1.5$. But due to the limited range of connectivities, graphs soon become rather homogeneous for increasing $\alpha $, which in turn appears to favor SA away from the transition, especially for larger graphs. (For larger $N$, any local structure gets quickly averaged out due to the local limits on the connectivity, whereas an unlimited range of local structures can emerge in the geometric graphs above.)
#### 2 Scaling of EO-Data near the Transition
For the data obtained with EO, we make an Ansatz
$m_{\mathrm{opt}}\sim N^\nu \left(\alpha -\alpha _\mathrm{c}\right)^\beta $ (2)
to scale the data for all $N$ onto a single curve, shown in Figs. 4a-c. From the scaling Ansatz we can extract an estimate for $\alpha _\mathrm{c}$ to compare with percolation results, as a measure of the accuracy of the data obtained with EO. Furthermore, we also obtain numerical estimates for the exponents $\nu $ and $\beta $ which characterize the transition. The exponent $\nu $, describing the finite-size scaling behavior, could be inferred from general, global properties of a class of graphs. For instance, $\nu =1$ for random graphs, because any global property of these graphs is extensive . On the other hand, the exponent $\beta $, describing the scaling of the order parameter near the transition, is related to the intricate structure of the interface needed to separate the points into equal-sized partitions. Thus, we would expect $\beta $ to be nontrivial even for random graphs. (To our knowledge, no previous predictions for these exponents exist.)
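Operationally, the exponents follow from a nonlinear least-squares fit of Eq. (2) to the averaged cutsizes. The snippet below sketches such a fit with scipy; the data points are invented placeholders standing in for the measured averages, and the overall prefactor $c$ is an extra fit parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

# placeholder averages (N, alpha, <m_opt>) -- substitute the measured data
N = np.array([500.0, 500.0, 1000.0, 1000.0, 2000.0, 2000.0])
alpha = np.array([1.6, 2.0, 1.6, 2.0, 1.6, 2.0])
m_opt = np.array([3.1, 8.5, 6.1, 17.0, 12.3, 34.0])

def ansatz(x, c, nu, beta, alpha_c):
    """Eq. (2) with an explicit prefactor: c * N^nu * (alpha - alpha_c)^beta."""
    Nv, a = x
    return c * Nv**nu * np.clip(a - alpha_c, 1e-9, None)**beta

popt, pcov = curve_fit(ansatz, (N, alpha), m_opt, p0=(0.05, 1.0, 1.2, 1.3))
print("c, nu, beta, alpha_c =", popt)
```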
For random graphs in Fig. 4a, the scaling Ansatz in Eq. (2) is particularly convincing. We verify that $\nu =1$ and obtain $\beta =1.2$. From the fit we also obtain $\alpha _\mathrm{c}\approx 1.30$, just slightly below the exact value of $1.386$ . The fit produces an error of about $\pm 0.1$ in the determination of $\beta $; this ignores any error introduced by the limited number of instances averaged over, or any bias due to the shortcomings of EO in approaching the exact optima. A satisfactory fit in turn indicates that such errors should be negligible.
For geometric graphs in Fig. 4b, we found the best scaling for $\nu =0.6$. Since we used only 16 different instances to average over at each $N$ and $\alpha $, the data get very noisy for larger connectivities, due to large fluctuations in the optimal cutsizes between those instances and/or EO’s inability to find good approximations. We chose to fit only points up to $\alpha =7$ and obtained $\beta =1.4$ and $\alpha _\mathrm{c}\approx 4.1$, even smaller than the critical value for percolation, $4.5$. Obviously, the obtained values are rather poor, but they at least indicate EO’s ability to approximate the optimal cutsizes with bounded error near the transition.
The data for the dilute ferromagnet in Fig. 4c appears to scale well for $\nu =0.75$. Since EO’s performance falls behind that of SA for $\alpha >3$, we only fit to smaller values of $\alpha $ and obtain $\beta =1.15$ and $\alpha _\mathrm{c}=1.55$, as desired just slightly larger than the value for percolation, $1.49$. We estimate the error from the fit for each of these values to be about $\pm 0.05$.
#### 3 Fixed-Valence Graphs
Finally, we have also performed a study on graphs where points are linked at random, but where the connectivity $\alpha $ at each point is fixed. These graphs have been investigated previously, theoretically and numerically using SA . Since $\alpha $ is now fixed to be an integer, we cannot tune ourselves arbitrarily close to a critical point. Furthermore, the problem is non-trivial only when $\alpha \geq 3$. These graphs have the property that at a given $\alpha $ and $N$ the optimal cutsizes between instances vary little, and only few instances are needed to determine $m_{\mathrm{opt}}$ with good accuracy.
In our simulations we found that for larger values of $\alpha $, SA and EO both confirm the results in Ref. quite well. But for $\alpha =3$, the lowest non-trivial connectivity, we did observe significant differences between EO and the study in Ref. . Ref. , by averaging 5 instances each at various values of $N$ ($450\leq N\leq 4000$), found a normalized average energy
$E=-1+{\displaystyle \frac{4m_{\mathrm{opt}}}{\alpha N}}$ (3)
of $-0.840$, presumably correct to the digits given. We found, by averaging over 32 instances using 8 EO runs on each, for $N=1024$, 2048, and 4096, that $E=-0.844\pm 0.001$. But this result is still significantly higher than some theoretical predictions , and we will investigate whether longer runtimes may further reduce the cutsizes for these graphs .
## V Conclusions
In this paper we have demonstrated that Extremal Optimization (EO), a new optimization method derived from non-equilibrium physics, may provide excellent results exactly where Simulated Annealing (SA) fails. While further studies will be necessary to understand (and possibly, predict) the behavior of EO, we have used it here to analyze the phase transition in the NP-hard graph partitioning problem. The results illustrate convincingly the advantages of EO and produce a new set of scaling exponents for this transition for a variety of different graphs.
I thank A. Percus, P. Cheeseman, D. S. Johnson, D. Sherrington, and K. Y. M. Wong for very helpful discussions.
# Studying Evolution of the Galactic Potential and Halo Streamers with Future Astrometric Satellites
## 1 Introduction
Tidal streams in the Galactic halo are a natural prediction of hierarchical galaxy formation, where the Galaxy builds up its mass by accreting smaller satellite galaxies. They are often traced by luminous horizontal and giant branch stars outside the tidal radius of a satellite (such as the Sagittarius dwarf galaxy) or a globular cluster (cf. Grillmair et al. 1998, Irwin & Hatzidimitriou 1995) as a result of tidal stripping, shocking or evaporation. That extra-tidal material (stars or gas clouds) traces the orbit of the satellite or globular cluster has long been known to be a powerful probe of the potential of the Galaxy in the halo, and has been exploited extensively particularly in the case of the Magellanic Clouds and Magellanic Stream (Murai & Fujimoto 1980, Putman et al. 1999, this volume).
Among the future astrometric missions, SIM is a pointed observatory with the main science goal of finding nearby planets, while GAIA (under study) will continuously monitor the relative positions of about $10^9$ stars brighter than a limit of $20$ mag over the full sky over a period of $4$–$5$ years, with additional radial velocity information to better than $3$–$10$ km/s accuracy (depending on spectral type) for stars brighter than $16$ mag (cf. the compiled instrument specs in Table 1, and the original documents by Gilmore et al. 1998, Unwin et al. 1998). GAIA represents an improvement over the Hipparcos mission by a factor of about a thousand in precision, more than a million in the probing volume, and a factor of ten thousand in the number of objects.
Helmi, Zhao & de Zeeuw (1999, this volume) show that streams can be identified as peaks in the distribution in angular momentum space, measurable with GAIA. Once identified, we can fit a stream with an orbit, or more accurately with a simulated stream in a given potential. Johnston, Zhao, Spergel & Hernquist (1999) show that a few percent precision in the rotation curve, flattening and triaxiality of the halo is reachable by mapping out the proper motions (with SIM accuracy) and radial velocities along a tidal stream $\sim 20$ kpc from the Sun. In particular, they show that the fairly large error in distance measurements to outer halo stars presents no serious problem, since one can predict distances theoretically using the known narrow distribution of the angular momentum or energy along the tails associated with a particular Galactic satellite. We expect these results should largely hold for streams detectable by GAIA. These numerical simulations are very encouraging, since they show that it is plausible to learn a great deal about the Galactic potential with even a small sample of stream stars from GAIA. Some unaddressed issues include whether stream members will still be identifiable in angular momentum in potentials without axial symmetry, and the robustness of both methods if the Galactic potential evolves in time.
Here we illustrate how the properties of tidal streams evolve in time-dependent potentials, and discuss whether members of such streams might still be identified using the 6D information from GAIA. We study the effects of contamination from field stars, of evolution and non-axial symmetry of the potential, and of the lack of very accurate radial velocities and parallaxes on our ability to detect streams. We concentrate on satellites which fell in and were disrupted recently (about 4 Gyrs ago, well after the violent relaxation phase) and maintain a cold spaghetti-like structure. We also put the satellites on relatively tight orbits which nevertheless lie outside the solar circle (pericenter of about 8 kpc and apocenter of about 40 kpc), such that the bright member stars in the stream are still within the reach of GAIA. Such streams typically go around the Galaxy fewer than 5 times since disruption, and are far from fully phase-mixed.
## 2 Strategy for GAIA: stars in a stream vs. field stars
We propose to select bright horizontal/giant branch (HB/GB) stars as tracers of the tidal debris of a halo satellite (which we take to be either a dwarf galaxy or a globular cluster). A satellite with a typical luminosity $L=10^{5}$–$10^{7}L_{\odot }$ has numerous HB and GB stars with $M_V\approx 0.75$ mag, which are easily observable at 20 kpc from the Galactic center ($m_V\approx 18$ mag or brighter) with GAIA. While the depth of the debris is typically difficult to resolve with the GAIA parallax, the GAIA proper motion (cf. Table 1) is good enough to resolve the internal dispersion of the debris, which is of the order $\sigma /20\,\mathrm{kpc}\approx 100\,\mu $as/yr, where $\sigma \approx 10\text{ km s}^{-1}$ is the typical velocity dispersion of a satellite.
A satellite with luminosity $10^{5}$–$10^{7}L_{\odot }$ typically has
$$N_{obs}=\frac{fL}{500L_{\odot }}\sim 100$$
(1)
numbers of stars in its tidal tail, where we assume there is about one HB or GB star per $500L_{\odot }$ and that between $f=0.5\%$ and $50\%$ of the stars in the original satellite are liberated.
In comparison, the number of field HB or GB stars in the halo which happen to be in the same solid angle, and have the same proper motion and radial velocity, is
$$N_{field}=\mathrm{\Sigma }\,\mathrm{\Omega }\left(\frac{\sigma }{100\text{ km s}^{-1}}\right)^3\sim 1$$
(2)
where $100\text{ km s}^{-1}$ is the velocity dispersion of the field stars, $\mathrm{\Sigma }\approx 1$–$10$ is the typical number of field giants per square degree, and $\mathrm{\Omega }$ is the solid angle of the tidal debris in square degrees, which for the Sagittarius (Sgr) tidal stream is about $5^{\circ }\times (20^{\circ }$–$60^{\circ })$. In other words, while there might be 25 stars in a solid angle of $5^{\circ }\times 5^{\circ }$ in a stream, there is only about a $25\%$ chance of finding one field star in the same piece of sky with proper motion and radial velocity indistinguishable from those of the stream stars. In fact Sgr was discovered on the basis of radial velocity and photometric parallax against a dense foreground of bulge stars (Ibata, Gilmore, & Irwin 1994). As far as identifying stars in a cold stream with GAIA is concerned, we conclude that contamination from field halo stars is likely not a serious problem.
It is worth commenting on the advantages of stars in a stream, as compared to stars in the field, in constraining the Galactic potential. Stars in a stream trace a narrow bunch of orbits in the vicinity of that of the center of mass, and are correlated in orbital phase: they can all be traced back to a small volume (e.g., near pericenters of the satellite orbit) where they were once bound to the satellite. Hence we expect a tight constraint on the parameters of the Galactic potential and the initial conditions of the center of mass of the satellite (about a dozen parameters in total) by fitting the individual proper motions of one hundred or more stars along a stream, since the fitting problem is over-constrained. In contrast, field stars are random samples of the distribution function (DF) of the halo, and the large number of degrees of freedom in choosing the 6-dimensional DF often makes the problem under-constrained: one generally cannot assume that the halo field stars are in a steady state as an ensemble, because it typically takes much longer than a Hubble time to phase-mix completely at radii of 30 kpc or more.
How well can we determine the Galactic potential with a stream? Assume each debris star has an intrinsic energy spread $\mathrm{\Delta }E\approx V_{cir}^2a/R$ relative to the orbit of the center of mass, where $a=(0.1$–$1)$ kpc is the size of the parent satellite at the time of disruption, which is also the thickness of a cross-section of the stream, and $R=(8$–$40)$ kpc is the radius of the pericenter. Then the accuracy of the potential from fitting $N_{obs}\approx 100$–$1000$ stars is
$$ϵ\approx \frac{1}{\sqrt{N_{obs}}}\frac{\mathrm{\Delta }E}{V_{cir}^2}\approx (0.1\text{–}1)\%$$
(3)
The accuracy is not very sensitive to the luminosity of the disrupted satellite since both $`N_{obs}`$ and $`a`$ scale with the luminosity of the satellite. While this agrees fairly well with the study of parameterized static potentials (Johnston et al. 1999), it remains to be tested for more realistic models with evolution.
## 3 Science for GAIA: evolution of the Galactic potential
To study the effect of the evolution and flattening of the potential on a stream, we follow the disruption of satellites in a simple, flattened, singular isothermal potential $\mathrm{\Phi }(r,\theta ,t)$ which is time-dependent but maintains a rigorously flat rotation curve at all radii; here $(r,\theta )$ are the spherical coordinates describing the radius and the angle from the North Galactic Pole, and $t$ is defined such that $t=0$ is the present epoch. This is perhaps a sensible choice, since we do not expect a major merger to have occurred within the past 4 Gyrs. We study three types of moderately evolving potentials with identical flattening and rotation curves at the present epoch.
First we consider a Galactic potential
$$\mathrm{\Phi }_G(r,\theta ,t)=V_0^2\left[A_s\mathrm{log}r+\frac{ϵ}{2}\mathrm{cos}2\theta \right],$$
(4)
where
$$A_s(t)=1-ϵ_0+ϵ(t),\qquad ϵ(t)=ϵ_0\mathrm{cos}\frac{2\pi t}{T_G}.$$
(5)
This model simulates the effect of the Galaxy becoming more massive and more flattened in potential as it grows a disk. The time evolution is such that the Galactic potential grows from prolate at time $t=-T_G/2$ to spherical at time $t=-T_G/4$, and then to oblate at $t=0$. A more general prescription of the temporal variation might include a full set of Fourier terms. We adopt the parameters
$$V_0=200\text{ km\hspace{0.17em}s}^1,ϵ_0=0.1,$$
(6)
such that the present-day rotation curve amplitude is $200\text{ km s}^{-1}$, with an equipotential axis ratio $1-ϵ_0\approx 0.9$. $ϵ_0$ needs to be small so as to guarantee that the volume density of the model is positive everywhere at all times. We set $T_G/4=4$ Gyr, a reasonable time scale for the growth of the Galactic disk.
Second we consider a Galactic potential where the minor axis of the potential slowly flips over a time scale $`T_F`$,
$$\mathrm{\Phi }_F(r,\theta ,t)=V_0^2\left[\mathrm{log}r+\frac{ϵ_0}{2}\mathrm{cos}2(\theta \frac{\pi t}{T_F})\right].$$
(7)
This potential might mimic the tidal harassment of Local Group galaxies. We set $T_F=2$ Gyr for the time to flip by $180^{\circ }$. This type of model has also been examined by Ostriker & Binney (1989).
For the last case we analyze a static Galactic potential plus a time-varying perturbation coming from a massive satellite on a circular orbit with a period $`T_P`$. The potential is
$$\mathrm{\Phi }_P(r,\theta ,t)=V_0^2\left[\mathrm{log}r+\frac{ϵ_0}{2}\mathrm{cos}2\theta \right]-\frac{GM_P}{\sqrt{|\mathbf{R}-\mathbf{R}_P(t)|^2+b_P^2}},$$
(8)
where we use a Plummer model for the perturber’s potential, with a scale length $b_P$ about half of the tidal radius; the total mass of the perturber, $M_P$, is taken to be $5\%$ of the Galactic mass enclosed by its circular orbit of period $T_P$ and radius $R_P=\frac{V_0T_P}{2\pi }$. Generally, the orbital plane of the perturber is unrelated to that of the stream, but here they are taken to be the same plane, with both moving in the same sense. The perturber is set on an exactly polar, circular orbit with a period $T_P=2$ Gyr ($R_P=64$ kpc, $M_P=3\times 10^{10}M_{\odot }$, $2b_P=14$ kpc), without considering effects such as dynamical friction and mass loss. These parameters might be appropriate for a very massive perturber such as the progenitor of the Magellanic Clouds (Zhao 1998a,b), which could significantly “harass” all smaller satellites in the halo.
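For reference, the three potentials translate directly into code. The unit bookkeeping below (radii in kpc, velocities in km/s, times in Gyr, $G$ in kpc (km/s)$^2/M_{\odot }$) is our own choice; an orbit integrator built on these functions would also need the conversion 1 km/s $\approx $ 1.023 kpc/Gyr.

```python
import numpy as np

V0 = 200.0       # km/s, rotation amplitude
EPS0 = 0.1       # flattening parameter epsilon_0
TG = 16.0        # Gyr, so that T_G/4 = 4 Gyr (growing-disk model)
TF = 2.0         # Gyr, time to slew the minor axis by 180 degrees
G_KPC = 4.30e-6  # G in kpc (km/s)^2 / Msun

def phi_G(r, theta, t):
    """Growing-disk potential, Eqs. (4)-(5): prolate at t = -T_G/2,
    spherical at t = -T_G/4, oblate at t = 0."""
    eps = EPS0 * np.cos(2.0 * np.pi * t / TG)
    A_s = 1.0 - EPS0 + eps
    return V0**2 * (A_s * np.log(r) + 0.5 * eps * np.cos(2.0 * theta))

def phi_F(r, theta, t):
    """Slewing potential, Eq. (7)."""
    return V0**2 * (np.log(r)
                    + 0.5 * EPS0 * np.cos(2.0 * (theta - np.pi * t / TF)))

def phi_P(r, theta, pos, pos_P, M_P=3e10, b_P=7.0):
    """Static flattened potential plus a Plummer perturber, Eq. (8);
    pos and pos_P are Cartesian positions in kpc."""
    d2 = float(np.sum((np.asarray(pos) - np.asarray(pos_P)) ** 2))
    return (V0**2 * (np.log(r) + 0.5 * EPS0 * np.cos(2.0 * theta))
            - G_KPC * M_P / np.sqrt(d2 + b_P**2))
```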
Fig. 1 shows the orbit and morphology of the simulated stream in these three potentials. The orbit of the disrupted satellite is chosen so that the released stream stays in the polar $xz$ plane, which passes through the location of the Sun and the Galactic center; the $xyz$ coordinate system is defined such that the Sun is at $x=8$ kpc and $y=z=0$.
We then simulate observations of mock data of 100 bright HB and GB stars, convolved with the GAIA accuracy. The particles in the disrupted satellite are initially distributed with an isotropic Gaussian density and velocity profile (as in Helmi & White 1999), with dispersions $0.4$ kpc and $4\text{ km s}^{-1}$ respectively. These particles are released instantaneously at the pericenter, $8$ kpc from the center, $4$ Gyrs ago. These parameters might be most relevant for satellites such as the progenitor of the Sagittarius stream. The parallax and radial velocity from GAIA will not be very constraining for stream stars beyond 10 kpc ($100\,\mu $as in parallax) because of the rapid growth of the error bars with the magnitude of the stars (Lindegren, private communication). Here we adopt a simple parameterization for the errors of the sample stars. We consider only horizontal branch stars, for which the errors in parallax ($\pi $ in $\mu $as), proper motion ($\mu $ in $\mu $as yr$^{-1}$) and heliocentric radial velocity ($V_h$ in $\text{km s}^{-1}$) are functions of the parallax. We find that the simple formula
$$\sigma _\pi =1.6\sigma _\mu =\sigma _{V_h}=f(\pi ),\qquad f(x)=5+50(50/x)^{1.5},$$
(9)
approximates the GAIA specifications fairly well.
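In code, Eq. (9) and the convolution of a mock star with these errors read as follows (our sketch; recall that a star at 20 kpc has a true parallax of 50 $\mu $as).

```python
import numpy as np

def gaia_errors(pi_uas):
    """Eq. (9): returns (sigma_pi [uas], sigma_mu [uas/yr], sigma_Vh [km/s]),
    all driven by f(pi) = 5 + 50*(50/pi)^1.5."""
    f = 5.0 + 50.0 * (50.0 / pi_uas) ** 1.5
    return f, f / 1.6, f          # sigma_pi = 1.6*sigma_mu = sigma_Vh = f

rng = np.random.default_rng(42)
pi_true = 1000.0 / 20.0           # parallax in uas of a star at 20 kpc
s_pi, s_mu, s_vh = gaia_errors(pi_true)
pi_obs = pi_true + rng.normal(0.0, s_pi)   # 'observed' parallax: 50 +/- 55 uas
```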
Nevertheless, Fig. 2 and Fig. 3 show that the distribution of the proper motions of the stream remains narrow (as shown by the proper motion vs. proper motion diagram and the position–proper motion diagram) after taking the observational errors into account. The narrow distribution allows stars in a stream to be selected out from random field stars, as we argued in the previous section.
Fig. 4 shows the simulated streams in energy and angular momentum space for the various evolution histories of the Galactic potential. For example, in the model where the Galaxy slews, particles move along straight lines defined by the Jacobi integral $E-\frac{\pi }{T_F}J$, where $J$ is the angular momentum in the direction around which the system slews. By and large the energy $E$ of particles across each stream is spread out over only a narrow range at each epoch in the three models; the same holds, but to a lesser extent, for the angular momentum vector $\mathbf{J}$. This implies that stars in the stream are largely coeval even in the presence of realistic, moderate evolution of the Galactic potential. The energy and angular momentum are also modulated with particle position in a sinusoidal way across the stream, an effect which can in principle be used to infer the evolution rate of the Galactic potential and the flattening of the potential.
One of the challenges of using streams to constrain the potential is that measurements of both parallax and radial velocity from GAIA are likely dominated by noise for the fainter ($m_V\approx 20$ mag), more distant ($\gtrsim 50$ kpc) members of a halo stream (cf. Table 1). Fortunately, we can use the property of nearly constant angular momentum and energy across the stream to predict the missing information with an accuracy often comparable to, or better than, that directly observable by GAIA. In essence we apply variations of the classical method of obtaining “secular parallaxes”. The simplest example is a polar stream, which has no net azimuthal angular momentum, $J_z\approx 0$, so parallaxes can be recovered (to about $10\,\mu $as accuracy) from the solar reflex motion of the stream in the longitude direction, as shown by the linear regression $\pi \approx |\mu _l|/40$ in the top right panel of Fig. 2. More generally we can use the property that the total angular momentum and an approximate energy are roughly constant, i.e.,
$$\mathbf{J}=\mathbf{r}\times \mathbf{V}\approx \mathrm{constant},\qquad E\approx (200\text{ km s}^{-1})^2\mathrm{log}r+\frac{1}{2}\mathbf{V}^2\approx \mathrm{constant}$$
(10)
to predict both the parallax and the heliocentric velocity; here we simply pretend that the Galactic potential is spherical. Surprisingly, these very rough approximations often yield fairly accurate parallaxes ($\sim 10\%$) and heliocentric velocities ($\sim 30\text{ km s}^{-1}$), as shown by the narrow bands in Fig. 2 and Fig. 3; the predictions tend to be poorer for particles in the (anti-)center direction, because that is where the angular momentum $\mathbf{J}$ becomes insensitive to the heliocentric velocity. The predicted velocities and parallaxes are testable with direct observations, at least for the nearby brighter members of a stream.
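The polar-stream case makes the idea transparent: with $J_z\approx 0$, the longitude proper motion of a member star is almost pure solar reflex, $\mu _l\approx (V_0/4.74)\,\pi $, which for $V_0=200\text{ km s}^{-1}$ is the $\pi \approx |\mu _l|/40$ regression quoted above. A minimal sketch (ours):

```python
K = 4.74      # (km/s) per (kpc * mas/yr): v_t = K * mu[mas/yr] * d[kpc]
V0 = 200.0    # km/s, adopted circular speed of the Sun

def secular_parallax(mu_l):
    """Parallax (mas) of a polar-stream star from its longitude proper motion
    mu_l (mas/yr), assuming the motion in l is pure solar reflex:
    mu_l ~ (V0/K) * pi, i.e. pi ~ |mu_l|/42."""
    return abs(mu_l) * K / V0

print(secular_parallax(2.1))   # ~0.05 mas: the star sits at ~20 kpc
```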
Most relevant to constraining the evolution of the Galaxy is that the differences in the position–proper motion diagram (Fig. 5) produced by these different evolution scenarios are at a level clearly resolvable with the GAIA accuracy. We caution that the examples shown here are biased towards nicely structured and extended streams. These occur quite often in models where the Galaxy simply slews, but not in models involving a growing disk or a massive perturber, as we increase the amplitude of the perturbation. We have also run simulations with various parameters for the satellite orbit and initial size and for the Galactic potential. The structure of a stream can become very noisy for highly eccentric orbits with a pericenter smaller than 8 kpc, and/or for potentials where the temporal fluctuation of the rotation curve is greater than 10%. Such noisy structures, resulting from strong evolution, can be challenging to detect. Nevertheless we conclude that tidal streams are excellent tracers of the Galactic potential as long as a stream maintains a cold spaghetti-like structure; in particular, the results of Johnston et al. (1999) and Helmi et al. (1999) for static Galactic potentials are likely to be largely generalizable to moderately time-evolving potentials. However, perhaps the most exciting implication of these preliminary results is that by mapping the proper motions along the debris with SIM or GAIA we could eventually set limits on the rate of evolution of the Galactic potential, and distinguish among scenarios of Galaxy formation.
HSZ thanks Amina Helmi and Tim de Zeeuw for many helpful comments on an earlier version.
# Characterization of neutrino signals with radiopulses in dense media through the LPM effect
We discuss the possibilities of detecting radio pulses from high energy showers in ice, such as those produced by PeV and EeV neutrino interactions. It is shown that the rich structure of the radiation pattern in the 100 MHz to few GHz range allows the separation of electromagnetic showers induced by photons or electrons above 100 PeV from those induced by hadrons. This opens up the possibility of measuring the energy fraction transmitted to the electron in a charged current electron neutrino interaction, given adequate sampling of the angular distribution of the signal. The radio technique thus has the potential to complement conventional high energy neutrino detectors with flavor information.
PACS number(s): 96.40.Pq; 96.40.Tv; 95.85.Bh; 13.15.+g
Keywords: Cherenkov radiation, LPM effect, Electromagnetic and hadronic showers, Neutrino detection.
High energy neutrino detection is one of the experimental challenges for the next decade, with efforts under way to construct large Cherenkov detector arrays under water or ice. The size of these detectors must be on the scale of 1 km<sup>3</sup> water equivalent to test the neutrino flux predictions above a TeV that arise in a number of models attempting to explain the origin of the highest energy observed cosmic rays and gamma rays. EeV fluxes are difficult to avoid both in the production of the highest energy cosmic rays and in their propagation through the cosmic microwave background. Moreover there are allowed regions in parameter space for neutrino oscillations which may be best probed with high energy neutrinos from cosmological or galactic distances. It is thus desirable to explore alternative possibilities for neutrino detection, such as horizontal air showers or radio pulses from high energy showers. These techniques may be advantageous at sufficiently high energies and can in any case provide complementary information relevant for flavor identification.
The detection of coherent radio waves from high energy showers has been known since the 1960s as an interesting alternative for detecting ultra high energy showers. These showers develop a large excess negative charge because the vast majority of the shower particles are in the low energy regime dominated by electromagnetic interactions with the electrons in the target (Compton, Bhabha and Möller scattering, and electron–positron annihilation). The excess charge amounts to about $`20\%`$ of the total number of electrons and positrons (the shower size), which is proportional to shower energy. This excess charge radiates coherently: as long as the wavelength is larger than the shower dimensions, the electric field amplitude $`E=|\stackrel{}{E}|`$ scales with shower energy. The technique has been proposed for detecting neutrino interactions in ice or sand. It has potential advantages such as the relatively low cost of the detectors (antennae), the large attenuation length for radio waves and, most importantly, the fact that information on the excess charge distribution can in principle be reconstructed from the radiation pattern, because the radiation is coherent.
When a particle of charge $`z`$ travels through a medium of refractive index $`n`$ with velocity $`\stackrel{}{v}=\stackrel{}{\beta }c>c/n`$, Cherenkov light is emitted at the Cherenkov angle $`\theta _C`$, satisfying $`\mathrm{cos}\theta _C=(\beta n)^{-1}`$, with a power spectrum given by the well known Frank–Tamm result:
$$\frac{d^2W}{d\nu dl}=\left[\frac{4\pi ^2\mathrm{\hbar }}{c}\alpha \right]z^2\nu \left[1-\frac{1}{\beta ^2n^2}\right],$$
(1)
with $`dl=c\beta dt`$ the particle track length element, and $`\alpha `$ the fine structure constant. This is the standard approximation used for most Cherenkov applications, where the wavelengths are orders of magnitude smaller than the tracks.
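For orientation, Eq. (1) is easy to evaluate numerically. The sketch below is our own illustration; the radio-frequency refractive index of ice, $`n\approx 1.78`$, is an assumed value not quoted above.

```python
import numpy as np

ALPHA = 1 / 137.036        # fine structure constant
HBAR = 1.0546e-34          # J s
C = 2.9979e8               # m/s

def cherenkov_angle(n, beta=1.0):
    """Cherenkov angle (degrees) for a particle of speed beta*c in index n."""
    return np.degrees(np.arccos(1.0 / (beta * n)))

def frank_tamm(nu, n, z=1, beta=1.0):
    """Energy radiated per unit frequency per unit track length, Eq. (1),
    in J per Hz per metre of track."""
    return (4 * np.pi**2 * HBAR / C) * ALPHA * z**2 * nu * (1 - 1 / (beta**2 * n**2))

n_ice = 1.78                          # assumed radio index of ice
print(cherenkov_angle(n_ice))         # ~55.8 degrees
print(frank_tamm(1e9, n_ice))         # ~7e-35 J/(Hz m) for a single charge at 1 GHz
```

The single-particle yield is tiny; it is the coherent addition of the excess charge, scaling the field with shower energy, that makes the pulse detectable.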
The frequency band over which Cherenkov radiation is emitted can extend well beyond the familiar optical band if the medium is transparent. As the radiation wavelength becomes comparable to the particle tracks, the emission from all particles is coherent and the excess charge distribution in the shower generates a complex radiation pattern. It is most convenient to work directly with the Fourier transform of the radiated electric field, $`\stackrel{}{E}`$, which can be obtained directly from Maxwell’s equations in a dielectric medium. In the Fraunhofer limit (observation distance $`R`$ much greater than the tracklength) the contribution of an infinitesimal particle track $`\stackrel{}{\delta l}=\stackrel{}{v}\delta t`$ is given by:
$$R\stackrel{}{E}(\omega ,\stackrel{}{\mathrm{x}})=\frac{e\mu _\mathrm{r}\,i\omega }{2\pi ϵ_0c^2}\,\stackrel{}{\delta l}_{\perp }\,\mathrm{e}^{i(\omega -\stackrel{}{k}\cdot \stackrel{}{v})t_1}\,\mathrm{e}^{ikR},$$
(2)
where $`\stackrel{}{k}`$ is the wave vector in the direction of observation ($`\stackrel{}{k}\parallel \stackrel{}{R}`$) and $`\stackrel{}{\delta l}_{\perp }`$ is the tracklength projected onto a plane perpendicular to the observing direction.
This simple expression displays in a transparent form the three most important characteristics of such signals: the proportionality between the electric field amplitude and the tracklength; the fact that in the Cherenkov direction ($`\omega -\stackrel{}{k}\cdot \stackrel{}{v}=0`$) there is no phase factor associated with the position along the track direction; and the fact that the radiation is polarized in the direction of $`\stackrel{}{\delta l}_{\perp }`$, that is, in the apparent direction of the track as seen by an observer located at $`\stackrel{}{\mathrm{x}}`$.
Recent numerical simulations of radio pulses from both electromagnetic and hadronic showers are based on this expression. For energies below 10 PeV full simulations are possible. The characteristic angular distributions and frequency spectra are shown in Figs. 1 and 2. We can understand most of the pulse characteristics by studying the particle distributions in a shower, since the excess charge follows the electron distribution closely.
To a good approximation the pulse is the Fourier transform of the spatial distribution of the excess charge. For many purposes it is sufficient to study the Fourier transform of the one dimensional distribution (along the shower axis $`z`$), as has been extensively checked:
$$R\,|\stackrel{}{E}(\omega ,\stackrel{}{\mathrm{x}})|\simeq \frac{e\mu _\mathrm{r}\,\omega }{2\pi ϵ_0c^2}\,\mathrm{sin}\theta \left|\int dz\,Q(z)\,\mathrm{e}^{ipz}\right|$$
(3)
We here introduce the parameter $`p(\theta ,\omega )=(\omega /c)(1-n\mathrm{cos}\theta )`$ to relate the radio emission spectrum transparently to the Fourier transform of the (excess) charge distribution $`Q(z)`$. This approximation, together with hybrid techniques combining simulation and parameterization of shower development, has allowed the characterization of pulses from showers of energy up to 100 EeV.
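The role of $`p`$ can be demonstrated with a toy longitudinal profile. The sketch below is our own illustration, not the cited simulations: a Gaussian excess-charge profile and $`n=1.78`$ for ice are assumptions made only to expose the scaling.

```python
import numpy as np

C = 2.9979e8               # m/s
N_ICE = 1.78               # assumed radio refractive index of ice
THETA_C = np.arccos(1 / N_ICE)

def field_shape(nu, theta, l_sh=4.0):
    """|integral Q(z) exp(i p z) dz| times omega*sin(theta) for a unit-area
    Gaussian excess-charge profile of length scale l_sh (metres); the constant
    prefactors of Eq. (3) are dropped."""
    omega = 2 * np.pi * nu
    p = (omega / C) * (1 - N_ICE * np.cos(theta))
    # Fourier transform of a unit-area Gaussian of width l_sh:
    return omega * np.sin(theta) * np.exp(-0.5 * (p * l_sh) ** 2)

# At the Cherenkov angle p = 0 and the amplitude rises linearly with frequency;
# one degree away, the exp(-(p*l_sh)^2/2) factor cuts the spectrum off.
for nu in (0.1e9, 0.3e9, 1.0e9):
    print(nu, field_shape(nu, THETA_C), field_shape(nu, THETA_C + np.radians(1)))
```

The cut-off frequency away from the Cherenkov direction drops as the shower lengthens, which is precisely the handle on LPM-elongated showers exploited below.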
The angular distribution of the pulse has a main “diffraction” peak corresponding to $`p=0`$, the Cherenkov direction, see Fig. 1. For $`|p|l_{sh}\ll 1`$, where $`l_{sh}`$ is a length scale parameter for the shower, the electric field spectrum accurately scales with electron tracklength, see Fig. 2. In electromagnetic showers the tracklength is proportional to the energy, and for hadronic showers it scales with a slowly varying fraction of the energy (80–92% for shower energies between 100 TeV and 100 EeV).
The scaling with electromagnetic energy is broken by interference from different parts of the shower when $`|p|l_{sh}\gtrsim 1`$. As a result the frequency spectrum stops rising linearly with frequency and has a maximum $`\omega _M(\theta )`$ which depends strongly on $`\theta `$, as seen in Fig. 2. Expanding the condition for $`p(\theta )`$ about $`\theta _C`$, it simply becomes $`n\mathrm{sin}\theta _C\,l_{sh}\,\mathrm{\Delta }\theta \,\omega _M/c\sim 1`$, which clearly displays how $`\mathrm{\Delta }\theta `$ is inversely proportional to $`\omega _M`$, as shown in the figure. This allows independent determination of the angle between the observation and Cherenkov directions; this alone is not sufficient to establish the shower direction, but it can be combined with other measurements to provide useful information. The relation breaks down when approaching the Cherenkov direction, however, because there the lateral distribution plays the destructive role, although there is no interference from different shower depths (Eq. 2).
The “central peak” at 1 GHz concentrates most of the power. For a given frequency the angular spread of the pulse is also inversely proportional to $`l_{sh}`$. This effect is hardly apparent in showers below 10 PeV, whose longitudinal scale depends only logarithmically on energy. The difference between the longitudinal development of the excess charge in electromagnetic and hadronic showers is not enough to show up in the radiopulse structure (both are governed by the radiation length of the material). Nevertheless the angular width of the pulse narrows significantly for the characteristically elongated electromagnetic showers above 100 PeV because of the LPM effect. This narrowing of the angular distribution allows the identification of elongated showers.
The LPM effect manifests itself as a dramatic reduction of the pair production and bremsstrahlung cross sections at large energies due to large scale correlations in the atomic electric fields. It only affects the development of showers initiated by photons, electrons or positrons above a given energy, about $`20`$ PeV in ice. Showers initiated by EeV hadrons have high multiplicities (50-100) in their first interaction, and the pions produced typically have energies of 1–2% of that of the initial hadron. Moreover $`\pi ^0`$’s above 6.7 PeV are more likely to interact in ice than to decay, and only about $`2\%`$ of the hadron showers above 10 EeV have a photon carrying more than $`10\%`$ of the hadron energy. Furthermore, in a 100 EeV neutrino interaction, for example, the fraction of energy transferred to the hadron debris ($`\sim 25\%`$ on average) fragments into about 17 hadrons (mostly pions) which each carry about $`5\%`$ of the transferred energy (except for the leading baryon, which carries a fraction $`1-K`$, where $`K`$ is the inelasticity). As a result the photons that are responsible for the electromagnetic subshowers (from the decay of $`\pi ^0`$’s and other short lived particles) have energies far removed from that of the initial neutrino. Very few hadronic showers induced by 100 EeV neutrino interactions would display an LPM tail: for a photon to exceed $`100`$ PeV with a probability greater than $`2\%`$, the initial neutrino energy should be above 80 EeV.
The elongation has a dramatic effect on the angular distribution of the radio pulse. For electromagnetic showers the central peak width narrows as $`E^{-1/3}`$ above 20 PeV. A 10 EeV electron produces a pulse which is about 10 times narrower than that of a hadronic shower of the same energy, which makes differentiation between pulses from electromagnetic and hadronic cascades possible in principle, allowing the characterization of electron neutrinos (see Fig. 3). For showers initiated by hadrons above 10 EeV the pulse shows a characteristic angular interference pattern of two periodicities, corresponding to the two length scales: one associated with the hadronic shower and a second, longer but of much lower intensity, associated with the electromagnetic LPM tail. The radio pulse from an electron neutrino interaction has an interference pattern of a similar nature. As the average fraction of energy transferred to the hadron debris is expected to be about $`\langle y\rangle =0.25`$, this interference effect is typically enhanced, as shown in Fig. 4. The angular distribution of the pulse retains enough information to allow independent extraction of the total electromagnetic energy in both showers, that is, to determine the individual energy transfer of the reaction.
Typical energy thresholds for detecting electromagnetic showers with single antennas have been estimated to be in the tens of PeV for showers produced at 1 km from the antenna, assuming nominal frequencies of $`\nu _0=1`$ GHz and bandwidths of $`0.1\nu _0`$. This of course corresponds to the case in which the antenna lies in the illuminated region of the central peak. The volume of the illuminated region decreases linearly as $`\nu _0`$ rises and is also significantly different for electromagnetic and hadronic showers of energy above 100 PeV.
Clearly a signal from a single antenna would be of little use for neutrino detection unless information about the shower direction and/or the shower energy could be obtained from it. If this were not the case it would be impossible to distinguish such signals from nearby pulses produced by low energy showers, such as those induced by deeply penetrating muons. Information on neutrino interactions can only be obtained by placing antennas in an array covering a large region, whether on the ice surface or under it. The arrival times of the pulses, the polarization, the relative amplitudes of the signals, and the frequencies at different positions of the array elements are in principle experimentally accessible and would give relevant and redundant information. The technique is similar to “conventional” neutrino detector proposals but can be greatly enriched by the angular diffraction patterns, the frequency spectra and the polarization.
For intermediate energies one looks for events coming from “below”, where the Earth provides a shield against all other types of particles. For extremely high energies ($`>`$ PeV), however, the Earth becomes opaque and neutrino events have to be searched for in the horizontal direction or possibly from “above”. Although some high energy showers can be expected from other processes, such as atmospheric muon bremsstrahlung at PeV energies, this background is sufficiently suppressed. There is redundant information that allows a variety of cross checks. For instance, timing can be used to establish the shower position in a manner very similar to arrays of particle detectors observing air showers, but the spatial distribution of the signal across the detector can also be used for the same purpose; even the signal polarization provides an interesting cross check of the shower orientation, which will also be particularly useful for filtering out spurious noise signals.
The antenna/array parameters are crucial for performance, most importantly the operating frequency $`\nu _0`$, the bandwidth $`\mathrm{\Delta }\nu `$ and the array spacing. These parameters are deeply interconnected at the detection level and therefore require complex optimization once the noise levels are well understood. The spacing of the antenna array will determine the minimum distance at which the geometry of the illuminated region corresponding to the Cherenkov central peak can be reconstructed, which would indicate the position of the shower. The nominal frequency will determine both the width of the diffraction peaks and the transmitting properties of the medium, so the array spacing should be adjusted to the choice of antenna. Lastly, the larger the bandwidth the better the signal to noise ratio, because a coherent pulse adds across the band while the noise amplitude only grows as $`\sqrt{\mathrm{\Delta }\nu }`$.
Neglecting attenuation, the ratio of shower energy to distance has to be above a given value for a shower to be detected. As $`\nu _0`$ approaches the naively optimal value for coherence of 1 GHz, the attenuation distance drops below 1 km and, more problematically, temperature effects become important, possibly forcing detections to be within scales of less than $`1`$ km for such antennas. Using lower frequencies the signal to noise ratio drops, and higher energy thresholds are needed to compensate for the loss of signal. This may nevertheless be advisable if one is ready to concentrate on neutrinos of the highest energies, allowing detection at distances above 1 km.
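The distance scaling can be made explicit with a toy threshold estimate (our own sketch: the 20 PeV at 1 km reference point is an illustrative value taken from the "tens of PeV" thresholds quoted above, and the attenuation length is a free parameter).

```python
import numpy as np

def threshold_energy(R_km, E0_PeV=20.0, R0_km=1.0, L_att_km=None):
    """Minimum detectable shower energy vs distance: the far-field amplitude
    scales as E/R, so the threshold grows linearly with R; an optional
    exponential attenuation length L_att (km) can be folded in."""
    E = E0_PeV * (R_km / R0_km)
    if L_att_km is not None:
        E *= np.exp((R_km - R0_km) / L_att_km)
    return E

for R in (0.5, 1.0, 2.0, 5.0):
    print(R, threshold_energy(R), threshold_energy(R, L_att_km=1.0))
```

With an attenuation length around 1 km, the threshold rises steeply beyond a kilometre, which is the trade-off against lower operating frequencies discussed above.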
We have discussed the implications of radio pulse calculations for high energy shower detection, stressing how different features of the signal can be used for shower characterization. We have shown how the LPM effect allows the separation of charged current electron neutrino interactions from the rest and, in principle, how the technique can be used to extract the energy fraction transmitted to the electron. We have avoided the discussion of unresolved experimental issues, such as noise, which are likely to determine the final sensitivity of the technique, that is, the precise energy above which showers become detectable over sufficiently long distances. This sensitivity will also depend completely on the experimental setup, which will have to be optimized accordingly. These crucial issues have to be addressed with in situ experiments, and there are efforts in this direction, but they are unlikely to change the general conclusions obtained here.
Acknowledgements: We thank F. Halzen for suggestions after reading the manuscript and G. Parente, T. Stanev and I.M. Zheleznykh for helpful discussions. This work was supported in part by CICYT (AEN96-1773) and by Xunta de Galicia (XUGA-20604A96). J.A. thanks the Xunta de Galicia for financial support.
# Experimental determination of the $`B`$-$`T`$ phase diagram of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> to 150T for $`B\perp c`$
## Abstract
The $`B`$-$`T`$ phase diagram for thin film YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> with $`B`$ parallel to the superconducting layers has been constructed from GHz transport measurements to 150T. Evidence is seen for a transition from a high-$`T`$ regime dominated by orbital effects to a low-$`T`$ regime where paramagnetic limiting drives the quenching of superconductivity. Up to 110T the upper critical field is found to be linear in $`T`$ and in remarkable agreement with extrapolation of the longstanding result of Welp et al. from magnetisation measurements to 6T. Beyond this, a departure from linear behaviour occurs at $`T`$=74K, where a 3D-2D crossover is expected to occur.
Recent magneto-transport measurements on high-$`T_c`$ cuprates have provided invaluable information in building a complete picture of these materials, essential to the development of a rigorous theory for the phenomenon of high temperature superconductivity. For example, divergence of the upper critical field at low $`T`$ has been reported in overdoped Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6</sub>, Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>y</sub> and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. Such results have provided the impetus for considerations of unconventional behaviour in the anisotropic high-$`T_c`$ cuprates, such as reentrant superconductivity and the possibility of exceeding the paramagnetic limit. These measurements have typically relied on millisecond pulsed fields to observe upper critical fields which exceed the range ($`\sim 35`$T) of steady field magnets. For higher-$`T_c`$ materials such as optimally oxygen-doped YBCO ($`T_c\sim 90`$K), access to the normal state requires magnetic fields well in excess of those generated by ms pulsed systems, except for $`T`$ very near $`T_c`$. Explosive flux compression technology has been used to access this high field regime, in one case providing evidence for paramagnetic limiting of the upper critical field in YBCO for $`B`$ parallel to the superconducting layers ($`B\perp c`$). However, such measurements are extremely difficult and opportunities to make them are few. Single-turn coil magnetic field generators have also been used to make transport measurements on thin film YBCO. While these systems do not produce fields in the range of flux compression techniques, they do allow systematic, repeatable measurements to be made, as the destruction of the coil does not damage the sample or cryostat. Nevertheless, transport measurements remain difficult since the peak field is reached in a few $`\mu `$s and the maximum d$`B`$/d$`t`$ exceeds $`10^8`$T/s.
Previous flux compression measurements on thin film YBCO with $`B\perp c`$ gave an onset of dissipation $`B_{ons}`$=150T and an upper critical field $`B_{c2}`$=240T at 1.6K. This $`B_{c2}`$ is significantly smaller than $`B_{c2}`$(0)=674T predicted by Welp et al., who applied the Werthamer-Helfand-Hohenberg (WHH) formalism, accounting for orbital effects only, following measurements of the slope d$`B_{c2}`$/d$`T`$ near $`T_c`$. This large discrepancy has been interpreted in terms of the Clogston-Chandrasekhar paramagnetic limit $`B_p`$, which arises when the Zeeman energy exceeds the superconducting energy gap $`\mathrm{\Delta }_0`$, thus destroying the Cooper pair singlet state. Within BCS theory $`B_p`$=$`\gamma T_c`$, with $`\gamma `$=1.84T/K, which for optimally-doped YBCO gives $`B_p\sim `$170T. In stark contrast to these results for in-plane magnetic fields, measurements in the alternative and more widely-studied configuration, $`B\parallel c`$, have mapped out the entire phase diagram in good agreement with the WHH model.
Here we report transport measurements which have allowed the $`B`$-$`T`$ phase diagram of YBCO to be constructed for $`B`$$`<`$150T in the $`B\perp c`$ orientation, greatly extending previous magnetisation measurements to 6T. Although explosive flux compression techniques have previously been used to access this regime, the single turn coil system allows systematic measurements to be made on a single sample with both rising and falling magnetic field. Measurements on thin-film samples were made using a GHz technique in a single-turn coil system generating fields to 150T. The superconducting-normal transition was observed to be an equilibrium process, evidenced by the absence of any measurable hysteresis between up and down $`B`$ sweeps. Above $`T`$=74K, $`B_{c2}`$ is found to be linear in $`T`$, with the slope $`\alpha `$=d$`B_{c2}`$/d$`T`$ corresponding closely to that found in magnetisation measurements. Below 74K a departure from this slope is observed, and is understood as arising from a transition from 3D behaviour, where orbital effects quench superconductivity, to 2D behaviour, where paramagnetic limiting dominates.
The experimental configuration is shown in the inset to Fig. 1. A symmetric triplet coplanar transmission line (CTL) carries a microwave signal, $`\nu `$=0.8GHz-1GHz, past the sample, and the transmission $`S`$ is modulated by the resistivity $`\rho `$ of the sample. Full details of the experimental arrangement are set out in Ref. . A flow of cold He gas gives access to $`T`$ in the range 7K-300K, with the sample $`T`$ monitored by an AuFe-Chromel thermocouple mounted on the back side of the substrate. Two thermocouples mounted side by side on the same sample gave consistent readings to within 0.5K. Discharge of a 40kV capacitor bank into a 10mm diameter single turn copper coil generated fields to 150T; the copper coil was vaporised in the process, leaving the sample and cryostat intact.
A YBCO film, of thickness 250nm, $`T_c`$=87.2K and critical current $`J_c`$=3.14MA/cm<sup>2</sup> at 77K, was grown by on-axis dc magnetron sputtering on a MgO (001) substrate, with the $`c`$-axis oriented in the growth direction, at $`T`$=$`770^{}`$C in an Argon-Oxygen atmosphere with a deposition time of $`\sim `$90min. The film was etched to produce a 20$`\mu `$m strip perpendicular to the CTL to match the sample resistance to the characteristic impedance of the CTL, Z=50$`\mathrm{\Omega }`$. A 50nm dielectric layer of Si<sub>3</sub>N<sub>4</sub> separated the film and CTL so that the coupling to the sample was capacitive. This eliminates the need for ohmic contacts to the sample, which can be problematic in pulsed fields.
Raw transmission $`S`$ as a function of $`B`$ is plotted for $`T`$=60K and 70K in Fig. 1. The single-turn coil system produces a number of cycles of $`B`$ prior to destruction (Fig. 1 inset). The absence of any hysteresis in the data was consistently observed in a number of samples at a range of temperatures, providing confidence in the critical field information obtained and in its comparison with equilibrium models for high-$`T_c`$ behaviour.
In the superconducting state the sample acts as an equipotential across the CTL, completely attenuating the microwave signal, resulting in zero transmission. As the applied field $`B`$ drives the sample normal, $`S`$ increases with increasing $`\rho `$. A model of this response is shown in the inset to Fig. 2, calculated assuming capacitive coupling across a thin dielectric layer to a 2D sheet of electrons. Sharp noise spikes in the data are attributed to GHz emission from the plasma produced in vaporisation of the single turn coil and are predominantly in the direction of increasing $`S`$, since the technique measures transmitted power (as opposed to voltage).
Fits to the raw data for the decreasing $`B`$ sweep which take into account the positive-going nature of the noise spikes are shown in Fig. 2. We define $`B_{c2}`$ as the intersection of the tangent to the transmission curve in the transition region with that immediately after it, following Ref. , on the basis that this gives values close to the 90$`\%`$ criterion and in good agreement with tunnelling data. For the lowest $`T`$ measurements, determining $`B_{c2}`$ becomes more difficult because the saturation region is shorter. We find, however, that the form of the transition is essentially the same for all $`T`$. $`B_{ons}`$ is defined as the point at which the transmission departs from $`S`$=0.
Fig. 3 shows the $`B`$-$`T`$ phase diagram for $`T`$$`>`$60K, with $`B_{ons}`$ and $`B_{c2}`$ values determined from the complete data set, a subset of which is shown in Fig. 2. In magnetisation measurements up to 6T on single crystal YBCO with $`B\perp c`$, $`B_{c2}`$ was found to be linear in $`T`$ with d$`B_{c2}`$/d$`T`$=-10.5T/K. The extrapolation of this slope $`\alpha `$ is plotted in Fig. 3, and we find that our data follow it closely down to $`T`$=74K, where $`B_{c2}\sim `$100T. Note that, as in Ref. , this line intersects the $`T`$ axis slightly below $`T_c`$, as do the $`B_{ons}`$ data. This is allowed for in Fig. 4 below.
Discrepancies between $`B_{c2}`$ determined from resistive measurements and either magnetisation or specific heat data have been reported, and it has been argued that the latter two better probe the mean-field $`B_{c2}`$ than do resistivity measurements. With the definition of $`B_{c2}`$ used here for GHz measurements in the high field regime, the good agreement we find with d$`B_{c2}`$/d$`T`$ determined from low field magnetisation measurements suggests that, in our case, probing resistivity yields a $`B_{c2}`$ in accordance with the mean-field value.
Although WHH theory includes paramagnetic and spin-orbit effects, in this Letter we consider the model arising from orbital effects only, as applied by Welp et al. . Whereas this model predicts a departure from the slope $`\alpha `$ only at low $`T`$, we see clear evidence for a departure below 74K. Previous measurements by Nakagawa et al. provide convincing evidence for the applicability of this model to YBCO for $`B\parallel c`$ (Fig. 3 inset). Near $`T_c`$ their data agree well with the d$`B_{c2}`$/d$`T`$ slope determined previously, and application of the WHH result:
$$B_{c2}(0)=-0.7\,T_c\,(dB_{c2}/dT)|_{T_c}$$
(1)
gives $`B_{c2}`$(0)=112T, in close agreement with that measured. Indeed, this agreement extends over the entire phase boundary (dotted line in Fig. 3 inset). A larger d$`B_{c2}`$/d$`T`$ for the case $`B\perp c`$, due to the anisotropy between the coherence lengths $`\xi _c`$ out of plane and $`\xi _{ab}`$ in plane, leads to a significantly larger $`B_{c2}`$(0)$`>`$600T in this model. A deviation well below this value is clear in Fig. 3, however. The good agreement between WHH and experiment for $`B\parallel c`$ and the departure from expected behaviour for $`B\perp c`$ suggest that a different mechanism may be responsible for the quenching of superconductivity in the latter case. Misalignment of $`B`$ has been considered, but cannot explain the low $`T`$ results of Ref. or the deviation observed here.
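The numbers quoted above can be cross-checked directly. A minimal sketch follows; the $`B\parallel c`$ slope of -1.9 T/K is our assumption (the text quotes only the resulting 112 T), while the $`B\perp c`$ slope and $`T_c`$ are taken from the text.

```python
def whh_bc2_0(Tc, slope):
    """WHH orbital estimate, Eq. (1): B_c2(0) = -0.7 * Tc * (dB_c2/dT)|Tc, in Tesla."""
    return -0.7 * Tc * slope

def pauli_limit(Tc, gamma=1.84):
    """Clogston-Chandrasekhar paramagnetic limit B_p = gamma * Tc, gamma in T/K."""
    return gamma * Tc

Tc = 87.0                                   # K, as used for the 640 T prediction
print(whh_bc2_0(Tc, -1.9))                  # ~116 T: B || c (assumed slope)
print(whh_bc2_0(Tc, -10.5))                 # ~640 T: B perp c (Welp et al. slope)
print(pauli_limit(Tc))                      # ~160 T, cf. the ~170 T quoted above
print(2**0.5 * whh_bc2_0(Tc, -10.5) / pauli_limit(Tc))   # beta ~ 5.7, used below
```

The orbital estimate for $`B\perp c`$ exceeds the paramagnetic limit by a factor of several, which is why paramagnetic effects are expected to take over at low $`T`$ in this orientation.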
Paramagnetic-limited upper critical fields have been observed in UPd<sub>2</sub>Al<sub>3</sub>, and $`B_{c2}`$$`>`$$`B_p`$ has recently been seen in (TMTSF)<sub>2</sub>PF<sub>6</sub>. The results in Ref. provided the first evidence for paramagnetic limiting in a high-$`T_c`$ material. The paramagnetic limit in YBCO is expected to occur at $`B_p\sim `$170T, well above the $`B_{c2}`$(0)=110T measured for $`B\parallel c`$, and well below the $`B_{c2}`$(0)=640T predicted for $`B\perp c`$ with $`T_c`$=87K (see Ref. ). The difference between WHH and experiment for $`B\perp c`$ is shown clearly in Fig. 4, where the WHH phase boundary and the data from Fig. 3 are plotted on a full phase diagram. The departure from this phase boundary is consistent with the results from flux compression measurements (also plotted) and provides further evidence for paramagnetic limiting of $`B_{c2}`$ for $`B\perp c`$.
A possible fit to the data of Refs. for $`B\perp c`$ has been obtained by including spin-orbit and paramagnetic parameters in the WHH and Maki models. Whilst it is instructive to apply these models, the value of the parameter giving rise to paramagnetic effects tends to be unphysical, and spin-orbit scattering should be negligible in YBCO above a few Tesla since the material is in the clean limit. Furthermore, the applicability of these 3D models to in-plane critical fields in layered superconductors is brought into question by analysis which has found that there is a non-zero temperature $`T^{}`$$`<`$$`T_c`$ below which the normal cores of the vortices are smaller than the lattice constant $`d`$, defined by $`\xi _c(T^{})`$=$`d/\sqrt{2}=8.5`$Å. Below $`T^{}`$ orbital effects should no longer provide a mechanism for the quenching of superconductivity and, in the absence of paramagnetic and spin-orbit effects, $`B_{c2}`$ would be infinite.
A 3D-2D crossover is expected to occur near $`T^{}`$, when $`\xi _c`$ becomes smaller than the inter-plane spacing. $`\xi _c`$(70K)$`\sim `$8Å is the separation between pairs of CuO planes, and $`\xi _c`$(80K)$`\sim `$$`d`$=12Å is the lattice constant. The departure from the linear behaviour of $`B_{c2}(T)`$ observed here at $`T`$=74K lies almost midway between these two characteristic temperatures. A theory which includes the finite thickness of the superconducting layers in the cuprates, but neglects paramagnetic effects, predicts a crossover in $`B_{c2}(T)`$ from linear to non-linear behaviour at $`T`$=0.9$`T_c\sim `$78K, close to the departure observed here. The fact that we see $`B_{c2}`$ increase less rapidly with decreasing $`T`$ in the nonlinear regime, in contrast to this theory, is most likely due to paramagnetic limiting.
For systems in which paramagnetic effects are important, a finite momentum or Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconducting state may exist. An extension of the original theory suggests that a large ratio of orbital to paramagnetic terms, $`\beta `$=$`\sqrt{2}B_{c2}(0)/B_p`$, is favourable to the formation of such a state, provided the superconductor is in the clean limit. Here $`B_{c2}(0)`$ is that defined in Eq. 1. The first reported observation of the FFLO state was in UPd<sub>2</sub>Al<sub>3</sub>, with $`\beta `$=2.4. More recently it has been suggested that the FFLO state should be enhanced in quasi-2D superconductors, making YBCO, which is in the clean limit with $`\beta `$=5.7 for $`B\perp c`$, an ideal candidate.
The phase diagram predicted for $`d`$-wave superconductivity in layered materials with $`B\perp c`$, which accounts for the coupling between the spins and the applied $`B`$, includes an FFLO phase. A comparison with experiment is shown in Fig. 4, where the only free parameter is the $`g`$-factor, which is set equal to 2. The agreement between this theory and the low $`T`$ data is surprisingly good, supporting the conjecture that the quenching of superconductivity is driven by paramagnetic effects at low $`T`$. In contrast, the data close to $`T_c`$ are better fitted by WHH, probably because the theory in Ref. does not consider orbital effects. We note that $`B_{ons}`$ for the data of Ref. coincides with the position of the first order phase transition between the zero momentum and FFLO states. This may not be accidental, since the superfluid density and $`J_c`$ may be lower in the FFLO phase.
In conclusion, we have found that for $`B\perp c`$ the upper critical field in YBCO follows the WHH phase boundary for $`T`$$`>`$74K. A clear departure is observed below 74K, near the expected position of a 3D-2D crossover. The low $`T`$ data below the crossover are consistent with a theory which accounts for the coupling of the spins to the applied $`B`$. This suggests a transition from a high $`T`$ regime where superconductivity is governed by orbital effects to a low $`T`$ regime where paramagnetic effects dominate. Combined with the data of Refs. , this constitutes strong evidence for the experimental realisation of paramagnetic limiting in a high-$`T_c`$ cuprate superconductor.
We thank K. Yang, S.L. Sondhi and R.H. McKenzie for detailed discussions and comments on this manuscript, B. Sankrithyan, who grew the film, and researchers at the Megagauss Laboratory, ISSP, Tokyo University for expert technical help and advice.
# The SiC problem: astronomical and meteoritic evidence
## 1 Introduction
Most of the solid material in the solar system is believed to have originated as small particles that condensed in outflows from stars. However, most solar system solids (predominantly silicates) have been reprocessed and/or homogenized so extensively that even the most primitive meteorite silicate samples no longer contain evidence of their origins. But some types of dust particles in the solar system have not been reprocessed and can potentially be associated with their stellar origin. One such dust type, silicon carbide (SiC), is believed to be a significant constituent of the dust around carbon-rich AGB stars (Gilman 1969; Treffers & Cohen 1974). Silicon carbide grains can be divided into two basic groups: $`\alpha `$-SiC if the structure is one of the many hexagonal or rhombohedral polytypes and $`\beta `$-SiC if the structure is cubic (e.g., Bechstedt et al. 1997). Silicon carbide grains exhibit a strong mid-infrared feature between 10 and 12 $`\mu `$m, with the peak of the $`\beta `$-SiC feature occurring about 0.4 $`\mu `$m shortwards of that for $`\alpha `$-SiC. Until now, the observed peak wavelengths of the SiC feature in astronomical spectra have been interpreted as indicating $`\alpha `$-SiC to be the dominant type of SiC around carbon stars (e.g. Baron et al. 1985; Pégourié 1988; Groenewegen 1995; Speck et al. 1997a,b). In fact, Speck et al. 1997a,b found no evidence of $`\beta `$-SiC in these circumstellar environments. Silicon carbide grains found in meteorites have isotopic compositions which imply that most of these grains were formed around carbon stars, with small amounts forming around novae and supernovae (see Hoppe & Ott 1997; Ott 1993 and references therein). All studies to date of meteoritic SiC grains have found them to be of the $`\beta `$-type (Bernatowicz 1997). $`\beta `$-SiC will transform into $`\alpha `$-SiC above 2100<sup>o</sup>C but the reverse process is thermodynamically unlikely. There is therefore an apparent discrepancy between the meteoritic and astronomical SiC-types, which has been discussed in detail by Speck et al. (1997a,b).
We present new infrared (IR) absorption measurements of thin films of $`\alpha `$\- and $`\beta `$-SiC created by compression in a diamond anvil cell. Unlike some other methods, a dispersive medium (such as potassium bromide; KBr) is not used. This relatively new approach is quantitative, if sufficient care is taken to produce an appropriately thin and uniform film, as shown by comparison of thin film spectra of various minerals to reflectivity data from the same samples (Hofmeister 1995; 1997 and references therein). Moreover, thin film spectra of garnets are nearly identical to single-crystal absorption data acquired in a vacuum (Hofmeister 1995): hence, thin film spectra can be applied to astronomical data without further manipulation. Our measurements strongly suggest, through comparison of the new thin film data with previous IR spectra collected for fine-grained KBr dispersions (in which the dust particles are dispersed in a KBr pellet), that the “matrix correction” wavelength shift, invoked by Dorschner et al. (1978) and adopted by other authors (e.g. Friedemann et al. 1981; Borghesi et al. 1985), should not be applied to laboratory spectra of sub-micron grain size dispersions of SiC: it was the use of this “KBr correction” which caused the above-mentioned discrepancy between the SiC-types found in meteorites and around carbon stars. A companion paper (Hofmeister & Speck, in preparation) clarifies the roles of scattering, absorption, reflection, and baseline correction in laboratory measurements, sheds light on problems associated with the powder dispersion technique, and discusses the conditions appropriate for the application of such data.
## 2 Laboratory techniques and results for thin-film samples
Single-crystals of $`\alpha `$-SiC were purchased from Alpha/Aesar (catalog no. 36224). This specimen is 99.8% SiC, consisting of hexagonal platelets 50 to 250 $`\mu `$m in diameter and 5 to 15 $`\mu `$m thick. Less than 1% of the platelets had an amber color; the remainder were pale grey. All were transparent in the visible with smooth, highly reflective surfaces. Polycrystals of $`\beta `$-SiC were donated by Superior Graphite. The purity of this sample is also 99.8%. One batch consisted of 1 $`\mu `$m powder; the other was a conglomerate of equant crystallites up to 25 $`\mu `$m in size. For this study, only the gray crystals of $`\alpha `$-SiC were examined.
Mid-IR spectra were obtained from 450 to 4000 cm<sup>-1</sup> (2.5-22.2 $`\mu `$m) at 2 cm<sup>-1</sup> ($`\sim `$0.01 $`\mu `$m) resolution using a liquid-nitrogen cooled HgCdTe detector, a KBr beam splitter and an evacuated Bomem DA 3.02 Fourier transform interferometer. Thin films were created through compression in a diamond anvil cell which was interfaced with the spectrometer using a beam condenser. Type II diamonds were used. Film thickness was estimated from the initial grain size, from the relative relief and color seen among the various films, and from the increase in grain diameter from the initial size during compression. Efforts were made to cover the entire diamond tip (0.6 mm diameter) with an even layer of sample, but slight irregularities in the thickness were inevitable. Reference spectra were collected from the empty DAC. Uncertainties in peak positions are related to peak widths, because the accuracy of the FTIR spectrometer itself is high, $`\pm `$0.01 cm<sup>-1</sup>. For procedural details see Hofmeister (1997).
Spectra obtained from $`\alpha `$-SiC (Fig. 1a,b) have an intense, broad band near 11.8 $`\mu `$m. The peak position lies between the longitudinal optic mode (LO) and transverse optic mode (TO) components observed by Spitzer et al. (1959), and a shoulder is seen at the LO position. A shoulder also occurs at 12.2 $`\mu `$m. The sample thickness could not be precisely determined, but was estimated to be sub-micron. Spectra obtained from $`\beta `$-SiC (Fig. 1c-g) depend somewhat on thickness. For the thinnest films, of sub-micron thickness (Fig. 1c,d), a fairly symmetric peak is found at 11.3 to 11.4 $`\mu `$m, and a weak shoulder exists at 10.7 $`\mu `$m, consistent with excitation of the LO component. Spectra from thicker film samples, of order a micron in thickness from visual inspection (Fig. 1e,f), have a peak at a similar position, with an asymmetric increase in intensity on the short-wavelength side, and display additional weak features. The 12.7 $`\mu `$m band is due to the TO feature. The weak, broad band at 13.4 $`\mu `$m is not an absorbance feature but is due to the Christiansen effect, which gives a minimum when the real part of the index of refraction is unity (Hapke 1993). The asymmetry of the main peak is due to the baseline rising towards the visible, probably a scattering effect from the grain boundaries. A spectrum from the thickest sample examined ($`\sim 5`$ $`\mu `$m) has high absorbance values overall, with the Si-C stretching peak superimposed (Fig. 1g). The appearance of the peak is intermediate between the peaks observed from the thin (Fig. 1d) and moderately thick samples (Fig. 1e), in that the main peak is symmetric but a weak subsidiary feature exists at 12.7 $`\mu `$m. Below 8 $`\mu `$m, the absorbance in Fig. 1g drops, rather than increasing as in the other spectra, because of interference fringes in the near-IR (not shown). These interference fringes indicate a distance of 5 $`\mu `$m, inferred to be the separation of the diamond anvils. The peak positions of the $`\beta `$-SiC samples are relatively independent of thickness. No difference can be discerned between the two samples of $`\beta `$-SiC (fine grain size vs. mixed grain sizes). Fig. 1c,g were made from a mixture of grain sizes and Fig. 1d,e,f are from the 1 $`\mu `$m powder fraction. Additional spectra from both $`\beta `$\- samples resembled those shown. The appearance of the spectra is consistent with pure absorption for the thinnest samples (Fig. 1c,d) and with absorption plus minor reflection for the thicker samples (Fig. 1e-g), given that the LO-TO coupling is stronger in $`\beta `$-SiC than in $`\alpha `$-SiC (Hofmeister & Speck, in preparation).
## 3 Comparison with dispersed-sample results
KBr matrix spectra of $`\beta `$-SiC obtained by Borghesi et al. (1985) for a fine grain size sample (mean diameter modeled by them as 0.02 $`\mu `$m, average diameter observed in TEM as 0.12 $`\mu `$m) closely match our own thin-film data, particularly the spectrum in Fig. 1e (shown in Fig. 2a). The greatest difference is that the TO mode appears as a shoulder rather than a separate peak. This difference is obviously due to sample thickness, because the thinner film of Fig. 1d has a barely discernible shoulder at the TO position. The LO mode occurs as a weak shoulder in their dispersion data. Their uncorrected peak barycenter was at 11.4 $`\mu `$m, the same as for our thin films. The match of Borghesi et al.’s dispersion data with Fig. 1e is consistent with an estimated film thickness $`<`$0.1 $`\mu `$m. The $`\beta `$-SiC spectrum of Papoular et al. (1998), with a maximum absorbance of 0.4, has a peak at 11.5 $`\mu `$m, in agreement with previous results and Fig. 1. Papoular et al. (1998) also present two unusual spectra of $`\beta `$-SiC consisting of broad overlapping peaks at 10.9 and 12.2 $`\mu `$m. These positions are close to the TO and LO components. The very high absorbance units of 1 and 2.5 for these samples suggest over-loaded pellets. For extreme concentrations of SiC (or large thicknesses), light is reflected between the TO and LO modes: the scattering in the pellet produces the dip in absorption. Problems occur at high absorption because the partial opacity induces a frequency dependent baseline.
For $`\alpha `$-SiC, the KBr-dispersion spectrum of Borghesi et al. (1985)’s smallest-grained (mean diameter modeled by them as 0.04 $`\mu `$m, average diameter observed in TEM as 0.16 $`\mu `$m) and purest sample (SiC-600) closely matches the spectrum of our thinnest film (Fig. 1a; comparison shown in Fig. 2b). Its peak position of 11.6 $`\mu `$m equals our result, given the experimental uncertainties. The positions of the shoulders are comparable to the LO and TO positions (Spitzer et al. 1959). Their sample N is compromised by $`\sim `$10% impurities (C and SiO<sub>2</sub>). Their SiC-1200 sample was 3-10 times larger grained, even for the ground and sedimented fraction, and is inappropriate for comparison. The study by Friedemann et al. (1981) involved larger grain sizes, but yielded similar spectral profiles, with a slight shift of the peak position to 11.8 $`\mu `$m.
It is clear that the introduction of a KBr matrix wavelength correction (e.g. Friedemann et al. 1981) is incorrect, since the barycenter peak for KBr dispersions with fine grain sizes and reasonably low concentrations equals that of the corresponding thin films, while the peak shapes are in excellent agreement. For these ($`<`$0.1 $`\mu `$m) grain sizes or film thicknesses, bulk absorption rather than surface effects dominates in the vicinity of the intense peak. Only for extremely thick or large-grained samples, $`\gtrsim `$1 $`\mu `$m, do the parameters of the dispersions differ from those of a bulk sample, but the differences are due to internal scattering among the particulates and to sample opacity leading to incorrect assumptions for zero transmission. This issue is discussed further by Hofmeister & Speck (in preparation). Similarly, the application of a KBr correction for silicates (Dorschner et al. 1978) is also problematic. Recent measurements by Colangeli et al. (1993, 1995) indicate minimal matrix effects for various silicates. Thin film data, on the other hand, do not suffer from these problems.
## 4 Implications for the SiC-type that best matches carbon star spectra
Having established that previous fits of laboratory spectra for SiC to astronomical spectra have been erroneous due to the unnecessary application of a KBr correction factor, we have re-fitted our own UKIRT CGS3 spectra of carbon stars (Speck et al. 1997a,b) without such a correction. We used the same $`\chi ^2`$–minimization routine described by Speck et al. (1997a,b) but the Borghesi et al. (1985) data for $`\alpha `$-SiC (SiC-1200, SiC-600 and SiC-N) and for $`\beta `$-SiC, to which Speck et al. (1997a,b) applied the usual KBr correction, were used uncorrected this time.
A detailed discussion of the fitting procedure can be found in Speck et al. (1997a). The routine was used on the flux-calibrated spectra, over the whole wavelength range (7.5–13.5 $`\mu `$m). All attempted fits involved either a blackbody or a blackbody modified by a $`\lambda ^{-1}`$ emissivity, together with some form of silicon carbide. The results are listed in Table 1 and representative sample fits are shown in Fig. 3. The $`\chi _R^2`$ values are the reduced $`\chi ^2`$ values, given by dividing the $`\chi ^2`$ value by the number of degrees of freedom. The fitting routine was unable to find fits for four of the spectra, those of AFGL 341, AFGL 2699, V Aql and Y CVn. However, these four spectra are unusual in that they display a strong feature in the 7.5-9.5 $`\mu `$m region (see Fig. 2 of Speck et al. 1997a), possibly identifiable with $`\alpha `$:C–H hydrogenated amorphous carbon (Baron et al. 1987, Goebel et al. 1995), and need to be classified separately. Self-absorption by SiC grains is a possibility in some cases (Speck et al. 1997a,b), so the fitting procedure was repeated using either a blackbody or modified blackbody, together with silicon carbide in both emission and absorption simultaneously. The results of this fitting are listed in Table 1: 13 of the 20 spectra that could previously be fitted by SiC in pure emission produced better fits with self-absorption included. Four sources found to have SiC absorption features by Speck et al. (1997a) were also re-fitted and the new results are shown at the bottom of Table 1. Two of these four sources required interstellar silicate absorption as well as circumstellar SiC absorption (see Speck et al. 1997a).
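A minimal sketch of this kind of fit is given below. It is our own simplified stand-in, not the actual routine of Speck et al. (1997a): the model is a blackbody continuum modified by a $`\lambda ^{-1}`$ emissivity plus a SiC emissivity profile scaled by a free amplitude, and the starting parameters are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import least_squares

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam_um, T):
    """Blackbody B_lambda for lam_um in microns (arbitrary overall units)."""
    lam = lam_um * 1e-6
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def model(params, lam, k_sic):
    """Modified-blackbody continuum plus SiC emission; k_sic is a laboratory
    SiC absorption profile sampled on the same wavelength grid."""
    a, T, b = params
    return a * planck(lam, T) / lam + b * k_sic * planck(lam, T)

def fit(lam, flux, k_sic, p0=(1e-12, 1200.0, 1e-12)):
    """Least-squares fit; returns best parameters and a reduced chi^2
    (unit measurement errors assumed for simplicity)."""
    res = least_squares(lambda p: model(p, lam, k_sic) - flux, p0)
    chi2_r = np.sum(res.fun**2) / (len(lam) - len(p0))
    return res.x, chi2_r
```

In practice one would repeat such a fit once per candidate laboratory profile ($`\alpha `$-SiC vs. $`\beta `$-SiC) and compare the resulting reduced $`\chi ^2`$ values, which is the logic behind Table 1.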
The results in Table 1 show an obvious predominance of the $`\beta `$-SiC phase and no evidence for the $`\alpha `$-SiC phase at all. This is in contrast to previous attempts to fit the astronomical SiC feature using similar, and in some cases the same, raw laboratory data, but inappropriately corrected for the KBr dispersion. Previous work found that the best fits were obtained with $`\alpha `$-SiC, and had concluded that there was no unequivocal evidence for the presence of any $`\beta `$-SiC. Without the KBr correction, $`\beta `$-SiC matches the observed features, while $`\alpha `$-SiC does not. Thus there is now no astronomical evidence for the presence of $`\alpha `$-SiC in the circumstellar regions around carbon stars. While $`\alpha `$-SiC might exist in small quantities, all observations to date are consistent with the exclusive presence of $`\beta `$-SiC grains. This resolves the past discrepancy, reconciling astronomical observations and meteoritic samples of silicon carbide grains. Having confirmed that the SiC grains observed around carbon stars and those found in meteorites are of the same polytype, further discrepancies need to be addressed. In particular, the differences in grain size between astronomical models and meteoritic grains merit attention (see Speck et al. 1997a,b for a detailed discussion).
Furthermore, the current work has demonstrated that mineral spectra produced using the DAC thin film method are directly applicable to astrophysical contexts without further manipulation of the data. It is now appropriate to use the DAC thin film method to produce more mineral spectra of use to astronomers.
Support for AKS was provided by the United Kingdom Particle Physics and Astrophysics Research Council and by University College London. Support for AMH was provided by Washington University. We thank Chris Bittner (Superior Graphite Co.) for providing samples, and Tom Bernatowicz for suggesting this collaboration. This paper is dedicated to Dr. Chris Skinner, who died suddenly on October 21st 1997.
# ACCELERATION AND STORAGE OF POLARIZED ELECTRON BEAMS
Invited plenary talk presented at the 13th International Symposium on High Energy Spin Physics (SPIN98), Protvino, Russia, September 1998. Also available as DESY report 98–182.
## 1 Introduction
This article provides an update on my review at SPIN96 of activities surrounding spin polarization in electron storage rings and accelerators. In the written versions of previous talks at the Spin Symposia I have opened with a review of the basic theory of radiative spin polarization, spin precession and resonance phenomena. That background material is readily available in the proceedings of earlier Symposia and elsewhere, so to avoid repetition I will, on this occasion, launch straight into the main themes. Historical overviews of radiative polarization can be found in the literature.
## 2 High energy storage rings: HERA and LEP
HERA is the $`e^\pm p`$ collider at DESY in Hamburg. The $`e^+`$ or $`e^{}`$ beams run at about 27.5 GeV. Up to the end of 1997 the proton ring ran at 820 GeV; in 1998 it has been running at 920 GeV. $`e^\pm `$ beams in storage rings can become vertically polarized by the Sokolov–Ternov effect (ST), and a key aspect of HERA is that since 1994 longitudinal spin polarization has been supplied to the HERMES experiment with the help of a pair of spin rotators.
The value of the polarization in an $`e^\pm `$ storage ring is the same everywhere around the ring, even with the rotators running. However, at high energy, as at HERA, the polarization is very sensitive to the size and form of closed orbit distortions. With very careful adjustment of the vertical closed orbit distortion using harmonic closed orbit spin matching, up to about 70 % polarization has been seen at HERA with the HERMES rotators running. This is to be compared with the theoretical maximum for that configuration of 89.06 %.
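The build-up of ST polarization after injection follows a simple exponential. The helper below uses the 89.06 % asymptotic value quoted above; the 20-minute time constant is an assumption based on the HERA rise time mentioned later in the text.

```python
import numpy as np

def sokolov_ternov(t_min, p_max=0.8906, tau_min=20.0):
    """Exponential build-up of radiative polarization:
    P(t) = P_max * (1 - exp(-t/tau)), with t and tau in minutes."""
    return p_max * (1.0 - np.exp(-np.asarray(t_min, dtype=float) / tau_min))

for t in (10, 20, 40, 60):
    print(t, round(float(sokolov_ternov(t)), 3))
# After about three time constants (~1 h) the beam is within 5% of P_max.
```

In practice depolarizing effects reduce both the asymptotic level and the effective rise time, which is why the measured 70 % falls short of 89.06 %.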
The polarization at HERA can also be affected by the beam–beam (b–b) forces due to collisions with the proton beam at the H1 and ZEUS experiments, where, incidentally, the polarization is vertical. Since the b–b forces are very nonlinear, it is very difficult to make analytical calculations of their effects on $`e^\pm `$ beams. And of course, it is even more difficult to make analytical estimates of the effects on the polarization. However, the naive expectation is that the b–b forces reduce the polarization, and some spin–orbit tracking calculations support that view. Normally it is assumed that it is a good idea to reduce the b–b tune shift (explained below), but as usual there is no substitute for measurement, and in 1996, even during collisions with 50 mA of protons, positron polarizations of about 70 % were observed with the rotators running. One such run lasted ten hours. So, at least in those optics, the b–b forces had little influence.
Since a few proton bunches which would normally be in collision with electrons (positrons) are intentionally left out, not all electron (positron) bunches come into collision with protons. Towards the end of 1996 a second polarimeter, built by HERMES, came into operation. In contrast to the original polarimeter, which measures the level of vertical polarization in the West area by Compton scattering using the so-called single photon technique, the new polarimeter, which employs Compton scattering to measure longitudinal polarization directly close to HERMES and which uses the multi–photon technique, can collect data more quickly. It then became possible to study the positron polarization with sufficient precision on a bunch–to–bunch basis. Figure 1 summarizes a typical measurement for collisions of positrons with about 60 mA of protons
and in this example, contrary to intuition, the colliding bunches have more polarization than the non-colliding bunches. At present we interpret this unexpected result as being due to the b–b tune shift: an oncoming proton bunch appears to the positrons as a nonlinear lens, and to a first approximation the colliding positron bunches have betatron tunes which differ from those of the non–colliding bunches. So, by the routine adjustment of some quadrupole strengths to obtain overall betatron tunes which lead to optically stable running conditions for the colliding bunches and to high polarization (averaged over the bunches), it is possible that the non–colliding bunches are close to a depolarizing spin–orbit resonance (probably a synchrotron sideband resonance of a parent resonance) and likely that the colliding bunches are not on such a resonance. This interpretation is supported by the fact that on other occasions, with slightly different machine tunes, there is either little difference between the colliding and non–colliding polarizations or the colliding bunches indeed have less polarization than the non–colliding bunches. For the measurement of figure 1 the vertical b–b tune shift was about 0.034 for each interaction point.
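For orientation, the linear b–b tune shift produced by a Gaussian oncoming bunch can be estimated from the standard expression $`\xi _y=N_pr_e\beta _y^{}/[2\pi \gamma \sigma _y(\sigma _x+\sigma _y)]`$. The bunch parameters in the sketch below are hypothetical, chosen only to show that numbers of order the quoted 0.034 emerge.

```python
import numpy as np

R_E = 2.818e-15              # classical electron radius, m

def bb_tune_shift(N_p, beta_y, sigma_x, sigma_y, gamma):
    """Vertical beam-beam tune shift felt by the lepton beam from a Gaussian
    proton bunch of population N_p and transverse sizes sigma_x, sigma_y (m);
    beta_y is the vertical beta function at the interaction point (m)."""
    return N_p * R_E * beta_y / (2 * np.pi * gamma * sigma_y * (sigma_x + sigma_y))

gamma = 27.5e3 / 0.511       # 27.5 GeV positrons (energy in MeV / m_e c^2)
# Hypothetical interaction-point parameters, for illustration only:
print(bb_tune_shift(7e10, beta_y=0.7, sigma_x=180e-6, sigma_y=50e-6, gamma=gamma))
```

With these placeholder values the result is about 0.035 per interaction point, comparable to the measured shift.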
Apart from the sensitivity to the orbital tunes, one observes that in the presence of the b–b effect the rise time for the polarization after injection is sometimes larger than that expected from standard radiative polarization theory, and that the polarization level is sometimes relatively insensitive to the settings of the closed orbit harmonics of the harmonic closed orbit correction scheme. Naturally, since the b–b effect can affect the rise time, it makes little sense to calibrate a polarimeter by measuring the rise time after resonant depolarization while the beam is in collision with protons.
The electron (positron) bunches in HERA come in three groups of about sixty bunches with gaps between the groups. This causes dynamic beam loading of the rf cavity system needed to replace the energy lost by radiation. That in turn can cause the synchrotron tune to vary along a group with the result that electrons (positrons) at the beginning of a group can be closer to a depolarizing resonance than those at the end (or vice versa). Thus we sometimes see a variation of the polarization of the colliding bunches across a group.
In 1997, under normal running conditions with typically 80 mA of protons, we had about 50 % polarization, averaged over the bunches. Even towards the end of the year, when we ran with over 100 mA of protons (vertical b–b tune shift $`\sim 0.035`$), a polarization level of 50 % could still be reached. It might have been possible to attain more with careful adjustment of the closed orbit, but we must normally make a compromise between tuning the orbit and providing stable running conditions for the high energy physics experiments.
Electrons and positrons can also become polarized in LEP, the $`e^\pm `$ collider at CERN in Geneva. The effect of b–b forces on polarization has also been studied there, and it has been found that the polarization is very sensitive to optical parameters, just as at HERA.
So far, we cannot claim that we understand in detail all the effects of b–b forces on the polarization and it has not yet been conclusively demonstrated that it is impossible to get high polarizations in the presence of a b–b effect which is large but not large enough to disrupt the beam itself. More investigations under controlled and reproducible conditions are needed.
In the winter shutdown 1999/2000 we plan to change the geometry of the North and South interaction regions of HERA in order to increase the luminosity supplied to the H1 and ZEUS experiments by a factor of about 4.7 beyond the design value of $`1.5\times 10^{31}`$ cm<sup>-2</sup>sec<sup>-1</sup>. This will be achieved by reducing the beam cross–sections at the interaction points (IP’s) and by reaching the design currents. The smaller beam sizes will be achieved by having smaller $`\beta `$ functions at the IP’s and by changing the optics in the arcs in order to decrease the horizontal emittance. These changes have profound consequences for $`e^\pm `$ polarization: smaller $`\beta `$ functions imply that the focussing magnets must be moved closer to the IP’s, and this in turn means that the “antisolenoids” which currently compensate the H1 and ZEUS experimental solenoids will be removed. In fact new, stronger combined quadrupole and dipole magnets will be installed on each side of the H1 and ZEUS IP’s, and their fields will overlap with the solenoid fields. At the same time additional spin rotators will be installed to enable H1 and ZEUS to run with longitudinal polarization. We plan to run these rotators in a slightly mistuned state designed so that they effectively compensate for the effect of the overlapping solenoid and dipole fields on the equilibrium spin axis and ensure that the spin axis is still vertical in the arcs — an essential requirement for high polarization. The absence of the antisolenoids means that the resultant orbital coupling must be corrected away with skew quadrupoles and that the computer programs involved in the strong spin matching and the calculation of polarization must be upgraded to handle the new and complicated magnetic field configurations near the IP’s. Accounts of the full implications for the maintenance of radiative polarization can be found in the references.
Since the ratio of depolarization rate to polarization rate rises strongly with energy, it was much more difficult to attain high polarization in LEP at the old running energy of about 46 GeV per beam (near the $`Z^0`$) than at HERA with 27.5 GeV. Moreover the vertical polarization of LEP (there are no spin rotators) is of little use for high energy physics, and in any case the rise time for the polarization is a few hours compared with the twenty minutes of HERA. In spite of these difficulties the LEP team recorded a polarization of about 57 % in 1993 — a major achievement. Under routine running conditions at about 46 GeV, LEP ran with 5 – 10 % polarization. But this was sufficient for the exploitation of polarization to measure the beam energies, and hence the $`Z^0`$ mass, by means of resonant depolarization, leading to a precision of about 1.5 MeV.
But now LEP runs at above 80 GeV per beam and the polarization is effectively zero: 5 % polarization was recorded at 55.3 GeV, but this was down to 2 % at 60.6 GeV. However, vertical polarization can still be used for energy calibration, albeit indirectly, by calibrating a flux loop and sixteen NMR probes in dipoles at about 41, 45, 50 and 55 GeV and then using the calibrated flux loop and NMR probes above 80 GeV. The estimated systematic error of this method is about 25 MeV per beam, but the long term aim is a precision of about 15 MeV per beam.
## 3 Accelerators
Because the rise time for ST polarization is typically in the range of minutes to many hours, extracted polarized $`e^{}`$ beams can only be obtained from accelerators by injecting a pre-polarized beam from a source. Modern gallium–arsenide sources deliver up to about 80 % electron polarization. But that must then be preserved during acceleration. There are at present no suitable polarized positron sources and therefore no extracted polarized positron beams.
When dealing with polarized beams we can distinguish two basic types of accelerators, namely linear accelerators where, by design, the particle velocity and the accelerating electric field are essentially parallel, and ring accelerators where the beam must make many thousands or even millions of turns in the magnetic guide field on the way to full energy. If the particle velocity and the electric field are almost parallel, then according to the T–BMT precession equation there is very little spin precession and hence little opportunity for depolarization. The (lack of) spin precession in the two mile long accelerating section of the SLC at SLAC in California is the prime example of this. The SLC has regularly delivered an electron beam of about 46 GeV with over 70 % polarization.
A good example of the other type is ELSA, the 3.5 GeV ring at Bonn, Germany, which accelerates vertically polarized electrons. According to the T–BMT equation, in vertical magnetic fields spins precess $`a\gamma `$ times per turn, where $`a=(g-2)/2`$ is the gyromagnetic anomaly and $`\gamma `$ is the Lorentz factor. If the spin precession is in resonance with the orbital motion: $`a\gamma =m_0+m_xQ_x+m_zQ_z+m_sQ_s`$, where the $`m`$’s are integers and the $`Q`$’s are orbital tunes, the spins can be strongly disturbed and the polarization can be lost. Since the precession rate $`a\gamma `$ is proportional to the energy, increasing by unity for every 440 MeV increase in energy, several such resonances must be crossed on the way to 3.5 GeV. A typical example is at 1.32 GeV in ELSA. This corresponds to $`m_0=3`$ but $`m_x=m_z=m_s=0`$. Spin perturbations in this case result from the radial fields “seen” by the spins in the quadrupoles when there is vertical closed orbit distortion. A first approximation for the polarization surviving the crossing of a resonance is given by the Froissart–Stora (FS) formula:
$`{\displaystyle \frac{P_{\mathrm{final}}}{P_{\mathrm{initial}}}}=2e^{-\frac{\pi |ϵ|^2}{2\alpha }}-1`$ (1)
where $`ϵ`$ is the “resonance strength”, a measure of the dominant spin perturbation at resonance, and $`\alpha `$ expresses the rate of resonance crossing. Thus if the resonance is crossed sufficiently quickly ($`|ϵ|^2/\alpha `$ is small) the polarization is hardly affected, but if it is crossed sufficiently slowly ($`|ϵ|^2/\alpha `$ is large) a complete reversal of the vertical polarization can occur without much change in its magnitude. Measurements of the surviving polarization for a range of $`|ϵ|^2/\alpha `$ values are now available from ELSA, both for 1.32 GeV and for 1.76 GeV ($`m_0=4`$). The measurements for $`m_0=3`$ show good agreement with the prediction of the FS formula. In particular, by running at $`|ϵ|^2/\alpha \gtrsim 4.0`$ one can preserve the value of the polarization by means of complete spin flip. However, for $`m_0=4`$ only partial spin flip, with $`|P_{\mathrm{final}}/P_{\mathrm{initial}}|\approx 0.8`$, is seen even out to $`|ϵ|^2/\alpha \approx 12.0`$. This is probably due to the encroachment of stochastic spin decoherence owing to synchrotron radiation emission at the higher energy. If this is indeed the case, these measurements provide a window on what can be expected from attempts to flip $`e^\pm `$ spins in HERA.
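As an illustration of how sharply the FS formula switches between the no-flip and full-flip regimes, the short sketch below (ours, in Python) simply evaluates Eq. (1); the sample values of $`|ϵ|^2/\alpha `$ are chosen by us to bracket the ELSA measurements and are not data from the text.

```python
import math

def froissart_stora(x):
    """Surviving polarization ratio P_final/P_initial for a single,
    isolated resonance crossing (the Froissart-Stora formula, Eq. 1).
    The argument x stands for |epsilon|^2 / alpha."""
    return 2.0 * math.exp(-math.pi * x / 2.0) - 1.0

# Fast crossing (small x): polarization almost preserved.
# Slow crossing (large x): nearly complete spin flip (ratio -> -1).
for x in (0.01, 0.1, 1.0, 4.0, 12.0):
    print(f"|eps|^2/alpha = {x:5.2f}  ->  P_f/P_i = {froissart_stora(x):+.3f}")
```

At $`x=4`$ the ratio is already $`-0.996`$, which is why running at $`|ϵ|^2/\alpha \gtrsim 4`$ preserves the magnitude of the polarization through a full spin flip.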
A good compromise between the space occupied by the SLC and the spin perturbation problems of ELSA is provided by the ring at the Jefferson Laboratory in Virginia, U.S.A. This was designed to provide longitudinally polarized electrons at 4 GeV but is already providing up to 77 % polarization at 5 GeV. This ring combines the best of both worlds: it consists essentially of two parallel superconducting linear accelerators connected at their ends by semicircular arcs of bending magnets. The beam is accelerated to full energy in just five turns. In the arcs the energy is constant, so there is no resonance crossing, and in the accelerating sections, just as in the SLC, spin perturbations are negligible. In any case, with so few turns and such a large acceleration rate (large $`\alpha `$), no depolarization is expected. This machine is already a wonderful tool for research with spin, and in the long term, with steady improvements, it might even be possible to reach 12 GeV.
## 4 Kinetic polarization
At SPIN96 I reported on progress towards obtaining longitudinal electron polarization in the AmPs ring in Amsterdam. This ring runs at up to 900 MeV. The electron beam is injected pre–polarized, and a Siberian Snake, based on a superconducting solenoid, is employed to stabilize the polarization and to ensure that the polarization is longitudinal at the internal target. A fascinating and educational aspect of this machine is that, because the normal radiative polarization process is eliminated owing to the fact that the equilibrium polarization lies in the horizontal plane, a weaker polarization mechanism, “kinetic polarization”, might become observable. As reported at this Symposium by Yu. Shatunov, measurements at AmPs have now provided preliminary experimental evidence for this effect. Confirmation of these observations will vindicate efforts to put the theory of the combined radiative polarization and radiative depolarization processes on a firm semiclassical basis. Moreover, kinetic polarization is expected to contribute plus or minus a few percent to the $`e^\pm `$ polarization at HERA when the spin rotators are running. But since its magnitude depends sensitively on the details of the closed orbit distortion, and since that cannot be measured with sufficient accuracy, kinetic polarization sets a limit to the precision with which the polarimeters can be calibrated by measuring the polarization rise time with the rotators in use.
Perhaps the work at AmPs can be extended at the B$`\tau `$CF ring being planned in Beijing and at the MIT–Bates ring.
## 5 “Spinlight”
In high energy storage rings, $`e^\pm `$ polarization is normally measured using Compton scattering. In linear accelerators Moeller scattering, which is destructive, can be used too. However, there is another possibility, namely to measure the tiny, O($`\hbar `$), component of synchrotron radiation (“spin light”) which depends on the orientation of the spins. This component causes a small difference between the spectra of very high energy photons radiated from vertically polarized and unpolarized bunches. The difference, which has already been detected at low energy, is proportional to the polarization and therefore supplies a way of measuring the latter. Indeed, a feasibility study for a “spin light polarimeter” at HERA is now being undertaken by physicists from the Yerevan Physics Institute in Armenia and from DESY. Furthermore, the correlation between the radiation spectra and the spin orientation lies at the heart of the kinetic polarization effect, so that quite apart from polarimetry it would be of interest to make more detailed measurements, and at high energy.
## 6 Fokker–Planck theory for spin diffusion
Now, to conclude, I would like to mention that a way has recently been found, using classical concepts, to write a diffusion equation describing stochastic spin dynamics in storage rings. The key is to work with the density in phase space of the spin angular momentum, in parallel with the use of the particle density for orbital diffusion. If the Fokker–Planck equation for the orbital motion is known, the corresponding equation for spin can be written down immediately. More details can be found in another article in these proceedings.
## Conclusion
$`e^\pm `$ polarization in storage rings and accelerators is an active, developing and exciting field. Much is now routine but there are still many aspects to investigate and challenges to meet.
## Acknowledgments
I would like to thank E. Gianfelice–Wendt for her valuable comments on HERA performance; colleagues from the HERA Polarimeter Group for supplying me with information; P. Rutt and J.S. Price for updating me on the Jefferson Laboratory machine; and C. Prescott and Yu. Shatunov for providing me with information about their work. Finally I thank M. Berglund and M. Vogt for their careful reading of the manuscript.
A Mathematical Model with Modified Logistic Approach for Singly-Peaked Population Processes
Ryoitiro Huzimura<sup>∗1</sup> and Toyoki Matsuyama<sup>†2</sup>
Department of Economics, Osaka Gakuin University, 2-36-1 Kishibe-minami, Suita-shi, Osaka 564-8511, Japan and Department of Physics, Nara University of Education, Takabatake-cho, Nara 630-8528, Japan
<sup>1</sup>Fax: 0081-06-6382-4363.
<sup>2</sup>Fax: 0081-0742-27-9289. E-mail: matsuyat@nara-edu.ac.jp.
A short running head title: Singly-Peaked Population Processes
Proofs should be sent to : Toyoki Matsuyama, Department of Physics, Nara University of Education, Takabatake-cho, Nara 630-8528, Japan
Abstract
When a small number of individuals of an organism of a single species is confined in a closed space with a limited amount of indispensable resources, breeding may start under suitable conditions, and after peaking, the population should go extinct as the resources are exhausted. Starting with the logistic equation and assuming that the carrying capacity of the environment is a function of the amount of resources, a mathematical model describing this pattern of population change is obtained. An application of the model to typical population records, those of deer herds by Scheffer (1951) and O’Roke and Hamerstrom (1948), yields estimates of the initial amount of indispensable food and its availability or nutritional efficiency, which were previously unspecified.
INTRODUCTION
The logistic or the Lotka-Volterra model has long been a mathematical framework for studying population dynamics tending to stationary or oscillating equilibrium due to intra- or interspecific interactions (e.g., Pielou, 1974; Begon et al., 1996; Borrelli et al., 1996; Gleeson and Wilson, 1986; Reed et al., 1996). There is, however, another pattern of population change, which is singly-peaked. A typical example is the population change of the deer herds observed by Scheffer (1951). It was reported that the deer were freed in closed spaces at some definite time and the populations first increased nearly exponentially to reach a peak and then decreased, finally going extinct. The change was considered to be a fluctuation or over-abundance relative to the sigmoidal pattern and was ascribed to changes of the reproduction rate and/or mortality due to unspecified causes. Such patterns should, however, be generally observable whenever living organisms are confined in a closed space with a fixed initial amount of growth resources which are not replenished. Effects of food availability or resource limitation on population dynamics are a topic of recent concern (e.g., Ogushi and Sawada, 1985; Edgar and Aoki, 1993). To our knowledge, however, rather few mathematical models have been studied to analyse such patterns of population change, and the carrying capacity has traditionally been assumed to be a constant characterizing the environment. In this report, we propose a new mathematical model to interpret such a pattern of population change by introducing a new assumption: that the carrying capacity is a function of the amount of resources. After the formulation and its application to the deer herd populations, we discuss several features of our model in comparison with existing models.
MATHEMATICAL MODEL
We start with the logistic equation for a single species of organism living in some limited space,
$`{\displaystyle \frac{1}{N}}{\displaystyle \frac{dN}{dt}}=r\left(1-{\displaystyle \frac{N}{K}}\right),`$ (1)
where $`N`$ is the population size of the organism, $`r`$ the potential net reproduction rate and $`K`$ the carrying capacity of the population. Now we assume that the carrying capacity depends on the amount of indispensable resources in the space for the organisms and the resources are consumed by the organisms after they begin to live. Under such situation, we may suppose that the carrying capacity is a function of the amount ($`X`$) of the resource, $`K=f(X)`$. Then, we have
$`{\displaystyle \frac{1}{N}}{\displaystyle \frac{dN}{dt}}=r\left(1-{\displaystyle \frac{N}{f(X)}}\right).`$ (2)
We may further assume that the rate of decrease of $`X`$ is proportional to the population size and that the reproduction rate of the resources is negligible compared with the consumption rate, i.e.,
$`{\displaystyle \frac{dX}{dt}}=-aN`$ (3)
where $`a(>0)`$ is the consumption rate of the resources per individual of the organism per unit time. From Eqs.(2) and (3), we have
$`\mathrm{ln}{\displaystyle \frac{N}{N_0}}=r(t-t_0)+{\displaystyle \frac{r}{a}}{\displaystyle \int _{X_0}^X}{\displaystyle \frac{dX}{f(X)}}`$ (4)
where $`N_0`$, $`X_0`$ and $`t_0`$ are the initial values of $`N`$, $`X`$ and $`t`$, respectively. There may be various choices for $`f(X)`$ as an integrable function which represents a possible resource dependence of carrying capacity. We choose here the simplest one, a linear function $`f(X)=bX`$, with the proportional constant $`b`$($`>0`$), which we may call the nutritional efficiency. Then we have
$`N(t)=N_0[{\displaystyle \frac{X(t)}{X_0}}]^{r/ab}\mathrm{exp}(rt),`$ (5)
with $`t_0=0`$. Equation (5) predicts that, in the case $`a=r/b`$, the amount of resources per individual, $`X/N`$, decreases exponentially with time from the initial value $`X_0/N_0`$. Solving the simultaneous Eqs.(3) and (5), we obtain the following solutions: for the case $`a=r/b`$,
$`N(t)`$ $`=`$ $`N_0\mathrm{exp}[rt+{\displaystyle \frac{a}{r}}{\displaystyle \frac{N_0}{X_0}}\{1-\mathrm{exp}(rt)\}],`$ (6)
and for $`a\ne r/b`$,
$`N(t)`$ $`=`$ $`N_0[1+({\displaystyle \frac{a}{r}}-{\displaystyle \frac{1}{b}}){\displaystyle \frac{N_0}{X_0}}\{1-\mathrm{exp}(rt)\}]^{r/(ab-r)}\mathrm{exp}(rt).`$ (7)
The $`N(t)`$ curve given by Eq.(6) or Eq.(7) has a single peak only for a limited range of combinations of the parameters $`a`$, $`b`$, $`r`$, $`X_0`$ and $`N_0`$. The range giving the single peak is determined from the extremum condition on $`N(t)`$. The solution (6) in the case of $`a=r/b`$ has a peak if $`rX_0/aN_0>1`$. We note that $`rX_0/aN_0=bX_0/N_0`$ in this case. The maximum of $`N`$ is given by
$`N_m={\displaystyle \frac{rX_0}{a}}\mathrm{exp}({\displaystyle \frac{aN_0}{rX_0}}-1),`$ (8)
at the time $`t_m=(1/r)\mathrm{ln}(rX_0/aN_0)`$. In the case of $`a\ne r/b`$, the peak exists again when $`bX_0/N_0>1`$. The maximum is
$`N_m=N_0[{\displaystyle \frac{1}{ab}}\{r+(ab-r){\displaystyle \frac{N_0}{bX_0}}\}]^{r/(ab-r)}({\displaystyle \frac{rX_0}{aN_0}}+1-{\displaystyle \frac{r}{ab}})`$ (9)
with $`t_m=(1/r)\mathrm{ln}(rX_0/aN_0+1-r/ab)`$. We show the range where the single peak exists on the $`(\frac{N_0}{X_0},b)`$ plane in Fig. 1. It should be noted that our model is exactly soluble. We also note that it is scale invariant under the change of ($`a`$, $`1/b`$, $`X_0`$) into ($`\lambda a`$, $`\lambda /b`$, $`\lambda X_0`$) with an arbitrary constant $`\lambda `$, and that the units of $`X`$ define the units of $`a`$ and $`b`$.
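As a numerical check on these formulae, the sketch below (ours) evaluates the closed-form solutions (6) and (7) and the peak relations (8) and (9). The branch tolerance and the trial values, which mimic the SPI parameters obtained later in the text, are illustrative assumptions.

```python
import numpy as np

def population(t, N0, X0, r, a, b):
    """Closed-form N(t) of the modified logistic model, Eqs. (6) and (7)."""
    if np.isclose(a * b, r, rtol=1e-3):   # special case a = r/b, Eq. (6)
        return N0 * np.exp(r * t + (a / r) * (N0 / X0) * (1.0 - np.exp(r * t)))
    base = 1.0 + (a / r - 1.0 / b) * (N0 / X0) * (1.0 - np.exp(r * t))
    return N0 * base ** (r / (a * b - r)) * np.exp(r * t)   # Eq. (7)

def peak(N0, X0, r, a, b):
    """Peak time t_m and height N_m (Eqs. 8 and 9); needs b*X0/N0 > 1."""
    if np.isclose(a * b, r, rtol=1e-3):
        tm = np.log(r * X0 / (a * N0)) / r
        Nm = (r * X0 / a) * np.exp(a * N0 / (r * X0) - 1.0)
    else:
        tm = np.log(r * X0 / (a * N0) + 1.0 - r / (a * b)) / r
        Nm = population(tm, N0, X0, r, a, b)
    return tm, Nm

tm, Nm = peak(N0=25, X0=37000.0, r=0.182, a=1.64, b=0.111)  # SPI-like values
print(f"t_m = {tm:.1f} yr, N_m = {Nm:.0f} individuals")
```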
APPLICATION TO THE DEER POPULATIONS
What can be analysed with the present model? To show this, we apply it to the population changes of reindeer on St. Paul Island (SPI) from 1911 to 1950 and on St. George Island (SGI) from 1911 to 1949 (Scheffer, 1951). These population data are well known as an ideal observation in an outdoor laboratory, where the animals lived under small hunting pressure and were free of predator attack for the 40 years; definite numbers of animals were planted in the closed spaces at a definite time, after which the populations showed singly-peaked changes. The accuracy of the numbers was estimated to be about 10 %. We also apply the model to the population change of white-tailed deer at the George Reserve of the University of Michigan (GRM), which showed a similar trend from 1928 to 1947 (O’Roke and Hamerstrom, 1948).
For the application, we need to fix one of the three parameters, $`a`$, $`b`$ and $`X_0`$, and we need to assume the presence of an indispensable resource for the animals. We may suppose that it was lichen, at least for the SPI herd, because lichen was considered to be the key forage for reindeer, especially in winter (Scheffer, 1951). The grass disappeared on SPI 40 years after the reindeer introduction, which was regarded as the cause of the reindeer extinction. We may apply Eq.(3) here without adding any reproduction term for the plant, since recovery of lichen range was reported to take 15 or 20 years there. A caribou is reported to eat 4.5 kg of lichen a day (Banfield, 1996). We infer that the real values of the consumption rate of the three deer herds are near this value, since they belong to the same family (a Japanese deer is reported to eat 11 kg of grass a day). As the exact choice of the value is not essential for obtaining a perspective on the real populations, we use $`a=1.64`$ tons a year per individual for the three herds commonly.
The population change $`(N)`$ of the SPI reindeer from Scheffer’s table is shown in Fig. 2-A with empty circles. To fit the curve of Eq. (7), we use direct search optimization (DSO) for the three parameters $`r`$, $`b`$ and $`X_0`$ and obtain $`r=0.182`$ per year, $`b=0.111`$ individual per ton and $`X_0=37000`$ tons. We notice some deviation of the curve from the data points, which might be caused by changes of hunting pressure or weather; we cannot clarify the reason at present, however. After a similar application of DSO to the population on SGI and that in GRM, the optimized curves are compared in Figs. 2-B and C with the observed data. All parameters thus obtained are summarized in TAB. I together with the areas of the three habitats and the respective initial and maximum population sizes.
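The text does not specify the DSO algorithm in detail; as a stand-in, the sketch below (ours) fits Eq. (7) to a population record with the Nelder–Mead direct search in scipy. The time series is a fabricated placeholder with the rough shape of the SPI record, not Scheffer’s actual counts, and the starting guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder record (years since introduction, herd size); the real
# counts are tabulated in Scheffer (1951).
t_obs = np.array([0.0, 10.0, 20.0, 28.0, 35.0, 40.0])
N_obs = np.array([25.0, 150.0, 900.0, 1900.0, 700.0, 10.0])

A_FIXED, N0 = 1.64, 25.0          # tons/yr/individual (fixed), initial size

def model(t, r, b, X0):
    """Eq. (7); the floor on `base` guards against numerical blow-up
    when the search wanders close to a*b = r."""
    base = 1.0 + (A_FIXED / r - 1.0 / b) * (N0 / X0) * (1.0 - np.exp(r * t))
    return N0 * np.maximum(base, 1e-12) ** (r / (A_FIXED * b - r)) * np.exp(r * t)

def sse(p):
    r, b, X0 = p
    return float(np.sum((model(t_obs, r, b, X0) - N_obs) ** 2))

fit = minimize(sse, x0=[0.2, 0.1, 30000.0], method="Nelder-Mead")
print("r = %.3f /yr, b = %.3f indiv/ton, X0 = %.0f tons" % tuple(fit.x))
```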
Now we explain some characteristics of the population processes, referring to the figures and the table. The most significant result in the table is that the initial stock $`X_0`$ on SPI is more than 8 times larger than that on SGI, although the land areas are almost the same. In the present model, a change of $`X_0`$ is proportional to a change of $`a`$ due to the scale invariance of the parameters mentioned above. However, the difference in the $`X_0`$’s is much larger than could be caused by any probable difference in the $`a`$’s. Rather, it may correspond to the roughly ten times larger $`N_m`$ observed on SPI than on SGI, and suggests that SPI was much more fertile than SGI. Scheffer remarked on some environmental differences between the two islands. Here we propose that the initial value of the carrying capacity is given by $`K_0=bX_0`$; these values are also included in TAB. I. $`K_0`$ is free from the effect of the $`a`$-ambiguity. The significant difference between the $`K_0`$’s of SPI and SGI in the table also supports the above view. We find next that the net reproduction rate $`r`$ of the SPI herd is much smaller than that of SGI, which is in turn smaller than that of GRM. Values of $`r`$ are free from the effects of $`a`$. A biological reason may exist for the differences in $`r`$, although we cannot explain it now. The $`b`$ value of the SPI herd is about twice that of SGI (and GRM). However, this difference might be caused by some difference in the possible $`a`$ values.
Further, we find significant differences between the population processes on SPI and on SGI (and in GRM): the population on SPI increased rather slowly and went extinct steeply after the maximum, while that on SGI increased fast and decayed slowly. For the SPI herd, the ratio of the obtained $`r`$ to the $`b`$ value is very near the $`a`$ value, meaning that the curve fitting for the SPI reindeer is attained with Eq.(6), i.e., the case of $`a=r/b`$, as far as the $`a`$ value is acceptable. In contrast with this, some similarities are found in the population processes of the SGI reindeer and the GRM white-tailed deer: $`r/b`$ is much larger than the assumed $`a`$ for both herds, meaning that the fitting is realized with Eq. (7), i.e., the case of $`a<r/b`$. In spite of the large difference between the areas of the two habitats, the two magnitudes of $`b`$ are nearly equal to each other, and so are the two $`X_0`$’s. A similarity in the ecological characteristics of the two habitats for deer should have existed.
Now we discuss relations among the observed and calculated population parameters. Inspection of the table suggests no definite relation of $`r`$ to $`N_m`$, $`X_0`$ or the respective densities. $`r`$ is presumably inversely related to $`N_0`$. The observed $`N_m`$ may have a linear relation to $`X_0`$, which is clearly seen in Fig. 3. We have given a non-linear relation between $`N_m`$ and $`X_0`$ in Eqs. (8) and (9). First, for the case of $`ab/r=1`$, Eq.(8) is approximated as $`N_m\approx rX_0/ea`$ since $`aN_0/rX_0<<1`$ within the present range of parameters ($`e`$ is the base of the natural logarithm). Second, for the case of $`ab/r<<1`$ (the SGI and GRM cases), we rewrite Eq.(9) as $`N_m=N_0(r/ab)^{r/(ab-r)}[1-(1-ab/r)N_0/bX_0]^{r/(ab-r)}(rX_0/aN_0+1-r/ab)`$. If $`ab/r+bX_0/N_0>>1`$, we have $`N_m\approx (r/a)(r/ab)^{r/(ab-r)}X_0`$. This condition is fulfilled in the present ranges of the parameters. We thus have a linearly-increasing trend of $`N_m`$ with $`X_0`$ in both cases. Concerning the coefficients of the linear increase, we show quantitative estimates of the ratio $`N_m/X_0`$ in TAB. II. $`N_m(\mathrm{DATA})`$ is the maximum $`N`$ actually observed on SPI, SGI, or GRM. $`N_m(\mathrm{LNR})`$ is estimated by using $`N_m\approx rX_0/ea`$ for the SPI case or $`N_m\approx (r/a)(r/ab)^{r/(ab-r)}X_0`$ for the SGI and GRM cases. Eq.(8) or Eq.(9) gives the fully theoretical value of $`N_m`$, which is denoted by $`N_m(\mathrm{DSO})`$. We may consider that these coefficient values are almost constant over the three herds, causing the linear relation between $`N_m`$ and $`X_0`$.
The minimum requirement of year-long grazing area of lichen for a reindeer was estimated to be 33 acres on SPI (Scheffer, 1951). This meant that the carrying capacity per unit area was 0.030 and the carrying capacity of SPI was 800 individuals (Dasmann, 1964). The peak densities ($`N_m`$/area, estimable from TAB. I) exceed 0.03 in two habitats, SPI and GRM, which was considered to be a fluctuation above the carrying capacity. In our model, we postulate that the carrying capacity is not a constant of a land but a changeable parameter which depends on environmental conditions, e.g., the quantity of indispensable forage for the animal. Referring to $`K_0`$, the initial value of the carrying capacity defined above, we find $`N_m\lesssim K_0`$ in TAB. I, a reasonable limiting relation of the maximum population to the maximum carrying capacity.
Finally, we compare the present model with the original Lotka-Volterra system (LVS) for predator–prey interaction. In fact, at a glance, the deer may be regarded as the predator and the lichen as the prey. The system is given by
$`{\displaystyle \frac{dP}{dt}}=-cP+\alpha PS,`$ (10)
for the predator population size ($`P`$) and
$`{\displaystyle \frac{dS}{dt}}=kS-\beta PS,`$ (11)
for the prey population ($`S`$), with the coefficients $`c`$, $`k`$, $`\alpha `$ and $`\beta `$ having the well-known meanings (Borrelli et al., 1996). When $`k=0`$, this system becomes a set of ordinary differential equations of Kermack–McKendrick type (KMS) and can reproduce a singly-peaked process if the initial value of $`S`$ is larger than $`c/\alpha `$. However, the LVS or KMS contains as its essence the encounter term, which is proportional to $`PS`$. This means that encounters between the two interacting species should take place with a constant probability uniformly throughout space and time (applicability of the mass-action law). Hence the system should be applicable to the case of thin populations of prey and predator. Our model has no such encounter term (see Eq. (3)) and therefore no such limitation on the population density. The estimated values of $`X_0`$ or $`X`$ per unit area may be interpreted as a thin or dense population (or stock) of the lichen (or forage) according to its magnitude. For the predator, or deer, the present model assumes only intraspecific competition, as the original logistic does. Hence the estimated values of $`N_m`$ and $`r`$ may be those of a dense population. Of course, effects of overcrowding can be discussed within the LVS by introducing $`S^2`$ and $`P^2`$ terms. However, the addition of new terms with new parameters may make the analysis more vague unless the parameters are determined by other methods. We should also note that unimodal curves can be reproduced by a modified logistic equation with a term of integrated toxins for the population (Small, 1987). However, that model has no explicit relationship with the resources for the population.
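For comparison, the singly-peaked behaviour of the $`k=0`$ (Kermack–McKendrick) special case of Eqs. (10) and (11) is easy to reproduce numerically; the sketch below (ours) uses parameter values invented solely to satisfy the condition $`S_0>c/\alpha `$.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, alpha, beta = 0.5, 0.01, 0.02      # illustrative coefficients only

def kms(t, y):
    """k = 0 special case of Eqs. (10)-(11): dP/dt = -c*P + alpha*P*S,
    dS/dt = -beta*P*S."""
    P, S = y
    return [-c * P + alpha * P * S, -beta * P * S]

sol = solve_ivp(kms, (0.0, 40.0), [2.0, 100.0], dense_output=True)
for ti in np.linspace(0.0, 40.0, 9):
    P, S = sol.sol(ti)
    print(f"t = {ti:4.1f}   P = {P:8.2f}   S = {S:8.2f}")
```

Here $`S_0=100>c/\alpha =50`$, so the predator population $`P`$ rises to a single peak and then collapses as the prey is consumed.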
CONCLUSIONS
We have presented a simple mathematical model with which one can analyse singly-peaked population processes. Although it is simple, the model provides a good account of the deer population dynamics by assuming a resource-dependent carrying capacity and by introducing two ecological parameters, the consumption rate of indispensable resources ($`a`$) and the nutritional efficiency ($`b`$), in addition to such traditional ones as the reproduction rate ($`r`$) and the initial stock of the indispensable resources ($`X_0`$). Here $`a`$ and $`b`$ can in principle be determined by observation. The model is exactly soluble, as the original logistic is, providing mathematical benefits. It may be applicable to the consumption of fertilizer by plants (perfectly zero breeding of prey) and, by adding a breeding term in Eq. (3), to the case of non-zero breeding of prey. Further, we add that in the model a population can go extinct steeply, or even suddenly, from its peak. Breeding and extinction of many organisms should depend, or should have depended, on indispensable resources of finite amount, processes to which the present model may be applied.
ACKNOWLEDGMENTS
We would like to thank Prof. H. Sato (Dept. of Biology, Nara Women’s Univ.) and Prof. N. Kitagawa (Dept. of Biology, Nara Univ. of Education) for introducing us to literature and information on related subjects. We also sincerely thank the referee for very useful comments.
REFERENCES
Banfield, A. W. F. 1996. Caribou in the ”Encyclopedia americana”, Grolier Incorp., Danbury, Connecticut, p.659.
Begon, M., Mortimer, M. and Thompson, D. J. 1996. ”Population Ecology”, Blackwell Science, Oxford.
Borrelli, R. L. and Coleman, C. S. 1996. ”Differential Equations, a modeling perspective”, John Wiley & Sons, NY.
Dasmann, R. F. 1964. ”Wildlife Biology,” John Wiley & Sons, New York, NY.
Edgar, G. J., and Aoki, M. 1993. Resource limitation and fish predation: their importance to mobile epifauna associated with Japanese Sargassum, Oecologia 95, 122-133.
Gleeson, S. K. and Wilson, D. S. 1986. Equilibrium diet: optimal foraging and prey coexistence, Oikos 46, 139-144.
Ogushi, T. and Sawada, H. 1985. Population equilibrium with respect to available food resource and its behavioural basis in an herbivorous lady beetle, henosepilachna niponica, J. Anim. Ecol. 54, 781-796.
O’Roke, E. C. and Hamerstrom, Jr., F. N. 1948. Productivity and yield of the George reserve deer herd, J. Wildl. Mgmt. 12, 78-86.
Pielou, E. C. 1974. ”Population and community ecology,” Gordon and Breach Sci.Pub., New York, NY.
Reed, D. J., Begon, M. and Thompson, D. J. 1996. Differential cannibalism and population dynamics in a host-parasitoid system, Oecologia 105, 189-193.
Scheffer, V. B. 1951. The rise and fall of a reindeer herd, Scientific Monthly 73, 356-362.
Small, R.D. 1987. Population growth in a closed system, in ”SIAM Mathematical Modelling: Classroom Notes in Applied Mathematics (ed. by M.S. Klamkin)”, SIAM, 317-320, Philadelphia.
TAB. I. Population data of the three deer herds: The habitat area and the initial and maximum population sizes ($`N_0`$ and $`N_m`$) are from the references (Scheffer, 1951; O’Roke and Hamerstrom, 1948). The nutritional efficiency ($`b`$), the initial stock of indispensable food ($`X_0`$) and the reproduction rate ($`r`$), defined in the text, are obtained in this work by direct search optimization of the theoretical curves (Eqs.(6) and (7) in the text) to the population records in the references, and are shown to three significant figures. The consumption rate of indispensable food ($`a`$) is fixed at 1.64 tons per year per individual for the three herds. $`K_0(=bX_0)`$ is the initial carrying capacity defined in the text.
| Herd | Habitat area | $`N_0`$ | $`N_m`$ | $`r`$ | b | $`X_0`$ | $`K_0`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | (acre) | | | ($`y^{-1}`$) | (indiv./ton) | (ton) | |
| St.Paul I. reindeer | 26500 | 25 | 2046 | 0.182 | 0.111 | 37000 | 4090 |
| St.George I. reindeer | 22400 | 15 | 222 | 0.469 | 0.0512 | 4460 | 229 |
| George Res. w.t.deer | 1200 | 6 | 211 | 0.740 | 0.0561 | 4150 | 233 |
TAB. II. Estimates of $`N_m/X_0`$ for the three deer herds: $`N_m(\mathrm{DATA})`$, $`N_m(\mathrm{LNR})`$, and $`N_m(\mathrm{DSO})`$ are defined in the text. Each is divided by the $`X_0`$ value of the corresponding herd in TAB. I.
| Herd | $`N_m(\mathrm{DATA})/X_0`$ | $`N_m(\mathrm{LNR})/X_0`$ | $`N_m(\mathrm{DSO})/X_0`$ |
| --- | --- | --- | --- |
| St.Paul I. reindeer | 0.0553 | 0.0407 | 0.0409 |
| St.George I. reindeer | 0.0498 | 0.0352 | 0.0356 |
| George Res. w.t.deer | 0.0508 | 0.0417 | 0.0419 |
Figure Captions
FIG. 1. The range where the condition giving a peak in $`N(t)`$ curves is fulfilled: $`b>N_0/X_0`$. Notations are defined in text.
FIG. 2. Population curves obtained with Eqs.(6) and (7) in the text to fit the population records of the three deer herds. The nutritional efficiency ($`b`$), the initial stock of indispensable food ($`X_0`$) and the potential reproduction rate ($`r`$), defined in the text, are optimized. Plate A, reindeer on St. Paul Island (Scheffer, 1951); Plate B, reindeer on St. George Island (ibid.); Plate C, white-tailed deer in the George Reserve, Michigan (O’Roke and Hamerstrom, 1948).
FIG. 3. The maximum size of deer populations observed ($`N_m`$) vs the initial amount of indispensable food ($`X_0`$) estimated in text.
# $`UBVI`$ CCD photometry of two old open clusters NGC 1798 and NGC 2192
## 1 Introduction
Old open clusters provide us with important information for understanding the early evolution of the Galactic disk. There are about 70 known old open clusters with ages $`>1`$ Gyrs \[Friel 1995\]. These clusters are in general faint, so that there were few studies of them until recently. With the advent of the CCD camera in astronomy, the number of studies of these clusters has been increasing. However, there is still a significant number of old open clusters for which basic parameters are not well known. For example, the metallicity is not yet known for about 30 clusters among them.
Recently Phelps et al. (1994) and Janes & Phelps (1994) presented an extensive CCD photometric survey of potential old open clusters, the results of which were used in the study on the development of the Galactic disk by Janes & Phelps (1994). In the sample of the clusters studied by Phelps et al. there are several clusters for which only the non-calibrated photometry is available.
We have chosen two clusters among them, NGC 1798 and NGC 2192, and study their characteristics using $`UBVI`$ CCD photometry. These clusters are located in the anti-galactic centre direction. To date only one photometric study of these clusters has been published, that of Phelps et al. (1994), who presented non-calibrated $`BV`$ CCD photometry of them. From the instrumental color-magnitude diagrams of these clusters Phelps et al. estimated their ages using morphological age indicators, obtaining values of 1.5 Gyrs for NGC 1798 and 1.1 Gyrs for NGC 2192. However, no other properties of these clusters are yet known.
In this paper we present a study of NGC 1798 and NGC 2192 based on $`UBVI`$ CCD photometry. We have estimated the basic parameters of these clusters: size, reddening, metallicity, distance, and age. Also we have derived the luminosity function of the main sequence stars in these clusters. Section 2 describes the observations and data reduction. Sections 3 and 4 present the analysis for NGC 1798 and NGC 2192, respectively. Section 5 discusses the results. Finally Section 6 summarizes the primary results.
## 2 OBSERVATIONS AND DATA REDUCTION
### 2.1 Observations
$`UBVI`$ CCD images of NGC 1798 and NGC 2192 were obtained using the Photometrics 512 CCD camera at the Sobaeksan Observatory 61cm telescope in Korea for several observing runs between 1996 November and 1997 October. We have used also $`BV`$ CCD images of the central region of NGC 1798 obtained by Chul Hee Kim using the Tek 1024 CCD camera at the Vainu Bappu Observatory 2.3m telescope in India on March 4, 1998. The observing log is given in Table 1.
The original CCD images were flattened after bias subtraction and several exposures for each filter were combined into a single image for further reduction. The sizes of the field in a CCD image are $`4^{}.3\times 4^{}.3`$ for the PM 512 CCD image, and $`10^{}.6\times 10^{}.6`$ for the Tek 1024 CCD image. The gain and readout noise are, respectively, 9 electrons/ADU and 10.4 electrons for the PM 512 CCD, and 9 electrons/ADU and 10.4 electrons for the Tek 1024 CCD.
Figs. 1 and 2 illustrate grey scale maps of the $`V`$ CCD images of NGC 1798 and NGC 2192 made by mosaicing the images of the observed regions. It is seen from these figures that NGC 1798 is a relatively rich open cluster, while NGC 2192 is a relatively poor open cluster.
### 2.2 Data Reduction
Instrumental magnitudes of the stars in the CCD images were obtained using the digital stellar photometry reduction program IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc. under contract with the National Science Foundation./DAOPHOT (Stetson 1987, Davis 1994). The resulting instrumental magnitudes were transformed onto the standard system using the standard stars from Landolt (1992) and the M67 stars in Montgomery et al. (1993) observed on the same photometric nights. The transformation equations are
$$V=v+a_V(b-v)+k_VX+Z_V,$$

$$(B-V)=a_{BV}(b-v)+k_{BV}X+Z_{BV},$$

$$(U-B)=a_{UB}(u-b)+k_{UB}X+Z_{UB},\mathrm{and}$$

$$(V-I)=a_{VI}(v-i)+k_{VI}X+Z_{VI},$$
where the lower case symbols represent instrumental magnitudes derived from the CCD images and the upper case symbols represent the standard system values. X is the airmass at the midpoint of the observations. The results of the transformation are summarized in Table 2. The data obtained on non-photometric nights were calibrated using the photometric data for the overlapped region.
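Schematically, calibrating one star amounts to evaluating the four transformation equations with the fitted coefficients at the airmass of the observation. The sketch below (ours) does this with entirely fictitious coefficients standing in for the Table 2 values.

```python
def standardize(v, b, u, i, X, coef):
    """Transform instrumental u, b, v, i magnitudes at airmass X to the
    standard system V, (B-V), (U-B), (V-I).  `coef` maps each index to
    its (color term a, extinction coefficient k, zero point Z)."""
    a, k, Z = coef["V"];  V  = v + a * (b - v) + k * X + Z
    a, k, Z = coef["BV"]; BV = a * (b - v) + k * X + Z
    a, k, Z = coef["UB"]; UB = a * (u - b) + k * X + Z
    a, k, Z = coef["VI"]; VI = a * (v - i) + k * X + Z
    return V, BV, UB, VI

# Fictitious coefficients, used only to make the example run.
coef = {"V": (0.02, -0.20, 20.5), "BV": (1.10, -0.12, 0.80),
        "UB": (0.95, -0.30, -1.20), "VI": (1.00, -0.05, 0.90)}
print(standardize(v=15.0, b=15.9, u=16.4, i=14.3, X=1.2, coef=coef))
```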
The total number of the measured stars is 1,416 for NGC 1798 and 409 for NGC 2192. Tables 3 and 4 list the photometry of the bright stars in the C-regions of NGC 1798 and NGC 2192, respectively. The $`X`$ and $`Y`$ coordinates listed in Table 3 and 4 are given in units of CCD pixel ($`=0^{\prime \prime }.50`$). The $`X`$ and $`Y`$ values are increasing toward north and west, respectively.
We have divided the entire region of the fields into several regions, as shown in Figs. 1 and 2, for the analysis of the data. The C-region represents the central region of the cluster, and the F-regions (F, Fb, Fir, and Fi regions) represent the control field regions, and the N-region represents the intermediate region between the central region and the field region. The radius of the C-region is 300 pixel for NGC 1798 and NGC 2192. The ratio of the areas of the C-region, N-region, Fb-region, and (Fi + Fir)-regions for NGC 1798 is 1:1.50:1.00:1.07, and the ratio of the areas of the C-region, N-region, and F-region for NGC 2192 is 1:1.26:0.98.
## 3 ANALYSIS FOR NGC 1798
### 3.1 The Size of the Cluster
We have investigated the structure of NGC 1798 using starcounts. The centre of the cluster is estimated to be at the position of ($`X=710`$ pixel, $`Y=1110`$ pixel), using the centroid method. Fig. 3 illustrates the projected surface number density profile derived from counting stars with $`V<19.5`$ mag in the entire CCD field. The magnitude cutoff for starcounts was set so that the counts should be free of any photometric incompleteness problem. Fig. 3 shows that most of the stars in NGC 1798 are concentrated within the radius of 250 pixel ($`=125^{\prime \prime }`$), and that the outskirts of the cluster extend out to about 500 pixel ($`=250^{\prime \prime }`$) from the center. The number density changes little with radius beyond 500 pixel, showing that the outer region of the observed field can be used as a control field. Therefore we have estimated the approximate size of NGC 1798 for which the cluster blends in with the field to be about $`500^{\prime \prime }`$ in diameter, which corresponds to a linear size of 10.2 pc for the distance of NGC 1798 as determined below.
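A minimal sketch (ours) of the starcount procedure behind Fig. 3: count stars in concentric annuli about the adopted centre and divide by the annulus areas. The random positions below are only a stand-in for the measured list of stars with $`V<19.5`$ mag.

```python
import numpy as np

def radial_density(x, y, xc, yc, r_max=600.0, n_bins=12):
    """Projected surface number density (stars per pixel^2) in
    equal-width annuli around the cluster centre (xc, yc)."""
    r = np.hypot(x - xc, y - yc)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    areas = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    return 0.5 * (edges[:-1] + edges[1:]), counts / areas

rng = np.random.default_rng(0)                   # fake star positions
x, y = rng.uniform(0, 1400, 2000), rng.uniform(0, 2200, 2000)
r_mid, density = radial_density(x, y, xc=710.0, yc=1110.0)
```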
### 3.2 Color-Magnitude Diagrams
Figs. 4 and 5 show the $`V`$–$`(B-V)`$ and $`V`$–$`(V-I)`$ color-magnitude diagrams (CMDs) of the measured stars in the observed regions in NGC 1798. These figures show that the C-region consists mostly of members of NGC 1798 with some contamination by field stars, while the F-regions consist mostly of field stars. The N-region is intermediate between the C-region and the F-region.
The distinguishable features seen in the color-magnitude diagrams of the C-region are: (a) there is a well-defined main sequence, the top of which is located at $`V\sim 16`$ mag; (b) there is a distinct gap at $`V\sim 16.2`$ mag in the main sequence, which is often seen in other old open clusters (e.g. M67); (c) there is a poorly defined red giant branch, with some excess of stars around $`(B-V)=1.3`$ and $`V=15.6`$ mag, marked by the small box in the figures. This may be a random excess of stars; however, the positions of these stars in the CMDs are consistent with the positions of the known red giant clump in other old open clusters, so most of them are probably red giant clump stars; and (d) there are a small number of stars along the locus of the red giant branch.
### 3.3 Reddening and Metallicity
NGC 1798 is located close to the galactic plane in the anti-galactic centre direction ($`b=4^{}.85`$ and $`l=160^{}.76`$) so that it is expected that the reddening toward this cluster is significant. We have estimated the reddening for NGC 1798 using two methods as follows.
First we have used the mean color of the red giant clump. Janes & Phelps (1994) estimated the mean color and magnitude of the red giant clump in old open clusters to be $`(B-V)_{RGC}=0.87\pm 0.02`$ and $`M_{V,RGC}=0.59\pm 0.09`$, when the difference between the red giant clump and the main sequence turn-off of a cluster, $`\delta V`$, is smaller than one. The mean color of the red giant clump in the C-region is estimated to be $`(B-V)_{RGC}=1.34\pm 0.01`$ ($`(V-I)_{RGC}=1.47\pm 0.01`$, and $`(U-B)_{RGC}=1.62\pm 0.04`$), and the corresponding mean magnitude is $`V_{RGC}=15.57\pm 0.05`$. $`\delta V`$ is estimated to be $`0.8\pm 0.2`$, the same value derived by Phelps et al. (1994). From these data we have derived a value of the reddening, $`E(B-V)=0.47\pm 0.02`$.
Secondly we have used the color-color diagram to estimate the reddening and the metallicity simultaneously. We have fitted the mean colors of the stars in the C-region with the color-color relation used in the Padova isochrones \[Bertelli et al. 1994\]. This process requires iteration, because we need to know the age of the cluster as well as the reddening and metallicity. We have iterated this process until all three parameters are stabilized.
Fig. 6 illustrates the results of the fitting in the $`(U-B)`$–$`(B-V)`$ color-color diagram. It shows that the stars in NGC 1798 are reasonably fitted by the color-color relation of the isochrones for \[Fe/H\] $`=-0.47\pm 0.15`$
with a reddening value of $`E(B-V)=0.55\pm 0.05`$. The error in the metallicity, 0.15, was estimated by comparing isochrones with different metallicities. For reference, the mean locus of the giants for solar abundance given by Schmidt-Kaler (1982) is also plotted in Fig. 6. Finally we derive the mean of the two estimates of the reddening, $`E(B-V)=0.51\pm 0.04`$.
### 3.4 Distance
We have estimated the distance to NGC 1798 using two methods as follows. First we have used the mean magnitude of the red giant clump, deriving a value of the apparent distance modulus $`(m-M)_V=14.98\pm 0.10`$ from the mean magnitudes of the red giant clump stars described above.
Secondly we have used zero-age main sequence (ZAMS) fitting, following the method of VandenBerg & Poll (1989), who presented a semi-empirical ZAMS as a function of the metallicity \[Fe/H\] and the helium abundance Y:
$$V=M_V(B-V)+\delta M_V(Y)+\delta M_V([\mathrm{Fe}/\mathrm{H}])$$
where $`\delta M_V(Y)=2.6(Y-0.27)`$ and $`\delta M_V([\mathrm{Fe}/\mathrm{H}])=[\mathrm{Fe}/\mathrm{H}](1.444+0.362[\mathrm{Fe}/\mathrm{H}])`$.
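In code, the ZAMS fitting reduces to shifting a solar-composition ZAMS relation by the two composition corrections and a trial apparent distance modulus; the helper below is our sketch, and the linear relation passed in at the end is a crude placeholder, not the VandenBerg & Poll ZAMS itself.

```python
def zams_apparent_v(m_v_of_color, bv, y_he=0.28, feh=-0.47, dist_mod=14.7):
    """Apparent V of the semi-empirical ZAMS at colour (B-V), shifted by
    the composition corrections quoted above and a trial (m-M)_V."""
    d_m_y = 2.6 * (y_he - 0.27)
    d_m_feh = feh * (1.444 + 0.362 * feh)
    return m_v_of_color(bv) + d_m_y + d_m_feh + dist_mod

# Placeholder solar-composition relation, for illustration only.
print(zams_apparent_v(lambda bv: 4.0 + 5.0 * bv, bv=0.6))
```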
Before the ZAMS fitting, we statistically subtracted the contribution of the field stars from the CMDs of the C-region, using the CMDs of the Fb-region for the $`BV`$ photometry and the CMDs of the Fi+Fir region for the $`VI`$ photometry. The size of the bin used for the subtraction is $`\mathrm{\Delta }V=0.25`$ and $`\mathrm{\Delta }(B-V)=0.1`$. The resulting CMDs are displayed in Fig. 7. We used the metallicity of \[Fe/H\] = –0.47 as derived above and adopted $`Y=0.28`$, the mean value for old open clusters \[Gratton 1982\]. Using this method we have obtained a value of the apparent distance modulus $`(m-M)_V=14.5\pm 0.2`$. Finally we calculate the mean of the two estimates, $`(m-M)_V=14.7\pm 0.2`$. Adopting the extinction law $`A_V=3.2E(B-V)`$, we derive a value of the intrinsic distance modulus $`(m-M)_0=13.1\pm 0.2`$, corresponding to a distance of $`d=4.2\pm 0.3`$ kpc.
### 3.5 Age
We have estimated the age of NGC 1798 using two methods as follows. First we have used the morphological age index (MAI) as described in Phelps et al. (1994). Phelps et al. (1994) and Janes & Phelps (1994) presented the MAI–$`\delta V`$ relation,
$$MAI[\mathrm{Gyrs}]=0.73\times 10^{(0.256\delta V+0.0662\delta V^2)}.$$
From the value of $`\delta V`$ derived above, $`0.8\pm 0.2`$ mag, we obtain a value for the age, MAI $`=1.3\pm 0.2`$ Gyrs.
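The MAI calibration is a one-line function of $`\delta V`$; the sketch below (ours) reproduces the quoted ages from the measured values.

```python
def mai_gyr(delta_v):
    """Morphological age index in Gyr from the magnitude difference
    delta V between the red giant clump and the main-sequence turnoff."""
    return 0.73 * 10.0 ** (0.256 * delta_v + 0.0662 * delta_v ** 2)

for dv in (0.6, 0.8):        # NGC 2192 and NGC 1798 values
    print(f"delta V = {dv:.1f} mag  ->  MAI = {mai_gyr(dv):.1f} Gyr")
```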
Secondly we have estimated the age of the cluster using the theoretical isochrones given by the Padova group \[Bertelli et al. 1994\]. Fitting the isochrones for \[Fe/H\] = –0.47 to the CMDs of NGC 1798, as shown in Fig. 8, we estimate the age to be $`1.4\pm 0.2`$ Gyrs. Both results agree very well.
### 3.6 Luminosity Function
We have derived the $`V`$ luminosity functions of the main sequence stars in NGC 1798, which are displayed in Fig. 9. The Fb-region was used for subtraction of the field star contribution from the C-region, and the magnitude bin size used is 0.5 mag. This control field may not be far enough from the cluster to derive the field star contribution; if so, we might have oversubtracted the field contribution, obtaining flatter luminosity functions than the true ones. However, the fraction of cluster members in this field must be very low, if not zero, because the surface number density of this region is almost constant with radius, as shown in Fig. 3. The luminosity function of the C-region in Fig. 9(a) increases rapidly up to $`V\sim 16.5`$ mag, and stays almost flat for $`V>16.5`$ mag. The luminosity functions of the N-region and the (Fi+Fir)-region are steeper than that of the C-region. A remarkable drop is seen at $`V=16.2`$ mag ($`M_V=1.5`$ mag) in the luminosity function of the C-region based on the smaller bin size of 0.2 mag in Fig. 9(b). This corresponds to the main sequence gap described above.
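Schematically, the field-subtracted luminosity function is a pair of magnitude histograms, with the control-field histogram scaled by the ratio of the region areas before subtraction; the sketch below (ours) assumes the input magnitude lists are already restricted to main sequence stars.

```python
import numpy as np

def lum_function(v_cluster, v_field, area_ratio, v_min=13.0, v_max=19.5,
                 bin_width=0.5):
    """Field-subtracted V luminosity function: the C-region histogram
    minus the control-field histogram scaled by
    area_ratio = (C-region area / field area)."""
    edges = np.arange(v_min, v_max + bin_width, bin_width)
    n_c, _ = np.histogram(v_cluster, bins=edges)
    n_f, _ = np.histogram(v_field, bins=edges)
    return 0.5 * (edges[:-1] + edges[1:]), n_c - area_ratio * n_f

# v_cluster, v_field: V magnitudes in the C-region and the Fb control
# field; area_ratio = 1.00 for NGC 1798 (Section 2.2 area ratios).
```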
## 4 ANALYSIS FOR NGC 2192
### 4.1 The Size of the Cluster
We have investigated the structure of NGC 2192 using starcounts. We could not use the centroid method to estimate the centre of this cluster, because the cluster is too sparse, so we estimated the centre by eye to be at the position ($`X=465`$ pixel, $`Y=930`$ pixel). Fig. 10 illustrates the projected surface number density profile derived from counting stars with $`V<18`$ mag in the entire CCD field. The magnitude cutoff for the starcounts was set so that the counts should be free of any photometric incompleteness problem. Fig. 10 shows that most of the stars in NGC 2192 are concentrated within a radius of 200 pixel ($`=100^{\prime \prime }`$), and that the outskirts of the cluster extend out to about 440 pixel ($`=220^{\prime \prime }`$) from the centre. Therefore the approximate size of NGC 2192 is estimated to be about $`440^{\prime \prime }`$ in diameter, which corresponds to a linear size of 7.5 pc for the distance of NGC 2192 as determined below.
### 4.2 Color-Magnitude Diagrams
Figs. 11 and 12 show the $`V`$–$`(B-V)`$ and $`V`$–$`(V-I)`$ color-magnitude diagrams of the measured stars in the observed regions in NGC 2192. The distinguishable features seen in the color-magnitude diagrams of the C-region are: (a) there is a well-defined main sequence, the top of which is located at $`V\sim 14`$ mag; (b) there is a group of red giant clump stars at $`(B-V)=1.1`$ and $`V=14.2`$ mag, marked by the small box in the figures; and (c) there are a small number of stars along the locus of the red giant branch.
### 4.3 Reddening and Metallicity
NGC 2192 is located about 11 degrees above the galactic plane in the anti-galactic centre direction ($`b=10^{}.64`$ and $`l=173^{}.41`$), higher than NGC 1798, so the reddening toward this cluster is expected to be significant but smaller than that of NGC 1798. We have estimated the reddening for NGC 2192 using two methods, as applied for NGC 1798.
First we have used the mean color of the red giant clump. The mean color of the red giant clump in the C-region is estimated to be $`(B-V)_{RGC}=1.08\pm 0.01`$ ($`(V-I)_{RGC}=1.07\pm 0.01`$, and $`(U-B)_{RGC}=0.61\pm 0.02`$), and the corresponding mean magnitude is $`V_{RGC}=14.20\pm 0.05`$. $`\delta V`$ is estimated to be $`0.6\pm 0.2`$, similar to the value derived by Phelps et al. (1994). From these data we have derived a value of the reddening, $`E(B-V)=0.19\pm 0.03`$.
Secondly we have used the color-color diagram to estimate the reddening and the metallicity simultaneously. We have fitted the mean colors of the stars in the C-region with the color-color relation used in the Padova isochrones \[Bertelli et al. 1994\]. Fig. 13 illustrates the results of the fitting in the $`(U-B)`$–$`(B-V)`$ color-color diagram. It shows that the stars in NGC 2192 are reasonably fitted by the color-color relation of the isochrones for \[Fe/H\] $`=-0.31\pm 0.15`$ dex with a reddening value of $`E(B-V)=0.21\pm 0.01`$. The error in the metallicity, 0.15, was estimated by comparing isochrones with different metallicities. For reference, the mean locus of the giants for solar abundance given by Schmidt-Kaler is also plotted in Fig. 13. Finally we derive the mean of the two estimates of the reddening, $`E(B-V)=0.20\pm 0.03`$.
### 4.4 Distance
We have estimated the distance to NGC 2192 using two methods, as for NGC 1798. First we have used the mean magnitude of the red giant clump, deriving a value of the apparent distance modulus $`(m-M)_V=13.61\pm 0.10`$ from the mean magnitudes of the red giant clump stars described previously.
Secondly we have used ZAMS fitting. Before the ZAMS fitting, we statistically subtracted the contribution of the field stars from the CMDs of the C-region using the CMDs of the F-region. The size of the bin used for the subtraction is $`\mathrm{\Delta }V=0.25`$ and $`\mathrm{\Delta }(B-V)=0.1`$.
The resulting CMDs are displayed in Fig. 14. We used the metallicity of \[Fe/H\] = –0.31 as derived before and adopted $`Y=0.28`$. Using this method we have obtained a value of the apparent distance modulus $`(m-M)_V=13.1\pm 0.2`$. Finally we calculate the mean of the two estimates, $`(m-M)_V=13.3\pm 0.2`$. Adopting the extinction law $`A_V=3.2E(B-V)`$, we derive a value of the intrinsic distance modulus $`(m-M)_0=12.7\pm 0.2`$, corresponding to a distance of $`d=3.5\pm 0.3`$ kpc.
### 4.5 Age
We have estimated the age of NGC 2192 using two methods as follows. First we have used the morphological age index. From the value of $`\delta V`$ derived above, $`0.6\pm 0.2`$ mag, we obtain a value for the age, MAI $`=1.1\pm 0.2`$ Gyrs. Secondly we have estimated the age of the cluster using the theoretical isochrones given by the Padova group \[Bertelli et al. 1994\]. Fitting the isochrones for \[Fe/H\] = –0.31 to the CMDs of NGC 2192, as shown in Fig. 15, we estimate the age to be $`1.1\pm 0.1`$ Gyrs. Both results agree very well.
### 4.6 Luminosity Function
We have derived the $`V`$ luminosity functions of the main sequence stars in NGC 2192, which are displayed in Fig. 16. The F-region was used for subtraction of the field star contribution from the C-region. The luminosity function of the C-region in Fig. 16(a) increases rapidly up to $`V\sim 14`$ mag, and stays almost flat for $`V>15`$ mag. The luminosity function of the N-region is steeper than that of the C-region. Fig. 16(b) displays a comparison of the luminosity functions of NGC 1798, NGC 2192, and NGC 7789, another old open cluster of similar age \[Roger et al. 1994\]. Fig. 16(b) shows that the luminosity functions of these clusters are similar in that they are almost flat at the faint end. The flattening of the faint part of the luminosity functions of old open clusters has long been known and is believed to be due to the evaporation of low mass stars \[Friel 1995\].
## 5 DISCUSSION
We have determined the metallicity and distance of NGC 1798 and NGC 2192 in this study; here we compare them with those of other old open clusters. Fig. 17 illustrates the radial metallicity gradient of the old open clusters compiled by Friel (1995), supplemented by the data in Wee & Lee (1996) and Lee (1997). Fig. 17 shows that the mean metallicity decreases as the galactocentric distance increases. The positions of NGC 1798 and NGC 2192 obtained in this study are consistent with the mean trend of the other old open clusters. The slope we have determined for the entire sample including these two clusters is $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]/R_{GC}=-0.086\pm 0.011`$ dex/kpc, very similar to that given in Friel (1995), $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]/R_{GC}=-0.091\pm 0.014`$ dex/kpc.
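The quoted gradient is a straight-line fit of \[Fe/H\] against $`R_{GC}`$; a minimal sketch (ours) is below, with a small invented sample standing in for the Friel (1995) compilation plus the two clusters of this work.

```python
import numpy as np

# Invented (R_GC [kpc], [Fe/H]) pairs for illustration only; the real
# sample is the Friel (1995) compilation supplemented as in the text.
r_gc = np.array([8.5, 9.0, 10.2, 11.5, 12.8, 13.5, 14.0])
feh = np.array([0.00, -0.10, -0.15, -0.30, -0.40, -0.47, -0.31])

slope, intercept = np.polyfit(r_gc, feh, deg=1)
print(f"d[Fe/H]/dR_GC = {slope:.3f} dex/kpc")
```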
There are only four old open clusters located beyond $`R_{GC}=13`$ kpc in Fig. 17. These four clusters follow the mean trend of metallicity decreasing outward. However, their number is not large enough to decide whether the metallicity keeps decreasing outward or stops decreasing somewhere beyond $`R_{GC}=13`$ kpc and stays constant. Further studies of more old open clusters beyond $`R_{GC}=13`$ kpc are needed to investigate this point.
## 6 SUMMARY AND CONCLUSIONS
We have presented $`UBVI`$ photometry of old open clusters NGC 1798 and NGC 2192. From the photometric data we have determined the size, reddening, metallicity, distance, and age of these clusters. The luminosity functions of the main sequence stars in these clusters are similar to those of the other old open clusters. The basic properties of these clusters we have determined in this study are summarized in Table 5.
## Acknowledgments
Prof. Chul Hee Kim is thanked for providing the $`BV`$ CCD images of NGC 1798. This research is supported in part by the Korea Science and Engineering Foundation, Grant No. 95-0702-01-01-3.
FIGURE CAPTIONS
Figure 1. A grey scale map of the $`V`$ CCD image of NGC 1798 made by mosaicing the images of observed regions. North is right and east is up. The size of the field is $`10^{}.6\times 13^{}.8`$. The circle and lines represent the boundary of each region described in the text.
Figure 2. A grey scale map of the $`V`$ CCD image of NGC 2192 made by mosaicing the images of observed regions. North is right and east is up. The size of the field is $`7^{}.2\times 10^{}.8`$. The circle and line represent the boundary of each region described in the text.
Figure 3. Projected surface number density of the stars with $`V<19.5`$ mag in the NGC 1798 area.
Figure 4. $`V`$–($`B`$–$`V`$) color-magnitude diagrams of NGC 1798: the entire region, the C-region, the N-region, and the Fb-region. The square represents the position of the red giant clump.
Figure 5. $`V`$–($`V`$–$`I`$) color-magnitude diagrams of NGC 1798: the entire region, the C-region, the N-region, and the Fir-region plus Fi-region. The square represents the position of the red giant clump.
Figure 6. ($`U`$$`B`$)–($`B`$$`V`$) diagram of stars with small photometric errors in the C-region of NGC 1798. The dotted line and solid line represent, respectively, the mean line for the giants with solar abundance (III) given by Schmidt-Kaler (1982) and the mean line for the Padova isochrones with \[Fe/H\] = –0.47, which were shifted according to the reddening of $`E(BV)=0.55`$.
Figure 7. ZAMS fitting for the C-region of NGC 1798. The solid line represents the empirical ZAMS for \[Fe/H\] = –0.47, and $`Y=0.28`$, shifted according to the reddening and distance of NGC 1798. The dashed lines represent the upper and lower envelope corresponding to the fitting errors.
Figure 8. Isochrone fitting for the C-region of NGC 1798 in the color-magnitude diagrams. The solid line represents the Padova isochrone for age = 1.4 Gyr, \[Fe/H\] = –0.47, shifted according to the reddening and distance of NGC 1798. The dashed lines represent isochrones for ages of 1.2 and 1.6 Gyr.
Figure 9. (a) Luminosity functions of the main sequence stars in the C-region (filled circles), N-region (open squares) and R-region plus Fir-region (open triangles) of NGC 1798. (b) The luminosity function for the C-region based on a smaller bin size of 0.2 mag.
Figure 10. Projected surface number density of the stars with $`V<18`$ mag in the NGC 2192 area.
Figure 11. $`V`$–($`B-V`$) color-magnitude diagrams of NGC 2192: the entire region, the C-region, the N-region, and the F-region. The square represents the position of the red giant clump.

Figure 12. $`V`$–($`V-I`$) color-magnitude diagrams of NGC 2192: the entire region, the C-region, the N-region, and the F-region. The square represents the position of the red giant clump.

Figure 13. ($`U-B`$)–($`B-V`$) diagram of stars with small photometric errors in the C-region of NGC 2192. The dotted line and solid line represent, respectively, the mean line for the giants with solar abundance (III) given by Schmidt-Kaler (1982) and the mean line for the Padova isochrones with \[Fe/H\] = –0.31, which were shifted according to the reddening of $`E(B-V)=0.19`$.
Figure 14. ZAMS fitting for the C-region of NGC 2192. The solid line represents the empirical ZAMS for \[Fe/H\] = –0.31, and $`Y=0.28`$, shifted according to the reddening and distance of NGC 2192. The dashed lines represent the upper and lower envelope corresponding to the fitting errors.
Figure 15. Isochrone fitting for the C-region of NGC 2192 in the color-magnitude diagrams. The solid line represents the Padova isochrone for age = 1.1 Gyr, \[Fe/H\] = –0.31, shifted according to the reddening and distance of NGC 2192. The dashed lines represent isochrones for ages of 1.0 and 1.2 Gyr.
Figure 16. (a) Luminosity functions of the main sequence stars in the C-region (filled circles) and N-region (open squares) of NGC 2192. (b) Comparison of the luminosity functions of NGC 2192 (filled circles), NGC 1798 (open triangles) and NGC 7789 (open circles).
Figure 17. Metallicity versus the galactocentric distance of NGC 1798 (the filled circle) and NGC 2192 (the filled square) compared with other old open clusters (open circles).
# The mass density in black holes inferred from the X-ray background
## 1 Introduction
Many current models for the X-ray Background (XRB) assume that it is due to the summed emission from many Active Galactic Nuclei (AGN) with strong intrinsic absorption (Setti & Woltjer 1989; Madau et al 1994; Comastri et al 1995; Celotti et al 1995). In this case most X-ray emission from AGN in the Universe must be highly absorbed (Fabian et al 1998). Here we make a representative correction of the spectrum of the XRB in order to deduce the current radiation energy density of the XRB had absorption not been present. This is converted to a bolometric radiation density using the X-ray to bolometric ratio of a sample of unobscured AGN. Then from the simple cosmology-free argument of Soltan (1982) we convert this energy density into a mean mass density of black holes at the current epoch. Soltan (1982) took the quasar counts in the optical B-band for his estimate so was only using unobscured AGN. Our result exceeds his earlier estimate by a factor of about 7.5 and the upper limit of the more recent estimates of Chokshi & Turner (1992), which also use optical quasar counts, by a factor of 3. It agrees with the very recent estimate of Salucci et al (1998), based on X-ray source counts.
We show that the mean mass density of black holes is within a factor of two of direct estimates based on the detection of black holes in nearby galaxies. There is then no strong requirement for any significant mass build up in black holes from some radiatively inefficient mode of accretion. Most accretion power in the Universe is absorbed, and likely reradiated in the infrared wavebands. This has implications for the IR background and source counts.
## 2 The mass density in Black Holes
We model the spectrum of the XRB as
$$I_\nu =9E^{-0.4}\mathrm{exp}(-E/50\mathrm{keV})\mathrm{keV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\mathrm{keV}^{-1}.$$
This agrees with both the Marshall et al (1980) result (HEAO A2 spectra taken in the 3–60 keV band) above $`7\mathrm{keV}`$ and the Gendreau et al (1995) result (ASCA 0.5–7 keV spectra) below that energy. The $`EF_E`$ spectrum then peaks at 30 keV with an intensity of $`38.1\mathrm{keV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}.`$
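As a quick numerical check of this parametrization (our illustration, not part of the original analyses), the position and height of the $`EF_E`$ peak follow directly from the model:

```python
import numpy as np

E = np.linspace(1.0, 200.0, 200000)        # keV
I_E = 9.0 * E**-0.4 * np.exp(-E / 50.0)    # keV cm^-2 s^-1 sr^-1 keV^-1
EFE = E * I_E                              # ~ E^0.6 exp(-E/50)
i = np.argmax(EFE)
# analytically the peak sits at E = 0.6 * 50 = 30 keV
print(f"peak: E = {E[i]:.1f} keV, E*F_E = {EFE[i]:.1f} keV cm^-2 s^-1 sr^-1")
```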
The major assumption made in this paper is that the XRB spectrum above 30 keV is unaffected by absorption. This is because the likely maximum photoelectric absorption for sources making a significant contribution to the XRB intensity occurs at column densities of about $`2\times 10^{24}\mathrm{cm}^{-2}`$, which has little effect above 30 keV in the rest frame, and of course even less in redshifted sources. (We assume approximate Solar metallicity for absorption purposes.) At higher column densities the Thomson depth exceeds unity and Compton scattering significantly reduces the transmitted power at all energies (see e.g. Madau et al 1994). Such Compton-thick sources are therefore unlikely to play a major role in the XRB, although they may be significant in number (perhaps one third of all sources; Maiolino et al 1998).
We now assume that the intrinsic spectrum of the sources responsible for the XRB has a photon index of 2 and matches the above spectrum at its $`EF_E`$ peak. A more complex spectrum including reflection can have a slightly higher intensity (Fig. 1), particularly if redshifted. The caveats noted so far mean that the true intrinsic spectrum is above the simple power-law estimate, by some small factor.
The unabsorbed 0.66–3.33 keV (2–10 keV in the rest frame at the assumed mean redshift of 2, but with our adopted photon index this is independent of redshift) intensity of the sources is $`I_0=9.8\times 10^{-8}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}`$. The unabsorbed radiation energy density is then $`ℰ_0=4\pi I_0/c=4.1\times 10^{-17}\mathrm{erg}\mathrm{cm}^{-3}.`$ To convert this to an unabsorbed bolometric radiation density, we note that the mean ratio of the 2–10 keV to bolometric luminosities for radio-quiet quasars in the compilation of spectral energy distributions of Elvis et al (1994) is 3.3 per cent (we convert from the tabulated monochromatic value of $`L_\mathrm{X}`$ to the 2–10 keV value by multiplying by a factor of 1.6, using a photon index of 2; 6 out of every 7 objects have a luminosity ratio between 1 and 7.6 per cent). This is similar to the ratio of 2 per cent found using Ferland’s (1996) ’Table AGN’ model, which reproduces emission line ratios well. Denoting this fraction as $`(f/0.03)`$, we find a total energy density of $`ℰ_0^{\prime }=4\pi I_0/fc=1.2\times 10^{-15}(f/0.03)^{-1}\mathrm{erg}\mathrm{cm}^{-3}.`$ If we now assume, following Soltan (1982), that this radiation has been produced by accretion at an efficiency of 10 per cent (noting that (comoving) radiation energy density decays as $`1+z`$ whereas mass density does not) we find a mass density
$$\rho =10(1+z)ℰ_0^{\prime }/c^2=4.1\times 10^{-35}\mathrm{g}\mathrm{cm}^{-3}.$$
This corresponds to $`6\times 10^{14}\mathrm{M}_{\odot }\mathrm{Gpc}^{-3}=6\times 10^5\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}.`$ A redshift of 2 is used here (see Miyaji et al 1998 for a discussion of evolution models for X-ray observed AGN).
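The chain of conversions from observed intensity to black hole mass density can be written out explicitly; the sketch below simply repeats the arithmetic of this section, taking $`f=0.033`$ from the Elvis et al ratio, an efficiency of 0.1, and $`z=2`$ as the stated assumptions:

```python
import math

c = 3.0e10                  # cm s^-1
I0 = 9.8e-8                 # erg cm^-2 s^-1 sr^-1 (unabsorbed 2-10 keV)
f = 0.033                   # 2-10 keV to bolometric luminosity ratio
z, eta = 2.0, 0.1           # mean redshift, accretion efficiency

E0 = 4.0 * math.pi * I0 / c              # X-ray energy density, erg cm^-3
E_bol = E0 / f                           # bolometric energy density
rho = (1.0 + z) * E_bol / (eta * c**2)   # accreted mass density, g cm^-3

Mpc, Msun = 3.086e24, 1.989e33
print(f"E0  = {E0:.2e} erg cm^-3")                     # ~4.1e-17
print(f"rho = {rho:.2e} g cm^-3")                      # ~4e-35
print(f"rho = {rho * Mpc**3 / Msun:.1e} Msun Mpc^-3")  # ~6e5
```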
This mass density is much greater than Soltan’s original estimate and about three times higher than that estimated by Chokshi & Turner (1992) using optical counts of unobscured quasars. It is about half that estimated by Haehnelt et al (1998) from the product of the mean black hole mass to bulge mass, estimated from spectroscopic data on the cores of nearby galaxies by Magorrian et al (1998), and the mass density in galactic bulges (Fukugita et al 1998). The Haehnelt et al estimate is likely to be fairly uncertain since the method for black hole mass measurement of Magorrian et al may overestimate the masses. Our result agrees with the very recent, and more complicated, estimate of Salucci et al (1998) who use X-ray source counts together with other factors. If a further 50 per cent of sources are Compton-thick and, due to Compton down-scattering, contribute little to the XRB spectral intensity, then our estimate rises proportionately.
An important conclusion of our result is that the absorption-corrected XRB estimate and the more direct galaxy-core estimate of the local density of massive black holes are within a factor of two. There is thus no strong need to invoke very low efficiency accretion or any exceptional accretion flows. What is required is that most accretion power is absorbed by surrounding gas. This may require the geometries discussed by Fabian et al (1998).
The fraction which is unabsorbed, in the hard X-ray spectrum, is about 6 per cent of the total spectrum. Here we assume that all the optical and UV emission is absorbed and that the infrared emission from AGN is already due to reprocessing. The fraction is obtained by noting that only about one half the 0.1–60 keV intrinsic X-ray spectrum is transmitted to make the XRB spectrum (this is the ratio of the observed 0.1–60 keV intensity to that in the inferred unabsorbed spectrum). The ratio of the 0.1–60 keV to 2–10 keV fluxes for our assumed intrinsic spectrum is $`r\approx 4`$, so if the 2–10 keV flux is 3 per cent of the total flux of an AGN, the transmitted fraction of the XRB, assuming the spectrum of Fig. 1, is $`fr/2\approx 6`$ per cent.
The spectrum of the observed XRB does turn up below 1 keV in part due to unabsorbed quasars and AGN, which from the integration of ROSAT source counts (Hasinger et al 1993) contribute about half the total XRB intensity at 1 keV. This corresponds to about 10 per cent of the assumed intrinsic power-law spectrum so the total fraction of the accretion power emitted in the Universe which escapes unabsorbed is therefore about 16 per cent. If we further assume 50 per cent more AGN are Compton-thick with column densities above $`2\times 10^{24}\mathrm{cm}^2`$ (cf. Maiolino et al 1998) then this final number drops to about 12 per cent.
## 3 The Infrared Background Light
The absorbed intensity can now be used to estimate the IR background. Assuming the absorber to be dusty gas, we have found that at least 85 per cent of the emitted intrinsic flux for our adopted XRB spectrum is absorbed, or an intensity $`I_{\mathrm{abs}}=0.85I_0/f\approx 3\times 10^{-6}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}.`$ This is 3 nW m<sup>-2</sup> sr<sup>-1</sup>, for comparison with the limits and detections of the infrared backgrounds derived and reviewed by Hauser et al (1998) and Fixsen et al (1998). If much of the absorbed flux emerges longward of 100$`\mu `$m, which requires a significant contribution occurring at redshifts much greater than 2, it will dominate the FIR background and source counts (see also Almaini, Lawrence & Boyle 1998). Wherever the reprocessed emission emerges, it corresponds to several tens of per cent of the likely background in the 10–100$`\mu `$m infrared bands (Hauser et al 1998; Fixsen et al 1998), rising to most of the background longward of $`200\mu `$m, which is generally assumed to be dominated by the reprocessed radiation from stars. This means that the integrated intrinsic radiation from AGN is several tens of per cent of that due to stars.
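The conversion from CGS surface brightness to the nW m<sup>-2</sup> sr<sup>-1</sup> units conventionally used for the infrared background is a one-liner; a sketch with the numbers above:

```python
I0, f = 9.8e-8, 0.03
I_abs = 0.85 * I0 / f        # erg cm^-2 s^-1 sr^-1, ~2.8e-6
# 1 erg s^-1 = 1e-7 W; 1 cm^-2 = 1e4 m^-2; report in nW (factor 1e9)
print(f"{I_abs * 1e-7 * 1e4 * 1e9:.1f} nW m^-2 sr^-1")   # ~3
```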
A check on this result is obtained by adapting an argument due to G. Hasinger (priv. comm.). Magorrian et al (1998) find that the mass of a central black hole $`M_{\mathrm{BH}}`$ is about $`0.006`$ times the mass of the host bulge $`M_\mathrm{b}`$. If the present bulge mass is a fraction $`a_\mathrm{b}`$ of the mass of the stars associated with that galaxy which have already burnt, then the total radiation emitted over time is $`0.0006a_\mathrm{b}^{-1}M_\mathrm{b}c^2`$, where it is assumed that one tenth of a star undergoes nuclear burning with an efficiency of 0.6 per cent. The total accretion energy radiated from the black hole, at an efficiency of 0.1, is $`0.1M_{\mathrm{BH}}c^2=0.0006M_\mathrm{b}c^2`$. If the history of star formation and accretion are similar (see e.g. Boyle & Terlevich 1998), then the contribution of AGN to the background radiation density is $`a_\mathrm{b}`$ times that of stars. For a Salpeter IMF extending to $`0.4\mathrm{M}_{\odot }`$, $`a_\mathrm{b}\approx 0.2`$. This fraction can be increased if the radiative efficiency of the accretion is higher (e.g. Kerr black holes) or the IMF is steeper than assumed, and decreased if the IMF is flatter and a large non-bulge stellar component is included.
## 4 Discussion and Conclusion
We have estimated the local mean mass density in black holes from the background light of AGN, correcting for the large absorption effects necessary to make typical AGN explain the XRB spectrum. Our result of $`6-9\times 10^5\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}`$ (the higher value includes a correction for Compton-thick sources) is within a factor of 1.5 to 2 of that estimated from direct optical studies of the cores of nearby galaxies. The range of uncertainty on our mass density estimate, based only on variations in the adopted spectral energy distributions for AGN, is also about a further factor of two.
The key assumptions are that a) the background above 30 keV is unaffected by absorption and b) the typical spectra of high-redshift, and also of absorbed, AGN are similar to that of the unobscured AGN used by Elvis et al (1994) in their compilation of spectral energy distributions. The first assumption is robust to reasonable changes in the relevant energy (e.g. going to 40 or 50 keV in Fig. 1 would have little effect on the final result). There is insufficient information to assess the second assumption, but we do note that it is plausible that the intrinsic X-ray emission region is unaffected by the absorption, which is usually assumed to occur at much greater radii from the central black hole. Finally, the total X-ray intensity is reduced by about 30 per cent if the photon index drops to 1.5, but the contribution of reflection in all cases increases our estimate (by about 10 per cent for a photon index of 2).
We are concerned that the simple use of the Elvis et al (1994) spectral energy distribution for AGN leads to the UV emission being counted twice. This is because the infrared emission may be dominated by absorbed UV (and soft X-ray) emission (assuming that we view the UV source direct, but the absorbing matter which reradiates UV as infrared covers other lines of sight). If we omit the infrared emission from that distribution the bolometric luminosities drop by about one third. This increases $`f`$ to $`\approx 0.05`$ and decreases our mass density estimate by one third. It also means that the total absorbed fraction of the power drops to about 80 per cent.
In conclusion, the spectrum of the XRB provides a robust estimate of the total radiation emitted by AGN in the Universe. The agreement of the local black hole density resulting from the conversion of the implied energy density to mass with direct estimates from other wavebands indicates that there is no obvious need for some radiatively inefficient form of accretion. Most of the accretion power in the Universe has been absorbed and presumably reradiated in the infrared.
## 5 Acknowledgements
We thank Omar Almaini, Guenther Hasinger and Andy Lawrence for discussions and Elihu Boldt for comments. ACF thanks the Royal Society for support.
# The Goldberger-Treiman Discrepancy in SU(3)
## I Introduction
The Goldberger-Treiman relation (GTR), obtained from matrix elements of the divergence of axial currents between spin 1/2 baryons, is an important indicator of explicit chiral symmetry breaking by the quark masses. It interrelates baryon masses, axial vector couplings, the baryon-pseudoscalar meson (Goldstone boson $`\equiv `$ GB) couplings and the GB decay constants. Explicit chiral symmetry breaking leads to a departure from the GTR (defined below) which is called the Goldberger-Treiman discrepancy (GTD).
The GTD has been repeatedly discussed over time and for several reasons there were difficulties in arriving at a clear understanding. On one hand, there was no available effective theory with a systematic expansion to address the problem, and on the other hand the experimental values of the baryon-GB couplings were too poorly known. In recent years, progress has been made on both fronts. There is now a baryon chiral effective theory that permits a consistent expansion of the discrepancy. There has also been progress in the determinations of the baryon-GB couplings that are the main source of uncertainty in the phenomenological extraction of the discrepancies. In fact, the current knowledge of the couplings $`g_{\pi NN}`$, $`g_{KN\mathrm{\Lambda }}`$ and, to a lesser extent, $`g_{KN\mathrm{\Sigma }}`$ is good enough to justify a new look at the GTD in SU(3). In this work we study the GTD in the light of heavy baryon chiral perturbation theory (HBChPT).
Let us first briefly review the derivation of the GTR and the definition of the GTD. We consider the matrix elements of the octet axial current $`A_\mu ^a=\frac{1}{2}\overline{q}(x)\gamma _\mu \gamma _5\lambda ^aq(x)`$ (the Gell-Mann matrices are normalized to $`\mathrm{Tr}(\lambda ^a\lambda ^b)=2\delta ^{ab}`$) between states of the baryon octet:
$`\langle b,p_b|A_\mu ^c|a,p_a\rangle =\overline{U}_b(p_b)\left[{\displaystyle \frac{1}{2}}\gamma _\mu g_A^{abc}(q^2)-q_\mu g_2^{abc}(q^2)\right]\gamma _5U_a(p_a),`$ (1)
where $`a,b,c=1,\dots ,8`$ and $`q=p_b-p_a`$ is the momentum transfer between baryons $`a`$ and $`b`$. From Eq. (1), the matrix elements of the divergence of the axial currents become
$`\langle b,p_b|\partial ^\mu A_\mu ^c|a,p_a\rangle =i\overline{U}_b(p_b)\left[{\displaystyle \frac{1}{2}}(M_a+M_b)g_A^{abc}(q^2)+q^2g_2^{abc}(q^2)\right]\gamma _5U_a(p_a),`$ (2)
where $`M_a`$ is a baryon mass. Crucial to the derivation of the GTR is the GB pole contribution represented in Fig. 1. To explicitly expose the pole term, the matrix element in Eq. (2) can be rewritten as
$`\langle b,p_b|\partial ^\mu A_\mu ^c|a,p_a\rangle =i\overline{U}_b(p_b){\displaystyle \frac{N^{abc}(q^2)}{q^2-m_c^2+iϵ}}\gamma _5U_a(p_a),`$ (3)
where $`N^{abc}(q^2)=g_{cab}(q^2)P^c(q^2)+(q^2-m_c^2)\delta ^{abc}(q^2)`$, $`m_c`$ is a GB mass, and $`g_{cab}(q^2)`$ the baryon-GB form factor, defined such that in the physical basis of the Gell-Mann matrices $`g_{3,6+i7,6-i7}(M_\pi ^2)`$ is equal to $`g_{\pi ^0nn}`$, etc. $`P^c(q^2)`$ represent the couplings of the pseudoscalar currents to the GB’s, given in the chiral limit by $`P^c=m_c^2F_c`$ ($`F_c`$ is the decay constant, where $`F_\pi =92.42`$ MeV); the $`q^2`$ dependence of $`P^c(q^2)`$ starts at $`𝒪(p^4)`$ and is henceforth disregarded. Finally, $`\delta ^{abc}(q^2)`$ denotes contributions not involving the GB pole, and it starts as a quantity of $`𝒪(p^2)`$. This separation of pole and non-pole contributions is not unique (the off-shell functions separately are not observables); for instance, up to higher order terms in $`q^2`$, we can choose to remove the $`q^2`$ dependence in $`g_{cab}(q^2)`$ around the point $`q^2=m_c^2`$ by a simple redefinition of $`\delta ^{abc}`$.
In the chiral limit $`\partial ^\mu A_\mu ^c=0`$, and at $`q^2=0`$ Eq. (2) gives:
$$Mg_A^{abc}(0)=-\underset{q^2\to 0}{lim}q^2g_2^{abc}(q^2)=F_cg_{cab}(0),$$
(4)
which is the general form of the GTR. Here $`M`$ is the common octet baryon mass in the chiral limit. In the real world, chiral symmetry is explicitly broken by the quark masses and the GB’s become massive. In this case, Eqs. (2) and (3) lead to
$$m_c^2g_2^{abc}(m_c^2)=\underset{q^2\to m_c^2}{lim}\frac{g_{cab}(q^2)P^c(q^2)}{q^2-m_c^2+iϵ}.$$
(5)
In order to define the GTD it is also convenient to take the limit $`q^2\to 0`$ which gives
$$(M_a+M_b)g_A^{abc}(0)=\frac{1}{m_c^2}g_{cab}(0)P^c(0)-\delta ^{abc}(0).$$
(6)
The discrepancy $`\mathrm{\Delta }^{abc}`$ is then defined by:
$$(M_a+M_b)g_A^{abc}(0)=\frac{(1-\mathrm{\Delta }^{abc})}{m_c^2}g_{cab}(m_c^2)P^c(m_c^2).$$
(7)
Notice that while the GTR, Eq. (4), is defined at $`q^2=0`$, the GTD in Eq. (7) is given at $`q^2=m_c^2`$ because only at that point is the coupling $`g_{cab}`$ unambiguously determined. At leading order in the quark masses, the GTD can then be expressed as follows:
$$\mathrm{\Delta }^{abc}=m_c^2\frac{\partial }{\partial q^2}\mathrm{log}N^{abc}(q^2)|_{q^2=m_c^2}.$$
(8)
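Equation (8) is just the first-order expansion of $`\mathrm{\Delta }^{abc}=1-N^{abc}(0)/N^{abc}(m_c^2)`$ implied by Eqs. (6) and (7); for a linear $`N(q^2)`$ the two forms even coincide exactly, as the following small symbolic check (our illustration, no physics input) confirms:

```python
import sympy as sp

q2, m2, a, b = sp.symbols('q2 m2 a b', positive=True)
N = a + b * q2                                  # simplest analytic N(q^2)
Delta_exact = 1 - N.subs(q2, 0) / N.subs(q2, m2)
Delta_LO = (m2 * sp.diff(sp.log(N), q2)).subs(q2, m2)
print(sp.simplify(Delta_exact - Delta_LO))      # -> 0
```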
## II Tree Level Contributions
Throughout we are going to use standard definitions, namely:
$`u`$ $`\equiv `$ $`\mathrm{exp}\left(i{\displaystyle \frac{\pi ^a\lambda ^a}{2F_0}}\right),`$ (9)

$`\chi `$ $`\equiv `$ $`2B_0(s+ip),`$ (10)

$`\chi _\pm `$ $`\equiv `$ $`u^{\dagger }\chi u^{\dagger }\pm u\chi ^{\dagger }u,`$ (11)

$`\omega ^\mu `$ $`\equiv `$ $`{\displaystyle \frac{i}{2}}(u^{\dagger }\partial ^\mu u-u\partial ^\mu u^{\dagger }),`$ (12)

$`S_v^\mu `$ $`\equiv `$ $`{\displaystyle \frac{i}{2}}\gamma _5\sigma ^{\mu \nu }v_\nu .`$ (13)
The HBChPT Lagrangian is ordered in powers of momenta and GB masses, which are small compared to both the chiral scale and the baryon masses,
$$\mathcal{L}=\mathcal{L}^{(1)}+\mathcal{L}^{(2)}+\mathcal{L}^{(3)}+\dots .$$
(14)
Although the Lagrangian is written as a single expansion, it will be useful to keep track of the chiral and $`1/M`$ suppression factors separately. As will be demonstrated explicitly below, leading order (LO) contributions to the GTD appear within $`\mathcal{L}^{(3)}`$. Subleading contributions are suppressed by at least two suppression factors, so we will refer to any contribution at the order of $`\mathcal{L}^{(5)}`$ as a next-to-leading order (NLO) contribution.
The tree level contributions to the GTD stem from contact terms in the effective Lagrangian that can contribute to $`\delta ^{abc}`$, and also from terms that can give a $`q^2`$ dependence to $`g_{cab}`$. First we notice that in HBChPT such terms must contain the spin operator $`S_v^\mu `$ that results from the non-relativistic reduction of the baryon pseudoscalar density. There are two types of terms which contribute to the GTD. The first type must contain the pseudoscalar source $`\chi _{-}`$. The second type must contain monomials such as $`[𝒟^\mu ,[𝒟_\mu ,\omega _\nu ]]`$ and $`[𝒟^\nu ,[𝒟_\mu ,\omega ^\mu ]]`$ between the baryon field operators (here $`𝒟^\mu `$ is the chiral covariant derivative). However, upon using the classical equations of motion satisfied by the GB fields at $`𝒪(p^2)`$, it turns out that terms of the second type can be recast into terms among which there are terms of the first type. In this way, one moves the explicit $`q^2`$ dependence from $`g_{cab}`$ to contact terms, some of which contribute to $`\delta ^{abc}`$. Such reduction of terms has been implemented for $`\mathcal{L}^{(1)}+\mathcal{L}^{(2)}+\mathcal{L}^{(3)}`$, for instance in Ref. , and in the relativistic effective Lagrangian as well. Some terms in $`\mathcal{L}^{(3)}`$ whose coefficients are determined by reparametrization invariance may seem at first glance to give a $`q^2`$ dependence to $`g_{cab}`$, but a careful calculation shows that this is not so.
Since $`\chi _{-}`$ is $`𝒪(p^2)`$, and since a factor of the spin operator $`S_v^\mu `$ is needed, the LO tree contributions to the GTD must come from $`\mathcal{L}^{(3)}`$. One can further argue that there are no contributions from the even-order Lagrangians, $`\mathcal{L}^{(2n)}`$. The reason is that an even number of derivatives would require factors in the monomial of the form $`v\cdot \partial `$ which, when acting on the baryon field, are in effect replaced by $`\partial ^2/2M`$; the other possibility would be factors of $`v\cdot S_v`$ that vanish.
In the case of SU(2), the Lagrangian $`\mathcal{L}^{(3)}`$ has been given by Ecker and Mojžiš. There are only two terms in $`\mathcal{L}^{(3)}`$ that are of interest to us, namely the terms $`𝒪_{19}`$ and $`𝒪_{20}`$ given in Refs. . In the scheme used by Ecker and Mojžiš these are finite counterterms. We note that although $`𝒪_{17}`$ and $`𝒪_{18}`$ do contribute to $`g_{cab}`$ and to $`g_A^{abc}`$, they are such that no contribution to the GTD results, as noticed in Ref. . In SU(3) there are instead three $`\mathcal{L}^{(3)}`$ terms that are of interest to us, namely,
$`\mathcal{L}_{GTD}^{(3)}`$ $`=`$ $`iF_{19}\mathrm{Tr}(\overline{B}S_v^\mu [\partial _\mu \chi _{-},B])`$ (15)

$`+`$ $`iD_{19}\mathrm{Tr}(\overline{B}S_v^\mu \{\partial _\mu \chi _{-},B\})`$ (16)

$`+`$ $`ib_{20}\mathrm{Tr}(\overline{B}S_v^\mu B)\mathrm{Tr}(\partial _\mu \chi _{-}).`$ (17)
The NLO contributions come from $`\mathcal{L}_{GTD}^{(5)}`$ and will not be displayed here. There are, for instance, terms quadratic in the quark masses such as $`\mathrm{Tr}(\overline{B}S_v^\mu \chi _+[\partial _\mu \chi _{-},B])`$ and others.
The contribution to $`\delta ^{abc}`$ from $`\mathcal{L}_{GTD}^{(3)}`$ is given by
$$\frac{\delta _{CT}^{abc}}{4MB_0}=2s_0[iF_{19}f^{abc}+D_{19}d^{abc}]+d^{cde}s^d[iF_{19}f^{bea}+D_{19}d^{abe}]+s^c[\frac{2}{3}D_{19}+b_{20}]\delta ^{ab},$$
(18)
where
$`s_0`$ $`=`$ $`{\displaystyle \frac{1}{3}}(m_u+m_d+m_s),`$ (19)
$`s^a`$ $`=`$ $`\delta _{a3}(m_u-m_d)-{\displaystyle \frac{1}{\sqrt{3}}}\delta _{a8}(2m_s-m_u-m_d).`$ (20)
In deriving Eq. (18) from Eq. (17) we used the Ward identity:
$$\partial ^\mu A_\mu ^a=2s_0\frac{\delta \mathcal{L}}{\delta p_a}+\frac{1}{3}s^a\frac{\delta \mathcal{L}}{\delta p^0}+d^{abc}s^b\frac{\delta \mathcal{L}}{\delta p_c},$$
(21)
as well as the following correspondence of operators between the heavy baryon and relativistic theories:
$$\overline{B}_vS_v^\mu \partial _\mu pB_v\longleftrightarrow iM\overline{B}\gamma _5pB,$$
(22)
where $`B`$ and $`B_v`$ are the relativistic and heavy baryon fields respectively.
The leading terms in the GTD are therefore of order $`p^2`$. There are several relations among the discrepancies that are exact at LO. One of them is the Dashen-Weinstein relation:
$$m_K^2\left(\frac{g_A}{g_V}\right)^{NN\pi }\mathrm{\Delta }^{NN\pi }=\frac{1}{2}m_\pi ^2\left(3\left(\frac{g_A}{g_V}\right)^{N\mathrm{\Lambda }K}\mathrm{\Delta }^{N\mathrm{\Lambda }K}-\left(\frac{g_A}{g_V}\right)^{N\mathrm{\Sigma }K}\mathrm{\Delta }^{N\mathrm{\Sigma }K}\right).$$
(23)
This particular relation provides useful insight as will be shown in the phenomenological discussion.
Since the bulk of the contribution to the GTD will result from the counterterms of Eq. (17), it is important to consider what physics determines their magnitude. It seems likely that a meson dominance model may provide the correct picture. In such a model the size of the counterterms would be determined by the lightest excited pseudoscalar mesons that can attach the pseudoscalar current $`\overline{q}\gamma _5\lambda ^aq`$ to the baryons. The relevant such states are in the $`\mathrm{\Pi }^{\prime }`$ octet consisting of $`\pi (1300)`$, $`\eta (1440)`$ and $`K(1460)`$. The next set of pseudoscalar states is in the range of 1800 to 2000 MeV, and thus, one may expect that they only give corrections at the order of 20 to 30%. The meson dominance model can be implemented using an effective Lagrangian in analogy with Ref. . The coupling of the $`\mathrm{\Pi }^{\prime }`$ octet to the pseudoscalar current is obtained from the effective Lagrangian:
$$\mathcal{L}_{\mathrm{\Pi }^{\prime }}=\frac{1}{2}\mathrm{Tr}(\partial ^\mu \mathrm{\Pi }^{\prime }\partial _\mu \mathrm{\Pi }^{\prime })-\frac{1}{2}M_{\mathrm{\Pi }^{\prime }}^2\mathrm{\Pi }^{\prime 2}+id_{\mathrm{\Pi }^{\prime }}\mathrm{Tr}(\mathrm{\Pi }^{\prime }\chi _{-})+\dots ,$$
(24)
where we display only those terms relevant to our problem. Here the $`\mathrm{\Pi }^{\prime }`$ octet responds to chiral rotations in the same way as the baryon octet. The matrix element of the divergence of the axial current is given by
$`\langle 0|\partial ^\mu A_\mu ^a|\mathrm{\Pi }^{\prime b}\rangle `$ $`=`$ $`{\displaystyle \frac{B_0}{2}}d_{\mathrm{\Pi }^{\prime }}\mathrm{Tr}(\lambda ^b\{\lambda ^a,\mathcal{M}_q\})`$ (25)

$`=`$ $`\delta ^{ab}d_{\mathrm{\Pi }^{\prime }}m_a^2,`$ (26)
where $`\mathcal{M}_q`$ denotes the quark mass matrix, and the $`\mathrm{\Pi }^{\prime }`$-baryon coupling can be expressed through the effective Lagrangian:
$`\mathcal{L}_{\mathrm{\Pi }^{\prime }B}`$ $`=`$ $`D^{\prime }\mathrm{Tr}(\overline{B}\gamma _5\{\mathrm{\Pi }^{\prime },B\})+F^{\prime }\mathrm{Tr}(\overline{B}\gamma _5[\mathrm{\Pi }^{\prime },B]).`$ (27)
From Eqs. (26) and (27) one readily obtains the contribution to $`\delta ^{abc}`$:
$$\delta _{\mathrm{\Pi }^{\prime }}^{abc}=d_{\mathrm{\Pi }^{\prime }}g_{\mathrm{\Pi }^{\prime }B}^{abc}\frac{m_c^2}{q^2-M_{\mathrm{\Pi }^{\prime }}^2}\approx -d_{\mathrm{\Pi }^{\prime }}g_{\mathrm{\Pi }^{\prime }B}^{abc}\frac{m_c^2}{M_{\mathrm{\Pi }^{\prime }}^2}.$$
(28)
Here $`g_{\mathrm{\Pi }^{\prime }B}^{abc}=\frac{F^{\prime }}{\sqrt{8}}\mathrm{Tr}(\lambda ^b[\lambda ^c,\lambda ^a])+\frac{D^{\prime }}{\sqrt{8}}\mathrm{Tr}(\lambda ^b\{\lambda ^c,\lambda ^a\})`$. The current situation is that the couplings of the $`\mathrm{\Pi }^{\prime }`$ are not known, and there is no estimate in the literature that one could judge reliable. As we comment later, the GTD’s actually serve to determine $`d_{\mathrm{\Pi }^{\prime }}(q^2)g_{\mathrm{\Pi }^{\prime }B}^{abc}`$ much more precisely than any model calculation available, provided the meson dominance model is realistic.
## III Loop contributions
There are several one-loop contributions to the GTD that we illustrate in Fig. 2. There are also, at the same NLO, two-loop contributions that we do not display here. Although we do not perform a full calculation here, we do arrive at some interesting observations about such NLO loop effects. Let us consider the loop diagram in Fig. 2a. We can show that in HBChPT this loop effect on the GTD is $`𝒪(1/M^2)`$, and must therefore be suppressed by two powers relative to the LO contribution. Indeed, in HBChPT the diagram is proportional to the following loop integral:
$$iT^{\mu \nu }\int \frac{d^dk}{(2\pi )^d}\frac{k_\mu k_\nu }{k^2-m_d^2}\frac{1+k.v/(2M_f)+𝒪(1/M_f^2)}{k.v+k^2/(2M_f)-\delta m_{fa}}\frac{1+(k+q).v/(2M_e)+𝒪(1/M_e^2)}{(k+q).v+(k+q)^2/(2M_e)-\delta m_{eb}},$$
(29)
where $`\delta m_{ab}\equiv M_a-M_b`$, and $`T^{\mu \nu }`$ is transverse to the four-velocity $`v`$. For spin 1/2 baryons in the loop $`T^{\mu \nu }\propto S_v^\mu q.S_vS_v^\nu `$. It is also easy to show explicitly that $`T^{\mu \nu }`$ is transverse if one or both lines in the loop are spin 3/2 baryons. From energy-momentum conservation we have
$$q.v=(M_b-M_a)-\frac{q^2}{2M_b}.$$
(30)
Using this and the transversity of $`T^{\mu \nu }`$, the expansion of Eq. (29) shows no $`q^2`$-dependence at $`𝒪(1)`$ and $`𝒪(1/M)`$. We conclude that the one-loop diagrams considered here must affect the GTD at $`𝒪(1/M^2)`$ (for a related discussion, see Ref. ), and are thus negligible in the large $`M`$ limit. Another type of one-loop contribution is not suppressed by $`1/M`$. These are the diagrams involving the insertion of terms from $`\mathcal{L}^{(3)}`$ as shown in Fig. 2b, which correct the GTD at NLO. Similarly there are NLO two-loop contributions that are of leading order in $`1/M`$.
It is interesting to comment here on a one-loop calculation in the framework of a relativistic baryon effective Lagrangian, as used in Refs. . It turns out that the relativistic version of the loop diagram in Fig. 2a gives a finite $`q^2`$ dependence to the $`g_{cab}`$ coupling, namely,
$`g_{cab}(q^2)-g_{cab}(0)`$ $`=`$ $`\left({\displaystyle \frac{1}{2F_\pi }}\right)^3{\displaystyle \underset{d,e,f=1}{\overset{8}{\sum }}}g_A^{afd}g_A^{ebd}g_A^{fec}𝒥^{fed}(q^2,M_a,M_b,m_c)`$ (31)
where the integral $`𝒥^{fed}`$ is given by:
$`𝒥^{fed}(q^2,M_a,M_b,m_c)`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^2}}𝒞(M_a,M_b,M_e,M_f){\displaystyle \int _0^1}𝑑x{\displaystyle \int _0^{1-x}}𝑑y{\displaystyle \frac{𝒩(x,y)}{𝒟(x,y)}},`$ (32)

$`2𝒩(x,y)`$ $`=`$ $`(x+y-1)(M_a+M_b)^2-q^2(x+y)+2(1-x)M_aM_e+2xM_aM_f`$ (33)

$`+`$ $`2(1-y)M_fM_b+2yM_bM_e+(M_f-M_e)^2,`$ (34)

$`𝒟(x,y)`$ $`=`$ $`(1-x-y)(xM_a^2+yM_b^2-M_d^2)-M_f^2x-M_e^2y+xyq^2,`$ (35)

$`𝒞(M_a,M_b,M_e,M_f)`$ $`=`$ $`(M_b+M_e)(M_a+M_f)(M_e+M_f).`$ (36)
One can readily check that for SU(2) one obtains the result in Ref. .
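The two-dimensional Feynman-parameter integral also lends itself to direct quadrature. The sketch below is our own illustration: the minus signs implement the reconstruction of Eqs. (33)–(36) given above, $`M_d`$ is read as the mass of the Goldstone boson in the loop (cf. Eq. (29)), and the input masses are placeholders.

```python
import numpy as np
from scipy.integrate import dblquad

def J_fed(q2, Ma, Mb, Md, Me, Mf):
    """Numerical evaluation of Eqs. (32)-(36); masses and q2 in GeV units."""
    C = (Mb + Me) * (Ma + Mf) * (Me + Mf)
    def integrand(y, x):
        num = 0.5 * ((x + y - 1.0) * (Ma + Mb)**2 - q2 * (x + y)
                     + 2.0 * (1.0 - x) * Ma * Me + 2.0 * x * Ma * Mf
                     + 2.0 * (1.0 - y) * Mf * Mb + 2.0 * y * Mb * Me
                     + (Mf - Me)**2)
        den = ((1.0 - x - y) * (x * Ma**2 + y * Mb**2 - Md**2)
               - Mf**2 * x - Me**2 * y + x * y * q2)
        return num / den
    val, _ = dblquad(integrand, 0.0, 1.0, 0.0, lambda x: 1.0 - x)
    return C * val / (4.0 * np.pi)**2

# q^2 = m_pi^2 with a degenerate nucleon octet and a pion in the loop
print(J_fed(q2=0.138**2, Ma=0.939, Mb=0.939, Md=0.138, Me=0.939, Mf=0.939))
```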
The interesting thing here is that the contribution to the GTD by the loop is not suppressed by $`1/M`$. Actually, it is nearly constant for baryon masses ranging from a few hundred MeV to an arbitrarily large mass. This result seems at odds with the one from HBChPT, but the two can be harmonized as follows: in the limit of large $`M`$ it turns out that in the relativistic calculation there are contributions to the loop integral from momenta that are $`𝒪(M)`$. $`M`$ acts in fact as a regulator scale. In HBChPT on the other hand, one is doing a $`1/M`$ expansion of the integrand, which implies that one is assuming a cutoff in the loop integrals given by a QCD scale. The relativistic and HBChPT frameworks must each lead to the same physical results; in the present case this implies that in order to lead to the same results for the discrepancies, the coefficients $`F_{19}`$ and $`D_{19}`$ in $`\mathcal{L}^{(3)}`$ must be readjusted when going from one framework to the other. In the real world, $`M\sim \mathrm{\Lambda }_\chi `$ and we may use the relativistic calculation as an estimate of this class of loop contributions to the discrepancy. For the discrepancies of interest herein, these loop contributions are small, between ten and twenty per cent of the discrepancies themselves, and smaller than their current errors. The numerical results are
$`\mathrm{\Delta }_{loop}^{NN\pi }`$ $`=`$ $`0.0043`$ (37)
$`\mathrm{\Delta }_{loop}^{N\mathrm{\Lambda }K}`$ $`=`$ $`0.044`$ (38)
$`\mathrm{\Delta }_{loop}^{N\mathrm{\Sigma }K}`$ $`=`$ $`0.044,`$ (39)
where we use $`D=0.79`$ and $`F=0.46`$ for the SU(3) axial vector couplings.
Of course the calculated loop contribution is not all that there is; the inclusion of decuplet baryons in the loop also gives contributions to the discrepancy. (Ref. discusses some $`\mathrm{\Delta }(1232)`$ effects with only two quark flavors.) Using Rarita-Schwinger propagators and three quark flavors, we have checked that the $`q^2`$-dependent part does show a UV divergence in the relativistic framework. HBChPT also permits two-loop contributions at NLO. Currently a more complete calculation of the discrepancy at NLO is underway.
## IV Results
There are only three discrepancies that can be determined from existing data on baryon-pseudoscalar couplings: $`\mathrm{\Delta }^{NN\pi }`$, $`\mathrm{\Delta }^{N\mathrm{\Lambda }K}`$, and $`\mathrm{\Delta }^{N\mathrm{\Sigma }K}`$.
Due to the smallness of the u and d quark masses, $`\mathrm{\Delta }^{NN\pi }`$ is necessarily very small, and its determination requires a very precise knowledge of the $`g_{\pi NN}`$ coupling ($`g_A`$ and $`F_\pi `$ are already known to enough precision, leaving most of the uncertainty in the determination of $`\mathrm{\Delta }^{NN\pi }`$ to the uncertainty in $`g_{\pi NN}`$). The most recent determination of $`g_{\pi NN}`$ from $`NN`$, $`N\overline{N}`$ and $`\pi N`$ data is by the Nijmegen group. They analyzed a total of twelve thousand data and arrived at $`g_{\pi NN}=13.05\pm 0.08`$. Similar results are obtained by the VPI group. Since the errors quoted are only statistical, in our fit below we will increase the error by about a factor of two in order to roughly account for systematic uncertainties. There is still some disagreement between determinations of $`g_{\pi NN}`$ by different groups. Larger values have been obtained, such as $`g_{\pi NN}=13.65\pm 0.30`$ by Bugg and Machleidt, and a similar result by Loiseau et al.. As we find out below, our analysis of the discrepancies strongly favors the smaller $`g_{\pi NN}`$ values. Using $`F_\pi =92.42`$ MeV, $`\left(\frac{g_A}{g_V}\right)^{NN\pi }=1.267\pm 0.004`$ , Eq. (7) gives,
$`\mathrm{\Delta }_{\mathrm{expt}}^{NN\pi }`$ $`=`$ $`0.014\pm 0.006\mathrm{for}g_{\pi NN}=13.05\pm 0.08,`$ (40)
$`\mathrm{\Delta }_{\mathrm{expt}}^{NN\pi }`$ $`=`$ $`0.056\pm 0.020\mathrm{for}g_{\pi NN}=13.65\pm 0.30.`$ (41)
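For the pion-nucleon channel the extraction reduces to $`\mathrm{\Delta }^{NN\pi }=1-M_N(g_A/g_V)^{NN\pi }/(F_\pi g_{\pi NN})`$ (our reading of the normalization in Eq. (7), which reproduces both quoted values within rounding); a short script makes the sensitivity to $`g_{\pi NN}`$ explicit:

```python
M_N, gA, F_pi = 938.92, 1.267, 92.42    # MeV; average nucleon mass
for g_piNN in (13.05, 13.65):
    delta = 1.0 - M_N * gA / (F_pi * g_piNN)
    print(f"g_piNN = {g_piNN}: Delta(NNpi) = {delta:.3f}")
# -> 0.014 and 0.057, cf. Eqs. (40)-(41)
```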
The determination of the $`g_{KN\mathrm{\Lambda }}`$ and $`g_{KN\mathrm{\Sigma }}`$ couplings relies on a more sparse data set. The Nijmegen group analyzed data from $`Y\overline{Y}`$ production at LEAR, and they obtained: $`g_{KN\mathrm{\Lambda }}=13.7\pm 0.4`$ and $`g_{KN\mathrm{\Sigma }}=3.9\pm 0.7`$. These values are consistent with an earlier analysis by Martin, where only an upper bound for $`g_{KN\mathrm{\Sigma }}`$ is given. Using $`F_K=1.22F_\pi `$ and $`\left(\frac{g_A}{g_V}\right)^{N\mathrm{\Lambda }K}=0.718\pm 0.015`$ and $`\left(\frac{g_A}{g_V}\right)^{N\mathrm{\Sigma }K}=0.340\pm 0.017`$ , Eq. (7) gives,
$`\mathrm{\Delta }_{\mathrm{expt}}^{N\mathrm{\Lambda }K}`$ $`=`$ $`0.17\pm 0.03`$ (42)
$`\mathrm{\Delta }_{\mathrm{expt}}^{N\mathrm{\Sigma }K}`$ $`=`$ $`0.17\pm 0.14`$ (43)
Disregarding SU(2) breaking, which implies that there is no contribution from the term proportional to $`b_{20}`$ to these discrepancies, we can use the three measured discrepancies to determine the two LO parameters in HBChPT:
$`MF_{19}`$ $`=`$ $`0.4\pm 0.1\mathrm{GeV}^{-1},`$ (44)

$`MD_{19}`$ $`=`$ $`0.7\pm 0.2\mathrm{GeV}^{-1},`$ (45)
where $`M`$ is here the common baryon-octet mass in the chiral limit. Both choices for $`g_{\pi NN}`$, given in Eqs. (40) and (41), lead to values for $`MF_{19}`$ and $`MD_{19}`$ that agree within the quoted uncertainties. The LO discrepancies resulting from our fit are:
$`\mathrm{\Delta }^{NN\pi }`$ $`=`$ $`0.017;\mathrm{\hspace{0.33em}\hspace{0.33em}0.018},`$ (46)
$`\mathrm{\Delta }^{N\mathrm{\Lambda }K}`$ $`=`$ $`0.17;\mathrm{\hspace{0.33em}\hspace{0.33em}0.18},`$ (47)
$`\mathrm{\Delta }^{N\mathrm{\Sigma }K}`$ $`=`$ $`0.17;\mathrm{\hspace{0.33em}\hspace{0.33em}0.19},`$ (48)
where the quoted results correspond respectively to the smaller and larger $`g_{\pi NN}`$ couplings. The larger value $`\mathrm{\Delta }^{NN\pi }=0.056`$ of Eq. (41), corresponding to the larger $`g_{\pi NN}`$ coupling, cannot come out consistently from the fit. To understand this one can use the Dashen-Weinstein relation, Eq. (23), which holds exactly in our LO calculation. For the results of the discrepancies involving the hyperons the term proportional to $`\mathrm{\Delta }^{N\mathrm{\Sigma }K}`$ in the Dashen-Weinstein relation is about one fifth of that proportional to $`\mathrm{\Delta }^{N\mathrm{\Lambda }K}`$, and the right hand side of Eq. (23) would imply that $`\mathrm{\Delta }^{NN\pi }`$ must be about $`1.5\%`$. The only way to accommodate a larger $`\mathrm{\Delta }^{NN\pi }`$ would be larger $`\mathrm{\Delta }^{N\mathrm{\Lambda }K}`$ and $`\mathrm{\Delta }^{N\mathrm{\Sigma }K}`$ or else a large deviation from the Dashen-Weinstein relation. The latter seems unlikely because the corrections to the relation must be suppressed by two powers in HBChPT (this is so because the corrections to the axial-vector couplings and to the discrepancies are of $`𝒪(p^2)`$). On the other hand, the former possibility would require that the magnitudes of $`g_{KN\mathrm{\Lambda }}`$ and $`g_{KN\mathrm{\Sigma }}`$ be unrealistically large; in fact, $`\mathrm{\Delta }^{N\mathrm{\Lambda }K}`$ and $`\mathrm{\Delta }^{N\mathrm{\Sigma }K}`$ would then have to be close to unity, implying a serious failure of the low energy expansion. Thus, we conclude that only the smaller values of $`\mathrm{\Delta }^{NN\pi }`$, and thus of $`g_{\pi NN}`$, are consistent. This shows the importance of the current analysis of the GTD in SU(3).
Finally, the coupling constants required in the meson dominance model resulting from our analysis are as follows:
$`d_{\mathrm{\Pi }^{\prime }}F^{\prime }`$ $`=`$ $`2.4\pm 0.5\mathrm{GeV}`$ (49)

$`d_{\mathrm{\Pi }^{\prime }}D^{\prime }`$ $`=`$ $`4.5\pm 0.5\mathrm{GeV}.`$ (50)
Since here $`F^{\prime }`$ and $`D^{\prime }`$ are baryon-meson couplings, it is not unreasonable that they should have values similar to those of, say, the pion-nucleon coupling. This would imply that the coupling $`d_{\mathrm{\Pi }^{\prime }}`$ should be a few hundred MeV. This makes the meson dominance picture quite plausible.
In conclusion, we have shown that the GTD in SU(3) is given at leading order by two tree-level contributions, and that the corrections are suppressed by two powers in HBChPT. Some of the loop corrections were calculated explicitly and found to be small. Our leading order analysis indicates a strong preference for a smaller Goldberger-Treiman discrepancy in the pion-nucleon sector, thus favoring the smaller values of the pion-nucleon coupling extracted in recent partial wave analyses.
## Acknowledgements
We would like to thank Juerg Gasser for allowing us to use material from an earlier unpublished collaboration and for useful discussions. We also thank G. Höhler and Ulf Meißner for useful comments, and Jan Stern for bringing to our attention the Dashen-Weinstein relation. This work was supported by the National Science Foundation through grant # HRD-9633750 (JLG and MS), and # PHY-9733343 (JLG) and by the Department of Energy through contract DE-AC05-84ER40150 (JLG, RL), and in part by Natural Sciences and Engineering Research Council of Canada (RL), the Fundación Antorchas of Argentina (MS) and by the grant # PMT-PICT0079 of the ANPCYT of Argentina (MS).
# Momentum Spectra in the Current Region of the Breit Frame in Deep Inelastic Scattering at HERA
## Introduction
This paper reports the results of a study of the properties of the hadronic final state in positron-proton deep inelastic scattering (DIS). The event kinematics of DIS are determined by the negative square of the four-momentum transfer of the virtual exchanged boson, $`Q^2\equiv -q^2`$, and the Bjorken scaling variable, $`x=Q^2/2P\cdot q`$, where $`P`$ is the four-momentum of the proton. In the Quark Parton Model (QPM), the interacting quark from the proton carries four-momentum $`xP.`$ The variable $`y`$, the fractional energy transfer to the proton in its rest frame, is related to $`x`$ and $`Q^2`$ by $`y\simeq Q^2/xs`$, where $`\sqrt{s}`$ is the positron-proton centre of mass energy.
A natural frame in which to study the dynamics of the hadronic final state in DIS is the Breit frame. In this frame the exchanged virtual boson is completely space-like and has a four-momentum $`q=(0,0,0,-Q=-2xP^{Breit})`$ in $`(E,p_x,p_y,p_z)`$ coordinates, where $`P^{Breit}`$ is the momentum of the proton in the Breit frame. The particles produced in the interaction can be assigned to one of two regions: the current region if their $`z`$-momentum in the Breit frame is negative, and the target region if their $`z`$-momentum is positive. The advantage of this frame is that it gives a maximal separation of the incoming and outgoing partons in the QPM. In this model the maximum momentum a particle can have in the current region is $`Q/2.`$
The current region in the Breit frame is analogous to a single hemisphere of $`e^+e^{-}`$ annihilation. In $`e^+e^{-}\to q\overline{q}`$ annihilation the two quarks are produced with equal and opposite momenta, $`\pm \sqrt{s}/2.`$ The fragmentation of these quarks can be compared with that of the quark struck from the proton, which has outgoing momentum $`-Q/2`$ in the Breit frame. In the direction of this struck quark the scaled momentum spectra of the particles, expressed in terms of $`x_p=2p^{Breit}/Q,`$ are expected to have a dependence on $`Q`$ similar to that observed in $`e^+e^{-}`$ annihilation at energy $`\sqrt{s}=Q.`$
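A toy helper may make this bookkeeping concrete (our illustration; it assumes the Breit-frame momenta are already available):

```python
def breit_region_and_xp(pz_breit, p_breit, Q):
    """Assign a particle to the current/target region and compute x_p."""
    region = "current" if pz_breit < 0.0 else "target"
    return region, 2.0 * p_breit / Q

# A particle with |p| = 3.5 GeV and p_z = -3.2 GeV at Q = 10 GeV:
print(breit_region_and_xp(-3.2, 3.5, 10.0))   # ('current', 0.7)
```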
Within the modified leading log approximation (MLLA) there are predictions of how the higher order moments of the parton momentum spectra should evolve with energy scale. The parton level predictions depend on two free parameters, a running strong coupling, governed by a QCD scale $`\mathrm{\Lambda },`$ and an energy cut-off, $`Q_0,`$ below which the parton evolution is truncated. The hypothesis of local parton hadron duality (LPHD), which relates the observed hadron distributions to the calculated parton distributions via a constant of proportionality, is used in conjunction with the predictions of the MLLA allowing the calculation to be directly compared with data.
## Results
The moments of the $`\mathrm{ln}(1/x_p)`$ distributions have been investigated up to the 4th order; the mean $`(l),`$ width $`(w),`$ skewness $`(s)`$ and kurtosis $`(k)`$ were extracted from the distribution by fitting a distorted Gaussian of the following form:
$$\frac{1}{\sigma _{tot}}\frac{d\sigma }{d\mathrm{ln}(1/x_p)}\propto \mathrm{exp}\left(\frac{1}{8}k-\frac{1}{2}s\delta -\frac{1}{4}(2+k)\delta ^2+\frac{1}{6}s\delta ^3+\frac{1}{24}k\delta ^4\right)$$
where $`\delta =(\mathrm{ln}(1/x_p)-l)/w,`$ over a range of 3 units ($`Q^2<160\mathrm{GeV}^2`$) or 4 units ($`Q^2\geq 160\mathrm{GeV}^2`$) in $`\mathrm{ln}(1/x_p)`$ around the mean. The equation was motivated by the expression used for the MLLA predictions of the spectra in ref. .
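A sketch of such a fit with standard tools (a toy spectrum stands in for the measured distribution, and the restriction of the fit window around the mean is omitted for brevity):

```python
import numpy as np
from scipy.optimize import curve_fit

def distorted_gauss(xi, norm, l, w, s, k):
    """Distorted Gaussian in xi = ln(1/x_p) with mean l, width w,
    skewness s and kurtosis k."""
    d = (xi - l) / w
    return norm * np.exp(0.125 * k - 0.5 * s * d - 0.25 * (2.0 + k) * d**2
                         + s * d**3 / 6.0 + k * d**4 / 24.0)

xi = np.linspace(0.5, 4.5, 40)
spectrum = distorted_gauss(xi, 1.0, 2.5, 0.9, -0.2, -0.3)   # toy "data"
popt, _ = curve_fit(distorted_gauss, xi, spectrum,
                    p0=[1.0, 2.5, 1.0, 0.0, 0.0])
print(dict(zip(["norm", "l", "w", "s", "k"], popt)))
```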
Figure 1 shows the skewness of the $`\mathrm{ln}(1/x_p)`$ spectra as a function of $`\mathrm{ln}(Q).`$ It is evident that the skewness decreases with increasing $`Q.`$ Similar fits performed on $`e^+e^{-}`$ data show a reasonable agreement with our results at high $`Q^2,`$ consistent with the universality of fragmentation for this distribution. The ARIADNE Monte Carlo model gives a reasonable description of the data. The data are compared with the MLLA predictions of ref. , using a value of $`\mathrm{\Lambda }=175\mathrm{MeV},`$ for different values of $`Q_0.`$ The MLLA calculations predict a negative skewness which decreases towards zero with increasing $`Q`$ in the case of the limiting spectra ($`Q_0=\mathrm{\Lambda }`$). This is contrary to the measurements. A reasonable description of the behaviour of the skewness with $`Q`$ can be achieved for a truncated cascade ($`Q_0>\mathrm{\Lambda }`$), but a consistent description of the mean, width, skewness and kurtosis cannot be achieved. A range of $`\mathrm{\Lambda }`$ values was investigated and none gave a good description of all the moments. We conclude that the MLLA predictions, assuming LPHD, do not describe the data. It should be noted though that a moments analysis has been performed, taking into account the limitations of the massless assumptions of the MLLA predictions, where good agreement was found between the limiting case of the MLLA and $`\mathrm{e}^+\mathrm{e}^{-}`$ data over a large range of energy, $`3.0<\sqrt{s}<133.0\mathrm{GeV}.`$
## Summary
Charged particle distributions have been studied in the current region of the Breit frame in DIS. The moments of the $`\mathrm{ln}(1/x_p)`$ spectra in the current region at high $`Q^2`$ exhibit the same energy scale behaviour as that observed in $`e^+e^{-}`$ data. The moments cannot be described by the MLLA calculations together with LPHD.
# Interplay of creation, propagation, and relaxation of an excitation in a dimer
## A
Definitions of matrices in Eq. (20)
This appendix contains the definitions of the matrices $`𝒥_1`$, $`𝒥_2`$, $`𝒢_1(t)`$, $`𝒢_2(t)`$, $`\mathcal{M}_1(t)`$, $`\mathcal{M}_2(t)`$, $`\mathcal{M}_3(t)`$, and $`\mathcal{M}_4(t)`$ in Eq. (20). The influence of their elements on the excitation dynamics is also discussed.
The matrices $`𝒥_1`$ and $`𝒥_2`$ describe the dynamics of the free exciton system:
$`𝒥_1={\displaystyle \frac{1}{\hbar }}\left[\begin{array}{ccccc}0& 0& 0& -2J& 0\\ 0& 0& 0& 2J& 0\\ 0& 0& 0& 2\epsilon & 0\\ J& -J& -2\epsilon & 0& 0\\ 0& 0& 0& 0& 0\end{array}\right],𝒥_2={\displaystyle \frac{1}{\hbar }}\left[\begin{array}{cccc}0& \epsilon & 0& J\\ -\epsilon & 0& -J& 0\\ 0& J& 0& -\epsilon \\ -J& 0& \epsilon & 0\end{array}\right].`$ (A10)
The matrices $`𝒢_1(t)`$ and $`𝒢_2(t)`$ stem from the exciton-phonon interaction:
$`𝒢_1(t)=\left[\begin{array}{ccccc}0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\\ A& C& E& F& 0\\ B& D& F& E& 0\\ 0& 0& 0& 0& 0\end{array}\right],𝒢_2(t)=\left[\begin{array}{cccc}A_1& B_1& C_1& D_1\\ B_1& A_1& D_1& C_1\\ C_2& D_2& A_2& B_2\\ D_2& C_2& B_2& A_2\end{array}\right].`$ (A20)
The time-dependent coefficients $`A(t),\dots ,D_2(t)`$ are given as follows
$`A(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\overline{g}_{2,1}+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\overline{g}_{3,2},`$ (A21)
$`B(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\overline{g}_{2,2}-{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\overline{g}_{3,1},`$ (A22)
$`C(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\overline{g}_{2,1}+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\overline{g}_{3,2},`$ (A23)
$`D(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\overline{g}_{2,2}-{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\overline{g}_{3,1},`$ (A24)

$`E(t)`$ $`=`$ $`G^2\overline{g}_{1,1}-2{\displaystyle \frac{J^2}{\mathrm{\Delta }^2}}G^2\overline{g}_{2,1},`$ (A25)
$`F(t)`$ $`=`$ $`G^2\overline{g}_{1,3},`$ (A26)
$`A_1(t)`$ $`=`$ $`G^2\overline{g}_{1,4}+{\displaystyle \frac{J^2}{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,4}+\overline{g}_{2,5}+\overline{g}_{2,10}\right],`$ (A27)
$`B_1(t)`$ $`=`$ $`G^2\overline{g}_{1,8}+{\displaystyle \frac{J^2}{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,6}-\overline{g}_{2,8}+\overline{g}_{2,9}\right],`$ (A28)

$`A_2(t)`$ $`=`$ $`G^2\overline{g}_{1,7}+{\displaystyle \frac{J^2}{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,5}-\overline{g}_{2,7}-\overline{g}_{2,10}\right],`$ (A29)

$`B_2(t)`$ $`=`$ $`G^2\overline{g}_{1,11}+{\displaystyle \frac{J^2}{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,6}+\overline{g}_{2,9}-\overline{g}_{2,11}\right],`$ (A30)

$`C_1(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,4}-\overline{g}_{2,5}-\overline{g}_{2,10}\right]+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\left[\overline{g}_{3,6}+\overline{g}_{3,8}-\overline{g}_{3,9}\right],`$ (A31)

$`D_1(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,6}+\overline{g}_{2,8}-\overline{g}_{2,9}\right]+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\left[\overline{g}_{3,4}+\overline{g}_{3,5}+\overline{g}_{3,10}\right],`$ (A32)

$`C_2(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,5}-\overline{g}_{2,7}-\overline{g}_{2,10}\right]+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\left[\overline{g}_{3,6}-\overline{g}_{3,9}+\overline{g}_{3,11}\right],`$ (A33)

$`D_2(t)`$ $`=`$ $`{\displaystyle \frac{J\epsilon }{\mathrm{\Delta }^2}}G^2\left[\overline{g}_{2,6}+\overline{g}_{2,9}-\overline{g}_{2,11}\right]+{\displaystyle \frac{J}{2\mathrm{\Delta }}}G^2\left[\overline{g}_{3,5}-\overline{g}_{3,7}-\overline{g}_{3,10}\right].`$ (A34)
The functions
$`\overline{g}_{1,j}(t)`$ $`=`$ $`{\displaystyle \int _0^{t-t_0}}𝑑\tau g_j(\tau ),`$ (A35)

$`\overline{g}_{2,j}(t,\mathrm{\Delta }^{\prime })`$ $`=`$ $`{\displaystyle \int _0^{t-t_0}}𝑑\tau g_j(\tau )\mathrm{sin}^2(\mathrm{\Delta }^{\prime }\tau ),`$ (A36)

$`\overline{g}_{3,j}(t,\mathrm{\Delta }^{\prime })`$ $`=`$ $`{\displaystyle \int _0^{t-t_0}}𝑑\tau g_j(\tau )\mathrm{sin}(2\mathrm{\Delta }^{\prime }\tau ),j=1,\dots ,11`$ (A37)
describe the response of the exciton subsystem to the phonon one. The phonon subsystem is characterized by the functions In numerical calculations, we assume that $`\mathrm{}\mathrm{\Omega }_kG_k^i=G_i`$ in one half of the $`k`$–space and $`\mathrm{}\mathrm{\Omega }_kG_k^i=G_i^{}`$ in the remaining half of the $`k`$–space ($`i=1,2`$) (for details, see, Ref. ). This assumption is in agreement with hermiticity of $`\widehat{H}_{\mathrm{e}\mathrm{ph}}`$. Further, the mean numbers of phonons $`n_\mathrm{B}(\mathrm{}\mathrm{\Omega }_k)`$ are assumed to be $`k`$–independent ($`n_\mathrm{B}(\mathrm{}\mathrm{\Omega }_{k_0})=n_\mathrm{B}`$). The remaining summations $`\frac{1}{N}_k\mathrm{sin}(\mathrm{\Omega }_k\tau )`$ and $`\frac{1}{N}_k\mathrm{cos}(\mathrm{\Omega }_k\tau )`$ in Eqs. (A6) are replaced by the expressions $`\mathrm{sin}(\mathrm{\Omega }_{ph}\tau )\mathrm{exp}(\gamma _{ph}\tau )`$ and $`\mathrm{cos}(\mathrm{\Omega }_{ph}\tau )\mathrm{exp}(\gamma _{ph}\tau )`$, respectively. The frequency $`\mathrm{\Omega }_{ph}`$ then characterizes a mean phonon oscillation frequency and $`\gamma _{ph}`$ describes damping originating in dephasing.
$`g_1(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^1-G_k^2|^2\left[2n_\mathrm{B}(\hbar \mathrm{\Omega }_k)+1\right]\mathrm{cos}(\mathrm{\Omega }_k\tau ),`$ (A38)

$`g_2(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^1-G_k^2|^2\mathrm{sin}(\mathrm{\Omega }_k\tau ),`$ (A39)

$`g_3(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2(G_k^1-G_k^2)(G_k^1+G_k^2)\mathrm{sin}(\mathrm{\Omega }_k\tau ),`$ (A40)

$`g_4(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^1|^2\left[2n_\mathrm{B}(\hbar \mathrm{\Omega }_k)+1\right]\mathrm{cos}(\mathrm{\Omega }_k\tau ),`$ (A41)

$`g_5(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2\mathrm{Re}[G_k^1G_k^2]\left[2n_\mathrm{B}(\hbar \mathrm{\Omega }_k)+1\right]\mathrm{cos}(\mathrm{\Omega }_k\tau ),`$ (A42)

$`g_6(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2\mathrm{Im}[G_k^1G_k^2]\left[2n_\mathrm{B}(\hbar \mathrm{\Omega }_k)+1\right]\mathrm{cos}(\mathrm{\Omega }_k\tau ),`$ (A43)

$`g_7(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^2|^2\left[2n_\mathrm{B}(\hbar \mathrm{\Omega }_k)+1\right]\mathrm{cos}(\mathrm{\Omega }_k\tau ),`$ (A44)

$`g_8(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^1|^2\mathrm{sin}(\mathrm{\Omega }_k\tau ),`$ (A45)

$`g_9(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2\mathrm{Re}[G_k^1G_k^2]\mathrm{sin}(\mathrm{\Omega }_k\tau ),`$ (A46)

$`g_{10}(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2\mathrm{Im}[G_k^1G_k^2]\mathrm{sin}(\mathrm{\Omega }_k\tau ),`$ (A47)

$`g_{11}(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{G^2N}}{\displaystyle \underset{k}{\sum }}\mathrm{\Omega }_k^2|G_k^2|^2\mathrm{sin}(\mathrm{\Omega }_k\tau ).`$ (A48)
The constant $`G`$, which has the meaning of a mean exciton–phonon interaction constant, has been introduced into Eqs. (A3) and (A4) as well as into the definitions given in Eq. (A6) in order to make $`g_1(\tau ),\dots ,g_{11}(\tau )`$ depend only on the dispersion of the coupling constants. The symbols $`\mathrm{Re}`$ and $`\mathrm{Im}`$ denote real and imaginary parts. The new symbol $`\mathrm{\Delta }^{\prime }=\mathrm{\Delta }/\hbar `$ has been introduced here.
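Under the damped single-mode substitution described in the note above (the replacement of the lattice sums by a mean mode with dephasing), the response integrals (A35)–(A37) reduce to one-dimensional quadratures; the following numerical sketch is illustrative only, with all parameter values arbitrary:

```python
import numpy as np
from scipy.integrate import quad

Omega_ph, gamma_ph, Delta_p = 1.0, 0.1, 0.4   # arbitrary units

def g(tau):
    # damped mean-mode stand-in for the lattice sums g_j(tau)
    return np.cos(Omega_ph * tau) * np.exp(-gamma_ph * tau)

def gbar(kind, t, t0=0.0):
    weight = {1: lambda tau: 1.0,
              2: lambda tau: np.sin(Delta_p * tau)**2,
              3: lambda tau: np.sin(2.0 * Delta_p * tau)}[kind]
    val, _ = quad(lambda tau: g(tau) * weight(tau), 0.0, t - t0)
    return val

print([round(gbar(k, t=20.0), 4) for k in (1, 2, 3)])
```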
The time-dependent coefficients $`B(t)`$ and $`D(t)`$ given in Eq. (A3) renormalize the transfer integral $`J`$. The coefficients $`A(t)`$ and $`C(t)`$ are important for relaxation to equilibrium state (see, e.g., Refs. ).
The matrices $`\mathcal{M}_1(t)`$, $`\mathcal{M}_2(t)`$, $`\mathcal{M}_3(t)`$, and $`\mathcal{M}_4(t)`$ originate in the interaction with an optical field:
$`\mathcal{M}_1(t)`$ $`=`$ $`\left[\begin{array}{ccccc}2\overline{M}_1& 0& 2\overline{O}_1& 2\overline{O}_2& 2M_1\\ 0& 2\overline{N}_1& 2\overline{P}_1& 2\overline{P}_2& 2N_1\\ \overline{P}_1& \overline{O}_1& \overline{M}_1+\overline{N}_1& \overline{M}_2+\overline{N}_2& O_1-P_1\\ \overline{P}_2& \overline{O}_2& \overline{M}_2-\overline{N}_2& \overline{M}_1+\overline{N}_1& P_2-O_2\\ 2\overline{M}_1& 2\overline{N}_1& 2\overline{P}_1-2\overline{O}_1& 2\overline{P}_2-2\overline{O}_2& 2M_1+2N_1\end{array}\right],`$ (A54)
$`\mathcal{M}_2(t)`$ $`=`$ $`\left[\begin{array}{cccc}2K_1& 2K_2& 0& 0\\ 0& 0& 2L_1& 2L_2\\ L_1& L_2& K_1& K_2\\ L_2& L_1& K_2& K_1\\ 2K_1& 2K_2& 2L_1& 2L_2\end{array}\right],`$ (A60)
$`\mathcal{M}_3(t)`$ $`=`$ $`\left[\begin{array}{ccccc}K_1& 0& L_1& L_2& K_1\\ K_2& 0& L_2& L_1& K_2\\ 0& L_1& K_1& K_2& L_1\\ 0& L_2& K_2& K_1& L_2\end{array}\right],`$ (A65)
$`\mathcal{M}_4(t)`$ $`=`$ $`\left[\begin{array}{cccc}2M_1+N_1-2\stackrel{~}{M}_1& 2M_2+N_2-2\stackrel{~}{M}_2& O_1-\stackrel{~}{O}_1-\stackrel{~}{P}_1& O_2-\stackrel{~}{O}_2-\stackrel{~}{P}_2\\ 2M_2-N_2-2\stackrel{~}{M}_2& 2M_1+N_1+2\stackrel{~}{M}_1& O_2-\stackrel{~}{O}_2-\stackrel{~}{P}_2& O_1+\stackrel{~}{O}_1+\stackrel{~}{P}_1\\ P_1-\stackrel{~}{O}_1-\stackrel{~}{P}_1& P_2-\stackrel{~}{O}_2-\stackrel{~}{P}_2& 2N_1+M_1-2\stackrel{~}{N}_1& 2N_2+M_2-2\stackrel{~}{N}_2\\ P_2-\stackrel{~}{O}_2-\stackrel{~}{P}_2& P_1+\stackrel{~}{O}_1+\stackrel{~}{P}_1& 2N_2-M_2-2\stackrel{~}{N}_2& 2N_1+M_1+2\stackrel{~}{N}_1\end{array}\right].`$ (A70)
The coefficients $`K_1(t)`$, $`K_2(t)`$, $`L_1(t)`$, and $`L_2(t)`$ describing the influence of the coherent part of an optical field have the form (the constants $`\stackrel{~}{F}_{K_0}^1`$ and $`\stackrel{~}{F}_{K_0}^2`$ are assumed to be real):
$`K_1(t)`$ $`=`$ $`\omega _{K_0}\stackrel{~}{F}_{K_0}^1\mathrm{Im}\left[\stackrel{~}{𝒜}(t)\mathrm{exp}(i\delta ^{}t)\right],`$ (A72)
$`K_2(t)`$ $`=`$ $`\omega _{K_0}\stackrel{~}{F}_{K_0}^1\mathrm{Re}\left[\stackrel{~}{𝒜}(t)\mathrm{exp}(i\delta ^{}t)\right],`$ (A73)
$`L_1(t)`$ $`=`$ $`\omega _{K_0}\stackrel{~}{F}_{K_0}^2\mathrm{Im}\left[\stackrel{~}{𝒜}(t)\mathrm{exp}(i\delta ^{}t)\right],`$ (A74)
$`L_2(t)`$ $`=`$ $`\omega _{K_0}\stackrel{~}{F}_{K_0}^2\mathrm{Re}\left[\stackrel{~}{𝒜}(t)\mathrm{exp}(i\delta ^{}t)\right].`$ (A75)
The symbol $`\delta ^{}`$ denotes the frequency mismatch ($`\delta ^{}=(E+\epsilon )/\mathrm{}-\omega _{K_0}`$) and the envelope $`\stackrel{~}{𝒜}(t)`$ of the field is defined as follows,
$$\stackrel{~}{𝒜}(t)=𝒜(t)\mathrm{exp}(i\omega _{K_0}t).$$
(A76)
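To make Eqs. (A72)–(A75) concrete, the following sketch evaluates $`K_1(t),\mathrm{},L_2(t)`$ for an assumed Gaussian pulse. The envelope shape, the carrier frequency $`\omega _{K_0}`$, the mismatch $`\delta ^{}`$ and the constants $`\stackrel{~}{F}_{K_0}^1`$, $`\stackrel{~}{F}_{K_0}^2`$ are all illustrative assumptions, not values fixed by the text.

```python
import numpy as np

omega_K0 = 1.0          # carrier frequency (toy value)
delta_p = 0.05          # frequency mismatch delta' (toy value)
F1, F2 = 0.02, 0.01     # the real constants F~_{K0}^1 and F~_{K0}^2 (toy values)

def A(t):
    """Assumed field amplitude: Gaussian pulse times the carrier exp(-i omega_K0 t)."""
    return np.exp(-((t - 10.0) / 4.0) ** 2) * np.exp(-1j * omega_K0 * t)

def A_tilde(t):
    """Eq. (A76): the envelope A~(t) = A(t) exp(i omega_K0 t)."""
    return A(t) * np.exp(1j * omega_K0 * t)

def K1(t): return omega_K0 * F1 * np.imag(A_tilde(t) * np.exp(1j * delta_p * t))  # Eq. (A72)
def K2(t): return omega_K0 * F1 * np.real(A_tilde(t) * np.exp(1j * delta_p * t))  # Eq. (A73)
def L1(t): return omega_K0 * F2 * np.imag(A_tilde(t) * np.exp(1j * delta_p * t))  # Eq. (A74)
def L2(t): return omega_K0 * F2 * np.real(A_tilde(t) * np.exp(1j * delta_p * t))  # Eq. (A75)

for t in (5.0, 10.0, 15.0):
    print(t, round(K1(t), 5), round(K2(t), 5), round(L1(t), 5), round(L2(t), 5))
```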
The coefficients $`M_1(t),\mathrm{},P_2(t)`$ reflect statistical properties of the optical field (noise) and can be expressed in the form
$`M_1(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^1)^2i_1(\stackrel{~}{F}_{K_0}^1)^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_4\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_4,`$ (A77)
$`M_2(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^1)^2i_3+(\stackrel{~}{F}_{K_0}^1)^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_2+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_2,`$ (A78)
$`N_1(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^2)^2i_1+(\stackrel{~}{F}_{K_0}^2)^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_4\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_4,`$ (A79)
$`N_2(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^2)^2i_3(\stackrel{~}{F}_{K_0}^2)^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_2+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_2,`$ (A80)
$`O_1(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^1)^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_4+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2i_1+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_4,`$ (A81)
$`O_2(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^1)^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_2+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2i_3\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_2,`$ (A82)
$`P_1(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^2)^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_4+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2i_1\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_4,`$ (A83)
$`P_2(t)`$ $`=`$ $`(\stackrel{~}{F}_{K_0}^2)^2{\displaystyle \frac{J}{\mathrm{\Delta }}}i_2+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2i_3+\stackrel{~}{F}_{K_0}^1\stackrel{~}{F}_{K_0}^2{\displaystyle \frac{\epsilon }{\mathrm{\Delta }}}i_2.`$ (A84)
The functions $`i_1(t,\mathrm{\Delta }^{},\delta ^{}),\mathrm{},i_4(t,\mathrm{\Delta }^{},\delta ^{})`$ characterize the response of the exciton subsystem to the photon field:
$`i_1(t,\mathrm{\Delta }^{},\delta ^{})`$ $`=`$ $`\omega _{K_0}^2{\displaystyle _{t_0}^t}𝑑\tau \mathrm{cos}\left[\mathrm{\Delta }^{}(t\tau )\right]\mathrm{Re}\left[\delta \stackrel{~}{N}(t,\tau )\mathrm{exp}\left[i\delta ^{}(t\tau )\right]\right],`$ (A85)
$`i_2(t,\mathrm{\Delta }^{},\delta ^{})`$ $`=`$ $`\omega _{K_0}^2{\displaystyle _{t_0}^t}𝑑\tau \mathrm{sin}\left[\mathrm{\Delta }^{}(t\tau )\right]\mathrm{Re}\left[\delta \stackrel{~}{N}(t,\tau )\mathrm{exp}\left[i\delta ^{}(t\tau )\right]\right],`$ (A86)
$`i_3(t,\mathrm{\Delta }^{},\delta ^{})`$ $`=`$ $`\omega _{K_0}^2{\displaystyle _{t_0}^t}𝑑\tau \mathrm{cos}\left[\mathrm{\Delta }^{}(t\tau )\right]\mathrm{Im}\left[\delta \stackrel{~}{N}(t,\tau )\mathrm{exp}\left[i\delta ^{}(t\tau )\right]\right],`$ (A87)
$`i_4(t,\mathrm{\Delta }^{},\delta ^{})`$ $`=`$ $`\omega _{K_0}^2{\displaystyle _{t_0}^t}𝑑\tau \mathrm{sin}\left[\mathrm{\Delta }^{}(t\tau )\right]\mathrm{Im}\left[\delta \stackrel{~}{N}(t,\tau )\mathrm{exp}\left[i\delta ^{}(t\tau )\right]\right].`$ (A88)
The photon field correlation function $`\delta \stackrel{~}{N}(t,\tau )`$ is of the form:
$$\delta \stackrel{~}{N}(t,\tau )=\delta N(t,\tau )\mathrm{exp}\left[i\omega _{K_0}(t\tau )\right].$$
(A89)
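Since $`i_1,\mathrm{},i_4`$ of Eqs. (A85)–(A88) are one-dimensional integrals, they can be evaluated by simple quadrature once a correlation function is specified. The sketch below does this for an assumed exponentially decaying $`\delta \stackrel{~}{N}(t,\tau )`$; this model correlation function and all numerical values are illustrative assumptions.

```python
import numpy as np

omega_K0, t0 = 1.0, 0.0       # carrier frequency and initial time (toy values)

def deltaN_tilde(t, tau):
    """Assumed photon correlation function: exp(-|t - tau|/t_c) (a model choice)."""
    return np.exp(-np.abs(t - tau) / 5.0) + 0j

def i_funcs(t, Delta_p, delta_p, n=400):
    """Eqs. (A85)-(A88): return (i1, i2, i3, i4) at time t."""
    tau = np.linspace(t0, t, n)
    core = deltaN_tilde(t, tau) * np.exp(1j * delta_p * (t - tau))
    c, s = np.cos(Delta_p * (t - tau)), np.sin(Delta_p * (t - tau))
    i1 = omega_K0 ** 2 * np.trapz(c * core.real, tau)
    i2 = omega_K0 ** 2 * np.trapz(s * core.real, tau)
    i3 = omega_K0 ** 2 * np.trapz(c * core.imag, tau)
    i4 = omega_K0 ** 2 * np.trapz(s * core.imag, tau)
    return i1, i2, i3, i4

print(i_funcs(t=20.0, Delta_p=0.3, delta_p=0.05))
```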
The coefficients $`\overline{M}_1(t),\mathrm{},\overline{P}_2(t)`$ are defined similarly to the coefficients $`M_1(t),\mathrm{},P_2(t)`$ in Eqs. (A10) and (A11); the only difference is that the photon field correlation function
$$\delta \stackrel{~}{N}_v(t,\tau )=\delta N_v(t,\tau )\mathrm{exp}\left[i\omega _{K_0}(t\tau )\right]$$
(A90)
occurs in Eq. (A11) instead of $`\delta \stackrel{~}{N}(t,\tau )`$. Thus, the coefficients with bars additionally include the effects of vacuum fluctuations.
The coefficients $`\stackrel{~}{M}_1(t),\mathrm{},\stackrel{~}{P}_2(t)`$ are also defined similarly to the coefficients $`M_1(t),\mathrm{},P_2(t)`$ in Eqs. (A10) and (A11): the only change is that the expression $`\delta \stackrel{~}{N}(t,\tau )\mathrm{exp}\left[i\delta ^{}(t-\tau )\right]`$ in Eq. (A11) must be replaced by the expression $`\delta \stackrel{~}{N}_a(t,\tau )\mathrm{exp}\left[i\delta ^{}(t+\tau )\right]`$, where
$$\delta \stackrel{~}{N}_a(t,\tau )=\delta N_a(t,\tau )\mathrm{exp}\left[i\omega _{K_0}(t+\tau )\right].$$
(A91)
The coefficients $`\overline{O}_2(t)`$ and $`\overline{P}_2(t)`$ renormalize the transfer integral $`J`$ at both the positions (4,1), (1,4) and (4,2), (2,4) in the matrix $`𝒥_1`$, in contrast to the coefficients originating in the exciton–phonon interaction.
The influence of the coefficient $`\stackrel{~}{M}_1(t)`$ ($`\stackrel{~}{N}_1(t)`$) in the equations for $`\rho _{1r}(t)`$ and $`\rho _{1i}(t)`$ ($`\rho _{2r}(t)`$ and $`\rho _{2i}(t)`$) is remarkable. When, e.g., $`\stackrel{~}{M}_1(t)`$ ($`\stackrel{~}{N}_1(t)`$) is negative, it represents damping of $`\rho _{1r}(t)`$ ($`\rho _{2r}(t)`$) but, at the same time, amplification of $`\rho _{1i}(t)`$ ($`\rho _{2i}(t)`$). This property is connected with the phase relations in the photon field, reflected by $`\delta \stackrel{~}{N}_a(t,\tau )`$ given in Eq. (A14).
# A self-similar model for shear flows in dense granular materials
## I Introduction
Dry granular materials exhibit a large range of dynamical behaviours (for a general review see ). An assembly of non-cohesive macroscopic particles confined in a box can be considered as a solid, but will start flowing when submitted to a large enough shear stress. However, the underlying mechanism can be very different, depending on the shearing rate imposed and the resulting density of the material. Upon rapid shearing or shaking, a fluidized state is reached in which the stress is transferred from the boundaries to the bulk through binary collisions. This dynamics is well understood within the scope of the so-called kinetic theory . In many practical situations however, gravity maintains a high density in the material, so that the stress is mainly transferred by rubbing friction between particles in persistent contact. In this regime, all relative motion between particles tends to be confined to very narrow regions of the material, making a continuous approach inadequate. Within these so-called shear bands, each particle is submitted to large forces which fluctuate rapidly as the packing environment and the associated force network evolve. The flow is driven by collective and jerky moves of large sets of particles, held together by transient force chains. Although this regime is relevant to soil mechanics and geology (earthquakes and pyroclastic flows for example) and important in industrial applications (granular transport in hoppers and pipes), a satisfactory description of such fully developed dense flows is to date lacking.
A strong experimental effort has however been made recently to probe the grain-scale dynamics within these shear bands, which now provides a good test for physical models: two series of experiments have been performed independently on 2-D and 3-D Couette cells. A collection of particles is confined between a fixed outer cylinder and a rotating inner one. In the 2-D experiment, the system consists of one monolayer of disc-like particles squeezed between two horizontal plates. In this geometry, a shear band is always present in the vicinity of the rotating inner cylinder. By different techniques, the authors were able to precisely measure the decay of the average particle velocity from the inner moving wall to the immobile region towards the outer cylinder. Surprisingly enough, the two results were found to be significantly different: in 2-D the velocity $`V`$ decayed with the distance $`r`$ to the inner wall according to an exponential law, whereas in 3-D the velocity profile was well fitted by a Gaussian centered on the surface of the wall. These results proved to be robust for polydisperse or highly irregularly shaped particles. For round and monodisperse grains however, a layering effect and a strong slippage between adjacent layers were observed, which result in more complicated profiles.
In this note we will focus on the Gaussian/exponential behaviours as the spatial dimension changes. Because a Gaussian form cannot be derived from a local differential equation (for example see ), the 3-D velocity profile suggests that a purely local model would fail to describe it. Here we introduce non-local effects by postulating a correlation length in the particle displacements which depends on their position within the shear band. This approach amounts to modeling the flow as a succession of transient fractures allowing the coherent motion of clusters of various sizes. This hypothesis alone allows us to produce an average velocity profile consistent with what was observed in both 2- and 3-D. Finally, we will show that the probability distribution and the force spectrum derived from this model are in good agreement with experimental results found in the literature.
## II The model
We consider a plane shear geometry: a wall is moving at a constant speed $`V_0`$ along a half-space of granular material. The flow velocity is assumed to vanish far from the moving wall (see figure 1). We impose a no-slip condition at the wall so that $`V(0)=V_0`$ (this boundary condition mimics a classical experimental realization where the first layer of particles is glued onto the moving wall). In our scheme, the motion of each particle can be decomposed into successive steps of typical length $`d_g`$ occurring at velocity $`v=V_0`$. Between two steps, the particle remains immobile. One grain can thus either be at rest or move together with the wall (note that this bimodal velocity distribution for the grains has been roughly observed in experiments). The interface between moving and static particles defines a failure surface for the pack. We define $`P(r)`$ as the stationary probability of a failure surface being present, per unit of time and length (in units of particle diameter) along the $`y`$-axis, at a distance $`r`$ from the moving wall (as in ref ). Although a reduction in granular density by roughly $`10\%`$ (the so-called Reynolds dilatancy) is known to accompany shear flow, we will first consider a uniform density. Both two- and three-dimensional geometries will be addressed; $`D`$ will denote the space dimension in the following. A 'fracture' will either be a line in 2-D or a plane in 3-D.
We first consider the dynamics of the first freely moving layer in the region $`d_g<r<2d_g`$. As the wall moves by a distance $`d_g`$ in a time $`\tau _0=d_g/V_0`$, a particle located in this layer is either dragged by the wall, producing a failure further away, or stays immobile so that a crack develops at exactly $`r=d_g`$. Assuming a Coulomb-type slip condition, the probability of the latter depends on the ratio of the normal and shear stresses to which the particle is submitted during the time period $`\tau _0`$. Based on this mechanical consideration, and by analogy with thermally activated processes, Pouliquen et al. have proposed an expression for the probability $`p_0`$ of slippage between two particles from two consecutive layers in a granular material slowly flowing down a pipe. In their picture, the spatial stress fluctuation induced by the randomness of the packing structure plays the role of a temperature, allowing them to apply classical rate-process theory. In the quasi-static regime we address here, the average shear and normal stresses are uniform in the material since each layer is constantly in mechanical equilibrium. In the limit of uniform density, it is therefore natural to assume a uniform value for $`p_0`$. However, the derivation of the slippage rate requires more than the knowledge of the static probability $`p_0`$: a typical fluctuation time-scale, related to the flow itself, also needs to be introduced. This time-scale corresponds to the relaxation time of the stress network around the chosen particle. For a particle in the first layer, the only relevant time-scale is $`\tau _0`$, which is the time needed for a shear strain of $`1`$ to be established. We can now write down the rate of slippage at the wall, $`P(d_g)`$, as:
$$P(d_g)dt\frac{dr}{d_g}=𝒜p_0\frac{V_0dt}{d_g}\frac{dr}{d_g}$$
(1)
$`𝒜`$ is a constant introduced for normalization. It should be noted that we implicitly suppose a clear separation between successive hopping events or, equivalently, that $`p_0<<1`$.
The calculation of $`P(r)`$ away from the wall is based on a self-similar argument. We postulate that, for a yielding event to occur at a distance $`r`$ from the wall, the crack must extend radially over a length of the order of $`r`$. Hence the shearing of the material at a distance $`r`$ from the wall requires the coherent motion of a solid cluster of size $`r`$ in all directions. This cluster is dragged by the wall over a distance $`d_g`$ as one solid object, in the same way as was the single particle from the first layer (see figure 1). This self-similar description of the cluster size is introduced to account for the perturbation of the force network in the vicinity of the wall. Force chains, which are responsible for the rigidity of the dense pack, are screened by the proximity of a solid boundary. Independent motion of particles is thus allowed only in the vicinity of the wall, whereas the bulk behaves as a solid body. We note that an identical scaling has been successfully postulated in (at least) two very different contexts to describe the effect of a wall on dynamical structures. To estimate the Prandtl mixing layer in turbulent flows near a wall, the characteristic length of the vortices is assumed to increase like the distance to the boundary layer. The derivation of polymer dynamics near a wall in the semi-dilute regime also assumes that the 'blob' size (the dynamical coherence length of the monomers) grows like the distance to the wall. As the width of the shear band in granular flows is always of the order of $`10`$ bead diameters, regardless of the bead size, the bead diameter appears to be the only relevant length-scale in this problem. A linear increase of the coherence length with the distance to the wall is therefore the only reasonable choice (a different mapping of the coherence length $`l(r)`$ may have to be used for a different geometry: for example, the stationary dense granular flow along an inclined plane seems to exhibit a characteristic length $`H`$ depending on the inclination and roughness of the inclined plane; we postpone the discussion of the coherence length in this case to a further study). For simplicity we will take exactly $`r`$ as the coherence length in the following.
Given this general picture, the scaling analysis follows: we define $`N_g(r)`$ as the number of particles involved in a crack developing at a distance $`r`$:
$$N_g=\left(\frac{r}{d_g}\right)^{D-1}$$
The probability for this set of particles to become simultaneously unstable is $`(p_0)^{N_g}`$, whereas the time-scale $`\tau (r)`$ of the stress fluctuations experienced by the cluster of size $`r`$ reads:
$$\frac{1}{\tau (r)}=\frac{V_0}{r}$$
The scaling for $`\tau (r)`$ was made consistent with the criterion used for the first layer: $`\tau (r)`$ corresponds to a shear strain of 1 for the cluster of size $`r`$. Two different time-scales are thus involved in the dynamics of these blocks: a rapid one, $`\tau _0`$, associated with the release of the stress when a yielding event occurs, and a slower one, $`\tau (r)`$, which characterizes the stress fluctuations experienced by the particles at a distance $`r`$ from the wall.
Finally, a particle located at $`y=r`$ has $`N_g`$ possible locations along the crack and the expression for $`P(r)`$ eventually reads:
$$P(r)dt\frac{dr}{d_g}=AN_g(p_0)^{N_g}\frac{V_0dt}{r}\frac{dr}{d_g}$$
(2)
When yielding occurs at a distance $`r`$ from the wall, every particle below $`y=r`$ moves over a distance $`d_g`$ whereas no motion occurs above $`y=r`$. Such an event represents a discontinuity in the instantaneous velocity profile (the velocity is $`V_0`$ below the crack and $`0`$ above during a time $`\tau _0`$). The constitutive differential equation for the stationary mean velocity profile $`V(r)`$ thus reads:
$$\frac{\partial V}{\partial r}=-P(r)$$
(3)
with the boundary conditions $`V(0)=V_0`$ and $`V(\infty )=0`$.
For $`D=2`$, we obtain:
$$\frac{\partial V_{2D}}{\partial r}=-\frac{A}{d_g}V_0\left(p_0\right)^{r/d_g}$$
which by integration gives an exponential velocity profile $`V_{2D}(r)`$:
$$V_{2D}=V_0e^{-\frac{r}{\lambda d_g}}\quad \mathrm{with}\quad \lambda =-\frac{1}{\mathrm{ln}(p_0)}\quad \mathrm{and}\quad A=\frac{1}{\lambda }$$
For $`D=3`$, it gives:
$$\frac{\partial V_{3D}}{\partial r}=-\frac{Ar}{d_g^2}V_0\left(p_0\right)^{(r/d_g)^2}$$
so that the velocity profile $`V_{3D}(r)`$ is Gaussian:
$$V_{3D}=V_0e^{-\frac{r^2}{2(\sigma d_g)^2}}\quad \mathrm{with}\quad \sigma ^2=-\frac{1}{2\mathrm{ln}(p_0)}\quad \mathrm{and}\quad A=\frac{1}{\sigma ^2}$$
The function $`P(r)`$ then reads respectively in two and three dimensions:
$$P_{2D}(r,t)=\frac{V_0}{\lambda d_g}e^{-\frac{r}{\lambda d_g}}\quad \mathrm{and}\quad P_{3D}(r,t)=\frac{rV_0}{\sigma ^2d_g^2}e^{-\frac{r^2}{2(\sigma d_g)^2}}$$
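As a sanity check, the sketch below integrates $`\partial V/\partial r=-P(r)`$ numerically for $`D=2`$ and $`D=3`$ and compares the result with the closed forms above; the value of $`p_0`$ is illustrative and units are chosen so that $`d_g=V_0=1`$.

```python
import numpy as np

p0, V0, dg = 0.2, 1.0, 1.0                   # illustrative values, units d_g = V_0 = 1
lam = -1.0 / np.log(p0)                      # lambda = -1/ln(p0)
sig2 = -0.5 / np.log(p0)                     # sigma^2 = -1/(2 ln p0)

def P(r, D):
    """Eq. (2): P(r) = A N_g p0^N_g V_0 / r, with A = 1/lambda (2-D) or 1/sigma^2 (3-D)."""
    Ng = (r / dg) ** (D - 1)
    A = 1.0 / lam if D == 2 else 1.0 / sig2
    return A * Ng * p0 ** Ng * V0 / r

r = np.linspace(1e-6, 12.0, 4000)
dr = r[1] - r[0]
for D, V_exact in ((2, V0 * np.exp(-r / (lam * dg))),
                   (3, V0 * np.exp(-r ** 2 / (2 * sig2 * dg ** 2)))):
    V = np.flip(np.cumsum(np.flip(P(r, D)))) * dr   # V(r) = int_r^inf P(r') dr'
    print(D, np.max(np.abs(V - V_exact)))           # agreement to ~grid spacing
```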
Although a few hypotheses have been used to derive these results, one should note that the major assumption is that the persistence length of the fractures increases linearly with $`r`$. This argument alone produces the main behaviour of $`P(r)`$ as a function of $`p_0`$. The rescaling of the stress-fluctuation time-scale $`\tau (r)`$ only controls the pre-factor of the exponential and Gaussian velocity profiles. A correction could also arise from a spatial dependence of $`p_0`$. For instance, the shear-induced density profile neglected in the present approach might affect the uniformity of the average shear and normal stress per particle within the shear band. Considering that $`p_0`$ is a decreasing function of the density, this correction may steepen the velocity decay and narrow the shear band. This may explain deviations from the Gaussian and exponential profiles observed in experiments with rounded and monodisperse particles, where the largest density gradients are present. However, an exact solution of (3) for a general $`p_0`$ is highly non-trivial and should be investigated in a further work.
## III Forces distribution and power spectrum
In addition to the velocity profiles, the knowledge of the function $`P(r)`$ provides the information needed to compute other properties of the flow, such as the probability distribution function (PDF) of the forces acting on the wall as well as the power spectrum of this signal. These two quantities have been experimentally studied in both two and three dimensional granular shear flows. In the latter experiment, the normal force on the bottom of a shear cell was monitored using a force transducer which could accommodate a large number (from 4 to 100) of particles. The resulting measurement corresponded to the integrated value of many individual contact forces. The measured force signal F(t) appeared as a series of narrow peaks of various heights. To compare these results with the present model, we suppose that each peak is the signature of a failure in the structure of the sheared pack. Assuming that the slippage of one grain releases a given force, the motion of a cluster of size $`r`$ produces a peak of intensity $`F(r)`$ proportional to $`N_g`$, or equivalently:
$$F(r)\propto r^{D-1}$$
These peaks last for a time $`\tau _0`$ and are uncorrelated in time. From here, we can derive the form of the probability distribution function (PDF) $`\rho (F)`$ by identifying $`r^{D-1}`$ with $`F`$ in $`P(r)`$:
$$\rho _{2D}(F)=\rho _{3D}(F)=\frac{\mathrm{exp}(-F/F_0)}{F_0}$$
(4)
where $`F_0`$ is the mean force of the distribution. Surprisingly enough, these PDF’s are the same for 2- and 3-D. Both agree with experimental observations at large forces. The behaviour at low forces is not probed by experiments although it may have more complicated features.
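A quick Monte-Carlo check of Eq. (4): drawing crack positions from $`P_{2D}`$ (an exponential law) and $`P_{3D}`$ (a Rayleigh law) and setting $`F`$ proportional to $`r^{D-1}`$ reproduces the exponential force PDF in both dimensions. The parameters and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sig, lam, n = 1.2, 0.8, 200_000                    # illustrative parameters

r3 = sig * np.sqrt(-2.0 * np.log(1.0 - rng.random(n)))  # Rayleigh: P_3D(r) ~ r exp(-r^2/2sig^2)
F3 = r3 ** 2                                       # F ~ r^(D-1) = r^2 in 3-D
r2 = -lam * np.log(1.0 - rng.random(n))            # exponential: P_2D(r) ~ exp(-r/lam)
F2 = r2                                            # F ~ r in 2-D

for F in (F2, F3):
    F = F / F.mean()                               # normalize to the mean force F_0
    hist, edges = np.histogram(F, bins=40, range=(0, 8), density=True)
    x = 0.5 * (edges[1:] + edges[:-1])
    print(np.max(np.abs(hist - np.exp(-x))))       # small deviation from exp(-F/F_0)
```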
One may note that eq. (4) is formally identical to what has been observed for large forces on individual particles in a compressed static array. But this static distribution would narrow upon spatial averaging. By contrast, the width of the force distribution in the present description is insensitive to the integration area (the finite size should merely show up as a cut-off at high forces), in agreement with experimental observations. In the continuously sheared regime, the wide force distribution arises from the coherence in the force release induced by the sudden motion of large clusters, and not from the purely static force distribution on individual particles.
The power spectrum of the force fluctuations can also be obtained in this description as follows: the power signal is decomposed into a stochastic succession of peaks of height $`F^2(r)`$ and width $`\tau _0`$. The period $`\tau ^{}(r)`$ between two successive pulses of amplitude $`F^2(r)`$ is simply the inverse of the rate $`P(r)`$. For a given $`r`$, and therefore a given $`F(r)`$, the signal is the so-called telegraph noise, for which the power spectrum $`Q_r(\omega )`$ reads:
$$Q_r(\omega )=\frac{F^2(r)}{\pi }\frac{\tau ^{}(r)\tau _0}{(\tau ^{}(r)+\tau _0)^2}\frac{1/T(r)}{\omega ^2+(1/T(r))^2}$$
$$\mathrm{with}1/T(r)=1/\tau ^{}(r)+1/\tau _0$$
Therefore, since failures at different distances from the wall are uncorrelated, the power spectrum $`Q(\omega )`$ for the overall signal is given by the sum:
$$Q(\omega )=\frac{1}{d_g}_0^{\mathrm{}}Q_r(\omega )𝑑r$$
(5)
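The integral (5) is straightforward to evaluate numerically. The sketch below does so for the 3-D case, with illustrative values of $`p_0`$ and $`\tau _0`$ (units $`d_g=V_0=1`$ and the force unit absorbed into $`F(r)=r^{D-1}`$); it exhibits the low-frequency plateau and the $`\omega _0/\omega ^2`$ tail discussed in the next paragraph.

```python
import numpy as np

p0, V0, dg, tau0 = 0.2, 1.0, 1.0, 1e-3       # illustrative values
sig2 = -0.5 / np.log(p0)

def P3(r):
    """P_3D(r) from the previous section."""
    return (r * V0 / (sig2 * dg ** 2)) * np.exp(-r ** 2 / (2 * sig2 * dg ** 2))

r = np.linspace(1e-4, 10.0, 3000)
F2 = r ** 4                                  # F(r) ~ r^(D-1) = r^2, so F^2 ~ r^4
taup = 1.0 / P3(r)                           # tau'(r): mean period between pulses
invT = 1.0 / taup + 1.0 / tau0               # 1/T(r) = 1/tau'(r) + 1/tau_0

def Q(omega):
    """Eq. (5): Q(omega) = (1/d_g) int Q_r(omega) dr."""
    qr = (F2 / np.pi) * (taup * tau0 / (taup + tau0) ** 2) * invT / (omega ** 2 + invT ** 2)
    return np.trapz(qr, r) / dg

for w in (1.0, 1e2, 1e4, 1e5):
    print(w, Q(w))                           # plateau at low omega, ~1/omega^2 tail
```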
$`Q(\omega )`$ exhibits qualitatively the same behaviour in 2 and 3 dimensions, dominated by Lorentzian-like functions (see figure 2). For frequencies much larger than $`\omega _0=1/\tau _0`$, the power spectrum behaves like $`\omega _0/\omega ^2`$, as observed in experiments; this is a direct consequence of the stochastic description of the process. On the other hand, the expression (5) displays a well-defined non-zero limit as $`\omega \to 0`$. By comparison, the power spectra measured in exhibit a continuous, although small, increase as $`\omega \to 0`$.
## IV Conclusion-Acknowledgements
We have proposed a stochastic description of slowly sheared granular materials that can capture the main characteristics of the flow: the average velocity profiles and force fluctuations in 2 and 3 spatial dimensions. We want to underline the relatively good agreement with experimental data, given the small number of ingredients introduced. In particular, we ignored the existence of granular density variations along the direction normal to the flow, which may induce significant corrections. The crucial assumption of this model lies in the self-similar structure of moving clusters that rapidly form and disappear as the material flows. This assumption was qualitatively suggested by the mere observation of 3-D Couette flows watched from below through a transparent bottom plate, where the jerky and multi-scale dynamics is clearly visible. We hope that this tentative description will motivate further quantitative investigations to probe the nature of space and time correlations in the motion of neighboring particles in such systems.
Here we have focused on recent experimental data in order to test this self-similar model, but we think that this simple description of dense granular dynamics as a superposition of coherent moves can be extended to many more geometries and processes than just plane shear. The issue is to correctly prescribe the coherence-length mapping $`l(r)`$ and to describe the probability $`P(r)`$ for each realization. In particular, we anticipate that the existence of multiple relaxation times recently demonstrated in granular compaction experiments could be associated with rearrangements over different length-scales during the relaxation of the packing structure. To that extent, this model may offer a clue to understanding the numerous features that granular materials share with glassy liquids below the glass transition (aging, slow relaxation, jamming).
It is a pleasure to thank D. Mueth, H. Jaeger, S. Nagel, T. Witten and L. Kadanoff for many interesting and stimulating discussions. G.D. is supported by the David Grainger Fellowship. C.J. is supported in part by the ONR grant: N00014-96-1-0127, the MRSEC with the National Science Foundation DMR grant: 9400379, and would also like to thank the Argonne National Laboratory for its support.
# High-sensitivity optical measurement of mechanical Brownian motion
## Abstract
We describe an experiment in which a laser beam is sent into a high-finesse optical cavity with a mirror coated on a mechanical resonator. We show that the reflected light is very sensitive to small mirror displacements. We have observed the Brownian motion of the resonator with a very high sensitivity, corresponding to a minimum observable displacement of $`2\times 10^{-19}`$ $`\mathrm{m}/\sqrt{\mathrm{Hz}}`$.
PACS : 05.40.Jc, 04.80.Nn, 42.50.Lc
Thermal noise plays an important role in many precision measurements . For example, the sensitivity in interferometric gravitational-wave detectors is limited by the Brownian motion of the suspended mirrors which can be decomposed into suspension and internal thermal noises. The latter is due to thermally induced deformations of the mirror surface. Experimental observation of this noise is of particular interest since its theoretical evaluation strongly depends on the mirror shape and on the spatial matching between light and internal acoustic modes . It is also related to the mechanical dissipation mechanisms which are not well known in solids . Mirror displacements induced by thermal noise are however very small and a highly sensitive displacement sensor is needed to perform such an observation.
Monitoring extremely small displacements has thus become an important issue in precision measurements, and several sensors have been developed. A technique commonly used for the detection of gravitational waves by Weber bars is based on capacitive sensors . Another promising technique consists in optical transducers . Reflection of light by a high-finesse Fabry-Perot cavity is very sensitive to changes in the cavity length. Such a device can thus be used to monitor displacements of one mirror of the cavity, as has been proposed for gravitational-wave bar detectors where the mirror is mechanically coupled to the bar , or for the detection of Brownian motion in gravitational-wave interferometers . In this letter we report a high-sensitivity observation of the Brownian motion of internal modes of a mirror. The sensitivity reached in our experiment is better than that of present sensors and comparable to the one expected in gravitational-wave interferometers.
We use a single-ended Fabry-Perot cavity composed of an input coupling mirror and a totally reflecting back mirror. The intracavity intensity shows an Airy peak when the cavity length is scanned through a resonance, and the phase of the reflected field is shifted by $`\pi `$. The slope of this phase shift strongly depends on the cavity finesse and for a lossless resonant cavity, a displacement $`\delta x`$ of the back mirror induces a phase shift $`\delta \phi _x`$ of the reflected field on the order of
$$\delta \phi _x\simeq 8\mathcal{F}\frac{\delta x}{\lambda },$$
(1)
where $`\mathcal{F}`$ is the cavity finesse and $`\lambda `$ is the optical wavelength. This signal is superimposed on the phase noise of the reflected field. If all technical noise sources are suppressed, the phase noise $`\delta \phi _n`$ corresponds to the shot noise of the incident beam
$$\delta \phi _n\simeq \frac{1}{2\sqrt{\overline{I}}},$$
(2)
where $`\overline{I}`$ is the mean incident intensity counted as the number of photons per second. The sensitivity of the measurement is given by the minimum displacement $`\delta x_{min}`$ that yields a signal of the same order of magnitude as the noise
$$\delta x_{min}\simeq \frac{\lambda }{16\mathcal{F}\sqrt{\overline{I}}}.$$
(3)
One expects to be able to detect a displacement corresponding to a small fraction of the optical wavelength for a high-finesse cavity and an intense incident beam.
In our experiment the coupling mirror has a curvature radius of 1 meter and a typical transmission of 50 ppm (Newport high-finesse SuperMirror). The back mirror is coated on the plane side of a small plano-convex mechanical resonator made of silica. The coating has been made at the Institut de Physique Nucléaire (Lyon) on a 1.5-mm thick substrate with a diameter of 14 mm and a curvature radius of the convex side of 100 mm. The two mirrors are mounted in a rigid cylinder which defines the distance and the parallelism between them. The cavity length is close to 1 mm so that the TEM<sub>00</sub> optical mode of the cavity has its waist in front of the back mirror with a size of 90 $`\mu `$m.
The mirror motion is due to the excitation of internal acoustic modes which have been extensively studied for plano-convex resonators . For a curvature radius of the convex side much larger than the thickness of the resonator, those modes can be described as gaussian modes confined around the central axis of the resonator. The intracavity field experiences a phase shift proportional to the longitudinal deformation of the resonator averaged over the beam waist and only compression modes which induce such a longitudinal deformation are coupled with the light. In the following we focus on the fundamental mode which has a waist equal to 3.4 mm and a resonance frequency close to 2 MHz.
We have measured the optical characteristics of the cavity. Its bandwidth is equal to 1.9 MHz and its free spectral range is equal to 141 GHz. These values correspond to a cavity length of 1.06 mm and a finesse $`\mathcal{F}`$ of 37000. We also measured the reflection coefficient of the cavity at resonance to derive the transmission of the coupling mirror and the cavity losses. We found a transmission of 60 ppm and losses equal to 109 ppm.
The light entering the cavity is supplied by a titanium-sapphire laser working at 810 nm and frequency-locked to a stable external cavity by sideband techniques . We use a triple servoloop to monitor the laser frequency via a mirror mounted on a piezoelectric ceramic and an electro-optic modulator placed inside the laser cavity. The residual jitter is mainly concentrated at low frequency and is less than 3 kHz rms. The frequency noise is less than 15 $`\mathrm{mHz}/\sqrt{\mathrm{Hz}}`$ above 1 MHz. The laser frequency is locked to a resonance of the high-finesse cavity by monitoring the residual light transmitted by the back mirror, via a control of the external cavity length. We use a mode cleaner to reduce the astigmatism of the laser beam. It consists of a non-degenerate linear cavity locked at resonance with the laser. The transmitted beam corresponds to a fundamental mode of the cavity, which is a gaussian TEM<sub>00</sub>. The mode matching of the resulting beam with the high-finesse cavity is equal to 98%. The intensity after the mode cleaner is actively stabilized by a variable attenuator inserted in front of the mode cleaner. One gets a 100-$`\mu `$W incident power on the high-finesse cavity with residual relative intensity fluctuations of less than 0.2%. Note that the incident power is low enough to neglect quantum effects of radiation pressure. Quantum noise induced by radiation pressure is less than 1% of the phase noise $`\delta \phi _n`$.
The phase of the field reflected by the high-finesse cavity is measured by homodyne detection (fig. 1). The reflected field is mixed on two photodiodes (FND100 from EGG Instruments) with a 10-mW local oscillator derived from the incident beam. We use a set of quarter-wave plates, a half-wave plate and polarizing beamsplitters to separate and mix those fields. The two photocurrents are preamplified with wideband and low-noise transimpedance amplifiers and their difference is sent to a spectrum analyzer. The overall quantum efficiency of the detection system is equal to 91%. The signal obtained on the spectrum analyzer is proportional to the fluctuations of the quadrature component of the reflected field in phase with the local oscillator. A servoloop monitors the length of the local oscillator arm so that we detect the phase quadrature of the reflected field. This setup is thus similar to an interferometer with dissymmetric arms. It indeed performs an interferometric measurement of the back-mirror position, the sensitivity being increased by the cavity finesse.
The last part of the experimental setup is used to optically excite the mechanical resonator. A 500-mW auxiliary beam derived from the titanium-sapphire laser is intensity-modulated by an acousto-optic modulator and reflected from the rear onto the back mirror. A modulated radiation-pressure force is thus applied to the resonator. The amplitude of this force can be changed by varying the depth of the intensity modulation. The auxiliary laser beam is uncoupled from the cavity by frequency filtering, due to an optical frequency shift of 200 MHz induced by the acousto-optic modulator, and by spatial filtering, due to a tilt angle of 10° between the beam and cavity axes. We have checked that the auxiliary beam has no spurious effect on the homodyne detection.
Figure 2 shows the experimental result of the optical excitation. Each square is obtained for a different modulation frequency of the auxiliary laser beam around the expected frequency of the fundamental mode of the mechanical resonator. The power of phase modulation of the reflected field is normalized to the shot-noise level, independently measured by sending only the local oscillator into the homodyne detection. We have checked that the phase noise of the reflected field corresponds to the shot-noise level when the laser is out of resonance with the high-finesse cavity. Any deviation of the phase from the shot-noise level is thus due to the interaction of the light with the cavity. Such a deviation reflects the mirror motion, and the resonance in figure 2 corresponds to the excitation of the fundamental acoustic mode of the resonator. The solid curve is a Lorentzian fit which shows that the mechanical response has a harmonic behavior around the resonance frequency, with a quality factor $`Q`$ of 44000.
As explained at the end of this paper, we have calibrated the measured displacement, and the resulting scale is shown on the right of figure 2. The displacement at resonance corresponds to an amplitude of $`1.6\times 10^{-15}`$ m. One can estimate the radiation pressure exerted by the auxiliary beam as $`F_{rad}=2\mathrm{}k\delta I=1.2\times 10^{-9}`$ N, where $`2\mathrm{}k`$ is the momentum exchange during a photon reflection and $`\delta I`$ is the intensity modulation. One thus finds that the mechanical susceptibility $`\chi \left[\mathrm{\Omega }\right]`$ has a Lorentzian shape around the mechanical resonance frequency $`\mathrm{\Omega }_M`$
$$\chi \left[\mathrm{\Omega }\right]=\frac{\chi _0}{1\mathrm{\Omega }^2/\mathrm{\Omega }_M^2i/Q},$$
(4)
with $`\chi _0=3.2\times 10^{-11}`$ $`\mathrm{m}/\mathrm{N}`$.
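As a consistency check, the sketch below evaluates Eq. (4) with the values quoted above ($`\chi _0=3.2\times 10^{-11}`$ m/N, $`Q=44000`$, $`F_{rad}=1.2\times 10^{-9}`$ N) and an assumed resonance frequency $`\mathrm{\Omega }_M/2\pi \simeq 2`$ MHz; the resulting resonant displacement $`\chi _0QF_{rad}\simeq 1.7\times 10^{-15}`$ m is close to the measured $`1.6\times 10^{-15}`$ m.

```python
import numpy as np

chi0, Q, F_rad = 3.2e-11, 44e3, 1.2e-9       # values quoted in the text
Omega_M = 2 * np.pi * 2e6                    # resonance frequency ~2 MHz (assumed)

def chi(Omega):
    """Eq. (4): chi_0 / (1 - Omega^2/Omega_M^2 - i/Q)."""
    return chi0 / (1 - (Omega / Omega_M) ** 2 - 1j / Q)

x_res = abs(chi(Omega_M)) * F_rad            # |chi| at resonance times the drive
print(x_res)                                 # ~1.7e-15 m, vs 1.6e-15 m measured
```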
Figure 3 shows the phase noise spectrum of the reflected beam obtained with a resolution bandwidth of 1 Hz and for the same frequency range (500-Hz span around the fundamental resonance frequency). The auxiliary laser beam is now turned off (no optical excitation) and the resonator is at room temperature. The spectrum is obtained by an average over 1000 scans of the spectrum analyzer. It is normalized to the shot-noise level and the vertical scale is smaller than the one of figure 2. The thin line in figure 3 corresponds to a theoretical estimation of the thermal noise at 300 K by using the mechanical susceptibility $`\chi \left[\mathrm{\Omega }\right]`$ derived from optical excitation (eq. 4). Note that there is no adjustable parameter and the excellent agreement with experimental data clearly shows that the peak observed in figure 3 corresponds to the thermal noise of the fundamental mode of the resonator.
We have calibrated the observed displacements by frequency modulation of the incident laser beam. The detuning between the laser and the cavity resonance indeed only depends on the optical frequency and on the cavity length. A displacement $`\delta x`$ of the back mirror is thus equivalent to a frequency modulation $`\delta \nu `$ of the laser related to $`\delta x`$ by
$$\frac{\delta \nu }{\nu }=\frac{\delta x}{L},$$
(5)
where $`\nu `$ is the optical frequency and $`L`$ the cavity length. We can thus calibrate the observed displacements by measuring the frequency modulation which yields the same phase signal for the reflected field.
The frequency modulation of the laser beam is obtained by applying a sinusoidal voltage on the internal electro-optic modulator of the laser. We determine the amplitude $`\delta \nu `$ of modulation by locking the mode cleaner at half-transmission and by measuring the intensity modulation of the transmitted beam. This intensity modulation is proportional to the ratio $`\delta \nu /\nu _{cav}`$ between the amplitude of frequency modulation and the cavity bandwidth $`\nu _{cav}`$ of the mode cleaner. We have determined this bandwidth with a good accuracy by measuring the transfer function of the mode cleaner at resonance for an intensity-modulated incident beam.
Figure 4 shows the result of the calibration. We applied a sinusoidal voltage to the laser with different amplitudes at a frequency of 2 MHz. The horizontal axis represents the amplitude $`\delta \nu `$ of frequency modulation determined from the mode-cleaner cavity. The vertical axis corresponds to the power of phase modulation observed in the field reflected by the high-finesse cavity. Experimental results represented by squares are obtained with a 1-Hz resolution bandwidth of the spectrum analyzer and are normalized to the shot-noise level. The linear fit (solid curve in figure 4) has a slope equal to 2 as expected in log-log scales since the power of phase modulation must be proportional to the square of the frequency modulation. From equation (5) one can associate a displacement $`\delta x`$ to any observed phase modulation of the reflected field. In particular, the shot-noise level corresponds to a frequency modulation $`\delta \nu _{min}`$ equal to 96 $`\mathrm{mHz}/\sqrt{\mathrm{Hz}}`$. The smallest observable thermal displacement $`\delta x_{min}`$ which corresponds to the shot-noise level is thus equal to
$$\delta x_{min}\left[2\mathrm{MHz}\right]=L\frac{\delta \nu _{min}}{\nu }=2.8\times 10^{-19}\mathrm{m}/\sqrt{\mathrm{Hz}}.$$
(6)
This experimental result can be compared to the theoretical prediction. Equation (3) corresponds to a static analysis for a lossless cavity and a perfect detection system. Cavity filtering at non-zero frequency and losses reduce the theoretical sensitivity. The proper expression of the minimum displacement at frequency $`\mathrm{\Omega }`$ is
$$\delta x_{min}\left[\mathrm{\Omega }\right]=\frac{\lambda }{16\mathcal{F}\sqrt{\overline{I}}}\frac{T_c+A}{\sqrt{\eta }T_c}\sqrt{1+\left(\mathrm{\Omega }/\mathrm{\Omega }_{cav}\right)^2},$$
(7)
where $`\eta `$ is the quantum efficiency of the detection, $`T_c`$ the transmission of the coupling mirror, $`A`$ the cavity losses and $`\mathrm{\Omega }_{cav}`$ the cavity bandwidth. The cavity behaves like a low-pass filter with a cutoff frequency $`\mathrm{\Omega }_{cav}`$. We have thus performed another sensitivity measurement at the frequency of 500 kHz. We have found that the shot-noise level corresponds to a frequency modulation $`\delta \nu _{min}`$ of 68 $`\mathrm{mHz}/\sqrt{\mathrm{Hz}}`$ and the sensitivity $`\delta x_{min}`$ is then equal to
$$\delta x_{min}\left[500\mathrm{kHz}\right]=2\times 10^{-19}\mathrm{m}/\sqrt{\mathrm{Hz}}.$$
(8)
Both experimental values (eqs. 6 and 8) are in perfect agreement with the theoretical values deduced from equation (7) with the parameters of the cavity (finesse $`\mathcal{F}=37000`$, coupler transmission $`T_c=60`$ ppm, cavity losses $`A=109`$ ppm, cavity bandwidth $`\mathrm{\Omega }_{cav}/2\pi =1.9`$ MHz, quantum efficiency $`\eta =0.91`$, wavelength $`\lambda =810`$ nm and incident power $`P=\left(hc/\lambda \right)\overline{I}=100`$ $`\mu `$W). The discrepancy is less than 5%.
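This comparison is easy to reproduce. The short sketch below evaluates Eq. (7) with the parameters just listed (physical constants rounded) and returns sensitivities within a few percent of the measured values; it is a numerical check, not part of the original analysis.

```python
import numpy as np

h, c = 6.626e-34, 2.998e8                    # Planck constant, speed of light
lam, Pin = 810e-9, 100e-6                    # wavelength (m) and incident power (W)
Fin, Tc, A, eta = 37_000, 60e-6, 109e-6, 0.91
f_cav = 1.9e6                                # cavity bandwidth (Hz)

I = Pin * lam / (h * c)                      # photon flux I = P lambda / (h c)

def dx_min(f):
    """Eq. (7) with Omega/Omega_cav = f/f_cav."""
    return (lam / (16 * Fin * np.sqrt(I))) * (Tc + A) / (np.sqrt(eta) * Tc) \
        * np.sqrt(1 + (f / f_cav) ** 2)

print(dx_min(500e3), dx_min(2e6))            # ~2.1e-19 and ~2.9e-19 m/sqrt(Hz)
```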
In conclusion, we have observed the Brownian motion of internal acoustic modes of a mirror with a very high sensitivity. This result demonstrates that a high-finesse cavity is a very efficient displacement sensor. The possibility of observing the thermal noise even far out on the wings of the mechanical resonances opens the way to a quantitative study of the spectral dependence of the Brownian motion. This would allow one to discriminate between different dissipation mechanisms in solids. Let us emphasize that our device also allows one to study with a very high accuracy the mechanical characteristics of the various acoustic modes (resonance frequency, quality factor, spatial structure, effective mass) and their coupling with the light. It is furthermore possible to obtain even higher sensitivities by increasing the finesse of the cavity or the incident light power. Mirrors with losses of the order of 1 ppm are now available and cavity finesses larger than $`3\times 10^5`$ have been obtained . For an incident power of 1 mW one would obtain a sensitivity better than $`10^{-20}`$ $`\mathrm{m}/\sqrt{\mathrm{Hz}}`$.
We gratefully thank J.M. Mackowski of the Institut de Physique Nucléaire (Lyon) for the optical coating of the mechanical resonator. YH acknowledges a fellowship from the Association Louis de Broglie d’Aide à la Recherche.
# THE LYMAN–𝛼 FOREST OF THE QSO IN THE HUBBLE DEEP FIELD SOUTH
Based on observations made with the NASA/ESA Hubble Space Telescope by the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5–26555.
## 1. INTRODUCTION
A satisfactory connection between quasar absorption lines and galaxies close to the QSO line of sight has been provided so far only by relatively low redshift studies ($`z<1.5`$), as galaxies at high redshifts are much harder to observe. For $`z>2`$, large samples of quasar absorption lines have been assembled because at these redshifts the many strong UV lines are redshifted into the optical range, where very sensitive ground-based observations can be obtained. However, a complete picture over the whole observable redshift range is missing because of the lack of information at intermediate redshifts ($`1.5<z<2`$).
With the advent of the high resolution UV spectrograph Space Telescope Imaging Spectrograph (STIS), a detailed study of the intermediate redshift Ly$`\alpha `$ forest and the connection to galaxy properties can now be obtained. Indeed, this was one of the motivations for the Hubble Deep Field South (HDFS). Much of the ground-based and HST HDFS campaign (Williams et al. 1999) has been devoted to the observations of the quasar J2233–606 ($`z_{em}=2.23`$) that lies in the STIS field (about 5 and 8 arcmin from the WFPC2 and NICMOS fields). The study of the quasar incorporates both spectroscopy (Sealy et al. 1998; Savaglio 1998; Outram et al. 1998; Ferguson et al. 1999) and imaging (Gardner et al. 1999).
Here we present the interesting features shown by the distribution of the Ly$`\alpha `$ clouds, obtained by fitting the absorption lines in the redshift range $`1.20<z<2.20`$ in the high and medium resolution spectra taken with STIS/HST (Ferguson et al. 1999; see also the HDFS web site at http://www.stsci.edu/ftp/observing/hdf/hdfsouth/hdfs.html) and UCLES/AAT (Outram et al. 1998). The spectral resolution ranges from FWHM = 10 km s<sup>-1</sup> in the interval $`\lambda \lambda =2670-3040`$ Å, to 50 km s<sup>-1</sup> in $`\lambda \lambda =3040-3530`$ Å, and to 8.5 km s<sup>-1</sup> in $`\lambda \lambda =3530-3900`$ Å.
## 2. The Doppler parameter and the column density redshift distribution
The line parameters have been obtained using the MIDAS package FITLYMAN (Fontana & Ballester 1995) in the spectral range 2670–3900 Å through $`\chi ^2`$ minimization of Voigt profiles. At shorter wavelengths, the Lyman limit of the system at $`z\simeq 1.943`$ absorbs a large fraction of the QSO flux, preventing the fit of absorption lines. The optical UCLES/AAT and STIS/HST spectra have been combined to simultaneously fit the Lyman series of the Ly$`\alpha `$ clouds, which gives more robust results than using the Ly$`\alpha `$ line alone. Only the parameters of Ly$`\alpha `$ clouds at $`z<1.60`$ have been obtained using the Ly$`\alpha `$ absorption line alone. The redshift distributions of the Doppler parameters and of the column densities are shown in Fig. 1, together with the instrumental Doppler width and the column density 4$`\sigma `$ detection limit for the Ly$`\alpha `$ lines. The sample does not include HI column densities associated with known metal systems, nor lines close ($`z>2.2`$) to the quasar redshift, which are considered to be affected by the ionization of the quasar. The number of identified intervening metal systems is 5. Three of them have a complex structure (for a total of 10 Ly$`\alpha `$ components) with an HI column density in each system of $`\mathrm{log}N_{HI}>16`$. These clouds are presumably associated with galaxies and we consider them a different population with respect to the lower HI column density clouds of the intergalactic medium, although many of the latter are presumably polluted with metals by nearby galaxies (Cowie & Songaila 1998). Metal systems have been included in the sample only to compare the number density redshift evolution with previous studies at lower and higher redshifts (see section 3). The total number of Ly$`\alpha `$ clouds excluding metal systems is then 210. The reported 4$`\sigma `$ detection limit is determined from the error array of the spectra and does not take the line blending effect into account; at $`z>1.7`$ blending is not very important because the line density at intermediate redshifts is relatively low, but it becomes particularly strong at lower redshifts, due to the presence of Lyman series lines of the higher redshift Ly$`\alpha `$ clouds, which prevents the detection of weak lines ($`N_{HI}<10^{13.5}`$ cm<sup>-2</sup>). We therefore take the completeness limit over the whole redshift range to be $`N_{HI}\simeq 10^{14}`$ cm<sup>-2</sup>.
Fig. 1– The Doppler parameter (lower panel) and column density (upper panel) vs. redshift of the Ly$`\alpha `$ forest along the J2233–606 line of sight. The solid and dotted lines in the lower panel are the instrumental Doppler widths along the spectra for the Ly$`\alpha `$ and Ly$`\beta `$ forests respectively. The dashed, dotted and solid lines in the upper panel represent the 4$`\sigma `$ HI column density detection limit for Doppler parameters of 40, 30 and 20 km s<sup>-1</sup> respectively.
Some interesting features can be noticed in the column density redshift distribution. There is a peak of high column density Ly$`\alpha `$ clouds at $`z=1.92-1.99`$ ($`\sim 60`$ $`h_{65}^{-1}`$ Mpc comoving). A metal system with a $`z\simeq 1.943`$ Lyman limit was first detected from the test STIS observations of the quasar. Therefore this peak might indicate the presence of a high density of galaxies at those redshifts. At lower redshifts (in the interval $`1.383<z<1.460`$), we see a region with a low density of lines. The corresponding line density is $`dn/dz=65`$, compared with an observed mean over the whole range of $`dn/dz=210`$ lines. There are no lines with HI column density larger than the completeness limit of 10<sup>14</sup> cm<sup>-2</sup>, while from the mean observed over the whole redshift range we would expect 4 lines. Although this is not statistically very significant, it is suggestive of the presence of a "void" of comoving size 94 $`h_{65}^{-1}`$ Mpc. The redshift measurements of galaxies from multi-color ground observations and from the very deep STIS images of the field will probably confirm whether this is real. The lack of Ly$`\alpha `$ lines could also be caused by a strong ionization of the clouds by the local UV radiation field, for instance in a quasar environment.
It is worth noting the decrement of lines going from the QSO redshift down to $`z\simeq 1.38`$, shown by the histogram of the entire sample (Fig. 2). This effect disappears when selecting strong lines, and so it may be due to the variation of the detection limit along the spectrum. At smaller redshifts, the number of lines increases again. We notice that at $`z\simeq 1.335`$ a CIV system has been tentatively identified (Ferguson et al. 1999), and a quasar at a distance of about 44.5” from the line of sight ($`\sim 300`$ $`h_{65}^{-1}`$ kpc) has been found from the ground by EMMI/NTT observations (Tresse et al. 1998). This might also be an indication of an overdensity of objects at that redshift.
Fig. 2– Histogram of the HI column density of Ly$`\alpha `$ clouds as a function of redshift for different thresholds. Error bars are the square-root of the number in each bin.
## 3. The number density evolution at $`z=1.202.20`$
The number density evolution of the Ly$`\alpha `$ forest, described by a power law of the type $`dn/dz\propto (1+z)^\gamma `$, has been studied at high redshift ($`1.7<z<4.1`$) using samples of high resolution data by Giallongo et al. (1996) and Kim et al. (1997). For lines with $`\mathrm{log}N_{HI}\ge 14.0`$, $`dn/dz`$ shows a fast evolution, with $`\gamma \simeq 3.6`$. The analysis at low redshifts (Weymann et al. 1998), based on low resolution spectroscopy of the Quasar Absorption Line Survey (QALS), gives in the range $`0.0<z<1.5`$ a much flatter $`dn/dz`$, consistent with weak negative evolution (for $`\mathrm{\Omega }_o=1`$): $`\gamma =0.16\pm 0.16`$. We note that in the two redshift regimes a variation of $`\gamma `$ with the column density threshold has been found (larger $`\gamma `$ for increasing threshold). This goes in the opposite direction to what is shown in Fig. 2, where $`\gamma `$ is smaller for the higher threshold. The discrepancy can be explained if a large number of lines with $`\mathrm{log}N_{HI}<14`$ is present, but not detected, at $`z=1.4-1.8`$.
Fig. 3– Number density evolution of the Ly$`\alpha `$ clouds with $`\mathrm{log}N_{HI}\ge 14.0`$ from $`z=0`$ to $`z=4`$. Filled symbols are for samples that include metal systems, open symbols do not. In the sample of Kim et al., only lines with $`\mathrm{log}N_{HI}\le 16.0`$ are included. The low redshift line is the fit with $`\gamma =0.16`$ obtained from a sample of Ly$`\alpha `$ clouds with equivalent width $`EW\ge 0.24`$ Å (Weymann et al. 1998), which corresponds to $`\mathrm{log}N_{HI}\ge 14.0`$ for $`b\simeq 26`$ km s<sup>-1</sup>. The high redshift line is the fit with $`\gamma =3.62`$.
The problem in comparing high and low redshift data stems from the different techniques used to measure column densities. At high redshifts, high resolution observations allow line fitting of Voigt profiles, which gives more robust results than the curve-of-growth technique used to estimate HI from equivalent width measurements of low resolution data at low redshifts. On the other hand, the strong evolution of the line density makes the deblending of complex structures rather difficult at high redshifts. The Ly$`\alpha `$ forest of J2233–606 lies between the two redshift regimes and so it is extremely useful for evaluating the correct connection between the two. The results for the two samples of Ly$`\alpha `$ clouds, with and without metal lines, are shown in Fig. 3 for lines with $`\mathrm{log}N_{HI}\ge 14.0`$. The data from J2233–606 have been binned into three redshift ranges, $`\mathrm{\Delta }z=1.201-1.535`$, $`1.535-1.870`$ and $`1.870-2.204`$, corresponding roughly to the STIS high resolution and medium resolution intervals and to the UCLES high resolution interval. The last two bins give a number density of lines that better matches the high and low redshift observations if metal systems are excluded. The first bin gives a larger number of lines, $`66\pm 14`$ including metal systems, compared with the $`40\pm 4`$ lines given by the QALS best fit (Weymann et al. 1998). That deviation is not particularly significant, and could indicate a peculiarity along the line of sight to J2233–606. However, a similar excess at $`1.505<z<1.688`$ ($`63\pm 14`$) is also shown by the quasar UM18, which is part of the FOS sample of the QALS but was not included in the fitting of the number density evolution. The excess found in J2233–606 can be due to the presence of a cluster of lines at $`z\simeq 1.335`$, as suggested in the previous section. Other possible explanations would be the blending effect, by which nearby lines with column density below the threshold would artificially appear in the limited signal-to-noise spectrum as stronger single lines with HI above the threshold, or the presence of a small number ($`\sim 5`$ would be enough) of unidentified metal lines interpreted as Ly$`\alpha `$ lines. This latter possibility looks very unlikely because, even in clusters, metal lines typically appear very narrow at high resolution.
Fig. 4– Two point correlation function of the Ly$`\alpha `$ forest in J2233–606 for lines with $`\mathrm{log}N_{HI}>13.0`$ and 13.8. The Poisson error is also given as dashed and solid lines respectively.
## 4. The clustering properties
A positive signal in the clustering properties of the Ly$`\alpha `$ forest has been found thanks to the advent of high resolution spectroscopy. The investigation has been mainly based on the analysis of the two point correlation function (Cristiani et al. 1997; Kim et al. 1997), but also on that of the power spectrum of mass density fluctuations (Hui et al. 1997; Croft 1998; Amendola & Savaglio 1998). Results have shown that the clustering is present up to $`\mathrm{\Delta }v\simeq 300`$ km s<sup>-1</sup> and that the amplitude increases with increasing column density threshold of the lines. Most interesting for models of large scale structure formation is the redshift evolution found, showing larger values at smaller redshifts. Although the analysis of the two point correlation function is uncertain when the sample is too small, we report the result for the forest of J2233–606 in Fig. 4 for two column density thresholds. The signal is significant in the first bin ($`D\simeq 0.8`$ $`h_{65}^{-1}`$ Mpc) and is higher for the higher HI threshold, at about the 6 and 4 $`\sigma `$ level for $`\mathrm{log}N_{HI}>13.0`$ and 13.8 respectively. In a previous study by Cristiani et al. (1997), the two point correlation function for $`\mathrm{log}N_{HI}>13.8`$ at $`\mathrm{\Delta }v=100`$ km s<sup>-1</sup> has shown a redshift evolution, being $`\xi _{100}\simeq 0.20`$ at $`<z>=3.85`$ (consistent with no clustering), $`\xi _{100}\simeq 0.75`$ at $`<z>=3.40`$, and $`\xi _{100}\simeq 0.85`$ at $`<z>=2.40`$. The line of sight of J2233–606 shows $`\xi _{100}\simeq 1.3`$ at $`<z>=1.7`$ and confirms this evolution.
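For reference, here is a minimal sketch of the standard pair-count estimator, $`\xi (\mathrm{\Delta }v)=N_{obs}/N_{exp}-1`$, with the expected counts estimated from randomized line lists. The redshift list below is a random placeholder, and this is not necessarily the exact recipe used for Fig. 4.

```python
import numpy as np

c_kms = 2.998e5
rng = np.random.default_rng(1)
z = np.sort(rng.uniform(1.20, 2.20, 200))    # placeholder line redshifts

def pair_velocities(z):
    """Velocity splittings dv = c dz / (1 + z_mean) for all line pairs."""
    zi, zj = np.meshgrid(z, z, indexing="ij")
    m = np.triu(np.ones(zi.shape, dtype=bool), k=1)
    return c_kms * (zj[m] - zi[m]) / (1.0 + 0.5 * (zi[m] + zj[m]))

bins = np.arange(0.0, 600.0, 100.0)          # 100 km/s bins, as in the text
obs, _ = np.histogram(pair_velocities(z), bins=bins)
exp = np.zeros(len(bins) - 1)
for _ in range(200):                         # Monte-Carlo estimate of N_exp
    zr = np.sort(rng.uniform(1.20, 2.20, z.size))
    exp += np.histogram(pair_velocities(zr), bins=bins)[0]
exp /= 200.0
print(obs / exp - 1.0)                       # xi(dv); ~0 for unclustered toy data
```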
## 5. The Doppler parameter distribution
Thanks to the high resolution observations of J2233–606, the analysis of the distribution of the Doppler parameter of the Ly$`\alpha `$ forest can be performed for the first time at these low redshifts. At higher redshifts Kim et al. (1997) have found that the median value of the Doppler parameter is $`b\simeq 30`$ km s<sup>-1</sup> at a redshift of $`z\simeq 3.7`$, increasing to $`b\simeq 35-40`$ km s<sup>-1</sup> at a redshift of $`z\simeq 2.3`$ for lines with $`13.8<\mathrm{log}N_{HI}<16`$, which has been interpreted as an increase of the temperature or of the kinematic broadening of the clouds. The Doppler distribution in J2233–606 is shown in Fig. 5 for the whole sample and for the subsample of lines (60% of the total) with smaller uncertainties ($`\sigma (b)<8`$ km s<sup>-1</sup> and $`\sigma (\mathrm{log}N_{HI})<0.5`$). The fit of the two Gaussians (in the interval $`0<b<60`$ km s<sup>-1</sup>) gives in the two cases a mean value of 27 and 26 km s<sup>-1</sup> respectively ($`\chi ^2`$ is 2.4 and 1.5); these are slightly different from the median values (29 and 25 km s<sup>-1</sup>) over the whole $`b`$ range. The median values are larger if lines with $`13.8<\mathrm{log}N_{HI}<16`$ are selected, being 31 km s<sup>-1</sup> in both cases. Thus, the median value we find over the redshift range $`1.20<z<2.20`$ is considerably below the value of $`b\simeq 40`$ km s<sup>-1</sup> expected on the basis of the redshift evolution suggested by Kim et al. (1997).
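For a purely thermal Doppler parameter, $`b=\sqrt{2k_BT/m_H}`$, so the median values quoted here translate directly into temperatures. The short sketch below performs the conversion; it assumes pure hydrogen thermal broadening with no turbulent contribution, and reproduces the $`4.3\times 10^4`$ K quoted in the conclusions for $`b\simeq 26-27`$ km s<sup>-1</sup>.

```python
m_H, k_B = 1.6726e-27, 1.3807e-23            # proton mass (kg), Boltzmann constant (J/K)

def T_thermal(b_kms):
    """T = m_H b^2 / (2 k_B) for a purely thermal Doppler parameter b."""
    b = b_kms * 1e3                          # km/s -> m/s
    return m_H * b ** 2 / (2 * k_B)

print(T_thermal(26.0), T_thermal(27.0))      # ~4.1e4 K and ~4.4e4 K
```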
## 6. Conclusions
In contrast to the empty field used for the HDF North program, the HDFS has been selected close to a high redshift quasar, J2233–606 ($`z_{em}=2.23`$). The QSO is centered in the STIS field, located approximately 5 and 8 arcmin from the WFPC2 and NICMOS fields respectively. The Ly$`\alpha `$ forest of J2233–606 has been observed at high and medium resolution (FWHM = 10, 50 and 8.5 km s<sup>-1</sup> in the wavelength intervals $`\lambda \lambda =2670`$–3040 Å, 3040–3530 Å and 3530–3900 Å) using STIS/HST (Ferguson et al. 1999) and UCLES/AAT (Outram et al. 1998), giving the opportunity to analyze the properties of the intergalactic medium in the large redshift interval $`z=1.20`$–2.20. This is the first time that the Ly$`\alpha `$ forest is studied in detail for $`z\lesssim 2`$. More than 200 Ly$`\alpha `$ cloud redshifts have been found and 63 have HI column density in excess of $`10^{14}`$ atoms cm<sup>-2</sup>, 11 of which have associated metal absorption. The redshift distribution of the lines has shown a deficiency of lines (5 are found against an expected mean of 16) in the interval $`1.383<z<1.460`$, corresponding to a comoving size of 94 $`h_{65}^{-1}`$ Mpc ($`\mathrm{\Omega }_o=1`$). The region around the metal system at $`z=1.942`$, which is also responsible for the Lyman limit observed at $`\lambda <2700`$ Å, shows an excess of strong lines in the interval $`1.92<z<1.99`$, indicating a clustering of Ly$`\alpha `$ clouds. An excess of lines is also found for $`1.20<z<1.38`$. The evolution of the number density in the total sample shows a rapid decrease of the number of lines with decreasing redshift. This is partially due to the blending effect of low HI column density lines, which is stronger at lower redshifts. Indeed, it tends to disappear when only strong lines ($`N_{HI}\ge 10^{14}`$ cm<sup>-2</sup>) are selected. On average, the number of strong Ly$`\alpha `$ clouds per unit redshift is higher than the extrapolation from studies at lower and higher redshifts: $`63\pm 8`$ at $`<z>=1.702`$, including metal systems, are found, to be compared with the 41 and 27 expected from the low and the high redshift predictions. A larger sample of lines will show whether this line density is typical at that redshift or peculiar to the J2233–606 line of sight. The two point correlation function has shown a positive signal up to a scale of the order of 3 $`h_{65}^{-1}`$ Mpc. The amplitude found seems to confirm the evolution with redshift, being larger at lower $`z`$. The mean Doppler parameter of the Ly$`\alpha `$ forest is around 26–27 km s<sup>-1</sup>, corresponding, in the case of thermal broadening, to a temperature of $`\simeq 4.3\times 10^4`$ K. The median Doppler parameter evolution reported by Kim et al. (1997) is not confirmed by this Ly$`\alpha `$ forest.
###### Acknowledgements.
It is a pleasure to thank L. Amendola, B. Carswell, S. Casertano, G. Ganis, B. Jannuzi, M. Livio, P. Outram and R. Weymann for useful suggestions.
Fig. 5– Doppler parameter histograms of Ly$`\alpha `$ clouds in J2233–606 for the whole sample (upper panel) and for the subsample of lines with 1$`\sigma `$ errors in the column density and in $`b`$ smaller than 0.5 dex and 8 km s<sup>-1</sup> respectively (124 lines out of a total of 210). The fit of a Gaussian for $`b<60`$ km s<sup>-1</sup> is shown as well. Error bars are the square root of the number of lines in each bin.
# Probing the two temperature paradigm for advection dominated accretion flow: test for the component thermalization time-scale passed.
## Abstract
We report here on a calculation of the thermalization time-scale in the two temperature advection dominated accretion flow (ADAF) model. It is established that the time required to equalize the electron and ion temperatures via electron-ion collisions in an ADAF with plausible physical parameters greatly exceeds the age of the Universe, which corroborates the validity of one of the crucial assumptions of the ADAF model, namely the existence of a hot two temperature plasma. This work is motivated by the recent success (Mahadevan 1998a,b) of the ADAF model in explaining the emitted spectrum of Sgr A.
accretion, accretion discs — black hole physics — Galaxy: center
Identification of the nature of the enigmatic radio source Sgr A at the Galactic center has been a source of debate since its discovery. Observations of stellar motions at the Galactic center (Eckart & Genzel, 1997; Genzel et al., 1996) and the low proper motion ($`\sim `$ 20 km sec<sup>-1</sup>; Backer, 1996) of Sgr A indicate that, on the one hand, it is a massive $`(2.5\pm 0.4)\times 10^6M_{\odot }`$ object dominating the gravitational potential in the inner $`0.5`$ pc region of the Galaxy. On the other hand, observations of stellar winds and other gas flows in the vicinity of Sgr A suggest that the mass accretion rate $`\dot{M}`$ is about $`6\times 10^{-6}M_{\odot }`$ yr<sup>-1</sup> (Genzel et al., 1994). This implies that the luminosity of the central object should be more than $`10^{40}`$ erg sec<sup>-1</sup>, provided the radiative efficiency is the usual 10%. However, observations indicate that the bolometric luminosity is actually less than $`10^{37}`$ erg sec<sup>-1</sup>. This discrepancy has been a source of exhaustive debate in the recent past.
The broad-band emission spectrum of Sgr A can be reproduced either in the quasi-spherical accretion model (Melia, 1992, 1994) with $`\dot{M}\sim 2\times 10^{-4}M_{\odot }`$ yr<sup>-1</sup> or by a combination of a disk plus radio-jet model (Falcke et al., 1993a, 1993b). As pointed out by Falcke and Melia (1997), quasi-spherical accretion seems unavoidable at large radii, but the low actual luminosity of Sgr A points toward a much lower accretion rate in a starving disk. Therefore, Sgr A can be described by a model of a fossil disk fed by quasi-spherical accretion. Recently, Tsiklauri & Viollier (1998) have proposed an alternative model for the mass distribution at the galactic center, in which the customary supermassive black hole is replaced by a ball composed of self-gravitating, degenerate neutrinos. It has been shown that a neutrino ball with a mass of $`2.5\times 10^6M_{\odot }`$, composed of neutrinos and antineutrinos with masses $`m_\nu \gtrsim 12.0`$ keV$`/c^2`$ for $`g=2`$ or $`m_\nu \gtrsim 14.3`$ keV$`/c^2`$ for $`g=1`$, where $`g`$ is the spin degeneracy factor, is consistent with the current observational data. See also Munyaneza, Tsiklauri and Viollier (1998) for future tests of the model. Tsiklauri & Viollier (1999) have performed calculations of the spectrum emitted by Sgr A in the framework of standard accretion disk theory, assuming that Sgr A is a neutrino ball with the above mentioned physical properties, and established that at least the part of the calculated spectrum where the observational data are most reliable is consistent with the observations.
Probably the most successful model consistent with the observed emission spectrum of Sgr A has been developed by Narayan et al., 1995, 1998 (see also Manmoto et al., 1997). This model is based on the concept of advection dominated accretion flow (ADAF), in which most of the energy released by viscosity in the disk is carried along with the plasma and lost into the black hole, while only a small fraction is actually radiated off. Recent papers by Mahadevan (1998a,b) have significantly advanced the ADAF model. The inclusion of an additional emission component, namely synchrotron radiation from $`e^\pm `$ created via the decay of charged pions, which in turn are produced through proton-proton collisions in the ADAF, has significantly improved the fit to the Sgr A spectrum in the low frequency band. With the latter discrepancy removed, the ADAF model of Sgr A (apart from the size versus frequency constraints, Lo et al. 1998 and references therein, which remain problematic for all current emission models of the radio source anyway) seems to be the most viable alternative. Thus, the basic assumptions of the ADAF model should be carefully examined from the point of view of physical consistency. As appropriately pointed out by Mahadevan (1998b), in order for the ADAF solutions to exist two basic assumptions in plasma physics must be satisfied: (a) the existence of a hot two temperature plasma, and (b) that the viscous energy generated primarily heats the protons. As to assumption (b), its validity would be hard to verify as it is related to the yet unknown mechanism of viscosity in the accretion flow. The latter problem stands on its own in astrophysics. As concerns assumption (a), it is from the field of plasma physics, a branch of physics which has been extensively studied in the laboratory, where we seem to have a more comprehensive understanding of the underlying basic physical phenomena than in astrophysics. Thus, motivated by the recent success (Mahadevan 1998a,b) of the ADAF model in explaining the emitted spectrum of Sgr A, we set out to check the validity of assumption (a).
It would be reasonable to believe that if the time required to equalize the electron and ion temperatures appears to be sufficiently large, then one may be confident that the assumption of the existence of a two temperature plasma in the ADAF is physically justified. The relevant time scale for the temperature equalization can be calculated using well formulated methods known in plasma physics. The rate at which temperature equilibrium between the electrons and ions is approached is determined by (see e.g. Melrose, 1986):
$$\frac{dT_e}{dt}=\nu _{eq}^{(e,i)}(T_i-T_e),$$
$`(1)`$
$$\frac{dT_i}{dt}=-\nu _{eq}^{(i,e)}(T_i-T_e),$$
$`(2)`$
with
$$\nu _{eq}^{(e,i)}=\frac{e^2q_i^2n\mathrm{ln}\mathrm{\Lambda }^{(e,i)}}{3(2\pi )^{1/2}\pi m_em_i\epsilon _0^2(V_e^2+V_i^2)^{3/2}}.$$
$`(3)`$
Here $`\mathrm{ln}\mathrm{\Lambda }^{(e,i)}`$ is the Coulomb logarithm for electron-ion collisions, given by $`\mathrm{ln}\mathrm{\Lambda }^{(e,i)}=22.0-0.5\mathrm{ln}n_e+\mathrm{ln}T_e`$ ($`T_e>1.4\times 10^5`$ K), $`e`$ and $`q_i`$ are the charges of electrons and ions respectively, $`m_e`$, $`m_i`$ and $`T_e`$, $`T_i`$ are their masses and temperatures (in kelvin), $`V_e`$ and $`V_i`$ are the thermal velocities of the electrons and ions, $`n`$ is the number density of the plasma (we have assumed global charge neutrality of the ADAF, i.e. $`n=n_e=n_i`$) and finally, as the SI system of units is used, $`\epsilon _0=8.8541878\times 10^{-12}`$ F$`/`$m.
Now, writing the thermal velocities $`V_{e,i}`$ as $`V_{e,i}=\sqrt{T_{e,i}/m_{e,i}}`$ in Eqs. (1)–(3), these become a closed set of ordinary differential equations for $`T_e`$ and $`T_i`$. We set $`q_i=e`$ and take $`m_i`$ to be the proton mass. We solve Eqs. (1)–(3) numerically using the Numerical Recipes software, namely the odeint driving routine with the fifth order Cash-Karp Runge-Kutta method (tolerance error $`10^{-15}`$). Calculations were performed for values of the number density ranging from $`10^{16}`$ to $`10^{31}`$ m<sup>-3</sup>. The lower end of the number density range is taken according to the actual value of $`n`$ in the ADAF around Sgr A (Manmoto 1997, 1999). In Fig. 1 the number density profile (in cm<sup>-3</sup>) is plotted. As established by Manmoto (1997, 1999), such an ADAF number density profile corresponds to the case in which the best fit of the ADAF model to the observed emission spectrum is achieved. We gather from this plot that the maximal value of $`n`$ actually attained is somewhat less than $`10^{10}`$ cm<sup>-3</sup>, the value we use in our calculations. Naturally, the more dilute the plasma, the more time is required to equilibrate the electron and ion temperatures via electron-ion collisions. Therefore, the actual temperature equilibration time is even larger than the values obtained here.
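A minimal sketch of this integration, implementing Eqs. (1)–(3) literally with the unit conventions stated above ($`k_B=1`$, temperatures in kelvin, $`V_{e,i}=\sqrt{T_{e,i}/m_{e,i}}`$), is given below; we use SciPy's LSODA driver in place of the Cash-Karp Runge-Kutta routine of the original computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

e, eps0 = 1.602177e-19, 8.8541878e-12     # SI values of e and epsilon_0
m_e, m_i = 9.109382e-31, 1.672622e-27     # ions are protons, so q_i = e
n = 1.0e16                                # number density [m^-3]

def nu_eq(Te, Ti):
    lnL = 22.0 - 0.5 * np.log(n) + np.log(Te)   # Coulomb logarithm
    V2 = Te / m_e + Ti / m_i                    # V_e^2 + V_i^2, as in the text
    return (e**4 * n * lnL /
            (3.0 * (2.0*np.pi)**0.5 * np.pi * m_e * m_i * eps0**2 * V2**1.5))

def rhs(t, T):
    dT = nu_eq(T[0], T[1]) * (T[1] - T[0])
    return [dT, -dT]                            # Eqs. (1) and (2)

print(f"1/nu_eq initially: {1.0/nu_eq(10**9.5, 1e12)/3.156e7:.2e} yr")
sol = solve_ivp(rhs, [1.0, 1e30], [10**9.5, 1e12], method="LSODA", rtol=1e-12)
```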
The results of our calculations are presented in Fig. 2. For the initial values of the temperatures we have used $`T_e=10^{9.5}`$ K and $`T_i=10^{12}`$ K, respectively (Mahadevan 1998a,b). We gather from the plot that the time required to equalize the temperatures of ions and electrons significantly exceeds the age of the Universe. Therefore we conclude that the assumption of the existence of a hot two temperature plasma is valid, or at least that the initial temperature difference will not be washed out by electron-ion collisions within the age of the Universe.
The only concern one has to bear in mind is that the formulas used in this paper are, strictly speaking, valid in the non-relativistic plasma regime (Melrose, 1999), while the electron and ion temperatures concerned are relativistic (for the electrons $`\gamma \sim 200`$). However, the temperature equalization time-scale obtained is so large that even the inclusion of relativistic effects in our estimates would not change the basic results of this paper drastically.
I would like to thank T. Manmoto (Kyoto University) for calculating the ADAF number density profile and kindly providing the Fig. 1. Also, I am thankful to D. Melrose (University of Sydney) for providing the exact reference of his book and useful comments on its contents, and to R. Mahadevan (IoA, Cambridge) for clarifying to me some points of the ADAF model.
figure captions:
Fig. 1: The number density ($`n`$) profile which corresponds to the case in which the best fit of the ADAF model (Manmoto 1997, 1999) to the observed emission spectrum is achieved.
Fig. 2: Solutions of Eqs. (1)–(3) for the three values of the number density (log-log plot). Thin lines correspond to $`n=n_e=n_i=10^{31}`$ m<sup>-3</sup>, thick lines correspond to $`n=n_e=n_i=10^{26}`$ m<sup>-3</sup>, while the thickest lines correspond to $`n=n_e=n_i=10^{16}`$ m<sup>-3</sup>. Solid lines correspond to $`T_e(t)`$ whereas dashed lines correspond to $`T_i(t)`$. Note that equalization of the temperatures occurs at times greatly exceeding the age of the Universe ($`\sim 10^{10}`$ yr) and that decreasing the number density postpones the temperature equalization, which is, of course, in accordance with general physical expectations.
# 1 "Handbag" diagrams : a) for DVCS (left) and b) for meson production (right).
Deeply Virtual Electroproduction
of Photons and Mesons.
M. Guidal<sup>a</sup>, M. Vanderhaeghen<sup>b</sup>
<sup>a</sup> IPN Orsay, F-91406 Orsay, France
<sup>b</sup> University Mainz, D-55099 Mainz, Germany
Much of the internal structure of the nucleon has been revealed during the last two decades through the inclusive scattering of high energy leptons on the nucleon in the Bjorken, or "Deep Inelastic Scattering" (DIS), regime ($`Q^2,\nu \to \mathrm{\infty }`$ with $`x_B=\frac{Q^2}{2M\nu }`$ finite). Simple theoretical interpretations of the experimental results and quantitative conclusions can be reached in the framework of QCD when one sums over all the possible hadronic final states. For instance, unpolarized DIS brought us evidence of the quark and gluon substructure of the nucleon, quarks carrying about 45% of the nucleon momentum. Furthermore, polarized DIS revealed that about 25% of the spin of the nucleon is carried by the quarks' intrinsic spin.
Now, with the advent of the new generation of high-energy, high-luminosity lepton accelerators combined with large acceptance spectrometers, a wide variety of exclusive processes in the Bjorken regime can be envisaged to become accessible experimentally. Until recently, no sound theoretical formalism allowed one to interpret such processes in a unified way. It now appears that such a coherent description is under way through the formalism of new generalized parton distributions, the so-called 'Off-Forward Parton Distributions' (OFPD's). It has been shown that these distributions, which parametrize the structure of the nucleon, allow one to describe, in leading order perturbative QCD (PQCD), various exclusive processes such as, in particular, Virtual Compton Scattering and (longitudinal) vector and pseudo-scalar meson electroproduction. Maybe most importantly, Ji showed that the second moment of these OFPD's gives access to the sum of the quark spin and quark orbital angular momentum contributions to the nucleon spin, which may shed light on the "spin-puzzle".
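For orientation, Ji's angular momentum sum rule can be quoted explicitly (in the standard form found in the literature, using the OFPD notation introduced below):

$$J_q=\frac{1}{2}\underset{t\to 0}{\mathrm{lim}}\int _{-1}^1dxx\left[H^q(x,\xi ,t)+E^q(x,\xi ,t)\right],$$

where $`J_q`$ is the total (spin plus orbital) angular momentum carried by quarks of flavor $`q`$.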
In this paper, after a brief summary of the properties of the OFPD's, we give some examples of the experimental opportunities to access the OFPD's at the current high-energy lepton facilities: JLab ($`E_e\le 6`$ GeV), HERMES ($`E_e`$=27 GeV) and COMPASS ($`E_\mu `$=200 GeV).
Recently, Ji and Radyushkin have shown that the leading order PQCD DVCS amplitude in the forward direction can be factorized into a hard scattering part (exactly calculable in PQCD) and a nonperturbative nucleon structure part, as illustrated in Fig.(1-a). In these so-called "handbag" diagrams of Fig.(1), the lower blob, which represents the structure of the nucleon, can be parametrized, at leading order PQCD, in terms of 4 generalized structure functions, the OFPD's. These are defined as $`H,\stackrel{~}{H},E,\stackrel{~}{E}`$, and depend upon three kinematical invariants: $`x`$, $`\xi `$, $`t`$. $`H`$ and $`E`$ are spin independent and $`\stackrel{~}{H}`$ and $`\stackrel{~}{E}`$ are spin dependent.
The OFPD's $`H`$ and $`\stackrel{~}{H}`$ are actually generalizations of the parton distributions measured in deep inelastic scattering. Indeed, in the forward direction, $`H`$ reduces to the quark distribution and $`\stackrel{~}{H}`$ to the quark helicity distribution measured in deep inelastic scattering. Furthermore, at finite momentum transfer, there are model independent sum rules which relate the first moments of these OFPD's to the elastic form factors. The OFPD's reflect the structure of the nucleon independently of the reaction which probes the nucleon. They can also be accessed through the hard exclusive electroproduction of mesons ($`\pi ^0`$, $`\rho ^0`$, $`\omega `$, $`\varphi `$,…; see Fig.(1-b)), for which a QCD factorization proof was given recently. According to Ref., the factorization applies when the virtual photon is longitudinally polarized, because in this case the end-point contributions in the meson wave function are power suppressed. It was also shown in Ref. that the cross section for a transversely polarized photon is suppressed by 1/$`Q^2`$ compared to that for a longitudinally polarized photon. Because the transition at the upper vertices of Fig.(1-b) is dominantly helicity conserving at high energy and in the forward direction, the vector meson will also be predominantly longitudinally polarized (notation $`\rho _L^0,\omega _L,\varphi _L`$) for a longitudinal photon. By identifying the polarization of the vector meson through its decay angular distribution, one can then extract the longitudinal part of the electroproduction cross sections.
It was also shown in Ref. that leading order PQCD predicts that the vector meson channels ($`\rho _L^0`$, $`\omega _L`$, $`\varphi _L`$) are sensitive only to the unpolarized OFPD's ($`H`$ and $`E`$), whereas the pseudo-scalar channels ($`\pi ^0,\eta ,\mathrm{}`$) are sensitive only to the polarized OFPD's ($`\stackrel{~}{H}`$ and $`\stackrel{~}{E}`$). In comparison with meson electroproduction, we recall that DVCS depends at the same time on both the polarized and unpolarized OFPD's.
For a first exploratory approach, we will now show that the meson channels hold the best promise due to their relatively high cross-sections. First estimates for the $`\pi ^0`$ and $`\rho _L^0`$ cross sections, besides the $`\gamma `$-channel, were given in Refs., using an educated guess for the OFPD's which consists of a product of elastic form factors and quark distributions measured in DIS. This ansatz satisfies the first sum rules and the corresponding distributions obviously reduce to the quark distributions from DIS in the forward direction.
We compare in Fig.(2) the $`\rho _L^0`$, $`\pi ^0`$ and $`\gamma `$ cross sections as a function of the beam energy at a fixed $`Q^2`$ = 2 GeV<sup>2</sup> and $`x_B`$ = 0.3. It is clear from this figure that the $`\rho `$ channel is very favorable. Its cross section is the highest because it depends on the unpolarized OFPD's ($`H`$ and $`E`$). The $`\omega _L`$ channel has a cross section substantially higher than what the ratio $`\sigma _\omega `$/$`\sigma _\rho =\frac{1}{9}`$ predicted by the diffractive mechanism would imply, and this is essentially due to the quark exchange mechanism (QEM). The $`\omega _L`$ and $`\rho _L^0`$ channels probe different combinations of the $`u`$ and $`d`$ OFPD's, and a measurement of both therefore allows one to separate the $`u`$ and $`d`$-quark unpolarized OFPD's. The $`\pi ^0`$ channel depends on the polarized OFPD's ($`\stackrel{~}{H}`$ and $`\stackrel{~}{E}`$) and the PQCD QEM therefore gives a lower cross section. DVCS is proportional to both the polarized and the unpolarized OFPD's, as already mentioned, but it has an extra $`\alpha _{em}`$ coupling (due to the final state photon) which reduces the cross section. (By comparison, the meson final states go through the exchange of a gluon and therefore carry an $`\alpha _S`$ coupling.) Furthermore, at JLab energies, DVCS suffers from a competing process which leads to the same final state, the Bethe-Heitler (BH) process. This extra "parasite" mechanism is dominant at 6 GeV and renders the extraction of the DVCS contribution from the cross section very difficult. This "parasite" process is absent in the case of meson electroproduction. Going up in energy, the increasing virtual photon flux factor boosts the $`\rho _L^0`$ and $`\pi ^0`$ leptoproduction cross sections and the DVCS part of the $`\gamma `$ leptoproduction cross section. For the $`\gamma `$ electroproduction cross section, the BH process is hardly influenced by the beam energy and therefore overwhelms the DVCS cross section at low beam energies. For a study of the OFPD's, such a figure seems to favor high-energy experimental facilities such as COMPASS. However, it should be clearly kept in mind that the actual count rates will be weighted by the luminosity. So, in spite of the relatively "low" energy of the JLab incident beam, the higher luminosity and the better resolution that one can reach with the JLab large acceptance CLAS detector will allow count rates equivalent to those of the other two facilities in roughly the same kinematical range (but in a shorter period).
Before considering the extraction of the OFPD's from the data, it is mandatory to first demonstrate that the scaling regime has been reached. In leading order PQCD, the DVCS transverse cross section $`\frac{d\sigma _T}{dt}`$ is predicted to behave as $`\frac{1}{Q^4}`$, whereas the mesons' longitudinal cross sections obey a $`\frac{1}{Q^6}`$ scaling (due to the "extra" gluon exchange for the mesons, see Fig. 1). Recently, an experiment has been approved at JLab to investigate this scaling behavior. Figure 3 shows the estimated lever arm reachable at JLab with 400 hours of beam time in the CLAS detector for the $`\rho ^0`$ channel. With a maximum $`Q^2`$ of $`\simeq `$ 3.5 GeV<sup>2</sup> (for $`x_B`$ around 0.3), the cross section can be measured over about a decade. This should provide a sufficient lever arm to test the scaling prediction and to determine at what value of $`Q^2`$ this $`\frac{1}{Q^6}`$ scaling behavior sets in. With a JLab 8 GeV incident energy, the lever arm extends to 4.5 GeV<sup>2</sup>.
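The extraction of the scaling power itself is straightforward once cross sections are measured at several $`Q^2`$; a schematic version is given below, with synthetic placeholder data generated assuming $`1/Q^6`$ scaling (these are not CLAS measurements).

```python
import numpy as np

rng = np.random.default_rng(0)
Q2 = np.array([1.5, 2.0, 2.5, 3.0, 3.5])             # GeV^2
# synthetic longitudinal cross sections with 1/Q^6 scaling plus 5% noise
sigma_L = 50.0 / Q2**3 * (1.0 + 0.05 * rng.normal(size=Q2.size))

slope, _ = np.polyfit(np.log(Q2), np.log(sigma_L), 1)
print(f"d ln(sigma_L)/d ln(Q^2) = {slope:.2f}   (expect -3 for 1/Q^6)")
```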
In conclusion, we believe that a broad new physics program, i.e. the study of exclusive reactions at large $`Q^2`$ in the valence region (where the quark exchange mechanism dominates), opens up. By "constraining" the final state of the DIS reaction, instead of summing over all final states, one accesses more fundamental structure functions of the nucleon, i.e. the OFPD's. These functions provide a unifying link between a whole class of reactions (elastic and inelastic) and fundamental quantities as diverse as form factors, parton distributions, etc.
# Self-organization of vortices in type-II superconductors during magnetic relaxation
## I Introduction
The magnetic response of hard type-II superconductors, in particular magnetic flux creep, is a timely issue in contemporary research (see for a review). In the early 1960s a very useful model of the critical state was developed to describe the magnetic behavior of type-II superconductors. One of the distinguishing features of this behavior, observed experimentally, is that the density of flux lines varies across the whole sample. This model of the critical state remains in use, even though significant progress has been made in understanding the particular mechanisms of magnetization and creep in type-II superconductors. It has also been noted that the magnetic flux distribution in type-II superconductors is, in many aspects, similar to a sandpile formed when, for example, sand is poured onto a stage. When a steady state is reached, the slope of such a pile is analogous to the critical current density $`j_c`$ of a superconductor. Study of the dynamics (i.e. sand avalanches) of such strongly-correlated many-particle systems has led to the development of a new concept, called self-organized criticality (SOC), proposed originally by Bak and co-workers. Tang first analyzed the direct application of SOC to type-II superconductors. Later, numerous studies significantly elaborated on this topic.
In practice, especially in high-T<sub>c</sub> superconductors, the persistent current density $`j`$ in the experiment is much lower than the critical current density $`j_c`$ due to "giant" flux creep. The concept of SOC applies strictly only to the critical state $`j=j_c`$, and it describes the system dynamics towards the critical state. Nevertheless, it is tempting to analyze magnetic flux creep in type-II superconductors, during which the system moves away from the critical state, in a SOC context, because thermal activation can trigger vortex avalanches. However, it was found that modifications of the relaxation law due to vortex avalanches are minor and can hardly be reliably distinguished in the analysis of experimental data. Furthermore, flux creep universality has been analytically demonstrated in the elegant paper by Vinokur et al. Universality of the spatial distribution of the electric field during flux creep has also been found by Gurevich and Brandt. The direct application of SOC to the problem of magnetic flux creep thus meets a number of serious general difficulties. It is clear that the critical scaling (power laws for vortex-avalanche lifetimes and size distributions) observed in the vicinity of the critical state must change during later stages of relaxation due to a time-dependent (or current-dependent) balance of the Lorentz and pinning forces.
In this paper we propose a new physical picture of self-organization in a vortex matter during magnetic flux creep in type-II superconductors. In this approach the driving parameter is the energy barrier for magnetic flux creep rather than the current density. We show that notwithstanding its minor influence on the relaxation rate, self-organized behavior may be observed by measuring magnetic noise during flux creep.
## II Barrier for magnetic flux creep as the driving parameter of self-organization
We consider a long superconducting slab, infinite in the $`y`$ and $`z`$ directions and having width $`2w`$ in the $`x`$ direction. The magnetic field is directed along the $`z`$ axis. In this geometry, the flux distribution is one-dimensional, i.e., $`𝐁(𝐫,t)=(0,0,B(x,t))`$. As a mathematical tool for our analysis we use the well known differential equation for flux creep:
$$\frac{\partial B}{\partial t}=\frac{\partial }{\partial x}\left(Bv_0\mathrm{exp}\left(-U(B,T,j)/T\right)\right)$$
(1)
Here $`B`$ is the magnetic induction, $`v=v_0\mathrm{exp}\left(-U(B,T,j)/T\right)`$ is the mean velocity of vortices in the $`x`$ direction, and $`U(B,T,j)`$ is the effective barrier for flux creep. Note that we adopt units with $`k_B=1`$; thus energy is measured in kelvin. Since in our geometry $`4\pi M=\int _V\left(B-H\right)dV`$, we get for the mean volume magnetization $`m=M/V`$ from Eq. 1:
$$\frac{\partial m}{\partial t}=A\mathrm{exp}\left(-U(H,T,j)/T\right)$$
(2)
where $`A\equiv Hv_0/4\pi w`$.
It is important to emphasize that we do not modify the pre-exponent factor $`Bv_0`$ of Eq. 1 or $`A`$ of Eq. 2, as suggested by previous works on SOC. Such modifications result only in logarithmic corrections to the effective activation energy, and they may be omitted in a flux creep regime. Instead, we concentrate on the details of the spatial behavior of the flux creep barrier $`U\left(x\right)`$, as analyzed in detail in our previous work. In that work Eq. 1 was solved numerically and semi-analytically for different situations. We emphasize that, in general, the barrier for flux creep depends on the magnetic field $`B`$, and the persistent current density $`j\left(x\right)`$ is not uniform across the sample (see Fig. 1). Thus, $`j`$ cannot be used as a driving parameter for a SOC model. Instead, the relevant parameter is $`U`$, which stays constant across the sample. Also, since experiments on magnetic relaxation are usually carried out at constant temperature and at high magnetic field, we can assume $`U(B,T,j)=U\left(j\right)`$. The central results of Ref. are shown in Fig. 1, using a "collective creep"-type dependence $`U\left(j\right)=U_0\left(B/B_0\right)^n\left(\left(j_c/j\right)^\mu -1\right)`$, with $`n=5`$ and $`\mu =1`$ as an example (other models are analyzed in Ref. as well and produce essentially similar results). Filled squares in Fig. 1 represent the distribution of the magnetic induction $`B\left(x\right)/H`$ at some late stage of relaxation (so that $`j<j_c`$), the solid line represents the normalized current density profile (note that $`j_c`$ is constant across the sample), and open circles show the profile of the effective barrier for flux creep $`U\left(x\right)/T`$. All quantities are calculated numerically from Eq. 1. The important thing to note is that the energy barrier $`U\left(x\right)`$ is nearly independent of $`x`$, so that its maximum variation $`\delta U`$ is of order of $`T`$. As also shown from general arguments, such behavior means that the fluxon system organizes itself to maintain a uniform distribution of the barrier $`U`$ across the sample.
The vortex avalanches are introduced in an integral way. An avalanche of size $`s`$ causes a change in the total magnetic moment $`\delta M\propto s`$. This change is equivalent to a change of the average current density $`\delta j=\gamma \delta M=\gamma s`$, where $`\gamma =2c/wV`$. If the barrier for flux creep is $`U\left(j\right)`$, then the variation of the current $`\delta j`$ leads to a variation of the energy barrier
$$\delta U=\left|\frac{\partial U}{\partial j}\right|\delta j=\gamma \left|\frac{\partial U}{\partial j}\right|s$$
(3)
As mentioned above, the maximum fluctuation in the energy barrier $`\left|\delta U\right|_{\mathrm{max}}`$ is of order of $`T`$ in the creep regime ($`\delta U<<U`$). Any fluctuation $`\delta U`$ larger than $`T`$ is suppressed before it reaches the sample edge, due to the exponential feedback of the local relaxation rate, which is proportional to $`\mathrm{exp}\left(-U/T\right)`$ (Eq. 1). This means that only fluctuations $`\delta U\lesssim T`$ can be observed in global measurements of the sample magnetic moment. Thus,
$$s_m=\frac{T}{\gamma \left|\frac{\partial U}{\partial j}\right|}\propto VT$$
(4)
where we denote by $`s_m`$ the maximum possible avalanche, which depends on time via $`\partial U/\partial j`$. It is worth noting that Eq. 4 gives the correct dependence of $`s_m`$ on the system size and on temperature. It is clear that in a finite system the largest possible avalanche must be proportional to the system volume. Since it is thermally activated, it is proportional to temperature $`T`$, consistent with our derivation. The characteristic time-dependent upper cut-off of the avalanche size was experimentally observed by Field et al., who studied magnetic noise spectra at different magnetic field sweep rates, i.e. at different time windows of the experiment.
Our central idea is that in the vicinity of $`j_c`$ the system of fluxons indeed exhibits self-organized critical behavior, as initially proposed by Tang. During flux creep, it maintains itself in a self-organized, though not critical, state, in the sense that it cannot be described by the critical scaling. The self-organization manifests itself in the appearance of a barrier $`U`$ that is almost constant across the sample. Avalanches do not vanish, but there is a constraint on the largest possible avalanche, see Eq. 4. Importantly, $`s_m`$ depends upon the current density and, as we show below, decreases with decreasing current (or with increasing time), so the relative importance of avalanches vanishes.
In order to calculate physically measured quantities, let us derive the time dependence of $`s_m`$ assuming a very useful generic form of the barrier for flux creep, introduced by Griessen.
$$U\left(j\right)=\frac{U_0}{\alpha }\left[\left(\frac{j_c}{j}\right)^\alpha -1\right]$$
(5)
This formula describes all widely-known functional forms of $`U\left(j\right)`$ if the exponent $`\alpha `$ attains both negative and positive values. For $`\alpha =-1`$ Eq. 5 describes the Anderson-Kim barrier; for $`\alpha =-1/2`$ the barrier for plastic creep is obtained. Positive $`\alpha `$ describes collective creep barriers. In the limit $`\alpha \to 0`$ this formula reproduces exactly the logarithmic barrier. An activation energy written in the form of Eq. 5 results in an "interpolation formula" for flux creep if the logarithmic solution of the creep equation $`U\left(j\right)=T\mathrm{ln}(t/t_0)`$ is applied (for $`\alpha \ne 0`$):
$$j\left(t\right)=j_c\left(1+\frac{\alpha T}{U_0}\mathrm{ln}\left(\frac{t}{t_0}\right)\right)^{-\frac{1}{\alpha }}$$
(6)
For $`\alpha =0`$, a power-law decay is obtained: $`j\left(t\right)=j_c\left(t_0/t\right)^n`$, where $`n=T/U_0`$.
Using this general form of the current dependence of the activation energy barrier, we obtain from Eq. 4
$$s_m\left(j\right)=\frac{Tj}{\gamma U_0}\left(\frac{j}{j_c}\right)^\alpha $$
(7)
and
$$s_m\left(t\right)=\frac{Tj_c}{\gamma U_0}\left(1+\frac{\alpha T}{U_0}\mathrm{ln}\left(\frac{t}{t_0}\right)\right)^{-\left(1+\frac{1}{\alpha }\right)}.$$
(8)
As we see, the upper limit for the avalanche size decreases with decreasing current density, or with increasing time, for all $`\alpha >-1`$. For $`\alpha <-1`$ the curvature
$$\frac{\partial ^2U}{\partial j^2}=\frac{\left(\alpha +1\right)}{j^2}U_0\left(\frac{j_c}{j}\right)^\alpha $$
(9)
is negative and the largest avalanche does not change with current, but is limited by its value at criticality. In this case, self-organized criticality describes the system dynamics down to very low currents. On the other hand, the Kim-Anderson barrier must always be relevant when $`j\to j_c`$; thus our model produces a correct transition to a self-organized critical state at $`j=j_c`$. In practice, most of the observed cases obey $`\alpha >-1`$, and $`s_m`$ decreases with the decrease of current density (due to flux creep).
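The time dependences in Eqs. 6 and 8 are easily evaluated; the short sketch below does so for a few values of $`\alpha `$, with an arbitrary illustrative choice of $`T/U_0`$ and with $`s_m`$ given up to its prefactor $`Tj_c/\gamma U_0`$.

```python
import numpy as np

T_over_U0 = 0.02
t = np.logspace(0.0, 8.0, 5)        # time in units of t_0

def j_of_t(alpha):
    if alpha == 0.0:
        return t**(-T_over_U0)      # logarithmic barrier: power-law decay
    return (1.0 + alpha * T_over_U0 * np.log(t))**(-1.0 / alpha)   # Eq. 6

def s_m_of_t(alpha):                # Eq. 8, up to the prefactor T j_c/(gamma U_0)
    return (1.0 + alpha * T_over_U0 * np.log(t))**(-(1.0 + 1.0 / alpha))

for alpha in (-0.5, 0.5, 1.0):      # plastic and collective-creep examples
    print(f"alpha={alpha:+.1f}: j/j_c -> {j_of_t(alpha)[-1]:.3f}, "
          f"s_m -> {s_m_of_t(alpha)[-1]:.3f}")
```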
## III Avalanche distributions and the power spectrum
Before starting the calculation of the power spectrum of the magnetic flux noise due to flux avalanches, let us stress that the time dependence of $`s_m`$ is very weak (logarithmic, see Eq. 8). This allows us to treat the process of flux creep as quasi-stationary, which means that during the short time required for the sampling of the power spectrum the current density can be assumed constant. In more sophisticated experiments the external field can be swept at a constant rate, which ensures that the current density does not change, although $`j<j_c`$. Actually, a constant sweep rate fixes a certain time window of the experiment, $`t/t_0\propto 1/\left(\partial H/\partial t\right)`$. Thus, decreasing the sweep rate allows the noise spectra to be studied at effectively later stages of the relaxation.
Once an avalanche is triggered by a thermal fluctuation, its subsequent dynamics is governed only by interactions between vortices, whose motion is then not due to thermal fluctuations. Thus, we expect the same relationship between the avalanche lifetime $`\tau `$ and its size $`s`$ as in the case of a sandpile: $`\tau \left(t\right)\propto s^\sigma \left(t\right)`$ and $`\tau _m\left(t\right)\propto s_m^\sigma \left(t\right)`$, respectively. Using the simplified version of the distribution of lifetimes estimated for a superconductor in a creep regime from computer simulations by Pan and Doniach,
$$\rho \left(\tau \right)\propto \mathrm{exp}\left(-\tau /\tau _m\right),$$
(10)
and assuming that avalanches of size $`s`$ and lifetime $`\tau `$ contribute a Lorentzian spectrum,
$$L(\omega ,\tau )\propto \frac{\tau }{1+\left(\omega \tau \right)^2}$$
(11)
the total power spectrum of magnetic noise during flux creep is
$$S\left(\omega \right)=\int _0^{\mathrm{\infty }}\rho \left(\tau \right)L(\omega ,\tau )d\tau .$$
(12)
Using Eq. 10 we find:
$$S\left(p\right)\propto \frac{1}{2p^2}\left[\mathrm{cos}\left(\frac{1}{p}\right)Re\left(Ei\left(\frac{i}{p}\right)\right)-\mathrm{sin}\left(\frac{1}{p}\right)Im\left(Ei\left(\frac{i}{p}\right)\right)\right].$$
(13)
Here $`p\equiv \omega \tau _m\left(t\right)`$ and $`Ei`$ denotes the exponential integral function. The power spectrum $`S(\omega ,t)`$ described by Eq. 13 is plotted in Fig. 2 using a solid line. Since there is an upper cutoff for the avalanche lifetime at $`\tau _m`$, the lowest frequency which makes sense is $`2\pi /\tau _m`$. Thus, only the frequency domain $`2\pi /\tau _m<\omega `$ ($`p>1`$) is important. In the limit of large $`p`$, the spectral density of Eq. 13 has a simple asymptote:
$$S\left(\omega \right)\propto \frac{\mathrm{ln}\left(p\right)-\gamma _e}{p^2}$$
(14)
where $`\gamma _e\approx 0.577\mathrm{}`$ is Euler's constant. This simplified power spectrum is shown in Fig. 2 by a dashed line. For $`p>10`$ this approximation is quite reasonable. The usual way to analyze the power spectrum is to present it in the form $`S\left(\omega \right)\propto 1/\omega ^\nu `$ and extract the exponent $`\nu `$ simply as $`\nu =-\partial \mathrm{ln}\left(S\right)/\partial \mathrm{ln}\left(\omega \right)`$. In our case the parameter $`p=\omega \tau _m`$ is a reduced frequency, so the exponent $`\nu `$ can be estimated as
$$\nu =-\frac{\partial \mathrm{ln}\left(S\right)}{\partial \mathrm{ln}\left(p\right)}=2-\frac{1}{\mathrm{ln}\left(p\right)-\gamma _e}$$
(15)
This result is very important, since it fits quite well the experimentally observed values of $`\nu `$, which were found to vary between $`1`$ and $`2`$. As seen from Fig. 2, it is impossible to distinguish between a true $`1/\omega ^\nu `$ dependence and that predicted by Eq. 13 at large enough frequencies. Remarkably, in many experiments the power spectrum was found to deviate significantly from the $`1/\omega ^\nu `$ behavior at lower frequencies, in a manner which, however, fits Eq. 13.
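As a cross-check of Eqs. 13–15, the integral of Eq. 12 can be evaluated directly by quadrature and its local logarithmic slope compared with Eq. 15; the sketch below does this with $`\tau _m=1`$ setting the units.

```python
import numpy as np
from scipy.integrate import quad

def S(omega, tau_m=1.0):
    # Eq. 12 with the exponential lifetime distribution of Eq. 10 and the
    # Lorentzian kernel of Eq. 11 (overall normalization irrelevant)
    f = lambda tau: np.exp(-tau / tau_m) * tau / (1.0 + (omega * tau)**2)
    val, _ = quad(f, 0.0, 50.0 * tau_m)
    return val

p = np.logspace(0.5, 3.0, 40)                   # p = omega * tau_m
nu_num = -np.gradient(np.log([S(w) for w in p]), np.log(p))
nu_eq15 = 2.0 - 1.0 / (np.log(p) - 0.5772)      # Eq. 15
for k in range(0, len(p), 10):
    print(f"p={p[k]:8.1f}: nu(numeric)={nu_num[k]:.2f}, nu(Eq.15)={nu_eq15[k]:.2f}")
```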
Using Eq. 13 or Eq. 14 one can find the temperature, magnetic field and time dependence of the power spectrum by substituting $`p=\omega \tau _m\propto \omega s_m^\sigma `$ and using the values of $`s_m(H,T,t)`$ derived in the previous section. Specifically, from Eq. 8 we obtain that the amplitude of the power spectrum at any given frequency increases with time in the collective creep regime, but saturates in the case of the logarithmic barrier and remains constant in the case of the Kim-Anderson barrier.
In general, we emphasize that the power spectrum of the magnetic noise during flux creep depends on time. Since the parameter $`p`$ decreases with increasing time, the exponent $`\nu `$ becomes closer to $`1`$ during flux creep. At these later stages of relaxation the effect of the avalanches is negligible, and the magnetic noise is mostly determined by thermally activated jumps of vortices with the usual (non-correlated) $`1/\omega `$ power spectrum. Thus, the manifestation of the avalanche-driven dynamics during flux creep is a noise spectrum of the form $`1/\omega ^\nu `$ with $`\nu \left(t\right)`$ decreasing when sampled at different times during relaxation. This explains the experimental results obtained by Field et al., who measured vortex avalanches directly at different sweep rates. They found that the exponent $`\nu `$ decreased from a relatively large value of $`\approx 2`$ at a large sweep rate of 20 G/sec to a smaller value of $`\approx 1.5`$ for a sweep rate of 1 G/sec. This is in good agreement with our model.
## IV Conclusions
In conclusion, the self-organization of vortices in hard type-II superconductors during magnetic flux creep was analyzed. Using the results of a numerical solution of the differential equation for flux creep, it was argued that self-organized criticality describes the system dynamics at $`j=j_c`$. During flux creep, the vortex system remains self-organized, but there is no criticality, in the sense that there are no simple power laws for the distributions of the avalanche size and lifetime, or for the power spectrum. The driving parameter of the self-organized dynamics is the energy barrier $`U(B,j)`$ and not the current density $`j`$, as proposed by previous work. Using a simple model, the power spectrum $`S\left(\omega \right)`$ of the magnetic noise is predicted to depend on time. Namely, fitting $`S\left(\omega \right)`$ to a $`1/\omega ^\nu `$ behavior will result in a time-dependent exponent $`\nu \left(t\right)`$ decreasing in the interval between $`2`$ and $`1`$.
Acknowledgments: We acknowledge fruitful discussions with L. Burlachkov and B. Shapiro. We thank F. Nori for critical remarks. D.G. acknowledges support from the Clore Foundations. This work was partially supported by the National Science Foundation (DMR 91-20000) through the Science and Technology Center for Superconductivity, and by DOE grant DEFG02-91-ER45439.
Figure captions
Fig.1 Results of the numerical solution of Eq. 1 for $`U\left(j\right)=U_0\left(B/B_0\right)^5\left(j_c/j-1\right)`$ at $`j<j_c`$. Spatial distribution of the magnetic induction $`B\left(x\right)/H`$ (filled squares); the corresponding profile of the normalized current density (solid line) and the corresponding profile of the effective barrier for flux creep $`U\left(x\right)/T`$ (open circles).
Fig. 2 The power spectrum $`S(\omega ,t)`$ described by Eq. 13 (solid line) and the approximate asymptotic solution of Eq. 14 (dashed line).
# Arrested Cracks in Nonlinear Lattice Models of Brittle Fracture
## Abstract
We generalize lattice models of brittle fracture to arbitrary nonlinear force laws and study the existence of arrested semi-infinite cracks. Unlike what is seen in the discontinuous case studied to date, the range in driving displacement for which these arrested cracks exist is either very small or precisely zero. Also, our results indicate that small changes in the vicinity of the crack tip can have an extremely large effect on arrested cracks. Finally, we briefly discuss the possible relevance of our findings to recent experiments.
Recent years have seen a rebirth of interest by the physics community in the issue of dynamic fracture. This is due to a variety of new experimental results which are not explainable within the confines of the traditional engineering approach to fracture. These results include a dynamical instability to micro-branching, the formation of non-smooth fracture surfaces and the rapid variation of the fracture energy (including dissipative losses incurred during cleavage) with crack velocity. These issues are reviewed in a recent paper by Fineberg and Marder.
One approach for dealing with dynamic fracture involves restricting the atomic interactions to those occurring between neighboring sites of an originally unstrained lattice. These lattice models can never be as realistic as full molecular dynamics simulations, but compensate for this shortcoming by being much more amenable to analysis, both numerical and (via the Wiener-Hopf technique) otherwise. This approach was pioneered by Slepyan and co-workers and further developed by Marder and Gross and most recently by ourselves. Most of the results to date have been obtained using a simplified force law which is linear until some threshold displacement, at which point it drops abruptly to zero. Below, we will study a generalization for which the force is a smooth function of the lattice strain. One of our goals is to learn which aspects of fracture are sensitive to microscopic details and which are universal.
One interesting aspect of these lattice models concerns the existence of a range of driving displacements $`\mathrm{\Delta }`$ for which non-moving semi-infinite crack solutions can be found. For the aforementioned discontinuous force model, there exists a wide range of these arrested cracks. For example, ref. found that $`\mathrm{\Delta }`$ could range from 40% below to 40% above the Griffith displacement $`\mathrm{\Delta }_G`$, the driving at which it first becomes energetically favorable for the system to crack. This phenomenon is connected to the existence of a velocity gap, i.e. a minimal velocity for stable crack propagation. Experimentally, no such gap has been reported, even for materials such as single-crystal silicon which should be at least approximately describable by lattice models. It is therefore of some interest to study how the arrested crack range depends on the microscopic details of the assumed atomic force law. Here we present the results of such a study, including the finding that this range drops rapidly towards zero as the force law is made smoother and hence more realistic.
As in ref., we work with a square lattice and with scalar displacements (mode III). We focus on arrested cracks and write the static equation as
$$0=f\left(u_{i+1,j}-u_{i,j}\right)-f\left(u_{i,j}-u_{i-1,j}\right)+f\left(u_{i,j+1}-u_{i,j}\right)-f\left(u_{i,j}-u_{i,j-1}\right)$$
(1)
Here the indices $`\{i,j\}`$ label the lattice site and $`u`$ is the displacement. Sites on the last row of the lattice, $`j=N_y`$, are coupled to a row with fixed displacement $`\mathrm{\Delta }`$. The first row, $`j=1`$, is coupled to a $`j=0`$ displacement field $`u_{i,0}`$ which via symmetry equals $`-u_{i,1}`$. Finally, $`f`$ is a nonlinear function of its argument, the lattice strain. We investigate two forms:
$$f_e(u)=u\frac{1+\mathrm{tanh}(\alpha (1-u))}{1+\mathrm{tanh}\alpha }$$
(2)
$$f_p(u)=\frac{u\alpha ^{\alpha +1}}{(u+\alpha )^{\alpha +1}}$$
(3)
For both of these forms, increasing $`\alpha `$ reduces the length scale over which $`f`$ falls to zero once outside the Hooke’s law regime ($`u<1`$). The exponential force $`f_e`$ reduces to the familiar discontinuous force (linear until complete failure) as $`\alpha \mathrm{}`$.
Our procedure for finding solutions is in principle straightforward. At large positive $`i`$ in the uncracked material, we know that the system will adopt a uniformly strained state. Conversely, at large negative $`i`$ the cracked state will have a large displacement $`u_{i,1}`$ and (almost) zero strains for $`j>1`$. Fixing the boundary condition $`\mathrm{\Delta }`$ allows us to easily find these asymptotic states. Once found, these solutions are used as fixed displacements for the columns $`i=N_x+1`$ and $`i=-N_x-1`$ respectively. The arrested crack then requires us to solve for $`(2N_x+1)N_y`$ variables. We impose the equation of motion at all sites except for the crack "tip", ($`i=0,j=1`$), where instead we specify the displacement; this approach preserves the banded structure of the system. Newton's algorithm then allows us to converge to a solution. Afterwards, the residual equation of motion becomes a solvability condition with which $`\mathrm{\Delta }`$ can be determined. The range of allowed values of $`\mathrm{\Delta }`$ for arrested cracks is found as one systematically sweeps through the value of the aforementioned fixed displacement.
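A toy version of this construction is sketched below on a deliberately tiny lattice. For brevity it fixes $`\mathrm{\Delta }`$ and clamps the tip displacement with a generic root finder (the discarded tip equation, used above as the solvability condition for $`\mathrm{\Delta }`$, is simply replaced by the clamp here); the lattice dimensions and the values of $`\mathrm{\Delta }`$ and the tip displacement are illustrative, and a banded Newton solver would be used in practice.

```python
import numpy as np
from scipy.optimize import fsolve

alpha = 5.0
f = lambda u: u * (1.0 + np.tanh(alpha * (1.0 - u))) / (1.0 + np.tanh(alpha))

Nx, Ny = 6, 5
nx = 2 * Nx + 1                     # columns i = -Nx..Nx, tip at index it
it, Delta, u_tip = Nx, 1.1, 0.75    # illustrative values only

s = Delta / (Ny + 0.5)              # uniformly strained (uncracked) column
right_col = s * (np.arange(Ny) + 0.5)
left_col = Delta * np.ones(Ny)      # relaxed (cracked) column, ~zero strain

def residual(flat):
    u = flat.reshape(nx, Ny)
    res = np.empty_like(u)
    for i in range(nx):
        for j in range(Ny):
            uij = u[i, j]
            right = u[i + 1, j] if i + 1 < nx else right_col[j]
            left = u[i - 1, j] if i - 1 >= 0 else left_col[j]
            up = u[i, j + 1] if j + 1 < Ny else Delta
            F = f(right - uij) - f(uij - left) + f(up - uij)
            if j > 0:
                F -= f(uij - u[i, j - 1])
            elif i >= it:           # intact bond across the crack line:
                F -= f(2.0 * uij)   # u_{i,0} = -u_{i,1}; broken behind the tip
            res[i, j] = F
    res[it, 0] = u[it, 0] - u_tip   # clamp the tip instead of its equation
    return res.ravel()

u0 = np.tile(right_col, (nx, 1))
u = fsolve(residual, u0.ravel()).reshape(nx, Ny)
```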
In Fig. 1 we present our results for the exponential model. For illustration, we have chosen to show data for $`N_y=10`$ as a function of $`\alpha `$. For large $`\alpha `$, the range of $`\mathrm{\Delta }`$ is large and there is a marked asymmetry between the rising segment of $`\mathrm{\Delta }`$ versus imposed displacement and the (much steeper) falling segment. As $`\alpha \to \mathrm{\infty }`$, the falling portion becomes vertical. These segments represent different crack solutions at fixed $`\mathrm{\Delta }`$; as $`\mathrm{\Delta }`$ reaches the end of its allowed range, these solution branches collide and disappear in a standard saddle-node bifurcation. To verify this, we have performed a linear stability calculation of these solutions, assuming purely inertial dynamics (i.e. setting the left hand side of Eq. 1 to $`\ddot{u}_{i,j}`$). As expected, there is a single mode of the spectrum for the growth rate $`\omega `$ for which $`\omega ^2`$ goes from negative to positive as we go up the rising segment, reach the maximal driving, and then go back down.
Fig. 1 demonstrates that as the potential is made smoother, the range of arrested cracks shrinks dramatically. In Fig. 2, we show this range as a percentage of $`\mathrm{\Delta }_G`$. The best fit to our data suggests that the range vanishes in an essentially singular fashion as a function of $`\alpha `$,
$$\frac{\mathrm{\Delta }_{\text{max}}-\mathrm{\Delta }_{\text{min}}}{\mathrm{\Delta }_G}\approx A\mathrm{exp}\left(-\frac{\alpha _0}{\alpha }\right)$$
(4)
where for $`N_y=10`$, $`\alpha _0\approx 6.6`$, and otherwise $`\alpha _0`$ is a slowly varying function of $`N_y`$ as long as the system is sufficiently large compared to the potential fall-off.
Let us now turn to the power-law form. Based on our findings above, we would expect that this rather smooth force law would give rise to a range which is practically zero. We have verified this prediction in two ways. First, for the case $`\alpha =3`$ we performed our usual scan over the imposed $`u_{0,1}`$ displacement and noted that the selected $`\mathrm{\Delta }`$ varies by less than $`10^{-6}`$. Second, we computed the stability spectrum and found a mode at $`\omega ^2<10^{-6}`$; this value is indicative of how close we are, at a randomly chosen displacement, to the extremal value of $`\mathrm{\Delta }`$ at the saddle-node bifurcation. These numbers are consistent with our numerical accuracy and hence the true range is probably even smaller. Needless to say, ranges of this size would be unmeasurable. It is interesting to point out that the almost-zero mode is nothing other than a spatial translation of the crack. That is, translating the crack with respect to the underlying fixed lattice is almost a symmetry of the solution.
So, by making the potential smoother one tends to eliminate arrested crack solutions. How does this change come about? To try to address this question, we plot in Fig. 3 the lattice strain field $`u_{i+1,1}-u_{i,1}`$ for $`-N_x\le i\le N_x`$ for the three potentials: exponential with $`\alpha =5`$ or $`2`$, and power-law with $`\alpha =3`$. For this comparison, we have found (stable) solutions with $`u_{0,1}=0.75`$ for all three potentials, and then normalized the strains by dividing by the respective values of $`\mathrm{\Delta }`$. First, we note that beyond $`x\approx 5`$, the different cases are virtually indistinguishable and all lie on the expected $`x^{-1/2}`$ universal curve. The interior "process-zone" region is affected by changing the potential, but rather minimally. For example, the two exponential cases differ in only one or two points, yet this is sufficient to shrink the arrested crack range by almost an order of magnitude. The power-law choice has a process-zone which is a bit wider and there is less maximal strain, but that is all. We thus conclude that the existence and size of the arrested crack range are extremely sensitive to microscopic details! We note in passing that the process-zone for any specific potential quickly reaches an asymptotic size once $`N_y`$ is sufficiently large, and in particular does not increase indefinitely in the macroscopic limit. Treatments which include a mesoscopic-size "cohesive zone" are therefore not accurate representations of this class of lattice models.
In a recent experiment on fracture in silicon, no arrested cracks were observed. A molecular dynamics simulation using a modified Stillinger-Weber potential also exhibited no arrested cracks when studied at high enough temperature. However, the potentials used here were rather short-ranged, as compared with some estimates that arise from density-functional theory. Our results indicate that increasing the range, and thereby using smoother potentials, will eliminate arrested cracks (at least as far as experimentally attainable precision is concerned) and may offer a simpler explanation of the experimental finding than one which requires thermal creep. This could of course be tested in principle by re-doing the experiments at a reduced temperature.
###### Acknowledgements.
HL acknowledges the support of the US NSF under grant DMR98-5735; DAK acknowledges the support of the Israel Science Foundation and the hospitality of the Lawrence Berkeley National Laboratory. The work of DAK was also supported in part by the Office of Energy Research, Office of Computational and Technology Research, Mathematical, Information and Computational Sciences Division, Applied Mathematical Sciences Subprogram, of the U.S. Department of Energy, under Contract No. DE-AC03-76SF00098. Also, DAK acknowledges useful conversation with M. Marder and G. Barenblatt.
# A NOTE ON A CONJECTURE OF XIAO
Miguel A. BARJA<sup>1</sup>
<sup>1</sup>Partially supported by CICYT PS93-0790 and HCM project n.ERBCHRXCT-940557.
Departament de Matemàtica Aplicada I. Universitat Politècnica de Catalunya. Barcelona. Spain
Francesco ZUCCONI<sup>2</sup>
<sup>2</sup>Partially supported by HCM project n.ERBCHRXCT-940557.
Dipartimento di Matematica e Informatica. Università degli Studi di Udine. Udine. Italy
When $`f:S\to B`$ is a surjective morphism of a complex, smooth surface $`S`$ onto a complex, smooth, genus $`b`$ curve $`B`$, such that the fibre $`F`$ of $`f`$ has genus $`g`$, it is well known that $`f_{*}\omega _{S/B}=ℰ`$ is a locally free sheaf of rank $`g`$ and degree $`d=\chi (𝒪_S)-(b-1)(g-1)`$, and that $`f`$ is not a holomorphic fibre bundle if and only if $`d>0`$. In this case the slope, $`\lambda (f)=\frac{K_S^2-8(b-1)(g-1)}{d}`$, is a natural invariant associated by Xiao to $`f`$ (cf. ). In \[Conjecture 2\] he conjectured that $`ℰ`$ has no locally free quotient of degree zero (i.e., $`ℰ`$ is ample) if $`\lambda (f)<4`$. We give a partial affirmative answer to this conjecture:
Theorem 1. Let $`f:S\to B`$ be a relatively minimal fibration with general fibre $`F`$. Let $`b=g(B)`$ and assume that $`g=g(F)\ge 2`$ and that $`f`$ is not locally trivial.
If $`\lambda (f)<4`$ then $`ℰ=f_{*}\omega _{S/B}`$ is ample provided one of the following conditions holds:
1. $`F`$ is non hyperelliptic.
2. $`b\le 1`$.
3. $`g(F)\le 3`$.
Proof. (i) If $`q(S)>b`$ the result follows from Corollary 2.1. Now assume $`q(S)=b`$. By Fujita’s decomposition theorem (see , and also for a proof)
$$ℰ=𝒜⊕ℰ_1⊕⋯⊕ℰ_r$$
where $`h^0(B,(𝒜⊕ℰ_1⊕⋯⊕ℰ_r)^{\vee })=0`$, $`𝒜`$ is an ample sheaf and the $`ℰ_i`$ are non trivial stable degree zero sheaves. Then we only must prove that no $`ℰ_i`$ occurs. If $`F`$ is not hyperelliptic and rank$`(ℰ_i)\ge 2`$, the claim is the content of Proposition 3.1. If rank$`(ℰ_i)=1`$ we can use §4.2 or Theorem 3.4 to conclude that $`ℰ_i`$ is torsion in Pic$`{}_{}{}^{0}(B)`$. Hence it induces an étale base change $`\sigma :\stackrel{~}{B}\to B`$.
By flatness $`\stackrel{~}{f}_{*}\omega _{\stackrel{~}{S}/\stackrel{~}{B}}=\sigma ^{*}(f_{*}\omega _{S/B})`$. Since $`\sigma `$ is étale, $`\lambda (f)=\lambda (\stackrel{~}{f})`$ and $`\sigma ^{*}(ℰ_i)=𝒪_{\stackrel{~}{B}}`$ is a direct summand of $`\stackrel{~}{f}_{*}\omega _{\stackrel{~}{S}/\stackrel{~}{B}}`$. In particular $`q(\stackrel{~}{S})>\stackrel{~}{b}=g(\stackrel{~}{B})`$, hence $`\lambda (\stackrel{~}{f})\ge 4`$ by Theorem 3.3: a contradiction.
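For the reader's convenience, the numerical invariance used in this step can be spelled out; this is a standard computation (stated here for an étale cover $`\sigma `$ of degree $`n`$, not quoted from the original text): both intersection numbers and degrees multiply by $`n`$ under base change,

$$K_{\stackrel{~}{S}/\stackrel{~}{B}}^2=nK_{S/B}^2,\mathrm{deg}\stackrel{~}{f}_{*}\omega _{\stackrel{~}{S}/\stackrel{~}{B}}=nd,$$

so the slope $`\lambda (\stackrel{~}{f})=nK_{S/B}^2/nd=\lambda (f)`$ is unchanged.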
(ii) If $`b=0`$ the claim is trivial. If $`b=1`$, any stable degree zero sheaf has rank one, and we conclude as in (i).
(iii) If $`g=2`$ and $`ℰ\ne 𝒜`$, then $`ℰ=𝒜⊕ℒ`$ where $`ℒ`$ is torsion, and we are done. The only non trivial case if $`g=3`$ is $`ℰ=𝒜⊕ℱ`$, where $`𝒜`$ is an ample line bundle and $`ℱ`$ a stable, degree zero, rank two vector bundle. Then $`K_{S/B}^2\ge (2g-2)\text{deg }𝒜=4d`$ and we are done by \[Theorem 2\]. ∎
Theorem 3.3 of Xiao says that if $`q(S)>b`$ and $`\lambda (f)=4`$ then $`ℰ=ℋ⊕𝒪_B`$, where $`ℋ`$ is a semistable sheaf. We have the following improvement:
Theorem 2. Let $`f:S\to B`$ be a relatively minimal non locally trivial fibration. If $`\lambda (f)=4`$ then $`ℰ=f_{*}\omega _{S/B}`$ has at most one degree zero, rank one quotient $`ℒ`$.
Moreover, in this case $`ℰ=𝒜⊕ℒ`$ with $`𝒜`$ semistable and $`ℒ`$ torsion.
Proof. As in the previous theorem, the torsion subsheaf $`ℒ`$ becomes trivial after an étale base change; thus
$$\stackrel{~}{f}_{*}\omega _{\stackrel{~}{S}/\stackrel{~}{B}}=\stackrel{~}{𝒜}⊕𝒪_{\stackrel{~}{B}},\stackrel{~}{𝒜}=\sigma ^{*}𝒜.$$
By \[Theorem 3.3\], $`\tilde{\mathcal{A}}`$ is semistable. Then $`\mathcal{A}`$ is also semistable by \[Proposition 3.2\]. □
# Flash of Prompt Photons from the Early Stage of Heavy-Ion Collisions
## From PCM to VNI; An Ode to KKG
It is incumbent on us to provide a comprehensive and accurate description of a relativistic collision of nuclei, from the instant of nuclear contact to the formation of the hadronic states which are ultimately detected in experiments. Such collisions are expected to create: a) the conditions which prevailed in the early universe, a few microseconds after the big bang, and thus b) strongly interacting matter under conditions of extreme temperature and density, which would throw light on the “hallowed” quark-hadron phase transition.
A significant step in this direction was provided by the parton cascade model (PCM) pcm1 , which was proposed to study the time evolution of the parton phase-space distribution in relativistic nuclear collisions. In this approach the space-time description is formulated within renormalization-group-improved QCD perturbation theory, embedded in the framework of relativistic transport theory. The dynamics of the dissipative processes during the early stage of the nuclear reactions is thus simulated as the evolution of multiple internetted parton cascades associated with quark and gluon interactions. The model was considerably improved and extended pcm2 to include a number of new effects, such as the individual time scale of each parton-parton collision, the formation time of parton radiation, the effective suppression of radiative emissions from virtual partons due to the enhanced absorption probability of others in regions of dense phase-space occupation, and the effects of soft-gluon interference for low-energy gluon emissions, all of which become important in nuclear collisions. With these improvements the model was used pcm3 to study the dynamics of partons in relativistic collisions of gold nuclei at BNL RHIC and CERN LHC energies. In particular, very useful information about the evolution of partons from pre-equilibrium to a thermalized quark-gluon plasma was obtained, along with the temperature, energy density, and entropy density. It was demonstrated that energy densities exceeding, by an order of magnitude, the critical energy density at which a quark-hadron phase transition is expected could be attained in such collisions. The model was further used to study the chemical evolution pcm4 , the production of strangeness, charm, and bottom pcm5 , dileptons pcm6 , and very recently single photons pcm7 in such collisions.
The next important step pcm8 involved combining the above parton cascade model with a phenomenological cluster hadronization model chm1 ; chm2 ; chm3 , which is motivated by the “preconfinement” property prec of partons, seen in the tendency of quarks and gluons produced in parton cascades to arrange themselves in colour-neutral clusters already at the perturbative level cnc . This approach provided a decent description of the experimentally measured momentum and multiplicity distributions for $`p\overline{p}`$ collisions at $`\sqrt{s}`$ = 200 – 1800 GeV, and was further used to predict the multiplicity distributions likely to be attained in relativistic heavy ion collisions at BNL RHIC and CERN LHC. A critical review of these developments, along with details, can be found in pcm9 .
However, the above description of hadron formation pcm8 did not explicitly account for the colour degree of freedom of the partons, which is, after all, at the origin of confinement. To be specific, the “ansatz” for the confinement picture was based exclusively on the dynamically evolving space-time separations of nearest-neighbour colour charges in the parton cascade, rather than on the details of the colour structure of the produced gluons, quarks, and anti-quarks. Thus, it was assumed that, owing to the above-mentioned “preconfinement” property of QCD, partons which are close in colour (in particular minimal colour singlets) are also close in phase space. In other words, instead of using a colour-flow description, the colour structure during the development of the cascade was ignored and, at the end of the perturbative evolution, colour-neutral clusters were formed from partons which had a minimal separation in coordinate and momentum space.
It is known that this correspondence between the colour and space-time structures of a parton cascade is not an equivalence, but holds only on average color . Thus, it has been argued that the colour structure of the cascade tree provides, in principle, exact microscopic information about the flow of colour charges, whereas the space-time structure is based on our model for the statistical kinetic description of parton emission and the nearest-neighbour search, which may be subject to fluctuations that deviate from the exact colour flow pcm10 . This issue is expected to become increasingly important when more particles populate a phase-space region, e.g. in the small-$`x`$ region in deep inelastic collisions and in hadron-nucleus and nucleus-nucleus collisions. In such cases it is increasingly likely that the nearest neighbours in momentum and phase space will not form a colour singlet. It is also likely that the “natural” colour-singlet partner for a given parton within the same cascade (its “endogamous” partner) might actually be disfavoured in comparison with a colour-singlet partner from a different but overlapping cascade (an “exogamous” partner). These considerations were incorporated in the hadronization scheme with colour flow discussed in Ref. pcm10 , which provides that if the space-time separation of two nearest-neighbour partons allows coalescence, they can always produce one or two colour-singlet clusters, accompanied, if necessary, by the emission of a gluon or a quark that carries away any unbalanced net colour (see fig. 5, Ref. pcm10 ).
This parton cascade and cluster hadronization model with colour flow is now available in the form of a Fortran program, VNI vni , and has already formed the basis for some interesting studies at SPS energies pcm11 , and also for investigating the effect of hadronic cascades at the end of the hadronization pcm12 .
There are very few examples in recent times where one person has contributed so much in such a short time.
## Photons from cascading partons
Photons, either radiated or scattered, have remained one of the most effective probes of every kind of terrestrial or celestial matter over the ages. Thus, it is only befitting that the speculation of the formation of deconfined strongly interacting matter - some form of the notorious quark-gluon plasma (QGP) - in relativistic heavy ion collisions, was soon followed by a suggestion shuryak that it should be accompanied by a characteristic radiation of photons. The effectiveness of photons in probing the history of such a hot and dense matter stems from the fact that, after production, they leave the system without any further interaction and thus carry unscathed information about the circumstances of their birth. This is a very important consideration indeed, as the formation of a QGP is likely to proceed from a hard-scattering of initial partons, through a pre-equilibrium stage, to perhaps a thermally and chemically equilibrated state of hot and dense partonic matter. This matter will hadronize and interaction among hadrons will also give rise to photons. In this letter we concentrate on photons coming from the early partonic stage in such collisions.
During the partonic stage, photons emerge from two different mechanisms: firstly, from collisions between partons, i.e., Compton scattering of quarks and gluons and annihilation of quarks and antiquarks; secondly, from radiation off excited partons, i.e., electromagnetic bremsstrahlung of time-like cascading partons. Whereas the former mechanism has been studied in various contexts joe ; crs , the latter source of photons is less explored sjoestrand , although, as we shall show, it is potentially much richer both in magnitude and complexity.
The Parton Cascade Model (PCM) pcm9 provides a fully dynamical description of relativistic heavy ion collisions. It is based on the parton picture of hadronic interactions and describes the nuclear dynamics in terms of the interaction of quarks and gluons within perturbative quantum chromodynamics, embedded in the framework of relativistic transport theory. The time evolution of the system is simulated by solving an appropriate transport equation in a six-dimensional phase space using Monte Carlo methods. The procedure implemented in the computer code VNI vni follows the dynamic evolution of scattering, radiating, fusing, and clusterizing partons till they are all converted into hadrons. VNI, the Monte Carlo implementation of the PCM, has been adjusted on the basis of experimental data from $`e^+e^{}`$ annihilation and $`pp`$ $`(p\overline{p})`$ collisions.
As recounted earlier, the PCM has been extensively used to provide valuable insight into the conditions likely to be achieved at RHIC and LHC energies pcm9 . Very recently, it has been found pcm11 to provide a reasonable description of a large body of particle spectra from $`Pb+Pb`$ and $`S+S`$ collisions at CERN-SPS energies as well.
Prompt photons are ideally suited to test the evolution of the partonic matter as described by the PCM. They would accompany the early hard scatterings and the approach to the thermal and chemical equilibration. Most importantly, the PCM is free of assumptions of any type about the initial conditions, since the space-time evolution of the matter is calculated causally from the moment of collision onwards and at any point the state of the matter is determined by the preceding space-time history.
## $`q\to q\gamma `$ and $`q\to qg`$
There are some important and interesting differences between a scattering or a branching leading to production of photons and gluons, as has been pointed out nicely by Sjöstrand sjoestrand .
Consider an energetic quark produced in a hard scattering. It will radiate gluons and photons till its virtuality drops to some cut-off value $`\mu _0`$. The branchings $`q\to q\gamma `$ and $`q\to qg`$ appear in the PCM on an equal footing and as competing processes with similar structures. The probability for a quark to branch at some given virtuality scale $`Q^2`$, with the daughter quark retaining a fraction $`z`$ of the energy of the mother quark, is given by:
$$d\mathcal{P}=\left(\frac{\alpha _s}{2\pi }C_F+\frac{\alpha _{\mathrm{em}}}{2\pi }e_q^2\right)\frac{dQ^2}{Q^2}\frac{1+z^2}{1-z}dz$$
(1)
where the first term corresponds to gluon emission and the second to photon emission. Thus, the relative probability for the two processes is,
$$\frac{\mathcal{P}_{q\to q\gamma }}{\mathcal{P}_{q\to qg}}\simeq \frac{\alpha _{\mathrm{em}}e_q^2}{\alpha _sC_F}\simeq \frac{1}{200},$$
(2)
for $`\alpha _{\mathrm{em}}=1/137`$, $`\alpha _s=0.25`$, $`e_q^2=0.22`$ and $`C_F=4/3`$. This does not mean, though, that we can simulate the emission of photons in a QCD shower by simply replacing the strong coupling constant $`\alpha _s`$ with the electromagnetic $`\alpha _{\mathrm{em}}`$ and the QCD colour Casimir factor $`C_F`$ by $`e_q^2`$. One has to keep in mind that the gluon thus emitted may branch further, either as $`g\to gg`$ or as $`g\to q\overline{q}`$, implying that the emitted gluon has an effective non-zero mass. As the corresponding probability for the photon to branch into a quark or a lepton pair is very small, this process is neglected and we take the photon to have zero mass. (However, if we wish to study dilepton production from the collision, this may become an important contribution pcm6 ; see later.)
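As a quick numerical check of Eq. (2), the ratio can be evaluated directly. The following minimal Python sketch (ours, not part of the VNI code; the coupling values are those quoted above) reproduces the 1/200 estimate:

```python
# Relative probability of photon vs. gluon emission in a quark
# branching, Eq. (2), using the representative values quoted in the text.
alpha_em = 1.0 / 137.0   # electromagnetic coupling
alpha_s = 0.25           # strong coupling at the relevant scale
e_q2 = 0.22              # mean squared quark charge (in units of e^2)
C_F = 4.0 / 3.0          # QCD colour Casimir factor

ratio = (alpha_em * e_q2) / (alpha_s * C_F)
print(f"P(q -> q gamma) / P(q -> q g) ~ {ratio:.2e} (about 1/{1.0/ratio:.0f})")
# -> ~4.8e-03, i.e. roughly 1/200, as quoted above
```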
Secondly, the radiation of gluons from the quarks is subject to soft-gluon interference, which is enacted by imposing an angular ordering of the emitted gluons. This is not needed for the emitted photons. To recognize this aspect, consider a quark which has ‘already’ radiated a number of ‘hard’ gluons. The probability to radiate an additional ‘softer’ gluon will get contributions from each of the existing partons, which may branch further as $`q\to qg`$ or $`g\to gg`$. It is well known (see, e.g., Ref. cnc ) that if such a soft gluon is radiated at a large angle with respect to all the other partons and one adds the individual contributions incoherently, the emission rate will be overestimated, as the interference is destructive. This happens because a soft gluon of long wavelength is not able to resolve the individual colour charges and sees only the net charge. The probabilistic picture of the PCM is then recovered by demanding that emissions are ordered in terms of decreasing opening angle between the two daughter partons at each branching, i.e., by restricting the phase space allowed for successive branchings. Photons, on the other hand, do not carry colour charge and only the quarks radiate; thus this angular ordering is not needed for them.
Finally, the parton emission probabilities in the QCD showers contain soft and collinear singularities, which are regulated by introducing a cut-off scale $`\mu _0`$. This regularization procedure implies effective masses for quarks and gluons,
$$m_{\mathrm{eff}}^{(q)}=\sqrt{\frac{\mu _0^2}{4}+m_q^2},\qquad m_{\mathrm{eff}}^{(g)}=\frac{\mu _0}{2},$$
(3)
where $`m_q`$ is the current quark mass. Thus the gluons cannot branch unless their mass is more than $`2m_{\mathrm{eff}}^{(g)}=\mu _0`$, while a quark cannot branch unless its mass is more than $`m_{\mathrm{eff}}^{(q)}+m_{\mathrm{eff}}^{(g)}`$. An appropriate value for $`\mu _0`$ is about 1 GeV pcm9 ; a larger value is not favoured by the data, and a smaller value will cause the perturbative expression to blow up. These arguments, however, do not apply to photon emission, since QED perturbation theory does not break down and photons are not affected by confinement forces. Thus, in principle, quarks can go on emitting photons till their mass reduces to the current quark mass. One may further argue that if the confinement forces screen the “bare” quarks, the effective cut-off can be of the order of a GeV. Thus we can choose the cut-off scale $`\mu _0`$ separately for the emission of photons and gain valuable insight into confinement at work.
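The threshold bookkeeping implied by Eq. (3) is equally simple to express; in the sketch below (ours), the cut-off and the current quark mass are illustrative values, not parameters read from VNI:

```python
import math

def m_eff_quark(mu0: float, m_q: float) -> float:
    """Effective quark mass of Eq. (3): sqrt(mu0^2/4 + m_q^2), in GeV."""
    return math.sqrt(mu0**2 / 4.0 + m_q**2)

def m_eff_gluon(mu0: float) -> float:
    """Effective gluon mass of Eq. (3): mu0/2, in GeV."""
    return mu0 / 2.0

mu0 = 1.0      # GeV, the cut-off favoured in the text for QCD branchings
m_u = 0.005    # GeV, an illustrative current u-quark mass

mq, mg = m_eff_quark(mu0, m_u), m_eff_gluon(mu0)
print(f"m_eff(q) = {mq:.3f} GeV, m_eff(g) = {mg:.3f} GeV")
print(f"a gluon can branch only above {2.0 * mg:.3f} GeV")   # 2 m_eff(g) = mu0
print(f"a quark can branch only above {mq + mg:.3f} GeV")    # m_eff(q) + m_eff(g)
```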
The discussion above was focussed on the production of photons from the branching of quarks. We also include in the PCM the parton scattering processes which yield photons: $`q+\overline{q}\to g+\gamma `$ and $`q+g\to q+\gamma `$, i.e., the annihilation and Compton processes, whose perturbative cross-sections are well known. In the PCM approach we treat these processes within perturbative QCD if the transverse momentum of the process ($`p_T`$) is larger than some cut-off $`p_T^0`$; see p0 . Thus our results for these contributions will be strictly valid only for $`p_T>p_T^0`$. (This cut-off is introduced in the collision frame of the partons, and thus we shall have contributions even at smaller $`p_T`$ in the nucleus-nucleus c.m. frame.)
## Results
We study four examples: $`S+S`$ and $`Pb+Pb`$ collisions at SPS energies and $`S+S`$ and $`Au+Au`$ collisions at RHIC energies.
In Fig. 1 we have plotted the production of single photons from such partonic matter in the central rapidity region for the $`Pb+Pb`$ system at SPS energies. The dot-dashed histogram shows the contribution of the Compton and annihilation processes mentioned above. The dashed and solid histograms show the total contributions (i.e., including the branchings $`q\to q\gamma `$) when the virtuality cut-off for photon production is taken as 0.01 and 1 GeV, respectively.
We see that prompt photons from the quark branchings completely dominate the yield for $`p_T\lesssim 3`$ GeV, whereas at larger transverse momenta the photons coming from the collision processes dominate. The reduction of the virtuality cut-off for the $`q\to q\gamma `$ branching is seen to enhance the production of photons at lower transverse momenta, as one expects.
We have also shown the production of single photons from $`pp`$ collisions at $`\sqrt{s}\approx `$ 24 GeV, obtained by the WA70 wa70 , NA24 na24 , and UA6 ua6 collaborations, scaled by the nuclear thickness for zero impact parameter for the collision of lead nuclei. The solid curve gives the perturbative QCD results jean for the $`pp`$ collisions scaled similarly. The dashed curve is a direct extrapolation of these results to lower $`p_T`$.
In Fig. 2 we have plotted the transverse momentum distributions of the single photons in several rapidity bins for the $`Pb+Pb`$ and $`S+S`$ systems at SPS energies. We see that the transverse spectra scale reasonably well with the ratio of the nuclear overlap functions for central collisions of the two systems, $`T_{\mathrm{Pb}\mathrm{Pb}}/T_{\mathrm{S}\mathrm{S}}\approx `$ 15.4, which is indicative of the origin of these photons basically in a collision mechanism. (Note that time-like partons are generated only if there is a collision.) The slight deviation from this scaling seen at lower $`p_T`$ results in a $`\approx `$ 20% increase in the integrated yield at central rapidities. This is a good measure of the multiple scatterings in the PCM. In fact, we have found that the number of hard scatterings in the $`Pb+Pb`$ system is $`\approx `$ 17 times larger than that for the $`S+S`$ system, which also essentially determines the ratio of the number of photons produced in the two cases. We also note that the inverse slope of the $`p_T`$ distribution decreases at larger rapidities, which suggests that the “hottest” partonic system is formed at central rapidities.
In Fig. 3 we have plotted our results for the $`S+S`$ and $`Au+Au`$ systems at RHIC energies in the same fashion as Fig. 2 above. We see that the inverse slope of the $`p_T`$ distribution is now larger and drops only marginally at larger rapidities, indicating that the partonic system is now “hotter” and spread over a larger range of rapidity. Even though the $`p_T`$ distribution of the photons is seen to roughly scale with the ratio of the nuclear overlap functions for central collisions, $`T_{\mathrm{Au}\mathrm{Au}}/T_{\mathrm{S}\mathrm{S}}\approx `$ 14.2, the integrated yield of photons for the $`Au+Au`$ system is seen to be only about 12 times that for the $`S+S`$ system at RHIC energies. We have again checked that the number of hard scatterings for the $`Au+Au`$ system is also only about 12 times that for the $`S+S`$ system. (Again note that we have switched off soft scatterings completely, and hard scatterings are permitted only if the $`p_T`$ is more than $`p_T^0`$, which is taken to be larger at higher energies; see p0 .)
This contrasting behaviour at SPS and RHIC energies seen in our work has a very interesting physical origin. At SPS energies, the partonic system begins to get dense and multiple scatterings increase, especially for heavier colliding nuclei. At RHIC energies, the partonic system gets quite dense, and the Landau-Pomeranchuk effect starts playing an important role. We have implemented this in the PCM semi-phenomenologically, by inhibiting a new scattering of partons until the passage of their formation time after a given scattering. In a separate publication we have demonstrated that these competing mechanisms can be seen at work by comparing results at zero impact parameter for different colliding nuclei.
## Discussions and summary
Before concluding, we would like to make some other observations.
Firstly, recall that such branchings of the partons produced in hard collisions correspond to a next-to-leading-order correction in $`\alpha _s`$. These are known to be considerably enhanced for collinear emissions. The parton shower mechanism incorporated in the PCM amounts to including these enhanced contributions to all orders, instead of including all the terms of a given order rkellis .
It may also be added that the first-order corrections to the Compton and annihilation processes in the plasma have been studied by a number of authors pradip ; however, in the plasma $`Q^2\sim (2T)^2`$, so their contribution is limited to very low $`p_T`$. $`Q^2`$ is obviously much larger in the early hard scatterings, and thus the radiation from the emerging partons is much more intense and also populates higher transverse momenta, as seen in the present work.
The large yield of photons from the branching of energetic quarks preceding the formation of dense partonic matter opens an interesting possibility: to look for a similar contribution to dilepton (virtual photon) production in such collisions. In fact, a large yield of low-mass dileptons from $`q\overline{q}\to \mathrm{}\overline{\mathrm{}}`$ processes was reported in PCM pcm6 calculations at RHIC energies. It is quite likely that this process also makes a substantial contribution to the “excess” low-mass dileptons observed in sulphur- and lead-induced collisions studied at SPS energies.
Recall again that we have only included the contribution of photons from partonic interactions in this work. It is quite likely that the hadrons produced at the end will also interact and produce photons, a process which has been extensively studied in recent times (see, e.g., Ref. crs ). A comparison of typical results from, say, Ref. crs with the present work shows that at SPS energies the emission from the early hard partonic scatterings is of the same order as the photon production from later hadronic reactions for $`p_T\approx `$ 2–3 GeV, and dominates considerably over it at higher transverse momenta. A comparison of our predictions (Fig. 4) with the preliminary results reported by the WA98 collaboration WA98 in fact clearly demonstrates that the pre-equilibrium contributions evaluated in the present work will play a very important role in providing a proper description of the single-photon data at larger $`p_T`$ from such collisions.
We conclude that the formation of a hot and dense partonic system in relativistic heavy ion collisions may be preceded by a strong flash of photons following the early hard scatterings. Their yield will, among several other interesting aspects, also throw light on the extent of multiple scattering encountered in these collisions.
## ACKNOWLEDGEMENTS
Most of this work was done when I visited Klaus at the Brookhaven National Laboratory during December 15, 1997 to March 15, 1998. Little did I know that I would not see Klaus again. I was in e-mail contact with him till a day before he departed. I wondered what should I write, having done several things and having planned several things in collaboration with him and yet not being able to attend the workshop in his memory. The choice was not easy. This work appeared in print pcm7 in the September 1998 issue of the Physical Review C; with Klaus going away in a ‘flash’ just when these issues were perhaps being mailed. Yes Klaus;
> The world was listening
> With so much attention,
> Alas! you dozed off
> While telling your tales..
Boss!, I will always miss you, remember you, and endeavour to complete all that we planned.
This work was supported in part by the D.O.E under contract no. DE-AC02-76H00016.
# Three-Dimensional Evolution of the Galactic Fountain
## 1. Introduction
For more than thirty years it has been known that the Galactic halo is populated with HI clouds, whose origins are still not completely understood. The halo clouds were first detected through 21 cm line emission surveys of the Northern Galactic hemisphere carried out by Muller et al. (1963) and were classified, according to their velocity deviations from that of the local standard of rest (LSR), as intermediate velocity clouds (IVCs), having velocity deviations between $`\sim `$20 and $`\sim `$90 km s<sup>-1</sup>; high velocity clouds (HVCs), with velocity deviations between $`\sim `$90 and $`\sim `$220 km s<sup>-1</sup>; and very high velocity clouds (VHVCs), with velocities above $`\sim `$220 km s<sup>-1</sup> (for a thorough review of these HI clouds, see Wakker (1990) and Wakker & van Woerden (1997)).
Very little is known about the origin of the halo clouds. This is principally due to a lack of knowledge of their distances. However, some insight regarding their origins has been provided by the combination of 21-cm emission data with absorption line data. These studies have shown that the metallicity of some of these clouds (although, see Lu et al. 1998 and Wakker et al. 1999 for counter-examples) is similar to that found in disk gas (de Boer et al. 1991), implying that these clouds are not formed from primordial gas, but from gas expelled from the Galactic disk by some energetic events such as supernovae.
### 1.1. Models for the Origin of HI Clouds
Two distinct classes of models have been proposed for the outflow, depending on the distribution of supernovae in the Galactic disk. The first class, referred to as “Galactic fountains”, assumes that the flow results from hot gas, heated by isolated supernovae in the Galactic disk, forming a continuous outflow on the Galactic scale (Shapiro & Field 1976). The second class, referred to as “Galactic chimneys”, assumes that the flow of gas has its origin in clusters of supernovae that have blown a hole in the Galactic disk, through which the hot gas flows ballistically into the halo (Tomisaka & Ikeuchi 1986).
#### Galactic Fountain
Models of gas flow on the Galactic scale were first introduced by Shapiro & Field (1976) and subsequently developed by Bregman (1980), Kahn (1981) and others. The Galactic fountain originates from the widespread supernovae that warm up the disk gas to temperatures of $`10^6`$ K; the gas flows at a rate of $`10^{-19}`$ g cm<sup>-2</sup> s<sup>-1</sup> into the halo (Bregman 1980; Kahn 1981). The upflowing gas cools and condenses into neutral hydrogen clouds that rain onto the disk with velocities of the order of 60 to 100 km s<sup>-1</sup> (Kahn 1981). In these models the height to which the hot gas will rise and the expected rate of condensation in the cooling gas depend only on the temperature of the gas at the base of the fountain and on the rate of cooling of the upflowing gas.
Kahn (1981) showed analytically that most of the gas entering a low-temperature fountain would be transonic, i.e., the intercloud medium would become supersonic at some point not far from the Galactic disk. The location of a sonic level for the fountain gas would determine the initial conditions of the flow as it enters the fountain, and therefore would be the cause of any asymmetry that may exist between hemispheres (see discussion in Avillez et al. 1995). In this model the ascending and descending parts of the fountain gas are separated by a shock wave. The ascending flow entering the shock wave is heated and becomes part of a hot layer that supports the descending cool layer on top of it. Rayleigh-Taylor instabilities grow at the interface separating the two layers. In consequence, the cool gas breaks up into cloudlets falling towards the disk with intermediate velocities.
Houck & Bregman (1990), using two-dimensional quasi-static models, confirmed the formation of intermediate velocity clouds in a transonic fountain as predicted by Kahn (1981). After the clouds formed they were removed from the computational grid and were treated as independent entities moving in a path governed by gravitational and centrifugal effects. Using this procedure, Bregman (1980) was able to partially reproduce the distribution of HI clouds in the halo. The drawback of these models is the absence of interactions between the clouds and the rest of the flow, a result of the clouds being removed from the computational domain.
More sophisticated models, considering the two-dimensional evolution of the disk and halo gas under the effects of stellar winds and supernova heating, have been developed by Rosen et al. (1993) and Rosen & Bregman (1995). Two co-spatial fluids representing the stars and the gas in the interstellar medium were used. These models reproduced the presence of a multiphase medium with cool, warm and hot intermixed phases having mean scale heights compatible with those observed in the Galaxy. However, the structure and properties of the ISM changed according to the overall energy injection rate. As a consequence, the models were unable to reproduce, in the same simulation, both the structure of the ISM near the disk and the presence of HI gas with high and intermediate velocities.
#### Localized Outbursts: “Chimneys”
HI emission surveys of the Galactic disk by Heiles (1984) and Koo et al. (1992) revealed the presence of holes in the emission maps. These holes have sizes varying between a few hundred parsecs and a kiloparsec. Surveys of the nearby spiral galaxies M31 and M33 by Brinks & Bajaja (1986) and Deul & Hartog (1990) have shown the presence of holes with similar sizes. The holes are associated with isolated as well as clustered supernovae running along the spiral arms of the galaxies.
Supernovae in OB associations generate large cavities of low density gas in the Galactic disk. These cavities are surrounded by a thin shell of cold gas swept up by the blast waves of successive supernovae that occur inside the cavity. As a consequence, the shell is accelerated upwards and Rayleigh-Taylor instabilities develop at the shell cap, leading to its disruption. The hot gas inside the cavity escapes into the halo in a highly energetic outflow, breaking through the Lockman layer. In the disk and lower halo the outflow is confined to a cone-shaped structure by the remains of the old shell (Norman & Ikeuchi 1989). The energies involved in such a phenomenon are of the order of $`10^{53}`$ erg (Heiles 1991). As the hot gas rises into the halo, it cools and returns to the Galactic disk, forming a fountain. The height to which the gas rises varies between 5 and 10 kpc. Therefore, chimneys can account for the presence of HI clouds at greater heights than the large scale outflows previously discussed.
Two-dimensional modelling has been carried out by Tomisaka & Ikeuchi (1986), Mac Low et al. (1989) and Tenorio-Tagle et al. (1990), using the Dickey-Lockman (1990) gas distribution in the halo. These authors have shown that superbubbles are able to break through the Lockman layer, provided they are generated in OB associations that have a high number of stars and are displaced from the Galactic plane to a height of at least 100 pc. The number of supernovae needed for such a multiple explosion varies between 50 and 100 (Tenorio-Tagle & Bodenheimer 1988; Tenorio-Tagle et al. 1990). Events like this must be regarded as unusual, although there is evidence for such events in other galaxies (Meaburn 1980).
### 1.2. Objectives of the Study
It is clear that substantial progress has been made in modelling the Galactic fountain since it was first introduced by Shapiro & Field in 1976. The models described in the previous sections give a two-dimensional description of the evolution of the disk and halo gas, and simulate the vertical distribution of IVCs and HVCs in the halo. However, such two-dimensional calculations impose natural limitations on the evolution of the gas and clouds. The absence of a third dimension constrains the motion to a vertical plane perpendicular to the Galactic disk. If three-dimensional evolution is considered, the overall structure of the flow may suffer modifications that lead to the generation of phenomena that may not be identified in the two-dimensional calculations. Furthermore, the models have considered simplified conditions for the stars and the ISM. The Galactic fountain models assumed that supernovae were randomly distributed in the disk and occurred at a rate of 3 per century, corresponding to a massflux of hot ($`T\sim 10^6`$ K) material into the halo of the order of $`10^{-19}`$ g cm<sup>-2</sup> s<sup>-1</sup>, whereas global models such as those of Rosen & Bregman (1995) showed that at this rate the models are unable to reproduce the presence of HVCs in the halo. The chimney models neglected the presence of a dynamical thick disk, which would constrain the structure of the chimneys, and the formation of the chimneys was carried out in a medium where no other phenomena were present.
The principal objective of this research is to develop a three-dimensional model to account for the collective effects of type Ib, Ic and II supernovae on the structure of the interstellar medium in the Galactic disk and halo, and for the formation of the major features already observed in the Galaxy; these include the Lockman and Reynolds layers, large scale outflows, chimneys, and HI clouds, features that all contribute to the overall disk-halo-disk cycle known as the Galactic fountain.
In section 2 the numerical modelling used in this study is presented, followed by a discussion of the evolution of the simulations in section 3. Section 4 deals with the IVCs and HVCs detected in the simulations. In section 5 a discussion of the results is carried out, followed by a summary of the fountain model (section 6).
## 2. Numerical Modelling
### 2.1. Model of the Galaxy
The study of the evolution of the Galactic disk rests on the realization that the Milky Way has thin and thick disks of gas in addition to a stellar disk. The thin gas disk has a characteristic thickness comparable to that of the stellar disk of Population I stars. The thick gas disk is composed of warm neutral and ionized gases with different scale heights: 500 pc (Lockman et al. 1986) and 950 pc (Reynolds 1987), respectively.
The stellar disk has a half thickness of 100 pc and a vertical mass distribution, $`\rho _{*}`$, inferred from the stellar kinematics, given by (Avillez et al. 1997),

$$\rho _{*}=\rho _{*,0}\,\mathrm{sech}^2\left[\left(2\pi G\beta _{*}\rho _{*,0}\right)^{1/2}z\right]$$
(1)
where $`z`$ varies between $`-100`$ pc and $`+100`$ pc, $`\rho _{*,0}=3.0\times 10^{-24}`$ g cm<sup>-3</sup> is the mass density contributed by Population I stars near the Galactic plane (Allen 1991) and the constant $`\beta _{*}=1.9\times 10^{-13}`$ cm<sup>-2</sup> s<sup>2</sup>. This mass distribution generates a local gravitational potential, $`\mathrm{\Phi }`$, of the form
$$\mathrm{\Phi }=\frac{2}{\beta _{*}}\mathrm{ln}\,\mathrm{cosh}\left[\left(2\pi G\beta _{*}\rho _{*,0}\right)^{1/2}z\right].$$
(2)
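The stellar density and potential are straightforward to evaluate. The Python sketch below (ours; constants in cgs units, with the negative exponent of $`\beta _{*}`$ as read above) tabulates equations (1) and (2):

```python
import math

G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
PC = 3.086e18         # cm per parsec
RHO_STAR0 = 3.0e-24   # g cm^-3, midplane Population I density
BETA_STAR = 1.9e-13   # cm^-2 s^2

# Inverse length scale appearing in both equations
K = math.sqrt(2.0 * math.pi * G * BETA_STAR * RHO_STAR0)

def rho_star(z_pc: float) -> float:
    """Stellar mass density of equation (1), valid for |z| <= 100 pc."""
    return RHO_STAR0 / math.cosh(K * z_pc * PC) ** 2

def phi(z_pc: float) -> float:
    """Gravitational potential of equation (2), in cm^2 s^-2."""
    return (2.0 / BETA_STAR) * math.log(math.cosh(K * z_pc * PC))

for z in (0.0, 50.0, 100.0):
    print(f"z = {z:5.1f} pc: rho_* = {rho_star(z):.3e} g/cm^3, Phi = {phi(z):.3e}")
```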
The stellar disk is populated by supernovae of types Ib, Ic and II, the major fraction of which occur along the spiral arms of the Galaxy, close to or in HII regions (Porter & Filippenko 1987). Supernovae of types Ib and Ic have progenitors with masses $`M\ge 15\text{ }\mathrm{M}_{\odot }`$, whereas Type II SNe originate from early B-type precursors with $`9\text{ }\mathrm{M}_{\odot }\le M\le 15\text{ }\mathrm{M}_{\odot }`$.
The rates of occurrence of supernovae of types Ib+Ic and II in the Galaxy are $`2\times 10^{-3}`$ yr<sup>-1</sup> and $`1.2\times 10^{-2}`$ yr<sup>-1</sup>, respectively (Cappellaro et al. 1997). Similar rates have been found by Evans et al. (1989) in a survey of 748 Shapley-Ames galaxies. The total rate of these supernovae in the Galaxy is $`1.4\times 10^{-2}`$ yr<sup>-1</sup>. $`60\%`$ of these supernovae occur in OB associations, whereas the remaining $`40\%`$ are isolated events (Cowie et al. 1979).
### 2.2. Basic Equations and Numerical Methods
The evolution of the disk gas is described by the equations of conservation of mass, momentum and energy:
$$\frac{\partial \rho }{\partial t}+\nabla \cdot \left(\rho 𝐯\right)=0;$$
(3)
$$\frac{\partial \left(\rho 𝐯\right)}{\partial t}+\nabla \cdot \left(\rho \mathrm{𝐯𝐯}\right)=-\nabla p-\rho \nabla \mathrm{\Phi };$$
(4)
$$\frac{\partial \left(\rho e\right)}{\partial t}+\nabla \cdot \left(\rho e𝐯\right)=-p\nabla \cdot 𝐯-n^2\mathrm{\Lambda };$$
(5)
where $`\rho `$, $`p`$, $`e`$, and $`𝐯`$ are the mass density, pressure, specific energy and velocity of the gas, respectively. The set of equations is closed by the equation of state of an ideal gas.
$`\mathrm{\Lambda }`$ is a functional approximation to the cooling functions of Dalgarno & McCray (1972), Raymond et al. (1976) and the isochoric curves of Shapiro & Moore (1976), except in the temperature range between $`10^5`$ and $`5\times 10^6`$ K, where the simple power law in temperature (Kahn 1976)
$$\mathrm{\Lambda }=1.3\times 10^{-19}\,T^{-0.5}\text{ erg cm}^3\text{ s}^{-1}$$
(6)
is applied. Between $`5\times 10^6`$ and $`5\times 10^7`$ K the cooling function has been approximated by $`T^{0.333}`$ (Dorfi 1997). In order to prevent the gas from cooling catastrophically towards zero temperature, $`\mathrm{\Lambda }`$ is set to zero below 200 K.
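A minimal sketch of the piecewise cooling function as described above follows; only the two analytic branches quoted in the text are implemented, and the normalisation of the $`T^{0.333}`$ branch is our assumption, chosen to make $`\mathrm{\Lambda }`$ continuous at $`5\times 10^6`$ K:

```python
def cooling_lambda(T: float) -> float:
    """Piecewise cooling function Lambda(T), in erg cm^3 s^-1.

    Outside the two power-law regimes the text uses tabulated functions
    (Dalgarno & McCray 1972; Raymond et al. 1976; Shapiro & Moore 1976),
    which this sketch does not reproduce.
    """
    if T < 200.0:
        return 0.0                                 # no cooling below 200 K
    if 1.0e5 <= T <= 5.0e6:
        return 1.3e-19 * T ** -0.5                 # Kahn (1976), Eq. (6)
    if 5.0e6 < T <= 5.0e7:
        lam_match = 1.3e-19 * 5.0e6 ** -0.5        # value at the matching point
        return lam_match * (T / 5.0e6) ** 0.333    # Dorfi (1997) slope
    raise NotImplementedError("tabulated regime not implemented in this sketch")

for T in (1.0e5, 1.0e6, 1.0e7):
    print(f"T = {T:.1e} K -> Lambda = {cooling_lambda(T):.3e} erg cm^3 s^-1")
```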
The equations of evolution are solved by means of a three-dimensional hydrodynamical scheme using the piecewise parabolic method (Colella & Woodward 1984) and the adaptive mesh refinement algorithm of Berger & Colella (1989).
### 2.3. Computational Domain and boundary Conditions
The simulations were carried out using a Cartesian grid centered on the Galactic plane, with an area of 1 kpc<sup>2</sup> and extending from $`-4`$ kpc to 4 kpc. Grid resolutions of 5 pc and 10 pc were used for $`-270\le z\le 270`$ pc and for $`z\le -250`$ pc and $`z\ge +250`$ pc, respectively. The cells located at $`-270\le z\le -250`$ pc and $`250\le z\le 270`$ pc of the coarser and finer grids overlap to ensure continuity between the two grids. The solution in the finer grid, in the ranges $`-270\le z\le -250`$ pc and $`250\le z\le 270`$ pc, results from a conservative interpolation from the data in the coarser cells to the fine grid cells, as prescribed by Berger & Colella (1989).
The boundary conditions on the faces perpendicular to the Galactic plane are periodic; outflow boundary conditions are used at the top and bottom of the grid, parallel to the Galactic plane.
### 2.4. Initial Conditions
The initial configuration of the system in the computational domain takes the vertical distribution of the disk and halo gases in accordance with the Dickey & Lockman (1990) profile, and reproduces the distribution of the thin and thick disk gas in the solar neighborhood. The gas is initially in hydrostatic equilibrium with the gravitational field, thus requiring an initial gas temperature as defined by the density distribution.
Throughout the simulations the presence of the stellar disk is required, with a thickness of 200 pc and an area of 1 kpc<sup>2</sup>, corresponding to a volume of $`V=2\times 10^8`$ pc<sup>3</sup>. Isolated and clustered supernovae of types Ib, Ic and II occur in the volume $`V`$ at rates obtained from the values discussed in §2.1 and given by
$$\sigma \frac{V}{V_G}=1.42\times 10^{-3}\,\sigma \text{ yr}^{-1}$$
(7)
where $`V_G=1.4\times 10^{11}`$ pc<sup>3</sup> is the volume of the Galactic disk with a radius of 15 kpc and a thickness of 200 pc, and $`\sigma `$ is the rate of supernovae observed in the volume $`V_G`$. Table 1 presents the rates of supernovae in the Galaxy as well as in the simulated disk. In the simulated stellar disk, isolated and clustered supernovae occur at mean time intervals of $`1.26\times 10^5`$ yr and $`8.4\times 10^4`$ yr, respectively.
The time interval between two successive superbubbles in the volume $`V`$ varies between $`1.9\times 10^6`$ yr and $`5.23\times 10^6`$ yr (Ferrière 1995), but following Norman & Ikeuchi (1989) and adopting an average value of $`N_{SN}=30`$ supernovae per superbubble, the time interval of formation of superbubbles in the volume $`V`$ is
$$\frac{N_{SN}}{\sigma _{OB}}=2.5\times 10^6\text{yr},$$
(8)
the value used in this study.
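Equation (7) and the quoted intervals are easy to verify; below is a short sketch (ours; the 40/60 split between isolated and clustered events is the one given in §2.1):

```python
V = 2.0e8            # pc^3, simulated stellar-disk volume
V_G = 1.4e11         # pc^3, Galactic stellar-disk volume
scale = V / V_G      # = 1.42e-3, the factor in equation (7)

rate_total = 1.4e-2                 # yr^-1, SNe Ib+Ic+II in the Galaxy
rate_isolated = 0.4 * rate_total    # 40% isolated events
rate_clustered = 0.6 * rate_total   # 60% in OB associations

for name, rate in (("isolated", rate_isolated), ("clustered", rate_clustered)):
    r_sim = rate * scale
    print(f"{name:9s}: {r_sim:.2e} /yr -> mean interval {1.0 / r_sim:.3g} yr")
# -> ~1.26e5 yr (isolated) and ~8.4e4 yr (clustered), as quoted above

N_SN = 30   # supernovae per superbubble (Norman & Ikeuchi 1989)
print(f"superbubble interval: {N_SN / (rate_clustered * scale):.2g} yr")  # ~2.5e6 yr
```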
### 2.5. Numerical Setup of Supernovae
Each supernova is set up at the beginning of phase II, with a thermal energy content of (Kahn 1975)
$$E_{\mathrm{Therm}}=0.36\,\rho _0\,a$$
(9)
and a kinetic energy content of
$$E_{\mathrm{Kin}}=0.138\,\rho _0\,a,$$
(10)
where $`a=2E/\rho _0`$, $`\rho _0`$ is the local density of the medium where the supernova occurs, and $`E`$ is the energy of the explosion.
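Note that with $`a=2E/\rho _0`$ the ambient density cancels, so equations (9) and (10) simply assign fixed fractions (72% and 27.6%) of the explosion energy; a sketch (the explosion energy and ambient density below are illustrative values):

```python
E_SN = 1.0e51     # erg, canonical supernova explosion energy (assumed)
rho_0 = 2.0e-24   # g cm^-3, illustrative ambient density

a = 2.0 * E_SN / rho_0
E_therm = 0.36 * rho_0 * a    # equation (9)  -> 0.72 E
E_kin = 0.138 * rho_0 * a     # equation (10) -> 0.276 E

print(f"E_therm = {E_therm:.2e} erg ({E_therm / E_SN:.1%} of E)")
print(f"E_kin   = {E_kin:.2e} erg ({E_kin / E_SN:.1%} of E)")
```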
The generation of supernovae in the stellar disk is carried out in a semi-random way: the coordinates of each supernova are determined from three new random numbers scaled to the dimensions of the stellar disk, so that positions are mapped anywhere within the volume of a cell in the grid.
The location of a new supernova in the grid is based on the following constraints: (a) the vertical density distribution of Population I stars given by equation (1), (b) the type of the new supernova, (c) the rates of occurrence of supernovae of types Ib+Ic and II, and (d) the rate of formation of superbubbles.
When a supernova is due to appear at a specific location, a supernova generator checks for the occurrence of a previous supernova at that location. If this test is negative, a new supernova is set up. If the test is positive, the time delay between the old supernova and the new one is checked and compared with the time delay between successive supernovae within a superbubble. If the delay is smaller than the time delay within the superbubble, no supernova is generated and the generator chooses a new position from a new set of random numbers; otherwise a supernova is set up.
Using these simple rules, the generator is able to place supernovae in the stellar disk with a distribution compatible with observations and, at the same time, to define the type of each supernova and thus the mass loaded into the surrounding medium during its explosion. A schematic rendering of this logic is given below.
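In the Python sketch that follows, the function names and bookkeeping are ours; only the sech<sup>2</sup> weighting of equation (1) and the time-delay test follow the text:

```python
import math
import random

K_PC = 1.5e-3     # pc^-1, (2 pi G beta_* rho_*,0)^(1/2) from equation (1)
Z_MAX = 100.0     # pc, half thickness of the stellar disk
DELAY_SB = 8.4e4  # yr, assumed delay between successive SNe in a superbubble

def sample_position(rng: random.Random):
    """Draw (x, y, z) in pc; z is accepted against the sech^2 profile."""
    while True:
        x, y = rng.uniform(0.0, 1000.0), rng.uniform(0.0, 1000.0)  # 1 kpc^2 disk
        z = rng.uniform(-Z_MAX, Z_MAX)
        if rng.random() < 1.0 / math.cosh(K_PC * z) ** 2:
            return x, y, z

def accept_site(t_last, t_now):
    """Accept a site with no previous SN, or one whose previous SN is
    older than the delay between successive SNe within a superbubble."""
    return t_last is None or (t_now - t_last) >= DELAY_SB

rng = random.Random(42)
print(sample_position(rng))
print(accept_site(None, 0.0), accept_site(1.0e4, 5.0e4))  # True, False
```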
## 3. Evolution of the Simulations
The simulations were carried out for a period of 1 Gyr with supernovae being generated at time 0.
In general the simulations start from a state of hydrostatic equilibrium that breaks up in the first stages of the simulation. The system then evolves into a statistical steady state in which the overall structure of the ISM remains similar on the global scale.
Initially there is an imbalance between cooling and heating of the gas in the disk, giving rise to an excess of radiative cooling over heating because of the small number of supernovae during this period. The gas originally located in the lower halo starts cooling and moves towards the midplane, colliding there with gas falling from the opposite side of the plane, as can be seen in Figure 1. The figure shows grey-scale maps of the density distribution around the Galactic plane taken at $`y=500`$ pc at four times: 2, 6, 10 and 14 Myr. After 10 Myr of evolution, the major fraction of the initial mass is confined within a slab having a characteristic thickness of 100-150 pc (Figure 1 (c)). As the supernovae occur they warm up the gas in the slab, which gains enough energy to overcome the gravitational pull of the disk and therefore expands upwards (Figure 1 (d)), redistributing matter and energy in the computational domain.
Figure 2 presents the total massflux of the descending and ascending gases measured at $`z=140`$ pc and averaged over the entire area of the Galactic disk during the first 500 Myr of evolution. During the first 10-20 Myr the descending gas has a maximum massflux of $`3.4\times 10^{-19}`$ g cm<sup>-2</sup> s<sup>-1</sup> (that is, $`4.8\times 10^{-2}`$ $`\mathrm{M}_{\odot }`$ kpc<sup>-2</sup> yr<sup>-1</sup>). The ascending gas massflux peaks at $`2.8\times 10^{-19}`$ g cm<sup>-2</sup> s<sup>-1</sup> ($`3.9\times 10^{-2}`$ $`\mathrm{M}_{\odot }`$ kpc<sup>-2</sup> yr<sup>-1</sup>). The expansion of the disk gas diminishes during the next 20 Myr, with a consequent decrease in the total massflux. At 50 Myr a balance between descending and ascending flows sets in. Both the ascending and descending gases have massfluxes of approximately $`3\times 10^{-20}`$ g cm<sup>-2</sup> s<sup>-1</sup> ($`4.2\times 10^{-3}`$ $`\mathrm{M}_{\odot }`$ kpc<sup>-2</sup> yr<sup>-1</sup>), which corresponds to a total inflow rate of 2.97 $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup> in the Galaxy on one side of the Galactic plane.
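The unit conversions behind these numbers are easy to check (a sketch; cgs constants, and a disk radius of 15 kpc as in §2.4):

```python
import math

MSUN = 1.989e33    # g per solar mass
KPC = 3.086e21     # cm per kpc
YR = 3.156e7       # s per yr

def to_msun_kpc2_yr(flux_cgs: float) -> float:
    """Convert a mass flux from g cm^-2 s^-1 to Msun kpc^-2 yr^-1."""
    return flux_cgs * KPC**2 * YR / MSUN

f = to_msun_kpc2_yr(3.0e-20)
print(f"3e-20 g/cm^2/s = {f:.2e} Msun/kpc^2/yr")   # ~4.5e-3, cf. 4.2e-3 quoted
area = math.pi * 15.0**2                           # kpc^2, R = 15 kpc disk
print(f"total inflow over the disk: {f * area:.2f} Msun/yr")   # ~3 Msun/yr
```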
This balance is observed during the rest of the simulations, indicating that a quasi-equilibrium between the descending and ascending gases dominates the evolution of the disk-halo gas. However, there are some periods when one of the flows dominates over the other. This is related to periodic imbalances between cooling and heating, leading to compressions and expansions of the disk gas. Such an effect has already been found in the two-dimensional simulations of the interstellar medium carried out by Rosen & Bregman (1995).
### 3.1. Galactic Disk
After the first 100 Myr the system evolves in such a way as to approach a statistical steady state in which the dynamics of the gases in the disk and the halo are similar throughout the simulations. As such, the ISM has the same overall structure at different times and locations in the disk and halo, although it experiences changes due to local processes such as supernovae, collisions with HI clouds, etc. In order to stress this point, pictures showing the time evolution of the disk gas between 200 and 500 Myr are presented (Figures 3-7).
The images show the distribution of the gas density at different locations in the disk. Edge-on maps of the disk gas for $`\left|z\right|\le 250`$ pc, taken at $`y=30`$ pc and $`y=790`$ pc (Figures 3 and 4), show a disk composed of a multiphase medium where cool ($`T<8000`$ K), warm ($`8000\text{ K}\le T\le 10^5`$ K) and hot ($`T>10^5`$ K) phases co-exist. The major fraction of the disk is filled with warm and hot gas. Most of that gas originates from supernovae spread over the disk. Embedded in this medium are cold sheets and clouds with various sizes and forms. The cold gas is mainly confined to the thin disk, with a thickness varying between 20 and 50 pc and a wiggly structure resulting from supernovae or collisions of descending clouds with the disk gas.
Sheets wiggling in directions perpendicular to the plane, resembling “worms” crawling out of the disk, are observed in all the images. They are associated with broken shells or supershells surrounding cavities of low density gas ($`\rho \sim 10^{-26}`$ g cm<sup>-3</sup>) generated by isolated and correlated supernovae, respectively.
Clouds result from broken shells or from cool gas that, during its descent towards the disk, is compressed by the interaction with the hot gas flowing upwards in the disk. The passage of shock waves increases the local density by factors of up to four relative to the gas ahead of the shock, thereby triggering the formation of clouds and sheets.
Face-on maps of the disk gas taken at $`z=\pm 250`$ pc (Figures 5 and 6) show the presence of clouds as well as sheets with thicknesses of 5 pc and widths of several tens, or even hundreds, of parsecs. Owing to the 5 pc resolution of the calculations in the disk, clouds with dimensions smaller than 5 pc are not resolved in the images.
The distribution and sizes of the sheets vary with increasing $`z`$ within the disk. In the midplane the sheets are thin and short as a result of the interaction of supernovae and collisions with other clouds (Figure 7).
As the blast waves from supernovae expand in the disk, they sweep up clouds and sheets, triggering their disruption into smaller structures. At $`z=\pm 250`$ pc the presence of sheets results from: (a) cold gas descending from above with sheet-like forms acquired during its formation, as a result of Rayleigh-Taylor instabilities that occurred in larger clouds, and (b) the breaking up of shells and supershells that expanded upwards and are displaced from the Galactic plane.
The midplane is populated with large regions of cool gas surrounding bubbles of hot gas. The bubbles have different sizes and most of them are isolated in the disk. However, some merge with others forming networks of hot gas.
As the supernovae occur they change the local structure of the interstellar medium, but are unable to change the global structure. Individual and clustered supernovae dominate the local environment depending on their spatial location. Isolated supernovae change the structure of the inner parts of the disk whereas supernovae in OB associations dominate the upper regions of the disk. During their explosions correlated supernovae easily disrupt the interstellar medium and push material into the halo in collimated structures surrounded by walls of cold gas resembling “chimneys”.
Effects like these are observed in Figures 8 and 9 showing the sequential evolution of the disk gas in a region where a chimney evolves. Figure 8 (c) shows the presence of a supernova with a radius of some 50 pc (the supernova is well inside phase III of its evolution) located at $`x=400`$ pc, $`z=100`$ pc. As the shell breaks, the inner parts of the remnant expand into the surrounding medium. A second supernova occurs one million years later at the base of the chimney at $`z=100`$ pc (Figure 8 (c)), and in consequence a large amount of hot gas is released into the halo through a tunnel with a width of 110 pc (Figure 8 (d)).
As the supernovae occur in the disk they warm it up to temperatures of $`10^6`$ K. After $`5\times 10^5`$ years, each supernova has released $`295\text{ }\mathrm{M}_{\odot }`$ of hot gas (Avillez 1998), contributing to the formation of “reservoirs” of hot gas in the disk with enough energy to expand upwards. In its ascending motion the hot gas interacts with the denser gas distributed in the thick disk. Such a configuration is Rayleigh-Taylor unstable and, as a result, the hot gas expands with finger-like structures, appearing to carve the cooler layers that compose the thick disk (Figure 9).
In general the disk gas has temperatures varying between $`10^4`$ and $`10^5`$ K, filling $`60\%`$ of the disk volume. This gas is distributed in the warm ionized layer, intermixed with most of the warm neutral gas. Gas with temperatures larger than $`10^5`$ K fills on average $`10\%`$ of the disk volume, with the major fraction of this hot gas located in the thin disk, although a small fraction of it is found in the warm ionized medium. Gas with temperatures smaller than $`10^4`$ K is mainly located at $`|z|\le 500`$ pc (filling on average $`25\%`$ of the disk volume).
Some of this gas is found in the wiggly cool layer located in the midplane, as well as in the form of small clouds, but the major fraction of it is distributed in the warm neutral layer on either side of the thin disk (see Avillez 1998 for a detailed description).
The warm neutral medium confines the disk gas, preventing it from escaping buoyantly into the halo unless some highly energetic event, such as correlated supernovae, occurs in the disk, feeding directly the ionized layers located above the warm neutral medium. The ionized gas extends upwards with a density that decreases smoothly up to 1.4 kpc, showing a steep decrease between this height and 1.5 kpc (Figure 10). At greater heights the gas has lower densities, with temperatures greater than $`10^6`$ K. The diffuse ionized region located at $`1.4\le z\le 1.5`$ kpc acts as an interface between the thick disk and the halo gas.
## 4. Halo Clouds
As the hot gas rises into the halo it cools and condenses into clouds of variable sizes that rain back onto the Galactic disk. Their sizes vary between a few parsecs and tens or hundreds of parsecs. The clouds are classified according to their sizes into cloudlets, clouds and complexes, and according to their velocity, $`v_z`$, into low, intermediate, high, and very high velocity clouds. Cloudlets have sizes varying between 5 and 50 pc, whereas clouds have greater sizes.
Complexes are formed by clouds found close together which have comparable velocities. Their sizes can reach hundreds of parsecs. A complex is identified on the basis of the velocity dispersion of its components relative to the bulk of the complex: if the clouds have a velocity dispersion smaller than the local sound speed, they belong to the complex. The classification reduces to simple thresholds, as the sketch below illustrates.
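A minimal Python sketch of the classification (ours, with the velocity boundaries taken from the introduction):

```python
def classify_size(size_pc: float) -> str:
    """Size classes used in the text: cloudlets span 5-50 pc, clouds are larger."""
    if size_pc < 5.0:
        return "unresolved"        # below the 5 pc grid resolution
    return "cloudlet" if size_pc <= 50.0 else "cloud"

def classify_velocity(v_z: float) -> str:
    """Velocity classes (deviation from the LSR, km/s; negative = infalling)."""
    v = abs(v_z)
    if v < 20.0:
        return "low velocity"
    if v < 90.0:
        return "intermediate velocity (IVC)"
    if v < 220.0:
        return "high velocity (HVC)"
    return "very high velocity (VHVC)"

for v in (-15.0, -60.0, -120.0, -250.0):
    print(f"v_z = {v:6.1f} km/s -> {classify_velocity(v)}")
```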
The clouds form as a result of small variations in the local density and in places where shock waves intersect. When the gas in the halo is swept up by a shock wave, its density is increased by factors of up to four, leading to an increase in its rate of cooling. This process is more efficient if several shocks intersect at the same location in the halo. Figure 11 shows a region in the halo where gas, after being swept up by several shock waves, starts condensing into clouds: the region located at $`200\le x\le 400`$ pc and $`2300\le z\le 2600`$ pc.
The cooler gas is sustained by the hot stream coming from below in an unstable configuration favoring the growth of Rayleigh-Taylor instabilities at the interface separating the two gases, and therefore leading to the formation of the clouds. The clouds suffer further instabilities, breaking up into cloudlets that rain onto the disk.
Face-on maps of the temperature distribution measured at $`z=2730`$ and $`z=1670`$ pc (Figure 12) show the effects of a set of clouds on the region they are passing through. The clouds interact with the hot gas present in that region at the time they arrive there, as well as with hot gas moving upwards, which engulfs them in its passage.
In general the clouds have a sheet-like structure, compatible with their being the result of Rayleigh-Taylor instabilities at the base of other clouds located at greater heights.
The clouds have a multiphase structure, where the core of the cloud is formed by cold gas ($`T\sim 10^3`$ K) and surrounded by warmer gas with temperatures of $`10^4`$ K. The clouds are embedded in a hot medium with temperatures greater than or equal to $`10^5`$ K (Figures 11 and 12).
The phases within the cloud have velocity dispersions varying from 1 km s<sup>-1</sup> to 10 km s<sup>-1</sup>, as can be seen in Figure 13 (a) and (c).
The figure presents a set of grey-scale images showing the velocity distribution of clouds detected at $`z=860`$ pc, $`z=1200`$ pc and $`z=1000`$ pc. There is a maximum velocity dispersion of 4 to 5 km s<sup>-1</sup>, with the center of the cloud having the smallest z-velocity, increasing towards the edge.
The images show the passage of several clouds through the layer in which they were detected (Figure 13 (c) and (d)). The descending velocity varies from cloud to cloud, with the smaller ones having a larger descending velocity than the larger cloud located at ($`x=800`$, $`y=300`$) pc. However, the velocity dispersion between the different clouds is very small. This indicates that they form a complex and are connected by velocity bridges.
The distribution of clouds in the halo varies with $`z`$. There is no preferred region in the halo where the HI clouds form. However, a large number of clouds has been detected at heights between 300 pc and 1.5 kpc, with most of the clouds occurring at $`z>800`$ pc.
The bulk of the HI clouds has intermediate velocities ($`-20`$ km s<sup>-1</sup> up to $`-90`$ km s<sup>-1</sup>), as can be seen in the histogram presented in Figure 14. $`30\%`$ of the clouds have velocities between -20 and -40 km s<sup>-1</sup>.
Only a small fraction of the clouds ($`5\%`$) has high velocities, varying between -90 and -139 km s<sup>-1</sup>. The detection of such a small number of HVCs may be the result of the reduced vertical extent of the grid in this study, $`\left|z\right|\le 4`$ kpc, instead of the full extent up to 10 kpc.
## 5. Discussion and Comparison with Observations
The striking feature of the simulations is the constant presence of a thin, cold, wiggly disk of gas overlaid by a thick disk of warm gas. The thick disk has two components: a warm neutral layer with a scale height of 500 pc (the warm HI disk) and a warm ionized component extending to a height of 1 to 1.5 kpc above the thin HI disk (Table 2).
The vertical distributions of the layers that compose the Galactic disk are compatible with observations carried out by Lockman (1984), Lockman et al. (1986) and Reynolds (1987), who found that the disk has warm neutral and ionized components. Lockman identified a thin HI disk with a Gaussian distribution with $`\sigma _z=135`$ pc and an exponential distribution with a scale height of some 500 pc. Reynolds observed a warm ionized medium extending far beyond the neutral layer, with scale heights of 1 kpc.
The simulations show that both layers are fed with hot gas coming from the thin disk through two major processes: large scale outflows and chimneys. Chimneys result from correlated supernovae generating superbubbles which blow holes in the disk, provided the superbubbles are displaced some 100 pc from the Galactic plane and located in the vicinity of a region with local density gradients in the $`z`$-direction (Figures 8 and 9).
The supershells are accelerated in the vertical direction and high-pressure gas forces its way out through relatively narrow channels (chimneys) with widths of some 100 - 150 pc.
Not all superbubbles create chimneys. In those cases, the superbubbles, as well as isolated supernovae, expand and die within the disk, releasing large amounts of hot gas with enough energy to overcome the gravitational pull of the disk and expand buoyantly upwards, generating large scale outflows (Figure 9). The ascending flow triggers the growth of Rayleigh-Taylor instabilities as it interacts with the cooler, denser medium in the thick disk. As a consequence, the hot gas acquires finger-like structures with mushroom caps at their tips. As the fingers expand they suffer further instabilities, until they cool and merge with the warm gas.
The gas in a finger cools from the outside towards its center, thus giving it the appearance of being enveloped by a thin sheet of cooler gas (Figures 3, 8 and 9). A structure called the “anchor”, having properties similar to those described above, has been observed in the southern Galactic hemisphere and reported by Normandeau & Basu (1998).
The kinematics and morphology of the bubbles in the disk are similar to those observed in the face-on galaxies M31 (Brinks & Bajaja 1986) and M33 (Deul & den Hartog 1990). These surveys revealed the presence of roughly elliptical features with a relative absence of neutral gas and with sizes varying between 40 and 150 pc. Some of these regions show a clear shell structure, indicating a relationship to isolated or clustered supernovae (Deul & den Hartog 1990).
As the hot gas ascends into the halo it cools, condensing into clouds that rain back onto the disk. Their distribution in the halo varies with $`z`$. The major fraction of the clouds have intermediate velocities and are mainly found between 800 pc and 2.2 kpc.
The clouds have a multiphase structure, with cold gas at temperatures of some $`10^3`$ K embedded in a warmer phase. A local analysis of the clouds shows components of different velocities, with dispersions up to 20 km s<sup>-1</sup>. Clouds found close together have smaller velocity dispersions between them, suggesting the presence of velocity bridges connecting the clouds.
These results are compatible with observations regarding the location and distribution of IVCs in the halo (Wesselius & Fejes 1973; Kuntz & Danly 1996), as well as with the theoretical predictions of Houck & Bregman (1990), in placing the major fraction of these clouds between 1 and 2 kpc, and with observations of the internal structure of the clouds (Cram & Giovanelli 1976; Shaw et al. 1996; see Wolfire et al. 1995 for an interpretation of these observations).
Only a small number of high velocity clouds have been detected in the simulations. This suggests that most of the high velocity clouds are formed at heights greater than 4 kpc, and therefore would not be detectable in the simulations. Their formation within the Galactic fountain is only possible provided the fountain gas rises to some 10 kpc. Such a scale height is incompatible with the analytical models described in §1, unless the injection level of the fountain is shifted some 1.5 kpc above the plane.
## 6. Conclusions: Where Does the Galactic Fountain Start?
The Galactic fountain is defined as the cycle performed by the gas escaping from the Galactic disk and rising into the halo, where it eventually cools and condenses into clouds. Such a fountain includes the outflows from isolated as well as clustered supernovae scattered in the Galactic disk, which generate large scale as well as localized outbursts and cause the formation of an unstable, mixed structure of neutral and ionized gas spread through the disk and halo.
The upper parts of the thick ionized disk, located at some 1.4 kpc above the plane, act as a disk-halo interface where the hot plasma expands upwards in a smooth flow. This process resembles the classical fountain predicted by Shapiro & Field (1976), Bregman (1980) and Kahn (1981) but with the injection level located at some 1.5 kpc above the Galactic plane.
The theoretical description of Kahn (1981) can then be applied to this flow. The flow starts subsonic, becomes supersonic at some 3.4 kpc above the injection level, cools, and continues its motion ballistically for a further 4.4 kpc (Avillez 1998), reaching the maximum height at $`z\approx 9.3\pm 1`$ kpc. The clouds formed during the descent of the cold gas will have, at some height, high velocities (a detailed study is presented in Avillez 1999).
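As a quick consistency check, the quoted maximum height follows from summing the three segments above; the short sketch below (with variable names of our choosing) just reproduces that arithmetic.

```python
# Consistency check for the maximum fountain height quoted above.
# All heights in kpc; the values are taken directly from the text.
z_injection = 1.5    # injection level above the Galactic plane
dz_subsonic = 3.4    # subsonic rise up to the sonic point
dz_ballistic = 4.4   # further ballistic rise after the gas cools

z_max = z_injection + dz_subsonic + dz_ballistic
print(f"z_max = {z_max:.1f} kpc")  # 9.3 kpc, matching the quoted 9.3 +/- 1 kpc
```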
The descending clouds, headed by a shock, sweep up the halo gas in their passage, thereby shocking and heating it. A thin layer of shocked gas is formed between the shock and the descending cloud. The configuration is Rayleigh-Taylor unstable and the cloud breaks up into cloudlets with sheet-like forms. These structures rain onto the disk with intermediate velocities (Berry et al. 1997; Avillez 1998) at an inflow rate of 2.97 $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup> on either side of the Galactic plane.
##### Acknowledgments.
This paper is dedicated to the memory of Prof. Franz D. Kahn. He will be sadly missed. I would like to thank Ana Inez Gomes de Castro, Bart Wakker, Brad Gibson and Joel Bregman for their careful reading of the manuscript and suggestions which led to its improvement.
## References
Allen, C. W. 1991, Astrophysical Quantities, The Athlone Press, London
Avillez, M. A., Berry, D. L. & Kahn, F. D. 1995, in The Formation of the Milky Way, ed. E. J. Alfaro & A. J. Delgado, Cambridge University Press, Cambridge, 126
Avillez, M. A., Berry, D. L. & Kahn, F. D. 1997, in Proceedings of the IAU Colloquium No. 166 “The Local Bubble and Beyond”, eds. D. Breitschwerdt, M. J. Freyberg, J. Trumper, Lecture Notes in Physics, 506, 495
Avillez, M. A. 1998, The Evolution of Galactic Fountains, Ph.D. Thesis, University of Évora, Portugal
Avillez, M. A., 1999, in preparation.
Berger, M. J. & Colella, P. 1989, J.Comp.Phys., 82, 64
Berry, D. L., Avillez, M. A. & Kahn, F. D. 1997, in Proceedings of the IAU Colloquium No. 166 “The Local Bubble and Beyond”, eds. D. Breitschwerdt, M. J. Freyberg, J. Trumper, Lecture Notes in Physics, 506, 499.
de Boer, K. S., Herbstmeier, U. & Mebold, U. 1991, in Proc. IAU Symposium 144, The Interstellar Disk-Halo Connection in Galaxies, Ed. H. Bloemen, Kluwer Acad. Publ., p. 161
Bregman, J. N. 1980, ApJ, 236, 577
Brinks, E., Bajaja, E. 1986, A&A, 169, 14
Colella, P. & Woodward, P. 1984, J.Comp.Phys., 54, 174
Cowie L.L., Songaila A., York D., 1979, ApJ, 230, 469
Cram, T. R. & Giovanelli, R. 1976, A&A, 48, 39
Dalgarno, A. & McCray, R. A. 1972, ARA&A, 10, 375
Deul, E. R. & den Hartog, R. H. 1990, A&A, 229, 362
Dickey, J. M. & Lockman, F. J. 1990, ARA&A, 28, 215
Dorfi, E. A. 1997, SAAS-FEE Lecture Notes, in press
Evans, R., van den Bergh, S. & McClure, R. D. 1989, ApJ, 345, 752.
Ferrière, K. M. 1995, ApJ, 441, 281
Heiles, C. 1984, ApJS, 55, 585
Houck, J. C. & Bregman, J. N. 1990, ApJ, 352, 506
Kahn, F. D. 1975, in Proceedings of the 14th International Cosmic Ray Conference, Munich, ed. K. Pinkau, 11, 3566
Kahn, F. D. 1976, A&A, 50, 145
Kahn, F. D. 1981, in Investigating the Universe, ed. F. D. Kahn, D. Reidel Publ. Co., Dordrecht, 1
Koo B.-C., Heiles C. & Reach W. T. 1992, ApJ, 390, 108
Kuntz, K. D. & Danly, L. 1996, ApJ, 457, 703
Lockman F. J., 1984, ApJ, 283, 90
Lockman, F. J., Hobbs, L. M. & Shull, J. M. 1986, ApJ, 301, 380
Lu L., Savage B., Sembach K., Wakker B., Oosterloo T., 1998, AJ, 115, 162
Mac Low, M.-M., McCray, R. & Norman, M. L. 1989, ApJ, 337, 141
Meaburn, J. 1980, MNRAS, 92, 365
Muller, C. A., Oort, J. H. & Raymond, E. 1963, CR Acad. Sci., Paris, 257, 1661
Norman, C. A. & Ikeuchi, S. 1989, ApJ, 345, 372
Normandeau, M. & Basu, S. 1998, in Proceedings of the Naramata Workshop, in press (astro-ph/9811238)
Porter, A. C. & Filippenko, A. V. 1987, AJ, 93, 1372
Raymond, J. C., Cox, D. P. & Smith, B. W. 1976, ApJ, 204, 290
Reynolds, R. J. 1987, ApJ, 323, 118
Rosen A., Bregman, J. N. & Norman, M. L. 1993, ApJ, 413, 137
Rosen, A. & Bregman, J. N. 1995, ApJ, 440, 634
Shapiro, P. R. & Field, G. B. 1976, ApJ, 205, 762
Shapiro, P. R. & Moore, R. T. 1976, ApJ, 207, 460
Shaw, C. R., Bates, B., Kemp, S. N., Keenan, F. P., Davies, R. D. & Roger, R. S. 1996, ApJ, 473, 849
Tenorio-Tagle, G. & Bodenheimer, P. 1988, ARA&A, 26, 145
Tenorio-Tagle, G., Różyczka, M. & Bodenheimer, P. 1990, A&A, 237, 207
Tomisaka, K. & Ikeuchi, S. 1986, PASJ, 38, 697
Wakker, B. P. 1990, Ph.D. thesis, Rijks Univ. Groningen
Wakker, B. P. & van Woerden, H. 1997, ARA&A, 35, 217
Wakker, B. P., et al. 1999, these proceedings
Wesselius, P. R. & Fejes, I. 1973, A&A, 24, 15
Wolfire, M. G., McKee, C. F., Hollenbach, D. & Tielens, A. G. G. M. 1995, ApJ, 453, 673
# The Close Environment of Seyfert Galaxies and Its Implication for Unification Models
## 1 Introduction
In the eighties it was found that a relatively large fraction of Seyferts had a close companion (Dahari (1984, 1985)), although claims that this excess was due to selection effects were never dismissed (Fuentes-Williams & Stocke (1988)). More recent work revealed significant differences between Seyfert 1 and Seyfert 2 galaxies (Laurikainen et al. (1994)), or at least marginal differences (De Robertis, Yee, & Hayhoe (1998)); in both cases an excess of companions was found for Seyfert 2 (Sy2) but not for Seyfert 1 (Sy1) galaxies with respect to non-active galaxies. However, Rafanelli, Violato, & Baruffolo (1995), also recently, found no significant difference between Sy1 and Sy2. We are left in an uncomfortable situation: the three most recent and comprehensive works provide inconsistent results, probably because the data are such that the inherent complexity (and definition ambiguity) of the problem is starting to affect statistical inferences. It is not among the aims of the present paper to thoroughly compare all the previous work; this has been done by Laurikainen & Salo (1995) and by Dultzin-Hacyan et al. (1999). The discrepancy must, however, be accounted for. In this work we use for the first time complete and correctly defined samples, as well as other important methodological improvements, which lead us to confirm the result that it is only Seyfert type 2 galaxies that have excess companions. Actually, there are other indications of intrinsic differences between Sy1 and Sy2 galaxies. It has long been known that Sy1 nuclei reside in earlier morphological type galaxies than Sy2 nuclei. This has recently been confirmed from a refined morphological classification of deep HST images by Malkan, Gorjian, & Tam (1998), along with other morphological differences which they believe to be intrinsic differences in host galaxy properties, and which thus undermine one of the postulates for unification models (UM). Dultzin-Hacyan, Masegosa, & Moles (1990) showed that while the mid-infrared ($`25\mu `$m) emission in Sy1 is synchrotron radiation (or dust re-emission of it), in Sy2 it is dust re-emission of starlight (see also Mouri & Taniguchi (1992); Dultzin-Hacyan & Benitez (1994); Gu et al. (1997)). In §2 the sample selection is described. The main results are summarized in §3; in §4 a brief discussion of the results is given; and finally, in §5, possible interpretations are analyzed and some conclusions are drawn.
## 2 Sample Selection and Analysis
The samples of Seyfert galaxies were compiled from the catalog by Lipovetsky, Neizvestny, & Neizvestnaya (1988). This catalog was compiled on the basis of the Second Byurakan Survey (SBS), which is a survey based solely on the UV excess method. The reason was to avoid the possible inclusion of Seyfert 2 galaxies serendipitously discovered because they belonged to interacting systems (which is the case for some of the galaxies in the catalog by Veron-Cetty & Veron (1991)). The importance of these observational effects was stressed by Marziani (1991), who also found a larger fraction of interacting systems for Seyfert 2 than for Seyfert 1.
The present sample consists of 72 Sy1 and 60 Sy2. Both samples are volume limited, and the $`\mathrm{V}/\mathrm{V}_{\mathrm{max}}`$ test assures uniformity – and thus completeness (Schmidt 1976) – to a level of 92%. The redshifts are limited to $`0.007z0.035`$ (Sy1) and to $`0.007z0.020`$ (Sy2), and we selected galaxies at high galactic latitudes, in order to avoid extinction and confusion due to galactic stars. In past work this has not been properly taken into account. For example, the Laurikainen et al. (1994) sample is biased toward low-galactic latitude objects: it contains 53 % of Seyfert galaxies at galactic latitude $`\mathrm{b}_{\mathrm{II}}\stackrel{>}{}45^{}`$, while only 27 % are so in the Rafanelli, Violato, & Baruffolo (1995) sample. Including low galactic latitude fields produces a bias toward a lower fraction of companions, as detection is more difficult because of confusion and absorption. Even if the bias is equally present in the Seyfert and control samples, a large fraction of low $`\mathrm{b}_{\mathrm{II}}`$ fields may introduce a bias against intrinsic differences between Seyfert and control samples. In this study, we have selected exclusively galaxies with $`\mathrm{b}_{\mathrm{II}}\stackrel{>}{}40^{}`$. Also, rich clusters were avoided.
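For illustration only, the $`\mathrm{V}/\mathrm{V}_{\mathrm{max}}`$ test amounts to a few lines of code; the sketch below uses randomly generated placeholder redshifts, not our samples, and Euclidean volumes, which are adequate at these low redshifts.

```python
import numpy as np

def v_over_vmax(z, z_min, z_max):
    """V/Vmax for a shell-limited sample: the Euclidean volume between z_min
    and each galaxy's redshift, over the volume of the whole shell.
    Euclidean volumes (V ~ z^3) are adequate at these low redshifts."""
    z = np.asarray(z)
    return (z**3 - z_min**3) / (z_max**3 - z_min**3)

# Hypothetical, volume-uniform redshifts for illustration only
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=72)
z = (0.007**3 + u * (0.035**3 - 0.007**3)) ** (1.0 / 3.0)

ratios = v_over_vmax(z, z_min=0.007, z_max=0.035)
# A complete (uniformly filled) sample gives <V/Vmax> = 0.5,
# with statistical scatter 1/sqrt(12 N).
print(f"<V/Vmax> = {ratios.mean():.2f} +/- {1.0 / np.sqrt(12 * 72):.2f}")
```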
For the control samples, the above criteria were also imposed. One important methodological improvement of this work is the definition of control samples of non-active galaxies which match the Seyfert galaxies in all respects except that they are not Seyfert. In order to achieve this, two control samples were defined, one for each type of Seyfert galaxy, because both the Hubble type and the redshift distributions of the two types of Seyfert galaxies differ. The control samples were obtained from a list of more than 10,000 objects of the CfA catalog (Huchra, Davis, & Latham (1983)). For each control sample we first matched the Hubble type distributions by artificially trimming the sample, then we randomly extracted two subsamples, and proceeded to match the redshift distributions. We did not match absolute magnitudes, since this would introduce a bias: matching absolute magnitudes (e.g. De Robertis et al. 1998) may bias the control sample toward intrinsically higher luminosity objects, as the Seyfert galaxies host an active nucleus whose luminosity is expected to be comparable to that of the whole galaxy. We did match the diameter distributions. Seyfert nuclei reside most frequently in giant galaxies. Giant galaxies are relatively rare, and usually “do not come alone”: dwarf galaxies are frequently observed in their immediate surroundings. It is therefore crucial to any statistical study to have a control sample matching not only the redshift distributions, but also the diameters and the morphological types of the Seyfert sample.
The control samples are complete (in volume) to a confidence level of up to 97%. Although the above-mentioned similarities were long known to be required for a proper comparison (Osterbrock 1993), in previous works matching the distributions was impossible to achieve while maintaining the same densities, due to the selection of small control samples from nearby galaxies. The search for a possible excess of companions within 100 Kpc is inconsistent with choosing the control sample galaxies in the vicinity of the Seyferts (Rafanelli, Violato, & Baruffolo (1995); Salvato & Rafanelli (1997)). If Seyferts are at, or close to, the center of a region of moderate galaxy density enhancement, “looking around for the closest non-active spiral galaxy” means to move (presumably a few 100 Kpc) away from the enhancement and to select areas which are systematically of lower density, hence underestimating the fraction of companions for the control sample, and therefore creating a spurious excess for Seyfert galaxies.
The procedure used to estimate the foreground/background galaxy contamination is as crucial in this type of statistical work as the correct definition of the control samples. The fraction of Seyfert galaxies with “physical” companions (proximate in space) is the fraction with companions observed within the given search radius, diminished by the fraction of galaxies with an optical companion. As in previous studies, we derived the probability of finding an optical companion within a given search radius from the Poisson distribution.
The use of the Lick counts given by Shane & Wirtanen (1967) to estimate the projection effects can introduce an important bias (as in Rafanelli, Violato, & Baruffolo (1995); Salvato & Rafanelli (1997); see also Laurikainen et al. (1994)). One of the main improvements in this work was the determination of the number density $`\rho `$ that enters the formula for the predicted number of background galaxies within each area. The determination was made directly from the DSS plates using FOCAS (Faint Object Classification and Analysis System; Jarvis & Tyson (1981)) to count galaxies in regions of one square degree surrounding each galaxy. In this work the background densities of the samples are statistically equal (according to a Mann-Whitney U test). Data on individual objects and on the searched sky fields will be presented in comprehensive form elsewhere (Krongold et al. 1999, in preparation).
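The Poisson estimate of the chance-alignment probability reduces to one line; the sketch below spells it out, with a purely illustrative density and search radius standing in for the measured FOCAS values.

```python
import numpy as np

def optical_companion_fraction(rho, radius_deg):
    """Probability of finding at least one chance (optical) companion within
    a circular search area, assuming background galaxies follow Poisson
    statistics with surface density rho (galaxies per square degree)."""
    mu = rho * np.pi * radius_deg**2  # expected number of background galaxies
    return 1.0 - np.exp(-mu)

# Illustrative numbers only (not the measured FOCAS densities):
rho = 50.0      # galaxies per square degree down to the plate limit
radius = 0.05   # search radius in degrees
print(f"P(>=1 optical companion) = {optical_companion_fraction(rho, radius):.3f}")
```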
## 3 Results
We identified all galaxies with at least one companion within three times the diameter of the galaxy (3 $`\mathrm{D}_\mathrm{S}`$). The search was performed automatically on the DSS with FOCAS, and was limited to galaxies that could be unambiguously distinguished from stars by the FOCAS algorithm. This procedure reduces to a minimum several biases present in previous works and discussed in the previous section.
Of the 72 Seyfert 1 galaxies $`\sim `$ 39 % have one companion, vs. $`\sim `$ 40 % of the 72 galaxies of the Seyfert 1 control sample. The expected number of optical companions from Poisson statistics is 18 % and 15 % for the Seyfert 1 and control sample respectively. If optical companions are subtracted, the percentage of galaxies with presumably physical companions is $`\sim `$ 18 % and $`\sim `$ 19 % for the Seyfert and control sample respectively. No significant difference is thus found between the Seyfert 1 sample and its control sample. It is important to stress that the fraction of control sample galaxies with companions is much higher (a factor of $`\sim `$ 2) than the expectation value from Poisson statistics. Of the 60 Seyfert 2 galaxies $`\sim `$ 70 %, vs 42 % of the control sample, show a companion within 3 $`\mathrm{D}_\mathrm{S}`$. The percentage expected from Poisson statistics is $`\sim `$ 34 % and $`\sim `$ 26 % for the Seyfert and comparison sample. Thus a large excess (statistically significant at a confidence level of $`\sim `$ 99.5 %) appears to be present for the Seyfert 2 galaxies. Also in the case of the Seyfert 2 sample and its control sample, the fraction of galaxies with a companion is a factor of $`\sim `$ 2 above the expectation value. This is an important result in its own right (further discussed in §4), which most likely reflects a strongly non-Poissonian distribution of galaxies on scales of $`\stackrel{<}{}100`$ Kpc.
The cumulative distribution of the projected linear distance $`\mathrm{D}_\mathrm{C}`$ (in Kpc) of the first companion is shown in Fig. 1, without correction for optical companions. For these measurements, close companions were re-identified by eye on the DSS field, and measurements of centroid position and of diameters were made on a computer screen. We searched for companion galaxies of diameters $`\mathrm{D}_\mathrm{C}`$$`\stackrel{>}{}`$ 4 Kpc (assuming $`\mathrm{H}_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$; this is the limiting diameter that can be resolved on the DSS by eye and by algorithm at redshift z$`\sim `$0.030), within a search radius in all cases equal to or larger than 100 Kpc of projected linear distance (and in any case $`\stackrel{<}{}`$ 250 Kpc). Above the limiting search radius we assumed a “non-detection”, and a lower limit to the companion distance was set equal to the search radius. At 50 Kpc we get 48% of galaxies with companions for Sy1 and 66% for Sy2, frequencies which are close to those obtained with a variable search radius equal to 3 $`\mathrm{D}_\mathrm{S}`$. The three left panels are for Seyfert 1 and the three panels on the right for Seyfert 2. The uppermost panels show the distribution for all detected galaxies, the middle panels for companion galaxies whose diameter is 10 Kpc $`\stackrel{>}{}`$ $`\mathrm{D}_\mathrm{C}`$$`\stackrel{>}{}`$ 4 Kpc, and the lowermost panels for large, bright companions ($`\mathrm{D}_\mathrm{C}`$ $`\stackrel{>}{}`$ 10 Kpc). The thin lines show the cumulative, unbinned distributions for Seyferts (solid lines) and for the control samples (dotted lines). The distribution binned over 20 Kpc is shown up to a projected linear distance of 100 Kpc (i.e., no lower limits are included). The error bars on the binned control sample frequencies were set with a “bootstrap” technique (Efron & Tibshirani (1993)), by randomly resampling the control galaxies into a large number of pseudo-control samples (3000), and by taking an uncertainty equal to twice the standard deviation of the distribution of companion frequency among the pseudo-control samples.
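The bootstrap estimate just described can be sketched in a few lines; the companion flags below are illustrative placeholders, not our measured samples.

```python
import numpy as np

def bootstrap_companion_errors(has_companion, n_resamples=3000, seed=0):
    """Bootstrap uncertainty on the companion fraction of a control sample:
    resample the control galaxies with replacement, recompute the fraction
    for each pseudo-sample, and quote twice the standard deviation, as done
    for the binned distributions in Fig. 1."""
    rng = np.random.default_rng(seed)
    has_companion = np.asarray(has_companion)
    n = len(has_companion)
    fractions = np.array([
        rng.choice(has_companion, size=n, replace=True).mean()
        for _ in range(n_resamples)
    ])
    return has_companion.mean(), 2.0 * fractions.std()

# Illustrative: a 60-galaxy control sample with ~42% companions
flags = np.array([1] * 25 + [0] * 35)
frac, err = bootstrap_companion_errors(flags)
print(f"companion fraction = {frac:.2f} +/- {err:.2f}")
```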
For the binned distributions (thick lines) a marginal statistical difference is present for small companions within 20 Kpc, and for large companions if the search radius is extended up to 100 Kpc. The situation for Seyfert 2 galaxies is markedly different. The unbinned distribution for all galaxies with $`\mathrm{D}_\mathrm{C}`$ $`\stackrel{>}{}4`$ Kpc is significantly different (at a 98 % confidence level), while for small companions the distributions are only marginally statistically different. Thus the difference is driven by a higher frequency of close, large companions, as shown by the lowermost panel on the right in Fig. 1. For companions with $`\mathrm{D}_\mathrm{C}`$ $`\stackrel{>}{}`$ 10 Kpc, the difference in the binned distribution is statistically significant up to $`\sim `$ 60 Kpc.
Summing up, our analysis shows an excess of bright companions within a search radius of $`\sim `$ 60 Kpc (or 3 $`\mathrm{D}_\mathrm{S}`$) for Seyfert 2 but not for Seyfert 1 galaxies, and an excess of galaxies in the close surroundings of both control and Seyfert galaxies with respect to the expectation of Poisson statistics.
## 4 Discussion
All studies based on the DSS (including, obviously, this one) have limitations intrinsic to the data: on the DSS, there is a bias against low surface brightness galaxies at one end, and against compact galaxies at the other. To make things worse, several authors applied a “sharp mask” to the data, ignoring the environment beyond a search radius of three times the diameter of the Seyfert galaxies, labeling each Seyfert galaxy as “with” or “without” companions and actually disregarding the complexity and richness of the fields around several Seyfert galaxies. A search radius of 3 $`\mathrm{D}_\mathrm{S}`$ varies from object to object: this may introduce an additional bias that is not controlled. In addition, even recent studies have been based on computer-unaided measurements on the plates (Laurikainen et al. (1994)), or worse, on printed enlargements (Rafanelli, Violato, & Baruffolo (1995)).
The use of the Shane & Wirtanen (1967) counts has provided an estimate of the number of optical companions expected by chance alignment with background (or foreground) galaxies. The probability of a chance alignment within a given search radius is assumed to follow a Poisson distribution. However, as clearly shown in this study, the actual distribution of galaxies within 100 Kpc is markedly non-Poissonian: control sample galaxies show a much higher fraction (a factor of $`\sim `$2) of companions than expected solely on the basis of Poisson statistics. This result is especially robust since counts were performed over one square degree around the Seyfert galaxies on the DSS, and it is consistent with the observation that, in samples of perturbed and interacting galaxies (like Vorontsov-Velyaminov’s) or galaxy pairs (like Karashentsev’s), the fraction of Seyfert galaxies appears to be comparable to or lower than that expected for field galaxies (see the thorough analysis in Laurikainen & Salo (1995) for references): clearly only a minority of interacting systems shows Seyfert-type activity. In other words, gravitational interaction may be a sufficient condition for activity, but it is certainly not a necessary condition.
It is important to stress that, in spite of the above-mentioned limitations, the results obtained by both Salvato & Rafanelli (1997) and De Robertis, Hayhoe, & Yee (1998) are not actually in contradiction with the results of the present work. De Robertis, Hayhoe, & Yee (1998) conclude that “while the companion frequency for Sy2 galaxies is formally higher, the result is not statistically significant (though it is in the same sense as Laurikainen & Salo (1995)).” Moreover, they do find that the mean environment of Sy1 is different from that of Sy2 at a greater than 95% confidence level, from spatial covariance amplitude analysis. The marginal statistical significance of the differences found by De Robertis, Hayhoe, & Yee (1998) is, in our opinion, just due to small number statistics, as the differences are confirmed by Laurikainen et al. (1994) and by the present work, which avoids several sources of bias.
## 5 Conclusions
We confirm an important, disturbing result: Seyfert 2 galaxies have an excess of nearby companions while Seyfert 1 do not. And if Seyfert 1 do not, what about quasars? The evidence provided till now about the occurrence of quasars in interacting host galaxies is based on studies of a few objects and is not statistically significant. We must stress, however, that this study and the previous ones address a small subset of the interaction phenomenologies that could give rise to accretion toward a galaxy nucleus: either we are studying bound systems in a stage in which the two galaxies are sufficiently close but not yet merging (evolved mergers can well be classified as isolated galaxies from the DSS), or unbound encounters, with separation $`\stackrel{<}{}100`$ Kpc. In both cases we are considering galaxies whose diameters are $`\stackrel{>}{}`$ 4-5 Kpc: it is not possible to perform a search (with recognition either by eye or by algorithms implemented on a computer) of smaller galaxies up to z $`\sim `$ 0.03 without introducing a redshift-dependent bias. Morphological disturbances in the inner galactic disks, which may have been produced in a very close encounter with a small companion, are not detected efficiently on the DSS, and were obviously not looked at. Hyperbolic encounters can leave the companion projected farther away than the limiting search radius in a time $`\mathrm{t}_{\mathrm{fly}\mathrm{by}}\sim 0.9\times 10^8\mathrm{s}_{100\mathrm{Kpc}}\mathrm{v}_{1000\mathrm{km}/\mathrm{s}}^{-1}`$ yr, where $`\mathrm{s}_{100\mathrm{Kpc}}`$ is the separation in units of 100 Kpc and $`\mathrm{v}_{1000\mathrm{km}/\mathrm{s}}`$ the relative velocity in units of 1000 km s<sup>-1</sup>. The limitation to three diameters, which corresponds to 60–80 Kpc, is likely to be inadequate: the enhancement may be genuinely restricted to $`\stackrel{<}{}`$ 100 Kpc, or may be due to a larger density of galaxies over a larger scale: on scales $`\sim `$ 1 Mpc peculiar motions with respect to the Hubble flow are expected to dominate. If we think of the Local Cluster, we realize that a reasonable search radius should indeed be $`\stackrel{>}{}`$ 500 Kpc. Studies with a large search radius ($`\sim `$ 500 Kpc) and involving faint companions – which cannot be carried out on photographic material – have yet to be done.
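The fly-by timescale quoted above is simple dimensional arithmetic; the sketch below verifies it with rounded constants.

```python
# Order-of-magnitude check of the fly-by timescale quoted above.
KPC_IN_KM = 3.086e16          # kilometres per kiloparsec
SECONDS_PER_YEAR = 3.156e7

separation_km = 100 * KPC_IN_KM   # 100 Kpc search radius
velocity_kms = 1000.0             # relative velocity in km/s

t_flyby_yr = separation_km / velocity_kms / SECONDS_PER_YEAR
print(f"t_flyby ~ {t_flyby_yr:.2e} yr")  # ~1e8 yr, as in the text
```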
The role of interactions in the induction of nuclear activity is a complex and open issue. It is particularly difficult to disentangle the differences involved in “monster fueling” and/or circumnuclear starburst triggering (see e.g. the recent discussion in De Robertis, Hayhoe, & Yee (1998)). Moles, Marquez, & Perez (1995) investigated all the Seyfert and LINER galaxies with known morphology, and found that they are all in interaction or have non-axisymmetric distortions, usually bars and/or rings, or both. The response to non-axisymmetric perturbations is also known to depend on the bulge-to-disk ratio. And we must remember that Sy1 nuclei tend to reside in earlier Hubble type galaxies than Sy2 nuclei, and both types in earlier types than Starburst nuclei (Terlevich, Melnick, & Moles (1987)).
Does the difference between the environment of Seyfert 1 and 2 galaxies revealed in this and in previous studies pose a challenge to the unification scheme for Seyfert galaxies? In its simplest form, the answer is yes. A “minimalist” interpretation would be to see Sy2 as Sy1 obscured because of interaction: strong interaction with a comparably sized companion enhances overall star formation and drives molecular gas toward the center of the galaxy, which may in turn obscure the active nucleus’ BLR. If an “obscuring torus scenario” applies, and if sources are observed at random orientation, then almost all interacting Sy2 should be obscured Sy1. This interpretation allows for an observational verification: spectropolarimetry of interacting Sy2 galaxies should reveal a “hidden” BLR in the majority of cases. As only $`\sim `$ 1/3 of Sy1 have a companion, this implies that about 2/3 of Sy1 should be genuinely unobscured objects. An alternative scheme was proposed by Dultzin-Hacyan (1995): radiation due to accretion onto a black hole (BH) decreases, while the relative contribution of circumnuclear starburst (SB) radiation increases, from Seyfert nuclei of type 1 to type 2. Intermediate types can obviously be explained by intermediate proportions of these contributions. Statistical studies of the multifrequency emission of Seyferts (Mas-Hesse et al. (1995); Dultzin-Hacyan & Ruano (1996)) independently support this scheme. It is also strongly supported by direct observations which show that Sy2 galaxies have more circumnuclear star-forming regions than Sy1, both in the optical (Gonzalez-Delgado & Perez (1993)) and in the IR (Maiolino & Rieke (1995); Maiolino et al. (1997)). Both alternatives are actually complementary, since it is the interaction that drives the needed obscuring and/or star-forming material to the nucleus. One thing is clear: these views do not deny the possibility that some, even the majority, of Sy2 galaxies are obscured Sy1. But an “only orientation” difference between Seyfert types is not sustainable.
This work was supported by grant IN109896 from DGAPA-UNAM and Italian Ministry for University Research (MURST) under grant Cofin98-02-32
## 1 Introduction
The field of non-linear rheology is roughly fifty years old and far from mature. Non-linear fluids often display elastic effects, and have effective viscosities which depend on stress or strain rate. Polymeric liquids typically shear-thin, although branched polymers thicken dramatically in extensional flow; colloidal suspensions of platelike particles (clays) typically shear-thicken; and solutions of surfactant (soap) molecules can shear-thicken *or* shear-thin.
Non-linear rheology and polymer dynamics are immense fields, and in this short (subjective) review I will focus on a few subfields. I will discuss recent advances in modelling the dynamics of flexible and semiflexible polymer melts, including linear and complex topologies, and then review progress in our knowledge of the surfactant *wormlike micelle* system, to which concepts from polymer dynamics have been successfully applied. Unlike conventional polymers, the micellar microstructure can change qualitatively in flow conditions; these *transitions* have many features in common with equilibrium phase transitions, and have excited great interest. A good collection of results from a wide range of complex fluids may be found in \[2\].
## 2 Polymer Melts and Solutions
The most accepted molecular model for the dynamics of flexible entangled polymers has been the Doi-Edwards (DE) theory, based on de Gennes’ reptation concept, in which polymers are envisaged to occupy “tubes” that model entanglement constraints. This has done a reasonable job in predicting linear rheology, with a few notable exceptions such as the failure to predict the scaling of the zero frequency viscosity as $`M^{3.4}`$, although recent work suggests that contour length fluctuations are a key to this puzzle.
Recent work in the linear regime includes applying the tube picture to molecules with complex branched topologies \[4\]. Star polymers afford stringent tests of tube model ideas, because diffusion is dominated by retraction of an arm within its tube, which is exponential in the retraction potential. Inclusion of higher order Rouse modes along with “dynamic dilution” of the tube has led to remarkably good agreement with experiment. In addition to star polymers, molecular models have recently been developed for progressively more complex topologies, paving the way for understanding the flow behavior of industrially important long-chain-branched polymers, which strain-harden in extensional flow while softening in shear flow \[4\].
Although the DE tube picture works well in the linear regime, it has several defects at high strain rates, particularly in steady shear. For example: experiments show a slightly increasing plateau shear stress and an increasing normal stress for strain rates $`\gamma `$ above an inverse reptation time $`\tau _r^{-1}`$, while theory predicts a *decreasing* stress $`\sigma \propto \gamma ^{-1}`$ and a constant normal stress; and the DE theory predicts a high strain rate viscosity which decreases with molecular weight, while experiments merge onto a molecular-weight-independent curve. The defect in DE theory is that, as the tube representing entanglement constraints rotates into the flow, the entrapped polymer feels a reduced stress and, at strain rates above the inverse tube relaxation time $`\tau _r^{-1}`$, remains oriented and presents a decreasing stress with increasing strain rate. Although corrections due to tube stretching have accounted for some problems in startup flows, this only applies near $`\gamma \approx \tau _r^{-1}`$ and still predicts a pronounced stress maximum. Another mechanism is needed to relax the chain and hence increase the stress by providing more misaligned material for the flow to “grip”. The key is believed to lie in “convected constraint release”, whereby the entanglement (tube) mesh convects away at high strain rates, leaving a relaxed coil. Early applications of this idea \[7, 8\] have cleared up several problems with the DE theory, although the theory still predicts a slightly decreasing stress with strain rate, so it should be regarded as provisional.
While moderately successful molecular theories of flexible polymer and rigid rod dynamics have existed since the 70’s, the study of semiflexible polymers (in which $`d<L_p<L`$, where $`d`$ is the polymer diameter, $`L_p`$ the persistence length, and $`L`$ the length) is quite young, mainly due to the severe mathematical difficulties in treating the bend degrees of freedom and length constraint. However, with increasing attention being paid to biological polymers such as actin, a deeper understanding of the dynamics of semiflexible polymer solutions is emerging. Direct imaging of tagged fluorescent polymers is possible \[10\], and several techniques have been developed for measuring elastic moduli, including direct (torsional oscillator) and indirect (from various optical techniques \[12, 13\]). Experiments indicate the vestiges of a plateau modulus, less pronounced than that of flexible polymer solutions. For polymers with $`L_p<L_e`$, the distance between entanglements or confinement constraints (or deflection length), one expects the behavior of flexible solutions. However, for $`L_p>L_e`$ one expects qualitatively different effects due to the perturbation of bending modes by tube constraints. While Odijk and Semenov have studied the dynamics and statistics of individual filaments, only recently have molecular theories for the stress response of entangled solutions emerged. Unlike flexible polymers, semiflexible chains have a bending energy which maintains $`L_p`$, and one can distinguish between longitudinal and transverse conformational changes. Two pictures have emerged for the origin of elastic stress in concentrated solutions: Isambert and Maggs argued that semiflexible chains can slide along their tubes longitudinally, and relaxation only occurs when transverse motions allow escape from the tubes. MacKintosh *et al.* argued that, if longitudinal motion is suppressed, then the modulus is due to the applied tension and the relaxation of bending modes (which are present in the quiescent state due to thermal fluctuations). In the case of solutions the former mechanism is expected to hold at times longer than that on which chain tension can relax. These pictures have been made more quantitative by Morse \[19\], who has developed a molecular theory at the level of the Doi-Edwards theory and included the bending curvature explicitly in the expression for the microscopic stress tensor.
## 3 Flow instabilities in Wormlike Micelles
DE theory predicts a bulk flow instability in polymer melts which has not been seen; however, a suggestive instability known as the “spurt effect” has been seen in extrusion, in which the throughput increases dramatically above a critical pressure gradient, often accompanied by a spatial pattern in the extrudate . Current opinion is that this is a surface instability, although the picture is not settled . However, there is a polymeric system which displays a well-documented bulk instability and has been the subject of intense investigation in the past decade.
Certain aqueous surfactant solutions (*e.g.* cetylpyridinium chloride/sodium salicylate \[CPCl/NaSal\]; cetyltrimethylammonium bromide (CTAB)/NaSal) self-assemble into flexible cylindrical micelles with an annealed length distribution that can encompass polymeric dimensions (microns). These solutions comprise a surfactant (*e.g.* CPCl) and an ionizing salt (*e.g.* NaSal) which together determine micellar dimensions, flexibility, and interactions. Salt and concentration effects are quite delicate, with Coulomb interactions playing an important and poorly understood role. Micelle reaction kinetics introduce additional timescales beyond the Rouse and reptation times of conventional polymers: in the limit of fast breaking times the stress relaxation of entangled micelles often obeys a simple single exponential (“Maxwell fluid”), and properties can be calculated quite confidently, in good agreement with experiment. Some non-linear properties can be calculated in this limit, and a maximum in the shear stress (analogous to the stress maximum in DE theory) is predicted at an inverse relaxation time (the geometric mean of the reptation and breaking times), in quantitative agreement with experiment \[27\].
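The fast-breaking prediction is easy to sketch; the snippet below (illustrative timescales only) encodes the geometric-mean relaxation time and the resulting single-exponential decay.

```python
import numpy as np

def micelle_relaxation_time(tau_rep, tau_break):
    """In the fast-breaking limit (tau_break << tau_rep) the stress relaxation
    of entangled wormlike micelles is single-exponential, with an effective
    time given by the geometric mean of the reptation and breaking times."""
    return np.sqrt(tau_rep * tau_break)

def relaxation_modulus(t, G0, tau):
    """Maxwell-fluid relaxation modulus G(t) = G0 exp(-t/tau)."""
    return G0 * np.exp(-t / tau)

# Illustrative numbers only:
tau = micelle_relaxation_time(tau_rep=1.0, tau_break=0.01)  # seconds
t = np.linspace(0.0, 1.0, 5)
print(f"tau = {tau:.2f} s;  G(t)/G0 =", np.round(relaxation_modulus(t, 1.0, tau), 3))
```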
The non-linear rheology of micellar systems was first studied by Rehage and Hoffman \[27\], who discovered dramatic shear thinning (analogous to the DE instability) and shear-thickening, depending on the salt/surfactant/water composition. The shear-thinning systems were the first to be systematically studied (see Figure 1). Above a critical strain rate $`\gamma _p`$ an apparent phase separation into macroscopic coexisting regions occurs, at a reproducible and history-independent stress $`\sigma _p`$, in which the high strain rate material is well aligned (typically birefringent) and the low strain rate material remains relatively disordered. The underlying flow curve has the stress maximum $`\sigma _{max}`$ (predicted by Cates for semi-dilute systems), while the composite steady state flow curve has a plateau beginning at $`\sigma _p<\sigma _{max}`$. \[It is important to note that micellar systems have slow dynamics, and one can trap metastable states for $`\sigma >\sigma _p`$.\] This occurs in semi-dilute systems, of order a few percent surfactant, or in more concentrated systems (of order $`30\%`$) with a nearby equilibrium nematic transition. In the former case the dynamic instability is believed to be polymeric in nature, while the latter may be due to nematic effects (probably both effects are present). No theories exist for nematic transitions under shear in micelles, although recent work includes phase diagrams for model rigid-rod suspensions in shear flow \[32\], for which only a few results exist \[a shear-induced nematic transition has been reported in a liquid crystal polymer melt\].
Shear banding can be inferred from rheological measurements and directly observed optically. Quantitative measurements include the fraction of material and degree of alignment in the two phases, inferred from neutron scattering \[34\]; and the velocity profile, measured directly using magnetic resonance imaging \[35\]. Shear-banding can incorporate different concentrations in the two phases, which is expected when flow modifies intermicellar interactions (as near a nematic transition) rather than simply the micellar conformation (as might be expected in more dilute systems). A signature of this is a slope in the “plateau” stress with increasing mean strain rate \[36, 32\], indeed seen in concentrated solutions which often have an underlying nematic transition \[37, 38, 39, 34\].
Groups have begun investigating metastability. Berret *et al.* examined slow transients in 10-20% CPCl/NaSal solutions. After increasing the strain rate into the two-phase region the stress decayed slowly in time from the underlying constitutive curve onto the stress plateau, with behavior $`\sigma \sim \exp \{-[t/\tau (\gamma )]^\alpha \}`$ ($`\alpha =2`$), which they interpreted as one-dimensional nucleation and growth. Grand *et al.* \[42\] studied transients in more dilute ($`1\%`$) CPCl/NaSal solutions and found similar stress decays, with $`\alpha `$ taking values 2, 2.5 or 3, and $`\tau (\gamma )`$ diverging above or below (depending on composition) the strain rate $`\gamma _p`$ at the onset of banding. They also performed controlled stress experiments, and discovered a stress $`\sigma _{jump}>\sigma _p`$, below which the system remained on the low strain rate branch indefinitely and above which the system eventually jumped to the high strain rate branch. Their data suggested that some compositions behave “spinodal-like” and others “nucleation-like” (as Berret’s did), but it is too early to completely embrace the language of first-order transitions (given, *e.g.*, $`\sigma _{jump}`$, which has no equilibrium analog).
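A sketch of how such a transient is commonly fitted is given below; it uses synthetic data and a generic least-squares routine, standing in for the actual analyses of the groups cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

def stress_decay(t, sigma_p, delta_sigma, tau, alpha):
    """Sigmoidal stress transient onto the plateau:
    sigma(t) = sigma_p + delta_sigma * exp(-(t/tau)**alpha)."""
    return sigma_p + delta_sigma * np.exp(-((t / tau) ** alpha))

# Synthetic transient for illustration (alpha = 2, as in Berret et al.)
t = np.linspace(0.1, 300.0, 200)  # seconds
clean = stress_decay(t, sigma_p=10.0, delta_sigma=3.0, tau=60.0, alpha=2.0)
data = clean + np.random.default_rng(1).normal(0.0, 0.05, t.size)

popt, _ = curve_fit(stress_decay, t, data, p0=[9.0, 2.0, 50.0, 1.5])
print("sigma_p, delta_sigma, tau, alpha =", np.round(popt, 2))
```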
Fischer and Rehage showed how shear-thinning systems can be tuned, by changing surfactant and salt composition, from a shear banding material to a material with a stress plateau (the rheological signature of banding) but *without* banding \[43\]. The shear and normal stresses apparently follow the *Giesekus model*, which is one of the simplest non-linear constitutive equations (comprising a Maxwell model with the simplest stress-dependent relaxation time). A molecular understanding of this behavior is lacking.
With critical micelle concentrations of order a few parts per million, micelles entangle at astonishingly low dilutions. Amazingly, systems which shear-thin at concentrations of a few percent can undergo a shear-*thickening* transition at fractions of $`<0.1\%`$ \[27, 44, 46\]. The nature of this shear-induced structure (SIS) is still undetermined; early suggestions for the mechanism included runaway micellar growth due to flow alignment, but the observed critical strain rate (of order inverse milliseconds) is much smaller than the inverse micellar reorientation time (of order inverse microseconds). It is probable that charge, which controls the dramatic increase in micellar length for concentrations near the overlap concentration, plays an important role. Like the shear-thinning systems, macroscopic “phase separation” occurs; Hu *et al.* \[46\] found a gel-like phase that forms upon increasing the applied stress, with the mean strain rate decreasing (see Figure 2) as more material turns into gel, and increasing again after complete conversion. The gel is observed to fracture in flow, and slightly shear-thins. Applying a strain rate above the critical strain rate induces immediate complete conversion. Attempts to visualize the SIS using cryo-TEM have given few clues to the microstructure. Note that coexistence in the shear-thinning micelles occurs under controlled strain rate conditions, while coexistence in this thickening system occurs for controlled stress; in both cases banding occurs in the radial direction, indicating banding at a common shear stress. These differences may be coincidences of the constitutive behaviors of the coexisting phases, or due to whether stress or strain rate ultimately determines the SIS.
Berret *et al.* \[51\] studied cetyltrimethylammonium tosylate (CTAT) micelles, and found shear-thickening phase separation under controlled *strain rate* conditions above a critical strain rate $`\gamma _c\propto \varphi ^{0.55}`$ (an increase in $`\gamma _c`$ with $`\varphi `$ was also found by Hu *et al.* \[46\]); this concentration dependence remains unexplained. The composite curve $`\sigma _p(\gamma )`$ has a positive slope, in contrast to the S curve of Ref. \[46\], possibly because the “thick” phase is not thick enough; alternatively, phase separation along the vorticity direction (at a common strain rate) would also be consistent with a positive slope $`d\sigma _p/d\gamma `$ for the composite flow curve \[32\]. The SIS is shear-thinning, displays an oriented structure in neutron scattering, and does not have the extremely long recovery times found in Ref. \[46\]. Qualitatively similar data were reported for a CTAB-NaTOS solution by Hartmann and Cressely. We finish our (incomplete) zoo of micellar thickening transitions by mentioning yet another study on CPCl/NaSal: revisiting early experiments by Rehage, Wheeler *et al.* found spatio-temporal instabilities consisting of dark and light oscillating vertical bands (in cylindrical Couette flow); these accompany formation and destruction of new microstructure (evident from turbidity), and are suggestive of a Taylor-Couette elastic instability.
Although wormlike micelles can have much simpler rheology than their polymer cousins, due to the frequent presence of a single relaxation time in the fast breaking limit, this simplicity is delicate, and strong flows can dramatically affect the micellar microstructure. We are far from a general molecular theory for these transitions, and do not even *know* the nature of the microstate in most cases. Continuum constitutive models may provide some insight, although when these models succeed we usually do not know why (*e.g.* the Giesekus model \[43\]). A popular constitutive model is the local Johnson-Segalman model, which is relatively simple and displays the non-monotonic flow curve characteristic of shear-thinning micelles. Numerical calculations resemble some startup experiments, and authors have attempted to determine the plateau stress $`\sigma _p`$ for the onset of banding in this model. It has become apparent that this, or any local, model does not give a unique selected banding stress, and an additional assumption is necessary \[58, 32\]. There is growing consensus that non-local (*i.e.* gradient) contributions to constitutive equations supply an unambiguous determination of the plateau stress \[32\].
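To make the non-monotonicity concrete, a toy version of the local Johnson-Segalman flow curve (with the slip parameter absorbed into the relaxation time, so this is a sketch rather than the full tensorial model) can be written down in a few lines; the curve is non-monotonic whenever the solvent viscosity is small enough.

```python
import numpy as np

def js_flow_curve(gdot, G=1.0, tau=1.0, eta_s=0.05):
    """Toy steady-state flow curve of the (local) Johnson-Segalman model:
    total shear stress = Newtonian solvent part + a non-monotonic
    polymer part G*(tau*gdot)/(1 + (tau*gdot)^2)."""
    x = tau * gdot
    return eta_s * gdot + G * x / (1.0 + x**2)

gdot = np.logspace(-2, 2, 9)
sigma = js_flow_curve(gdot)
# For eta_s < G*tau/8 the curve is non-monotonic, opening the possibility
# of banded states on the two stable branches.
print(np.round(sigma, 3))
```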
## 4 Outlook
Despite progress in the understanding of flexible and semiflexible polymer dynamics, there is no shortage of problems for the immediate future. We lack a credible, complete molecular understanding of *any* of these micellar flow-induced transitions: such a model must presumably include charge and concentration, as well as polymeric effects, to account for the (still unknown, in most cases!) structural changes under flow. By comparing and contrasting living and non-living polymers we may be able to extract important physics. Besides these systems, many other complex fluid systems undergo a variety of flow-induced phase transitions, and it seems reasonable to hope that this variety of transitions may be put on a common footing, akin to the thermodynamics of equilibrium phase transitions.
I am indebted to J-F Berret, ME Cates, SL Keller, CYD Lu, FC MacKintosh, TCB McLeish, DJ Pine, and G Porte for much discussion and advice.
# Heavy Flavour Physics at HERA
## Introduction
At HERA positrons of 27.5 GeV collide with 820 GeV protons, yielding a center of mass energy of 300 GeV. Heavy flavours are predominantly produced in pairs by photon gluon fusion. Charm quark production is expected to be a factor of 200 more abundant than bottom quark production at this energy. Heavy flavour processes give new opportunities for studying perturbative QCD at center of mass energies roughly a factor of 10 higher than in fixed target experiments.
There are several methods to tag heavy flavours: “open” charm production is tagged via reconstruction of $`D^{}`$ mesons (H1 and ZEUS) or via semi–leptonic decays to electrons (ZEUS); b quarks have been measured via semi–muonic decays by H1; finally, “hidden” charm is studied via reconstruction of the $`J/\psi `$ (see fig. 1). $`\psi (2s)`$ and $`\mathrm{{\rm Y}}`$ mesons have as yet been reconstructed only in diffractive processes \[1a\] at HERA and will not be reported here.
The integrated luminosity delivered by HERA has steadily increased over the years. This review will cover data from 1995 ($`6\text{pb}^{-1}`$), 1996 ($`10\text{pb}^{-1}`$) and 1997 ($`26\text{pb}^{-1}`$). Most results are preliminary.
The usual kinematic variables for deep inelastic scattering are used:
$`s=(k+P)^2;\;Q^2=-(k-k^{\prime })^2;\;x=\frac{Q^2}{2P\cdot q};\;y=\frac{q\cdot P}{k\cdot P};\;W_{\gamma p}^2=(q+P)^2=sy-Q^2`$
where $`k`$ and $`P`$ are the four-momenta of the incoming electron and proton, and $`q`$ is that of the exchanged photon.
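For concreteness, these invariants can be evaluated directly from the four-momenta; the sketch below uses the HERA beam energies, with an illustrative (made-up) scattered-lepton momentum.

```python
import numpy as np

def dis_kinematics(k, kp, P):
    """DIS invariants from four-momenta (E, px, py, pz) of the incoming
    lepton k, the scattered lepton kp and the incoming proton P."""
    k, kp, P = (np.asarray(v, dtype=float) for v in (k, kp, P))
    def dot(a, b):  # Minkowski product, metric (+,-,-,-)
        return a[0] * b[0] - np.dot(a[1:], b[1:])
    q = k - kp                       # four-momentum of the exchanged photon
    Q2 = -dot(q, q)
    x = Q2 / (2.0 * dot(P, q))
    y = dot(q, P) / dot(k, P)
    s = dot(k + P, k + P)
    W2 = dot(q + P, q + P)
    return s, Q2, x, y, W2

# HERA beams (masses neglected): 27.5 GeV lepton on an 820 GeV proton,
# with an illustrative scattered-lepton momentum.
k = [27.5, 0.0, 0.0, -27.5]
P = [820.0, 0.0, 0.0, 820.0]
kp = [25.0, 3.0, 0.0, -24.82]
s, Q2, x, y, W2 = dis_kinematics(k, kp, P)
print(f"sqrt(s) = {s**0.5:.0f} GeV, Q2 = {Q2:.1f} GeV2, x = {x:.4f}, y = {y:.2f}")
```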
## Determination of $`𝑭_\mathrm{𝟐}^𝒄`$
The inclusive cross section for production of charm in deep inelastic scattering (DIS) can be written as
$$\frac{d^2\sigma ^{ep\to ec\overline{c}X}}{dx\,dQ^2}=\frac{2\pi \alpha ^2}{xQ^4}\left(1+(1-y)^2\right)F_2^c(x,Q^2)$$
where the contribution due to $`F_L`$ has been neglected since it is expected to be small.
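Extracting $`F_2^c`$ then amounts to inverting this relation bin by bin; the sketch below does just that, leaving out unit conversions and radiative corrections, and the input numbers are purely illustrative.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.0

def f2c_from_xsec(d2sigma_dxdQ2, x, Q2, y):
    """Invert the double-differential charm cross section of the equation
    above to obtain F2^c(x, Q2), neglecting the F_L contribution.
    The cross section is taken in natural units (GeV^-4); unit conversions
    are left out of this sketch."""
    yplus = 1.0 + (1.0 - y) ** 2
    prefactor = 2.0 * np.pi * ALPHA_EM**2 * yplus / (x * Q2**2)
    return d2sigma_dxdQ2 / prefactor

# Example with illustrative numbers only:
print(f"F2c = {f2c_from_xsec(1.0e-3, x=5e-4, Q2=12.0, y=0.3):.3g}")
```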
Charm is tagged through reconstruction of $`D^{*+}\to D^0\pi ^+`$ with the subsequent decay $`D^0\to K^{-}\pi ^+`$, and also the charge conjugate decay. ZEUS has presented a new analysis of semileptonic charm decays $`c\to e^++X`$. The electron was identified using the electromagnetic calorimeter and the specific energy loss $`dE/dx`$ in the drift chamber. Details of the analyses from H1 and ZEUS can be found in \[1b\].
For $`D^{}`$ production a comparison to the RAPGAP Monte Carlo simulation is shown in fig. 3. Reasonable agreement is found. After unfolding detector effects, cross sections are obtained in a restricted kinematical region; examples from H1 data are shown in fig. 3. The data are compared to a NLO calculation by Harris and Smith using the Peterson fragmentation function. The agreement is good, and the extrapolation to the full kinematic region is done with this calculation.
The resulting $`Q^2`$ and $`x`$ dependence of $`F_2^c`$ is shown in fig. 4. The data span $`Q^2`$ values from 1.8 to 130 GeV<sup>2</sup> and $`5\times 10^{-5}\le x\le 0.02`$. The agreement of the different data sets is reasonable within errors. Also shown is the theoretical NLO calculation using the GRV94-HO parton density functions, which reproduces the data well. A strong rise of $`F_2^c`$ towards low $`x`$ is observed at fixed $`Q^2`$, and in $`Q^2`$ strong scaling violations are seen at fixed $`x`$. $`F_2^c`$ contributes between 10% (low $`Q^2`$) and 30% (high $`Q^2`$) to the inclusive $`F_2`$ at $`x\approx 5\times 10^{-4}`$.
## Photoproduction of $`𝑫^{\mathbf{}}`$
When the exchanged photon is almost real, contributions due to its hadronic nature have to be taken into account (“resolved” processes). In NLO QCD calculations an unambiguous separation of the direct process (fig. 7a) and the resolved processes (b and c) is no longer possible; only the sum of the two is well defined. There are two approaches to calculating the photoproduction cross sections in next to leading order:
In the “massive” approach (Frixione et al.) only the light quarks u, d and s and gluons are active partons in the photon (and proton); charm is only generated in the hard subprocess (see also fig. 7b). This approach is valid for $`m_c\gg \mathrm{\Lambda }_{QCD}`$. In the “massless” approach (Kniehl; Cacciari) charm is also an active flavour. This approach is valid at $`p_t\gg m_c`$.
The high statistics data from ZEUS are shown in fig. 7. They are found to lie above both the massive and the massless calculations. The comparison of H1 data \[1b\] with the massive calculations, shown in fig. 7, is satisfactory.
ZEUS has presented an analysis of $`D^{}`$ events which contain two jets. In these events the observed momentum fraction $`x_\gamma ^{obs}`$ can be calculated, which describes the fraction of the photon energy contributing to the production of the two jets. A significant tail at low $`x_\gamma ^{obs}`$ is found in the data. In the generator HERWIG this tail can be described by charm excitation in the photon, while considering only light flavours leads to discrepancies.
## Gluon density from $`𝑫^{\mathbf{}}`$ events
H1 extracted the proton’s gluon density function in DIS ($`2<Q^2<100`$ GeV<sup>2</sup>, $`0.05<y<0.7`$) and in photoproduction ($`0.02<y<0.32`$; $`0.29<y<0.62`$; $`Q^2\approx 0`$) \[1b\].
The observed momentum fraction $`x_g^{obs}`$ of the gluon is reconstructed from the kinematics of the final state, and a differential cross section $`d\sigma /dx_g^{obs}`$ is determined, which for the DIS data is shown in fig. 8. The correlation of $`x_g^{obs}`$ with the true $`x_g`$ as given by the NLO QCD calculations of Harris and Smith – also shown in fig. 8 – is used in an iterative unfolding procedure to obtain $`d\sigma /dx_g`$. The gluon density is then obtained by reweighting the calculation with the measured cross section. The result is shown in fig. 8 as a momentum distribution $`x_gg(x_g)`$. The range $`10^{-3}<x_g<0.02`$ is covered. The data from photoproduction and DIS agree well within the large errors. They also agree with the result from an analysis of scaling violations in the inclusive measurement of $`F_2`$.
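The unfolding-by-reweighting idea can be illustrated with a generic iterative scheme; the code below is only a sketch of the principle, with a made-up response matrix, and is not the actual H1 procedure.

```python
import numpy as np

def unfold_iterative(measured, response, n_iter=4):
    """Minimal iterative (Richardson-Lucy style) unfolding sketch:
    response[i, j] is the probability that an event with true x_g in bin j
    is observed in x_g_obs bin i (columns normalised to one). This only
    illustrates the reweighting idea, not the actual H1 implementation."""
    n_true = response.shape[1]
    truth = np.full(n_true, measured.sum() / n_true)  # flat starting guess
    for _ in range(n_iter):
        folded = response @ truth
        # Reweight each true bin by how well its folded image matches data
        weights = response.T @ (measured / np.where(folded > 0, folded, 1.0))
        truth = truth * weights
    return truth

# Tiny illustrative example with 30% bin-to-bin migrations
R = np.array([[0.7, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
print(np.round(unfold_iterative(np.array([100.0, 80.0, 40.0]), R), 1))
```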
## $`𝒃\overline{𝒃}`$ Production
Due to the higher mass of the $`b`$ quark, the total cross section for $`b\overline{b}`$ production is expected to be 200 times smaller than that for $`c\overline{c}`$ production. The theoretical uncertainties in calculating the next to leading order predictions are, however, smaller. H1 determined the cross section for the first time in the HERA energy range using semi–muonic $`b`$ decays \[1c\].
A photoproduction event sample was selected containing two jets of transverse energy $`E_T>`$ 6 GeV and a muon of transverse momentum (relative to the beam direction) $`p_T^\mu >`$ 2 GeV in the central detector region $`35^{\circ }<\theta ^\mu <130^{\circ }`$.
The thrust axis<sup>2</sup><sup>2</sup>2The thrust axis is the axis which maximizes $`T=\text{max}(\frac{\mathrm{\Sigma }|p_i^L|}{\mathrm{\Sigma }|p_i|})`$, where the sum runs over all particles belonging to the jet and $`p_i^L`$ is the component of the particle momentum parallel to the thrust axis. was determined for each jet in order to approximate the $`b`$ flight direction. The transverse momentum of the muon $`p_{t,rel}^\mu `$ with respect to the jet is used as a discriminating variable: muons from $`b`$ decays show a $`p_{t,rel}^\mu `$ spectrum extending to higher values than those from $`c`$ decays (see fig. 9 for an illustration of the method).
The background comes from the production of the light quarks $`u`$, $`d`$ and $`s`$, which is roughly a factor of 2000 larger than $`b`$ production. Punch-through and decay in flight lead to false muon signatures. This contribution is determined from data using an independent dataset, together with the muon fake probability and the hadron composition from a well-tuned and checked simulation program. The resulting $`p_{t,rel}^\mu `$ spectrum is shown in fig. 9, indicating the background contribution (23.5%), which is absolutely determined. The fractions of $`b`$ and $`c`$ quarks are obtained from a fit to the data distribution, yielding (51.4 $`\pm `$ 4.4)% and $`(23.5\pm 4.3)`$%, respectively.
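Schematically, such a fit can be written as a binned template fit with the background fraction held fixed; the templates and the $`\chi ^2`$ minimisation below are illustrative stand-ins, not the actual H1 implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_fractions(data, templ_b, templ_c, templ_bkg, f_bkg=0.235):
    """Binned template fit of the p_t,rel spectrum: the background fraction
    is fixed (determined absolutely from data), the b fraction is the only
    free parameter, and charm takes up the remainder. Templates are
    normalised shapes; a simple chi^2 stands in for the actual method."""
    def chi2(params):
        f_b = params[0]
        f_c = 1.0 - f_bkg - f_b
        model = data.sum() * (f_b * templ_b + f_c * templ_c + f_bkg * templ_bkg)
        err2 = np.where(data > 0, data, 1.0)  # Poisson-like errors
        return np.sum((data - model) ** 2 / err2)
    res = minimize(chi2, x0=[0.4], bounds=[(0.0, 1.0 - f_bkg)])
    return res.x[0], 1.0 - f_bkg - res.x[0]

# Illustrative, roughly shaped templates over 5 p_t,rel bins:
b = np.array([0.10, 0.20, 0.25, 0.25, 0.20])    # harder spectrum
c = np.array([0.40, 0.30, 0.15, 0.10, 0.05])    # softer spectrum
bkg = np.array([0.45, 0.30, 0.15, 0.07, 0.03])
data = 1000 * (0.514 * b + 0.251 * c + 0.235 * bkg)
print(np.round(fit_fractions(data, b, c, bkg), 3))  # recovers f_b ~ 0.514
```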
The cross section in the visible kinematic range of $`Q^2<1`$ GeV<sup>2</sup>; $`p_T^\mu >2`$ GeV; $`95<W_{\gamma p}<270`$ GeV; $`35^{\circ }<\theta ^\mu <130^{\circ }`$ is determined as
$$\sigma (ep\to eb\overline{b}+X)^{vis}=0.93\pm 0.08\,_{-0.12}^{+0.21}\text{nb},$$
where the first error is statistical and the second systematic. Contributions to the systematic errors are the branching ratio $`bX\mu \nu `$, the energy scale of calorimeters and detector efficiencies.
The corresponding direct LO cross section from the AROMA simulation is $`0.19\text{nb}`$, roughly a factor of 5 lower. The fraction of $`c`$ quarks determined from this analysis leads to the same cross section for $`ep\to ec\overline{c}X`$ as previously determined from the analysis of $`D^{}`$ production.
## Inelastic $`𝑱\mathbf{/}𝝍`$ production
New data on charmonium ($`J/\psi ,\psi ^{}`$) and $`\mathrm{{\rm Y}}`$ production have been presented by H1 and ZEUS \[1a\]. Here we will concentrate on “inelastic” $`J/\psi `$ production, as opposed to the diffractive processes which dominate the cross section at low $`Q^2`$. Inelastic $`J/\psi `$ production could, at lower $`W_{\gamma p}`$ (fixed target regime), be well described by the Colour Singlet Model (CSM). For HERA, CSM cross section calculations are available in NLO (Krämer).
As is well known, the CSM fails to describe charmonium production in $`p\overline{p}`$ collisions at high $`p_T`$ \[fail\]. Colour octet contributions have been proposed for an adequate description. The NRQCD factorisation approach (NRQCD = Non Relativistic QCD) describes any process $`A+B\to J/\psi +X`$ as a sum over colour singlet and colour octet contributions.
Whereas the transformation of a colour singlet $`{}^{3}S_{1}`$ state into a $`J/\psi `$ can be calculated using the measured leptonic decay width, the transition of a colour octet state to the $`J/\psi `$ is non–perturbative and at present not calculable. Therefore predictions for the cross section at HERA use the non-perturbative transition matrix elements extracted from the CDF data.
The ZEUS collaboration has updated their photoproduction data \[1d\]. The results for the $`\gamma p`$ cross section as a function of $`W_{\gamma p}`$ and of $`z`$ are shown in fig. 10. The data agree well with the next to leading order pQCD calculation in the colour singlet model \[Kraemer\]. The variable $`z`$ is defined as $`z=\frac{P_\psi \cdot P}{P\cdot q}\approx \frac{E_\psi }{E_\gamma }`$, where the latter approximation holds in the proton rest frame. In fig. 10b, in addition to the CSM in NLO, calculations using the NRQCD/factorisation approach \[com; Cano; Martin\] are shown. The upper curve was calculated in LO using the transition matrix elements extracted from CDF data in LO and shows a strong rise towards high $`z`$ values. The lower curves also take into account higher orders approximately, as explained in refs. \[com; Cano; Martin\]. Doing so leads to modifications in the non-perturbative matrix elements and/or in the cross sections themselves. The net effect is a decrease of the predicted rise at high $`z`$.
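For clarity, the stated approximation can be made explicit. In the proton rest frame $`P=(m_p,\mathbf{0})`$, so $`P_\psi \cdot P=m_pE_\psi `$ and $`P\cdot q=m_pE_\gamma `$, and hence

$$z=\frac{P_\psi \cdot P}{P\cdot q}=\frac{E_\psi }{E_\gamma },$$

i.e. $`z`$ is the fraction of the photon energy carried by the $`J/\psi `$ in that frame.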
H1 has for the first time determined the cross sections for inelastic $`J/\psi `$ production at $`Q^2>2\text{ GeV}^2`$ \[1d\]. The results are shown in fig. 11. Two data sets are shown, a completely inclusive one (open points) and one where the diffractive contributions have been removed by a cut on the energy in the forward region of the detector, as suggested by Fleming and Mehen \[fleming\], whose LO calculations are shown for comparison. The data are seen to lie far above the CS contributions. The magnitude of the data is reproduced better when colour octet contributions are taken into account. The shape of the latter leaves, however, much room for improvement, in particular in the rapidity $`y^{*}`$ in the $`\gamma ^{*}p`$ center of mass system. Note that the NRQCD calculations are performed at the parton level; no smearing due to the transition into the $`J/\psi `$ is taken into account.
## Summary
Due to increased statistics detailed analyses of heavy flavour production in $`ep`$ collisions are performed in a variety of channels and kinematic regions. $`b\overline{b}`$ production was observed for the first time in photoproduction via semi–muonic decay of the b–quark. The cross section was found to be considerably larger than the leading order predictions for the direct process.
In the range $`2\text{ GeV}^2<Q^2<130\text{ GeV}^2`$ cross sections and the charm contribution $`F_2^c`$ to $`F_2`$ are determined and found to agree with next to leading order predictions. In photoproduction the validity of different approaches to calculate next to leading order corrections is being studied in various kinematic regions.
Since photon gluon fusion is the dominant process a direct determination of the gluon density in the proton was carried out in DIS and in the photoproduction regime. The result agrees with the indirect determinations from scaling violations.
Inelastic $`J/\psi `$ production is studied in photoproduction and DIS and is well described in photoproduction by the colour singlet model alone in next to leading order. In DIS the data have been compared to LO colour singlet and colour octet predictions (at parton level). In the latter rough agreement in absolute normalisation is found, while the colour singlet model reproduces the shape of the data slightly better.
### Acknowledgement
I wish to thank the organisers for a very pleasant and fruitful meeting and my colleagues at ZEUS and H1 for supplying their data and for discussions.
# An Unsettled Issue in the Theory of the Half-Filled Landau Level
## Abstract
The purpose of this paper is to identify an unsettled issue in the theory of the half-filled Landau level, and state our point of view.
Whether the half-filled Landau level can be described in terms of a “Fermi liquid” of composite fermions has been controversial. (Hereafter we shall use the phrase “Fermi liquid” in a loose sense. It does not imply, e.g., a finite quasiparticle weight, but simply means that the long wavelength/low energy current-current correlation functions resemble those of a liquid of fermions in zero magnetic field.)
This controversy is somewhat side-tracked by the recent development of an alternative description. In this new description $`\nu =1/2`$ is pictured as a liquid of fermionic dipoles with each dipole being made up of a composite fermion and a correlation hole. (This way of describing $`\nu =1/2`$ originates from an insightful paper by Read.) Recent evidence suggests that this description has a similar infrared difficulty to the fermion Chern-Simons theory.
The following are the main points of this paper:
1. The fermion Chern-Simons theory is not equivalent to the composite Fermi liquid theory of Halperin, Lee and Read. The former is a general formulation, the latter is a bold statement about the dynamics.
2. It can be shown that the neutral fermion (dipole) action in Ref. is the same as that for the fermion Chern-Simons theory in the lowest Landau level.
3. The heart of the issue is whether, within the fermion Chern-Simons theory, one can describe the half-filled Landau level as composite fermions moving in zero magnetic field.
4. If the electron Hall conductivity is $`e^2/2h`$ and its longitudinal resistivity is non-zero, then a) the composite fermion Hall conductivity ($`\sigma _{xy}^{CF}`$) is $`-\frac{e^2}{2h}`$, and b) the neutral fermion Hall conductivity ($`\sigma _{xy}^N`$) is zero. There is strong evidence that real systems (which all theories intend to describe) do show $`\sigma _{xy}=e^2/2h`$ and $`\rho _{xx}>0`$. This is consistent with the presence of particle-hole symmetry.
5. $`\sigma _{xy}^{CF}=-\frac{e^2}{2h}`$ implies that the (polarization) charge current carried by neutral fermions does not have off-diagonal correlation at $`𝐪=0`$.
6. Despite their large (negative) Hall conductance, the composite fermions do have some aspect of a Fermi liquid - their transverse current-current correlation resembles that of electrons in zero magnetic field. This mixed behavior is due to the fact that the composite fermion motion is the superposition of two different types of dynamics: the guiding-center-like intra-dipole dynamics, and the zero-field-like inter-dipole dynamics.
I. The composite Fermi liquid theory
An amazing fact about $`\nu =1/2`$ is that aside from a large Hall conductance the behavior of electrons near $`\nu =1/2`$ resembles that near zero field. This statement applies to magneto-transport data, as well as other Fermi-surface resonance experiments.
Shortly after Willett et al’s discovery of an anomaly in the acoustic wave propagation, a very novel idea was put forward by Halperin, Lee and Read (HLR). For the reasons described below, we shall refer to the HLR work as the “composite Fermi liquid theory” (CFLT). The CFLT rests on the fermion Chern-Simons description. In this description one views each electron as a composite fermion carrying two quanta of fictitious magnetic flux (see Fig.1). Unlike the electrons, the composite fermions see two different magnetic fields: the applied field $`B`$, and the fictitious field $`b`$. While $`B`$ is space-time independent, $`b`$ is solenoid-like and time dependent.
The virtue of the fermion Chern-Simons description is that it suggests a novel mean-field theory. In this mean-field theory one lets the averaged $`b`$ ($`\overline{b}=2\varphi _0\overline{\rho }`$) cancel $`B`$. (Here $`\overline{\rho }`$ is the average electron/composite fermion density.) After the cancellation the composite fermions see zero magnetic field and hence form a Fermi liquid. This mean-field theory is the basis of Ref..
In reality $`b`$ is space-time dependent, hence cannot cancel $`B`$ exactly. Attempts to go beyond mean-field theory have not led to a conclusive result. On this account HLR made a bold conjecture. They assert that the cancellation between $`b`$ and $`B`$ is not spoiled by the fluctuations beyond mean-field theory. Moreover they assert that the sole effect of the fluctuations is to renormalize the Fermi liquid parameters of the composite fermions.
A consequence of HLR’s assertion is that the composite fermion Hall conductance vanishes:
$$\sigma _{xy}^{CF}(\omega =0,𝐪=0)=0.$$
(1)
Eq. (1) lies at the heart of the issue we shall discuss.
At this point it is useful to contrast the mean-field theory for $`\nu =1/2`$ with that for incompressible filling factors. The difference lies in the fact that for incompressible filling factors the mean-field theory predicts integer quantum Hall states, while for $`\nu =1/2`$ it predicts a Fermi liquid. Since the former is incompressible (hence does not have low energy $`b`$ fluctuations), the statement that $`b`$ cancels part of $`B`$ is asymptotically exact. The same can not be said about $`\nu =1/2`$, because the mean-field composite fermion state is compressible.
II. The composite fermion Hall conductance
Now let’s come to the main issue - the validity of Eq. (1). First let’s recall the following exact relation between the electron and composite fermion resistivity tensors ($`\rho _{\alpha \beta }`$ and $`\rho _{\alpha \beta }^{CF}`$):
$$\rho _{\alpha \beta }=\rho _{\alpha \beta }^{CF}+ϵ_{\alpha \beta }\frac{2h}{e^2}.$$
(2)
In the above $`\rho _{\alpha \beta }^{CF}`$ is defined so that $`\sigma _{\alpha \beta }^{CF}\equiv [(\rho ^{CF})^{-1}]_{\alpha \beta }`$ is the conductivity deduced from the statistical-gauge-propagator-irreducible current-current correlation function of composite fermions. As usual, in the presence of long-range interaction, the irreducible current-current correlation describes the particle response to the total (i.e. external+internal) field.
The physics of Eq. (2) is the fact that the Hall voltage seen by the composite fermions differs from that seen by the electrons by an amount equal to $`2\frac{h}{e^2}\times I`$. This difference comes from the fact that in the composite fermion representation (Fig.2) there is a flux current $`I_\varphi =2\frac{hc}{e}\frac{I}{e}`$ in addition to the charge current $`I`$. This flux current generates an extra transverse voltage equal to $`\frac{1}{c}I_\varphi =2\frac{h}{e^2}I`$.
As a result the longitudinal ($`V_L,V_L^{CF}`$) and Hall ($`V_H,V_H^{CF}`$) voltages seen by the electron and the composite fermion are related by
$`V_L=V_L^{CF}`$ (3)
$`V_H=V_H^{CF}+2{\displaystyle \frac{h}{e^2}}I.`$ (4)
After dividing both sides of Eq. (4) by $`I`$ one obtains Eq. (2).
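For clarity, writing Eqs. (3) and (4) in terms of the resistivities $`\rho _{xx}=V_L/I`$ and $`\rho _{xy}=V_H/I`$ (and their composite fermion counterparts) gives

$$\rho _{xx}=\rho _{xx}^{CF},\qquad \rho _{xy}=\rho _{xy}^{CF}+\frac{2h}{e^2},$$

which is Eq. (2) written out in components.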
Next, we discuss another argument that is important for setting up the issue concerning Eq. (1) - the particle-hole symmetry. In the absence of disorder, particle-hole symmetry emerges at $`\nu =1/2`$ after the projection onto the lowest Landau level. The presence of such symmetry implies that
$$\sigma _{xy}=\frac{e^2}{2h}.$$
(5)
A caricature of the proof goes as follows. Upon the particle-hole conjugation the electron conductivity tensor transforms as
$`\sigma _{xx}(\nu )=\sigma _{xx}^h(1-\nu )`$ (6)
$`\sigma _{xy}(\nu )={\displaystyle \frac{e^2}{h}}-\sigma _{xy}^h(1-\nu ).`$ (7)
In the above $`\sigma _{\alpha \beta }^h`$ is the conductivity tensor of the holes. The physical meaning of Eq.(7) is clear - after particle-hole conjugation the new vacuum is a full Landau level and the total current is the sum of the Hall current carried by the full Landau level and the current carried by the holes. At $`\nu =1/2`$ we have $`\nu =1-\nu =1/2`$ and particle-hole symmetry. As a result $`\sigma _{\alpha \beta }^h(1-\nu )=\sigma _{\alpha \beta }(\nu )`$, and hence Eq. (5) holds.
In the presence of disorder particle-hole symmetry can at most hold on average. If the probability distribution of the disorder potential satisfies $`P[V(𝐱)]=P[-V(𝐱)]`$ we say that the disorder is particle-hole symmetric. It is important to note that while Eq. (2) holds for general disorder, Eq. (5) is only true when the disorder is particle-hole symmetric. In either case, when there is disorder we need to interpret $`\sigma _{\alpha \beta }`$ and $`\sigma _{\alpha \beta }^{CF}`$ as the disorder-averaged conductivities.
Putting Eqs.(2) and (5) together we obtain
$$\frac{e^2}{2h}=\frac{\rho _{xy}}{\rho _{xx}^2+\rho _{xy}^2}=\frac{\rho _{xy}^{CF}+2\frac{h}{e^2}}{(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF}+2\frac{h}{e^2})^2}.$$
(8)
After some trivial arithmetic we obtain
$$\sigma _{xy}^{CF}=\frac{\rho _{xy}^{CF}}{(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2}=-\frac{e^2}{2h},$$
(9)
so long as
$$(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2>0.$$
(10)
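For completeness, the “trivial arithmetic” is the following. Cross-multiplying the second equality in Eq. (8) gives

$$(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2+\frac{4h}{e^2}\rho _{xy}^{CF}+\frac{4h^2}{e^4}=\frac{2h}{e^2}\rho _{xy}^{CF}+\frac{4h^2}{e^4},$$

i.e. $`(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2=-\frac{2h}{e^2}\rho _{xy}^{CF}`$, and dividing $`\rho _{xy}^{CF}`$ by this combination yields Eq. (9).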
The problem lies in the fact that Eq. (1) and Eq. (9) do not agree.
Thus it seems that the notion of a Fermi liquid of composite fermions is incompatible with particle-hole symmetry. At this juncture the readers might wonder why we should put so much weight on particle-hole symmetry. After all there is no reason that such symmetry must exist in real systems. To answer this question we quote a recent experimental result of Wong, Jiang and Schaff. In Ref. Wong et al measured $`\rho _{xx}`$ and $`\rho _{xy}`$ near $`\nu =1/2`$ in gated $`GaAs/AlGaAs`$ heterostructures. What they found is that for a range of carrier densities both $`\rho _{xx}`$ and $`\rho _{xy}`$ are temperature dependent at $`\nu =1/2`$. However their temperature dependence is such that $`\sigma _{xy}=\rho _{xy}/(\rho _{xx}^2+\rho _{xy}^2)`$ is temperature independent and equal to $`e^2/2h`$. Even leaving aside the issue of what is causing $`\sigma _{xy}=e^2/2h`$, the very fact that $`\sigma _{xy}=e^2/2h`$ and $`\rho _{xx}\ne 0`$ is sufficient to give $`\sigma _{xy}^{CF}=-e^2/2h`$. Recently Jiang has performed the particle-hole transformation (Eq. (7)) on the data reported in Ref.. The result is entirely consistent with the presence of particle-hole symmetry.
Since the derivation of $`\sigma _{xy}^{CF}=-e^2/2h`$ requires $`(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2>0`$, and in the absence of disorder $`(\rho _{xx}^{CF})^2+(\rho _{xy}^{CF})^2`$ could vanish, it has been suggested that perhaps $`\sigma _{xy}^{CF}=0`$ in the zero disorder limit. A difficulty with this scenario is that once one accepts $`\sigma _{xy}^{CF}=0`$ for no disorder, one is forced to conclude that the composite fermion Hall conductance jumps from $`0`$ to $`-\frac{e^2}{2h}`$ upon the introduction of an infinitesimal amount of particle-hole symmetric disorder. Even more bothersome is the fact that such a jump must persist at non-zero temperatures.
In any case our goal is to understand real systems, for which Wong et al’s result suggests $`\sigma _{xy}^{CF}=-\frac{e^2}{2h}`$. In the following we shall argue that $`\sigma _{xy}^{CF}=-e^2/2h`$ implies that the (polarization) charge current carried by neutral fermions does not have off-diagonal correlation. Since the last statement is required by the fact that neutral fermions are globally neutral, we believe that Eq. (9) holds even in the absence of disorder.
III. The neutral fermion (dipole) theory
The physical idea behind the neutral fermion theory is as follows. Let us first consider a different problem where we have a group of distinguishable particles with identical mass at $`\nu =1/2`$. If these particles interact via a sufficiently short-range repulsive force, the ground state wavefunction will be
$$\mathrm{\Psi }_{1/2}(z_1,\dots ,z_N)=\prod _{(ij)}(z_i-z_j)^2\mathrm{exp}\{-\sum _k|z_k|^2/4\}.$$
(11)
Since Eq. (11) is the ground state in the absence of symmetry constraint, it is the lowest-energy solution.
Of course, the symmetric wavefunction in Eq.(11) is not allowed for electrons. Consequently the electrons’ Fermi statistics frustrates their energy minimization. To quantify this frustration we view each electron as a boson carrying one quantum of fictitious magnetic flux (Fig.2).
Were it not for the fictitious flux, the bosons would have condensed into the Laughlin liquid described by Eq.(11). Each fictitious flux quantum induces a quasiparticle of charge $`1/2`$ and statistics $`\pi /2`$. Due to global charge neutrality a quasihole of opposite charge is nucleated elsewhere. It turns out that the quasiholes also have statistics $`\pi /2`$.
Thus $`\nu =1/2`$ can be thought of as a liquid of $`\pm 1/2`$ charged anyons floating on top of a Bose quantum Hall liquid. In Refs. it is shown that the action for this defect-liquid is given by
$$S=S_{int}[𝐏]+i\int d^2x𝑑t[2\pi 𝐏\times \dot{𝐏}-4\pi 𝐏\times 𝐣].$$
(12)
In the above $`𝐏`$ is the polarization density caused by the defects, and
$$𝐣(𝐱,t)=\sum _i\dot{𝐫}_i(t)\delta (𝐱-𝐫_i(t)),$$
(13)
where $`\{𝐫_i(t)\}`$ are the coordinates of the electrons. For reasons to be discussed shortly we identify $`𝐣`$ as the composite fermion current. The partition function is given by
$$Z=\int D[\{𝐫_j\}]\int ^{\prime }D[𝐏]e^{-S},$$
(14)
where $`\int D[\{𝐫_j\}]`$ denotes the fermion Feynman path integral over $`\{𝐫_j(t)\}`$, and $`\int ^{\prime }D[𝐏]`$ denotes the functional integral over $`𝐏(𝐱,t)`$ under the constraint
$$-\nabla \cdot 𝐏(𝐱,t)=\sum _i\delta (𝐱-𝐫_i(t))-\overline{\rho }.$$
(15)
In this theory the total electric current is the sum of the Hall current carried by the Bose quantum Hall liquid, and the polarization current of the defects. As a result
$`\sigma _{\alpha \beta }={\displaystyle \frac{e^2}{2h}}ϵ_{\alpha \beta }+\sigma _{\alpha \beta }^N.`$ (16)
In Eq. (16) $`\sigma _{\alpha \beta }^N`$ is the conductivity due to the polarization current
$$j_\mu ^N=(-\nabla \cdot 𝐏,\dot{𝐏}).$$
(17)
If we assume that the Coulomb interaction binds oppositely charged defects together, we arrive at a system of dipoles. The polarization due to such dipoles is
$$𝐏(𝐱,t)=\sum _i𝐩_i(t)\delta (𝐱-𝐫_i(t)),$$
(18)
where $`𝐩_i`$ is the moment of the $`i`$th dipole. By substituting Eq. (18) into Eq. (12) and Eq. (15), we can derive the action and the constraint reported in Ref. and Ref.. It is important to note that unlike ordinary dipoles, these dipoles obey Fermi statistics.
Particle-hole symmetry also imposes a constraint on the dynamics of dipoles. Combining Eq. (5) and Eq. (16) we obtain
$$\sigma _{xy}^N=0.$$
(19)
Eq. (19) is easy to understand – a liquid made up of (fermionic) dipoles cannot see the external magnetic field, hence has no Hall conductance at $`𝐪=0`$.
IV. The relation between the neutral fermion and the fermion Chern-Simons theories
There has been considerable discussion about whether the neutral fermion and the fermion Chern-Simons theories are actually the same. There is good reason for such suspicion. For example the electron density-density correlation function obtained from the neutral fermion theory is similar to that predicted by the fermion Chern-Simons theory. In addition, it has been shown recently that the neutral fermion theory suffers from similar infrared problems as the fermion Chern-Simons theory. In the following we shall prove that the neutral fermion theory is in fact the fermion Chern-Simons theory formulated in the lowest Landau level.
To see this we define
$$𝐚\equiv -4\pi \widehat{z}\times 𝐏.$$
(20)
Substituting Eq. (20) into Eq. (12) and Eq. (15) we obtain
$$S=S_{int}[\frac{1}{4\pi }\nabla \times 𝐚]+i\int d^2x𝑑t[\frac{1}{8\pi }𝐚\times \dot{𝐚}+𝐚\cdot 𝐣],$$
(21)
and
$$\frac{1}{4\pi }\nabla \times 𝐚(𝐱,t)=\sum _i\delta (𝐱-𝐫_i(t))-\overline{\rho }.$$
(22)
We recognize that Eqs.(21,22) are the first-quantized formulation of the fermion Chern-Simons theory in the temporal gauge ($`a_0=0`$). We note that there is no kinetic energy term $`\int 𝑑t\sum _i\frac{m}{2}|\dot{𝐫}_i|^2`$ because Eq. (21) is an action in the lowest Landau level.
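As a consistency check of the signs (our own verification, not part of the original argument), note that Eq. (20) is inverted by $`𝐏=\frac{1}{4\pi }\widehat{z}\times 𝐚`$, and a short computation in components gives

$$2\pi 𝐏\times \dot{𝐏}=\frac{1}{8\pi }𝐚\times \dot{𝐚},\qquad -4\pi 𝐏\times 𝐣=𝐚\cdot 𝐣,\qquad \nabla \cdot 𝐏=-\frac{1}{4\pi }\nabla \times 𝐚,$$

so that Eq. (12) and the constraint Eq. (15) indeed go over into Eq. (21) and Eq. (22).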
V. The composite fermion Hall conductance in the neutral fermion theory
The equivalence of the neutral fermion and the fermion Chern-Simons theories does not say anything about the validity of Eq. (1). To address that issue let us concentrate on Eq. (21) and Eq. (22). In the following we shall demonstrate that due to Eq. (22) there is no mixing between the longitudinal and transverse components of $`𝐚`$.
Let us write
$$𝐚=𝐚_T+\nabla \chi .$$
(23)
By direct substitution it is simple to prove that in the absence of a boundary
$$S=S[𝐚\to 𝐚_T]-i\int d^2x𝑑t\chi [\nabla \cdot 𝐣+\frac{1}{4\pi }\nabla \times \dot{𝐚}_T].$$
(24)
The last term vanishes by Eq. (22) and the composite fermion current continuity equation.
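Explicitly (a one-line check): taking the time derivative of the constraint Eq. (22) and using the composite fermion continuity equation $`\dot{j}_0+\nabla \cdot 𝐣=0`$ gives

$$\frac{1}{4\pi }\nabla \times \dot{𝐚}_T=\frac{1}{4\pi }\nabla \times \dot{𝐚}=\dot{j}_0=-\nabla \cdot 𝐣,$$

where the first equality holds because $`\nabla \times \nabla \dot{\chi }=0`$; the bracket in Eq. (24) therefore vanishes identically.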
Such a no-mixing condition should be respected if we integrate out the composite fermions first. To make the connection to Ref. more transparent, let us restore the gauge freedom in Eq. (21). The new action reads
$$S=S_{int}[\frac{1}{4\pi }\nabla \times 𝐚]+i\int d^2x𝑑t[\frac{1}{8\pi }ϵ_{\mu \nu \lambda }a_\mu \partial _\nu a_\lambda +a_\mu j_\mu ].$$
(25)
(We note that after restoring the gauge freedom, the constraint (Eq. (22)) is incorporated into the action.)
Now we are ready to integrate out the composite fermions. The resulting action will depend on $`a_\mu `$ alone. After gauge fixing, i.e. $`a_0=0`$, the result had better not contain a longitudinal-transverse mixing term. In order for that to be true a counter term must be generated by the composite fermions to cancel the second term in Eq. (25). This requirement translates into $`\sigma _{xy}^{CF}=-\frac{1}{4\pi }`$, or equivalently, $`\sigma _{xy}^{CF}=-\frac{e^2}{2h}`$. The absence of the longitudinal-transverse mixing guarantees that the polarization current of the neutral fermions fluctuates in a time-reversal-invariant fashion.
VI. The mixed behavior of composite fermions
So far we have been focusing on the composite fermion Hall conductance. It turns out that in order for the neutral fermion theory to describe experiments, the composite-fermion transverse current-current correlation $`\mathrm{\Pi }_{tt}(𝐪,\omega )`$ must be similar to that of a Fermi liquid. The puzzle is why despite their large Hall conductance, the composite fermions have some aspect (i.e. $`\mathrm{\Pi }_{tt}(𝐪,\omega )`$) of a Fermi liquid. In the following we offer an explanation.
Let us define
$$j_\mu ^{CM}\equiv j_\mu -j_{L,\mu }^N.$$
(26)
In the above $`j_\mu `$ is the composite fermion current, and
$$j_{L,\mu }^N\equiv (-\nabla \cdot 𝐏_L,\dot{𝐏}_L),$$
(27)
is the longitudinal component of the polarization current. According to Eq. (15)
$$j_0^{CM}=\overline{\rho }.$$
(28)
As a result $`𝐣^{CM}`$ is a purely transverse current. In this way the composite fermion current is decomposed into a longitudinal and a transverse component:
$$j_\mu =j_\mu ^{CM}+j_{L,\mu }^N.$$
(29)
The composite fermion current-current correlation shows mixed behavior because, while $`<j_\mu ^{CM}j_\nu ^{CM}>`$ and $`<j_{L,\mu }^Nj_{L,\nu }^N>`$ have time-reversal invariant correlation at $`𝐪=0`$, $`<j_\mu ^{CM}j_{L,\nu }^N>`$ does not.
In summary, the conduction at $`\nu =1/2`$ proceeds through two currents: a pure Hall current, and a polarization current. The polarization current is produced by a liquid of fermionic dipoles. Composite fermions are constituents of these dipoles. At long wavelength the dynamics of the neutral fermions is zero-field-like, while that of the composite fermions is not. Consequently we believe that the composite fermions do not form a Fermi liquid.
Acknowledgement: DHL is supported in part by DOE via the Los Alamos National Laboratory and the Lawrence Berkeley National Laboratory. He thanks Steve Kivelson for valuable discussions.
## 1 Introduction
We discuss observations of the submm-selected galaxy SMM J02399$`-`$0136, and what has been learnt about it during the year following its discovery. SMM J02399$`-`$0136 was the first distant galaxy detected in submm surveys with SCUBA. Its association with a massive, gas-rich starburst/AGN at $`z=2.8`$ has led to suggestions that the prevalence of AGN in the early Universe may be high (Ivison et al. 1998) and that these AGN may account for a significant fraction of the far-IR background.
## 2 Discovery
The discovery of SMM J02399$`-`$0136 (Ivison et al. 1998) came as a surprise to all concerned, with the possible exception of Andrew Blain who had been a long-time proponent of submm imaging of the distant Universe using massive cluster lenses (Blain 1997). The discovery images were obtained with SCUBA during uncharacteristically good weather in the summer of 1997 by Smail, Ivison & Blain (1997). As often seems to happen, SMM J02399$`-`$0136 was seen in the first map, behind the $`z=0.37`$ massive cluster, Abell 370. The area covered during that first night has since increased by two orders of magnitude, with the completion of the SCUBA Lens Survey (Smail et al. 1998; Blain et al. 1999; Smail et al., these proceedings) and the commencement of several large, conventional blank-field surveys (e.g. Eales et al. 1999), but SMM J02399$`-`$0136 remains the brightest submm-selected galaxy, by virtue of its amplification by the foreground cluster (a factor $`2.4\pm 0.3`$). This amplification aids us in the follow-up of SMM J02399$`-`$0136 at all wavelengths, and when combined with the lavish archival datasets available for this field, has allowed a detailed view of the nature of this source to be achieved relatively quickly.
## 3 New and archival data
Fortuitously, a deep (10 $`\mu `$Jy beam<sup>-1</sup>) 1.4-GHz map of A 370 obtained some years ago by Frazer Owen and K. S. Dwarakanath revealed a weak, extended radio counterpart within the error box of the submm position of SMM J02399$`-`$0136 (Fig. 1). A pair of optical counterparts, resolved in archival CFHT images (Kneib et al. 1994), are within $`1^{\prime \prime }`$ of the radio source. L1, the compact component, is marginally resolved with an intrinsic FWHM of 0.3<sup>′′</sup>. L2 has a more complex morphology than L1, showing a ridge of emission to the north and a diffuse region extending south and west towards L1. L1 and L2 are separated by $`3^{\prime \prime }`$ ($`9`$ kpc after correcting for tangential amplification).
The swift provision of near- and mid-IR images from UKIRT and ISO by Tim Naylor and Leo Metcalfe showed that at least one of the two counterparts possessed a spectral energy distribution (SED) whose broad features were consistent with those expected for a submm-bright galaxy.
Since the optical counterparts were relatively bright ($`I_{\mathrm{total}}=20.5`$, $`22.7`$), we added a slit to a mask being used for multi-object spectroscopy of A 370 with the CFHT and obtained high-quality optical spectra (Fig. 2). These clearly show that both counterparts are at the same redshift, $`z=2.803\pm 0.003`$. Both have faint continua with narrow lines in emission: L1 shows strong, narrow Ly $`\alpha `$, N v and C iv, hints of weak Si ii, Si iv, He ii and possibly a broad C iii\] line; L2 shows only weak, narrow Ly $`\alpha `$ and Si ii/O i, with the Ly $`\alpha `$ emission extending over at least 8<sup>′′</sup>.
The 1.4-GHz radio emission covers $`7.9^{\prime \prime }\times 2.2^{\prime \prime }`$, with a position angle (PA) of 71°, a maximum surface brightness of 221 $`\mu `$Jy beam<sup>-1</sup> and an integrated flux density of $`526\pm 50`$ $`\mu `$Jy. This is below the detection thresholds of most radio surveys, even after lens amplification. The rest-frame far-IR-to-5-GHz flux ratio is similar to that seen in nearby starbursts (Condon et al. 1991), which could be taken as evidence that a starburst is the dominant contributor to the far-IR luminosity; however, a recent 5-GHz map shows a PA closer to that of the optical/IR morphology, which suggests that the 1.4-GHz emission may be from the AGN.
Near-IR spectra of \[O iii\] and Balmer $`\alpha `$ were also obtained. Observations of both lines were extremely challenging, as the atmosphere at 1.9 and 2.5 $`\mu `$m is a better door than a window. Only modest detections were obtained, suggesting narrow cores to the lines; however, there is little hope of detecting broad components (if present).
The overall SED of SMM J02399$`-`$0136 is shown in Fig. 3. L1 and L2 both have smooth, steep UV–optical–mid-IR continua. Between 120 and 350 $`\mu `$m (rest frame), the SED has the characteristic spectral index $`\alpha \simeq +3`$ of optically-thin emission from dust grains. The far-IR luminosity (20–1000 $`\mu `$m) is $`L_{\mathrm{FIR}}\sim 10^{13}`$ L<sub>☉</sub> (after correcting for lensing). The dust mass is $`5\times 10^8`$ M<sub>☉</sub> for $`T_\mathrm{d}=50`$ K. If the dust is heated primarily by OB-type stars then $`L_{\mathrm{FIR}}`$ corresponds to an SFR (in $`>10`$ M<sub>☉</sub> stars) of $`2000`$ M<sub>☉</sub> yr<sup>-1</sup> ($`6000`$ M<sub>☉</sub> yr<sup>-1</sup> if the IMF extends down to much lower masses). Similarly high estimates of the SFR are given by the H$`\alpha `$ luminosity (2000–20000 M<sub>☉</sub> yr<sup>-1</sup>) and by the radio luminosity, which predicts a supernova rate of 80–400 yr<sup>-1</sup>. By any standards this would be a spectacular starburst.
The most recent observational success was a search for molecular gas in the system (Frayer et al. 1998). The search began at the optical redshift, using the Owens Valley Millimeter Array. After 38 hr of integration time, a weak signal with coherent phases was found at the reddest velocities. A further 16 hr was spent at a lower frequency to obtain the complete line profile shown in Fig. 4. The CO emission is unresolved ($`<5^{\prime \prime }`$) and positionally coincident with L1. It is redshifted by 400 km s<sup>-1</sup> with respect to the optical lines, with $`z_{\mathrm{CO}}=2.808`$. The line is broad ($`710\pm 80`$ km s<sup>-1</sup>), with an apparent double-peaked profile.
The high molecular gas mass implied by the data ($`10^{11}`$ M<sub>☉</sub>) lends weight to arguments that a significant fraction of the immense far-IR luminosity is due to star formation. Such a mass is not unique for high-redshift systems but it is several times higher than in the most luminous low-redshift IRAS galaxies, implying that SMM J02399$`-`$0136 will evolve into a massive galaxy. The large gas mass, compared to the dynamical mass, suggests that the gas is a dynamically important component of this galaxy and points to its relative youth. On the other hand, the gas-to-dust ratio (400 for $`T_{\mathrm{dust}}=50`$ K) is similar to that found for other high-redshift CO sources, suggesting that like many other high-redshift massive galaxies, SMM J02399$`-`$0136 is already chemically evolved.
SMM J02399$`-`$0136 is one of two galaxies from the SCUBA Lens Survey to be detected in CO to date, the other being SMM J14011+0252 at $`z=2.6`$ (Ivison et al. 1999; Frayer et al. 1999). These are the first two members of the submm field population to be investigated in detail. Their optical emission-line characteristics are radically different, with one showing strong AGN signatures, the other an apparently pure starburst spectrum; however, both are found to be associated with gas-rich, massive galaxies, which supports the idea that a significant proportion of the submm galaxy population is made up of proto-ellipticals.
In summary, SMM J02399$`-`$0136 shows clear signs of the presence of an AGN, both in its optical emission-line properties and in its radio morphology. However, there are also indications of an on-going starburst: extended optical emission, narrow and strong H$`\alpha `$ emission, a large mass of dust and a dynamically significant gas reservoir. If asked the question: “Is SMM J02399$`-`$0136 an AGN or a starburst?”, we’d probably have to answer: “Both”. Critical tests of the relative luminosity of the AGN and the starburst include the identification in polarized light of hidden broad-line components to the rest-frame UV/optical emission lines, a search with AXAF for hard X-ray emission (which should escape an obscured active nucleus), and high-resolution 1.4-GHz images to look at the radio emission characteristics in more detail.
We must wait and see whether a significant fraction of submm-selected galaxies resemble SMM J02399$`-`$0136. A large AGN contribution to the far-IR background would certainly resolve potential problems concerning over-production of metals, though there are other solutions — modifying the IMF, for example (Blain et al. 1999). A decisive test of the contribution of AGN-powered emission to the extragalactic background awaits the detailed study of a representative sample of the submm-selected galaxies that dominate the submm background emission. The faintness of these sources in the optical/near-IR and millimetre wavebands compared to the sensitivities of current instrumentation means that the advantages of using lens amplification will probably remain clear for these important studies.
## Acknowledgements
We acknowledge support from PPARC and the Royal Society and thank Jacqueline Davidson, Tom Jones and Plaid Cymru for inspiration.
# A Simple Derivation of the Naked Singularity in Spherical Dust Collapse
Sukratu Barve (e-mail: sukkoo@relativity.tifr.res.in), T. P. Singh (e-mail: tpsingh@tifr.res.in)
Tata Institute of Fundamental Research,
Homi Bhabha Road, Mumbai 400 005, India.
Cenalo Vaz (e-mail: cvaz@haar.pha.jhu.edu; on leave of absence from the Universidade do Algarve, Faro, Portugal)
Department of Physics, The Johns Hopkins University,
Baltimore, MD 21218, USA
Louis Witten (e-mail: witten@physics.uc.edu)
Department of Physics, University of Cincinnati,
Cincinnati, OH 45221-0011, USA
Abstract
We describe a simple method of determining whether the singularity that forms in the spherically symmetric collapse of inhomogeneous dust is naked or covered. This derivation considerably simplifies the analysis given in the earlier literature, while giving the same results as have been obtained before.
Various authors have shown that the spherical gravitational collapse of inhomogeneous dust results in the formation of a curvature singularity which is naked for certain initial conditions, and covered for other initial conditions. Probably the first (numerical) results on this problem are due to Eardley and Smarr. This was followed by the analytical work of Christodoulou and Newman, who considered smooth initial data. Their results were generalised, among others, by Ori and Piran, and by Dwivedi, Jhingan, Joshi and Singh. As is only to be expected, in one way or another, these works all deal with the propagation of null geodesics in the spacetime of collapsing dust.
While successive works have succeeded in simplifying the earlier analysis, perhaps it can be said that the discussions continue to remain somewhat involved. In the present paper, we describe a short but straightforward method of showing whether the naked singularity in the dust model is covered or naked. We reproduce results obtained previously by other methods. We restrict attention to the marginally bound dust collapse - similar principles may be used to derive results for the non-marginally bound case.
In comoving coordinates $`(t,r,\theta ,\varphi )`$ the spacetime metric for spherical dust collapse is given by
$$ds^2=dt^2-R^{\prime 2}dr^2-R^2d\mathrm{\Omega }^2$$
(1)
where $`R(t,r)`$ is the area radius at time $`t`$ of the shell having the comoving coordinate $`r`$. A prime denotes partial derivative w.r.t. $`r`$. The energy-momentum tensor for dust has only one non-zero component $`T_0^0=ϵ(t,r)`$, which is the energy density. The Einstein equations for the collapsing cloud are
$$\frac{8\pi G}{c^4}ϵ(t,r)=\frac{F^{\prime }}{R^2R^{\prime }},\qquad \dot{R}^2=\frac{F(r)}{R}.$$
(2)
A dot denotes partial derivative w.r.t. time $`t`$. The function $`F(r)`$ results from the integration of the second order equations. Henceforth we shall set $`8\pi G/c^4=1`$.
The second of these equations can be easily solved to get
$$R^{3/2}(t,r)=r^{3/2}-\frac{3}{2}\sqrt{F}t$$
(3)
where we have used the freedom in the scaling of the comoving coordinate $`r`$ to set $`R(0,r)=r`$ at the starting epoch of collapse, $`t=0`$. It follows from the first equation in (2) that the function $`F(r)`$ gets fixed once the initial density distribution $`ϵ(0,r)=\rho (r)`$ is given, i.e.
$$F(r)=\int \rho (r)r^2𝑑r.$$
(4)
Hence $`F(r)`$ has the interpretation of being twice the mass to the interior of the shell labeled $`r`$. If the initial density $`\rho (r)`$ has a series expansion
$$\rho (r)=\rho _0+\rho _1r+\frac{1}{2!}\rho _2r^2+\frac{1}{3!}\rho _3r^3+\cdots $$
(5)
near the center $`r=0`$, the resulting series expansion for the mass function $`F(r)`$ is
$$F(r)=F_0r^3+F_1r^4+F_2r^5+F_3r^6+\cdots $$
(6)
where $`F_q=\rho _q/[q!\,(q+3)]`$, with $`q=0,1,2,3,\dots `$ We note that we could set $`\rho _1=0`$ without in any way affecting the conclusions of this paper. Further, the first non-vanishing derivative in the series expansion in (5) should be negative, as we will consider only density functions which decrease as one moves out from the center.
According to (3) the area radius of the shell $`r`$ shrinks to zero at the time $`t_c(r)`$ given by
$$t_c(r)=\frac{2r^{3/2}}{3\sqrt{F(r)}}.$$
(7)
At $`t=t_c(r)`$ the Kretschmann scalar
$$K=12\frac{F^{\prime 2}}{R^4R^{\prime 2}}-32\frac{FF^{\prime }}{R^5R^{\prime }}+48\frac{F^2}{R^6}$$
(8)
diverges at the shell labeled $`r`$ and hence this represents the formation of a curvature singularity at $`r`$. In particular, the central singularity, i.e. the one at $`r=0`$, forms at the time
$$t_0=\frac{2}{3\sqrt{F_0}}=\frac{2}{\sqrt{3\rho _0}}.$$
(9)
At $`t=t_0`$ the Kretschmann scalar diverges at $`r=0`$. Near $`r=0,`$ we can expand $`F(r)`$ and approximately write for the singularity curve
$$t_c(r)=t_0-\frac{F_n}{3F_0^{3/2}}r^n.$$
(10)
Here, $`F_n`$ is the first non-vanishing term beyond $`F_0`$ in the expansion (6). We note that $`t_c(r)>t_0`$, since $`F_n`$ is negative.
We wish to investigate whether the singularity at $`t=t_0,r=0`$ is naked, i.e. whether there are one or more outgoing null geodesics which terminate in the past at the central singularity. We restrict attention to radial null geodesics. Let us start by assuming that one or more such geodesics exist, and then check if this assumption is correct. Let us take the geodesic to have the form
$$t=t_0+ar^\alpha $$
(11)
to leading order, in the $`t`$–$`r`$ plane, where $`a>0`$, $`\alpha >0`$. In order for this geodesic to lie in the spacetime, we conclude by comparing with (10) that $`\alpha \ge n`$, and in addition, if $`\alpha =n`$, then $`a<-F_n/3F_0^{3/2}`$.
As is evident from the form (1) of the metric, an outgoing null geodesic must satisfy the equation
$$\frac{dt}{dr}=R^{\prime }.$$
(12)
In order to calculate $`R^{\prime }`$ near $`r=0`$ we first write the solution (3) with only the leading term $`F_n`$ retained in $`F(r)`$ in (6). This gives
$$R=r\left(1-\frac{3}{2}\sqrt{F_0}\left[1+\frac{F_n}{2F_0}r^n\right]t\right)^{2/3}.$$
(13)
Differentiating this w.r.t. $`r`$ gives
$$R^{\prime }=\left(1-\frac{3}{2}\sqrt{F_0}\left[1+\frac{F_n}{2F_0}r^n\right]t\right)^{-1/3}\left(1-\frac{3}{2}\sqrt{F_0}t-\frac{(2n+3)F_n}{4\sqrt{F_0}}r^nt\right).$$
(14)
Along the assumed geodesic, $`t`$ is given by (11). Substituting this in $`R^{\prime }`$ and equating the resulting $`R^{\prime }`$ to $`dt/dr=\alpha ar^{\alpha -1}`$ gives
$$\alpha ar^{\alpha -1}=\frac{\left(1-\frac{3}{2}\sqrt{F_0}\left[t_0+ar^\alpha \right]-\frac{(2n+3)F_n}{4\sqrt{F_0}}r^n\left[t_0+ar^\alpha \right]\right)}{\left(1-\frac{3}{2}\sqrt{F_0}\left[1+\frac{F_n}{2F_0}r^n\right]\left[t_0+ar^\alpha \right]\right)^{1/3}}.$$
(15)
This is the key equation. If it admits a self-consistent solution then the singularity will be naked (i.e. at least one outgoing null geodesic will terminate at the singularity), otherwise not. We simplify this equation by putting in the requirement mentioned earlier, that $`\alpha \ge n`$. Consider first $`\alpha >n`$. In this case we get, to leading order
$$\alpha ar^{\alpha -1}=\left(1+\frac{2n}{3}\right)\left(-\frac{F_n}{2F_0}\right)^{2/3}r^{2n/3}$$
(16)
which implies that $`\alpha =1+2n/3`$, and $`a=(-F_n/2F_0)^{2/3}`$. By substituting integral values for $`n`$ we find that only for $`n=1`$ and $`n=2`$ is the condition $`\alpha >n`$ satisfied. Hence the singularity is naked for $`n=1`$ and $`n=2`$, i.e. for the models $`\rho _1<0`$ and for $`\rho _1=0,\rho _2<0`$. There is at least one outgoing geodesic given by (11) which terminates in the central singularity in the past. If $`n>3`$ then the condition $`\alpha >n`$ cannot be satisfied and the singularity is not naked. This is the case $`\rho _1=\rho _2=\rho _3=0`$.
Consider next that $`\alpha =n`$. In this case we get from (15) that
$$nar^{n-1}=\frac{-\frac{3}{2}a\sqrt{F_0}-\frac{(2n+3)F_n}{6F_0}}{\left(-\frac{F_n}{2F_0}-\frac{3a}{2}\sqrt{F_0}\right)^{1/3}}r^{2n/3}$$
(17)
which implies that $`n=3`$ and gives an implicit expression for $`a`$ in terms of $`F_3`$ and $`F_0`$. This expression for $`a`$ can be simplified to get the following quartic for $`a`$:
$$12\sqrt{F_0}a^4-a^3\left(F_0^{3/2}-4F_3/F_0\right)-3F_3a^2-\frac{3F_3^2}{F_0^{3/2}}a-\left(\frac{F_3}{F_0}\right)^3=0.$$
(18)
By defining $`b=a/F_0`$ and $`\xi =F_3/F_0^{5/2}`$ this quartic can be written as
$$4b^3(3b+\xi )-(b+\xi )^3=0.$$
(19)
The singularity will be naked if this equation admits one or more positive roots for $`b`$ which satisfy the constraint $`b<-\xi /3`$. This last inequality is the same as the condition $`a<-F_n/3F_0^{3/2}`$ given below equation (11). We note that $`\xi `$ is negative. This quartic can be made amenable to further analysis by substituting $`Y=-2b/\xi `$, and then $`\eta =-1/6\xi `$, so as to get
$$Y^3(Y-2/3)-\eta (Y-2)^3=0.$$
(20)
As discussed in Ref., this quartic has two positive real roots provided $`\eta \ge \eta _1`$ or $`\eta \le \eta _2`$, where
$$\eta _1=\frac{26}{3}+5\sqrt{3},\qquad \eta _2=\frac{26}{3}-5\sqrt{3}.$$
(21)
We also require that $`Y<2/3`$. By examining the quartic (20) one can see that if $`\eta \ge \eta _1`$ then $`Y\ge 2`$; hence this range of $`\eta `$ is ruled out. Thus the singularity is naked provided $`\eta \le \eta _2`$, or equivalently $`\xi \le -25.9904`$.
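This critical value is easy to reproduce numerically. The following sketch (ours, not part of the original analysis) expands Eq. (20) into polynomial form and locates its real roots:

```python
import numpy as np

def roots_Y(eta):
    # V(Y) = Y^3 (Y - 2/3) - eta (Y - 2)^3, expanded in powers of Y
    coeffs = [1.0, -(2.0 / 3.0) - eta, 6.0 * eta, -12.0 * eta, 8.0 * eta]
    return sorted(y.real for y in np.roots(coeffs) if abs(y.imag) < 1e-9)

eta2 = 26.0 / 3.0 - 5.0 * np.sqrt(3.0)
print(-1.0 / (6.0 * eta2))    # critical xi: -25.9904...
print(roots_Y(0.5 * eta2))    # eta < eta2: two real roots with 0 < Y < 2/3 (naked)
print(roots_Y(2.0 * eta2))    # eta > eta2: no real root (covered)
```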
This completes the analysis to decide whether or not the central singularity is naked, and we get the same results as have been given earlier in the literature, albeit in a much simpler manner. Now we examine whether or not there is an entire family of radial null geodesics which terminate at the naked singularity. For this purpose we assume a solution for the geodesics correct to one order beyond the solution (11), i.e. we take
$$t=t_0+ar^\alpha +dr^{\alpha +\beta }.$$
(22)
where $`d`$ and $`\beta `$ are constants to be determined, and $`a`$ and $`\alpha `$ take the values calculated above. As before, we substitute this form of $`t(r)`$ in the expression (14) for $`R^{\prime }`$ to get
$$R^{\prime }=\frac{\left(1-\frac{3}{2}\sqrt{F_0}\left[t_0+ar^\alpha +dr^{\alpha +\beta }\right]-\frac{(2n+3)F_n}{4\sqrt{F_0}}r^n\left[t_0+ar^\alpha +dr^{\alpha +\beta }\right]\right)}{\left(1-\frac{3}{2}\sqrt{F_0}\left[1+\frac{F_n}{2F_0}r^n\right]\left[t_0+ar^\alpha +dr^{\alpha +\beta }\right]\right)^{1/3}}.$$
(23)
Next, we equate this $`R^{\prime }`$ to the $`dt/dr`$ calculated from (22). For the cases $`n=1,2`$ we get, after retaining terms up to second order
$$\alpha ar^{\alpha -1}+(\alpha +\beta )dr^{\alpha +\beta -1}=\left(1+\frac{2n}{3}\right)\left(-\frac{F_n}{2F_0}\right)^{2/3}r^{2n/3}+Dr^{\alpha -n/3}.$$
(24)
As before, at the leading order $`a`$ and $`\alpha `$ get fixed. At the next order, we get $`\beta =1-n/3`$ and $`d=D/(2+n/3)`$ where
$$D=\frac{3}{2}\sqrt{F_0}\left(-\frac{F_n}{2F_0}\right)^{1/3}\left[1+\frac{1}{3}\left(1+\frac{2n}{3}\right)\left(-\frac{F_n}{2F_0}\right)^{2/3}\right].$$
(25)
It thus follows, according to (22), that when $`n=1,2`$ there is to this order only one outgoing geodesic, having the values of $`d`$ and $`\beta `$ given above.
Consider next $`n=3`$. By repeating the above calculation of $`R^{\prime }`$ we get
$$R^{\prime }=\frac{3}{2}2^{1/3}F_0\frac{\xi +b}{\left(\xi +3b\right)^{1/3}}r^2+3\cdot 2^{1/3}bd\frac{1}{\left(\xi +3b\right)^{4/3}}r^{2+\beta }+O(r^5).$$
(26)
Here, $`O(r^5)`$ is a term of order $`r^5`$ which is independent of $`d`$. Further analysis depends on whether or not $`\beta `$ is less than $`3`$. Assume first that $`\beta <3`$. Then the $`O(r^5)`$ term can be ignored. By equating $`R^{}`$ to $`dt/dr=3ar^2+(3+\beta )dr^{2+\beta }`$, we get the earlier quartic (19) for $`b`$. At the next order, $`d`$ drops out and one gets an equation for $`\beta `$, i.e.
$$3+\beta =3\cdot 2^{1/3}b\frac{1}{\left(\xi +3b\right)^{4/3}}.$$
(27)
Since $`d`$ drops out, this means that $`d`$ is arbitrary, and there will be an entire family of outgoing null geodesics terminating at the singularity, provided $`\beta `$ is non-negative. It is essential that $`\beta `$ be non-negative, otherwise these geodesics will not lie in the spacetime, as is evident from a comparison with the singularity curve (10). As we saw above, the quartic (20) and hence (19) has two positive roots in the naked singular range. We now show that $`\beta `$ as defined in (27) is positive at one of the roots, and negative at the other root. Let us write the quartic (20) as $`V(Y)=0`$ where
$$V(Y)=Y^3\left(\frac{2}{3}-Y\right)-\eta (2-Y)^3.$$
(28)
It is then easily shown that
$$\left(\frac{dV}{dY}\right)_{Y=Y_0}=\beta Y_0^2\left(\frac{2}{3}-Y_0\right)$$
(29)
where $`Y_0`$ is a real, positive root of the quartic. Since the derivative $`dV/dY`$ must be positive at one of the roots and negative at the other, and since $`Y_0<2/3`$ it follows that $`\beta `$ is positive at one of the roots and negative at the other. Hence one of the roots admits only one outgoing geodesic (for which $`d`$=$`0`$) while the other root admits an entire family of outgoing geodesics. It is easily verified that the family emerges from the larger of the two roots, which lies closer to the singularity curve (10).
However, it also turns out, as is verified numerically, that the positive value of $`\beta `$ does not remain below $`3`$ for all $`\xi `$. It can be shown that $`\beta `$ can be written as
$$\beta =\frac{Y^2-8Y+4}{\left(2/3-Y\right)\left(2-Y\right)}.$$
(30)
For values of $`\xi `$ smaller than a certain critical value, $`\beta `$ becomes larger than $`3`$, so that then the $`O(r^5)`$ term in (26) dominates over the term proportional to $`r^{2+\beta }.`$ For such cases, we get from $`dt/dr=R^{}`$ that $`\beta =3`$, and $`d`$ also gets fixed at a particular value. In order to see the family of outgoing rays we will have to look at higher order terms in the various expansions.
Consider next the case of ingoing rays, given by $`dt/dr=-R^{\prime }`$, for which we take
$$t=t_0-er^3-gr^{3+\gamma }$$
(31)
(We consider only the case $`n=3`$.) The expression for $`R^{\prime }`$ is
$$R^{\prime }=\frac{3}{2}2^{1/3}F_0\frac{h-\xi }{\left(3h-\xi \right)^{1/3}}r^2-\frac{3hg}{4}2^{4/3}\frac{1}{\left(\xi -3h\right)^{4/3}}r^{2+\gamma }$$
(32)
where $`h=e/F_0`$. Equating this to $`dt/dr=-3er^2-(3+\gamma )gr^{2+\gamma }`$ gives, at order $`r^2`$, the following quartic for $`h`$
$$-h^3(12h-4\xi )+(h-\xi )^3=0$$
(33)
which admits a positive root $`h`$ for all $`\xi `$, as expected, so that there is always an ingoing ray. However at the next order, we get the relation
$$-(3+\gamma )=\frac{3h}{4}2^{4/3}\frac{1}{\left(\xi -3h\right)^{4/3}}$$
(34)
which cannot be satisfied, unless $`g=0`$, since the l.h.s. is negative and the r.h.s. positive. This shows that there is only one ingoing ray to all orders in the expansion.
ACKNOWLEDGMENTS
We acknowledge partial support of the Junta Nacional de Investigação Científica e Tecnológica (JNICT) Portugal, under contract number CERN/S/FAE/1172/97. C. V. and L. W. acknowledge the partial support of NATO, under contract number CRG 920096; L. W. acknowledges the partial support of the U. S. Department of Energy under contract number DOE-FG02-84ER40153 and C.V. acknowledges the partial support of the FCT under contract number FMRH/BSAB/54/98.
# III. THE PERMISSIBLE EQUILIBRIUM POLARISATION DISTRIBUTION IN A STORED PROTON BEAM
Updated version of a talk presented at the 15th ICFA Advanced Beam Dynamics Workshop: “Quantum Aspects of Beam Physics”, Monterey, California, U.S.A., January 1998. Also in DESY Report 98–096, September 1998.
## 1 A problem and its solution
Following the successful attainment of longitudinal $`e^\pm `$ polarisation in HERA (Article II) it is natural to consider whether it would be possible to complement the polarised $`e^\pm `$ with $`820`$ GeV polarised protons.
As pointed out in Article I, a stored polarised proton beam can only be obtained by injecting and then accelerating a prepolarised beam provided by a suitable source. However, I comment on another concept in the Appendix.
A major obstacle to reaching high energy with the polarisation intact is that the spins must negotiate groups of spin–orbit resonances every $`523`$ MeV (Article I) since the spin tune is approximately proportional to the energy.
However, this problem can be ameliorated by the inclusion of Siberian Snakes. These are magnet systems which rotate spins by $`180`$ degrees around an axis in the horizontal plane independently of the energy of the particle. By the installation of suitable combinations of snakes, the spin tune $`\nu _{spin}`$ can be fixed at one half and then by suitable choice of orbital tunes, resonances can be avoided at all energies, assuming that the dependence of spin tune on synchrobeta amplitude is weak (Article I).
Tracking simulations show that even with snakes, preservation of polarisation up to high energy is nontrivial. For example a $`1`$ milliradian orbit deflection at $`820`$ GeV causes a $`90`$ degree spin rotation (Article I, Eq. (4)). Thus one should check first whether the spin distribution permitted by the requirement of equilibrium at a chosen high energy would be acceptable. There would be no point in trying to accelerate if the answer were negative. Moreover, to arrive at an answer we have the ideal tool at hand, namely the invariant spin field introduced in Article I. The measure for acceptability is the deviation of $`\widehat{n}`$ from $`\widehat{n}_0`$ averaged across phase space. If the average deviation were, say, $`60`$ degrees, then even with $`|\stackrel{\to }{P}_{eq}(\stackrel{\to }{u};s)|=1`$ at each point in phase space, a polarimeter would only record about $`50\%`$ polarisation. Thus the optic and ring layout must be chosen so that the deviation is minimised. The invariant spin field can be calculated using the numerical technique ‘stroboscopic averaging’ of the computer code SPRINT; the new version of the SODOM algorithm gives equivalent results (see Article I).
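The quoted numbers are easy to verify (a back-of-the-envelope check, assuming the spin precesses relative to the orbit by $`a\gamma `$ times the orbit deflection angle):

```python
import math

a_p = 1.7928474            # proton anomalous magnetic moment, (g-2)/2
gamma = 820.0 / 0.93827    # Lorentz factor of an 820 GeV proton
print(math.degrees(a_p * gamma * 1.0e-3))  # spin rotation for a 1 mrad kick: ~90 deg
print(0.93827 / a_p)       # energy step for unit spin-tune change: ~0.523 GeV
```

The second number reproduces the $`523`$ MeV resonance spacing quoted earlier.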
Examples of the invariant spin field for a HERA proton optic with a suitable snake layout are shown in the figures. In this simulation the protons only execute integrable vertical betatron motion. Each figure shows the locus, on the surface of a sphere, of the tip of the $`\widehat{n}`$ vector ‘attached’ to its phase space ellipse at an interaction point on the ring where $`\widehat{n}_0`$ is vertical. The parameters are shown in the captions. An emittance of $`4\pi `$ mm mrad corresponds to ‘1-$`\sigma `$’.
The energy $`800`$ GeV lies well below a resonance structure that survives even in the presence of snakes, and $`802`$ GeV is just below this structure. For particles at 1-$`\sigma `$ the spin field is well aligned at $`800`$ GeV. At 4-$`\sigma `$ it has opened well beyond $`90`$ degrees at some phases. At $`802`$ GeV the 1-$`\sigma `$ locus deviates by more than $`30`$ degrees from $`\widehat{n}_0`$ at some orbital phases and at 4-$`\sigma `$ the field is almost isotropic! In all four cases the loci are closed, as required by the periodicity condition $`\widehat{n}(\stackrel{\to }{u};s)=\widehat{n}(\stackrel{\to }{u};s+C)`$ (Article I).
A distribution of spins aligned along an invariant spin field is the ideal starting point for long term tracking studies of spin stability at fixed energy since deviations from equilibrium are then easy to discern.
## Appendix
It has been suggested that by using Stern–Gerlach (SG) forces to drive coherent synchrobeta motion and thereby separate particle bunches into ‘spin–up’ and ‘spin–down’ parts, a proton beam could effectively be polarised. The scheme using transverse SG forces requires running close to spin–orbit resonance, but figure 2 illustrates that at high amplitude spin directions become isotropic so that the SG effect would average away. In any case the basic scheme might fail as a result of conservation laws. The longitudinal SG effect would be subject to mixing due to synchrotron oscillation unless some very special means were found to prevent it. Moreover, the longitudinal SG force is a total time derivative of a function of the fields and could integrate to zero.
# Relative ages of inner-halo globular clusters
## 1 Introduction
Galactic globular clusters (GGC) are the oldest components of the Galactic halo. The determination of their relative ages and of any age correlation with metallicities, abundance patterns, positions and kinematics allows to establish the formation timescale of the halo and gives information on the early efficiency of the enrichment processes in the proto–galactic material. The importance of these problems and the difficulty in answering to these questions is at the basis of the huge efforts dedicated to gather the relative ages of GGCs in the last 30 years or so (VandenBerg, Stetson, and Bolte 1996, Sarajedini, Chaboyer, Demarque 1997, SCD97, and references therein).
Any method for the age determination of GGCs is based on the position of the turnoff (TO) in the color–magnitude diagram (CMD) of their stellar population. We can measure either the absolute magnitude or the de–reddened color of the TO. In order to overcome the uncertainties intrinsic to any method to get GGCs distances and reddening, it is common to measure either the color or the magnitude (or both!) of the TO, relative to some other point in the CMD whose position does not depend on age.
Observationally, as pointed out by Sarajedini & Demarque (1990) and VandenBerg et al. (1990, VBS90), the most precise relative age indicator is based on the TO color relative to some fixed point on the red giant branch (RGB). Unfortunately, the theoretical RGB temperature is very sensitive to the adopted mixing length parameter, whose dependence on the metallicity is not yet established. As a consequence, investigations of relative ages based on this method (“horizontal method”) might be difficult to interpret, and need a careful calibration of the relative TO color as a function of the relative age (Buonanno et al. 1998, B98). The other age indicator is based on the TO luminosity relative to the horizontal branch (HB). Though this is usually considered a more robust relative age indicator, it is affected both by the uncertainty on the dependence of the HB luminosity on metallicity and by the empirical difficulties in getting both the TO magnitude and the HB magnitude for clusters with only blue HBs.
Despite the intrinsic difficulties in gathering relative ages, it is nevertheless astonishing, for those not working in the field, to read the totally contradictory results coming from different groups.
We are still debating whether GGCs are almost coeval (Stetson et al. 1996) or whether the GGCs have continued to form for 5 Gyr (SCD97) or so (i.e. for 30-40% of the Galactic halo lifetime).
Indeed, there is a major limitation to the large scale GGC relative age investigations: the photometric inhomogeneity and the inhomogeneity in the analysis of the databases used in the various studies. And even worse, these heterogeneous collections of data do not allow a reliable treatment of the empirical errors, which sometimes must be guessed, with questionable results (Chaboyer et al. 1996).
Prompted by this major drawback, two years ago our group began the collection of homogeneous photometric material for a large sample of GGCs, in order to obtain accurate relative ages by using both the horizontal and the vertical method in a self-consistent way. The strategy was decided after a preliminary analysis of published CMDs both in the $`B,V`$ and $`V,I`$ bands (Saviane, Rosenberg, and Piotto 1997; hereafter SRP97). SRP97 showed that the $`V-I`$ color differences are less sensitive to metallicity than the $`B-V`$ ones (while retaining the same age sensitivity). SRP97 also suggested that a high-precision, large-scale investigation in the $`V`$ and $`I`$ bands would allow a relative age determination through the horizontal method without the usual limitation of dividing the clusters into different metallicity groups (VBS90).
Here we present the first exciting results of this investigation.
## 2 Data base
In the present investigation only two telescopes (one for the northern and one for the southern sky GGCs) have been used.
Thirty-nine clusters have been observed with the ESO/Dutch 0.9m telescope at La Silla, and 16 at the RGO/JKT 1m telescope in La Palma. A total of 30 clusters had CMDs useful for the relative age determinations.
In this observing campaign (the first step of our investigation) all the clusters with $`(m-M)_V<16`$ have been observed with 1-m class telescopes. We have also observed 16 clusters within $`(m-M)_V<18`$ with 2-m class telescopes, and observations at 4-m class telescopes are planned for the farthest clusters.
The data have been calibrated with the same set of standards. The observations, reduction, and photometry will be described in forthcoming papers. Here it suffices to say that the zero-point uncertainties of our calibrations are $`<0.03`$ mag in each band. Three clusters were observed with both the southern and the northern telescopes, thus providing a consistency check of the calibrations: no systematic differences were found, at the level of accuracy of the zero points.
We are also collecting an independent and even more homogeneous database in the $`B`$ and $`V`$ bands. The data come from two HST programs (GO6095 and GO7470). Within GO7470 we should observe the cores of 46 clusters with the WFPC2; together with the already available archive data, by the end of GO7470 all the GGCs with $`(m-M)_B<18`$ should have been observed with HST. Though the programme's main objectives are different, most of the data are suitable for this project. This database allows an independent check of the results from the ground-based data.
In order to have well-defined fiducial lines for each CMD, a selection was applied to the photometric catalog of each cluster by imposing a threshold on the photometric errors, and only the less crowded regions were used. The following points were then measured on the CMD, both for the HST and ground-based samples: the magnitude and color of the TO; the magnitude of the MS point 0.05 mag redder than the TO; the color of the RGB at $`\mathrm{\Delta }m`$ magnitudes above one of the two previous points (where $`\mathrm{\Delta }m`$ was 1.5, 2.0, 2.5, 3.0, 3.5); and the magnitude level of the HB. These values were used to calculate a set of both vertical and horizontal parameters. We will name these parameters, generically, $`\delta x_{\mathrm{@}y}`$ or $`\delta x_{\mathrm{@}y}^{0.05}`$. For example, $`\delta (V-I)_{\mathrm{@}1.5}^{0.05}`$ is the difference between the $`(V-I)`$ color of the RGB and that of the TO. In this case, the RGB point is measured 1.5 mag above the MS point 0.05 mag redder than the TO. In the following, we will use $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$ as the vertical parameter and $`\delta (V-I)_{\mathrm{@}2.5}`$ as the horizontal parameter; the latter is straightforward to compute from the measured CMD points, as shown in the sketch below. However, the results presented below are independent of this choice, as will be shown in Rosenberg, Saviane, and Piotto (1999, RSP99).
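As an illustration of how the horizontal parameter is obtained from the measured CMD points, a minimal sketch (the function name and the array-based representation of the RGB fiducial line are ours, not from the original photometric pipeline):

```python
import numpy as np

def delta_vi(to_color, to_mag, rgb_mag, rgb_color, dm=2.5):
    """Horizontal age parameter delta(V-I)_{@dm}: the (V-I) color
    difference between the RGB fiducial line, read dm magnitudes
    above (i.e. brighter than) the turnoff, and the turnoff color."""
    v_ref = to_mag - dm                       # "above" means brighter, smaller V
    order = np.argsort(rgb_mag)               # np.interp needs increasing abscissa
    rgb_vi = np.interp(v_ref, rgb_mag[order], rgb_color[order])
    return rgb_vi - to_color
```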
## 3 Methodology
Basically, we followed the B98 strategy. In view of the uncertainties associated with the interpretation of the horizontal parameter (cf. Section 1), we first identified a set of coeval clusters by means of the vertical method. These coeval GGCs allowed us to identify an empirical “isochrone” in the $`\delta `$ color vs. \[Fe/H\] plane (a straight line in B98). These isochrones were then compared with the theoretical predictions. Finally, the color differences from the mean line were converted into an age.
The choice of the metallicity scale will be discussed in detail in RSP99. In view of its homogeneity, we used the Rutledge et al. (1997) compilation on the Carretta & Gratton (1997) metallicity scale.
### 3.1 Coeval clusters
In Fig. 1, the parameter $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$ is plotted vs. metallicity, both for the ground-based and the HST samples of GGCs. In the same figure, the theoretical isochrones are represented as dashed lines. The theoretical $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$ was calculated from the TO of the VandenBerg et al. (1998, V98) and Straniero et al. (1997, SCL97) models, assuming $`V_{\mathrm{HB}}=0.20[\mathrm{Fe}/\mathrm{H}]+0.98`$ (Chaboyer et al. 1996). These models were chosen since they are the most recent ones offering both $`B-V`$ and $`V-I`$ colors.
With our choice for the $`V_{\mathrm{HB}}`$ vs. \[Fe/H\] relation, and within the observational errors, the theoretical isochrones and the observed values show similar trends with metallicity. It must be clearly stated that this result depends on the choice of the theoretical HB luminosity, though the conclusions would be the same if the slope of the $`V_{\mathrm{HB}}`$ vs. \[Fe/H\] relation were changed by not more than $`\pm 15\%`$ (see also below). Note that the zero point of the relation for $`V_{\mathrm{HB}}`$ does not affect the relative ages. The isochrones can be used to tentatively select a sample of coeval clusters. We will use these clusters to test the isochrones in the $`\delta (V-I)_{\mathrm{@}2.5}`$ vs. \[Fe/H\] plane (B98). We somewhat arbitrarily defined as coeval (from here on, fiducial coeval GGCs) those clusters whose vertical parameter was within $`\pm 1\sigma `$ of the isochrone which best fits the data distribution in the $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$ vs. \[Fe/H\] plane; a sketch of this selection is given below. These objects are marked by heavy symbols in Fig. 1. Interestingly enough, the same set of coeval clusters is selected using either the SCL97 or the V98 isochrones, and using a slope $`\alpha `$ for the $`V_{\mathrm{HB}}`$ vs. \[Fe/H\] relation in the range $`0.17<\alpha <0.23`$ for the V98 isochrones and $`0.15<\alpha <0.20`$ for the SCL97 isochrones. The observed dispersion is $`\sigma =0.1`$ mag with respect to both the SCL97 and V98 isochrones, i.e. fully compatible with the uncertainties in $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$, strengthening the idea that the selected clusters must be coeval.
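A minimal sketch of this selection, assuming the isochrone is supplied as a tabulated $`\mathrm{\Delta }V_{\mathrm{HB}}^{\mathrm{TO}}`$ vs. \[Fe/H\] relation (names are ours):

```python
import numpy as np

def select_coeval(feh, dv_hb_to, iso_feh, iso_dv, nsig=1.0):
    """Flag 'fiducial coeval' clusters: those whose vertical parameter
    lies within nsig*sigma of the best-fitting isochrone, evaluated
    at each cluster's [Fe/H]."""
    model = np.interp(feh, iso_feh, iso_dv)    # isochrone prediction per cluster
    residual = dv_hb_to - model
    sigma = residual.std()                     # ~0.1 mag observed dispersion
    return np.abs(residual) <= nsig * sigma
```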
### 3.2 Ages from color differences
In Fig. 2, the parameter $`\delta (V-I)_{\mathrm{@}2.5}`$ vs. metallicity for the ground-based sample is compared with the SCL97 (top panel) and V98 (bottom panel) isochrones. The trend with metallicity of the $`\delta (V-I)_{\mathrm{@}2.5}`$ parameter for the fiducial coeval GGCs (filled circles) is remarkably similar to the theoretical trend. In Fig. 2, the fiducial coeval GGCs all lie within a 2 Gyr strip, showing full consistency with what was found from the vertical method.
The plot in Fig. 3 is the $`B-V`$ counterpart of Fig. 2. Also in this case most of the GGCs lie within a narrow band. However, as pointed out also by B98, the age width of this band is more difficult to obtain, since the isochrones show different trends with \[Fe/H\]. The trend with \[Fe/H\] of $`\delta (B-V)_{\mathrm{@}2.5}`$ for the coeval clusters also differs from that of the isochrones. The differences in $`\delta (B-V)_{\mathrm{@}2.5}`$ for different models and different bolometric corrections are widely discussed in B98. Here, we simply note that the recent V98 calculations seem to better approximate the observed data and that, using these isochrones, an age dispersion comparable with that from the vertical method is obtained.
A further remark concerns the different dependence on metallicity of the horizontal parameters in $`(B-V)`$ and in $`(V-I)`$. Figs. 2 and 3 are plotted on the same scale. Clearly, $`\delta (B-V)_{\mathrm{@}2.5}`$ depends strongly on \[Fe/H\], particularly for \[Fe/H\]$`>-1.7`$, as already pointed out by VBS90. As a consequence, even a small error in the metal content of a cluster can strongly affect the determination of its relative age. This fact might also explain the apparently larger dispersion of the $`\delta (B-V)_{\mathrm{@}2.5}`$ parameter. $`\delta (V-I)_{\mathrm{@}2.5}`$ has a much milder dependence on metallicity.
All the above considerations strengthen the conclusion of SRP97 that the $`\delta (V-I)`$ parameter is much more reliable than $`\delta (B-V)`$ as a relative age indicator.
Relative ages were computed only by means of the difference in the $`\delta (V-I)_{\mathrm{@}2.5}`$ parameter with respect to the 13 Gyr SCL97 or 14 Gyr V98 isochrone fitted to the points. The $`\delta (V-I)_{\mathrm{@}2.5}`$ dispersion is 0.01 mag, as expected on the basis of the errors in measuring this parameter.
## 4 Discussion
The dispersion in $`\delta (V-I)_{\mathrm{@}2.5}`$ translates into an age dispersion of 1.4 Gyr (adopting the SCL97 models) or 1.6 Gyr (adopting the V98 models), which lowers to 1.3 and 1.4 Gyr if we remove Pal 12, a known anomalously young cluster (Rosenberg et al. 1998). The age dispersion of the adopted coeval clusters is 0.75 Gyr for the SCL97 models and 0.70 Gyr for the V98 models.
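A one-line consistency check of these numbers (the age sensitivity below is the one implied by the quoted figures, not a value taken directly from the models):

$$\sigma _t\simeq \sigma _{\delta (V-I)}\left|\frac{\partial \delta (V-I)_{\mathrm{@}2.5}}{\partial t}\right|^{-1}\approx 0.01\text{ mag}\times (0.007\text{ mag Gyr}^{-1})^{-1}\approx 1.4\text{ Gyr}.$$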
As pointed out above, if we take into account the observational errors, the GGC age dispersion is fully compatible with a null age dispersion.
The relative ages from the horizontal method estimated from Fig. 1 and Fig. 2 are plotted in Fig. 4 vs. \[Fe/H\] and the Galactocentric distance $`R_{GC}`$. The open circles are the ages from the V98 models and the open triangles represent the ages from the SCL97 models. Regardless of the model, the relative ages do not depend on the cluster metallicity, though the age dispersion is larger for the intermediate- and higher-metallicity GGCs. No clear dependence on the Galactocentric distance can be identified.
These results indicate that the bulk of the Galactic halo formed on a timescale $`\lesssim 1`$ Gyr; a minor fraction of younger clusters is also present, although their true Galactic origin is still debated. These younger clusters tend to be located in the outer halo; the interpretation of this trend is controversial. They could have formed in isolated Searle & Zinn (1978) fragments later accreted onto the halo, or else they could be explained by the SGMC model of GC formation of Harris & Pudritz (1994). In this context a delayed formation of the outer GCs is naturally explained (see also Harris et al. 1998).
An age-metallicity relation cannot be detected by this investigation. This means that the early chemical enrichment of the Galactic halo also took place on a timescale $`<1`$ Gyr, up to metallicities roughly half solar.
###### Acknowledgements.
It is a pleasure to thank Peter Stetson for his generosity in providing us with all the software needed for the stellar photometry and for the helpful discussions. Alessandro Chieffi and Don VandenBerg are warmly thanked for the discussions and suggestions and for making their models available in advance of publication.
# Hadronic Decays of Beauty and Charm from CLEO
## Introduction
The CLEO experiment has provided important contributions to our understanding of hadronic decays of the beauty and charm systems since it began taking data in the early 1980s. The wealth of results is due primarily to the large data samples collected over the years and the excellent tracking, energy resolution and reasonably good particle ID of the CLEO series of detectors. In this paper we present five analyses of hadronic decays of charm and bottom mesons.
CLEO is currently analyzing data from two separate runs taken with different detector configurations. The first run ended in the summer of 1995 and includes a total luminosity of 3.1 $`fb^{-1}`$ on resonance and 1.6 $`fb^{-1}`$ taken 60 MeV below the $`\mathrm{{\rm Y}}(4S)`$. Given the $`B\overline{B}`$ cross section, this sample corresponds to 3.1 million $`B\overline{B}`$ pairs. The configuration of the detector during the first run is described in detail in Ref. jrod:CLEOII. This dataset will be referred to as the CLEOII dataset hereafter. At the end of the CLEOII run the detector was significantly improved with the replacement of the inner straw tube drift chambers by a 3-layer silicon vertex (SVX) detector jrod:CLEOSVX. In addition, the argon-ethane gas in the drift chambers was replaced with a helium-propane mixture, which improved both particle ID and the momentum resolution in the drift chambers. Finally, the track fitting software was updated to one based on the Kalman filtering algorithm. These improvements in tracking and particle ID, while featured in only two of the analyses presented here, will become increasingly important in future analyses. The data collected and reconstructed with the CLEOII upgrade (CLEOII/SVX) consist of 2.5 $`fb^{-1}`$ on and 1.3 $`fb^{-1}`$ off resonance. The data run for CLEOII/SVX will be completed at the end of 1998.
### First Observation of $`B^0\to D^{*-}D^{*+}`$
The Cabibbo-suppressed decay $`B^0\to D^{*-}D^{*+}`$ is a potentially interesting CP violation mode, whose rate is expected to be comparable to that of the gold-plated CP mode $`B^0\to J/\mathrm{\Psi }K_s`$. Since the $`D^{*+}D^{*-}`$ final state can be obtained from either a $`B^0`$ or a $`\overline{B}^0`$, this decay mode can be used to extract $`\mathrm{sin}2\beta `$ through $`B^0\overline{B^0}`$ mixing. The amplitude for this decay is dominated by the external tree diagram and we can estimate its rate by comparison to the measured $`B^0\to D^{*-}D_s^{*+}`$ rate, after taking into account the appropriate ratio of decay constants and CKM matrix elements. While the expected rate is of order 0.1%, the rather large number of particles in the decay chain, six in the lowest multiplicity mode, significantly reduces the expected yield.
CLEO has performed a search for this mode by examining all of the currently available data collected on the $`\mathrm{{\rm Y}}(4S)`$ jrod:dstrdstr. This includes the complete CLEOII (3.1 $`fb^{-1}`$) sample and the available portion of the CLEOII/SVX data (2.5 $`fb^{-1}`$). The decay chain is fully reconstructed, cutting on kinematic variables to reduce backgrounds. In this analysis only three of the possible four combinations of the $`D^{*+}D^{*-}`$ decay modes were used: the mode with two soft $`\pi ^0`$s, $`B^0\to (D^+\pi ^0)(D^-\pi ^0)`$, was not used due to background considerations. For events in the CLEOII/SVX sample an additional requirement was imposed to take advantage of the better position resolution obtained from the SVX.
The observables used to extract the signal were the beam-constrained mass ($`M_{BC}`$) and the difference between the reconstructed energy and the beam energy ($`\mathrm{\Delta }E`$). At CLEO, the $`B`$s are produced nearly at rest, so $`\mathrm{\Delta }E`$ for real events is peaked at zero, while backgrounds from other decays peak one or more pion masses away from zero. The beam-constrained mass variable is just the usual invariant mass with the beam energy substituted for the measured energy. The resolution of $`M_{BC}`$ is significantly better than that of the invariant mass, by about an order of magnitude, due to the small spread of the beam energy. In Figure 1 we show the scatter plots of the on-$`\mathrm{{\rm Y}}(4S)`$ distributions for events that pass all of the event selection criteria; a sketch of these two observables is given below. The solid rectangle in Figure 1 (left) is the signal region. A total of 4 events were observed.
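A minimal sketch of the two observables (function and variable names are ours; energies in GeV):

```python
import numpy as np

def kinematic_observables(p4, e_beam):
    """Beam-constrained mass and Delta E for a fully reconstructed B
    candidate. p4 = (E, px, py, pz) is the summed daughter four-momentum
    and e_beam the beam energy, in the e+e- center-of-mass frame."""
    e, px, py, pz = p4
    p2 = px**2 + py**2 + pz**2
    m_bc = np.sqrt(e_beam**2 - p2)   # beam energy replaces the measured energy
    delta_e = e - e_beam             # peaks at zero for correctly reconstructed B's
    return m_bc, delta_e
```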
To estimate the backgrounds that enter the signal region, two independent methods were used. The first estimate is based on events in the $`\mathrm{\Delta }E`$ vs. $`M_{BC}`$ sideband indicated by the region outside the dashed rectangle in Figure 1 (left). An estimate of $`0.26\pm 0.04`$ events is determined from this sideband. A second estimate is obtained by adding contributions from continuum, $`B\overline{B}`$ events (other kinematically similar $`B`$ decays that can fake the signal) and random combinations that are reconstructed as signal. Each of these contributions was modeled by off-resonance data, Monte Carlo, and/or $`D`$ mass sidebands. This estimate predicts a background of $`0.37\pm 0.05`$ events.
The branching fraction measured from the four observed events is:
$$\mathcal{B}(B^0\to D^{*-}D^{*+})=(6.2_{-2.9}^{+4.0}\pm 1.0)\times 10^{-4}.$$
(1)
This value is determined from an unbinned likelihood fit using the larger of the two background estimates. It is consistent with the expected rate of 0.1% given the measured branching fraction of $`B^0\to D^{*-}D_s^{*+}`$ and our knowledge of the decay constants and CKM matrix elements jrod:dstrdstr.
### Angular Distributions in $`B\to D^{*}\rho `$
A full partial wave analysis of the decays $`B^-\to D^{*0}\rho ^-`$ and $`\overline{B}^0\to D^{*+}\rho ^-`$ has been performed using the entire CLEOII data sample. These decays proceed primarily through tree-level $`b\to c`$ $`W`$ emission and, to first order, their amplitudes are independent of a CKM phase. The absence of a weak phase makes these decays a clean laboratory in which to study the effects of final state interactions (FSI) in hadronic $`B`$ decays. The full partial wave decomposition, with its own phases, provides us with a way to determine the strong phases through an analysis of the angular distribution of the final states.
In order to extract information on the strong phases we first need to express the differential decay rate in terms of complex amplitudes and helicity angles. In this analysis we use the helicity basis, expressed in three components: two, $`H_\pm `$, represent the transverse components, and one, $`H_0`$, describes the longitudinal component. Squaring and factoring the amplitude gives the differential decay rate in terms of the helicity amplitudes and the helicity angles $`\theta _\rho `$, $`\theta _{D^{*}}`$ and $`\chi `$. The form of the expression is,
$`{\displaystyle \frac{d\mathrm{\Gamma }}{d\mathrm{cos}\theta _{D^{*}}d\mathrm{cos}\theta _\rho d\chi }}=4|H_0|^2\mathrm{cos}^2\theta _{D^{*}}\mathrm{cos}^2\theta _\rho +\left(|H_{-}|^2+|H_{+}|^2\right)\mathrm{sin}^2\theta _{D^{*}}\mathrm{sin}^2\theta _\rho `$
$`+2\left[Re(H_{+}H_{-}^{*})\mathrm{cos}2\chi -Im(H_{+}H_{-}^{*})\mathrm{sin}2\chi \right]\mathrm{sin}^2\theta _{D^{*}}\mathrm{sin}^2\theta _\rho `$
$`+\left[Re(H_{+}H_0^{*}+H_{-}H_0^{*})\mathrm{cos}\chi -Im(H_{+}H_0^{*}-H_{-}H_0^{*})\mathrm{sin}\chi \right]\mathrm{sin}2\theta _{D^{*}}\mathrm{sin}2\theta _\rho .`$ (2)
The two helicity angles are defined in the rest frame of the decaying meson as the angle between one of the daughters and the direction of the parent in the rest frame of the $`B`$. The angle $`\chi `$ is the angle between the two decay planes and is related to the azimuthal directions of the helicity axes by $`\chi =\varphi _{D^{*}}-\varphi _\rho `$. In the amplitude, the strong phase information is contained in the terms with the imaginary parts; in Equation (2), no FSI implies that both $`Im(H_{+}H_0^{*}-H_{-}H_0^{*})`$ and $`Im(H_{+}H_{-}^{*})`$ are zero, or equivalently, that all amplitudes are relatively real jrod:KMP92. A sketch evaluating Equation (2) is given below.
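The sketch below evaluates Equation (2) for given complex helicity amplitudes; it is a direct transcription of the formula (names are ours). Setting all amplitudes relatively real makes the $`\mathrm{sin}\chi `$ and $`\mathrm{sin}2\chi `$ terms vanish:

```python
import numpy as np

def d_gamma(h0, hp, hm, th_dst, th_rho, chi):
    """Unnormalized differential rate of Eq. (2) for B -> D* rho."""
    c1, s1 = np.cos(th_dst), np.sin(th_dst)
    c2, s2 = np.cos(th_rho), np.sin(th_rho)
    rate = 4.0 * abs(h0)**2 * c1**2 * c2**2
    rate += (abs(hm)**2 + abs(hp)**2) * s1**2 * s2**2
    pm = hp * np.conj(hm)                        # H+ H-* interference
    rate += 2.0 * (pm.real * np.cos(2 * chi)
                   - pm.imag * np.sin(2 * chi)) * s1**2 * s2**2
    p0, m0 = hp * np.conj(h0), hm * np.conj(h0)  # H+- H0* interference
    rate += ((p0 + m0).real * np.cos(chi)
             - (p0 - m0).imag * np.sin(chi)) * np.sin(2 * th_dst) * np.sin(2 * th_rho)
    return rate
```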
All events are required to pass a series of selection criteria to fully reconstruct the decay chain of the $`B`$, using three decay modes of the $`D^0`$ ($`K\pi `$, $`K\pi \pi ^0`$ and $`K3\pi `$) and the dominant decay modes of the $`D^{*}`$ jrod:Dstrpol-conf. Two methods are used to extract the phase information in Equation (2): a moments analysis jrod:DDF98, in which the coefficients of each term in Equation (2) are extracted, and a direct determination of the magnitudes and phases of the helicity amplitudes from a three-dimensional (3D) unbinned maximum likelihood fit of the data to the functional form in Equation (2).
In the unbinned likelihood fit and moments analysis the likelihood function includes terms for the signal and background contributions, factorizing each term into an angular part and a mass part. The mass part characterizes the $`\rho `$ invariant mass with a relativistic Breit-Wigner and the beam-constrained mass with a Gaussian function. The angular part is modeled by Equation (2). To minimize the number of free parameters the fit is first performed ignoring the angular part, and the parameters of the mass function are extracted. In the second step, the fit is redone including the angular function and fixing the mass parameters to the values extracted from the first fit. The parameters in the angular part of the likelihood function are the phases and magnitudes of the transverse helicity amplitudes relative to the longitudinal component, which is set to $`H_0=1`$ and $`\delta _0=0`$ in the fit. The amplitudes are then rescaled to satisfy the normalization condition $`|H_0|^2+|H_{-}|^2+|H_{+}|^2=1`$. The results of the likelihood fit for the strong phases and amplitudes are given in Table 1. The coefficients of Equation (2) have also been determined from the fit and a comparison made with the values obtained from the moments analysis. We find that the results are consistent with each other within statistical errors (see Table 2). Our values for the phases in Table 1 suggest non-trivial strong phases in both the $`B^-\to D^{*0}\rho ^-`$ and $`\overline{B}^0\to D^{*+}\rho ^-`$ modes; however, the statistical size of our sample does not provide an independent confirmation of the results in a 1D fit of the data to the $`\mathrm{sin}\chi `$ or $`\mathrm{sin}2\chi `$ distributions.
The results of the fit are also used to test the factorization hypothesis by comparing the polarization of the $`\overline{B}^0\to D^{*+}\rho ^-`$ decay with the polarization in the semi-leptonic decay $`\overline{B}^0\to D^{*+}l^-\overline{\nu }`$ at the appropriate $`q^2`$ scale. The predicted values for the longitudinal polarization in the semi-leptonic decay at $`q^2=m_\rho ^2`$ range from 85% to 88%, and the recent CLEO result is $`91.4\pm 15.2\pm 8.9\%`$ jrod:Dstrpol-conf. The longitudinal polarization from the fit is $`87.8\pm 3.4\pm 4.0`$%, consistent within errors with both the theoretical predictions and the semi-leptonic measurement.
### Measurements in $`B^-\to D_J^0\pi ^-`$
The $`L=1,n=1`$ charmed mesons are the $`P`$-wave orbital excitations, in which four spin-orbit configurations are possible. In the heavy quark limit, these combinations can be identified by the $`j_l`$ quantum number, which couples the spin of the light quark with the orbital angular momentum. These states form two doublets, $`j_l=3/2`$ and $`j_l=1/2`$, which, from conservation of parity and angular momentum, decay via either $`S`$ or $`D`$ waves. In the heavy quark limit, the $`j_l=1/2`$ states decay only via $`S`$ wave while the $`j_l=3/2`$ states can decay only via $`D`$ wave. The only $`L=1`$ states so far observed have been the narrow $`L=1,n=1`$ resonances, the $`D_1(2420)`$ and the $`D_2^{*}(2460)`$, with widths of order 20 MeV. These states have been assigned to the $`j_l=3/2`$ doublet by observing the angular distributions of the decay products and measuring the ratio of $`D^{*}\pi `$ to $`D\pi `$ decays in continuum production, where the $`D_J`$ are unpolarized.
A full partial wave analysis of the decay $`B^-\to D_J^0\pi _1^-`$, $`D_J^0\to D^{*+}\pi _2^-`$, $`D^{*+}\to D^0\pi _3^+`$ has been performed to measure the product branching fractions $`\mathcal{B}(B)\times \mathcal{B}(D_J)`$ and the properties, the mass and width, of the broad $`D_1(j=1/2)`$ resonance. The measurements are extracted from a 4D unbinned maximum likelihood fit (4D-MLF) to the data, where the independent variables are the three helicity angles and the invariant mass of the $`D_J^0`$ resonance.
An important point in this analysis is the fact that the $`D_J`$ is completely polarized, since it is produced in the decay of a pseudo-scalar to a vector plus another pseudo-scalar. Knowing the initial polarization of the $`D_J`$, and the fact that angular momentum and parity are both conserved, provides us with a clear picture, in the heavy quark limit, of the angular decay distribution of final states that first decay through one of the three intermediate $`D_J^0`$ resonances. In other words, we can distinguish among the three possible $`L=1`$ states that decay to $`D^{*+}\pi ^-`$ not only by using the invariant mass but also by examining the full angular distribution of the final state.
A partial reconstruction technique selects the events from the entire CLEOII dataset. These events are used in the fit to the 4D maximum likelihood function. In the partial reconstruction method the entire decay is reconstructed, up to a quadratic ambiguity, from the 4-momenta of the three pions ($`\pi _1,\pi _2,\pi _3`$) in the decay chain, imposing energy-momentum conservation at each decay vertex jrod:part\_recon. This method improves statistics by about an order of magnitude over the usual full reconstruction technique, since it eliminates the explicit reconstruction of the charmed meson. The trade-off for the gain in statistics is the increased complexity of the analysis and higher levels of backgrounds. These backgrounds are, however, modeled using the 1.6 $`fb^{-1}`$ of off-resonance data and Monte Carlo simulations. In Figure 2 (left) we show the $`D^{*+}\pi ^-`$ invariant mass distribution for events in the on-resonance CLEOII dataset that pass the selection criteria described in Ref. jrod:ddblstarconf. Superimposed on the plot are the $`D^{*+}\pi ^-`$ invariant mass projections from the 4D-MLF for each of the $`D_J^0`$ candidates, plus the total background contribution from various sources: continuum and other $`B\overline{B}`$ decays with similar kinematics. The ability of the angular information to distinguish between the three possible resonances is illustrated in Figure 2 (right), where Monte Carlo simulations of the $`B^-`$ decays to each of the three $`L=1`$ $`D_J`$ resonances are shown.
The 4D likelihood function used in the fit includes terms for the angular distribution, mass amplitudes of the resonances, strong phase shifts and parameters that allow for mixing between the two $`1^+`$ states. It also allows for detector smearing and acceptance. The functional form of the amplitude is
$`𝒜_{B\to D^{*+}\pi ^-\pi ^-}=`$ $`\alpha _{nr}e^{i\delta _{nr}}+\alpha _2A_2a_2e^{i\delta _2}+\alpha _{1n}A_{1n}\left(a_{1d}\mathrm{cos}\beta +a_{1s}\mathrm{sin}\beta e^{i\varphi }\right)`$ (3)
$`+\alpha _{1b}A_{1b}\left(a_{1s}\mathrm{cos}\beta -a_{1d}\mathrm{sin}\beta e^{i\varphi }\right)e^{i\delta _1}`$
where the $`\alpha _i`$ allow for different contributions from the various resonant and non-resonant $`D^{*+}\pi ^-\pi ^-`$ components, the $`A_i`$ are Breit-Wigner amplitudes, and the $`a_i`$ are the angular ($`D_{m,m^{\prime}}^j`$) amplitudes. The mixing between the narrow and broad $`1^+`$ resonances is described by the mixing angles $`\beta `$ and $`\varphi `$, and the strong phases of the resonant and non-resonant components are included via the $`\delta `$ parameters. The $`1n`$, $`2`$, $`1b`$, and $`nr`$ subscripts refer to the narrow $`1^+,j_l=3/2`$ resonance, the $`2^+,j_l=3/2`$ resonance, the broad $`1^+,j_l=1/2`$ resonance and the non-resonant component, respectively. This parameterization is not unique, and an alternative parameterization has been used as a systematic check. The variation in the results due to the alternative parameterization is quoted as an additional systematic error. The fit is performed with the masses and widths of the relativistic Breit-Wigners for the narrow $`2^+`$ and $`1^+`$ states fixed to their measured values jrod:PDG. The normalization of each of the three resonant components and the non-resonant component, plus the mass and width of the broad $`1^+`$ state, are allowed to float in the fit. From the fit we extract the invariant mass and width of the broad $`1^+`$ resonance to be:
$`M_{D_1(j=1/2)^0}`$ $`=`$ $`2.461_{-0.034}^{+0.041}\pm 0.010\pm 0.032\mathrm{GeV}`$
$`\mathrm{\Gamma }_{D_1(j=1/2)^0}`$ $`=`$ $`290_{-79}^{+101}\pm 26\pm 36\mathrm{MeV}`$ (4)
where the first error is the statistical uncertainty, the third is the uncertainty from the parameterization of the amplitude and the second is the systematic uncertainty from all other sources. From the 4D fit we also extract the product branching ratios of the $`B^-`$ to each of the three $`D_J^0`$ states plus a single pion. These results are given in Table 3 together with the yields and the $`B^-`$ branching fractions, using the $`D_J^0`$ branching fractions from isospin symmetry. The second systematic error in Table 3 represents the uncertainty in the parameterization of the grand amplitude. These results are consistent with the values obtained earlier using a simpler 2D-MLF where not all of the angular information was used jrod:ddblstarconf. These branching fraction measurements, however, disagree with theoretical expectations from heavy quark effective theory, which predict the rates to be about three times smaller than our results jrod:N97. Our preliminary results on the mass and width of the broad $`1^+`$ charmed meson are in agreement with the quark model jrod:GK91.
### Search for the First Radial Excitation of the Charmed Meson
In 1997 the DELPHI collaboration claimed evidence for the first radially excited charmed meson, the $`D^{*\prime +}`$ jrod:DELPHI. They found an excess of $`66\pm 14`$ events in their sample of reconstructed $`D^{*+}\pi ^-\pi ^+`$ combinations, with a mass of $`2637\pm 2\pm 6`$ MeV and a small width. The assignment of the quantum numbers was based primarily on the mass measurement, which is consistent with theoretical expectations for a $`D^{*\prime +}`$ jrod:GI85. The width of the bump is approximately equal to the detector resolution, so DELPHI sets an upper bound on the decay width of the $`D^{*\prime +}`$ of $`<15`$ MeV at the 95% confidence level. The OPAL experiment has also performed a search for the $`D^{*\prime +}`$ in the same final state and in the DELPHI mass window, using a similar analysis procedure. They however found no excess, and set an upper limit on $`D^{*\prime +}`$ production of $`f_{Z^0\to D^{*\prime +}}\times \mathcal{B}(D^{*\prime +}\to D^{*+}\pi ^-\pi ^+)<2.1\times 10^{-3}`$ at 95% C.L. jrod:OPAL98. Both experiments collect data at the $`Z^0`$ mass, so $`D^{*\prime +}`$ production is from the $`c\overline{c}`$ and/or $`b\overline{b}`$ continuum. Both experiments also estimate that about half of their candidates are from $`c\overline{c}`$ production.
The analysis procedures used at CLEO are similar to those employed by both DELPHI and OPAL. First, pion and kaon candidates are selected from tracks originating at the IP. These are then combined to form $`D^0`$ and $`D^{*+}`$ candidates, requiring consistency with particle ID and that the invariant masses be close to the nominal values. To test the reconstruction procedure and reduce the systematic errors, the $`D^{*\prime +}`$ yields are compared to the $`D_J^0`$ yields, since the reconstruction procedures differ only by a single charged pion. The $`D^{*\prime +}`$ to $`D_J^0`$ production ratio also allows for a more direct comparison between the LEP and CLEO results.
Using the entire CLEOII data set we have searched for the $`D^{*\prime +}`$ in the mass region suggested by the DELPHI results. We find no evidence of an excess in the region between 2590 MeV and 2670 MeV, and set a preliminary upper limit of:
$$\frac{N_{D^{*\prime +}}(D^{*\prime +}\to D^{*+}\pi ^-\pi ^+)}{N_{D_2^{*0}}(D_2^{*0}\to D^{*+}\pi ^-)+N_{D_1^0}(D_1^0\to D^{*+}\pi ^-)}<0.16\mathrm{@}90\%\mathrm{C}.\mathrm{L}.$$
(5)
This may be compared with the DELPHI measurement of $`0.49\pm 0.18\pm 0.10`$ for this ratio jrod:DELPHI. The invariant mass distributions from DELPHI jrod:DELPHI and CLEO are shown in Figure 3. The DELPHI result includes both $`b\overline{b}`$ and $`c\overline{c}`$ production.
### Measurement of $`D^0\to K^+\pi ^-`$ Decays
The decay $`D^0\to K^+\pi ^-`$ can proceed either through a doubly Cabibbo-suppressed decay (DCSD) channel or through $`D^0\overline{D}^0`$ mixing. In the standard model, the rate is expected at the $`0.3\%`$ level, so this decay can be used to search for exotic or beyond-standard-model decay mechanisms. Standard model predictions for $`R`$, defined as $`R=\mathrm{\Gamma }\left(D^0\to K^+\pi ^-\right)/\mathrm{\Gamma }\left(D^0\to K^-\pi ^+\right)`$, from mixing vary considerably, from about $`10^{-3}`$ to $`10^{-10}`$. The contribution to $`R`$ from DCSD is of order $`\mathrm{tan}^4\theta _C\approx 3\times 10^{-3}`$ jrod:DCSD93. To separate the mixing and DCSD contributions a measurement of the decay time distribution is required. With the new silicon vertex detector CLEO can now perform this measurement; however, in the discussion that follows we present only an $`R`$ measurement that includes contributions from both mixing and DCSD. We plan to eventually add the time-dependent measurement to this analysis.
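For orientation, the DCSD estimate is simple arithmetic (taking $`\mathrm{tan}\theta _C\approx 0.23`$):

$$R_{\mathrm{DCSD}}\sim \mathrm{tan}^4\theta _C\approx (0.23)^4\approx 2.8\times 10^{-3},$$

consistent with the value quoted above.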
To measure the ratio $`R`$ we have to determine the decay rates $`\mathrm{\Gamma }(D^0\to K^+\pi ^-)`$ and $`\mathrm{\Gamma }(D^0\to K^-\pi ^+)`$. These rates are extracted by analyzing high momentum continuum $`D^{*+}\to D^0\pi _s^+\to (K\pi )\pi _s^+`$ events, where the sign of the slow pion ($`\pi _s^+`$) tags whether the $`K\pi `$ combination comes from a $`D^0`$ or a $`\overline{D}^0`$. The combination where the sign of the $`K`$ and the slow pion are the same is referred to as the “wrong sign” combination, while events where the $`K`$ and the $`\pi _s^+`$ have opposite signs are referred to as the “right sign” combination. The wrong/right sign signal yields are determined by fitting the distribution of mass differences ($`\delta M`$) between the $`D^{*+}`$ and the $`D^0`$, and requiring the $`D^0`$ invariant mass ($`M_D`$) to be within $`\pm 13`$ MeV ($`2\sigma `$) of the nominal $`D^0`$ mass. An important feature of this analysis is the small width of both the $`M_D`$ and $`\delta M`$ mass distributions compared to other experiments. This is due primarily to the improvements in the tracking algorithm and the vertex resolution of the SVX detector. For example, in the CLEOII/SVX data the resolution of $`M_D`$ is now 6.5 MeV, while the $`\delta M`$ resolution is 200 keV, compared with the pre-reprocessing CLEOII values (data reconstructed prior to the application of the Kalman algorithm) of about 12 MeV and 750 keV respectively. The improved mass resolution allows for a greater separation of signal from backgrounds. This improvement is evident in the low levels of backgrounds in Figure 4.
Because of the rarity of $`D^0\to K^+\pi ^-`$ events, a significant amount of work has been done to both reject and understand the backgrounds which enter the $`M_D`$ and $`\delta M`$ distributions. The most significant background components are due to a real $`\overline{D}^0\to K^+\pi ^-`$ decay combined with a random slow $`\pi _s^+`$, and backgrounds from $`D^{*+}\to (K^+\pi ^-)\pi _s^+`$ where the kaon and the pion are mis-identified. The random slow $`\pi _s^+`$ background tends to peak in the $`D^0`$ mass region but not in $`\delta M`$, while the doubly mis-identified background tends to peak in $`\delta M`$ but not in the $`D^0`$ mass. The latter is reduced by forming $`m_{\mathrm{flp}}(D^0)`$, the invariant mass with the mass assignments of the kaon and pion switched, and vetoing the event if its “mass-flip” mass is within $`4\sigma `$ of the nominal $`D^0`$ mass; a sketch of this veto is given below. The remaining backgrounds are modeled by Monte Carlo simulations. The background distributions are used in the likelihood function for the 1D maximum likelihood fit.
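A minimal sketch of the mass-flip veto (masses in GeV; the helper names and the nominal $`D^0`$ mass value are ours):

```python
import numpy as np

M_K, M_PI = 0.4937, 0.1396          # charged kaon and pion masses (GeV)

def inv_mass(p3_a, m_a, p3_b, m_b):
    """Two-body invariant mass from 3-momenta and mass hypotheses."""
    p3_a, p3_b = np.asarray(p3_a), np.asarray(p3_b)
    e_a = np.sqrt(p3_a @ p3_a + m_a**2)
    e_b = np.sqrt(p3_b @ p3_b + m_b**2)
    p = p3_a + p3_b
    return np.sqrt((e_a + e_b)**2 - p @ p)

def mass_flip_veto(p3_k, p3_pi, m_d0=1.8646, sigma=0.0065, nsig=4.0):
    """Reject doubly mis-identified D0 -> K pi candidates: swap the K/pi
    mass assignments and veto if the flipped mass lies within nsig*sigma
    of the nominal D0 mass (sigma = 6.5 MeV, as quoted in the text)."""
    m_flp = inv_mass(p3_k, M_PI, p3_pi, M_K)     # swapped hypotheses
    return abs(m_flp - m_d0) < nsig * sigma
```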
The preliminary result of the 1D ML fit for the ratio of decay rates is:
$$\frac{\mathrm{\Gamma }(D^0\to K^+\pi ^-)}{\mathrm{\Gamma }(D^0\to K^-\pi ^+)}=0.0032\pm 0.0012\pm 0.0015$$
(6)
This result was obtained from an analysis of the current CLEOII/SVX dataset, which includes both the off- and on-resonance samples for a total of 3.8 $`fb^{-1}`$. The new result is already more statistically significant, by itself, than the current world average of $`0.0072\pm 0.0025`$ jrod:PDG. There is currently about a factor of two more CLEOII/SVX data yet to be analyzed, so we expect the statistical significance to improve. We are also working on improving the estimate of the systematic error and expect it to be much smaller than the value quoted in Equation (6). Finally, it is worth noting that while compatible, within errors, with the world average, the new $`R`$ measurement is lower by about a factor of two and therefore more consistent with theoretical expectations.
### Summary and Conclusions
We have presented preliminary results from five CLEO analyses of hadronic decays of bottom and charmed mesons. From the $`B`$ hadronic analyses we have shown results which provide the first observation and measurement of the decay $`B^0\to D^{*-}D^{*+}`$, a first hint of final state interactions in $`B`$ decays, and a measurement of the decay rates of the $`B^-`$ meson into three of the $`L=1`$ charmed mesons plus a single pion. From the $`B^-\to D_J^0\pi ^-`$ analysis we have the first observation of the broad $`L=1,j_l=1/2`$ charmed meson and have determined its mass and decay width. Our preliminary search for the first radial excitation of the charmed meson (the $`D^{*\prime +}`$) has been unable to confirm the observation by the DELPHI collaboration. Finally, our preliminary measurement of the ratio $`\mathrm{\Gamma }(D^0\to K^+\pi ^-)/\mathrm{\Gamma }(D^0\to K^-\pi ^+)`$ gives a result lower than previous measurements. This new measurement is an improvement over previous CLEO results, with a substantial increase in data and an improved detector.
# Multifragmentation of non-spherical nuclei
## ACKNOWLEDGMENTS
We are grateful to D.H.E. Gross for his encouragement and interest in this project. We are thankful to O. Shapiro for the implementation of the original version of MMMC code. We thank also G. Auger, A. Chbihi and J.-P. Wieleczko for their interest in this work. This work has been supported by the CNRS-JINR Dubna agreement No 98–28.
# Path Crossing Exponents and the External Perimeter in 2D Percolation
## Abstract
2D percolation path exponents $`x_\ell ^𝒫`$ describe probabilities for traversals of annuli by $`\ell `$ non-overlapping paths, each on either occupied or vacant clusters, with at least one of each type. We relate the probabilities rigorously to amplitudes of $`O(N=1)`$ models whose exponents, believed to be exact, yield $`x_\ell ^𝒫=(\ell ^2-1)/12`$. This extends to half-integers the Saleur–Duplantier exponents for $`k=\ell /2`$ clusters, yields the exact fractal dimension of the external cluster perimeter, $`D_{EP}=2-x_3^𝒫=4/3`$, and also explains the absence of narrow gate fjords, as originally found by Grossman and Aharony.
The fractal geometry of critical percolation clusters has been of interest both for intrinsic reasons and as a window on a range of phenomena. It is characterized by fractal dimensions of various sets, e.g., of the connected clusters, their backbones, the sets of pivotal (singly–connecting) bonds, the clusters’ boundaries (hulls), and their external (accessible) perimeters. A set $`S`$ is said here to be of fractal dimension $`D_S`$ if the density of points in $`S`$ within a box of linear size $`R`$ decays as $`R^{-x_S}`$, with $`x_S=d-D_S`$ in $`d`$ dimensions.
For two–dimensional (2D) independent percolation, many of the fractal dimensions have been found exactly, though most of these values have not yet been established at a rigorous level. In several cases, Saleur and Duplantier (SD) identified the co–dimension $`x_S`$ with the exponent $`x_k^𝒞`$ which describes the decay law $`P_k^𝒞\approx (r/R)^{x_k^𝒞}`$ for the probability ($`P_k^𝒞`$) that in an annular region $`D(r,R)`$ the small circle of radius $`r`$ is connected to the outer one, of radius $`R>>r`$, by $`k`$ different clusters of occupied sites (or bonds). SD utilized the observation that the statistics of the $`2k`$ boundary lines of the connected clusters correspond to those of loops in some well recognized models: the $`Q=1`$ Potts model (at its critical point) for the bond percolation model and the $`O(N=1)`$ loop model of Domany et al. (at its low temperature phase) for site percolation on the triangular lattice. Using the “Coulomb gas” representation for the corresponding $`\ell `$-line exponents, $`x_\ell ^{O(N)}`$, SD obtained for both models the values, expected to be universal,
$$x_k^𝒞=x_{\ell =2k}^{O(N=1)}=(4k^2-1)/12,$$
(1)
where $`k`$ clusters correspond to $`\ell =2k`$ lines in the loop model.
Among the noteworthy applications of the above formula are the “hull dimension”, i.e., the dimension of the cluster’s perimeter,
$$D_H=2-x_1^𝒞=7/4,$$
(2)
and the dimension of the set of “red” (singly connecting) bonds, which are pinching points between two large clusters:
$$D_{SC}=2-x_2^𝒞=3/4=1/\nu ,$$
(3)
where $`\nu `$ is the correlation length exponent, in agreement with previously derived values. However, some well known percolation dimensions have eluded this exact approach: the dimension $`D_{EP}`$ of the external (accessible) perimeter (EP) or frontier of a cluster, first studied by Grossman and Aharony (GA), and the backbone dimension. The EP of a cluster is the accessible part of the hull, which excludes deep “fjords” that are connected to the cluster’s complement only through very narrow passages (or “gates”). The dimension of the EP was found numerically to be $`D_{EP}\approx 4/3`$. GA also made the puzzling observation that, while typical clusters do show many fjords with only a narrow passage to the complement, once one fills in fjords with passages of width two or three lattice spacings, no fjords of broader microscopic passages and depth comparable with that of the cluster are left. This is clearly visible in Fig. 6 of the second Ref. . Both of these observations make the EP look very similar to self–avoiding walks (SAW’s). Although there have appeared conjectures attempting to make this relation quantitative, the connection was never elucidated.
In this Letter we report on a resolution of these issues through analysis of the path crossing probabilities.
i. Basing the relation of percolation exponents with the $`O(N=1)`$ exponents on a somewhat different footing than that used in Ref., we extend the list of exact values proposed for critical percolation in 2D. Instead of focusing on entire clusters, we consider the probability $`P_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )`$ that the annulus $`D(r,R)`$ is traversed by (at least) $`\ell `$ non–overlapping connected paths, which are “monochromatic” in the sense that each consists of either occupied sites (“color” $`\tau _j=+`$) or vacancies ($`\tau _j=-`$). We rigorously prove that for color sequences which include at least one of each type ($`\pm `$) the decay rates of the probabilities are color-independent, and are given by the $`O(N=1)`$ exponents. Assuming the validity of the exact values for the latter, we find that
$$P_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )\approx (r/R)^{x_\ell ^𝒫}$$
(4)
with the path crossing exponents $`x_\ell ^𝒫`$ satisfying
$$x_\ell ^𝒫=x_\ell ^{O(N=1)}=(\ell ^2-1)/12.$$
(5)
Since the cluster exponents are $`x_k^𝒞=x_{2k}^𝒫`$, Eq. (5) may be viewed as an extension of the SD formula to odd values of $`\ell `$, or half-integer values of $`k`$, and to more general sequences $`\tau _j=\pm `$.
ii. Using the newly acquired values we explain some of the quantitative and qualitative features of the EP of critical clusters mentioned above: its dimension, which we identify as
$$D_{EP}=2-x_3^{O(1)}=4/3,$$
(6)
and the interesting fact that – unlike the hull – the EP appears to be self–avoiding on the macroscopic scale.
iii. We consider also the analogous boundary or “surface” exponents, which describe the probability $`\stackrel{~}{P}_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )`$ that, within the upper half space, a semi-annular region $`\stackrel{~}{D}(r,R)`$ is traversed by $`\ell `$ paths (see Fig. 1). For the exponents defined by
$$\stackrel{~}{P}_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )\approx (r/R)^{\stackrel{~}{x}_\ell ^𝒫}$$
(7)
we find
$$\stackrel{~}{x}_\ell ^𝒫=\stackrel{~}{x}_{\ell +1}^{O(N=1)}=(\ell +1)\ell /6.$$
(8)
In this case the relation is valid with no restriction on the color sequence $`\tau `$; however there is a shift: $`\ell `$ crossing paths correspond to $`(\ell +1)`$ $`O(N=1)`$ lines. Thus, with odd $`\ell `$, one recovers the cluster boundary exponents $`\stackrel{~}{x}_k^𝒞=\stackrel{~}{x}_{2k-1}^𝒫=k(2k-1)/3`$, as in Refs. .
Before we turn to describe the arguments for the exact values of the path exponents, as provided by Eqs. (5) and (8), let us present their implications concerning the dimension and shape of the external perimeter. Each point on the accessible EP is next to the end of three paths of lengths comparable with the diameter of the cluster – a path of occupied sites and in addition two distinct dual paths of vacancies (Fig. 2a), which guarantee that the point is not within a fjord of narrow opening (both paths must be able to exit the fjord via the narrow gate). This yields $`D_{EP}=2-x_3^𝒫=4/3`$, in excellent agreement with the numerical results.
GA’s observation concerning the fjords is particularly striking from the perspective of the scaling limit, for which one sends the lattice spacing to zero while keeping sight of the curves observed on the macroscopic scale. (The limit can be constructed using the analysis of Ref. , which implies that the cluster hulls and EP’s can still be described by means of Hölder continuous random curves.) While the EP is self–avoiding on the lattice scale, like the hull it could have close encounters which appear as self–intersections when viewed from the macroscopic perspective. Yet such close encounters are not observed. This puzzle, too, is explained by the generalized path statistics: the occurrence in an $`L\times L`$ box of a cluster with a fjord of depth $`R=sL`$ and neck width $`r=ϵL`$ requires six paths, two triplets of the kind used in the derivation of Eq. (6), which meet in a region of size $`r`$ and avoid each other up to a radius $`R`$ (Fig. 2b). The probability of finding such six paths scales as $`ϵ^{-d}\times \left(\frac{ϵ}{s}\right)^{x_6^𝒫}=O(ϵ^{x_6^𝒫-d})`$ where $`d=2`$. Equation (5) yields the exponent value $`x_6^𝒫=2\frac{11}{12}`$ and hence the probability for a randomly picked configuration to exhibit such a gate tends to zero, in the situation where $`s`$ is fixed at some non-infinitesimal value $`0<s<1`$, and $`ϵ\to 0`$. The crucial point here is that the fractal dimension of the set of these gates is negative:
$$D_G=2-x_6^𝒫<0.$$
(9)
This explains the asymptotic absence of macroscopic-size fjords of neck width $`r`$ anywhere between a few multiples of the lattice spacing and $`ϵL`$, for any fixed $`ϵ\ll 1`$. The above argument also implies that the EP will not exhibit peninsulas with narrow isthmuses, even though that condition was not built into the construction.
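Spelling out the arithmetic behind the gate argument:

$$x_6^𝒫=\frac{6^2-1}{12}=\frac{35}{12},\qquad D_G=2-\frac{35}{12}=-\frac{11}{12}<0,\qquad ϵ^{x_6^𝒫-d}=ϵ^{11/12}\to 0\text{ as }ϵ\to 0.$$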
It is of interest to recall here the suggestion which was made on a theoretical as well as a numerical basis , that percolation’s hull and EP dimensions coincide with the dimensions of polymers, respectively at the $`\theta `$-point (the onset of collapse), or in the SAW state:
$$D_H=D_\theta ,D_{EP}=D_{SAW}.$$
(10)
It is natural to conjecture that in the scaling limit the EP coincides, in its local statistics, with a SAW. This may appear to be in conflict with the asymmetry between the two sides of the EP; however, the absence of peninsulas in the scaling limit suggests that the symmetry is restored asymptotically.
The above results (5-9) are based on a rigorous relation of path crossing probabilities with amplitudes of a loop model which we shall now define, and on known exact values for the exponents of the latter (which still remain to be proven at a rigorous level). The arguments, which will be presented more completely in ref. , are formulated for the special model of independent site (i.e., hexagon) percolation on the triangular lattice. For reasons of universality one may expect the conclusions to apply to other 2D percolation models, e.g., bond percolation, and also to the statistics of the connected clusters of ($`+`$) or ($`-`$) spins in the 2D Ising model at all temperatures above $`T_c`$.
The loop-model configurations, $`\mathrm{\Gamma }`$, are collections of nonoverlapping loops and lines, in suitable subsets of the plane, which are allowed to have end-points only within prescribed regions. The weight of a configuration is
$$W(\mathrm{\Gamma })=K^{𝒩_\ell }N^{𝒩_𝒫},$$
(11)
with $`𝒩_𝒫`$ the number of closed lines (or “polygons”) and $`𝒩_{}`$ their total length (the number of bonds). For the particular case of hexagon percolation discussed here the fugacities are $`K=1`$, $`N=1`$, i.e., $`W(\mathrm{\Gamma })=1`$ for all $`\mathrm{\Gamma }`$ .
A probability distribution of loop and line configurations in a prescribed region is defined by means of the weights $`W(\mathrm{\Gamma })/Z`$, with $`Z`$ a suitable normalizing factor. Let now $`P_{\mathrm{}}^{O(N)}(r,R)`$ denote the probability that such a system of lines with no end points in the annular domain $`D(r,R)`$ contains at least $`\mathrm{}`$ lines traversing $`D(r,R)`$. For a representation of the surface exponents, we also let $`\stackrel{~}{P}_{\mathrm{}}^{O(N)}(r,R)`$ denote the corresponding event with the lines restricted to lie in the upper half plane, taken here with the “free boundary conditions”. A close variant of the quantity $`P_{\mathrm{}}^{O(N)}(r,R)`$ is the $`O(N)`$ amplitude $`G_{\mathrm{}}^{O(N)}(r,R)`$ which is defined as the sum over $`\mathrm{}`$ lines $`\{\gamma _1,\mathrm{},\gamma _{\mathrm{}}\}`$ spanning the annulus $`D(r,R)`$ of the probability $`\omega (\gamma _1,\mathrm{},\gamma _{\mathrm{}}\mathrm{\Gamma })`$ that the lines are included in $`\mathrm{\Gamma }`$. For $`N=1`$, that probability reduces to the local expression:
$`\omega (\gamma _1,\ldots ,\gamma _\ell )`$ $`=`$ $`2^{-𝒩_H(\gamma _1,\ldots ,\gamma _\ell )}2^{𝒦(\gamma _1,\ldots ,\gamma _\ell )}/2^\mathrm{\#}`$ (12)
with $`𝒩_H(\gamma _1,\ldots ,\gamma _\ell )`$ the number of hexagons touched by the lines, $`𝒦(\gamma _1,\ldots ,\gamma _\ell )`$ the number of line clusters – two lines being regarded as in the same cluster if they touch a common hexagon – and $`\mathrm{\#}`$ defined as taking the value $`0`$ if the $`\ell `$ lines leave room for another curve to traverse the annulus and $`1`$ otherwise. The amplitudes then read
$$G_\ell ^{O(N=1)}(r,R)=\sum _{\gamma _1,\ldots ,\gamma _\ell }\omega (\gamma _1,\ldots ,\gamma _\ell ),$$
(13)
the sum running over sets of $`\ell `$ nonoverlapping lines which traverse $`D(r,R)`$. It can be shown that the probabilities and amplitudes agree to leading order:
$$P_\ell ^{O(N=1)}(r,R)=\left(1+o(\frac{r}{R})\right)G_\ell ^{O(N=1)}(r,R).$$
(14)
“Coulomb gas” and Bethe Ansatz methods yield the conclusion that the loop model amplitudes, and thus also the probabilities, decay by power laws, $`G_\ell ^{O(1)}(r,R)\approx (r/R)^{x_\ell ^{O(1)}}`$ and $`\stackrel{~}{G}_\ell ^{O(1)}(r,R)\approx (r/R)^{\stackrel{~}{x}_\ell ^{O(1)}}`$, with the exponents taking the values given in Eq. (5) and Eq. (8). Our results rest now on the fact that the $`O(N=1)`$ line probabilities are of the same order of magnitude as the path crossing probabilities. Their compatibility is expressed in the following statement.
Proposition In the site percolation model on the triangular lattice:
1) For any “color sequence” $`\{\tau _j=\pm \}_{j=1}^\ell `$ which includes at least one of each kind ($`+`$ and $`-`$),
$$P_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )\asymp P_\ell ^{O(N=1)}(r,R),$$
(15)
where $`A\asymp B`$ means that there are constants $`0<c_1,c_2<\mathrm{\infty }`$ with which $`c_1A\le B\le c_2A`$ uniformly in $`r`$ and $`R`$.
2) The surface probabilities satisfy
$$\stackrel{~}{P}_\ell ^𝒫(r,R;\tau _1,\ldots ,\tau _\ell )=\stackrel{~}{P}_{\ell +1}^{O(N=1)}(r,R),$$
(16)
without any restriction on the color sequence $`\tau `$.
Let us outline here the proof, whose details will be spelled out in ref. . The simplest case of the above relation is the example of the half-disk amplitude with alternating color paths (as in Fig. 1), which corresponds to $`\stackrel{~}{P}_\ell ^𝒫(r,R;+,-,+,-,\ldots )`$. Equation (16) holds there since the statistics of the boundary lines is given exactly by the $`O(N=1)`$ loop model. The result is then extended by establishing independence of the color sequence. This is done by successively conditioning on a suitable “rightmost path” and flipping the site variables to the left of the line. Thus use is made of the Markov property combined with the spin-flip symmetry, which are enabled by the independence and the self-duality of the site percolation model on the triangular lattice. The argument is a bit more involved in the case of the full disk. There we need to have at least one traversing boundary line, which we employ to slit the annulus. The previous argument is then applied to the resulting simply connected domain. Equation (15) reflects the fact that the overcounting involved in the selection of the slit is by at most a finite factor.
As noted in , it is possible to obtain some selected path exponents by direct arguments. The values agree with the formulas given above. It is instructive to list specific values of $`x_\ell ^𝒫=(\ell ^2-1)/12`$:
$`\ell =2`$: $`x_2^𝒫`$ yields $`D_H=D_\theta =7/4`$.
$`\ell =3`$: $`x_3^𝒫`$ yields $`D_{EP}=D_{SAW}=4/3`$.
$`\ell =4`$: $`x_4^𝒫`$ yields $`D_{SC}=\nu ^{-1}=3/4`$.
$`\ell =5`$: $`x_5^𝒫=2`$ can be derived directly.
$`\ell =6`$: $`x_6^𝒫>2`$ implies that the EP is self–avoiding on the large scale ($`D_G<0`$); see the sketch below.
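The values in this list follow from Eqs. (5) and (8) by elementary arithmetic; a minimal sketch (exact rational arithmetic) tabulating them:

```python
from fractions import Fraction

def x_bulk(l):        # x_l = (l^2 - 1)/12, Eq. (5)
    return Fraction(l * l - 1, 12)

def x_surface(l):     # surface exponent l(l + 1)/6, Eq. (8)
    return Fraction(l * (l + 1), 6)

for l in range(2, 7):
    print(f"l={l}: x={x_bulk(l)}, 2-x={2 - x_bulk(l)}, x_surf={x_surface(l)}")
# l=2: 2-x = 7/4 (hull);   l=3: 4/3 (external perimeter);
# l=4: 3/4 (singly connecting bonds);   l=6: 2 - 35/12 = -11/12 < 0 (gates)
```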
The relation (5) was not claimed for $`\ell =1`$, or for paths of a single color ($`\widehat{x}_\ell ^𝒫`$). Concerning this let us note:
– It can be shown directly that $`x_1^{O(N=1)}=0`$ while $`\widehat{x}_1^𝒫>0`$ . The path exponent is related to the cluster dimension; its value appears to be $`\widehat{x}_1^𝒫=5/48`$ .
– The case of $`\mathrm{}=2`$, with two paths of the same color, is of special interest since it relates to the backbone dimension. Numerically, $`\widehat{x}_2^𝒫=0.3568\pm 0.0008`$ .
For the surface exponents, $`\stackrel{~}{x}_\ell ^𝒫=(\ell +1)\ell /6`$ of Eq. (8), we note that for $`\ell \to \mathrm{\infty }`$, $`\stackrel{~}{x}_\ell ^𝒫/x_\ell ^𝒫\to 2`$, as it should; and $`\ell =1`$: $`\stackrel{~}{x}_1^𝒫=1/3`$ is consistent with Cardy’s equation for the crossing probability .
$`\ell =2`$: $`\stackrel{~}{x}_2^𝒫=1`$ can be derived directly.
$`\ell =3`$: $`\stackrel{~}{x}_3^𝒫=2`$ is also directly derivable.
The last one is related to a slit-disc exponent which is attributed to J. van den Berg in Ref. .
Finally, we note that the SD formalism also yields predictions for the hull dimensions of Fortuin-Kasteleyn random clusters, describing the $`Q`$-state Potts model. These were recently confirmed in numerical simulations by Hovi and Mandelbrot . In contrast, the values found in that work for the external perimeters do not agree with the generalizations of the SD formulas to odd $`\ell `$. The results presented here were derived only for site percolation. It would be interesting to see generalizations.
This paper is dedicated to the memory of Tal Grossman. The work was started and carried out while the authors enjoyed the gracious hospitality of the Institut Henri Poincaré, the Institute for Advanced Studies (MA and BD), and of Tel Aviv University (MA). It was supported in part by the NSF Grant PHY-9512729 (MA), a grant from the German Israeli Foundation (AA), and by a grant to the IAS from the NEC Research Institute.
Nowadays it is well known that the Lorenz model is a paradigm for low-dimensional chaos in dynamical systems and synergetics, and this model or its modifications are widely investigated in connection with modelling purposes in meteorology, hydrodynamics, laser physics, superconductivity, electronics, the oil industry etc.; see, e.g., refs. 1-15 and references therein. From the mathematical point of view, the Lorenz model is a system of nonlinear equations. Needless to say, in general it is virtually impossible to find closed analytical solutions to most nonlinear equations. So one should take advantage of asymptotic approaches or resort to numerical simulations, which are not comprehensive for multi-parameter systems. In this paper we apply the asymptotic method for singularly perturbed nonlinear systems (ref. 16 and references therein) to the Lorenz model. Earlier this method was applied by the author of this paper in refs. 17-22 and references therein.
The system under study is of the form:
$$\frac{dx}{dt}=\sigma (y-x),$$
$$\frac{dy}{dt}=rx-y-xz,(1)$$
$$\frac{dz}{dt}=xy-bz,$$
where $`x`$, $`y`$ and $`z`$ are dynamical variables; $`\sigma `$, $`r`$ and $`b`$ are the parameters of the system (1). In general the initial conditions are $`x(t=0)=x(0)`$, $`y(t=0)=y(0)`$, $`z(t=0)=z(0)`$.
$$\sigma ^{-1}\frac{dx}{dt}=y-x,(2)$$
According to the theory, in the zeroth order of $`\sigma ^{-1}`$ the solution of the system in the larger time domain is determined by the so-called reduced system:
$$y(t)=x(t),$$
$$\frac{dy}{dt}=rx-y-xz,(3)$$
$$\frac{dz}{dt}=xy-bz,$$
with the initial conditions $`y(0),z(0)`$. In order to find the solution of the system (1) in the small time domain (in the so-called boundary layer) we should pass to the “new” time $`\tau =\sigma t`$; in other words, the neighbourhood of $`t=0`$ is to be seen through a microscope (as it is magnified $`\sigma `$ times). After this operation, for the solution of the system (1) in the boundary layer in the zeroth order of $`\sigma ^{-1}`$ we easily obtain:
$$y(t)=y(0),z(t)=z(0),$$
$$x(t)=x(0)\mathrm{exp}(-\sigma t)+y(0)(1-\mathrm{exp}(-\sigma t)),(4)$$
The solution of the system (1) in the whole time domain is to be constructed by linking (4) and the solution to the system (3). The system (3) is another nonlinear system to be studied by the asymptotic method in question. For this purpose we make the following transformations in the system (3): $`y=b^{\frac{1}{2}}y_1`$, $`z=z_1`$, and after that rewrite the third equation of the nonlinear system (3) in the following form:
$$b^{-1}\frac{dz_1}{dt}=y_1^2-z_1,(5)$$
In other words, system (3) will be studied on the condition that $`b^{-1}`$ is the small parameter. Acting as in the case of the initial system (1), in the zeroth order of $`b^{-1}`$ we easily obtain the solution of the system (3) in the whole time domain:
$$y(t)=b^{\frac{1}{2}}y_1,$$
$$z(t)=y_1^2+(z_1(0)-y_1^2(0))\mathrm{exp}(-bt),(6)$$
If $`r`$ is not equal to unity, then
$$y_1^2(t)=(r-1)\mathrm{exp}(2(r-1)t)(A+\mathrm{exp}(2(r-1)t))^{-1},(7)$$
If $`r=1`$ then
$$y_1^2(t)=(y_1^{-2}(0)+2t)^{-1},(8)$$
In formulae (7) and (8)
$$A=((r-1)-y_1^2(0))y_1^{-2}(0),y_1(0)=b^{-\frac{1}{2}}y(0),(9)$$
Thus for the solution of the initial system (1), in the zeroth order of $`\sigma ^{-1}`$ and $`b^{-1}`$, in the whole time domain we obtain:
$$x(t)=(x(0)-y(0))\mathrm{exp}(-\sigma t)+y_1(t)b^{\frac{1}{2}},$$
$$z(t)=y_1^2(t)+(z_1(0)-y_1^2(0))\mathrm{exp}(-bt),$$
$$y(t)=y_1(t)b^{\frac{1}{2}},(10)$$
As the analysis of equations (10) shows, the characteristic time $`t^{charact}`$ for $`y(t)`$ to change from $`y(0)`$ to the stationary value $`y^{stat}`$ is of the order of $`|r-1|^{-1}`$ if $`r`$ is not equal to unity; if $`r=1`$, then $`t^{charact}\sim y_1^{-2}(0)`$. The change of $`z(t)`$ from $`z(0)`$ to $`z^{stat}`$ occurs through the intermediate quasistationary state $`z^{qstat}=y_1^2(0)`$, with the time required to achieve this state $`t^{qstat}=b^{-1}`$; the transition from $`z^{qstat}`$ to $`z^{stat}`$ takes an amount of time equal to $`t^{charact}`$. The change of $`x(t)`$ from $`x(0)`$ to $`x^{stat}`$ follows the same scenario as for $`z(t)`$; the only difference is that the intermediate state for $`x(t)`$ is $`y(0)`$, with the transition time (from $`x(0)`$) $`t^{tr}=\sigma ^{-1}`$.
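A minimal numerical check of formulae (7), (9) and (10) against a direct integration of the system (1) can be written in a few lines. The parameter values and initial data below are illustrative assumptions of ours (with $`\sigma ^{-1}`$ and $`b^{-1}`$ small and $`r`$ below the critical value), not taken from the original work:

```python
# Compare the zeroth-order asymptotic solution (10) with a direct
# integration of the Lorenz system (1); deviations should be O(1/sigma, 1/b).
import numpy as np
from scipy.integrate import solve_ivp

sigma, b, r = 50.0, 40.0, 5.0          # illustrative values
x0, y0, z0 = 2.0, 1.0, 0.5

def lorenz(t, u):
    x, y, z = u
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t = np.linspace(0.0, 3.0, 2000)
sol = solve_ivp(lorenz, (0.0, 3.0), [x0, y0, z0], t_eval=t,
                method="LSODA", rtol=1e-10, atol=1e-12)

y1_0 = y0 / np.sqrt(b)                                             # eq. (9)
A = ((r - 1.0) - y1_0**2) / y1_0**2
y1sq = (r - 1) * np.exp(2*(r - 1)*t) / (A + np.exp(2*(r - 1)*t))   # eq. (7)
y_as = np.sqrt(b * y1sq)                                           # eq. (10)
z_as = y1sq + (z0 - y1_0**2) * np.exp(-b * t)
x_as = (x0 - y0) * np.exp(-sigma * t) + y_as

print("max |y - y_asym| / y_stat :",
      np.max(np.abs(sol.y[1] - y_as)) / np.sqrt(b * (r - 1)))
print("y(final) =", sol.y[1][-1], " vs sqrt(b(r-1)) =", np.sqrt(b * (r - 1)))
```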
In this paper we restricted ourselves to the zeroth-order approximation. In the higher-order approach we encounter equations that are hard to treat analytically. The degree of adequacy of our formulae can be checked by comparison with the behavior of the initial Lorenz model when the independent variable $`t`$ goes to infinity. Before comparing, one should make clear that the asymptotic theory is not applicable when the nonlinear system develops full instability (ref. 4). This is the case for the Lorenz model when, for the given values of $`\sigma `$ and $`b`$, the value of $`r`$ exceeds the so-called critical value (onset of chaotic behavior):
$$r_{cr}=(3+b+\sigma )\sigma (\sigma -1-b)^{-1},(11)$$
At $`r>r_{cr}`$ the non-zero fixed points (or steady states) of the Lorenz system
$$x^{stat}=y^{stat}=\pm (b(r-1))^{\frac{1}{2}},$$
$$z^{stat}=r-1,(12)$$
become unstable, and there is a strange attractor over which chaotic motion takes place. For $`\sigma =10`$, $`b=\frac{8}{3}`$ the critical number is equal to $`r_{cr}=24.74`$. It is also known that a partial loss of stability in the Lorenz model occurs when $`r>1`$: at this value of $`r`$ the trivial steady state loses its stability (ref. 4).
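A one-line arithmetic check of (11) for this standard parameter choice:

```python
# Critical Rayleigh number of eq. (11) for sigma = 10, b = 8/3.
sigma, b = 10.0, 8.0 / 3.0
r_cr = (3.0 + b + sigma) * sigma / (sigma - 1.0 - b)
print(round(r_cr, 2))   # 24.74, the value quoted above
```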
The analysis of our formulae shows that indeed when $`r>1`$ the system goes to the nontrivial steady state. In the contrary case the trivial steady state is obtained. Of course, the results presented in this work are the simplest ones for the Lorenz model, which is capable of exhibiting highly complicated behavior, but they are adequate at least for some extreme cases.
In conclusion, in this work we have investigated the Lorenz model of synergetics with the asymptotic method for singularly perturbed nonlinear systems in some limiting cases. The times for achieving the quasistationary and stationary states are estimated.
The author thanks the JSPS for the Fellowship.
## 1 Introduction
The purpose of the present paper is to show that the Dynamical String Model offers an alternative approach to describe ultrarelativistic heavy-ion collisions at bombarding energies of a few hundred GeV/nucleon in terms of extended objects, the so-called hadronic strings.
The existing event generators for high-energy hadronization processes can be classified as follows:
1. Event generators for high-precision description of elementary hadronization processes (including only lepton and proton beams) in vacuum: PYTHIA , HERWIG , ARIADNE , LEPTO , ISAJET . These event generators combine the parton-shower evolution in perturbative QCD terms with non-perturbative hadronization prescription to convert final partonic distributions into hadronic ones. For the latter the LUND string fragmentation model is commonly used with the exception of HERWIG, which uses other considerations for coalescing coloured partons to colour-neutral clusters and fragmenting those into hadrons. The common feature of this class of models is the lack of space-time evolution. Therefore, these models cannot be directly applied for describing hadronization at finite densities, e.g. for that of high-energy hadronization processes involving also nuclei.
2. Event generators for the description of hadronization at finite densities, e.g. high-energy heavy-ion collisions: FRITIOF , PCM , DPM , VENUS , QGSM , RQMD , UrQMD , HSD , HIJET , HIJING , Two-phase simulation of ultrarelativistic nuclear collisions .
These models provide a space-time description (except HIJING) of the hadronization process considered. In a few of them the partonic degrees of freedom are included also in the early stage of the collision in some form. Hadrons are generally treated as point particles with interaction ranges prescribed on the basis of the constituent valence-quark picture. As a rule, string excitation is included (with the exception of ) in various forms and the LUND string fragmentation model is used. Generally, string-like excited hadronic states do not propagate and collide with other hadrons in the surrounding medium. Strings are rather a clever bookkeeping tool for how highly excited hadrons fragment into hadrons of the discrete mass spectrum.
3. Models which intend to provide a space-time description for both the high-energy elementary hadronization processes in vacuum and the more involved hadronization processes at finite densities like those including nuclei, e.g. VNI . VNI gives a full space-time picture of ultrarelativistic heavy-ion collisions, combining the space-time evolution of the parton-shower in its early stage with the later hadronic cascade. The space-time picture of the parton-cascade has also been used in the parton-hadron conversion, based on the ideas introduced in . There are no strings in this model, the hadrons are considered as point particles with finite interaction ranges, as usual.
The Dynamical String Model presented here belongs to the third class of models providing a full space-time description. It can be applied to elementary hadronization processes in vacuum and to hadronization processes at finite densities involving nuclei as well. It is the basic feature of the Dynamical String Model that during the whole space-time evolution of an event all the hadrons are consistently considered as extended, string-like objects satisfying the particular laws of string dynamics. In contrast to other models on the market, the string picture is not merely used as a fragmentation model of excited hadrons. It is taken here as the model for hadron dynamics, according to which the laws of motion, decay, and collision of hadrons are determined.
In contrast to the models including a parton shower for the early stage of the evolution, the Dynamical String Model considers the string-like collective excitations of the hadrons to be decisive for the evolution of the ultrarelativistic heavy-ion collision, neglecting completely the underlying partonic processes. The good qualitative description of ultrarelativistic heavy-ion collisions for CERN SPS energies of a few hundred GeV/n obtained in the present paper indicates that the overall qualitative features of the fragment distributions and multiplicities may not be sufficient to clarify the interplay of the string-like collective degrees of freedom and that of the partonic ones.
In the Dynamical String Model all kinds of broken line string excitations are taken into account, whereas in other existing models string like excitations are basically longitudinal, yo-yo like as far as no gluon jets (or minijets) are included.
The Dynamical String Model has rather few parameters compared to other existing models. That is an advantage, but on the other hand one cannot expect that the model in this form can provide more than an overall qualitative description of ultrarelativistic heavy-ion collisions.
In Sect. 2 we give a description of the Dynamical String Model, and in Sect. 3 the model is applied to the ultrarelativistic heavy-ion collision <sup>32</sup>S(200 GeV/n)+<sup>32</sup>S.
## 2 Dynamical String Model
### 2.1 Motivation
The underlying idea of the Dynamical String Model is that hadrons can be represented by classical one-dimensional objects, the oriented relativistic open bosonic strings, as suggested by Artru and Remler . There is experimental evidence that hadrons have string-like collective degrees of freedom: (i) the well-known, almost linear Regge-trajectories corresponding to the string tension of $`\kappa \approx 0.9`$ GeV/fm, (ii) the nearly exponential mass spectrum of the resolved hadron resonances ; (iii) the existence of a preferred (longitudinal) direction in elementary fragmentation processes; (iv) the emission of linearly polarized gluons by the excited hadronic system occurring in high-energy pp collisions . Theoretical indications and successful applications of the string model for hadronic physics are overviewed in . For our work the success of the string fragmentation models developed by Artru and Mennessier and by the Lund group was particularly encouraging. The oriented relativistic open string is thought of as the idealization of the chromoelectric flux tube with quark and antiquark (diquark) ends for mesons (baryons). The endpoints are assumed to have vanishing rest masses. The original idea is then modified: the infinitesimally thin strings have been replaced with more realistic thick ones, i.e. with strings exhibiting a finite transverse size, more precisely a radius $`R`$. The hadronic strings introduced in this manner are treated afterwards in a fully dynamical way in our model. They propagate, collide and decay according to the particular laws deduced from the string picture and from the analogy of hadronic strings with chromoelectric flux tubes, as described below.
Energy and momentum conservations are strictly satisfied in any elementary decay and collision event and in the evolution of the whole hadronic system, as well. No spin is introduced and the angular momentum conservation is not considered.
### 2.2 Mass spectrum
Our starting point is the classical Nambu-Goto string . The classical mechanical string picture provides us with a continuous mass spectrum. All kinds of broken line string configurations can arise during the evolution of any system of hadronic strings due to inelastic string collisions. Furthermore, it has also been shown that those broken line string configurations with arbitrary number of kinks are unavoidable to obtain a realistic exponential mass spectrum of hadronic strings . A finite amount of momentum (and energy) can be carried by the kinks and by the string endpoints as well.
In order to be more realistic, below the mass thresholds of 1.5 GeV for baryons and 1.0 GeV for mesons only strings with discrete rest masses taken from are allowed in the model. Strings in the rotating rod mode are associated to the discrete hadronic states which correspond to the leading Regge-trajectories . It holds $`2M=\kappa \pi \mathrm{}`$ for their lengths $`\mathrm{}`$ and rest masses $`M`$, leading e.g. to a length of about 0.7 fm for the nucleon and about 0.1 fm for the pion. Particles containing strange valence quarks are completely neglected.
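The quoted lengths follow directly from the rotating-rod relation; a quick sketch (the rest masses inserted below are standard values assumed by us):

```python
# Rotating-rod relation 2M = kappa * pi * l  =>  l = 2M / (kappa * pi),
# with kappa = 0.9 GeV/fm and natural units (hbar = c = 1).
import math

kappa = 0.9  # GeV/fm
for name, M in (("nucleon", 0.94), ("pion", 0.14)):   # rest masses in GeV
    print(name, round(2.0 * M / (kappa * math.pi), 2), "fm")
# -> about 0.66 fm and 0.10 fm, i.e. the ~0.7 fm and ~0.1 fm quoted above
```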
The string endpoints carry the appropriate baryonic charges and baryon number conservation is satisfied. Spin of the hadronic strings, electric charges and flavours of the string endpoints have not been introduced. On the other hand, the degeneracies of the discrete resonance states due to their spin and isospin are taken into account.
### 2.3 Free motion
Any string configuration can be encoded in the trajectory of one of the endpoints of the string, in the so-called directrix , and boosted to any requested velocity as described in in detail. The directrix determines the string configuration at a given time and also its free evolution according to the Nambu-Goto action. Any influence of the assumed transverse size of the hadronic strings on their free motion is neglected. During the evolution of the investigated system each hadronic string is assumed to move freely between the subsequent elementary interaction events (decays and collisions) of its life.
In the numerical code the directrix is stored in less than 200 points, with typically 0.1 GeV rest mass for every linear segment of the broken line string. Whenever it is needed for describing single decay and collision events, the string can be constructed from its directrix unambiguously . The string endpoints generally carry a finite amount of momentum, and are described by two string points at the same spatial position, but corresponding to different values of the string parameter .
In order to simulate any individual elementary string interaction event, the participating strings are reconstructed numerically from their directrices. After carrying out a single decay or collision event the final state strings must be converted back into their directrices. Generally, the conversion directrix $`\to `$ string $`\to `$ directrix leads to a doubling of the directrix points, with many redundant ones that have to be removed by a reduction algorithm. If the endpoints of two neighbouring directrix segments are almost on a straight line (i.e. the common point of both segments is rather close to one of the endpoints, or to the straight line connecting the endpoints), the segments are replaced by a single linear directrix segment under the constraint that energy and momentum must be conserved.
### 2.4 String collision
In the Dynamical String Model the collision is introduced as a binary interaction of strings. In order to obtain realistic total cross sections for the string-string collisions, a finite transverse size or radius $`R`$ has to be prescribed to the hadronic strings as already established in . This radius is chosen to be identical for all hadronic strings and it is also assumed not to be Lorentz contracted . Strings coming in touch, and remaining after a critical collision time $`\tau _c`$ still closer than their interaction range $`2R`$, interact. The total cross section is assumed to have a purely geometrical origin. Elastic and inelastic string-string collisions are distinguished, based also on geometrical concepts. An inelastic interaction range $`R^{}<R`$ is defined. Strings being closer than $`2R^{}`$ after the time $`\tau _c`$ elapsed since they came in touch, suffer inelastic collision, whereas the peripheral collisions are considered to be elastic ones. If the strings came in touch but after the time $`\tau _c`$ they are at a distance larger than $`2R`$, they do not interact.
The differentiation between elastic and inelastic processes described above was tested by numerical simulation of proton-proton ($`pp`$) collisions in the energy range $`\sqrt{s}=3-30`$ GeV (see Fig. 1). For determining the total and elastic $`pp`$ cross sections, $`10^4`$ collision events were numerically simulated by shooting a projectile proton ($`N_p=1`$) on a target proton ($`N_t=1`$) at rest with a random impact parameter. That is to say, the center of the target proton was positioned on the beam axis, and the centers of the projectiles were uniformly distributed on a disk of radius $`\rho =1`$ fm centered on the beam axis in the transverse plane. The initial states of the protons were represented by rotating rod modes. The orientations of the projectile and the target protons were uniformly distributed in the entire solid angle. Projectile protons were produced at a longitudinal distance 3 fm from the target. The simulations were performed by using the time steps $`\mathrm{\Delta }t=0.02`$ fm/c. The numbers $`N_{tot}`$ and $`N_{el}`$ of events with interaction and with elastic collision, respectively, were counted and converted to the corresponding total and elastic cross sections, $`\sigma _{tot}=\rho ^2\pi N_{tot}/(N_tN_pv_p)`$ and $`\sigma _{el}=\rho ^2\pi N_{el}/(N_tN_pv_p)`$ with the projectile velocity $`v_p`$.
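This counting procedure is essentially a hit-or-miss Monte Carlo estimate of the cross sections. A minimal sketch of such an estimator (our illustration, not the authors' code; the flux normalization factors of the quoted formula are omitted, and the event classifier stands in for the full string-string simulation):

```python
# Hit-or-miss cross-section estimate: impact points are thrown uniformly
# on a disk of radius rho; the disk area times the hit fraction gives sigma.
import numpy as np

rng = np.random.default_rng(1)

def mc_cross_sections(classify_event, rho=1.0, n_events=10_000):
    """classify_event(b) -> 'none' | 'elastic' | 'inelastic' for a 2D
    impact point b drawn uniformly on a disk of radius rho (in fm)."""
    r = rho * np.sqrt(rng.random(n_events))        # uniform over the disk
    phi = 2.0 * np.pi * rng.random(n_events)
    labels = [classify_event((ri * np.cos(p), ri * np.sin(p)))
              for ri, p in zip(r, phi)]
    area = np.pi * rho ** 2                        # fm^2; 1 fm^2 = 10 mb
    sigma_tot = area * sum(l != "none" for l in labels) / n_events
    sigma_el = area * sum(l == "elastic" for l in labels) / n_events
    return sigma_tot, sigma_el
```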
It is more involved (and also more time consuming) to calculate the distance of two strings than to determine the distance of two point particles. The distance $`d`$ of two colliding strings is defined as the minimal distance of the points of a projectile string $`a`$ and those of the target string $`b`$. The distance $`d`$ is monitored for each string pair $`a`$ and $`b`$ in every time step as follows.
The distance between strings $`a`$ and $`b`$ can be estimated as $`d_{\mathrm{est}}=|\stackrel{}{x}_a-\stackrel{}{x}_b|-\frac{1}{2}(l_a+l_b)`$, where $`\stackrel{}{x}_{a,b}`$ are the centers of mass and $`l_{a,b}`$ are the lengths of the strings $`a`$ and $`b`$, respectively. If $`d_{\mathrm{est}}`$ is larger than the string interaction range $`2R`$, then $`d_{\mathrm{est}}`$ is taken for the distance of the string pair. Otherwise, the real distance $`d`$ is computed. If once the real distance $`d`$ is calculated for a string pair and turned out to be larger than the interaction range, $`d>2R`$, it is not checked again up to the time $`t=\frac{d-2R}{2c}`$ with $`c`$ the speed of light.
After the strings came in touch, i.e. their distance became $`d(t_0)2R`$ at time $`t_0`$, they can interact. The decisions on the interaction and on the interaction channel (if any) are taken after a critical time $`\tau _c`$ elapsed. There is no interaction if the distance of the strings $`d(t_0+\tau _c)>2R`$, while an elastic collision or an inelastic collision takes place if $`2R>d(t_0+\tau _c)>2R^{}`$ or $`d(t_0+\tau _c)<2R^{}`$, respectively.
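The decision logic just described can be summarized in a few lines (a sketch of our own, not the authors' code; the default radii use the fitted values $`R\approx 0.6`$ fm and $`R^{}=0.7R`$ quoted below):

```python
# Interaction decision for a string pair, a time tau_c after first touching.
import numpy as np

def collision_channel(d_after_tau_c, R=0.6, R_inel=0.42):
    """Return 'none', 'elastic' or 'inelastic' from the minimal
    string-string distance d at time t0 + tau_c (lengths in fm)."""
    if d_after_tau_c > 2.0 * R:
        return "none"        # the strings separated again
    if d_after_tau_c > 2.0 * R_inel:
        return "elastic"     # peripheral: 2R' < d < 2R
    return "inelastic"       # central: d < 2R', strings are rearranged

def distance_estimate(x_a, x_b, l_a, l_b):
    """Cheap lower bound d_est = |x_a - x_b| - (l_a + l_b)/2 used to skip
    the expensive exact string-string distance whenever d_est > 2R."""
    return np.linalg.norm(np.asarray(x_a) - np.asarray(x_b)) - 0.5 * (l_a + l_b)
```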
It was concluded that the elastic and inelastic cross sections shown in Fig. 1 are in qualitative agreement with experimental data for the interaction ranges $`R\approx 0.6`$ fm and $`R^{}=0.7R\approx 0.4`$ fm and the interaction time $`\tau _c=0.4`$ fm/c. The ratio of the simulated elastic and total cross sections, and the energy dependence of the elastic cross section, are rather sensitive to the ratio $`R^{}/R`$ and to $`\tau _c`$. The simulated results are consistent with the following order of magnitude estimates valid for large values of the projectile momentum: $`\sigma _{tot}\approx 4R^2\pi \approx 45`$ mb and $`\sigma _{el}\approx 4(R-R^{})^2\pi \approx 5`$ mb. These estimates do not take into account the orientations of the strings. There is a difference between the results obtained here ($`\sigma _{tot}=45`$ mb for $`R=0.6`$ fm) and in (the same $`\sigma _{tot}`$ for $`R=0.45`$ fm). It is the consequence of introducing the interaction time $`\tau _c`$. Strings overlapping only at their ends for a short time may leave their interaction range during the time $`\tau _c`$ after having come in touch.
In the Dynamical String Model the distance of any pair of strings must be determined in every time step with the algorithm described above. A great amount of computational time can be spared in the simulation of heavy-ion collisions by determining the estimate $`d_{\mathrm{est}}`$ and not calculating the actual distance for $`d_{\mathrm{est}}>2R`$. Once in the simulation of a heavy-ion collision event a string pair came in touch and the time $`\tau _c`$ is over, the channel of the collision (inelastic, elastic, or no interaction) is decided as described above and the appropriate final state in the actual collision channel is generated.
In the simulations presented here the elastic channel is introduced only to reduce the inelastic fraction of the total cross section, but the strings that suffered an elastic collision were let to move further as if nothing had happened.
The inelastic collisions are considered as rearrangements . The rearrangement of infinitely thin colliding strings is a simple cut followed by the reconnection of the string arms at the point of intersection. The order of the reconnection is always unique, as the strings are oriented objects. In the numerical code the rearrangement is carried out in the time step when the interaction time $`\tau _c`$ after the strings came in touch is over, and the criterion $`d<2R^{}`$ is fulfilled. Then the points of the minimal distance define the points where the strings are disjoined and reconnected once again. The reconnection is performed by displacing the appropriate string pieces. Energy and momentum are conserved automatically, and the center of energy of the string pair is conserved by displacing the string pair as a whole appropriately. Owing to the interaction time $`\tau _c`$ the new strings generally can move away, so that any undesired infinite sequence of interactions is avoided.
According to the above prescription of rearrangement for string with continuous mass spectrum, it may happen that one or both of the final state strings would have rest masses below the mass thresholds. Similar final states could also arise as the result of decay. Their treatment shall be discussed later in detail.
Collision of discrete resonances or that of a discrete resonance with a string of the continuous mass spectrum are treated according to the same rules as the collisions of the strings of the continuous mass spectrum, as the resonances are represented by strings in the rotating rod mode.
### 2.5 Decay
The decay law for relativistic strings belonging to the continuous mass spectrum is given by $`dw=\mathrm{\Lambda }dA`$, i.e. the probability $`dw`$ that the string piece having swept the invariant area $`dA`$ breaks is proportional to that area, with the decay constant $`\mathrm{\Lambda }`$ . Making use of the analogy of the hadronic string with the chromoelectric flux tube, the decay is considered as the result of the production of a quark-antiquark pair via tunneling effect in the strong chromoelectric field of the flux tube . Then the decay constant $`\mathrm{\Lambda }=R^2\pi w(R)`$ can be expressed in terms of the quark-antiquark pair production rate $`w(R)`$ depending on the radius of the flux tube . The created quark and antiquark acquire oppositely directed transverse momenta with an approximately Gaussian distribution $`dP(p_T)\propto \mathrm{exp}(-p_T^2/2p_{T0}^2)dp_T^2`$ $`(p_{T0}\approx 1.43/R)`$ deduced from the analogy with flux tubes .
The decay of strings with rest masses above the threshold is simulated as follows. For any string created, an invariant area $`A_0`$ is chosen according to the distribution $`\mathrm{exp}(-\mathrm{\Lambda }A_0)`$. The increment of the invariant area swept by the string is calculated in every time step as the sum of area elements of the linear string segments. In the time step when the invariant area swept by the string exceeds the value $`A_0`$, the string is broken up without any time delay. The decay is performed in the segment for which the probability of decay has a maximum for that time step. The transverse momenta of the new string ends are chosen with the distribution $`dP(p_T)`$, and with uniform distribution in the plane transverse to the decaying string piece in its rest frame. A piece of the string is removed around the breaking point that is required to satisfy energy and momentum conservation.
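The two stochastic ingredients of this step are easy to sample; a minimal sketch (our illustration, not the authors' code, in the natural units of the quoted relation $`p_{T0}\approx 1.43/R`$):

```python
# Sampling the break-up area, P(A0) ~ exp(-Lambda * A0), and the transverse
# momentum of the new string ends, dP ~ exp(-pT^2/(2 pT0^2)) d(pT^2).
import numpy as np

rng = np.random.default_rng(0)

def sample_break_area(Lambda):
    """Invariant area A0 at which the next break-up is triggered."""
    return rng.exponential(1.0 / Lambda)

def sample_pT(R=0.6):
    """|p_T| given to the created quark and antiquark (back to back)."""
    pT0 = 1.43 / R                          # as quoted above, R in fm
    pT2 = rng.exponential(2.0 * pT0 ** 2)   # pT^2 is exponentially distributed
    return np.sqrt(pT2)
```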
The decay of discrete resonances is not considered like a string decay. Their lifetimes and decay channels are taken from . According to the exponential decay law of point particles a time $`T_0`$ is chosen for every resonance created, and having that elapsed its decay is performed.
The decay of discrete resonances can result in two or three daughters. For decay into two daughters the magnitudes of their momenta are well-defined, and the direction of the momenta is chosen isotropically in the rest frame of the mother resonance. For decay into three daughters it is assumed that the momenta of the daughters lie in a plane of randomly chosen orientation in the rest frame of the mother resonance, the momenta are of equal magnitude and each neighbouring pair closes the angle 120<sup>o</sup> in the rest frame of the mother. The direction of one of the three momenta are chosen isotropically in the plane. The magnitudes of the momenta determined under these assumptions from energy conservation are in agreement with the average momenta indicated in . Finally, the daughter resonances are represented by the appropriate rotating rods, displaced out of their interaction range in the directions of their momenta.
### 2.6 Final states with discrete resonances
Both rearrangement and decay of strings belonging to the continuous mass spectrum can result in two-particle final states with one or both strings below the mass threshold. These cases are treated in the model in different ways.
1. The case with two rest masses below the threshold is considered as a final state with two discrete resonances. The pair of resonances is chosen randomly according to the degeneracies of the resonances, under the restriction that the sum of the rest masses of the resonance pair must not exceed the invariant mass of the colliding string pair (of the mother string). Then the momenta of the resonances are chosen randomly with isotropic orientation in the rest frame of the pair, satisfying energy and momentum conservation. Finally, the resonances are represented by rotating rods with the appropriate rest masses and momenta and displaced in the direction of their momenta out of their interaction range conserving the center of energy of the pair.
2. The case with one rest mass $`m_r`$ below the threshold is considered as a final state with one discrete resonance and a string belonging to the continuous part of the mass spectrum. The resonance is chosen randomly taking the degeneracies into account, under the restriction that the sum of the rest masses of the final state particles must not exceed the invariant mass of the initial strings (of the mother string). Further on, one has to proceed differently for rearrangement and for decay.
1. For rearrangement: One has to distinguish the cases with a mesonic or baryonic string occurring below the corresponding mass threshold.
1. Meson below the mass threshold. If the rest mass $`m_r`$ below the mesonic threshold $`M_M`$ turned out to be smaller than the pion mass, $`m_r<m_\pi `$, the colliding strings are considered to fuse in a single one. If $`m_\pi <m_r<M_M`$, the resonance with rest mass $`M_r<m_r`$ but closest to $`m_r`$ is chosen with the same momentum $`\stackrel{}{P}_r`$ what the string with rest mass $`m_r`$ would have had. The other string is slightly modified by chopping off its wedge at the point of reconnection and inserting a linear segment of vanishing momentum with rest mass $`m_rM_r`$.
2. Baryon below the mass threshold. Then the possibility of the fusion of both colliding strings is excluded in order to avoid exotic many quark states. Therefore, even if $`m_r`$ is smaller than the nucleon mass, $`m_r<m_N`$, the proton is chosen for the discrete state. The construction of the final state is performed similarly to that for a discrete meson and a string. The mass difference $`m_Nm_r`$, however, is now taken away from the other string by chopping off its wedge and displacing its arms to bring them in connection at their new endpoints. Otherwise, for $`m_N<m_r<M_B`$ (with the baryonic threshold $`M_B`$) the final state is constructed in the same way as for a discrete meson state and a string.
2. For decay: The smallest possible piece at the end of the continuous string is chopped off that is required to satisfy energy and momentum conservation for the final state, when the resonance and the new string endpoint acquire the transverse momenta $`\stackrel{}{p}_T`$ and $`-\stackrel{}{p}_T`$, resp. The transverse momenta are chosen randomly according to the distribution $`dP(p_T)`$, and oriented isotropically in the plane perpendicular to the string at its endpoint.
Finally, the discrete resonance is represented by the corresponding rotating rod and positioned so that the centers of energy of the initial and final states must be identical.
### 2.7 Parameters
There are relatively few parameters in our model. The Dynamical String Model has two basic parameters: the string tension $`\kappa \approx 0.9`$ GeV/fm, fitted to the slopes of the leading Regge-trajectories, and the string radius $`R\approx 0.6`$ fm, fitted to the total proton-proton cross section. The ambiguity in the analogy of strings with chromoelectric flux tubes results in a factor of $`\nu =2`$ uncertainty in the relation between the string tension $`\kappa `$ and the product of the colour charge $`e`$ of the quark and the field strength $`\mathcal{E}`$, $`\nu \kappa =e\mathcal{E}`$ with $`\nu \in [1,2]`$ . Two more parameters are the ratio of the inelastic range $`R^{}`$ to the full radius $`R`$ of the string, $`R^{}/R\approx 0.7`$, and the collision time $`\tau _c=0.4`$ fm/c, fitted to the total and elastic proton-proton cross sections. Furthermore, the masses, degeneracies and lifetimes taken from have been used for the discrete resonances below the mass thresholds, and the mesonic and baryonic mass thresholds $`M_M=1.0`$ GeV and $`M_B=1.5`$ GeV have been chosen.
According to the analogy of the hadronic strings with the chromoelectric flux tubes, the decay constant $`\mathrm{\Lambda }`$ of the string is determined by the parameters $`\kappa `$, $`R`$, and $`\nu `$ . In Table 1 we list the parameter sets used for the simulation of heavy-ion collision events, including also the corresponding total proton-proton cross sections at high energies and the decay constants and mean lifetimes $`(T_l)`$ of the strings. The mean lifetimes are determined for the so-called yo-yo mode according to the exponential decay law, $`T_l=\sqrt{(\mathrm{ln}2)/\mathrm{\Lambda }}`$.
The Dynamical String Model with the parameter sets given in Table 1 has been tested by simulating elementary hadronization processes: two-jet events in $`e^+e^{-}\to hadrons`$ at c.m. energies 20 - 50 GeV and hadronization in proton-proton collisions at 29 and 200 GeV bombarding energies .
The parameter set (a) is consistent with the total proton-proton cross sections. It provides a decay constant for which the simulated results on the Bose-Einstein correlation of like-sign pions and on the average charged particle multiplicity are in good agreement with experimental data for $`e^+e^{-}\to hadrons`$ . Simulated results for the single-particle distributions for the same process and for the proton-proton collisions are also in good qualitative agreement with the corresponding data, but the average charged particle multiplicity in proton-proton collisions is overestimated by nearly 40%.
The parameter set (b) has been found to be the optimal one in the simulation for reproducing the single-particle data on $`e^+e^{-}\to hadrons`$ , but it leads to an unrealistically small value of the string decay constant, and practically no Bose-Einstein correlation of like-sign pions occurs in the simulation using this set. Furthermore, the total proton-proton cross section for high energies is underestimated by the parameter set (b), as seen in Table 1. Single-particle data on proton-proton collisions can be described with a quality similar to that of the corresponding results for parameter set (a), with a similar overestimate of the average charged particle multiplicity. It should be mentioned that, according to the calculations in , the parameters of set (a), although not optimal, are still in the range which is acceptable for describing the single-particle distributions in $`e^+e^{-}\to hadrons`$. Thus, the parameter set (a) is preferred on the basis of comparing the simulated results with experimental data on the elementary hadronization processes considered above.
The mean lifetime 0.8 fm/c of strings for parameter set (a) is close to the value $`1.2\pm 0.1`$ fm/c determined in . On the other hand, the string radius $`R=0.6`$ fm of parameter set (a) is also consistent with the range of its value $`(0.5\pm 0.1)`$ fm, which was established on the basis of the experimentally observed strangeness fraction in proton-proton collisions .
## 3 Simulation of Ultrarelativistic Heavy-Ion Collisions
The Dynamical String Model described above has been applied to the simulation of ultrarelativistic heavy-ion collision events. For both parameter sets 250 collision events were simulated. Hypothetical nuclei of mass number $`A=35`$, shot at one another in the c.m.s., were constructed in the following way. The centers of the nucleonic strings (rotating rods of length 0.7 fm) were positioned at the nodes and the centers of the cells of a cubical $`3^3`$ lattice with lattice spacing 2.1 fm drawn in a sphere of the nuclear radius $`R_A=r_0A^{1/3}=3.6`$ fm ($`r_0=1.1`$ fm). The Fermi motion of the nucleons has been neglected, and their orientations were chosen randomly according to a uniform distribution in the entire solid angle. Two such ‘cubes’ with parallel edges were boosted to the appropriate c.m.s. momenta. Central collisions with impact parameters less than 2 fm were considered. The center of one of the nuclei was chosen according to a uniform distribution in the transverse plane, within a circle of radius of 2 fm around the projection of the center of the other nucleus onto that plane. Constructing the hypothetical nuclei in a cubic configuration gives an extra periodic structure of the nucleus instead of the fluid-like random one. On the other hand, the possible effect of this periodicity is completely neutralised by choosing different impact parameters for the individual collision events randomly. In the numerical simulation no side-effects originating from the periodic configuration were seen.
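Our reading of this construction (the 27 nodes of a $`3\times 3\times 3`$ point lattice plus the 8 centers of its cells, giving exactly $`A=35`$ positions) can be checked in a few lines; this is an illustrative sketch, not the authors' code:

```python
# 27 lattice nodes plus 8 cell centers give A = 35 string centers inside
# a sphere of radius R_A ~ 3.6 fm.
import itertools
import numpy as np

a = 2.1   # lattice spacing in fm
nodes = [a * np.array(p) for p in itertools.product((-1, 0, 1), repeat=3)]
centers = [0.5 * a * np.array(p) for p in itertools.product((-1, 1), repeat=3)]
positions = nodes + centers

print(len(positions))                                 # 35, i.e. A = 35
print(max(np.linalg.norm(p) for p in positions))      # ~3.64 fm ~ R_A
```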
The simulations were performed with the same time steps of $`\mathrm{\Delta }t=0.02`$ fm/c as used to fit the total and inelastic radii of the strings and to perform the test simulations. It is rather important to take into account the elastic string-string collisions, since the secondary collisions in ultrarelativistic heavy-ion collisions play a distinguished role.
The simulations were performed with both parameter sets (a) and (b). The simulated results were transformed back to the laboratory system and compared with experimental data on <sup>32</sup>S + <sup>32</sup>S central collisions at the bombarding energy of $`200`$ GeV/n in the NA$`35`$ experiment at CERN . In order to take into account the difference between the mass numbers of the nuclei in the simulation and the experiment, the simulated distributions were systematically renormalised by the factor $`(32/35)^2`$.
The average multiplicities of the produced hadrons with negative charge (supposed to be $`\pi ^{}`$ in the experiment) are compared to the simulated results in Table 2. As the Dynamical String Model does not account for electric charges, one third of the produced mesons is assumed to be negative based on isospin arguments. The simulated average multiplicity of negatively charged particles is overestimated by about 30% as compared to the multiplicity in the reaction $`{}_{}{}^{32}\mathrm{S}+^{32}\mathrm{S}`$. This can also be seen from the data for the average negative charged particle multiplicity per participating baryon, which varies slightly for different mass numbers . The overestimate can be the consequence of overestimating the multiplicities in individual string-string collisions, as test simulations for proton-proton collisions have shown. Also the complete neglect of the strangeness channel and the rather crude treatment of elastic string-string collisions can affect the average charged particle multiplicity.
The rapidity and the transverse momentum distributions of negatively charged hadrons are shown in Fig. 2 and Fig. 3, respectively. It can be seen that the experimental spectra are reproduced by the model for both parameter sets qualitatively.
## 4 Conclusions
The Dynamical String Model has been generalised in order to simulate ultrarelativistic heavy-ion collisions at current collider energies. The initialisation of the incoming nuclei and the discrimination of the elastic and inelastic scattering of strings are now included. An effective optimisation of the collision algorithm has been performed in the numerical code. In this way, the model is able to simulate nucleus-nucleus collisions at beam energies of a few hundred GeV/nucleon for nuclei with mass numbers up to around $`40`$. The simulated results for the reaction <sup>32</sup>S$`+^{32}`$S at a beam energy of $`200`$ GeV/n are in good qualitative agreement with the experimental data. Therefore, the Dynamical String Model has predictive power for ultrarelativistic heavy-ion collisions.
## Acknowledgement
This work was supported by the Debrecen Research Group in Physics of the Hungarian Academy of Sciences, by OTKA Project T-023844, by DFG Project 436/UNG/113/123/0, by GSI and BMBF. The authors are grateful to W. Greiner for his kind hospitality at the Institute for Theoretical Physics, Johann Wolfgang Goethe University. K. Sailer thanks for the support of the Alexander von Humboldt Foundation. B. Iványi wishes to express his thanks to the DAAD for their support.
# ELECTROWEAK BARYOGENESIS WITH COSMIC STRINGS?
## 1 Introduction
Electroweak baryogenesis is a beautiful idea which fails (or is about to fail) in the best motivated models we have for physics at the Fermi scale ($`\sim 100`$ GeV). In the Standard Model, LEP II experiments set a lower bound on the mass of the Higgs boson of about 97 GeV, implying that the electroweak phase transition in that model is not first order but rather a crossover . In the Minimal Supersymmetric Standard Model the electroweak phase transition can be first order and sufficiently strong to allow for electroweak baryogenesis, but this occurs in a very small region of parameter space which presumably will be ruled out by LEP II in a couple of years.
One may take the previous negative results as indication that the asymmetry in baryon number was not created at the electroweak epoch, but rather related to the physics of $`BL`$ violation and neutrino masses. To stick to electroweak baryogenesis one can consider extensions of the particle content of the model to get a stronger electroweak phase transition (e.g. extensions which include singlets). In this talk I will consider another possibility: how the remnants of physics at energy scales higher than the electroweak scale (cosmic strings in this case) can be useful to overcome the problems of having a weak electroweak phase transition.
Electroweak baryogenesis requires the co-existence of regions of large and small $`\phi /T`$, where $`T`$ is the temperature and $`\phi `$ the ($`T`$-dependent) Higgs vacuum expectation value. At small or zero $`\phi /T`$ sphalerons are unsuppressed and mediate baryon number violation, while large $`\phi /T`$ is needed to store the created baryon number (for $`\phi /T\gtrsim 1`$ sphaleron transitions are ineffective and baryon number is conserved). Below the critical temperature $`T_c^{EW}`$ of the electroweak phase transition, and irrespective of whether it is first or second order, $`\phi /T`$ grows until sphaleron transitions are shut off. For baryogenesis to be possible at those times, we need some region where $`\phi `$ is forced to remain zero or small. The idea we examine in this talk is that this can be the case along topological defects (like cosmic strings) left over from some other cosmological phase transition that took place before the electroweak epoch . If the electroweak symmetry is restored in some region around the strings, sphalerons could be unsuppressed in the string cores while they would be ineffective in the bulk of space, away from the strings. The motion of the string network, in a similar way as the motion of bubble walls in the usual first-order phase-transition scenario, will leave a trail of net baryon number behind.
Some problems with this scenario come immediately to mind. First, it is clear that the space swept by the defects is much smaller than the total volume, so there will be a geometrical suppression factor with respect to the usual bubble-mediated scenario . Another suppression factor arises from the fact that there is a partial cancellation between front and back walls of the string, which tend to produce asymmetries of opposite signs . Another problem comes from the condition that the symmetry restoration region (which naively would be of size $`R_{rest}\sim 1/\sqrt{\lambda }\phi `$, where $`\lambda `$ is the quartic Higgs coupling) should be large enough to contain sphalerons (which in the symmetric phase have size $`R_{sph}\sim 1/g^2T`$), while outside the strings, sphalerons should be suppressed ($`\phi /T\gtrsim 1`$). Combining both conditions one obtains $`\lambda \lesssim g^4`$, which means the scenario would require small values of the Higgs mass, in conflict with experimental bounds. LEP II tells us that $`\lambda `$ is at least of order $`g^2`$, so that sphalerons won’t fit in the restoration region. In other words, for realistic values of the Higgs mass sphalerons are not going to be fully unsuppressed. We will measure how effective they are by writing the rate of sphaleron transitions per unit time and unit of string length as $`\mathrm{\Gamma }_l=\kappa _l\alpha _w^2T^2`$. For a string with $`R_{rest}=R_{sph}`$, one has $`\mathrm{\Gamma }_l/R_{rest}^2`$ equal to the rate in the symmetric phase, corresponding to $`\kappa _l\sim 1`$. Values of $`\kappa _l`$ much smaller than 1 would mean that sphalerons are not really unsuppressed inside the strings.
In the rest of the talk I review the careful analysis of this mechanism contained in ref. , to which I refer the interested reader for further details.
## 2 Strings with electroweak symmetry restoration
Cosmic strings are 1-dimensional solitons, stable by topological reasons, that can form in the spontaneous breaking of a symmetry $`G`$ where I consider the simplest case, $`G=U(1)`$, in this talk. A model with a complex scalar $`S`$ and lagrangian
$$\mathcal{L}=\partial _\mu S^{}\partial ^\mu S-\lambda _S(S^{}S-S_0^2)^2,$$
(1)
admits global strings: configurations with $`S=0`$ along some line (say the $`z`$-axis) and $`S(r)=f(r)S_0e^{i\theta }`$, with $`f(\infty )=1`$, where $`r`$ is the distance to the $`z`$-axis and $`\theta `$ the azimuthal angle. The radius of these strings (where most of the energy is trapped) is set by the scale $`1/m_S\sim 1/\sqrt{\lambda _S}S_0`$.
If the $`U(1)`$ is made local, in addition to the $`S`$ field, a non-zero gauge field is also present, $`A_\mu =a(r)_\mu \theta /q_S`$, with $`a(\mathrm{})=1`$, where $`q_S`$ is the $`U(1)`$ charge of the $`S`$ field. This gauge field is such that the covariant derivative $`D_\mu S`$ goes to zero for large $`r`$ resulting in a finite energy per unit length of string.
We assume that $`S`$-strings (global or local) form at some temperature $`T_c^S>T_c^{EW}`$ and are present at the time of the electroweak phase transition. To force $`\phi `$ to vanish in the cores of the strings, the Higgs field must interact either with the $`S`$ field or the $`A_\mu `$ field (if the strings are local):
### 2.1 $`S\phi `$ interaction
Suppose the scalar potential has the form
$$V(S,\phi )=\lambda _S(|S|^2-S_0^2)^2-\gamma (|S|^2-S_0^2)(|\phi |^2-\phi _0^2)+\lambda (|\phi |^2-\phi _0^2)^2,$$
(2)
with $`\gamma >0`$. The mass squared of the Higgs field in the string background is $`m_\phi ^2(r)=\gamma (S_0^2-|S(r)|^2)-2\lambda \phi _0^2`$, which is negative outside the string core but can be positive inside, so that electroweak symmetry tends to be restored along the strings. Exploring the ($`S_0,\lambda _S,\gamma ,\lambda `$) parameter space, the typical case, with $`\lambda _SS_0^2\gg \lambda \phi _0^2`$, leads to $`R_{rest}\sim 1/m_\phi (\infty )`$. The best possible case to get a large restoration region has $`\lambda _S\ll \gamma \ll \lambda `$ and $`S_0\gg \phi _0`$ and gives $`R_{rest}\sim \sqrt{\gamma /\lambda _S}/m_\phi (\infty )`$.
### 2.2 $`SA_\mu `$ interaction
In this case we assume that the Higgs field carries a charge $`q_\phi `$ under the extra $`U(1)`$ responsible for the strings, so that its covariant derivative has an extra piece. As we saw, the $`A_\mu `$ field in the string goes like $`1/q_Sr`$ at large $`r`$ to cancel the azimuthal derivative of $`S`$, give vanishing $`D_\mu S`$ and minimize energy. In $`D_\mu \phi `$, the $`A_\mu `$ contribution is now proportional to $`q_\phi /q_S`$ and the azimuthal derivative of $`\phi `$ can cancel $`D_\mu \phi `$ only if $`q_\phi /q_S`$ is an integer. If that is not the case, a $`Z_\mu `$ boson condensate is induced until the covariant derivative is cancelled . In any case, a non-zero winding of $`\phi `$ forces $`\phi `$ to vanish in the string core ($`r=0`$). The restoration region around $`r=0`$ is larger in the presence of a non-zero $`Z_\mu `$ string (case of non-integer $`q_\phi /q_S`$).
## 3 Sphaleron rates and CP asymmetry in the string cores
In general, with no tuning of potential parameters nor a $`Z_\mu `$ condensate, $`\phi `$ is zero only at the string core ($`r=0`$) and rises immediately away from that line. As the symmetry is never really restored in a wide region, the energy of the sphaleron in such a background (it can be computed on the lattice by looking for a saddle point of the energy functional) is only about a factor 0.7 smaller than the sphaleron energy in the broken phase (alternatively $`\kappa _l\sim 10^{-6}`$: that is, sphalerons are not really unsuppressed in this type of strings).
The situation is better when a $`Z_\mu `$-field is induced, in which case $`\kappa _l\sim 1/30`$ for $`\phi /T\sim 1`$ (this number can be obtained on the lattice using a fully non-perturbative approach and tracking Chern-Simons number in real time evolution). However this number is very sensitive to $`T`$ and drops significantly when $`T`$ decreases.
Fully unsuppressed sphalerons can only be obtained in the global $`U(1)`$ case for large enough $`\gamma /\lambda _S`$. In fact, to obtain an asymmetry of the order of the observed one, one would need $`\gamma /\lambda _S\sim 10^{14}`$. On the other hand, stability of the potential requires $`4\lambda /\gamma >\gamma /\lambda _S`$, so that $`\lambda /\lambda _S\gtrsim 10^{28}`$. Such an ad-hoc and wild fine-tuning of the parameters prevents us from taking this particular case seriously.
Unsuppressed sphaleron transitions inside the string cores are not sufficient to generate the baryon asymmetry: they must occur in a background with CP asymmetric particle distributions so that the sign of the B-violation is biased. This asymmetry comes about if the interactions between the particles in the plasma and the string walls violate CP. In that case the walls of a moving string act as sources of chiral-number flux (which would be zero if the string velocity $`v_S`$ were zero). This asymmetry diffuses away from the walls and only that inside the string is useful to create baryons (for geometrical reasons it is also clear that this diffusion effect is less efficient for strings than for bubbles). In conclusion, we have to compute the chemical potential $`\mu `$ for chiral number inside the strings. General arguments (confirmed by detailed analysis of particular models) give the result $`\mu =Kv_S^2T`$ for small $`v_S`$, with $`K\stackrel{<}{_{}}0.01`$ and $`\mu =K^{}T`$ for $`v_S1`$ with $`K^{}`$ of order 1.
## 4 Evolution of string networks and efficiency of baryogenesis
To get a final number for the asymmetry generated by this mechanism, we need to know how many strings there are and how quickly they are moving (the best case being that of a dense network of fast moving strings). We can describe the string network by a mean average separation between strings $`R(t)`$ and a mean average velocity $`v_S(t)`$. The evolution of these quantities with time $`t`$ is governed by Hubble expansion ($`H=1/2t`$); energy loss by loop formation; and friction with the plasma. The friction force goes like $`F\propto v_ST^3`$: it is important at early times when it dominates the dynamics of the evolution. This is the friction dominated or Kibble regime, with $`R(t)\propto t^{5/4}`$ and $`v_S(t)\propto t^{1/4}\sim HR(t)`$. Eventually, friction will no longer be important and a scaling regime is reached with $`R(t)\sim 1/H`$ and $`v_S\sim 1`$.
In conclusion, to get the final number for the baryon asymmetry we start with the equation for the rate of change of baryon number $`N_B`$ per unit time and unit length of string:
$$\frac{dN_B}{dLdt}=1.5[\kappa _l\alpha _w^2T^2]\frac{\mu }{T}.$$
(3)
If we use the results for $`\kappa _l`$ and $`\mu `$ previously discussed, and integrate eq.(3) in one Hubble time (this is because $`\kappa _l`$ is shut-off quickly with decreasing $`T`$) using the network evolution results just presented we end up with the result that
$$\left[\frac{N_B}{N_\gamma }\right]_{strings}\stackrel{<}{_{}}10^{10}\left[\frac{N_B}{N_\gamma }\right]_{observed}.$$
(4)
That is, the mechanism just studied is incapable of generating a sufficiently large matter-antimatter asymmetry.
## Acknowledgments
I thank J.M. Cline, G.D. Moore and A. Riotto for an enjoyable collaboration on the topic presented.
# Suppression of Giant Magnetoresistance by a superconducting contact
## Abstract
We predict that current perpendicular to the plane (CPP) giant magnetoresistance (GMR) in a phase-coherent magnetic multilayer is suppressed when one of the contacts is superconducting. This is a consequence of a superconductivity-induced magneto-resistive (SMR) effect, whereby the conductance of the ferromagnetically aligned state is drastically reduced by superconductivity. To demonstrate this effect, we compute the GMR ratio of clean (Cu/Co)<sub>n</sub>Cu and (Cu/Co)<sub>n</sub>Pb multilayers, described by an ab-initio spd tight binding Hamiltonian. By analyzing a simpler model with two orbitals per site, we also show that the suppression survives in the presence of elastic scattering by impurities.
PACS: 75.50Pa, 74.80Dm, 72.00
During the past decade, electronic properties of hybrid nanostructures have been widely studied, both from a fundamental point of view and for their potential applications. At a fundamental level, new physics associated with such structures arises from the proximity of two electronic ground states with different correlations, which can reveal novel scattering processes not apparent in the separate materials. For example normal-superconducting hybrids, formed when a normal (N) metal or semiconductor is placed in contact with a superconductor (S), have been shown to exhibit a range of unique transport phenomena associated with Andreev scattering at the N-S boundary. Similarly ferromagnetic (F)-normal multilayers and spin-valves exhibit magnetoresistance properties associated with spin-filtering by the F-layers. Perhaps the most spectacular of these is giant magnetoresistance (GMR) in magnetic multilayers, whose resistance, with the current perpendicular to the planes of the layers (CPP), can change by more than 100% under the application of a magnetic field. Until recently N-S and F-S nanostructures have been studied in isolation, but during recent months, a number of experiments have demonstrated that N-F-S hybrids exhibit a range of novel features , including a long-range superconducting proximity effect in the F-material, extending over length scales far in excess of the magnetic length $`\sqrt{\frac{\mathrm{}D}{E_{\mathrm{e}x}}}`$, where $`D`$ is the diffusion coefficient and $`E_{\mathrm{e}x}`$ the exchange splitting.
The aim of this Letter is to address a new phenomenon, namely the effect of superconductivity on CPP-GMR in phase-coherent multilayers of the type (N/F)<sub>n</sub>S, where S is either a superconductor or a normal metal. We shall demonstrate that as superconductivity is induced in the S-contact (eg by lowering the temperature) CPP-GMR is almost completely suppressed. This result is remarkable, since for example CPP-GMR experiments by the Michigan State University group already employ superconducting contacts. We shall argue that GMR in such experiments is a consequence of spin-flip scattering, which, if eliminated, would cause a dramatic suppression of GMR. To demonstrate the superconductivity-induced suppression of GMR, we use the method outlined in to compute the zero-bias, zero-temperature conductance of the (Cu/Co)<sub>n</sub>Pb multilayer sketched in figure 1, described by an spd tight-binding Hamiltonian, with tight-binding parameters fitted to accurate ab-initio density functional calculations .
$$G_{\mathrm{N}N}=\frac{e^2}{h}\left(T_{\uparrow }+T_{\downarrow }\right),$$
(1)
where $`T^\sigma =\mathrm{Tr}\,t^\sigma t^{\sigma \dagger }`$, with $`t`$ the multi-channel transmission matrix for the structure. In the superconducting state, equation (1) is replaced by current-voltage relations derived in , and re-derived in , which in the absence of quasi-particle transmission through the superconductor yields
$$G_{\mathrm{N}S}=\frac{4e^2}{h}R_\mathrm{a},$$
(2)
where $`R_\mathrm{a}=\mathrm{Tr}\,r_\mathrm{a}^{\sigma \dagger }r_\mathrm{a}^\sigma `$ is the Andreev reflection coefficient, which for a spin-singlet superconductor is independent of the spin $`\sigma `$ of the incident quasi-particle. In what follows, when the magnetic moments of adjacent Co layers are aligned (anti-aligned) we denote the conductances by $`G_{\mathrm{N}N}^\mathrm{F}`$, $`G_{\mathrm{N}S}^\mathrm{F}`$ ($`G_{\mathrm{N}N}^\mathrm{A}`$, $`G_{\mathrm{N}S}^\mathrm{A}`$) and therefore the GMR ratios are given by $`M_{\mathrm{N}N}=(G_{\mathrm{N}N}^\mathrm{F}-G_{\mathrm{N}N}^\mathrm{A})/G_{\mathrm{N}N}^\mathrm{A}`$, $`M_{\mathrm{N}S}=(G_{\mathrm{N}S}^\mathrm{F}-G_{\mathrm{N}S}^\mathrm{A})/G_{\mathrm{N}S}^\mathrm{A}`$.
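A minimal numerical sketch of these definitions follows; the spin-resolved transmission matrices and the Andreev reflection matrix are assumed to come from a separate scattering calculation (this is our illustration, not the code used to obtain figure 2):

```python
# Conductances in units of e^2/h, following eqs. (1) and (2).
import numpy as np

def g_nn(t_up, t_dn):
    """Normal-state conductance, eq. (1): T_up + T_dn, T = Tr(t t^dag)."""
    return (np.trace(t_up @ t_up.conj().T) + np.trace(t_dn @ t_dn.conj().T)).real

def g_ns(r_a):
    """N-S conductance, eq. (2): 4 Tr(r_a^dag r_a)."""
    return 4.0 * np.trace(r_a.conj().T @ r_a).real

def gmr_ratio(g_aligned, g_antialigned):
    """GMR ratio M = (G^F - G^A) / G^A for either the NN or the NS case."""
    return (g_aligned - g_antialigned) / g_antialigned
```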
Consider first the case of quasi-ballistic transport in which there is no disorder within the layers, nor at the interfaces, but the widths of the Co layers are allowed to fluctuate randomly by 1 atomic layer. Such a structure is translationally invariant in the direction ($`\stackrel{}{r}_{}`$) parallel to the layers, and the Hamiltonian is diagonal in a Bloch basis ($`\stackrel{}{k}_{}`$). Therefore the trace over scattering channels in equations (1) and (2) can be evaluated by computing the scattering matrix at separate $`k_{}`$ points. Figure 2a shows results for the GMR ratio in the normal and superconducting states, obtained by summing over $`5\times 10^3`$ $`k_{}`$ points, and clearly demonstrates a dramatic superconductivity-induced suppression of GMR. Figures 2b and 2c show results for the individual conductances (note the difference in scales) and demonstrate that the GMR ratio $`M_{\mathrm{N}S}`$ is suppressed because $`G_{\mathrm{N}S}^F`$ is drastically reduced compared with $`G_{\mathrm{N}N}^F`$.
To understand this effect, consider the simplest possible model of spin-dependent boundary scattering shown in figure 3, which in the limit of delta-function F layers reduces to the model used to describe the N-F-S experiment of . Fig 3a (3b) shows a cartoon of a majority (minority) spin, scattering from a series of potential barriers in successive aligned F layers. Since the minority spins see the higher barrier, one expects $`T_{\downarrow }^\mathrm{F}<T_{\uparrow }^\mathrm{F}`$. Figures 3c and 3d show the scattering potentials for anti-ferromagnetically aligned layers, for which $`T_{\uparrow }^\mathrm{A}=T_{\downarrow }^\mathrm{A}<T_{\uparrow }^\mathrm{F}`$. For such an ideal structure, GMR arises from the fact that $`T_{\uparrow }^\mathrm{F}\gg T_{\downarrow }^\mathrm{F}`$ and $`T^\mathrm{A}`$. In the presence of a single superconducting contact this picture is drastically changed. For ferromagnetically aligned layers, figure 3e shows an incident majority electron scattering from a series of low barriers, which Andreev reflects as a minority hole and then scatters from a series of high barriers (figure 3f). The reverse process occurs for an incident minority electron, illustrating the rigorous result that the Andreev reflection coefficient is spin-independent. Figures 3g and 3h illustrate Andreev reflection in the anti-aligned state. The crucial point illustrated by these sketches is that in the presence of an S contact, for both the aligned (figures 3e and 3f) and anti-aligned (figures 3g and 3h) states the quasi-particle scatters from N (=4 in the figures) high barriers and N (=4) low barriers and therefore at the level of a classical resistor model, one expects $`G_{\mathrm{N}S}^\mathrm{F}\approx G_{\mathrm{N}S}^\mathrm{A}`$.
Of course the rigorous results of figure 2, obtained using an spd Hamiltonian with 36 orbitals per atomic site (spd$`\times `$2 for spin $`\times `$2 for particle-hole degrees of freedom) go far beyond this heuristic argument. In the case of aligned or anti-aligned F-layers the problem involves two independent spin fluids and therefore the Hamiltonian is block-diagonal with 18 orbitals per site. The Hamiltonian used to obtain these results is of the form
$$H_{\mathrm{s}pd}=H_\mathrm{L}+H_{\mathrm{L}M}+H_\mathrm{M}+H_{\mathrm{M}R}+H_\mathrm{R},$$
(3)
where $`H_\mathrm{L}`$ ($`H_\mathrm{R}`$) describes a semi-infinite left-hand (right-hand) crystalline lead, $`H_{\mathrm{L}M}`$ ($`H_{\mathrm{M}R}`$) is the coupling matrix between surface orbitals on the left (right) lead and the left (right) surface of the magnetic multilayer, and $`H_\mathrm{M}`$ is the tight-binding Hamiltonian describing the multilayer. Consider first the retarded Green’s function $`g=(E-H_\mathrm{L}-H_\mathrm{R}+i0^+)^{-1}`$ of the two decoupled semi-infinite leads. If the surfaces of the leads each contain M atoms, then $`H_{\mathrm{L}M}`$ and $`H_{\mathrm{M}R}`$ are $`36M\times 36M`$ matrices and the portion of $`g`$ involving only matrix elements between orbitals on the left and right lead surfaces is a $`(2\times 36M)\times (2\times 36M)`$ block diagonal matrix $`g^\mathrm{S}`$ whose matrix elements $`g_{ij}^\mathrm{S}`$ vanish for $`i,j`$ belonging to different leads. Using a semi-analytic form for $`g^\mathrm{S}`$ derived in , the surface Green’s function $`G^\mathrm{S}`$ for the leads plus multilayer can be computed by first recursively decimating the Hamiltonian $`H_{\mathrm{L}M}+H_\mathrm{M}+H_{\mathrm{M}R}`$ to yield a $`(72M\times 72M)`$ matrix of couplings $`\stackrel{~}{H}_\mathrm{M}`$ between surface orbitals of the leads, and then computing the inverse
$$G^\mathrm{S}=\left[(g^\mathrm{S})^{-1}-\stackrel{~}{H}_\mathrm{M}\right]^{-1}.$$
(4)
Once the full surface Green’s function $`G^\mathrm{S}`$ is known, the scattering matrix elements between open scattering channels are obtained using generalized Fisher-Lee relations . For the calculation of figure 2, involving fcc crystalline leads aligned along the (110) direction, the spd Hamiltonians for the bulk materials are known , but the hopping parameters at the interfaces between different materials are not currently available. As the simplest approximation, these surface couplings were chosen to be the geometric mean of their bulk values.
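A minimal sketch of the decimation-plus-Dyson step of equation (4) is given below for a toy single-band 1D geometry (rather than the 36-orbital spd problem): sites 0 and n-1 stand for the lead surface orbitals, the sites in between play the role of the multilayer, and the semi-infinite-lead surface Green's function is the standard 1D result. All numerical values are arbitrary choices for illustration.

```python
import numpy as np

E, t_lead = 0.3, 1.0                 # energy inside the lead band, |E| < 2t

def g_surface_1d(E, t):
    """Retarded surface Green's function of a semi-infinite 1D chain."""
    return (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)

def decimate(H, E, keep):
    """Eliminate all sites not in `keep` exactly (Gaussian elimination):
       H_eff = H_kk + H_kr (E - H_rr)^{-1} H_rk."""
    rest = [i for i in range(H.shape[0]) if i not in keep]
    Hkk = H[np.ix_(keep, keep)]
    Hkr = H[np.ix_(keep, rest)]
    Hrr = H[np.ix_(rest, rest)]
    return Hkk + Hkr @ np.linalg.solve(E * np.eye(len(rest)) - Hrr,
                                       Hkr.conj().T)

# Sites 0 and n-1 are the lead surface orbitals; the interior sites play
# the role of the multilayer, with a weak link as its only feature.
n = 6
H = np.zeros((n, n), dtype=complex)
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0
H[2, 3] = H[3, 2] = -0.4

H_M_eff = decimate(H, E, keep=[0, n - 1])      # the couplings H~_M
g_S = np.diag([g_surface_1d(E, t_lead)] * 2)   # decoupled-leads block
G_S = np.linalg.inv(np.linalg.inv(g_S) - H_M_eff)   # equation (4)
print(np.round(G_S, 3))
```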
Despite our use of a highly efficient recursive Green’s function technique to exactly evaluate the scattering matrix of a multilayer, currently available computing resources restrict such a calculation to systems with translational invariance parallel to the planes. To demonstrate that the suppression of CPP-GMR is a generic feature of N-F-S hybrids and to study the effect of elastic impurity scattering, we now examine a reduced two-band (s-d) model with a Hamiltonian matrix
$$H=\left(\begin{array}{cc}\hfill \underset{¯}{\underset{¯}{H_o}}-\underset{¯}{\underset{¯}{h}}& \hfill \underset{¯}{\underset{¯}{\mathrm{\Delta }}}\\ \hfill \underset{¯}{\underset{¯}{\mathrm{\Delta }}}^{\dagger }& \hfill -\underset{¯}{\underset{¯}{H_o}}^{*}+\underset{¯}{\underset{¯}{h}}\end{array}\right).$$
(5)
In this model, $`h_{ij}^{\alpha \beta }=h_i\delta _{ij}\delta _{\alpha \beta }\delta _{\alpha \mathrm{d}}`$ with $`h_i`$ the exchange splitting on site $`i`$ for the d orbital, which vanishes if $`i`$ belongs to a N or S layer and is of magnitude $`h`$ if $`i`$ belongs to a F layer; $`\mathrm{\Delta }_{ij}^{\alpha \beta }=\mathrm{\Delta }_i\delta _{ij}\delta _{\alpha \beta }`$, where the superconducting order parameter $`\mathrm{\Delta }_i`$ vanishes if $`i`$ belongs to a N or F layer and equals $`\mathrm{\Delta }`$ if $`i`$ belongs to the S region (s-wave superconductivity); $`(H_o)_{ij}^{\alpha \beta }=ϵ_i^\alpha \delta _{\alpha \beta }`$ for $`i=j`$, $`\gamma ^{\alpha \beta }`$ for $`i,j`$ nearest neighbors, and $`(H_o)_{ij}^{\alpha \beta }=0`$ otherwise. Note that this is the minimal model including the possibility of s-d interband scattering, which has been shown to play a crucial rôle in describing the scattering properties of a transition metal multilayer . $`ϵ_i^\alpha `$ is chosen to be a random number, uniformly distributed between $`ϵ^\alpha -w/2`$ and $`ϵ^\alpha +w/2`$. We choose the parameters of the model to fit the conductance and the GMR obtained from the spd model for Cu/Co , namely (all quantities are expressed in eV) $`ϵ_{\mathrm{Cu}}^\mathrm{s}=7.8`$, $`ϵ_{\mathrm{Cu}}^\mathrm{d}=4.0`$, $`\gamma _{\mathrm{Cu}}^{\mathrm{ss}}=2.7`$, $`\gamma _{\mathrm{Cu}}^{\mathrm{dd}}=0.85`$, $`\gamma _{\mathrm{Cu}}^{\mathrm{sd}}=1.1`$, $`ϵ_{\mathrm{Co}}^\mathrm{s}=4.6`$, $`ϵ_{\mathrm{Co}}^\mathrm{d}=2.0`$, $`\gamma _{\mathrm{Co}}^{\mathrm{ss}}=2.7`$, $`\gamma _{\mathrm{Co}}^{\mathrm{dd}}=0.85`$, $`\gamma _{\mathrm{Co}}^{\mathrm{sd}}=0.9`$, $`h=1.6`$, $`\mathrm{\Delta }=10^{-3}`$, $`w=0.6`$.
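For concreteness, the sketch below assembles the matrix of equation (5) for a short 1D N-F-S chain using the parameter values quoted above. The chain length, the layer layout and the use of the left site's hopping parameters at interfaces are illustrative simplifications of our own, not the paper's setup.

```python
import numpy as np

L, w, h_ex, Delta = 12, 0.6, 1.6, 1e-3
layer = ["N"] * 4 + ["F"] * 4 + ["S"] * 4           # illustrative layout
par = {"N": dict(es=7.8, ed=4.0, gss=2.7, gdd=0.85, gsd=1.1),  # Cu values
       "S": dict(es=7.8, ed=4.0, gss=2.7, gdd=0.85, gsd=1.1),
       "F": dict(es=4.6, ed=2.0, gss=2.7, gdd=0.85, gsd=0.9)}  # Co values

rng = np.random.default_rng(0)
n = 2 * L                                 # orbital index: 2*i + (0:s, 1:d)
H0 = np.zeros((n, n)); hM = np.zeros((n, n)); DM = np.zeros((n, n))
for i, lab in enumerate(layer):
    p = par[lab]
    H0[2*i, 2*i] = p["es"] + rng.uniform(-w/2, w/2)      # diagonal disorder
    H0[2*i+1, 2*i+1] = p["ed"] + rng.uniform(-w/2, w/2)
    if lab == "F":
        hM[2*i+1, 2*i+1] = h_ex           # exchange splitting, d orbital only
    if lab == "S":
        DM[2*i, 2*i] = DM[2*i+1, 2*i+1] = Delta          # s-wave pairing
for i in range(L - 1):                    # nearest-neighbour 2x2 hopping block
    g = par[layer[i]]                     # left site's parameters at interfaces
    blk = np.array([[g["gss"], g["gsd"]], [g["gsd"], g["gdd"]]])
    H0[2*i:2*i+2, 2*i+2:2*i+4] = blk
    H0[2*i+2:2*i+4, 2*i:2*i+2] = blk.T

H = np.block([[H0 - hM, DM], [DM.T, -H0.conj() + hM]])   # equation (5)
print(H.shape, "Hermitian:", np.allclose(H, H.conj().T))
```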
Figure 4a shows results for the GMR ratios $`M_{\mathrm{N}N}`$ and $`M_{\mathrm{N}S}`$ and demonstrates that the suppression of CPP-GMR by superconductivity survives in the presence of disorder. We have investigated a range of higher disorders and system sizes and find superconductivity-induced GMR suppression in all cases. The disorder used in figure 4 has been chosen to illustrate an additional novel feature, not so far discussed in the literature, namely that ballistic majority spins can co-exist with diffusive minority spins. For a strictly ballistic structure, the conductance $`G`$ is almost independent of the length $`L`$ and the product $`GL`$ varies linearly with $`L`$. This behavior occurs for the majority spin in the normal-state results of figure 4b. In contrast, for minority spins the product $`G_{\mathrm{N}N}^\mathrm{F}L`$ exhibits a plateau for $`L\gtrsim 500`$ atomic planes, which is characteristic of diffusive behavior. The same plateaus are also present in the product $`G_{\mathrm{N}N}^\mathrm{A}L`$ and in the curves of $`G_{\mathrm{N}S}^\mathrm{A}L`$ and $`G_{\mathrm{N}S}^\mathrm{F}L`$ shown in figure 4c.
In summary, we predict that the presence of a single superconducting contact destroys the sub-gap CPP-GMR of a phase-coherent magnetic multilayer. This arises because superconductivity suppresses transport in the majority sub-band in the ferromagnetic alignment, but has little effect in the antiferromagnetically aligned state. This drastic reduction in $`G_{\mathrm{N}S}^\mathrm{F}`$ compared with $`G_{\mathrm{N}N}^\mathrm{F}`$ is itself a remarkable superconductivity-induced magnetoresistance effect. The suppression will be lifted at high biases and finite temperatures, where transport occurs via both Andreev reflection and quasi-particle transmission. Furthermore, spin-flip scattering at the superconducting interface will weaken the effect: if the spin of an Andreev-reflected hole is flipped by such a process before it traverses the multilayer, only the contribution to the GMR from layers within a spin-flip scattering length of the N/S interface is suppressed.
Acknowledgments: This work is supported by the EPSRC, the EU TMR Programme and the DERA.
# Possibility of direct Mott insulator-to-superfluid transitions in weakly disordered boson systems
## Abstract
We study the zero-temperature phase transitions of a two-dimensional disordered boson Hubbard model at incommensurate boson densities. Via matrix diagonalization and quantum Monte Carlo simulations, we construct the phase diagram and evaluate the correlation length exponent $`\nu `$. In the presence of weak disorder, we obtain $`\nu =0.5\pm 0.1`$, the same value as that in the pure model, near the tip of a Mott insulator lobe, using the dynamical critical exponent $`z=2`$. As the strength of disorder is increased beyond a certain value, however, the value of $`\nu `$ is found to change to $`0.9\pm 0.1`$. This result strongly suggests that there exist direct Mott insulator-to-superfluid transitions around the tip of a Mott insulator lobe in the weak disorder regime.
PACS numbers: 74.40.+k, 67.40.Db, 05.30.Jp
Two-dimensional (2D) interacting boson systems display quantum phase transitions from an insulating state to a superfluid (SF) state at zero temperature. Physical realizations of this transition include disordered thin-film superconductors, Josephson-junction arrays, granular superconductors , and <sup>4</sup>He films adsorbed in porous media . In the absence of disorder the insulating state is a Mott insulator (MI), which has a commensurate value of the boson density since, otherwise, excess bosons or holes will move freely to yield a superfluid. The excess bosons or holes can be localized, on the other hand, in the presence of disorder, to make an incommensurate insulator. The resulting Bose glass (BG) insulating phase has attracted considerable interest recently. In mean-field theory , it has been argued that in the presence of disorder the transition from a Mott insulator to a superfluid occurs only through the BG phase. However, a recent quantum Monte Carlo study , performed at the tip of an MI lobe (i.e., at a commensurate value of the boson density), has demonstrated the existence of a direct MI-SF transition. The subsequent renormalization group study has been interpreted to suggest that the direct MI-SF transition occurs around the tip of the MI lobe in high dimensions ($`d>4`$) but only at the tip in lower dimensions ($`2\le d<4`$). These results raise an interesting question as to whether such a direct MI-SF transition occurs even at incommensurate values of the boson density.
This work investigates the possibility of the direct MI-SF transition off the tip of an MI lobe in two dimensions. We perform both matrix diagonalization and quantum Monte Carlo simulations, and find evidence for the direct transition around the tip of an MI lobe in the weak disorder limit. The superfluid onset points are identified and the correlation length critical exponents are estimated at various disorder strengths. The corresponding phase diagram for weak disorder is constructed.
We consider the boson Hubbard model with disorder, described by the Hamiltonian
$`H={\displaystyle \frac{U}{2}}{\displaystyle \sum _i}n_i^2-{\displaystyle \sum _i}(\mu +v_i)n_i-t{\displaystyle \sum _{\langle i,j\rangle }}(b_i^{\dagger }b_j+b_ib_j^{\dagger }),`$ (1)
where $`b_i^{\dagger }`$ and $`b_i`$ are the boson creation and destruction operators at site $`i`$ on an $`L\times L`$ square lattice, and $`n_i\equiv b_i^{\dagger }b_i`$ is the number operator. In Eq. (1), $`U`$ is the strength of the on-site repulsion, $`\mu `$ is the chemical potential, $`v_i`$ is the random on-site potential distributed uniformly between $`-\mathrm{\Delta }`$ and $`\mathrm{\Delta }`$, and $`t`$ measures the hopping strength between nearest neighboring sites. In the limit of a large number of particles, we may take the phase-only approximation and reduce Eq. (1) to the quantum phase Hamiltonian
$`H={\displaystyle \frac{U}{2}}{\displaystyle \sum _i}n_i^2-{\displaystyle \sum _i}(\mu +v_i)n_i-t{\displaystyle \sum _{\langle i,j\rangle }}\mathrm{cos}(\varphi _i-\varphi _j),`$ (2)
where $`\varphi _i`$ is the phase of the bosons condensed at site $`i`$ and satisfies the relation $`[n_i,\varphi _j]=-i\delta _{ij}`$. Note that in this representation $`n_i`$ denotes the deviation of the on-site boson number $`b_i^{\dagger }b_i`$ from its mean integer value $`n_0`$.
In order to determine the phase boundary, we first use the matrix diagonalization method with a truncated basis set. The zero-temperature phase boundary between the MI phase and the SF/BG phase is determined by comparing the ground state energy of the system at commensurate boson density and that of the system containing an extra hole or particle. The basis set is chosen to include the lowest-energy states of the Hamiltonian given by the first and the second term in Eq. (2) (i.e., the zeroth order in $`t`$) and the states coupled to these by hopping up to $`(2n+1)`$th order in $`t`$, where we set $`n=1`$ in this work. We identify the lowest-energy state among the states of which the total boson number is zero as the MI state and the lowest one among the states with the total boson number being unity as the SF/BG state, and determine the phase boundary by locating the values of $`\mu (>0)`$ and $`t`$ at which the energies of the two states become equal.
Since the basis set includes those states coupled up to the 3rd order in $`t`$, the result will be the same as that from the energy perturbation to the same order. The numbers of states involved in this calculation are $`4L^4-9L^2`$ and $`4L^2+1`$ in the cases of the BG/SF state and of the MI state, respectively. We adopt the Lanczos method to obtain the lowest eigenvalue and the corresponding eigenstate, and take 2000 disorder realizations for each system size. In the pure case ($`\mathrm{\Delta }=0`$), we find a phase diagram consistent with the previous perturbation result of Freericks and Monien .
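The diagonalization step can be illustrated with a much smaller toy version: the sketch below builds the quantum phase Hamiltonian (2) on a short 1D chain, truncating the on-site number to $`n_i`$ in {-1, 0, 1} instead of the paper's order-by-order basis construction (and ignoring the separate boson-number sectors), and obtains the lowest eigenvalue with a Lanczos-type sparse solver. All parameter values are illustrative.

```python
import numpy as np
from itertools import product
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

M, U, mu, t, Delta = 6, 1.0, 0.2, 0.1, 0.2
rng = np.random.default_rng(0)
v = rng.uniform(-Delta, Delta, M)            # random on-site potential

states = list(product([-1, 0, 1], repeat=M))  # truncated number basis
index = {s: k for k, s in enumerate(states)}
H = lil_matrix((len(states), len(states)))

for s in states:
    k = index[s]
    # Diagonal: (U/2) n^2 - (mu + v_i) n on every site.
    H[k, k] = sum(0.5 * U * n * n - (mu + v[i]) * n for i, n in enumerate(s))
    # Off-diagonal: -t cos(phi_i - phi_j) = -(t/2)(raise_i lower_j + h.c.).
    for i in range(M - 1):
        for d in (+1, -1):
            ni, nj = s[i] + d, s[i + 1] - d
            if -1 <= ni <= 1 and -1 <= nj <= 1:
                s2 = list(s); s2[i], s2[i + 1] = ni, nj
                H[index[tuple(s2)], k] += -t / 2

E0 = eigsh(H.tocsr(), k=1, which="SA", return_eigenvectors=False)[0]
print("ground-state energy:", E0)
```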
Figure 1 shows the zero-temperature phase diagram with the disorder strength $`\mathrm{\Delta }=0.2`$ up to the system size $`L=10`$ obtained from the truncated basis set. Compared with the MI lobe constructed from the previous perturbation result in the pure case, the lobe obtained here is rounder and shorter. In the absence of hopping ($`t=0`$), where no SF state is possible, the phase boundary (between the MI and BG states) approaches $`0.5-\mathrm{\Delta }`$ as the system size $`L`$ is increased, as it should.
To distinguish the SF phase from the BG phase, we use a localization argument, and define the participation ratio as
$$p_L=\left[\frac{\left(\sum _ip_i^2\right)^2}{\sum _ip_i^4}\right]_{av},$$
(3)
where $`p_i`$ is the probability that the particle is found at site $`i`$ , and $`[\cdots ]_{av}`$ denotes the average over different disorder realizations. At the generic SF-BG transition, the participation ratio satisfies the scaling relation
$$\frac{p_L}{L^2}=L^y\stackrel{~}{p}\left(\delta _tL^{1/\nu }\right),$$
(4)
where $`\delta _t=(t-t_c)/t_c`$ is the distance from the critical point $`t_c`$ and $`\nu `$ is the correlation length exponent. Here the scaling function $`\stackrel{~}{p}`$ and an additional exponent $`y`$ have been introduced . Note that since we consider only states in which the total number of bosons is unity, this method is useful for determining the critical point very close to the MI phase, i.e., on the phase boundary in Fig. 1.
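A sketch of the corresponding finite-size-scaling analysis follows: synthetic $`p_L`$ data generated from an assumed scaling function are collapsed according to equation (4) by scanning $`(t_c,\nu )`$ with $`y`$ held fixed; the roughness of the rescaled point set serves as a crude collapse-quality measure. Real input would of course be the measured participation ratios, and the tanh scaling function is purely a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
true = dict(t_c=0.12, nu=1.5, y=0.92)        # parameters of the fake data

def collapse_cost(data, t_c, nu, y):
    """Roughness of the rescaled points: small = good collapse."""
    pts = sorted(((t - t_c) / t_c * L**(1.0 / nu), p / L**(2.0 + y))
                 for L, t, p in data)
    z = np.array([q for _, q in pts])
    return float(np.sum(np.diff(z) ** 2))

# Fake measurements p_L(t) obeying the scaling form plus small noise.
data = [(L, t,
         L**(2.0 + true["y"]) * 0.5 * (1.0 + np.tanh(
             (t - true["t_c"]) / true["t_c"] * L**(1.0 / true["nu"])))
         + rng.normal(0.0, 1e-3))
        for L in (6, 8, 10) for t in np.linspace(0.05, 0.20, 12)]

grid = [(tc, nu) for tc in np.linspace(0.08, 0.16, 17)
                 for nu in np.linspace(1.0, 2.0, 21)]
t_c, nu = min(grid, key=lambda g: collapse_cost(data, g[0], g[1], true["y"]))
print("best (t_c, nu):", t_c, nu)
```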
Figure 2 presents the finite-size scaling of the participation ratio of the SF/BG state. From this scaling behavior, we obtain $`y=0.92\pm 0.03`$, $`\nu =1.5\pm 0.3`$, and $`t_c=0.12\pm 0.02`$; the latter separates the SF state from the BG one as $`t`$ (or $`\mu `$) is varied along the phase boundary in Fig. 1. Accordingly, the system undergoes a direct MI-SF transition as the phase boundary is crossed for small $`\mu `$ ($`0.18`$ in Fig. 1), i.e., near the tip of an MI lobe. Note that the possible origin of this direct MI-SF transition, as discussed later, is the abundance of particle-hole excitations around the tip of a lobe. This suggests that the size of the basis set including the states up to the 3rd order in $`t`$ might not be sufficient to determine the phase boundary unambiguously and to compute the corresponding exponents accurately. Such abundance of particle-hole excitations is expected to forbid the perturbation calculation to be accurate, possibly causing the discrepancy between the perturbative results and the quantum Monte Carlo results in the pure case without disorder.
It is thus necessary to investigate the transition via quantum Monte Carlo simulations, which are in general more reliable. For that purpose, we follow the standard procedure of transforming the 2D quantum phase Hamiltonian in Eq. (2) into the (2+1)-dimensional classical action
$$S=\frac{1}{K}\underset{(i,t)}{\overset{\mathbf{\nabla }\cdot 𝐉=0}{\sum }}\left[\frac{1}{2}𝐉_{(i,t)}^2-(\mu +v_i)J_{(i,t)}^\tau \right],$$
(5)
where the integer current vector $`𝐉_{(i,t)}=(J_{(i,t)}^x,J_{(i,t)}^y,J_{(i,t)}^\tau )`$ is divergenceless on each lattice site $`(i,t)`$ as indicated. The coupling constant $`K`$, corresponding roughly to $`\sqrt{t/U}`$, takes the role of the temperature.
We perform quantum Monte Carlo simulations on the classical action in Eq. (5), employing the heat bath algorithm at a classical temperature $`K`$. An important quantity in the analysis is the zero-frequency superfluid stiffness
$$\rho =\frac{1}{L_\tau }\left[\langle n_x^2\rangle \right]_{av},$$
(6)
where $`n_x=(1/L)\sum _{(i,t)}J_{(i,t)}^x`$ is the winding number along the (spatial) $`x`$ direction. The finite-size scaling behavior of the superfluid stiffness reads
$$\rho =L^{-(d+z-2)}\stackrel{~}{\rho }(L^{1/\nu }\delta ,L_\tau /L^z),$$
(7)
where $`\delta =(K-K_c)/K_c`$ is the distance from the critical point $`K_c`$, the spatial dimension $`d`$ is two in this work, and $`z`$ is the dynamical critical exponent.
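The way $`K_c`$ is located from this scaling form can be sketched as follows: at the critical point $`L^z\rho `$ becomes independent of $`L`$ (for fixed $`L_\tau /L^z`$), so curves for different $`L`$ cross there. The data below come from an assumed scaling function, not from the simulations; the exponent values are the ones quoted later in the text.

```python
import numpy as np

z, nu, K_c = 2.0, 0.5, 0.257                 # values quoted in the text

def rho_model(K, L):
    """Assumed scaling form: rho = L^{-(d+z-2)} * f(L^{1/nu} delta)."""
    return L**(-z) * np.exp((K - K_c) / K_c * L**(1.0 / nu))

K = np.linspace(0.24, 0.27, 200)
curves = {L: L**z * rho_model(K, L) for L in (6, 8, 10)}

# The crossing of the L = 6 and L = 10 curves estimates K_c.
i = np.argmin(np.abs(curves[6] - curves[10]))
print("estimated K_c =", K[i])
```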
In order to investigate the scaling behavior, the value of $`z`$ should be known in advance. At the very tip of an MI lobe, the direct MI-SF transition in the presence of weak disorder shows the same behavior as the pure system: the dynamical critical exponent $`z=1`$ and the correlation exponent $`\nu =0.67`$ . Off the tip, we expect the dynamical exponent $`z=2`$ in the disordered system since the compressibility is finite at the transition point ; the same number is expected even in the pure case. We thus set the dynamical exponent $`z=2`$, and measure the correlation length exponent $`\nu `$, the value of which was estimated in the previous studies to be $`0.9\pm 0.1`$ in the BG-SF transition and $`0.5\pm 0.1`$ in the MI-SF transition .
Keeping $`L_\tau /L^z`$ constant, we simulate systems of sizes $`L\times L\times L_\tau =6\times 6\times 9`$, $`8\times 8\times 16`$, and $`10\times 10\times 25`$. To tune the transition, we vary the temperature $`K`$ while fixing $`\mu `$ and $`\mathrm{\Delta }`$. We further take the average over 60–200 disorder realizations and, for each disorder realization, perform typically 4000–80000 Monte Carlo sweeps for equilibration, followed by equally many sweeps for measurement. The equilibration is checked through the use of the standard equilibration test technique .
In Fig. 3, we show the results with $`z=2`$ at $`\mu =0.3`$ and $`\mathrm{\Delta }=0.1`$. One can clearly identify the common crossing point of $`L^z\rho `$ at $`K_c=0.257\pm 0.003`$ as the critical point. The inset of Fig. 3 shows a scaling plot, which yields $`\nu =0.5\pm 0.1`$, the same value as in the pure model . On the other hand, in the strong-disorder case ($`\mathrm{\Delta }=0.3`$) shown in Fig. 4, the scaling behavior yields $`\nu =0.9\pm 0.1`$, which agrees well with the previous results for the BG-SF transition . These results demonstrate that in the weak disorder regime a direct MI-SF transition takes place not only at the tip but also off the tip of the MI lobe. Figure 5 presents the critical temperature $`K_c`$ as a function of the disorder strength $`\mathrm{\Delta }`$ for $`\mu =0.3`$, displaying the direct MI-SF transition occurring for $`\mathrm{\Delta }<0.16`$. It is of interest that the point at which the value of the correlation length exponent $`\nu `$ changes appears to coincide with the maximum slope of the $`K_c`$-$`\mathrm{\Delta }`$ curve.
Figure 6 summarizes the phase diagram of the disordered boson Hubbard model at the disorder strength $`\mathrm{\Delta }=0.2`$. The phase boundary between the MI and the BG phases is estimated by means of the matrix diagonalization in the system of size $`L=10`$. Here we have used the results in Ref. to scale $`K`$ as a function of $`t`$, which is valid only for small $`t`$.
Finally, we discuss the possible origin of the direct MI-SF transition around the tip of an MI lobe. One might think that off the tip the BG phase intervenes as a very thin sliver between the MI and SF phases, and that our results supporting the direct MI-SF transition merely reflect finite-size effects. Were this the case, the localization length would be large near the tip and the BG phase would manifest itself only on length scales exceeding the system size. Then, as pointed out in Ref. , an anomalous behavior is expected to occur, changing the correlation exponent $`\nu `$ and suggesting a continuous variation of the exponent with the thickness of the BG phase near the tip. On the other hand, our results in Figs. 5 and 6 indicate that the exponent changes quite abruptly, making the above scenario rather unlikely. Near the tip, the particle-hole excitations are ubiquitous and presumably tend to suppress the disorder effects, allowing the possibility of an extended state for the extra boson. Thus the resulting direct MI-SF transition around the tip may reflect the peculiar nature of boson localization.
In summary, we have studied the disordered boson Hubbard model by means of the matrix diagonalization with restricted basis states, which include those states overlapping with each other through nearest-neighbor hopping up to the 3rd order. The finite-size scaling of the participation ratio at $`\mathrm{\Delta }=0.2`$ gives evidence for the direct MI-SF transition at an incommensurate value of the boson density. To investigate this direct transition more clearly, we have investigated the scaling behavior of the superfluid stiffness via quantum Monte Carlo simulations and found that as the disorder strength is varied, the value of the correlation exponent $`\nu `$ changes rather abruptly from the weak-disorder value $`0.5`$ to the strong-disorder one $`0.9`$. This indicates that the direct MI-SF transition occurs for weak disorder at an incommensurate density. The possible origin of this transition could be the abundance of the particle-hole excitations around the tip of an MI lobe, which suggests the peculiar nature of boson localization.
S.Y.P. would like to thank H. Rieger, G. G. Batrouni, and J. Kisker for helpful discussions. This work was supported in part by the Ministry of Education through the BSRI Program and by the KOSEF through the SRC Program. The work of M.C.C. was also supported in part by the BSRI Program through Hanyang University.
## 1 Introduction
The dynamical, many-body problem of diffusing and reacting chemicals provides an ideal testing ground for the many methods of non-equilibrium physics. Such reaction systems occur in nature in a wide variety of guises, from conventional chemical reactions to more exotic processes such as domain coarsening in magnetic systems , and exciton annihilation in crystals . When an equilibrium has been reached in a reversible reaction, the methods of statistical mechanics can be used to give the appropriate reactant-density ratios. However, in the decay to equilibrium and in irreversible processes no such unified approach yet exists. Nevertheless, over the last few decades a great many studies have been made utilising numerical, exact and renormalisation-group techniques, and the time-dependence of reaction systems has been found to be remarkably rich. It has been shown that the collective behaviour of the reaction system is rather sensitive to the statistical properties of a single reactant’s motion, with a basic example given by the case of the single-species diffusive reaction $`A+A\to O`$. This is because the re-entrancy of random walks in two dimensions and below alters the density decay from the simple mean-field prediction. In these lower dimensions, the density loses all dependence on the microscopic reaction rate and becomes a universal function of only the diffusion length: the reaction is diffusion limited. In this paper we examine the behaviour of the $`A+A\to O`$ reaction process in the presence of an uncorrelated, quenched random velocity field. Before going on to describe our results and method, we review two cases, first the pure reaction-diffusion process, and second some studies of the reaction process in correlated long-range potential disorder.
The mutually annihilating random walk (MARW) $`A+A\to O`$ is a fundamental theoretical model in the study of reaction systems. It describes the process whereby diffusing $`A`$ particles may react pairwise, at a rate $`\lambda `$, on contact. The MARW and the coalescing random walk $`A+A\to A`$ are members of the same universality class , and therefore the results below are also valid for the coalescence process (albeit with trivial changes in the prefactors). The basic starting point in the analysis of such systems is the mean-field or rate equation. This corresponds to writing a self-consistent equation for the average reactant density $`\overline{n}`$ as a function of $`t`$ that ignores all spatial correlations (and, more seriously, anti-correlations),
$`{\displaystyle \frac{\partial \overline{n}}{\partial t}}=D\nabla ^2\overline{n}-2\lambda \overline{n}^2,`$ with the late time result $`\overline{n}\simeq {\displaystyle \frac{1}{2\lambda t}}`$ (1)
where we have introduced the diffusion constant $`D`$ and the reaction rate is $`\lambda `$. Neglecting the effect of correlations is equivalent to assuming that the reactants remain well mixed throughout the reaction process. However, due to the statistics of random walks in two dimensions and below, simple diffusion of particles itself is not sufficiently fast to maintain a well-mixed state. For this reason, the mean-field result (1) loses its validity in two dimensions and below. In these lower spatial dimensions the re-entrancy of random walks means that reactants come into contact many times, each time providing an opportunity for a reaction to take place. This implies that even a small reaction rate $`\lambda `$ does not limit the global rate of reaction. Therefore, in two dimensions and below the reaction becomes diffusion limited with the density decays
$`n={\displaystyle \frac{\mathrm{log}(Dt)}{8\pi Dt}}\text{ for }d=2,\qquad n\simeq {\displaystyle \frac{𝒜_d}{(Dt)^{d/2}}}\text{ for }d<2`$ (2)
where $`𝒜_d`$ is a universal amplitude, a function only of the dimension $`d`$. This amplitude has been calculated via an $`ϵ`$ expansion in $`d=2-ϵ`$ dimensions (also given in equation (8)) and the exact result $`𝒜_1=(8\pi )^{-1/2}`$ for one dimension can be found in reference .
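As a quick numerical check of the mean-field limit, the sketch below integrates the rate equation (1) for a spatially uniform system (so the Laplacian term drops) and compares with the late-time $`1/(2\lambda t)`$ asymptote; all parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, n0 = 0.5, 10.0
# dn/dt = -2 * lam * n^2 for a uniform density.
sol = solve_ivp(lambda t, n: -2.0 * lam * n**2, [0.01, 100.0], [n0],
                dense_output=True, rtol=1e-8)
for t in (1.0, 10.0, 100.0):
    print(t, sol.sol(t)[0], 1.0 / (2.0 * lam * t))   # converge at late times
```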
Over the last few years there has been an increased interest in the behaviour of reaction systems with reactants that perform motion different from pure diffusion. As well as ballistic gas-phase reactions , studies have appeared with reactants that perform diffusion in the presence of turbulence and also in quenched random velocity fields. Some basic categories for the statistical properties of these random velocity fields have been identified, see for example . In particular, a distinction can be made between uncorrelated Sinai disorder and the long-range correlated potential disorder, with the momentum-space correlator $`\gamma /k^2`$. Studies have been made of the behaviour of the reaction front in the segregated two-species $`A+B\to O`$ reaction with various forms of Sinai disorder and correlated potential disorder . A comprehensive study has been made of the $`A+A\to O`$ scheme with correlated potential disorder in two spatial dimensions and solutions also exist for this single-species reaction with random barriers and random traps . Furthermore, the specific case of single-species reactions in Sinai disorder was recently examined in one dimension in the context of aging phenomena, with the persistence exponent derived . In the two-dimensional case of $`A+A\to O`$ in potential disorder, it was found that the reaction process becomes sub-diffusion limited: the reaction rate is lower than in the MARW. The form for the density with this potential disorder, which should be compared with the two-dimensional MARW result in equation (2), was found to be
$`n\simeq {\displaystyle \frac{1}{\lambda ^{\prime }t^{1-\delta }}}`$ with $`\delta =\left[1+{\displaystyle \frac{8\pi }{\beta ^2\gamma }}\right]^{-1}>0,\qquad \lambda ^{\prime }=3D\beta ^2\gamma `$ (3)
where $`\beta ^2\gamma `$ measures the disorder strength, $`\delta `$ is a non-universal exponent, and $`\lambda ^{\prime }`$ is an effective reaction constant. The interpretation was that the long-range disorder produces potential traps on all length scales which, after some time, will contain at most one reactant. Reactions can then only occur when the trapped reactants move between traps. However, at the same time, as the reactants explore the landscape, they get caught by increasingly deep traps, leading to sub-diffusive motion.
In this paper, we present results from a study of the single-species $`A+A\to O`$ reaction process in the presence of an uncorrelated, quenched random velocity field in dimension two and below. Despite the lack of long-range correlations, the kinetic behaviour of diffusing particles is changed, as can be seen in the behaviour of the diffusion-length squared $`\langle r^2\rangle `$. In two dimensions, it is known from a renormalisation-group (RG) treatment that this length is altered by the presence of a logarithm . Below two dimensions, $`d=2-ϵ`$, the RG gives the two-loop result for the dynamic exponent of $`z=2+2ϵ^2+O(ϵ^3)`$ and hence the motion is sub-diffusive. The time-dependence in $`d=1`$ is also known , giving the following behaviour for $`\langle r^2\rangle `$ as a function of dimension
$`\langle r^2\rangle `$ $`\sim `$ $`[\mathrm{log}(Dt)]^4\text{ for }d=1`$
$`\langle r^2\rangle `$ $`\sim `$ $`(Dt)^{2/z}\text{ for }d<2`$
$`\langle r^2\rangle `$ $`\simeq `$ $`4D_Rt\left[1+{\displaystyle \frac{4}{\mathrm{log}(t)}}+O\left({\displaystyle \frac{1}{\mathrm{log}^2(t)}}\right)\right]\text{ for }d=2\text{ in weak disorder}`$ (4)
where, for comparison, the result for pure diffusion is $`\langle r^2\rangle =2dDt`$, and $`D_R`$ in equation (4) is the effective, measured diffusion constant in the late-time limit.
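A bare-bones numerical illustration of the quantity $`\langle r^2\rangle `$ is sketched below: walkers move on a 2D lattice whose four outgoing hop probabilities per site are fixed random numbers (an uncorrelated random velocity field), and the squared displacement is averaged over walkers in a single disorder realization. A proper disorder average repeats this over many fields, and the sizes here are far too small to resolve the corrections in equation (4); the code only shows how the measurement is set up.

```python
import numpy as np

rng = np.random.default_rng(3)
Lside, steps, walkers = 64, 400, 200
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# Quenched disorder: fixed per-site probabilities over the 4 directions.
p = rng.random((Lside, Lside, 4))
p /= p.sum(axis=2, keepdims=True)

r2 = np.zeros(steps)
for _ in range(walkers):
    X = Y = 0                              # displacement from the start site
    for s in range(steps):
        d = rng.choice(4, p=p[X % Lside, Y % Lside])
        X += moves[d][0]; Y += moves[d][1]
        r2[s] += X * X + Y * Y
r2 /= walkers
print("<r^2> at t = 100, 200, 400:", r2[99], r2[199], r2[399])
```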
We will show that below two dimensions a universal form similar to the pure-diffusion $`d<2`$ result in (2), with $`n`$ a function of the time $`t`$, cannot be found. This is due to the changed dynamic exponent $`z`$, which requires a dimensionful amplitude that must be a function of the disorder strength or the reaction rate. Though it is reasonable to study the density as a function of time, or more specifically as a function of the length $`Dt`$, this is not the appropriate length scale for the reaction-diffusion problem in the presence of disorder. The natural length to use is the disorder-averaged diffusion length. By rewriting the density decay as a function of the scale $`\langle r^2\rangle `$, a fully universal relation, similar to (2), can again be found
$`n\simeq {\displaystyle \frac{ℬ_d}{\langle r^2\rangle ^{d/2}}}`$ with $`ℬ_d=\left[{\displaystyle \frac{1}{3\pi ϵ}}+{\displaystyle \frac{2\mathrm{log}(128\pi )-11}{12\pi }}+O(ϵ)\right]\text{ for }d=2-ϵ.`$
The effect of Sinai disorder in two dimensions is not strong, and the alteration to the diffusion length (4) is not leading order. Nevertheless, we find the interesting result that a reaction process occurring in this disorder has a decay rate with a different leading-order amplitude from the MARW,
$`n`$ $`=`$ $`{\displaystyle \frac{\mathrm{log}(t)}{24\pi D_Rt}}+O\left(t^{-1}\right).`$
In fact for weak disorder the reactions occur faster than in the MARW, contrary to the effects seen in the case of long-range potential disorder. Nevertheless, when written in terms of the diffusion constant of the underlying lattice model (to be described below) it will be shown that the disorder strength increases the density’s amplitude. These effects come from two competing terms: the disorder-renormalisation of the reaction term that increases the rate of reaction and the disorder-renormalisation of the propagator that decreases the rate of reaction.
In the rest of this paper we describe how these results were derived in more detail. In section (2) we introduce the model and describe some of the steps taken in the field-theoretic analysis of the model. In particular, the relation to existing theories, of diffusion in Sinai disorder and reaction with pure diffusion, is discussed. The fixed point structure of the renormalised parameters is found and a perturbation expansion for the density, valid at early times, is obtained for $`d\le 2`$. In section (3) we obtain a Callan-Symanzik (CS) equation for the density as a function of time and show that no universal functional form can be found for $`d<2`$. However, by re-expressing the density in terms of the disorder-averaged diffusion length a universal form is obtained and the amplitude calculated to one-loop order. The behaviour at the upper-critical dimension $`d_c=2`$ is then examined and the density as a function of the disorder strength analysed. Finally, we close in section (4) with a discussion of the results obtained.
## 2 The model and method
In this section, we introduce the model to be studied and also describe some of the steps taken to achieve its representation in field-theoretic form. The method used is standard and we only dwell on details that are different from systems previously studied. After the model has been defined it is written in the language of second quantisation, which in turn allows a mapping to a path-integral formulation. An average over all possible realisations of the random velocity field can be taken at this point, to produce a weighting function (an action) that gives disorder-averaged correlation functions. This bare action is then regularised and finally used to calculate a perturbation expansion for the early-time, disorder-averaged reactant density.
The model is defined on an infinite $`d`$-dimensional hypercubic lattice with a lattice spacing of unity. Each site $`i`$ of this lattice contains $`n_i`$ particles where $`n_i`$ can take the values $`0,1,2\mathrm{}`$. The quenched disorder in the diffusion rates is modeled by particles hopping independently from a lattice site $`i`$ to a neighbouring site $`e`$ at a fixed rate $`p_{ie}`$. The rates $`\{p\}`$ are random and contain no long-range correlations. Reactions can occur if there are two or more particles, $`n_i2`$, on a lattice site. This happens at a rate $`\lambda n_i(n_i1)`$, where $`\lambda `$ is the on-site reaction rate, reducing the number of particles on that site by 2.
The field-theoretic description is obtained by writing a master equation that describes the time-dependent flow of probability between microstates. It is convenient to write this equation in the language of bosonic operators. Given that the set of occupation numbers $`\{n_i\}`$ defines a microstate of the system, the probability that the system is in such a microstate will be written $`P(\{n_i\})`$. The master equation is $`\partial _t|\psi (t)\rangle =-ℒ|\psi (t)\rangle `$, where the probability state vector $`|\psi (t)\rangle `$ and evolution operator $`ℒ`$ are
$`|\psi (t)\rangle `$ $`=`$ $`{\displaystyle \sum _{\{n_i\}}}P(\{n_i\}){\displaystyle \prod _j}(a_j^{\dagger })^{n_j}|0\rangle `$
$`ℒ`$ $`=`$ $`{\displaystyle \sum _i}\left[D{\displaystyle \sum _e}\left(p_{ie}a_i^{\dagger }a_i-p_{ei}a_i^{\dagger }a_e\right)-\lambda \left(1-(a_i^{\dagger })^2\right)a_i^2\right].`$ (5)
This algebraic description can now be converted to a field theory by using the coherent-state formalism. Observables, like the expected density $`n_j(t,\{p\})`$ at site $`j`$ at time $`t`$ for a given realisation of the disorder $`\{p\}`$ can be written as a path integration with respect to an action $`𝒮_p`$
$`n_j(t,\{p\})`$ $`=`$ $`{\displaystyle \int \prod _i\left[𝒟\varphi _i𝒟\varphi _i^{*}\right]\varphi _j\mathrm{exp}\left(-𝒮_p\right)}.`$
The integration is over the complex fields $`\varphi ,\varphi ^{*}`$ and the action $`𝒮_p`$ derived from equation (5) is
$`𝒮_p`$ $`=`$ $`{\displaystyle \sum _i}\left[-\varphi _i(t)+{\displaystyle \int _0^t}dt\left(\varphi _i^{*}\partial _t\varphi _i+\varphi _i^{*}{\displaystyle \sum _e}\left(p_{ie}\varphi _i-p_{ei}\varphi _e\right)-\lambda (1-(\varphi _i^{*})^2)\varphi _i^2\right)-n_0\varphi _i^{*}(0)\right].`$
It is convenient to shift the field $`\varphi ^{*}`$ by its classical value, $`\varphi ^{*}=\overline{\varphi }+1`$, and take the continuum limit in space. The action $`𝒮(\vec{V})`$ thus obtained is naturally split into four parts, $`𝒮_D+𝒮_{\vec{V}}+𝒮_R+𝒮_{n_0}`$: the diffusive, disorder, reaction and initial-condition parts. For the moment let us examine the diffusive and disorder parts
$`𝒮_D`$ $`=`$ $`{\displaystyle \int _0^t}dt{\displaystyle \int d^dx\left(\overline{\varphi }\partial _t\varphi -\overline{\varphi }\nabla ^2\left(D(x)\varphi \right)\right)}`$
$`𝒮_{\vec{V}}`$ $`=`$ $`{\displaystyle \int _0^t}dt{\displaystyle \int d^dx\overline{\varphi }\nabla \cdot \left(\vec{V}(x)\varphi \right)}.`$
The disorder appears in both the diffusion constant and in a random velocity vector-field $`\vec{V}(x)`$. However, as can be checked under the RG, the disordered, spatially varying component of $`D(x)`$ is irrelevant in the technical sense, and therefore we consider just the case of a uniform diffusion field $`D(x)=D`$. Furthermore, though we have used a lattice model in the derivation of the continuum field theory, the restrictions of the lattice formulation need not all be passed to the continuum theory. In particular, for a walker on a lattice with a bias $`V`$ there is a minimum dispersion $`D=V/2`$. In the continuum, there is no reason to impose such a restriction, and therefore we treat the magnitude of the diffusion constant and $`\vec{V}(x)`$ as fully independent quantities.
The velocity vector-field $`\vec{V}(x)`$ is taken to be a Gaussian random variable with the correlator $`\langle V^\alpha (x)V^\beta (y)\rangle =\mathrm{\Delta }\delta _{\alpha ,\beta }\delta (x-y)`$, i.e. there are no long-range correlations. An average can be performed on the component $`𝒮_{\vec{V}}`$ with respect to this field, to produce an action that gives disorder-averaged correlation functions,
$`\mathrm{exp}\left(-𝒮_\mathrm{\Delta }\right)={\displaystyle \int 𝒟\vec{V}\left[\mathrm{exp}\left(-\frac{1}{2\mathrm{\Delta }}\int d^dx(\vec{V}(x))^2\right)\mathrm{exp}\left(-𝒮_{\vec{V}}\right)\right]}.`$
This integral can be performed to give the disorder-averaged bare action $`𝒮_0=𝒮_D+𝒮_\mathrm{\Delta }+𝒮_R+𝒮_{n_0}`$, which, when suitably regularised, can be used for calculations. The various components of this action are
$`𝒮_D`$ $`=`$ $`{\displaystyle \int _0^t}dt{\displaystyle \int d^dx\overline{\varphi }\left(\partial _t-D_0\nabla ^2\right)\varphi }`$ (6)
$`𝒮_\mathrm{\Delta }`$ $`=`$ $`-{\displaystyle \frac{\mathrm{\Delta }_0}{2}}{\displaystyle \int d^dx\left(\int _0^tdt\varphi \nabla \overline{\varphi }\right)\cdot \left(\int _0^tdt\varphi \nabla \overline{\varphi }\right)}`$
$`𝒮_R`$ $`=`$ $`2\lambda _0{\displaystyle \int _0^t}dt{\displaystyle \int d^dx\overline{\varphi }\varphi ^2}+\lambda _0{\displaystyle \int _0^t}dt{\displaystyle \int d^dx\overline{\varphi }^2\varphi ^2}`$
$`𝒮_{n_0}`$ $`=`$ $`-n_0{\displaystyle \int d^dx\int _0^tdt\overline{\varphi }\delta (t)}.`$
Where we have written $`D_0`$, $`\mathrm{\Delta }_0`$ and $`\lambda _0`$ with subscripts to stress that they are bare quantities, but $`n_0`$ represents the initial density at $`t=0`$. It is seen that the action is a combination of that found in for diffusion in Sinai disorder (though with a trivial change of field variables) and that of the purely diffusive reaction process . Following the notation in previous works, the diagrams for the vertices are given in figure (1). The two reaction vertices, figure (1a), renormalise identically and hence we do not introduce a separate reaction parameter for each vertex. It should also be noted that the disorder vertex, figure (1b), when considered in momentum space, is proportional to the scalar product of the momenta flowing through the two outgoing $`\overline{\varphi }`$ fields. Finally, as with both the pure reaction and pure disordered-diffusion theories individually, the upper-critical dimension for the hybridised theory is $`d_c=2`$.
Because taking the continuum limit in space introduces unphysical divergences, the theory must be rendered finite before calculations can proceed. This was achieved by dimensional regularisation in $`d=2-ϵ`$ dimensions in the absence of the $`𝒮_{n_0}`$ term. In this theory there is no field renormalisation; in fact only the diffusion constant, disorder strength and reaction rate are renormalised. The following renormalised diffusion constant $`D`$ and dimensionless interaction parameters $`g`$ and $`h`$ are now introduced
$$Z_DD=D_0,\qquad Z_ggD\mu ^ϵ=Z_g\lambda =\lambda _0,\qquad Z_hhD^2\mu ^ϵ=Z_h\mathrm{\Delta }=\mathrm{\Delta }_0.$$
It can be shown that the disorder vertex and propagator are not renormalised by the reaction vertices, and therefore we can use the results found previously for $`Z_h`$ at the one-loop level and $`Z_D`$ at the two-loop level. In fact the only new diagrams that need be considered are the dressings of the reaction strength by the disorder. As can be checked, only one such diagram, see figure (1), diverges in $`d=2`$. This contribution is combined with the previously known result for the reaction-reaction renormalisation to give the following set of $`Z`$ factors
$$Z_D=1+\frac{h^2}{(4\pi )^2ϵ}+\cdots ,\qquad Z_g=1+\frac{(g-h)}{2\pi ϵ}+\cdots ,\qquad Z_h=1+\frac{h}{4\pi ϵ}+\cdots .$$
The flow function for the diffusion constant and the beta functions can now be evaluated
$`\varrho ={\displaystyle \frac{\partial \mathrm{log}(D)}{\partial \mathrm{log}(\mu )}}`$ $`=`$ $`{\displaystyle \frac{2h^2}{(4\pi )^2}}+O(h^3)`$
$`\beta _g={\displaystyle \frac{\partial g}{\partial \mathrm{log}(\mu )}}`$ $`=`$ $`{\displaystyle \frac{g}{2\pi }}\left(g-h-2\pi ϵ\right)+O(g^3,g^2h,gh^2)`$
$`\beta _h={\displaystyle \frac{\partial h}{\partial \mathrm{log}(\mu )}}`$ $`=`$ $`{\displaystyle \frac{h}{4\pi }}\left(h-4\pi ϵ\right)+O(h^3).`$
The fixed point structure for $`h`$ is of course unchanged by the presence of the reaction vertex, with $`h^{}=4\pi ϵ+O(ϵ^2)`$ and the dynamic exponent remains $`z=2+\varrho ^{}=2+2ϵ^2+O(ϵ^3)`$. However, the fixed point of the renormalised reaction strength is now shifted to the larger value of $`g^{}=6\pi ϵ+O(ϵ^2)`$ in the presence of Sinai disorder (the value with no disorder is $`g^{}=2\pi ϵ+O(ϵ^2)`$).
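The approach to these fixed points can be checked directly by integrating the one-loop flow equations towards the infrared (decreasing $`\mu `$); the sketch below does this for an illustrative value of $`ϵ`$ and compares the endpoints with $`g^{*}=6\pi ϵ`$ and $`h^{*}=4\pi ϵ`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.5                                    # illustrative: d = 2 - eps

def flow(ell, y):
    """ell = -log(mu), so the IR flow is minus the beta functions."""
    g, h = y
    return [-g * (g - h - 2.0 * np.pi * eps) / (2.0 * np.pi),
            -h * (h - 4.0 * np.pi * eps) / (4.0 * np.pi)]

sol = solve_ivp(flow, [0.0, 40.0], [0.1, 0.1], rtol=1e-8)
g_end, h_end = sol.y[:, -1]
print(g_end / np.pi, "vs g*/pi = 6*eps =", 6 * eps)
print(h_end / np.pi, "vs h*/pi = 4*eps =", 4 * eps)
```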
The perturbative density
With the theory now regularised, observables such as the reactant density $`n(t)`$ at an early time $`t`$ can be calculated perturbatively
$`n(t)`$ $`=`$ $`{\displaystyle \int \left[𝒟\varphi 𝒟\overline{\varphi }\right]\varphi \mathrm{exp}\left(-𝒮-𝒮_{ct}\right)}`$
where the bare action has been rewritten in terms of the renormalised action and the appropriate counter-term action, $`𝒮_0=𝒮+𝒮_{ct}`$. The initial conditions are included and the renormalised action is equivalent in form to the bare action written above, but with the replacements $`D_0\to D`$, $`\mathrm{\Delta }_0\to hD^2\mu ^ϵ`$ and $`\lambda _0\to gD\mu ^ϵ`$. The early-time density calculation is performed as a loop expansion of the above path integral, taken to the appropriate order of the small parameter $`ϵ=2-d`$. An expansion is also made in inverse powers of the initial density $`n_0`$, with only the leading-order, $`n_0`$-independent term retained (the reason for this limit of large initial density will be given in the next section).
The tree-level or mean-field result shown in figure (1d) is unchanged from the case of the MARW, equation (1),
$`n^{(0)}`$ $`=`$ $`{\displaystyle \frac{1}{2g\mu ^ϵDt}}.`$ (7)
However, at the one-loop level a new diagram appears, representing two particles interacting with the same disordered region at different times, and eventually annihilating. This addition, the third shown in figure (1f), is equal to
$`n_h^{(1)}`$ $`=`$ $`-{\displaystyle \frac{1}{(Dt)^{d/2}}}{\displaystyle \frac{h}{g}}\left[{\displaystyle \frac{1}{4\pi ϵ}}+{\displaystyle \frac{2\mathrm{log}(\pi )-3}{16\pi }}\right].`$
To produce the full, regularised one-loop contribution, this result is combined with the reaction-reaction contribution $`n_g^{(1)}`$, which is also common to , and the appropriate counter term $`n_{ct}^{(1)}`$
$`n_g^{(1)}={\displaystyle \frac{1}{(Dt)^{d/2}}}\left[{\displaystyle \frac{1}{4\pi ϵ}}+{\displaystyle \frac{2\mathrm{log}(8\pi )-5}{16\pi }}\right]`$ $`n_{ct}^{(1)}={\displaystyle \frac{h-g}{4\pi ϵg\mu ^ϵDt}}.`$ (8)
The full result can be expanded to finite order in $`ϵ`$ to produce the full one-loop perturbative result
$`n^{(1)}`$ $`=`$ $`{\displaystyle \frac{1}{16\pi (Dt)^{d/2}}}\left[2\mathrm{log}(8\pi )-5-{\displaystyle \frac{h}{g}}\left(2\mathrm{log}(\pi )-3\right)\right]+O(ϵ).`$ (9)
## 3 The reactant density
In this section, the late-time behaviour of the reactant density will be examined. The perturbative density has already been derived and is given by the sum of equations (7) and (9). Now a Callan-Symanzik (CS) equation will be used to relate this perturbative density to the required late-time, non-perturbative density. First, the differential CS equation will be obtained and solved as a function of time $`t`$, and the case of dimensions less than two will then be examined. It will be shown that no universal relation giving the density as a function of time can be obtained. However, by trading the time dependence in the CS equation for dependence on the disorder-averaged diffusion length, a fully universal relation between this length scale and the reaction decay rate will be obtained. The behaviour at the upper-critical dimension is then examined. It will be shown that the reactant density in two dimensions, written in terms of the measured, late-time diffusion constant, decays at a faster rate than in the case of pure diffusion-reaction. However, written in terms of the diffusion constant of the underlying lattice model, it is shown that as the disorder strength increases the reaction rate decreases.
A Callan-Symanzik equation for the density
All physical quantities must be independent of the arbitrary $`\mu `$ that was introduced in section 2 in the definition of the dimensionless interaction strengths $`g`$ and $`h`$. Using this fact, and also dimensional analysis, the following differential equation can be written for the density $`n`$
$`\left[(2+\varrho ){\displaystyle \frac{\partial }{\partial \mathrm{log}(Dt)}}+\beta _g{\displaystyle \frac{\partial }{\partial g}}+\beta _h{\displaystyle \frac{\partial }{\partial h}}-d{\displaystyle \frac{\partial }{\partial \mathrm{log}(n_0)}}+d\right]n`$ $`=`$ $`0`$ (10)
where the interaction dependent flow functions $`\varrho `$, $`\beta _g`$ and $`\beta _h`$ are given in the previous section. This can now be solved in the standard way by writing equation (10) as a complete differential with respect to a scaling variable $`s`$, thus
$`\left[{\displaystyle \frac{d}{d\mathrm{log}(s)}}+d\right]\stackrel{~}{n}`$ $`=`$ $`0.`$
where $`\stackrel{~}{n}=n(s)`$ is the value of the density at some scale. Following the notation used in we denote early-time quantities $`X(s)`$ (at a scale $`s`$) as $`\stackrel{~}{X}`$ and late-time quantities (at a scale $`s=1`$) simply as $`X(1)=X`$. The flow equations for the system parameters as a function of the scale $`s`$ are
$`{\displaystyle \frac{\partial \mathrm{log}(D\stackrel{~}{t})}{\partial \mathrm{log}(s)}}=2+\stackrel{~}{\varrho },\qquad {\displaystyle \frac{\partial \stackrel{~}{g}}{\partial \mathrm{log}(s)}}=\stackrel{~}{\beta }_g,\qquad {\displaystyle \frac{\partial \stackrel{~}{h}}{\partial \mathrm{log}(s)}}=\stackrel{~}{\beta }_h,`$ $`{\displaystyle \frac{\partial \mathrm{log}(\stackrel{~}{n_0})}{\partial \mathrm{log}(s)}}=-d.`$ (11)
and the equation relating the density at these two different scales $`s=1`$ and $`s`$ is
$`n(Dt,g,h,n_0,\mu )`$ $`=`$ $`s^d\stackrel{~}{n}(D\stackrel{~}{t},\stackrel{~}{g},\stackrel{~}{h},\stackrel{~}{n_0},\mu )`$ (12)
The procedure will now be to insert the perturbative results, (7) and (9), suitably rewritten with $`g\to \stackrel{~}{g}`$ etc., into the RHS of equation (12) and replace all arguments with the $`s=1`$ values using the solutions of equations (11). Before proceeding it should be noted that the solution of the flow equation for the initial density implies $`\stackrel{~}{n}_0=n_0/s^d`$. As we will be interested in the late-time regime $`s\to 0`$, this justifies the $`1/\stackrel{~}{n}_0`$ expansion in the perturbative calculation in the previous section.
Below two dimensions
The behaviour of the reactant density for $`d<2`$ is now considered. From the flow equations (11) in $`d=2-ϵ`$ it is seen that as $`s\to 0`$ the quantities $`\stackrel{~}{g}`$, $`\stackrel{~}{h}`$ and $`\stackrel{~}{\varrho }`$ approach their fixed-point values $`6\pi ϵ`$, $`4\pi ϵ`$ and $`2ϵ^2`$ respectively. Thus, time varies with the scale $`s`$ as $`s^z=\stackrel{~}{t}/t`$, where $`z=2+2ϵ^2`$ at the two-loop level . Using these results, the following form for the reactant density below two dimensions is found
$`n(t)`$ $`\sim `$ $`{\displaystyle \frac{C}{(Dt)^{d/z}}}.`$ (13)
This form appears similar to the $`d<2`$ case in equation (2) but actually lacks all the important universal features of that relation. By dimensional analysis, the prefactor $`C`$ must carry $`(2-z)d/z`$ units of length and must therefore be a function of $`\lambda `$, $`\mathrm{\Delta }`$ or both. No universal amplitude like $`𝒜_d`$ can be found that relates density to time for the case of reactions in a Sinai velocity field. Moreover, given that the dynamic exponent $`z=2+2ϵ^2+O(ϵ^3)`$ is itself an approximate quantity, there is little point attempting to derive the non-universal prefactor $`C`$. However, we will now show that, though there is no universality in a relation between the density and the length scale $`Dt`$ in the presence of disorder, a fully universal relation nevertheless exists between the reaction density and a different length scale: the disorder-averaged diffusion length $`\langle r^2\rangle ^{1/2}`$, which is the typical distance a single reactant explores. This quantity can be shown to vary as $`\langle r^2\rangle \sim t^{2/z}`$, also with a non-universal amplitude. We now rewrite the CS equation (10) for the density, exchanging the dependence on time for a dependence on $`\langle r^2\rangle `$
$$\left[2\frac{\partial }{\partial \mathrm{log}\langle r^2\rangle }+\beta _g\frac{\partial }{\partial g}+\beta _h\frac{\partial }{\partial h}-d\frac{\partial }{\partial \mathrm{log}(n_0)}+d\right]n=0.$$
This is solved as above with the scaling $`s^2=\langle \stackrel{~}{r^2}\rangle /\langle r^2\rangle `$. The perturbative density up to the one-loop level, $`\stackrel{~}{n}^{(0)}+\stackrel{~}{n}^{(1)}`$, given in equations (7) and (9), can also be rewritten with the substitution $`2dDt=\langle \stackrel{~}{r^2}\rangle `$, which is correct at this order. Combining these results gives the following late-time density expressed as a function of $`\langle r^2\rangle `$
$`n`$ $`=`$ $`{\displaystyle \frac{1}{\langle r^2\rangle ^{d/2}}}\left[\langle \stackrel{~}{r^2}\rangle ^{d/2}n(\langle \stackrel{~}{r^2}\rangle ,g^{*},h^{*},\stackrel{~}{n_0},\mu )\right]`$
$`n`$ $`\simeq `$ $`{\displaystyle \frac{1}{\langle r^2\rangle ^{d/2}}}\left[{\displaystyle \frac{1}{3\pi ϵ}}+{\displaystyle \frac{2\mathrm{log}(128\pi )-11}{12\pi }}+O(ϵ)\right].`$ (14)
Both $`n`$ and $`\langle r^2\rangle `$ are non-universal functions of $`t`$, in as much as the disorder strength $`\mathrm{\Delta }`$ enters explicitly. However, we have found a universal relation between them, independent of all system parameters except the dimension of space. Though the amplitude calculated at this order in an $`ϵ`$ expansion is unlikely to give a good result for one dimension, the scaling relation $`n\sim 1/\langle r^2\rangle ^{d/2}`$ is exact at all orders in perturbation theory. Therefore, in one dimension it is expected that the product $`n(t)\langle r^2(t)\rangle ^{1/2}`$ approaches a universal, constant value independent of the disorder strength, reaction rate, and initial density. This is in agreement with the result found in for the $`A+A\to O`$ reaction with infinite reaction rate. Given the validity of a factorisation assumption made in for Sinai disorder, the expected exact result for one dimension would have given an amplitude $`(4\pi )^{-1/2}`$, equivalent to the MARW.
Two dimensions
We now consider the case of the reactant density in two dimensions, $`d=2`$. To obtain the asymptotic behaviour it will only be necessary to use the tree-level perturbative density $`n^{(0)}`$ rewritten in terms of the diffusion length $`\langle \stackrel{~}{r^2}\rangle `$. The relevant equations to be inserted into the scaling relation are
$$\stackrel{~}{n}^{(0)}=\frac{2}{\stackrel{~}{g}\langle \stackrel{~}{r^2}\rangle },\qquad \stackrel{~}{g}\simeq \frac{6\pi }{\mathrm{log}(1/s)},\qquad \frac{\langle \stackrel{~}{r^2}\rangle }{\langle r^2\rangle }=s^2.$$
Combining these results yields the following forms for the reactant density in two dimensions
$`n(t)={\displaystyle \frac{\mathrm{log}\langle r^2\rangle }{6\pi \langle r^2\rangle }}+O\left(\langle r^2\rangle ^{-1}\right)`$ $`n(t)={\displaystyle \frac{\mathrm{log}(t)}{24\pi D_Rt}}+O\left(t^{-1}\right).`$ (15)
where in the second expression the density has been rewritten in terms of time by using the result (4). At this point comparison can be made with the MARW. The results (2) and (15) are of the same form, but differ from each other in the amplitude: the disorder renormalisation of the reaction term has decreased it from $`1/8\pi `$ to $`1/24\pi `$. This implies that reactions occur at an increased rate in the presence of Sinai disorder.
It is interesting at this point to consider the behaviour of different lattice models with fixed diffusion constant $`D`$. By writing a CS equation for the diffusion length, the following relation between the effective diffusion constant measured at late time, $`D_R`$, and the lattice-model parameter $`D`$ can be obtained
$`D_R`$ $`=`$ $`D\left(1-{\displaystyle \frac{\mathrm{\Delta }}{2\pi D^2}}+O\left(\mathrm{\Delta }^2\right)\right).`$ (16)
The above relation is valid for weak disorder (small $`\mathrm{\Delta }`$) and implies that the effective diffusion constant is reduced from the lattice-model value $`D`$ by the disorder strength. Hence, if written in terms of the long time and length scale behaviour of a lattice model with parameters $`D`$ and $`\mathrm{\Delta }`$, the reactant density becomes
$`n(t)={\displaystyle \frac{\mathrm{log}(t)}{24\pi Dt}}\left(1+{\displaystyle \frac{\mathrm{\Delta }}{2\pi D^2}}\right)+O\left(t^{-1}\right)+O\left(\mathrm{\Delta }^2\right).`$ (17)
In terms of the lattice parameter $`D`$ it is seen that the reaction rate is still faster than the MARW. However, as the disorder strength is increased the reaction rate starts to decrease. Unfortunately, the result is valid only for weak disorder and it is not possible to determine from (17) if a point is reached where there is a cross-over and the reaction rate becomes less than the MARW.
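To get a feel for the size of the weak-disorder correction, the snippet below evaluates the amplitude in equation (17) for a few illustrative values of $`\mathrm{\Delta }/D^2`$ and compares it with the MARW amplitude $`1/8\pi `$; within its regime of validity the amplitude stays below the MARW value.

```python
import numpy as np

for x in (0.1, 0.5, 1.0):                  # x = Delta / D^2 (dimensionless)
    amp = (1.0 + x / (2.0 * np.pi)) / (24.0 * np.pi)
    print(f"Delta/D^2 = {x}: amplitude = {amp:.5f}"
          f"  (MARW: {1.0 / (8.0 * np.pi):.5f})")
```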
## 4 Discussion
We have examined the late-time density of reactants in the single-species process $`A+A\to O`$ in the presence of an uncorrelated, quenched random velocity field: so-called Sinai disorder. Contrary to many existing works on reactions in disorder, the statistics chosen for this velocity field were such that there were no long-range correlations. Despite the lack of correlations, it was shown that in two dimensions and below the disorder changes the density decay on all time scales. The model, introduced in section (2), was analysed in the language of field theory and the renormalisation group was used to obtain the late-time reactant density. It was shown that the appropriate action that generates the disorder-averaged correlation functions is a combination of terms seen in the field-theoretic analyses of diffusion in Sinai disorder and the $`A+A\to O`$ reaction in the absence of disorder . The new interaction diagrams that appear in this hybridised theory were identified up to the one-loop level, and the perturbative density was calculated. It was shown that at this level a new term appears, corresponding to an annihilation of two particles that both interacted with the same region of the random field at earlier times. In section (3) the late-time forms for the reactant density were derived from the perturbative results, by the use of the appropriate Callan-Symanzik equation (10).
Below two dimensions the effects of trapping in Sinai disorder are severe and it was found that the reaction process becomes sub-diffusion limited, equation (13). It was shown that because of the changed dynamic exponent, the relation between the reactant density and time must be non-universal, i.e. dependent on the reaction rate or disorder strength. By writing the density as a function of time, the pure diffusion length $`Dt`$ is the implicit length scale. However, in the presence of disorder this is an inappropriate scale. Rather, the density should be written as a function of the disorder-averaged diffusion-length squared $`\langle r^2\rangle `$. By exchanging the dependence on $`t`$ for $`\langle r^2\rangle `$ at the level of the CS equation, a universal relation between these two non-universal quantities was obtained that is independent of the reaction rate, disorder strength and initial density.
At the upper-critical dimension $`d_c=2`$ the asymptotically-exact form of the density was obtained as a function of time, equation (15). The random velocity field has the effect of reducing the amplitude of the density decay, which implies that for weak disorder the rate of reaction is faster than for purely diffusing reactants: an effect coming from the disorder renormalisation of the reaction term given in figure (1c). Physically, it represents the process whereby two reactants are pushed into the same region of space by the disorder and therefore brought closer together than if they were simply diffusing without a bias. Taking the viewpoint of the lattice model, it is appropriate to express the density decay in terms of the model parameter $`D`$ rather than the measured, late-time effective diffusion constant $`D_R`$. In this case, it is seen that as the disorder strength is increased the reaction rate begins to decrease, equation (17). This occurs because the rate at which the particles explore space is reduced due to the diffusion-constant renormalisation, an effect coming from the dressing of the propagator by the disorder vertex. The result obtained here is an expansion in the disorder strength $`\mathrm{\Delta }`$ and is therefore valid only for weak disorder. It would be interesting to obtain results for the strong disorder case, perhaps from a numerical approach, to see if increasing the disorder further produces a density decay slower than the MARW.
Finally, we briefly compare the effects of the disorder examined in this paper and the case of long-ranged potential disorder. The effect of potential disorder in two dimensions is more drastic because the exponent of the decay is changed, whereas for Sinai disorder the amplitude is altered. The relative severity can be understood by the nature of the disordered landscape. In a study of diffusion in various forms of disordered landscapes, it was noted that potential disorder produces a landscape with deep trapping wells where, to escape from a trap, any path a particle might take involves movement in an unfavourable direction. However, this is not the case for Sinai disorder where the landscape (in two dimensions) does not have the morphology of potential wells and any pseudo-traps that might exist will tend to have velocity drifts nearby that allow for escape.
## Acknowledgements
We would like to thank Dr M. W. Deem, Dr J.-M. Park and Dr G. M. Schütz for useful discussions. Prof. D. Mukamel and Y. Kafri are thanked for useful comments on the manuscript. This research was partly supported by the Engineering and Physical Sciences Research Council under Grant GR/J78327.
# Infrared spectroscopy of low-mass X-ray binaries II
## 1 Introduction
In low-mass X-ray binaries (LMXBs), a neutron star or black hole primary accretes material from its late-type companion star, producing intense X-ray emission. LMXBs can be divided into subclasses according to location within the Galaxy, accretion characteristics, and luminosity (e.g. van Paradijs & McClintock 1995). The “bright Galactic bulge” sources (GBS) are located within $`15^{\circ }`$ longitude and $`2^{\circ }`$ latitude of the Galactic Centre (see e.g. Warwick et al. 1988) and are among the most luminous X-ray sources in the Galaxy (typical $`L_X\sim 10^{38}`$ erg s<sup>-1</sup>). The GBS have shown no X-ray bursts and attempts to detect orbital variability have been unsuccessful, suggesting that their periods may be longer than those of canonical LMXBs (Charles & Naylor 1992, hereafter CN92). In addition, heavy obscuration in the Galactic Centre region has made optical study of the GBS nearly impossible. There are as yet no observations which explain the dichotomy between the poorly understood GBS and the rest of the LMXBs. The most likely theories suggest that the secondary stars in the GBS are late-type giants, in contrast to the quasi-main sequence stars in other LMXBs.
However, the infrared provides us with an ideal window for observing these systems. The IR has two primary advantages: the late-type secondaries in LMXBs are brighter relative to the accretion discs, and, more importantly for the GBS, the ratio of $`V`$\- to $`K`$-band extinction is nearly 10 (Naylor, Charles, & Longmore 1991, hereafter NCL91). Over the past eight years, we have developed a program of IR observations of X-ray binaries (XRBs), beginning with the discovery via colours or variability of candidates for the IR counterparts to the X-ray sources using precise X-ray and radio locations (NCL91). Following this photometric survey we began an IR spectroscopic survey of LMXBs. In 1995 we obtained IR spectra of the LMXB Sco X-1 and the GBS systems GX1+4 and GX13+1; these results were published in Bandyopadhyay et al. (1997), hereafter Paper I.
Continuing our spectroscopic survey of LMXBs, in this paper we present new results obtained from UKIRT in 1997. In Paper I, we presented a $`JHK`$ spectrum of the prototype LMXB Sco X-1 taken in 1992 with an older, low resolution array and a short integration time. We have now observed Sco X-1 at higher resolution and with a substantially longer exposure time. We have also obtained the first $`K`$-band spectrum of the IR counterpart to the GBS Sco X-2 (GX 349+2), which was initially identified with a variable radio source (Cooke & Ponman 1991); variability in the $`R`$-band counterpart was later found by Wachter & Margon (1996). In Paper I we presented a spectrum of the heavily obscured GBS GX13+1 which confirmed the identity of the IR counterpart. We now present a new spectrum of this source which clearly shows the CO bands and metal lines of its late-type secondary. Finally, we have obtained IR spectra of candidate stars within the X-ray/radio error circles of the GBS GX5-1, GX17+2, and the LMXB 4U2129+47.
## 2 Observations and Data Reduction
We obtained $`K`$-band (2.00–2.45 $`\mu `$m) spectra using the Cooled Grating Spectrometer (CGS4) on the 3.8-m United Kingdom Infrared Telescope on Mauna Kea during the nights of 1997 July 1-3 UT. The 75 l/mm grating was used with the 150 mm camera and the 256$`\times `$256 pixel InSb array. Target observations were bracketed by observations of A-type stars for removal of telluric atmospheric features. A journal of observations is presented in Table 1.
The standard procedure of oversampling was used to minimise the effects of bad pixels (Wright 1995). The spectra were sampled over two pixels by mechanically shifting the array in 0.5 pixel steps in the dispersion direction, giving a full width half maximum resolution of 34 Å ($`\sim `$460 km s<sup>-1</sup> at 2.25 $`\mu `$m). We employed the non-destructive readout mode of the detector in order to reduce the readout noise. The slit width was 1.23 arcseconds which corresponds to 1 pixel on the detector. In order to compensate for the fluctuating atmospheric emission lines we took relatively short exposures and nodded the telescope so that the object spectrum switched between two different spatial positions on the detector. For some objects, the slit was rotated from its default north-south position to avoid contamination of the target spectrum by nearby stars. Details of the design and use of CGS4 can be found in Mountain et al. (1990).
The CGS4 data reduction system performs the initial reduction of the 2D images. These steps include the application of the bad pixel mask, bias and dark subtraction, flat field division, interlacing integrations taken at different detector positions, and co-adding and subtracting the nodded images (see Daly & Beard 1994). Extraction of the 1D spectra, wavelength calibration, and removal of the telluric atmospheric features was then performed using IRAF. A more detailed description of the data reduction procedure is provided in Paper I.
## 3 Results
### 3.1 Sco X-1
Our $`K`$-band spectrum of Sco X-1 is shown in Figure 1. Strong emission lines of HI, HeI, and HeII are the dominant features; no absorption lines from the secondary star can be distinguished. A list of identified lines is found in Table 2.
#### 3.1.1 The secondary star
In Paper I we presented a $`JHK`$ spectrum of Sco X-1, taken with the earlier 64$`\times `$64 CGS4 array. The spectrum showed no absorption features, and we concluded that the secondary is most likely a subgiant (see Paper I for a detailed discussion). By modelling the appearance of two spectral template stars in the $`K`$-band at the distance and reddening of Sco X-1, we determined that the mass-donating star in Sco X-1 is of a spectral type which shows little or no CO features, i.e. earlier than G5 for a subgiant (Kleinmann & Hall 1986, hereafter KH86). However, because the spectrum was taken with a short integration time on the older, lower resolution CGS4 array, there remained some doubt about this conclusion.
The Sco X-1 spectrum presented here removes this doubt. With the new array and a $`\sim `$53 minute integration time, there is still no evidence of absorption features intrinsic to the secondary star in the $`K`$-band spectrum, only the emission lines expected from the disc and/or the heated face of the secondary. The relatively long orbital period ($`P`$ = 18.9h) and the high mass transfer rate favour the formation of a large accretion disc (see Beall et al. 1984) and the optical spectrum is dominated by the disc (Schachter, Filippenko & Kahn 1989); consequently we might expect to see emission from the disc in the IR. However, based on our earlier modelling (see Paper I), in our new spectrum it is unlikely that contamination by the disc would be sufficiently high to completely obscure the strong CO features expected in the late-type subgiants. Therefore the mass-donating star in Sco X-1 is of at least an early G-type, and perhaps even earlier, making Sco X-1 somewhat unusual. The majority of identified secondaries in LMXB systems have been of a later type, most frequently K or M stars. As such, it is interesting to note that Sco X-1, the “prototypical LMXB”, may not be so typical after all.
#### 3.1.2 The P Cygni profile
Despite the lack of absorption features from the secondary star, one absorption feature is present in our Sco X-1 spectrum. The Br $`\gamma `$ emission line has a noticeable absorption dip on its blue edge: a P Cygni profile, indicative of a wind in the system. By subtracting the measured velocity of the star from the velocity of the blue edge of the absorption feature, the maximum outflow velocity of the wind is obtained. Although our spectrum has insufficient velocity resolution for an accurate measurement, we have made estimates of the radial velocity of Sco X-1 (using the HeI and HeII emission lines) and for the blue and red edges of the P Cygni profile; the results and the calculated outflow velocity are listed in Table 3. A P Cygni profile has also been observed in the LMXB GX1+4; the $`\sim `$250 km s<sup>-1</sup> outflow in that system likely comes from the M5iii donor star (Chakrabarty, van Kerkwijk, & Larkin 1998). However, the wind velocity we infer for Sco X-1, approximately 2600 km s<sup>-1</sup>, is supersonic, and thus unlikely to originate from the companion star (e.g. Warner 1995, Dupree 1986). As in CVs, the wind probably originates from the hot accretion disc, which produces a radiatively driven outflow. Given the dissimilar sizes of the two systems ($`P_{orb}\gtrsim `$260d for GX1+4; Chakrabarty & Roche 1997) and the contrasting mechanisms for driving the wind, the order-of-magnitude velocity difference between the winds in GX1+4 and Sco X-1 is not unexpected. Indeed, outflow velocities similar to that seen in Sco X-1 have been obtained in models of disc winds in CVs (e.g. Mason et al. 1995). However, to make accurate measurements of the outflow velocity and obtain information about the origin of the wind in Sco X-1 (e.g. from the shape of the line profile), high resolution observations of the Br $`\gamma `$ line are required.
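To make the procedure concrete, the following Python sketch implements the stated estimate: the maximum outflow velocity is the blueshift of the absorption edge relative to the Br $`\gamma `$ rest wavelength, corrected for the systemic radial velocity. The edge wavelength and systemic velocity used below are illustrative placeholders, not the Table 3 measurements.

```python
# Sketch of the outflow-velocity estimate; input numbers are placeholders.
C_KM_S = 2.998e5        # speed of light (km/s)
BR_GAMMA = 2.1661       # Br gamma rest wavelength (microns)

def outflow_velocity(lambda_blue_edge, v_systemic):
    """Maximum wind outflow velocity: the blueshift of the blue absorption
    edge relative to the line rest wavelength, minus the systemic velocity."""
    v_edge = C_KM_S * (lambda_blue_edge - BR_GAMMA) / BR_GAMMA  # negative
    return v_systemic - v_edge

# An edge blueshifted to ~2.1475 microns in a system receding at ~200 km/s
# gives an outflow of ~2800 km/s, of the order of the value quoted above.
print(outflow_velocity(2.1475, 200.0))
```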
#### 3.1.3 Shape of the continuum
We have measured the slope of the $`K`$-band continuum of Sco X-1, after dereddening the spectrum using $`E_{B-V}`$ = 0.3 (determined from the 2200Å interstellar absorption feature; Vrtilek et al. 1991). We fit the data with a power law of the form $`F_\lambda \propto \lambda ^\alpha `$ to represent the emission from a standard steady state accretion disc (e.g. Frank, King, & Raine 1992). Using this model, we find a power law index $`\alpha `$= -3.17$`\pm `$0.02 for the IR spectrum of Sco X-1. Shahbaz et al. (1996) found that the optical spectrum of Sco X-1 showed no evidence for any contribution from the secondary star and was well fitted with $`\alpha `$= -2.46; however, they used an older estimate of the reddening ($`E_{B-V}`$ = 0.15). Using a reddening of 0.3 and normalizing the flux to the observed $`V`$ magnitude of Sco X-1 (van Paradijs 1995) by folding the spectrum through the filter response, we find $`\alpha `$ = -2.78 for the optical spectrum. We then estimated the expected outer disc temperature ($`T_{out}`$) for a steady state disc with no irradiation and found $`T_{out}\sim `$4200 K. The continuum slopes of the observed optical and IR spectra are different from that expected from a steady state disc at this temperature; these results are summarized in Table 4. Figure 2 shows the observed optical and $`K`$-band spectra plotted with the simulated steady state disc for comparison.
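The fit itself reduces to a straight line in log–log space; a minimal sketch (our illustration, not the original fitting code), assuming the spectrum has already been dereddened:

```python
import numpy as np

def power_law_index(wavelength, flux):
    """Index alpha in F_lambda proportional to lambda**alpha, from a
    least-squares fit of log10(flux) against log10(wavelength)."""
    alpha, _ = np.polyfit(np.log10(wavelength), np.log10(flux), 1)
    return alpha

# Sanity check on synthetic data: a pure alpha = -3.17 law is recovered.
wl = np.linspace(2.0, 2.45, 200)             # K band, microns
print(power_law_index(wl, 7.0 * wl**-3.17))  # -3.17
```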
What is the cause of these discrepancies? As Sco X-1 is a persistant source, we expect that the disc is irradiated and therefore hotter than a standard steady state disc; note that an irradiated disc is also required to fit the UV continuum observed by Vrtilek et al. (1991). We therefore expect that the slope of the optical continuum will be considerably steeper (i.e. bluer) than that predicted for a steady state disc. Figure 2 illustrates that the observed optical spectrum is indeed much bluer than the simulated disc. (Note that the simulated spectrum has been arbitrarily normalized; thus this discussion concerns only the shape of the continuum, not the flux level.) However, it is unclear why the observed IR spectrum is shallower than the steady state disc model. If the disc is irradiated, we would expect the slope of the IR spectrum to be steeper than a standard disc. One possibility is that the secondary star contributes to the flux of the IR continuum. A cool, late type (K/M) star would have a relatively shallow slope in the IR, whereas an earlier type (A/F) star would have a slope comparable to that expected from a standard disc. In theory, therefore, flux from a cool secondary could cause the observed slope of the Sco X-1 spectrum to be shallower than expected from the disc alone. However, if a late-type companion contributed sufficient flux to alter the slope of the continuum, that would imply a significant amount of the IR flux would be originating from the secondary star. In this case, we would expect to see absorption features intrinsic to the secondary in the Sco X-1 spectrum (see e.g. the modelling discussed in Paper I). As there is no evidence for such features in our spectrum, it seems unlikely that flux from the mass-donating star is causing the discrepancy between the observed and expected continuum slopes. It is therefore difficult to reconcile the continuum shape of the IR spectrum within the context of either a simple irradiated or steady state disc model.
The differences between the HeI and Br $`\gamma `$ line strengths in our 1992 and 1997 spectra may indicate variability in the level of X-ray irradiation of the disc. After dereddening the 1992 $`K`$-band spectrum by $`E_{B-V}`$ = 0.3, we find the slope of the continuum to be -3.34$`\pm `$0.08, i.e. somewhat steeper than the 1997 spectrum. The bluer continuum and the stronger emission lines in the 1992 spectrum are consistent with a higher level of X-ray irradiation than in the 1997 data. As we do not have any direct information about the X-ray state of Sco X-1 during the 1992 observations, it is difficult to compare the X-ray behaviour of the source during the two epochs quantitatively. However, such changes in the continuum slope are consistent with the behaviour expected from a disc which is being irradiated by a variable level of X-ray emission. Further, we note that the difference in the resolution of the two spectra and the highly changeable atmospheric absorption in the region of the 2.059 $`\mu `$m HeI line makes any conclusions derived from the apparently large change between the HeI line strengths at the two epochs uncertain.
### 3.2 Sco X-2
Our $`K`$-band spectrum of Sco X-2 (GX349+2) was produced by combining our two observations for a total integration time of 80 minutes; the spectrum is presented in Figure 3. The prominent features are the Brackett $`\gamma `$ and HeI emission lines from the accretion disc and/or heated face of the secondary. The presence of these lines directly confirms the identification of the IR counterpart to the X-ray source. The relatively low signal-to-noise (S/N $`\sim `$ 8) despite the long exposure time is a result of the intrinsic faintness of the IR counterpart ($`K\sim `$ 14.6). The line identifications, measured wavelengths, and equivalent widths are listed in Table 2.
There is no evidence in our spectrum for the CO bandheads expected from a late-type secondary; however, the low S/N of our spectrum prevents us from drawing firm conclusions about the secondary’s spectral type. It is interesting to note that Southwell, Casares, & Charles (1996; hereafter SCC96) found no evidence in optical spectra of Sco X-2 for the $`\lambda `$6495 CaI/FeI feature, which is a signature of late G- and K-type stars (Horne, Wade, & Szkody 1985). We also note that our GX13+1 spectrum taken with CGS4 in 1995 did show clear evidence of the CO bandheads despite having an S/N comparable to that of our Sco X-2 spectrum shown here (Paper I). To date, the secondary in Sco X-2 has not been spectroscopically detected at either optical or IR wavelengths. The lack of spectroscopic features associated with the secondary may indicate that the mass-donating star in this system is of an earlier type than the K/M secondaries detected in a number of LMXBs. Alternatively, the accretion disc contamination could be sufficiently high at both optical and IR wavelengths as to obscure late-type stellar spectral features in both our spectrum and the SCC96 optical spectrum.
Several periodicities have been detected during observations of Sco X-2. SCC96 suggested a period of $`\sim `$14d, whereas Wachter (1997) found a period of 22.5$`\pm `$0.1h. As either of these periods could be the $`\sim `$1-day alias of the other, it is unclear which of the two is the true orbital period (Barziv et al. 1997). The 14d period is clearly inconsistent with a main sequence secondary star; assuming a Roche-lobe filling secondary, a spectral type of K0iii would be required. However, in this case we again note that we might expect to see the prominent CO absorption features of a K0iii secondary in our Sco X-2 spectrum. In contrast, a 22.5h orbital period would be insufficient to contain a late-type giant, but also indicates that the companion star is evolved; in this case, the secondary would most likely be a subgiant. Several other XRBs with orbital periods $`\sim `$20h have subgiant secondaries, including Sco X-1, Cen X-4 ($`P`$=15.6h; Shahbaz et al. 1993), and Aql X-1 ($`P`$=19.1h; Shahbaz et al. 1996), suggesting that Sco X-2 may be similar.
### 3.3 GX13+1
Figure 4 shows our $`K`$-band spectrum of GX13+1, produced by combining two spectra for a total on-source integration time of 88 min (S/N $`\sim `$ 30). The spectrum shows Br $`\gamma `$ and HeI emission, along with five <sup>12</sup>CO bands and three <sup>13</sup>CO bands which are characteristic of evolved late-type single stars (KH86). Marginal detections of CaI and MgI are also present. The line identifications, measured wavelengths, and equivalent widths are listed in Table 2.
#### 3.3.1 The secondary star
In Paper I we reported the detection of Br $`\gamma `$ emission in GX13+1, which confirmed the identity of the IR counterpart to the X-ray source. Also visible in the earlier spectrum were two CO bandheads; however, the much lower S/N of that spectrum only allowed us to constrain the secondary spectral type to be between K2 and M5. Together with an estimate of the distance, we also determined that the secondary in this system must be a giant or subgiant. These constraints were consistent with previous estimates of the secondary’s spectral type (e.g. Garcia et al. 1992). Using our new spectrum of GX13+1, we have now estimated the spectral type of the companion star in GX13+1 by the technique of optimal subtraction, which minimizes the residuals after subtracting different template star spectra from the target spectrum. This method is sensitive to the fractional contribution of the companion star to the total flux $`f`$, where 1-$`f`$ is the “veiling factor” (Marsh, Robinson, & Wood 1994).
First we determined the velocity shift of the spectrum of GX13+1 with respect to each template star spectrum by the method of cross-correlation (Tonry & Davis 1979). We then performed an optimal subtraction between each template star and the GX13+1 spectrum. The optimal subtraction routine minimizes the residual scatter between the target and template spectra by adjusting a constant $`f`$, which represents the fraction of light contributed by the template star. The scatter is measured by carrying out the subtraction and then computing the $`\chi ^2`$ between the resultant spectrum and a smoothed version of itself. The constant $`f`$ is therefore the fraction of light arising from the secondary star. The optimal values of $`f`$ are obtained by minimizing $`\chi ^2`$.
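The procedure can be summarized in a few lines; the sketch below assumes both spectra are continuum-normalized and already velocity-aligned by the cross-correlation step, and uses a simple boxcar as the smoothing filter (the filter choice is our assumption, not specified above):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def optimal_subtraction(target, template, smooth_pix=15,
                        f_grid=np.linspace(0.0, 1.0, 101)):
    """Scan the template fraction f, subtract f*template from the target,
    and compute chi^2 between the residual and a smoothed copy of itself;
    the minimizing f is the fraction of light from the secondary star."""
    chi2 = np.array([np.sum((r - uniform_filter1d(r, smooth_pix)) ** 2)
                     for r in (target - f * template for f in f_grid)])
    return f_grid[np.argmin(chi2)], chi2
```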
For this analysis we used a variety of giant and subgiant templates, ranging from G5 to M4. The optimal subtraction was performed in the spectral range 2.28–2.39 $`\mu `$m in order to encompass the first four <sup>12</sup>CO bands, which are the most prominent absorption features in the GX13+1 spectrum. The results of this analysis are presented in Table 5. The minimum $`\chi ^2`$ occurs for a K5iii companion star, where the secondary contributes about 45% of the flux at $`\sim `$2.3 $`\mu `$m. Figure 5 illustrates the result of the optimal subtraction of the K5iii template from the GX13+1 spectrum. We note that the large IR variability which was observed by CN92 ruled out the possibility of the IR flux arising primarily from the mass-donating star; in other words, if the disc contribution to the IR flux was small, then we would expect the IR magnitude of GX13+1 to remain largely constant (as it does in GX1+4). The substantial IR variability ($`\sim `$ 1 mag at $`K`$) indicates that we should expect X-ray heating of the disc and secondary to be a major contributor to the IR flux from the system. The result of our optimal subtraction, which indicates that the K5iii secondary contributes less than half of the $`K`$-band flux, is completely consistent with this expectation.
The ratio of the equivalent widths of <sup>12</sup>CO to <sup>13</sup>CO depends upon luminosity class (see e.g. Dhillon & Marsh 1995), ranging from 90 in main sequence stars to $`\sim `$10 in giants (Campbell et al. 1990). To obtain a rough estimate of this ratio in GX13+1, we measured equivalent widths by first establishing the continuum as a linear function between marked points on either side of each feature. The flux was then determined by summing the pixels within the marked area and subtracting the continuum. Measurement of the ratio of the most clearly resolved <sup>12</sup>CO and <sup>13</sup>CO pair, the (2,0) bandheads, yielded a value of $`\sim `$9, which is comparable to that expected for a field giant and inconsistent with a main sequence secondary.
#### 3.3.2 The P Cygni profile
In addition to the absorption features from the secondary, the Br $`\gamma `$ line in our GX13+1 spectrum exhibits a prominent P Cygni profile. Similarly to Sco X-1, using the emission lines we have estimated the radial velocity of GX13+1 as well as measuring the location of the profile edges (see Table 3). The inferred wind velocity, $`\sim `$ 2400 km s<sup>-1</sup>, is similar to that calculated for Sco X-1; as in Sco X-1, the outflow probably originates in the disc rather than the mass-donating star. A high resolution spectrum of the Br $`\gamma `$ line is necessary to obtain accurate information about the outflow in this system.
#### 3.3.3 System parameters
There have been several attempts at finding an orbital period in GX13+1. Groot et al. (1996) have observed a possible 12.6$`\pm `$1 day modulation, while Corbet (1996) has reported a 25.2d period in the GX13+1 XTE ASM light curve. We note that the Groot et al. period is half of that found by Corbet. In contrast, Wachter (1996) found some evidence for a 19.5h periodicity in $`K`$-band photometric data. Using Paczynski’s formula for a Roche lobe filling star and an orbital period $`P`$ = 25.2d, the mean density for the secondary would be 0.0003 g cm<sup>-3</sup>; for a giant, this density corresponds to an approximate spectral type of K5. The K5iii spectral type we find for the secondary in GX13+1 therefore correlates well with the reported 25.2d period, and supports the possible identification of this period as an orbital modulation. In addition, the low $`L_X/L_{opt}`$ ratio in GX13+1, possibly the result of having a large region involved in X-ray reprocessing (i.e. a large disc), also reinforces the probability of a long orbital period in this system (CN92). Our spectrum rules out an orbital period of 19.5h, which would be insufficient to contain the orbital separation of a 1.4$`M_{\mathrm{\odot }}`$ neutron star and a Roche lobe filling K5iii companion ($`M\sim 5M_{\mathrm{\odot }}`$, $`R\sim 25R_{\mathrm{\odot }}`$; Allen 1973). The nature of the 19.5h periodicity is therefore unclear.
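The density estimate follows directly from the period. Using the common approximation $`\overline{\rho }\approx 110/P_{hr}^2`$ g cm<sup>-3</sup> for a Roche-lobe-filling star (the exact coefficient depends weakly on the assumed mass ratio), a short sketch reproduces the quoted value:

```python
def roche_mean_density(p_hours):
    """Mean density (g/cm^3) of a Roche-lobe-filling star, using the
    usual approximation rho_bar ~ 110 / P_hr^2 (coefficient approximate)."""
    return 110.0 / p_hours**2

print(roche_mean_density(25.2 * 24.0))  # ~3.0e-4 g/cm^3, as quoted
```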
Using an apparent $`K`$ magnitude of 12 (CN92) and an absolute magnitude $`M_K`$ = -3.8 for a K5iii star (Allen 1973; Koorneef 1983), together with a colour excess of $`E_{B-V}`$ = 5.7 (van Paradijs 1995), we find a distance to GX13+1 of 6.9 kpc. There are several published estimates for the reddening ($`A_V`$) of GX13+1, ranging from $`\sim `$17.6 to $`\sim `$13.2 (see the discussion in CN92), leading to an uncertainty in our distance calculation of approximately 1 kpc. Therefore we adopt a value of 7$`\pm `$1 kpc for the distance to GX13+1. This value is consistent with the previous estimate of 8-9 kpc, which was based upon the mean distance to the Galactic Centre and has been used generally as an estimate for the distances to all of the GBS (see e.g. Naylor & Podsiadlowski 1993).
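For reference, the calculation is a distance-modulus inversion with a $`K`$-band extinction term; the $`A_K/A_V`$ ratio below is our assumption (it depends on the adopted extinction law, which is exactly what drives the quoted $`\sim `$1 kpc uncertainty):

```python
def distance_kpc(m_k, abs_m_k, ebv, r_v=3.1, ak_over_av=0.09):
    """Invert m - M = 5 log10(d_pc) - 5 + A_K, with the K-band extinction
    A_K = (A_K/A_V) * R_V * E(B-V); the A_K/A_V ratio is an assumption."""
    a_k = ak_over_av * r_v * ebv
    return 10 ** ((m_k - abs_m_k - a_k + 5.0) / 5.0) / 1000.0

print(distance_kpc(12.0, -3.8, 5.7))                    # ~7.0 kpc
print(distance_kpc(12.0, -3.8, 5.7, ak_over_av=0.112))  # ~5.8 kpc
```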
#### 3.3.4 Evolutionary status
On the basis of their X-ray properties, LMXBs have been divided into two classes, known as $`Z`$ and atoll sources from the shape of their X-ray colour-colour diagrams (see van der Klis 1995 for a review). GX13+1 has been classified as an atoll (less luminous) source (Hasinger & van der Klis 1989, hereafter HK89), although earlier studies placed GX13+1 into the category with the $`Z`$ (high luminosity) sources (e.g. White, Stella, & Parmar 1988). HK89 suggested that the evolutionary history of the $`Z`$ and atoll LMXBs could account for their observational differences, with the $`Z`$ sources having evolved secondaries and hence long orbital periods ($`\gtrsim `$15h) whereas the atoll sources would have main sequence secondaries and $`P_{orb}\lesssim `$5h. However, all of the proposed orbital periods found to date in GX13+1 are substantially longer than the $`\sim `$5h periods predicted for an atoll source and in fact are similar to the periods found in the $`Z`$ sources Sco X-1 and Cyg X-2. Additionally, our spectrum shows unequivocally that the mass-donating companion in GX13+1 is an evolved late-type star, making a $`\sim `$5h orbital period impossible.
There are several possibilities for resolving the discrepancy between the observed nature of GX13+1 and the predictions arising from the $`Z`$/atoll classification scheme. First, GX13+1 may be mis-classified as an atoll source. Both the high X-ray luminosity of GX13+1 and the detection of radio flux from the source are more typical of the luminous $`Z`$ LMXBs than atoll sources; additionally, a $`\sim `$60 Hz QPO has recently been found (Homan et al. 1998). However, its colour-colour diagram is not consistent with the X-ray properties of the six known $`Z`$ sources. A second possibility is that the existence of “hybrid” systems such as GX13+1, GX9+9, and GX9+1, which exhibit both $`Z`$ and atoll characteristics, should be considered as a third “intermediate” class of objects where the $`Z`$ and atoll classes overlap (e.g. Kuulkers & van der Klis 1998). Finally, it is possible that the evolutionary distinction between $`Z`$ and atoll LMXBs does not arise from a straightforward dichotomy between evolved and main sequence secondaries. For example, some systems could have evolved companions but lower accretion rates and/or weaker magnetic fields than canonical $`Z`$ sources. In this case, sources could exhibit atoll-type X-ray properties as well as long periods; in addition to GX13+1, the atoll source AC 211 ($`P_{orb}`$ = 17.1h) could fit into such a category (van der Klis 1992). It would then become difficult to make predictions about the orbital period and nature of the companion star in a given system purely on the basis of its X-ray behaviour. However, we note that the $`Z`$/atoll evolutionary dichotomy could still arise from fundamental differences in the types of mass-donating star in the two classes, but in a more specific manner than a simple division between evolved and main sequence companions.
### 3.4 GX5-1, GX17+2, 4U2129+47
In Figures 6 and 7 we show the spectra obtained of candidate IR counterparts to the GBS GX5-1 and GX17+2 and the LMXB 4U2129+47. To date, no IR counterpart has been confirmed for any of these elusive LMXBs.
#### 3.4.1 GX5-1
Other than Sco X-1, GX5-1 is the brightest of the persistent LMXBs, but no X-ray periodicity has been found nor has an IR counterpart been identified (van der Klis et al. 1991). An Einstein X-ray position and a precise radio position exist for GX5-1, but candidates for the IR counterpart to this heavily obscured source only become visible at $`K`$ (see e.g. the finding charts in NCL91). We obtained spectra of the two stars closest to the radio error circle, designated by NCL91 as stars 502 and 503. Neither of the spectra, shown in Figure 6, shows any evidence for the signature Br $`\gamma `$ emission we would expect in an LMXB. Both stars 502 and 503 are faint, with $`K`$ magnitudes of $`\sim `$14 and $`\sim `$15 respectively (NCL91); combined with the relatively short integration time (16 min on each source), the generally poor quality of the two spectra (S/N $`\sim `$ 3) is unsurprising. However, in such a bright X-ray source, we might expect the emission lines from the disc to be especially strong; while not conclusive, our spectra therefore cast doubt on the potential of either of these two stars to be the IR counterpart of the X-ray source. We note that there is a star northeast of star 502 (designated as star 513 by NCL91) which also lies within the radio error circle of GX5-1 and as such is also a prime candidate for the IR counterpart. Due to the star’s extreme faintness ($`K\lesssim 16`$), we did not obtain a spectrum for this candidate. Obtaining photometric and spectroscopic information on star 513 should therefore be a primary objective for future IR observations of GX5-1.
#### 3.4.2 GX17+2
On the basis of its X-ray position, the GBS GX17+2 was optically identified with a G star known as “star TR” more than 25 years ago (Tarenghi & Reina 1972; Davidsen, Malina, & Bowyer 1976). The location of GX17+2 was further refined by a subarcsecond radio position (Hjellming 1978) but the putative counterpart has shown almost no optical variability (e.g. Margon 1978). We note, however, that Deutsch et al. (1996) have found a small discrepancy between the optical and radio positions of GX17+2. The IR counterpart to star TR ($`K\sim `$14) has shown variability, but a consistent fit for the colours, extinction, distance, and spectral type could not be found (NCL91). Based on the variety of X-ray periods which have been suggested for this source, Bailyn & Grindlay (1987) theorized that GX17+2 is a triple system, with a giant star in orbit around a short-period LMXB. However, the discrepancy between the extinctions derived from optical and X-ray measurements led NCL91 to conclude that star TR is a foreground star unassociated with the X-ray source which is superimposed upon a highly reddened IR counterpart to GX17+2. Our $`K`$-band spectrum, shown in the top panel of Figure 7, shows no evidence either for emission lines or absorption features from a late-type IR counterpart obscured by star TR. Indeed, the featureless spectrum is consistent with an early G-type evolved star or a main sequence star earlier than G5 (KH86), as would be expected from star TR if it is an unassociated, foreground G star. Therefore, our spectrum has too short an integration time and/or insufficient resolution to show features from a faint, variable IR counterpart to GX17+2 which may be partially hidden behind star TR.
#### 3.4.3 4U2129+47
The neutron star LMXB 4U2129+47 is known to have an orbital period of 5.2h and has an optical counterpart, V1727 Cyg (Thorstensen et al. 1979). Radial velocity studies produced estimates for the masses of the compact object and secondary star of 0.6$`\pm `$0.2$`M_{\mathrm{\odot }}`$ and 0.4$`\pm `$0.2$`M_{\mathrm{\odot }}`$ respectively (Thorstensen & Charles 1982). Then, after entering X-ray quiescence in 1983, all evidence of photometric variability and radial velocity variations disappeared (e.g. Garcia et al. 1989, hereafter G89). The optical counterpart visible during the low state is an F8iv (Cowley & Schmidtke 1990) whose colours were initially observed to be inconsistent with a normal star (Kaluzny 1988). However, subsequent observations failed to confirm this abnormality in the optical colours of the F star (Deutsch et al. 1996). G89 proposed that 4U2129+47 is a triple system, although it has also been suggested that the F star is a foreground source unassociated with the system. Our $`K`$-band spectrum of the F star appears in the bottom panel of Figure 7. The spectrum is featureless (the spike at $`\sim `$2.05 $`\mu `$m is a residual of the telluric absorption feature removal); we see no indication of emission from an accretion disc, nor do we see any evidence for absorption features from the Mv star suggested by G89 as a hypothetical third member of the triple system. However, we note that even in the $`K`$-band, the flux from a faint Mv star would likely be obscured by the brighter F star.
## 4 Conclusions
We have presented $`K`$-band spectra of the LMXBs Sco X-1, Sco X-2, and GX13+1. The IR spectrum of Sco X-1 exhibits the emission features expected from a luminous accretion disc but does not show any absorption features arising from the system secondary, reinforcing our conclusion that the secondary is a subgiant of spectral type G5 or earlier. The spectrum of the proposed counterpart to Sco X-2 shows Br $`\gamma `$ emission from the accretion disc, confirming the identification. The $`K`$-band spectrum of GX13+1 exhibits both Br $`\gamma `$ emission and CO absorption bands; optimal subtraction indicates that the secondary in GX13+1 is most likely a K5iii, with the accretion disc contributing $`\sim `$50% of the $`K`$-band flux. In addition, the Br $`\gamma `$ emission lines in both Sco X-1 and GX13+1 show P Cygni profiles, indicating the presence of outflows in both systems with velocities $`\sim `$2000 km s<sup>-1</sup>. Spectra of two stars in the error circle of GX5-1 do not show the Br $`\gamma `$ emission signature which would identify the IR counterpart. Similarly, spectra of the G star at the position of GX17+2 and the F star at the position of 4U2129+47 fail to exhibit any evidence that they are physically associated with the X-ray sources.
It is interesting to note that while our GX13+1 spectrum clearly shows the features of the K giant secondary, neither Sco X-1 nor Sco X-2 appear to exhibit any spectral lines characteristic of late-type (late G to M) stars. Significant similarities in the behaviour of Sco X-1 and Sco X-2 have also been seen in the optical and X-rays, indicating a fundamental similarity in the nature of the two systems. Furthermore, the clear differences in these spectra occur despite the fact that GX13+1, Sco X-1, and Sco X-2 all have relatively long orbital periods and are luminous, persistent XRBs. Far from supporting the idea that most of the GBS have late-type evolved secondaries, this result leads us to speculate that the mass-donating stars in Sco X-1 and Sco X-2 may prove to be very different than previously assumed.
There have been two primary schemes proposed for the classification of LMXBs. In a statistical study of a flux-limited sample of LMXBs, Naylor and Podsiadlowski (1993, hereafter NP93) asserted that the GBS distribution is associated with that of M giants in the Galactic bulge. They classified Sco X-1, Sco X-2, and GX13+1 (along with seven other XRBs) as bulge sources on the basis of their location, the characteristics of their X-ray spectra, and the fact that they are persistent X-ray sources. In contrast, the $`Z`$/atoll scheme categorizes LMXBs based on the shape of their X-ray colour-colour diagrams (HK89). With this method, six of the sources classified as GBS by NP93, including Sco X-1 and Sco X-2, are labelled as $`Z`$-sources, whereas the remaining GBS, including GX13+1, are placed in the atoll category. Note that the $`Z`$ and atoll sources appear to be randomly scattered amongst the bulge and disc populations categorized by NP93. HK89 suggested that a primary difference between the $`Z`$ and atoll sources may originate with the mass-donating stars; namely, that the $`Z`$ sources have evolved secondaries while the atoll sources have main sequence companions. It is clear from our spectra that the dichotomy between $`Z`$ and atoll sources cannot be this straightforward, as both Sco X-1 and GX13+1 have evolved secondaries. However, if the spectral type of the secondaries in Sco X-1 and Sco X-2 prove to be considerably earlier than the K giant found in GX13+1, then the evolutionary scenarios of Sco X-1 and Sco X-2 are indeed likely to be quite distinct from sources such as GX13+1. We also note that the $`Z`$ source Cyg X-2 has an evolved A9 companion (Casares, Charles & Kuulkers 1998); Sco X-1 also has an evolved secondary of type earlier than G5, and Sco X-2 may be similar.
We therefore hypothesize that the secondaries in the six $`Z`$ sources may all be early-type evolved stars. We speculate further that the distinction between the evolutionary history of the $`Z`$ and atoll sources may not be that the former have evolved secondaries and the latter have main sequence companions, but instead that $`Z`$ sources have evolved early-type companions while most other LMXBs have late type (either evolved or main sequence) secondaries. If this were the case, it may also help to explain why the IR counterparts for the other three $`Z`$ sources (GX5-1, GX17+2, and GX340+0) have proven so elusive (although GX17+2 is a special case due to the foreground star). Until now it has been assumed that the secondaries in these systems would be late-type evolved stars and therefore bright in the $`K`$ band ($`V-K\sim `$ 4). Yet the few candidate stars found in deep $`K`$-band images of the GX5-1 and GX340+0 fields have failed to show photometric or spectroscopic evidence that they are in fact the counterparts. To elude detection, a late-type evolved counterpart would have to be at a greater distance than we believe from estimates of the X-ray column densities. However, if the secondaries in these systems are early-type stars (with $`V-K\sim `$ 0), then the counterparts would naturally be fairly faint in the $`K`$-band. It would then be unnecessary to invoke a significantly larger extinction and/or distance to these sources in order to explain our inability to locate their IR counterparts.
Our spectra provide clear evidence that neither the bulge/disc nor the $`Z`$/atoll schemes can be explained by a simple distinction between evolved and main sequence secondary stars. However, we note that the early-type/late-type dichotomy proposed here could fit into the $`Z`$/atoll classification scheme without having to disregard the bulge/disc association made by NP93. Although currently the evidence for this claim is limited, we believe that this hypothesis is well worth exploring.
## 5 Acknowledgements
The authors would like to thank Deepto Chakrabarty for helpful discussions about winds in X-ray binaries, and Erik Kuulkers for answering our questions about $`Z`$ and atoll sources. We would also like to thank Tom Marsh for the use of his $`\mathrm{molly}`$ routine. The data reduction was carried out using the $`\mathrm{iraf}`$ and $`\mathrm{ark}`$ software packages on the Oxford starlink node. TN was supported by a PPARC Advanced Fellowship. The United Kingdom Infrared Telescope is operated by the Royal Observatory Edinburgh on behalf of the UK Particle Physics and Astronomy Research Council.
Figure 1: $`R(1/(T_1T))=[(T_1T)_{\mathrm{meas}.}^{-1}-(T_1T)_n^{-1}]/(T_1T)_n^{-1}`$ of the <sup>63</sup>Cu(2) rate as a function of magnetic field at 95 K; the closed squares are from Mitrović et al. $`(T_1T)_n^{-1}`$ is found from a fit to high-temperature behavior and is given by 1648 s<sup>-1</sup>/(103 K + $`T`$). The solid curve is the calculated pairing-fluctuation contribution at 95 K. The open circles are the measurements of Gorny et al. The open squares are from Song (3.5 T), Auler et al. (5.7 T), Carretta et al. (5.9 T), and Hammel et al. (7.4 T).
# Comment on “Magnetic Field Independence of the Spin Gap in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>”
The nature of the onset of superconductivity in high temperature superconductors has been of considerable recent interest. Does the superconducting order parameter emerge from preformed pairs, or reflect the opening of a spin-pseudogap or a pseudo-gap in the charge or pairing channels? Recent papers that relate to this subject are the measurement of the magnetic field dependence of the <sup>63</sup>Cu(2) NMR spin lattice relaxation in YBCO by Gorny et al. and Mitrović et al. and the related experiments on the spin susceptibility taken from the oxygen NMR shift of Bachman et al.
In the recent letter of Gorny et al. it was reported that the spin-lattice relaxation rate in YBCO<sub>7-δ</sub> is magnetic field independent and indicative of the opening of a spin-pseudogap. These authors point out that their results are inconsistent with prior reports which concluded that there is a small but significant magnetic field dependence to $`1/T_1T`$ near $`T_c`$ in optimally doped material with interpretation in terms of pairing fluctuations (see Fig. 1). Gorny et al. suggest that previous experiments might be in error since special care is required in the measurement of the <sup>63</sup>Cu(2) NMR rate in order to avoid what they refer to as background effects. These effects, and how to deal with them in aligned powder samples, have been known and practiced for some time. The particular approach of Gorny et al., measuring a satellite resonance, appears to be quite effective, but not unique.
In this comment we propose a different explanation for the discrepancy between the reports. Quite simply, the sample of Gorny et al. is not optimally doped. First, there is evidence for this in the distribution of electric field gradients apparent in the spectrum published by Gorny et al., being twice as broad (FWHM $`\sim `$ 400 kHz) as is expected for high-quality, optimally doped material (FWHM $`<`$ 200 kHz). In addition, field independence for the NMR in slightly underdoped materials has been noted previously by Auler et al. who have shown that NQR and NMR (H = 12 T) have the same rates for this resonance near $`T_c`$. Finally, Song $`(\mathrm{\Delta }\delta \sim 0.04)`$ and Auler et al. $`(\mathrm{\Delta }\delta \sim 0.02)`$ have shown that the effect of modest underdoping is to shift the maximum value of $`1/T_1T`$ to lower values and to higher temperatures compared with optimal doping. This explains why the Gorny et al. measurements, open circles in Fig. 1, fall below others at 95 K.
Results for optimally doped materials are remarkably consistent, having a magnetic field dependence shown in the figure for the data at a temperature of 95 K. This field dependence close to $`T_c`$ has been successfully interpreted in terms of d-wave superconducting pairing fluctuations. In the underdoped materials the fluctuation effects seem to be dominated by a field-independent pseudogap, developing rapidly near optimal doping, reported by Carretta et al., Auler et al., and confirmed by Gorny et al.
V. F. Mitrović, H. N. Bachman, W. P. Halperin,
M. Eschrig, J. A. Sauls
> Department of Physics and Astronomy, and
> Science and Technology Center for
> Superconductivity, Northwestern University,
> Evanston, Illinois 60208
(PACS numbers: 74.25.Nf, 74.40.+k, 74.72.Bk)
# Enumeration of Symmetry Classes of Parallelogram Polyominoes

Work partially supported by NSERC (Canada) and FCAR (Québec).
## 1 Introduction
Parallelogram polyominoes, sometimes called staircase polyominoes, form a subclass of (horizontally and vertically) convex polyominoes on the square lattice, characterized by the fact that they touch the bottom-left and the top-right corners of their minimal bounding rectangle. See Figure 1. A $`90^{\circ }`$ rotation of these would give a distinct but equivalent class of parallelogram polyominoes. In the same way as for general convex polyominoes, the area of a parallelogram polyomino is defined as the number of cells that it contains and the half-perimeter is equal to the sum of its width and height. Considerable literature can be found on the enumeration of various classes of polyominoes having some convexity and directedness property, with motivation coming from combinatorics, statistical physics, computer science and recreational mathematics. See M. Bousquet-Mélou for a recent survey. In particular, parallelogram polyominoes have been studied with respect to their perimeter and area first by Pólya, with further contributions yielding refined enumerations from Bender, Klarner, Rivest, Delest, Fédou, Viennot and others. Mireille Bousquet-Mélou, using the Temperley methodology, has given a generating function with respect to height, width, area and height of first and last columns.
Polyominoes are usually considered equivalent if they can be obtained from one another by a plane translation. They are sometimes called translation-type polyominoes to be more precise (see D. A. Klarner ). It is natural to consider also congruence-type polyominoes, that is, equivalence classes of polyominoes under rotations and reflections. They occur as pieces that can freely move in space, as in plane packing problems (see S. W. Golomb ). In , the enumeration of congruence-type polyominoes, according to area and perimeter, has been carried out in the case of convex polyominoes.
The problem is equivalent to the enumeration of orbits of the dihedral group $`𝔇_4`$, of symmetries of the square, acting on convex polyominoes. The group $`𝔇_4`$ contains eight elements, usually represented as $`1`$, $`r`$, $`r^2`$, $`r^3`$, $`h`$, $`v`$, $`d_1`$ and $`d_2`$, where $`1`$ denotes the identity element, $`r`$ denotes a rotation by a right angle, $`h`$ and $`v`$, reflections with respect to the horizontal and vertical axes respectively, and $`d_1`$ and $`d_2`$, reflections about the two diagonal axes of the square (we take the bissector of the first quadrant for $`d_2`$). The number of orbits $`|X/G|`$ of any finite group $`G`$ acting on a set $`X`$ is given by the Cauchy-Frobenius formula (alias Burnside’s Lemma):
$$|X/G|=\frac{1}{|G|}\sum _{g\in G}|\mathrm{Fix}(g)|,$$
(1)
where $`\mathrm{Fix}(g)`$ denotes the set of elements of X which are $`g`$-symmetric, that is, invariant under $`g`$. Hence the enumeration of congruence-type convex polyominoes involves determining the size of the symmetry classes of convex polyominoes for each group element $`g𝔇_4`$. Formula (1) is valid for infinite sets provided a weighted cardinality $`|X|_\omega `$ is taken, with respect to some $`G`$-invariant weight function $`\omega `$. For a class $`𝒫`$ of polyominoes this means using generating series $`𝒫(t,q)`$ with respect to half-perimeter and area (variables $`t`$ and $`q`$), for example.
The main goal of this paper is to carry out a similar procedure for the class $`𝒫`$ of parallelogram polyominoes. We observe that a subgroup of $`𝔇_4`$ acts on parallelogram polyominoes, which we denote $`𝔇_2`$, namely $`𝔇_2=\langle r^2,d_1\rangle =\{1,r^2,d_1,d_2\}`$, and that congruence types of parallelogram polyominoes coincide with orbits of $`𝒫`$ under $`𝔇_2`$. In the following sections we therefore compute the generating series of the symmetry classes $`\mathrm{Fix}(g)`$ of parallelogram polyominoes for all $`g\in 𝔇_2`$ except the identity. We then use (1) to obtain $`(𝒫/𝔇_2)(t,q)`$.
It is also possible to count asymmetric parallelogram polyominoes, that is polyominoes that are not $`g`$-invariant for any $`g`$ except the identity, using Möbius inversion in the lattice of subgroups of $`𝔇_2`$. This requires also the enumeration of the subclass $`\mathrm{Fix}(𝔇_2)`$ of $`𝒫`$, of totally symmetric parallelogram polyominoes. We carry out this computation and show that asymmetric parallelogram polyominoes are asymptotically equivalent to all parallelogram polyominoes, as expected.
As we will see, the enumeration of all the symmetry classes of parallelogram polyominoes, according to perimeter, involves in one way or the other either the Dyck paths (or Dyck words, see J. Labelle ), counted by the Catalan numbers $`c_n`$, or the left factors of Dyck paths, counted by the central binomial coefficients $`b_n`$ (see Cori and Viennot ), where
$$b_n=\left(\genfrac{}{}{0pt}{}{2n}{n}\right)\quad \text{and}\quad c_n=\frac{1}{n+1}\left(\genfrac{}{}{0pt}{}{2n}{n}\right).$$
(2)
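For quick reference, both sequences are easy to generate exactly; a small Python check (ours, not part of the paper):

```python
from math import comb

def b(n):                          # central binomial coefficient
    return comb(2 * n, n)

def c(n):                          # Catalan number
    return comb(2 * n, n) // (n + 1)

print([b(n) for n in range(6)])    # [1, 2, 6, 20, 70, 252]
print([c(n) for n in range(6)])    # [1, 1, 2, 5, 14, 42]
```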
When the area is taken into account, $`q`$-analogues (some well-known and some novel) of these numbers appear naturally.
We would like to thank X. G. Viennot and M. Bousquet-Mélou for useful discussions.
## 2 Enumeration of parallelogram polyominoes
It has been known for a long time (Levine, Pólya) that the number of parallelogram polyominoes of perimeter $`2n`$ is given by the Catalan number $`c_{n-1}=\frac{1}{n}\left(\genfrac{}{}{0pt}{}{2n-2}{n-1}\right)`$. One proof of this fact is provided by the following bijection, due to Delest and Viennot, between parallelogram polyominoes of perimeter $`2n+2`$ and Dyck paths of length $`2n`$: given a parallelogram polyomino $`P`$ of perimeter $`2n+2`$, let $`(a_1,a_2,\dots ,a_k)`$ be the sequence of column heights of $`P`$, and $`(b_1,b_2,\dots ,b_{k-1})`$ be such that $`b_i`$ is the number of cells of contact between columns $`i`$ and $`i+1`$ of $`P`$. The associated Dyck path $`D`$ is the unique Dyck path with $`k`$ peaks and $`k-1`$ valleys such that the peak heights are given in order by the sequence $`(a_1,a_2,\dots ,a_k)`$, and the valley heights by the sequence $`(b_1-1,b_2-1,\dots ,b_{k-1}-1)`$ (the horizontal axis is at level $`0`$). The height of $`P`$ is $`n+1-k=n-(k-1)`$, which is also given by
$$a_1+(a_2-b_1)+(a_3-b_2)+\cdots +(a_k-b_{k-1})=\sum _{i=1}^{k}a_i-\sum _{j=1}^{k-1}b_j.$$
On the other hand, the number of $`\nearrow `$ steps in $`D`$, that is the half-length of the path, is given by
$$\sum _{i=1}^{k}a_i-\sum _{j=1}^{k-1}(b_j-1)=\sum _{i=1}^{k}a_i-\sum _{j=1}^{k-1}b_j+(k-1),$$
which is seen to be $`n`$ by the previous equation. Hence $`D`$ is a Dyck path of length $`2n`$.
Also, note that the sum of the heights of the peaks, $`\sum _{i=1}^ka_i`$, is simply the area of $`P`$. Figure 2 illustrates the bijection.
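A direct transcription of the bijection (our sketch) builds the Dyck path from the column heights $`a_i`$ and contacts $`b_i`$, and makes the perimeter bookkeeping above easy to check:

```python
def polyomino_to_dyck(a, b):
    """Delest-Viennot bijection (sketch): from column heights a[0..k-1] and
    contact counts b[0..k-2] of a parallelogram polyomino, build the Dyck
    path whose peak heights are the a_i and whose valley heights are b_i - 1."""
    assert all(1 <= b[i] <= min(a[i], a[i + 1]) for i in range(len(b)))
    path = "U" * a[0]
    for i in range(len(b)):
        path += "D" * (a[i] - (b[i] - 1))       # down to the valley at b_i - 1
        path += "U" * (a[i + 1] - (b[i] - 1))   # up to the next peak at a_{i+1}
    path += "D" * a[-1]
    return path

# Example: a = (2, 3, 1), b = (2, 1) gives a Dyck path of length 2n with
# n = sum(a) - sum(b) + (k - 1) = 6 - 3 + 2 = 5, as in the computation above.
print(polyomino_to_dyck([2, 3, 1], [2, 1]))     # UUDUUDDDUD
```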
It follows that the generating series $`𝒫(t)`$ for parallelogram polyominoes according to half-perimeter is
$$𝒫(t)=\sum _{n\geq 2}c_{n-1}t^n=\frac{1-2t-\sqrt{1-4t}}{2}.$$
(3)
It has also been known for some time that when the area is taken into account, the generating series involves a quotient of $`q`$-analogues of Bessel functions (see Klarner and Rivest, and Bender). Pólya found a Laurent series relating the area and perimeter generating function to a specialization of itself, from which the terms of the series can be extracted easily. The width, height and area generating series can also be expressed as a continued fraction. We will use the following recent more general form due to M. Bousquet-Mélou, giving the generating series $`𝒫(v,x,y,q)`$ of parallelogram polyominoes, where the variables $`v`$, $`x`$, $`y`$ and $`q`$ mark respectively the height of the rightmost column, the width, the overall height, and the area.
###### Proposition 1
The generating function $`𝒫(v,x,y,q)`$ of parallelogram polyominoes is given by
$$𝒫(1,v,x,y,q)=vy\frac{J_1(v,x,y,q)}{J_0(x,y,q)},$$
(4)
with
$$J_0(x,y,q)=\sum _{n\geq 0}\frac{(-1)^nx^nq^{\left(\genfrac{}{}{0pt}{}{n+1}{2}\right)}}{(q)_n(yq)_n}$$
(5)
and
$$J_1(v,x,y,q)=\sum _{n\geq 1}\frac{(-1)^{n-1}x^nq^{\left(\genfrac{}{}{0pt}{}{n+1}{2}\right)}}{(q)_{n-1}(yq)_{n-1}(1-vyq^n)}$$
(6)
with the usual notation $`(a)_n=(a;q)_n=\prod _{i=0}^{n-1}(1-aq^i)`$. $`\mathrm{}`$
Note that the half-perimeter and area generating function $`𝒫(t,q)`$ of parallelogram polyominoes is obtained by putting $`v=1`$, $`x=t`$, $`y=t`$ in (4).
## 3 Symmetry classes of parallelogram polyominoes
### 3.1 Rotational symmetry
Observe that if we apply the Delest-Viennot bijection to an $`r^2`$-symmetric parallelogram polyomino of perimeter $`2k+2`$, the Dyck path we obtain is vertically symmetric (or, equivalently, the Dyck word associated to it is a palindrome). Hence we need only consider half the path, which is simply a left factor, of length $`k`$, of a Dyck path.
###### Proposition 2
The number of $`r^2`$-symmetric parallelogram polyominoes of half-perimeter $`k+1`$ is equal to the number of left factors of Dyck paths, of length $`k`$. $`\mathrm{}`$
###### Corollary 3
The number of $`r^2`$-symmetric parallelogram polyominoes of half-perimeter $`k+1`$ is given by
$$r_{k+1}(1)=\{\begin{array}{cc}\left(\genfrac{}{}{0pt}{}{k}{k/2}\right)\hfill & \text{if }k\text{ is even},\hfill \\ \frac{1}{2}\left(\genfrac{}{}{0pt}{}{k+1}{(k+1)/2}\right)\hfill & \text{if }k\text{ is odd}.\hfill \end{array}$$
(7)
It is known that the number of left factors of length $`2n`$ of Dyck words is equal to the number of words in the alphabet {0,1} with distribution $`0^n1^n`$, from which (7) follows easily. See for a bijective proof. Here we prove (7) using generating functions. Dyck paths and left factors of Dyck paths are generated by the algebraic grammar
$`C\rightarrow \epsilon +xC\overline{x}C`$
$`L\rightarrow C+CxL,`$
over the alphabet $`\{x,\overline{x}\}`$. $`C`$ denotes the Dyck paths and $`L`$ the left factors, while $`x`$ and $`\overline{x}`$ respectively denote a $`\nearrow `$ step and a $`\searrow `$ step. The first production rule gives $`C(x,\overline{x})=1+x\overline{x}C(x,\overline{x})^2`$, which we solve for
$$C(x,\overline{x})=\frac{1-\sqrt{1-4x\overline{x}}}{2x\overline{x}}.$$
The second production rule gives $`L(x,\overline{x})=C(x,\overline{x})(1+xL(x,\overline{x}))`$, which we can solve, now that we have $`C(x,\overline{x})`$, for
$$L(x,\overline{x})=\frac{\sqrt{1-4x\overline{x}}-1}{x(1-2\overline{x}-\sqrt{1-4x\overline{x}})}.$$
Substituting $`x\rightarrow t,\overline{x}\rightarrow t`$ into $`L(x,\overline{x})`$ gives the generating series $`L(t)`$ of left factors of Dyck paths by their length:
$$L(t)=\frac{2t-1+\sqrt{1-4t^2}}{2t(1-2t)},$$
from which (7) follows. $`\mathrm{}`$
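As a check (our sketch, using sympy), expanding $`L(t)`$ in powers of $`t`$ reproduces the closed form (7):

```python
import sympy as sp
from math import comb

t = sp.symbols('t')
L = (2*t - 1 + sp.sqrt(1 - 4*t**2)) / (2*t*(1 - 2*t))
expansion = sp.series(L, t, 0, 9).removeO()
coeffs = [expansion.coeff(t, k) for k in range(9)]

def lf(k):   # closed form (7): number of left factors of Dyck paths, length k
    return comb(k, k // 2) if k % 2 == 0 else comb(k + 1, (k + 1) // 2) // 2

print(coeffs)                     # [1, 1, 2, 3, 6, 10, 20, 35, 70]
print([lf(k) for k in range(9)])  # identical
```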
In order to include the area, we could extend another bijection, due to Bousquet-Mélou and Viennot , involving heaps of segments, to left factors of Dyck paths. However, there is a more direct approach. Indeed, parallelogram polyominoes with rotational symmetry can be obtained from two copies of a same parallelogram polyomino glued together. The glueing process depends on whether we want the final object to be of even width or of odd width, as can be seen in Figure 3. If $`R_2(x,y,q)`$ is the generating function of $`r^2`$-symmetric parallelogram polyominoes, then
$$R_2(x,y,q)=R_2^{(e)}(x,y,q)+R_2^{(o)}(x,y,q)$$
(8)
where $`R_2^{(e)}(x,y,q)`$ and $`R_2^{(o)}(x,y,q)`$ are respectively the generating series of even-width and odd-width $`r^2`$-symmetric parallelogram polyominoes.
###### Proposition 4
The generating function $`R_2^{(e)}(x,y,q)`$ of even-width parallelogram polyominoes is given by
$$R_2^{(e)}(x,y,q)=\frac{1}{1-y}\left(𝒫(\frac{1}{y},x^2,y^2,q^2)-𝒫(1,x^2,y^2,q^2)\right).$$
(9)
where $`𝒫(v,x,y,q)`$ is the generating function (4) of parallelogram polyominoes.
Let $`P`$ be an $`r^2`$-symmetric parallelogram of even width. We define the fundamental region of $`P`$ to be the left half of $`P`$. Call this polyomino $`Q`$ (see Figure 3(a)). We first remark that $`Q`$ is a parallelogram polyomino. To get $`P`$ from $`Q`$, we rotate a copy of $`Q`$ by $`180^{\circ }`$ and glue the result $`\overline{Q}`$ to $`Q`$ along the rightmost column. If this column has length equal to $`k`$, there will be $`k`$ possible positions for $`\overline{Q}`$ relative to $`Q`$. The substitution $`v\rightarrow 1/y`$, $`x\rightarrow x^2`$, $`y\rightarrow y^2`$ and $`q\rightarrow q^2`$ in the generating series $`P(1,v,x,y,q)`$ of directed convex polyominoes corresponds to the highest position of $`\overline{Q}`$, which minimizes the overall height of $`P`$. All the possible positions will be accounted for by multiplying by $`(1+y+\cdots +y^{k-1})`$. In other words, the substitution to make in $`𝒫(v,x^2,y^2,q^2)`$ is
$$v^k\rightarrow \frac{1+y+\cdots +y^{k-1}}{y^k}=\frac{1}{1-y}\left(\frac{1}{y^k}-1\right).$$
(10)
Summing over all possible $`k`$’s, we find the proposed generating series (9) for $`r^2`$-symmetric parallelograms. $`\mathrm{}`$
###### Proposition 5
The generating function $`R_2^{(o)}(x,y,q)`$ of odd-width parallelogram polyominoes is given by
$$R_2^{(o)}(x,y,q)=\frac{1}{x}𝒫(\frac{1}{yq},x^2,y^2,q^2).$$
(11)
where $`𝒫(v,x,y,q)`$ is the generating function of parallelogram polyominoes.
The proof is similar to the previous one. The main difference is that only one glueing position of $`\overline{Q}`$ to $`Q`$ is admissible, and that furthermore the rightmost column of $`Q`$ and its rotated image in $`\overline{Q}`$ are superimposed, yielding an odd width (see Figure 3(b)). Details are left to the reader. $`\square `$
We would like to find the number of $`r^2`$-symmetric parallelograms of a given half-perimeter, without losing the area information, i.e. we want to express the generating series in the form
$$R_2(t,q)=R_2(t,t,q)=\underset{k\ge 0}{\sum }r_k(q)t^k.$$
(12)
The above expressions for the generating series of $`r^2`$-symmetric parallelogram polyominoes can be used to extract the polynomials $`r_k(q)`$ (i.e. expand the series in powers of $`t`$ after substituting $`x\to t`$, $`y\to t`$). Here are the first few of these polynomials:
$`r_2(q)`$ $`=`$ $`q`$
$`r_3(q)`$ $`=`$ $`2q^2`$
$`r_4(q)`$ $`=`$ $`q^4+2q^3`$
$`r_5(q)`$ $`=`$ $`2q^6+4q^4`$
$`r_6(q)`$ $`=`$ $`q^9+2q^8+q^7+2q^6+4q^5`$
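These polynomials can be confirmed by exhaustive enumeration. The following sketch (the column-interval encoding of parallelogram polyominoes and the cell-level symmetry test are illustrative conventions, not taken from the text) prints, for each half-perimeter, the map from area to the number of $`r^2`$-symmetric parallelogram polyominoes; the output reproduces the coefficients of the $`r_k(q)`$ above:

```python
from collections import defaultdict

def polys(hp):
    """Parallelogram polyominoes of half-perimeter hp (width + height = hp),
    as frozensets of cells (column, row).  Columns are intervals [b_j, t_j]
    with b and t weakly increasing and b_{j+1} <= t_j (columns overlap)."""
    out = []
    def grow(cols):
        width, height = len(cols), cols[-1][1] + 1
        if width + height == hp:
            out.append(frozenset((j, r) for j, (b, t) in enumerate(cols)
                                 for r in range(b, t + 1)))
        b, t = cols[-1]
        for nb in range(b, t + 1):
            for nt in range(t, hp - width - 1):   # keeps width + height <= hp
                grow(cols + [(nb, nt)])
    for t0 in range(hp - 1):
        grow([(0, t0)])
    return out

def is_r2(cells):
    """True if the cell set is invariant under rotation by 180 degrees."""
    W = max(j for j, _ in cells) + 1
    H = max(r for _, r in cells) + 1
    return all((W - 1 - j, H - 1 - r) in cells for j, r in cells)

for hp in range(2, 7):
    r = defaultdict(int)
    for c in polys(hp):
        if is_r2(c):
            r[len(c)] += 1
    print(hp, dict(sorted(r.items())))
# 2 {1: 1}
# 3 {2: 2}
# 4 {3: 2, 4: 1}
# 5 {4: 4, 6: 2}
# 6 {5: 4, 6: 2, 7: 1, 8: 2, 9: 1}
```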
### 3.2 Reflective symmetries
We begin by introducing a subfamily of parallelogram polyominoes which we will call Dyck polyominoes as they correspond to Dyck paths drawn over and above the main diagonal. We will also consider truncated Dyck polyominoes, which we will call left factors of Dyck polyominoes (or $`LFD`$ polyominoes for short), again in analogy with the left factors of Dyck paths. Dyck and $`LFD`$ polyominoes are illustrated in Figure 4.
We introduce $`L_n(u)=L_n(u,y,q)`$, the generating function for $`LFD`$ polyominoes having a base of width $`n`$, with the variables $`u`$, $`y`$ and $`q`$ marking the number of cells of the uppermost row, the height and the area, respectively. $`L_n(u)`$ can be defined recursively by the following functional equation, illustrated in Figure 5:
$$L_n(u)=u^nyq^n+\frac{yu^2q^2}{1-uq}\left(L_n(1)-L_n(uq)\right).$$
(13)
The generating function $`L(u)`$ of all $`LFD`$ polyominoes is simply the sum over all possible base widths,
$$L(u)=\underset{n\ge 1}{\sum }L_n(u).$$
(14)
Moreover, since the Dyck polyominoes are the $`LFD`$ polyominoes whose base has width one, their height and area generating series $`D(y,q)`$ is given by
$$D(y,q)=L_1(1,y,q).$$
(15)
A straightforward application of Lemma 2.3 of Bousquet-Mélou gives the solution to the functional equation (13). As we do not need the variable $`u`$ for our purpose, we set it equal to $`1`$, which simplifies the expression for the generating function.
###### Proposition 6
The area and height generating function $`L_n(1,y,q)`$ for $`LFD`$ polyominoes having a base of width $`n`$ is given by
$$L_n(1,y,q)=\frac{{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^my^{m+1}q^{(m+n)(m+1)}}{(q)_m}}}{{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^my^mq^{m(m+1)}}{(q)_m}}}.$$
(16)
$`\square `$
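The solution (16) can be tested directly against the functional equation (13), since iterating (13) determines the series to any finite order in $`y`$. A small sketch (Python with sympy; the truncation orders are arbitrary choices):

```python
import sympy as sp

u, y, q = sp.symbols('u y q')
NY = 5          # compare the series through y^NY
M = NY + 1      # terms of the m-sums in (16) with m > NY cannot affect y^0..y^NY

def qpoch(m):   # (q)_m = (1-q)(1-q^2)...(1-q^m)
    r = sp.Integer(1)
    for i in range(1, m + 1):
        r *= 1 - q ** i
    return r

def L_iter(n):
    """Iterate (13); each pass raises the minimal power of y by one,
    so NY+1 passes are exact through y^NY."""
    L = sp.Integer(0)
    for _ in range(NY + 1):
        L1, Luq = L.subs(u, 1), L.subs(u, u * q)
        L = u ** n * y * q ** n + y * u ** 2 * q ** 2 / (1 - u * q) * (L1 - Luq)
        L = sp.series(L.expand(), y, 0, NY + 1).removeO()
    return L.subs(u, 1)

def L_formula(n):   # right-hand side of (16), truncated exactly as noted above
    num = sum((-1) ** m * y ** (m + 1) * q ** ((m + n) * (m + 1)) / qpoch(m)
              for m in range(M))
    den = sum((-1) ** m * y ** m * q ** (m * (m + 1)) / qpoch(m)
              for m in range(M))
    return num / den

qv = sp.Rational(1, 3)      # any exact rational value of q will do
for n in (1, 2, 3):
    a = sp.series(L_iter(n).subs(q, qv), y, 0, NY + 1).removeO()
    b = sp.series(L_formula(n).subs(q, qv), y, 0, NY + 1).removeO()
    print(n, sp.expand(a - b))   # prints 0 for each n
```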
For $`n=1`$, this gives
$$D(y,q)=\frac{{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^my^{m+1}q^{(m+1)^2}}{(q)_m}}}{{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^my^mq^{m(m+1)}}{(q)_m}}}$$
(17)
for the height and area generating function for Dyck polyominoes. However, we can also express $`D(y,q)`$ using the classical $`q`$-analogue of Catalan numbers $`c_n(q)`$, satisfying the recurrence
$$c_n(q)=\underset{k=0}{\overset{n-1}{\sum }}q^kc_k(q)c_{n-1-k}(q),$$
(18)
as it is well known that they area-enumerate Dyck paths of length $`2n`$. The area enumerated by $`c_n(q)`$ is the number of cells under the path and strictly above its supporting diagonal (i.e. the cells on the diagonal are not included in the area). To get a Dyck polyomino from a Dyck path, we have to add the area of the diagonal. If the length of the path is $`2n`$, then a factor of $`q^n`$ has to be added. A further diagonal of cells has to be added because the Dyck paths can touch the supporting diagonal, in which case they are not polyominoes. For the paths of length $`2n`$, $`n+1`$ cells thus have to be added, contributing a further $`q^{n+1}`$ factor to the area. This last diagonal also adds one unit of height to the polyominoes. Hence
$$D(y,q)=\underset{n\ge 1}{\sum }y^nq^{2n-1}c_{n-1}(q).$$
(19)
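The equivalence of (17) and (19) is likewise easy to verify by machine; a minimal sympy sketch (truncation orders chosen for illustration):

```python
import sympy as sp

y, q = sp.symbols('y q')
N = 6    # compare through y^N; m-sums in (17) with m > N cannot contribute

def q_catalan(n, cache={0: sp.Integer(1)}):
    # recurrence (18): c_n(q) = sum_k q^k c_k(q) c_{n-1-k}(q)
    if n not in cache:
        cache[n] = sp.expand(sum(q ** k * q_catalan(k) * q_catalan(n - 1 - k)
                                 for k in range(n)))
    return cache[n]

def qpoch(m):   # (q)_m = (1-q)(1-q^2)...(1-q^m)
    r = sp.Integer(1)
    for i in range(1, m + 1):
        r *= 1 - q ** i
    return r

D_catalan = sum(y ** n * q ** (2 * n - 1) * q_catalan(n - 1)
                for n in range(1, N + 1))                      # formula (19)
num = sum((-1) ** m * y ** (m + 1) * q ** ((m + 1) ** 2) / qpoch(m)
          for m in range(N))
den = sum((-1) ** m * y ** m * q ** (m * (m + 1)) / qpoch(m)
          for m in range(N + 1))
D_formula = sp.series(num / den, y, 0, N + 1).removeO()        # formula (17)

print(sp.simplify(D_formula - D_catalan))   # prints 0
```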
#### 3.2.1 Reflective symmetry along the first diagonal
There is a nice area-preserving bijection between $`d_1`$-symmetric parallelograms of a given half-perimeter and $`r^2`$-symmetric parallelograms with the same half-perimeter. Since the minimal rectangle of a $`d_1`$-symmetric parallelogram is necessarily a square, the perimeter is a multiple of $`4`$, and thus the half-perimeter is even. Hence we have
$$D_1(t,t,q)=\underset{k\ge 0}{\sum }r_{2k}(q)t^{2k},$$
(20)
where $`D_1(x,y,q)`$ is the generating series of $`d_1`$-symmetric parallelogram polyominoes and the $`r_{2k}(q)`$ are defined by (12). The bijection is shown on an example in Figure 6, and goes as follows: an $`r^2`$-symmetric parallelogram has a center of rotation. If it has even half-perimeter, this center will either fall in the center of a cell (if both the height and the width are odd) or be the common corner of four cells forming a square (if both the height and the width are even). In both cases, we consider the first diagonal (parallel to the bisector of the second quadrant) passing through the center of rotation, and the region of the parallelogram below this diagonal (see Figure 6). This region is not a polyomino, but the parallelogram is obtained by glueing the region and a copy of it rotated by $`180^{\circ }`$ in the unique way such that there are no “half-cells” left. If, instead of rotating the copy of the region, we reflect it along this diagonal and glue it so that there are no half-cells left, then we clearly obtain a $`d_1`$-symmetric parallelogram which, further, has the exact same perimeter and area as the initial $`r^2`$-symmetric parallelogram. We can similarly reverse the process, starting with an arbitrary $`d_1`$-symmetric parallelogram and ending with an $`r^2`$-symmetric parallelogram.
#### 3.2.2 Reflective symmetry along the second diagonal
We next consider $`d_2`$-symmetric parallelogram polyominoes, i.e. parallelograms which are left invariant by a symmetry along the second diagonal. Figure 7 gives an example of such a parallelogram. We observe first that the minimal rectangle of such a parallelogram will always be a square with side length equal to the quarter-perimeter of the inscribed parallelogram.
We note that $`d_2`$-symmetric parallelogram polyominoes can be constructed from two copies of the same Dyck polyomino, whose diagonals we glue together (dark cells on Figure 7). The area of the final object will be twice the area of the Dyck polyomino minus the area of the diagonal, which was counted twice. There are as many cells on the diagonal as the height of the Dyck polyomino, and the width of the final object will also be the height of the Dyck polyomino. Hence we get
###### Proposition 7
The generating series $`D_2(x,y,q)`$ of $`d_2`$-symmetric parallelogram polyominoes is given by
$$D_2(x,y,q)=D(\frac{xy}{q},q^2).$$
(21)
$`\square `$
#### 3.2.3 Reflective symmetry along both diagonals
The final (non-cyclic) subgroup whose set of fixed elements we study is the whole group itself. This group is generated by any two nontrivial elements, but it is convenient to consider the symmetries along the two diagonals as the generators. This allows us to characterize the fundamental region of a $`𝔇_2`$-symmetric parallelogram, as can be seen in Figure 8.
We note first that the minimal rectangle of a $`𝔇_2`$-symmetric parallelogram $`P`$ is a square. We remark also that the exterior path going from $`𝐀`$ to $`𝐂`$ is a Dyck path that has the additional property that it is symmetric with respect to the second diagonal passing through the center of $`P`$ (i.e. the Dyck word in $`x`$ and $`\overline{x}`$ associated to the Dyck path is a palindrome). Hence $`P`$ is completely determined by “half” a Dyck path (the path going from $`𝐀`$ to $`𝐁`$). If $`P`$ has half-perimeter $`2k`$ (its half-perimeter is necessarily even since the minimal rectangle is a square), then the path $`𝐀𝐂`$ is a symmetrical Dyck path of length $`2k-2`$, and the path $`𝐀𝐁`$ is simply a left factor of length $`k-1`$ of a Dyck path. Thus we have the following result:
###### Proposition 8
The number of $`𝔇_2`$-symmetric parallelogram polyominoes of half-perimeter $`2k+2`$ is given by
$$d_{2k+2}^{(1,2)}(1)=\{\begin{array}{cc}\left(\genfrac{}{}{0pt}{}{k}{k/2}\right)\hfill & \text{if }k\text{ is even},\hfill \\ \frac{1}{2}\left(\genfrac{}{}{0pt}{}{k+1}{(k+1)/2}\right)\hfill & \text{if }k\text{ is odd}.\hfill \end{array}$$
(22)
See Corollary 7. $`\square `$
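In closed form, (22) is just the number of left factors of Dyck paths of length $`k`$, in line with the proof sketched above; a small self-contained check (Python):

```python
from math import comb
from itertools import product

def left_factors(k):
    """Number of length-k prefixes of Dyck paths (partial sums stay >= 0)."""
    total = 0
    for w in product((1, -1), repeat=k):
        s = 0
        for step in w:
            s += step
            if s < 0:
                break
        else:
            total += 1
    return total

def d_formula(k):   # right-hand side of (22)
    return comb(k, k // 2) if k % 2 == 0 else comb(k + 1, (k + 1) // 2) // 2

assert all(d_formula(k) == left_factors(k) for k in range(1, 11))
print("formula (22) matches the left-factor counts for k <= 10")
```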
We obtain the area and half-perimeter generating function for the $`𝔇_2`$-symmetric parallelogram polyominoes by constructing these from 4 copies of an $`LFD`$ polyomino, as illustrated in Figure 9. Some cells are superposed (the dark ones) and others have to be added (the square of white cells in the center), and this has to be taken into account when computing the area of the final object. If the $`LFD`$ polyomino has area $`A`$, height (number of cells on the diagonal) $`d`$ and base $`n`$, then the area of the $`𝔇_2`$-symmetric parallelogram polyomino is $`4A-2d+(n-2)^2-2`$, while its half-perimeter is given by $`2n+4(d-1)`$. Hence we have the following proposition:
###### Proposition 9
The half-perimeter and area generating function $`D_{1,2}(t,q)`$ for $`𝔇_2`$-symmetric parallelogram polyominoes is given by
$$D_{1,2}(t,q)=\underset{n\ge 1}{\sum }t^{2n-4}q^{n^2-4n+2}L_n(1,\frac{t^4}{q^2},q^4)$$
(23)
where $`L_n(u,y,q)`$ is the generating series for $`LFD`$ polyominoes with a base of width $`n`$. $`\square `$
###### Corollary 10
$$D_{1,2}(t,q)=\frac{{\displaystyle \underset{n\ge 1}{\sum }}{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^mt^{4m+2n}q^{4m^2+2m+4mn+n^2}}{(q^4)_m}}}{1-t^4q^6{\displaystyle \underset{m\ge 0}{\sum }}{\displaystyle \frac{(-1)^mt^{4m}q^{4m^2+10m}}{(q^4)_{m+1}}}}.$$
(24)
This follows from equation (16). $`\square `$
Here are the first few terms of $`D_{1,2}(t,q)`$:
$$D_{1,2}(t,q)=t^2q+t^4q^4+t^6q^7+t^6q^9+t^8q^{10}+t^8q^{14}+t^8q^{16}+\cdots $$
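This expansion can be regenerated mechanically from (24); a short sympy sketch (with the sum taken from $`n\ge 1`$ as above, and ad hoc truncation orders):

```python
import sympy as sp

t, q = sp.symbols('t q')
NT = 8                       # expand through t^NT
M = NT // 4 + 1              # the m-sums contribute t^(4m), so this suffices

def qpoch4(m):               # (q^4)_m = (1-q^4)(1-q^8)...(1-q^(4m))
    r = sp.Integer(1)
    for i in range(1, m + 1):
        r *= 1 - q ** (4 * i)
    return r

num = sum((-1) ** m * t ** (4 * m + 2 * n)
          * q ** (4 * m * m + 2 * m + 4 * m * n + n * n) / qpoch4(m)
          for n in range(1, NT // 2 + 1) for m in range(M))
den = 1 - t ** 4 * q ** 6 * sum((-1) ** m * t ** (4 * m)
                                * q ** (4 * m * m + 10 * m) / qpoch4(m + 1)
                                for m in range(M))

ser = sp.expand(sp.series(num / den, t, 0, NT + 1).removeO())
D12 = sum(sp.cancel(ser.coeff(t, k)) * t ** k for k in range(NT + 1))
print(sp.expand(D12))
# q*t**2 + q**4*t**4 + (q**7 + q**9)*t**6 + (q**10 + q**14 + q**16)*t**8
```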
### 3.3 Congruence-type parallelogram polyominoes
We are now in a position to enumerate congruence-type parallelogram polyominoes, i.e. parallelograms up to rotation and reflection, using formula (1) with $`G=𝔇_2`$ and with $`𝒫`$ the class of all parallelogram polyominoes:
$$|𝒫/𝔇_2|_w=\frac{1}{4}\underset{g\in 𝔇_2}{\sum }|\mathrm{Fix}(g)|_w,$$
(25)
where $`|\mathrm{Fix}(g)|_w`$ is the half-perimeter and area generating series of the $`g`$-symmetric parallelogram polyominoes. Therefore,
###### Proposition 11
The half-perimeter and area generating series $`(/𝔇_2)(t,q)`$ of congruence-type parallelograms is given by
$$(𝒫/𝔇_2)(t,q)=|𝒫/𝔇_2|_w=\frac{1}{4}\left(P(1,t,t,q)+R_2(t,t,q)+D_1(t,t,q)+D_2(t,t,q)\right).$$
(26)
$`\square `$
Here are the first few terms of $`(𝒫/𝔇_2)(t,q)=\sum _{k\ge 0}\stackrel{~}{p}_k(q)t^k`$:
$`\stackrel{~}{p}_2(q)`$ $`=`$ $`q`$
$`\stackrel{~}{p}_3(q)`$ $`=`$ $`q^2`$
$`\stackrel{~}{p}_4(q)`$ $`=`$ $`q^4+2q^3`$
$`\stackrel{~}{p}_5(q)`$ $`=`$ $`q^6+q^5+3q^4`$
$`\stackrel{~}{p}_6(q)`$ $`=`$ $`q^9+2q^8+3q^7+4q^6+6q^5`$
### 3.4 Asymmetric parallelogram polyominoes
We can also enumerate asymmetric parallelogram polyominoes, i.e. parallelograms having no symmetry at all, using Möbius inversion in the lattice of subgroups of $`𝔇_2`$.
The reader is referred to for a general discussion of Möbius inversion, and to for its application to the enumeration of the symmetry classes of convex polyominoes. We simply give here in Figure 10 the lattice of the subgroups of $`𝔇_2`$ and the value of the Möbius function on the points of the lattice. For subgroups $`H`$ of $`𝔇_2`$, we denote by $`F_H`$ (resp. $`F_{=H}`$) the half-perimeter and area generating series for the set of parallelogram polyominoes having at least (resp. exactly) the symmetries of $`H`$.
###### Proposition 12
The half-perimeter and area generating series $`\overline{𝒫}(t,q)=F_{=0}`$ of asymmetric parallelogram polyominoes is given by
$`\overline{𝒫}(t,q)`$ $`=`$ $`F_0-F_{r^2}-F_{d_1}-F_{d_2}+2F_{d_1,d_2}`$ (27)
$`=`$ $`P(1,t,t,q)-R_2(t,t,q)-D_1(t,t,q)-D_2(t,t,q)+2D_{1,2}(t,q).`$
where $`D_{1,2}(t,q)`$ is the generating series (23) of $`𝔇_2`$-symmetric parallelogram polyominoes. $`\square `$
Here are the first few terms of $`\overline{𝒫}(t,q)=\sum _{k\ge 0}\overline{p}_k(q)t^k`$:
$`\overline{p}_2(q)=\overline{p}_3(q)=\overline{p}_4(q)`$ $`=`$ $`0`$
$`\overline{p}_5(q)`$ $`=`$ $`4q^5+4q^4`$
$`\overline{p}_6(q)`$ $`=`$ $`8q^7+8q^6+8q^5`$
$`\overline{p}_7(q)`$ $`=`$ $`4q^{11}+8q^{10}+20q^9+24q^8+32q^7+24q^6`$
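Both families of coefficients can be cross-checked at $`q=1`$ by direct enumeration. The self-contained sketch below (with the same illustrative interval encoding as before; which diagonal reflection is called $`d_1`$ or $`d_2`$ does not affect the counts) recovers the totals, the number of orbits (congruence types) and the number of asymmetric parallelogram polyominoes:

```python
def polys(hp):
    """Parallelogram polyominoes of half-perimeter hp as frozensets of
    cells (column, row), built from weakly increasing column intervals."""
    out = []
    def grow(cols):
        width, height = len(cols), cols[-1][1] + 1
        if width + height == hp:
            out.append(frozenset((j, r) for j, (b, t) in enumerate(cols)
                                 for r in range(b, t + 1)))
        b, t = cols[-1]
        for nb in range(b, t + 1):
            for nt in range(t, hp - width - 1):
                grow(cols + [(nb, nt)])
    for t0 in range(hp - 1):
        grow([(0, t0)])
    return out

def images(cells):
    """Images of a polyomino under the four symmetries of D_2:
    identity, half-turn, and the two diagonal reflections."""
    W = max(j for j, _ in cells) + 1
    H = max(r for _, r in cells) + 1
    return [cells,
            frozenset((W - 1 - j, H - 1 - r) for j, r in cells),
            frozenset((r, j) for j, r in cells),
            frozenset((H - 1 - r, W - 1 - j) for j, r in cells)]

for hp in range(2, 8):
    ps = polys(hp)
    canon, asym = set(), 0
    for c in ps:
        ims = images(c)
        if all(im != c for im in ims[1:]):
            asym += 1
        canon.add(min(tuple(sorted(im)) for im in ims))
    print(hp, "total:", len(ps), "orbits:", len(canon), "asymmetric:", asym)
# e.g. half-perimeter 6 gives total 42, orbits 16, asymmetric 24,
# matching the q = 1 values of the polynomials listed above.
```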
Note that the same method would allow us to enumerate the parallelogram polyominoes having exactly the symmetries of any given subgroup of $`𝔇_2`$.
### 3.5 Asymptotic results
Here we show the asymptotic result that for large area or large perimeter, almost all parallelogram polyominoes are asymmetric. In other words, the probability for a parallelogram polyomino to have at least one symmetry goes to zero as the area or the perimeter goes to infinity.
We know from the work of Pólya that the number of parallelogram polyominoes of half-perimeter $`n`$ is given by
$$p_n^{(t)}=c_{n-1}\sim \frac{4^{n-1}}{\sqrt{\pi }n^{3/2}}.$$
(28)
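A quick numerical look at the quality of this asymptotic estimate (Python; the exact values are the Catalan numbers $`c_{n-1}`$):

```python
from math import comb, pi, sqrt

# p_n^(t) = c_{n-1} = C(2n-2, n-1) / n, against 4^(n-1) / (sqrt(pi) n^(3/2))
for n in (10, 40, 160, 400):
    exact = comb(2 * n - 2, n - 1) // n
    approx = 4 ** (n - 1) / (sqrt(pi) * n ** 1.5)
    print(n, approx / exact)    # the ratio tends to 1 as n grows
```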
For the area, we have the result from Bender:
###### Proposition 13
(Bender) Let $`p_n^{(q)}`$ be the number of parallelogram polyominoes with area $`n`$. Then
$$p_n^{(q)}\sim k\mu ^n,$$
(29)
with
$$k=0.29745\ldots ,\qquad \mu =2.30913859330\ldots $$
$`\square `$
###### Proposition 14
Let $`H`$ be any non-trivial subgroup of $`𝔇_2`$ and denote by $`P_H^{(q)}(n)`$ (resp. $`P_H^{(t)}(n)`$) the number of $`H`$-symmetric parallelogram polyominoes with area (resp. half-perimeter) $`n`$. Then,
$$\underset{n\to \infty }{lim}\frac{P_H^{(q)}(n)}{p_n^{(q)}}=0\qquad \text{and}\qquad \underset{n\to \infty }{lim}\frac{P_H^{(t)}(n)}{p_n^{(t)}}=0.$$
(30)
The proof for the perimeter part of the Proposition is immediate as we have closed forms for all the coefficients, and thus the limit can be verified to be zero explicitly.
For the area, we need only consider $`r^2`$- and $`d_2`$-symmetric parallelogram polyominoes, as the $`d_1`$-symmetric parallelograms are in bijection with a subclass of $`r^2`$-symmetric ones, and the $`𝔇_2`$-symmetric parallelograms are a subclass of all the other classes of symmetry. The same basic argument works for $`r^2`$- and $`d_2`$-symmetric parallelogram polyominoes, using the fact that they are constructed from two congruent subpolyominoes. A supplementary column sometimes has to be added in the $`r^2`$ case, according to whether the height or the width of the initial parallelogram is odd or even. These subpolyominoes are parallelograms in every case, and they have at most half the area of the initial object.
* $`r^2`$-symmetric parallelogram polyominoes: An $`r^2`$-symmetric parallelogram polyomino with even width and area $`n`$ is constructed from two congruent subparallelograms with exactly half the area, which can be glued together in at most $`n/2`$ ways (the maximal height of the columns that get glued together). So $`P_{r^2}^{(q),even}(n)\le \frac{1}{2}np_{n/2}^{(q)}`$. Hence
$$\underset{n\to \infty }{lim}\frac{P_{r^2}^{(q),even}(n)}{p_n^{(q)}}\le \underset{n\to \infty }{lim}\frac{\frac{1}{2}nk\mu ^{n/2}}{k\mu ^n}=0.$$
Next consider an $`r^2`$-symmetric parallelogram polyomino with odd width and area $`n`$. This polyomino is constructed from a central column ($`n`$ choices of height) and two congruent subparallelograms of area at most $`n/2`$. Then there are at most $`n`$ possible positions at which to glue the subparallelograms to the central column (they are glued symmetrically). Thus $`P_{r^2}^{(q),odd}(n)\le n^2(1+p_1^{(q)}+p_2^{(q)}+\cdots +p_{n/2}^{(q)})<n^3p_{n/2}^{(q)}`$ and the result follows as above. Hence the result holds for the subgroup $`r^2`$ of $`𝔇_2`$;
* $`d_2`$-symmetric parallelogram polyominoes: Let $`P`$ be a $`d_2`$-symmetric parallelogram polyomino and $`Q`$ its fundamental region. Suppose that $`P`$ has $`b`$ cells on the diagonal symmetry axis. Then the minimum area $`P`$ can have is $`b+2(b-1)`$. This gives a minimum area of $`b+(b-1)`$ for $`Q`$. Hence
$$\frac{\text{Area of }Q_{min}}{\text{Area of }P_{min}}=\frac{2b-1}{3b-2}.$$
Then if we add to $`Q`$ a cell not on the diagonal symmetry axis, two cells get added to $`P`$, and thus we conclude that the ratio can only decrease as we make $`P`$ into a larger $`d_2`$-symmetric parallelogram polyomino with the same number of cells on the diagonal axis. For $`b\ge 2`$, the ratio will be smaller than or equal to $`3/4`$. As a loose approximation, we can take $`Q`$ to be any parallelogram polyomino. This gives $`P_{d_2}(n)\le p_{3n/4}^{(q)}+1`$. The $`+1`$ term corresponds to the unique $`d_2`$-symmetric parallelogram polyomino having only one cell on the diagonal axis. Hence the result will also hold for the subgroup $`d_2`$.
$`\square `$
###### Proposition 15
If we denote by $`\overline{p}_n^{(q)}`$ (resp. $`\overline{p}_n^{(t)}`$) the number of asymmetric parallelogram polyominoes of area (resp. half-perimeter) $`n`$, then
$`\overline{p}_n^{(q)}`$ $`\sim `$ $`p_n^{(q)},`$ (31)
$`\overline{p}_n^{(t)}`$ $`\sim `$ $`p_n^{(t)}.`$ (32)
We get the result from equation (27) and from the previous Proposition. $`\square `$
Two tables can be found in the appendix that present the numbers of parallelogram polyominoes according to their symmetry types and their perimeter or area. The columns indexed by subgroups of $`𝔇_2`$ give the numbers of parallelogram polyominoes of a given perimeter or area that are left fixed by the symmetries of the subgroup. The columns # Orbits and Asym give respectively the number of congruence-type and asymmetric parallelogram polyominoes of the given size.
Appendix
## 1 Introduction
Measurements of both solar and atmospheric neutrino fluxes provide evidence for neutrino oscillations. With three neutrinos, this implies that there is negligible neutrino hot dark matter in the universe unless the three neutrinos are approximately degenerate in mass. In this letter we construct theories with approximately degenerate neutrinos, consistent with the atmospheric and solar data, in which the lepton masses and mixings are governed by spontaneously broken flavour symmetries.
The Super-Kamiokande collaboration has measured the magnitude and angular distribution of the $`\nu _\mu `$ flux originating from cosmic ray induced atmospheric showers . They interpret the data in terms of large angle ($`\theta >32^{\circ }`$) neutrino oscillations, with $`\nu _\mu `$ disappearing to $`\nu _\tau `$ or a singlet neutrino with $`\mathrm{\Delta }m_{atm}^2`$ close to $`10^{-3}\text{eV}^2`$. Five independent solar neutrino experiments, using three detection methods, have measured solar neutrino fluxes which differ significantly from expectations. The data is consistent with $`\nu _e`$ disappearance neutrino oscillations, occurring either inside the sun, with $`\mathrm{\Delta }m_{\odot }^2`$ of order $`10^{-5}\text{eV}^2`$, or between the sun and the earth, with $`\mathrm{\Delta }m_{\odot }^2`$ of order $`10^{-10}\text{eV}^2`$. The combination of data on atmospheric and solar neutrino fluxes therefore implies a hierarchy of neutrino mass splittings: $`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{\odot }^2`$. (A problem in one of the solar neutrino experiments or in the Standard Solar Model could, however, allow comparable mass differences.)
In this letter we consider theories with three neutrinos. Ignoring the small contribution to the neutrino mass matrix which gives $`\mathrm{\Delta }m_{}^2`$, there are three possible forms for the neutrino mass eigenvalues:
$`\text{“Hierarchical”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}0& & \\ & 0& \\ & & 1\end{array}\right)`$ (1)
$`\text{“Pseudo-Dirac”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}1& & \\ & -1& \\ & & \alpha \end{array}\right)`$ (2)
$`\text{“Degenerate”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}0& & \\ & 0& \\ & & 1\end{array}\right)+M\left(\begin{array}{ccc}1& & \\ & 1& \\ & & 1\end{array}\right)`$ (3)
where $`m_{atm}`$ is approximately $`0.03`$ eV, the scale of the atmospheric oscillations. The real parameter $`\alpha `$ is either of order unity (but not very close to unity) or zero, while the mass scale $`M`$ is much larger than $`m_{atm}`$. We have chosen to order the eigenvalues so that $`\mathrm{\Delta }m_{atm}^2=\mathrm{\Delta }m_{32}^2`$, while $`\mathrm{\Delta }m_{\odot }^2=\mathrm{\Delta }m_{21}^2`$ vanishes until perturbations much less than $`m_{atm}`$ are added. An important implication of the Super-Kamiokande atmospheric data is that the mixing $`\theta _{\mu \tau }`$ is large. It is remarkable that this large mixing occurs between states with a hierarchy of $`\mathrm{\Delta }m^2`$, and this places considerable constraints on model building.
What lies behind this pattern of neutrino masses and mixings? An attractive possibility is that a broken flavour symmetry leads to the leading order masses of (1), (2) or (3), to a large $`\theta _{atm}`$, and to acceptable values for $`\theta _{\odot }`$ and $`\mathrm{\Delta }m_{\odot }^2`$. It is simple to construct flavour symmetries which lead to (1) or (2) with large (although not necessarily maximal) $`\theta _{atm}`$ . For example, the hierarchical case results from integrating out a heavy Majorana right-handed neutrino which has comparable couplings to $`\nu _\mu `$ and $`\nu _\tau `$, and the pseudo-Dirac case when the heavy state is Dirac, with one component coupling to the $`\nu _{\mu ,\tau }`$ combination and the other to $`\nu _e`$. (The conventional paradigm for models with flavour symmetries is the hierarchical case with hierarchically small mixing angles, typically given by $`\theta _{ij}\sim (m_i/m_j)^{\frac{1}{2}}`$. If the neutrino mass hierarchy is moderate, and if the charged and neutral contributions to $`\theta _{atm}`$ add, this conventional approach is not excluded by the data .) However, in both hierarchical and pseudo-Dirac cases, the neutrino masses have upper bounds of $`(\mathrm{\Delta }m_{atm}^2)^{\frac{1}{2}}`$. In these schemes the sum of the neutrino masses is also bounded, $`\mathrm{\Sigma }_im_{\nu i}\lesssim 0.1`$ eV, implying that neutrino hot dark matter has too low an abundance to be relevant for any cosmological or astrophysical observation .
By contrast, it is more difficult to construct theories with flavour symmetries for the degenerate case, where $`\mathrm{\Sigma }_im_{\nu i}=3M`$, which are therefore unconstrained by any oscillation data. While non-Abelian symmetries can clearly obtain the degeneracy of (3) at zeroth order, the difficulty is in obtaining the desired lepton mass hierarchies and mixing angles, which requires flavour symmetry breaking vevs pointing in very different directions in group space. We propose a solution to this vacuum misalignment problem, and use it to construct a variety of models, some of which predict $`\theta _{atm}=45^{\circ }`$. We also construct a model with bimaximal mixing having $`\theta _{atm}=45^{\circ }`$ and $`\theta _{12}=45^{\circ }`$.
## 2 Texture Analysis
What are the possible textures for the degenerate case in the flavour basis? These textures will provide the starting point for constructing theories with flavour symmetries. In passing from flavour basis to mass basis, the relative transformations of $`e_L`$ and $`\nu _L`$ give the leptonic mixing matrix $`V`$. Defining $`V`$ by the charged current in the mass basis, $`\overline{e}V\nu `$, we choose to parameterize $`V`$ in the form
$$V=R(\theta _{23})R(\theta _{13})R(\theta _{12})$$
(4)
where $`R(\theta _{ij})`$ represents a rotation in the $`ij`$ plane by angle $`\theta _{ij}`$, and diagonal phase matrices are left implicit. The angle $`\theta _{23}`$ is necessarily large as it is $`\theta _{atm}`$. In contrast, the Super-Kamiokande data constrains $`\theta _{13}\lesssim 20^{\circ }`$ , and if $`\mathrm{\Delta }m_{atm}^2>2\times 10^{-3}\text{eV}^2`$, then the CHOOZ data requires $`\theta _{13}\lesssim 13^{\circ }`$ . For small angle MSW oscillations in the sun , $`\theta _{12}\approx 0.05`$, while other descriptions of the solar fluxes require large values for $`\theta _{12}`$ .
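For orientation, the parameterization (4) is easy to tabulate numerically. A minimal sketch (Python/numpy; the bimaximal angles are inputs chosen purely for illustration, and the phase matrices are set to unity):

```python
import numpy as np

def R(i, j, theta):
    """Rotation by angle theta in the ij-plane of the three-flavour space."""
    m = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    m[i, i] = m[j, j] = c
    m[i, j], m[j, i] = s, -s
    return m

# V = R(theta_23) R(theta_13) R(theta_12); rows e, mu, tau and columns 1, 2, 3
t23, t13, t12 = np.radians([45.0, 0.0, 45.0])    # bimaximal example values
V = R(1, 2, t23) @ R(0, 2, t13) @ R(0, 1, t12)
print(np.round(V, 3))
# atmospheric nu_mu <-> nu_tau oscillation amplitude: maximal for these inputs
print(4 * V[1, 2] ** 2 * V[2, 2] ** 2)           # -> 1.0
```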
Which textures give such a $`V`$ together with the degenerate mass eigenvalues of eqn. (3)? In searching for textures, we require that in the flavour basis any two non-zero entries are either independent or equal up to a phase, as could follow simply from flavour symmetries. This allows just three possible textures for $`m_\nu `$ at leading order
$`\mathrm{`}\mathrm{`}A^{\prime \prime }m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right)`$ (5)
$`\mathrm{`}\mathrm{`}B^{\prime \prime }m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right)`$ (6)
$`\mathrm{`}\mathrm{`}C^{\prime \prime }m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 1\\ 0& 1& 0\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 1& 1\\ 0& 1& 1\end{array}\right)`$ (7)
Alternatives for the perturbations proportional to $`m_{atm}`$ are possible. Each of these textures will have to be coupled to corresponding suitable textures for the charged lepton mass matrix $`m_E`$, defined by $`\overline{e_L}m_Ee_R`$. For example, in cases (A) and (B), the big $`\theta _{23}`$ rotation angle will have to come from the diagonalization of $`m_E`$.
To what degree are the three textures A,B and C the same physics written in different bases, and to what extent can they describe different physics? Any theory with degenerate neutrinos can be written in a texture A form, a texture B form or a texture C form, by using an appropriate choice of basis. However, for certain cases, the physics may be more transparent in one basis than in another, as illustrated later.
## 3 A Misalignment Mechanism
The near degeneracy of the three neutrinos requires a non-Abelian flavour symmetry, which we take to be $`SO(3)`$, with the three lepton doublets, $`l`$, transforming as a triplet. This is for simplicity – many discrete groups, such as a twisted product of two $`Z_2`$s, would also give zeroth order neutrino degeneracy. We expect the $`SO(3)`$ theories discussed below to have analogues with discrete non-Abelian symmetries. ($`SO(3)`$ has been invoked recently in connection with quasi-degenerate neutrinos also in Refs .)
We work in a supersymmetric theory and introduce a set of “flavon” chiral superfields which spontaneously break SO(3). For now we just assign the desired vevs to these fields; later we construct potentials which force these orientations. Also, for simplicity we assume one set of flavon fields, $`\chi `$, couple to operators which give neutrino masses, and another set, $`\varphi `$, to operators for charged lepton masses. We label fields according to the direction of the vev, e.g. $`\varphi _3=(0,0,v)`$. For example, texture A, with
$$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& \delta _2& D_2\\ 0& \delta _3& D_3\end{array}\right)\equiv m_{II},$$
(8)
results from the superpotential
$$W=(ll)hh+(l\chi _3)^2hh+(l\varphi _3)\tau h+(l\varphi _2)\tau h+(l\varphi _3)\xi _\mu \mu h+(l\varphi _2)\xi _\mu \mu h$$
(9)
where the coefficient of each operator is understood to be an order unity coupling multiplied by the appropriate inverse power of the large flavour mass scale $`M_f`$. The lepton doublet $`l`$ and the $`\varphi ,\chi `$ flavons are all $`SO(3)`$ triplets, while the right-handed charged leptons ($`e,\mu ,\tau `$) and the Higgs doublets, $`h`$, are $`SO(3)`$ singlets. The electron mass is neglected. The form of eqn. (9) may be guaranteed by additional Abelian flavour symmetries; in the limit where these symmetries are exact, the only charged lepton to acquire a mass is the $`\tau `$. These symmetries are broken by vevs of flavons $`\xi _{e,\mu }`$, which are $`SO(3)`$ and standard model singlet fields. The hierarchy of charged fermion masses is then generated by $`\xi _{e,\mu }/M_f`$. The ratios $`\varphi _{2,3}/M_f`$ and $`\chi /M_f`$ generate small dimensionless $`SO(3)`$ symmetry breaking parameters. The first term of (9) generates an $`SO(3)`$ invariant mass for the neutrinos corresponding to the first term in (5). The second term gives the second term of (5) with $`m_{atm}/M=\chi _3^2/M_f^2`$. The remaining terms generate the charged lepton mass matrices. Note that the charged fermion masses vanish in the $`SO(3)`$ symmetric limit — this is the way we reconcile the near degeneracy of the neutrino spectrum with the hierarchical charged lepton sector. This is viable because, although the leading neutrino masses are $`SO(3)`$ invariant, they are second order $`SU(2)`$ violating and are suppressed relative to the electroweak scale $`h`$ by $`h/M_f`$, where $`M_f`$ may be very large, of order the unification or Planck scale. On the other hand the charged lepton masses, although arising only via $`SO(3)`$ breaking, are only first order in $`SU(2)`$ breaking. Hence their suppression relative to $`h`$ is of order $`\varphi _i/M_f`$. Since $`\varphi _i`$ are $`SU(2)`$ singlets, they may have vevs much larger than $`h`$: the charged leptons can indeed be much heavier than the neutrinos.
In this example we see that the origin of large $`\theta _{atm}`$ is due to the misalignment of the $`\varphi `$ vev directions relative to that of the $`\chi `$ vev. This is generic. In theories with flavour symmetries, large $`\theta _{atm}`$ can only arise because of a misalignment of flavons in charged and neutral sectors. To obtain $`\theta _{atm}=45^{\circ }`$, as preferred by the atmospheric data, requires however a very precise misalignment, which can occur as follows. In a basis where the $`\chi `$ vev is in the direction $`(0,0,1)`$, there should be a single $`\varphi `$ field coupling to $`\tau `$ which has a vev in the direction $`(0,1,1)`$, where an independent phase for each entry is understood. As we shall now discuss, in theories based on $`SO(3)`$, such an alignment occurs very easily, and hence should be viewed as a typical expectation, and certainly not as a fine tuning.
Consider any 2 dimensional subspace within the $`l`$ triplet, and label the resulting 2-component vector of $`SO(2)`$ as $`\mathrm{}=(\mathrm{}_1,\mathrm{}_2)`$. At zeroth order in SO(2) breaking only the neutrinos of $`\mathrm{}`$ acquire a mass, and they are degenerate from $`\mathrm{}\mathrm{}hh`$. Introduce a flavon doublet $`\chi =(\chi _1,\chi _2)`$ which acquires a vev to break $`SO(2)`$. If this field were real, then one could do an $`SO(2)`$ rotation to set $`\chi _2=0`$. However, in supersymmetric theories $`\chi `$ is complex and a general vev has the form $`\chi _i=a_i+ib_i`$. Only one of these four real parameters can be set to zero using $`SO(2)`$ rotations. Hence the scalar potential can determine a variety of interesting alignments. There are two alignments which are easily produced and are very useful in constructing theories:
$$\text{“SO(2)” Alignment:}\qquad W=X(\chi ^2-M^2);\quad m_\chi ^2>0;\quad \chi =M(0,1).$$
(10)
The parameter $`M`$, which could result from the vev of some $`SO(2)`$ singlet, can be taken real and positive by a phase choice for the fields. Writing $`\chi _i=a_i+ib_i`$, with $`a_i`$ and $`b_i`$ real, an $`SO(2)`$ rotation can always be done to set $`a_1=0`$. The driver field $`X`$ forces $`\chi ^2=M^2`$, giving $`b_2=0`$ and $`a_2^2=b_1^2+M^2`$ with $`b_1`$ undetermined. The potential term which aligns the direction of the $`\chi `$ vev is the positive soft mass squared $`m_\chi ^2\chi ^{\dagger }\chi `$, which sets $`b_1=0`$.
The second example is:
$$\text{“U(1)” Alignment:}\qquad W=X\phi ^2;\quad m_\phi ^2<0;\quad \phi =V(1,i)\text{ or }V(1,-i).$$
(11)
It is now the negative soft mass squared which forces a magnitude $`\sqrt{2}|V|`$ for the vev. Using $`SO(2)`$ freedom to set $`a_2=0`$, $`|F_X|^2`$ provides the aligning potential and requires $`\phi ^2=a_1^2+2ia_1b_1-b_1^2-b_2^2=0`$, implying $`b_1=0`$ and $`b_2=\pm a_1`$. The $`U(1)`$ alignment leaves a discrete 2-fold degeneracy. In fact, the vevs $`V(1,\pm i)`$ do not require any particular choice of $`SO(2)`$ basis: performing an $`SO(2)`$ transformation by angle $`\theta `$ on them just changes the phase of $`V`$ by $`\pm \theta `$. The phases in $`\phi `$ are unimportant in determining the values of the neutrino mixing angles, so that the relative orientation of the vevs of (10) and (11) corresponds to $`45^{\circ }`$ mixing.
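The alignment is easy to exhibit numerically. The toy sketch below (Python with numpy/scipy) minimizes $`|F_X|^2+m_\phi ^2\phi ^{\dagger }\phi `$ plus a quartic term that we add by hand purely to keep the toy potential bounded from below (the stabilization of the overall magnitude is left implicit in the text); its minima land on the $`U(1)`$-aligned configurations $`\phi \propto (1,\pm i)`$:

```python
import numpy as np
from scipy.optimize import minimize

m2, lam = -1.0, 0.5       # m_phi^2 < 0; lam*|phi|^4 is only a stabilizer
rng = np.random.default_rng(0)

def potential(x):
    phi = np.array([x[0] + 1j * x[1], x[2] + 1j * x[3]])
    F = phi @ phi                      # the F-term phi^2 = phi_1^2 + phi_2^2
    n = np.vdot(phi, phi).real         # |phi_1|^2 + |phi_2|^2
    return abs(F) ** 2 + m2 * n + lam * n ** 2

best = min((minimize(potential, rng.standard_normal(4)) for _ in range(20)),
           key=lambda res: res.fun)
phi = np.array([best.x[0] + 1j * best.x[1], best.x[2] + 1j * best.x[3]])
print("phi^2            ~", np.round(phi @ phi, 6))            # ~ 0
print("|phi_2 / phi_1|  ~", round(abs(phi[1] / phi[0]), 3))    # ~ 1
print("relative phase   ~", round(np.degrees(np.angle(phi[1] / phi[0])), 1))
# the relative phase comes out at +90 or -90 degrees, i.e. phi ~ V(1, +-i)
```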
The vev of the $`SO(2)`$ alignment, (10), picks out the original $`SO(2)`$ basis; however, the vev of the $`U(1)`$ alignment, (11), picks out a new basis $`(\phi _+,\phi _{-})`$, where $`\phi _\pm =(\phi _1\pm i\phi _2)/\sqrt{2}`$. If $`(\phi _1,\phi _2)\propto (1,i)`$, then $`(\phi _{-},\phi _+)\propto (1,0)`$. An important feature of the $`U(1)`$ basis is that the $`SO(2)`$ invariant $`\phi _1^2+\phi _2^2`$ has the form $`2\phi _+\phi _{-}`$. In the SO(3) theory, we usually think of $`(ll)hh`$ as giving the unit matrix for neutrino masses as in texture A. However, if we use the $`U(1)`$ basis for the 12 subspace, this operator actually gives the leading term in texture B, whereas if we use the $`U(1)`$ basis in the 23 subspace we get the leading term in texture C.
## 4 The Neutrinoless Double Beta Decay Constraint
Searches for neutrinoless double beta decay, $`\beta \beta _{0\nu }`$, place a limit $`m_{\nu ee}<0.5`$ eV . Consider texture A with $`m_E=m_{II}`$, so that the electron is dominantly in $`l_1`$. The $`\beta \beta _{0\nu }`$ limit implies $`\mathrm{\Sigma }_im_{\nu i}<1.5`$ eV, and therefore places a constraint on the amount of neutrino hot dark matter in the universe
$$\mathrm{\Omega }_\nu (l_1\simeq e)<0.05\left(\frac{0.5}{h}\right)^2.$$
(12)
While values of $`\mathrm{\Omega }_\nu `$ which satisfy this constraint can be of cosmological interest, it is also important to know whether this bound can be violated.
The bound is not greatly altered if texture A is taken with
$$m_E=\left(\begin{array}{ccc}0& \delta _1& D_1\\ 0& \delta _2& D_2\\ 0& \delta _3& D_3\end{array}\right)\equiv m_{III},$$
(13)
for generic values of $`D_1,D_2`$ and $`D_3`$. However, there is a unique situation where the $`\beta \beta _{0\nu }`$ bound on the amount of neutrino hot dark matter is evaded. It is convenient to discuss this special case in the basis in which it appears as texture B with $`m_E=m_{II}`$. To the order to which we work, the electron mass eigenstate is then in the doublet $`l_{-}=(l_1-il_2)/\sqrt{2}`$, where we label the basis by $`(-,+,3)`$ and, since there is no neutrino mass term $`l_{-}l_{-}hh`$, the rate for neutrinoless double beta decay vanishes. This important result is not transparent when the theory is described by texture A. In this case $`m_E=m_{III}(\delta _1=i\delta _2,D_1=iD_2)`$ and the electron is in a linear combination of $`l_1`$ and $`l_2`$. There are contributions to $`\beta \beta _{0\nu }`$ from both $`l_1l_1hh`$ and $`l_2l_2hh`$ operators, and these contributions cancel.
As an illustration of the utility of the $`U(1)`$ vev alignment, this theory with vanishing $`\beta \beta _{0\nu }`$ rate is described by the superpotential
$$W=(ll)hh+(l\chi _3)^2hh+(l\varphi _3)\tau h+(l\varphi _{-})\tau h+(l\varphi _3)\xi _\mu \mu h+(l\varphi _{-})\xi _\mu \mu h.$$
(14)
Comparing with the theory for texture A with $`m_E=m_{II}`$, described by (9), the only change is the replacement of a vev in the 2 direction with one in the $`-`$ direction.
In theories of this sort, it is likely that a higher order contribution to $`\beta \beta _{0\nu }`$ will result when perturbations are added for $`m_e`$ and $`\mathrm{\Delta }m_{\odot }^2`$. For example, if the electron mass results from mixing with the second generation by an angle $`\theta \sim (m_e/m_\mu )^{\frac{1}{2}}`$, then $`\beta \beta _{0\nu }`$ is reintroduced. However, the resulting limit on $`\mathrm{\Omega }_\nu `$ is weaker than (12) by about an order of magnitude, corresponding to this mixing angle. Large values of $`\mathrm{\Omega }_\nu `$ in such theories could be probed by further searches for neutrinoless double beta decay.
## 5 Models For Large $`\theta _{atm}`$
Along the lines described above, we first construct a model with large, but undetermined, $`\theta _{atm}`$, which explicitly gives both the Yukawa couplings and the orientation of the flavon vevs. Introduce two SO(3) triplet flavons, carrying discrete symmetry charges so that one, $`\chi `$, gives only neutrino masses, while the other, $`\varphi `$, gives only charged lepton masses:
$$W=(ll)hh+(l\chi )^2hh+(l\varphi )\tau h.$$
(15)
Suppose that both flavons are forced to acquire vevs using the “$`SO(2)`$” alignment of (10):
$$W=X(\chi ^2-M^2)+Y(\varphi ^2-M^{\prime 2})+Z(\chi \varphi -M^{\prime \prime 2});\qquad m_\chi ^2>0,\ m_\varphi ^2>0$$
(16)
where we have also added a $`Z`$ driving term to fix the relative orientation of $`\chi `$ and $`\varphi `$. As before we may take $`M`$, $`M^{\prime }`$ and $`M^{\prime \prime }`$ real by a choice of the phases of the fields. Minimizing the potential from $`|F_X|^2`$, the $`SO(3)`$ freedom allows the choice: $`\chi =M(0,0,1)`$. The minimization of $`|F_Y|^2`$ is not identical, because now there is only a residual $`SO(2)`$ freedom, which allows only the general form $`\varphi _i=a_i+ib_i`$, with $`a_1=0`$. Setting $`F_Y=0`$ and minimizing $`\varphi ^{\dagger }\varphi `$ gives $`\varphi =M^{\prime }(0,\mathrm{sin}\theta ,\mathrm{cos}\theta )`$, with $`\theta `$ undetermined. The $`Z`$ driver fixes $`\mathrm{cos}\theta =M^{\prime \prime 2}/MM^{\prime }`$, which is of order unity if $`M,M^{\prime }`$ and $`M^{\prime \prime }`$ are comparable. If all other flavons couple to the leptons through higher dimension operators, $`\theta _{atm}=\theta `$.
Perhaps a more interesting case is to generate maximal mixing. To achieve this, change $`\varphi `$ to the “$`U(1)`$” alignment of (11)
$$W=X(\chi ^2-M^2)+Y\varphi ^2+Z(\chi \varphi -M^{\prime \prime 2});\qquad m_\chi ^2>0,\ m_\varphi ^2>0.$$
(17)
As before, $`SO(3)`$ freedom allows $`\chi =M(0,0,1)`$ and $`\varphi _i=a_i+ib_i`$, with $`a_1=0`$. Setting $`F_Z=0`$ aligns $`b_3=0`$ and $`a_3=M^{\prime \prime 2}/M\equiv V`$, while $`F_Y=0`$ forces $`b_1^2+b_2^2=V^2+a_2^2`$ and $`a_2b_2=0`$. With $`m_\varphi ^2>0`$, the remaining degeneracy is completely lifted by the soft mass squared term, giving $`a_2=0`$ and $`(b_1,b_2)=V(\mathrm{sin}\theta ,\mathrm{cos}\theta )`$. Since $`a_1=a_2=0`$, the $`SO(2)`$ freedom has not been used up, and we can choose an $`SO(2)`$ basis in which $`\theta =0`$:
$$\chi =M\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\varphi =V\left(\begin{array}{c}0\\ i\\ 1\end{array}\right).$$
(18)
As expected, these vevs show that $`\chi `$ has an “$`SO(2)`$” alignment, while $`\varphi `$ has a “$`U(1)`$” alignment. The alignment term ensures that $`\varphi `$ and $`\chi `$ vevs are not orthogonal. The lepton masses from (15) now give $`\theta _{atm}=45^{\circ }`$, up to corrections of relative order $`m_\mu /m_\tau `$. In the $`(1,-,+)`$ basis, this theory has the leading terms of texture C with
$$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& \delta _2& 0\\ 0& 0& D_3\end{array}\right)\equiv m_I$$
(19)
## 6 Models With Large $`\mathrm{\Omega }_\nu `$ and Large $`\theta _{atm}`$
The key to avoiding the $`\beta \beta _{0\nu }`$ constraint (12) is to have a $`U(1)`$ vev alignment in the 12 space so that the electron is in $`l_{-}`$. In this basis the $`SO(3)`$ invariant neutrino mass term is $`2l_+l_{-}+l_3l_3`$, as shown in texture B, and gives a vanishing $`\beta \beta _{0\nu }`$ rate. Thus we seek to modify the model of eqn (15), which generates large $`\theta _{atm}`$, to align the electron along $`l_{-}`$. The interactions of (15) are insufficient to identify the electron. We must add perturbations for the muon mass, which will identify the electron as the massless state. Hence we extend (15) to
$$W_1=(ll)hh+(l\chi )^2hh+(l\varphi _\tau )\tau h+(l\varphi _\mu )\xi _\mu \mu h,$$
(20)
and seek potentials where $`\varphi _{\tau ,\mu }`$ have zero components in the $`+`$ direction.
To obtain a $`\chi `$ vev in the 3 direction, and a $`U(1)`$ alignment in the 12 space, we use (17), with $`M^{\prime \prime }=0`$ to enforce the orthogonality of $`\varphi `$ with $`\chi `$
$$W_2=X(\chi ^2-M^2)+Y\varphi _\mu ^2+Z\chi \varphi _\mu ;\qquad m_\chi ^2>0,\ m_{\varphi _\mu }^2<0,$$
(21)
which gives
$$\chi =M\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\varphi _\mu =V\left(\begin{array}{c}1\\ i\\ 0\end{array}\right).$$
(22)
Large $`\theta _{atm}`$ requires $`\varphi _\tau `$ to have large components in both $``$ and 3 directions, and results from the addition
$$W_3=Z^{\prime }\varphi _\mu \varphi _\tau ;\qquad m_{\varphi _\tau }^2<0.$$
(23)
In the (1,2,3) basis
$$\varphi _\tau =V^{}\left(\begin{array}{c}1\\ i\\ \sqrt{2}x\end{array}\right).$$
(24)
This theory, described by $`W_1+W_2+W_3`$, determines the vev orientations so that $`\mathrm{\Omega }_\nu `$ is unconstrained by $`\beta \beta _{0\nu }`$ decay. The value of $`\theta _{atm}`$ is generically of order unity, but is not determined.
Additional potential terms can determine $`x`$ and hence $`\theta _{atm}`$. For example, maximal mixing can be obtained in a theory with three extra triplets, $`\varphi _{1,2,3}`$. Discrete symmetries are introduced so that none of these fields couples to matter: the matter interactions remain those of (20). The field $`\varphi _1`$ is driven to have an $`SO(2)`$ alignment, and also the product $`\varphi _1\chi `$ is driven to zero. The $`SO(2)`$ freedom, not specified until now, then allows the form $`\varphi _1=V_1(1,0,0)`$. The other two triplets $`\varphi _2`$ and $`\varphi _3`$ are driven just like $`\varphi _\mu `$ and $`\varphi _\tau `$ respectively: $`\varphi _2^2,\varphi _2\chi `$ and $`\varphi _2\varphi _3`$ are all forced to vanish. However, the vevs are not identical to those of $`\varphi _{\mu ,\tau }`$, because $`\varphi _2\varphi _\mu `$ is forced to be non-zero, so that the discrete $`\pm `$ choice of the $`U(1)`$ alignment is opposite for $`\varphi _{2,3}`$ compared with $`\varphi _{\mu ,\tau }`$:
$$\varphi _2=V_2\left(\begin{array}{c}1\\ -i\\ 0\end{array}\right)\varphi _3=V_3\left(\begin{array}{c}1\\ -i\\ \sqrt{2}y\end{array}\right).$$
(25)
Maximal mixing follows from two further constraints: forcing $`\varphi _3\varphi _\tau `$ to zero imposes $`xy=-1`$, while forcing $`ϵ\varphi _1\varphi _3\varphi _\tau `$ to zero ($`ϵ`$ is the tensor totally antisymmetric in $`SO(3)`$ indices) sets $`y=-x`$. Hence $`(x,y)=(\pm 1,\mp 1)`$, giving $`\theta _{atm}=45^{\circ }`$. The complete theory is described by $`W_1+W_2+W_3+W_4`$, where
$`W_4`$ $`=`$ $`X_1(\varphi _1^2-M_1^2)+X_2\varphi _1\chi +X_3\varphi _2^2+X_4\varphi _2\chi +X_5\varphi _2\varphi _3`$ (26)
$`+X_6(\varphi _2\varphi _\mu -M_2^2)+X_7\varphi _3\varphi _\tau +X_8ϵ\varphi _1\varphi _3\varphi _\tau .`$
There are other options for constructing theories with interesting vacuum alignments. For example, doublets may be used as well as triplets, and if $`SO(3)`$ is gauged, the aligning potential may arise from $`D`$ terms as well as $`F`$ terms.
## 7 Conclusions
In this letter we made a counter-intuitive proposal for a theory of lepton masses; in the limit of exact flavour symmetry, the three neutrinos are massive and degenerate, while the three charged leptons are massless. Such zeroth-order masses result when the three lepton doublets form a real irreducible representation of some non-Abelian flavour group — for example, a triplet of $`SO(3)`$. A sequential breaking of the flavour group then produces both a hierarchy of charged lepton masses and a hierarchy of neutrino $`\mathrm{\Delta }m^2`$. The Majorana neutrino masses are small because, as always, they are second order in weak symmetry breaking.
We showed that the $`SO(3)`$ symmetry breaking may follow a different path in the charged and neutral sectors, leading to a vacuum misalignment with interesting consequences. There can be large leptonic mixing angles, with $`45^{}`$ arising from the simplest misalignment potentials. Such mixing can explain the atmospheric neutrino data while allowing large amounts of neutrino hot dark matter. The latter is consistent with the bounds on the $`\beta \beta _{0\nu }`$ process because the symmetry suppresses the Majorana mass matrix element $`m_{\nu ee}`$. Such a model can give bimaximal mixing with the large mixing angles very close to $`45^{}`$.
## Acknowledgements
This work was supported in part by the U.S. Department of Energy under Contract DE-AC03-76SF00098, and in part by the National Science Foundation under grant PHY-95-14797. HM was also supported by the Alfred P. Sloan Foundation.
Figure 1: Interaction amplitude (arbitrary units) of two colour dipoles as a function of their impact parameter (units of correlation lengths $`a`$). One large $`q\overline{q}`$-dipole of extension $`12a`$ is fixed; the second small one, of extension $`1a`$, averaged over all its orientations, is shifted along on top of the first one. For the $`D_1`$-tensor structure of the correlator there are only contributions when the endpoints are close to each other, whereas for the $`D`$-structure large contributions show up also from between the endpoints. This is to be interpreted as interaction with the gluonic string between the quark and antiquark.
HD-THEP-99-1. – To be published in Fizika B
VECTOR MESONS $`\rho `$, $`\rho ^{\prime }`$ AND $`\rho ^{\prime \prime }`$
DIFFRACTIVELY PHOTO- AND LEPTOPRODUCED
G. KULZINGER (supported by the Deutsche Forschungsgemeinschaft under grant no. GRK 216/1-96; e-mail: G.Kulzinger@thphys.uni-heidelberg.de)
Institut für Theoretische Physik der Universität Heidelberg,
Philosophenweg 16, 69120 Heidelberg, Germany
In the framework of non-perturbative QCD we calculate high-energy diffractive production of vector mesons $`\rho `$, $`\rho ^{\prime }`$ and $`\rho ^{\prime \prime }`$ by real and virtual photons on a nucleon. The initial photon dissociates into a $`q\overline{q}`$-dipole and transforms into a vector meson by scattering off the nucleon which, for simplicity, is represented as quark-diquark. The relevant dipole-dipole scattering amplitude is provided by the non-perturbative model of the stochastic QCD vacuum. The wave functions result from considerations in the frame of light-front dynamics; the physical $`\rho ^{\prime }`$- and $`\rho ^{\prime \prime }`$-mesons are assumed to be mixed states of an active $`2S`$-excitation and some residual rest ($`2D`$- and/or hybrid state). We obtain good agreement with the experimental data and get an understanding of the markedly different $`\pi ^+\pi ^{-}`$-mass spectra for photoproduction and $`e^+e^{-}`$-annihilation.
Keywords: non-perturbative QCD, diffraction, photoproduction, photon wave function, $`\rho `$-meson, excited vector mesons, hybrid
1. Introduction
Diffractive scattering processes are characterized by small momentum transfer, $`|t|<1`$ GeV<sup>2</sup>, and thus governed by non-perturbative QCD. To get more insight in the physics at work we investigate exclusive vector meson production by real and virtual photons. In this note we summarize recent results from Ref. on $`\rho `$-, $`\rho ^{\prime }`$- and $`\rho ^{\prime \prime }`$-production, see also Ref. . In Refs we have developed a framework which we can only sketch here.
We consider high-energy diffractive collision of a photon, which dissociates into a $`q\overline{q}`$-dipole and transforms into a vector meson, with a proton in the quark-diquark picture, which remains intact. The scattering $`T`$-amplitude can be written as an integral of the dipole-dipole amplitude and the corresponding wave functions. Integrating out the proton side, we have
$$T_V^\lambda (s,t)=is\int \frac{dz\,d^2𝐫}{4\pi }\psi _{V(\lambda )}^{*}\psi _{\gamma (Q^2,\lambda )}(z,𝐫)J_p(z,𝐫,\mathrm{\Delta }_T),$$
(1)
where $`V(\lambda )`$ stands for the final vector meson and $`\gamma (Q^2,\lambda )`$ for the initial photon with definite helicities $`\lambda `$ (and virtuality $`Q^2`$); $`z`$ is the light-cone momentum fraction of the quark, $`𝐫`$ the transverse extension of the $`q\overline{q}`$-dipole. The function $`J_p(z,𝐫,\mathrm{\Delta }_T)`$ is the interaction amplitude for a dipole $`\{z,𝐫\}`$ scattering on a proton with fixed momentum transfer $`t=-\mathrm{\Delta }_T^2`$; for $`\mathrm{\Delta }_T=0`$ it is, due to the optical theorem, the corresponding total cross section (see Eq. (4) below). It is calculated within non-perturbative QCD: In the high-energy limit Nachtmann derived a non-perturbative formula for dipole-dipole scattering whose basic entity is the vacuum expectation value of two lightlike Wilson loops. This gets evaluated in the model of the stochastic QCD vacuum.
2. The model of the stochastic QCD vacuum
Coming from the functional integral approach the model of the stochastic QCD vacuum assumes that the non-perturbative part of the gauge field measure, i.e. long-range gluon fluctuations that are associated with a non-trivial vacuum structure of QCD, can be approximated by a stochastic process in the gluon field strengths with convergent cumulant expansion. Further assuming this process to be gaussian one arrives at a description through the second cumulant $`\langle g^2F_{\mu \nu }^A(x;x_0)F_{\rho \sigma }^{A^{\prime }}(x^{\prime };x_0)\rangle `$, which has two Lorentz tensor structures multiplied by correlation functions $`D`$ and $`D_1`$, respectively. $`D`$ is non-zero only in the non-abelian theory or in the abelian theory with magnetic monopoles and yields linear confinement, whereas the $`D_1`$-structure is not confining.
The underlying mechanism of (interacting) gluonic strings also shows up in the scattering of two colour dipoles, cf. Fig. 1, and essentially determines the $`T`$-amplitude if large dipole sizes are not suppressed by the wave functions. To confront this specific large-distance prediction with experiment, we intend to study the broad $`\rho `$-states and, especially, their production by broad small-$`Q^2`$ photons.
3. Physical states $`\rho `$, $`\rho ^{}`$ and $`\rho ^{\prime \prime }`$
Analyzing the $`\pi ^+\pi ^{-}`$-invariant mass spectra for photoproduction and $`e^+e^{-}`$-annihilation Donnachie and Mirzaie concluded that there is evidence for two resonances in the 1.6 GeV region whose masses are compatible with the $`1^{--}`$ states $`\rho (1450)`$ and $`\rho (1700)`$. We make the simplest ansatz
$`|\rho (770)\rangle `$ $`=`$ $`|1S\rangle ,`$ (2)
$`|\rho (1450)\rangle `$ $`=`$ $`\mathrm{cos}\theta |2S\rangle +\mathrm{sin}\theta |rest\rangle ,`$
$`|\rho (1700)\rangle `$ $`=`$ $`-\mathrm{sin}\theta |2S\rangle +\mathrm{cos}\theta |rest\rangle ,`$
where $`|rest\rangle `$ is considered to have $`|2D\rangle `$- and/or hybrid components whose couplings to the photon both are suppressed, see Refs. and , respectively. With our convention of the wave functions the relative signs $`\{+,-,+\}`$ of the production amplitudes of the $`\rho `$-, $`\rho ^{\prime }`$- and $`\rho ^{\prime \prime }`$-states in $`e^+e^{-}`$-annihilation determine the mixing angle to be in the first quadrant; from Ref. then follows $`\theta \approx 41^{\circ }`$. With this value and the branching ratios of the $`\rho ^{\prime }`$- and $`\rho ^{\prime \prime }`$-mesons into $`\pi ^+\pi ^{-}`$ extracted in Ref. we calculate the photoproduction spectrum as shown in Fig. 2 with the observed signs pattern $`\{+,+,-\}`$; for details cf. . We will understand below from Fig. 3 the signs change of the $`2S`$-production as due to the dominance of large dipole sizes in photoproduction, in contrast to the coupling to the electromagnetic current $`f_{2S}`$ being determined by small dipole sizes.
4. Light-cone wave functions
In the high-energy limit the photon can be identified as its lowest Fock, i.e. $`q\overline{q}`$-state. The vector meson wave function distributes this $`q\overline{q}`$-dipole $`\{z,𝐫\}`$, accordingly.
Photon: By means of light-cone perturbation theory (LCPT) we get explicit expressions for both longitudinal and transverse photons. The photon transverse size, which we will see to determine the $`T`$-amplitude, is governed by the product $`\epsilon r`$, where $`\epsilon =\sqrt{z\overline{z}Q^2+m^2}`$ and $`r=|𝐫|`$. For high $`Q^2`$ longitudinal photons dominate by a power of $`Q^2`$; their $`z`$-endpoints being explicitly suppressed, LCPT is thus applicable. For moderate $`Q^2`$ also transverse photons contribute which have large extensions because endpoints are not suppressed. For $`Q^2`$ smaller than 1 GeV<sup>2</sup> LCPT definitively breaks down. However, it was shown that a quark mass phenomenologically interpolating between a zero valence and a $`220`$ MeV constituent mass astonishingly well mimics chiral symmetry breaking and confinement. Our wave function is thus given by LCPT with such a quark mass $`m(Q^2)`$, for details cf. Refs .
Vector mesons: The vector meson wave functions of the $`1S`$- and $`2S`$-states are modelled according to the photon. We only replace the photon energy denominator $`(\epsilon ^2+𝐤^2)^{-1}`$ by a function of $`z`$ and $`|𝐤|`$, for which ansätze according to Wirbel and Stech are made; we account for the “radial” excitation with both a polynomial in $`z\overline{z}`$ and the $`2S`$-polynomial in $`𝐤^2`$ of the transverse harmonic oscillator. The parameters are fixed by the demands that the $`1S`$-state reproduces $`M_\rho `$ and $`f_\rho `$ and that the $`2S`$-state is both normalized and orthogonal to the $`1S`$-state. For details cf. Ref. .
5. Results
Before presenting some of our results we stress that all calculated quantities are absolute predictions. Due to the eikonal approximation applied, the cross sections are constant with total energy $`s`$ and refer to $`\sqrt{s}=20`$ GeV where the proton radius is fixed. (The two parameters of the model of the stochastic QCD vacuum, the gluon condensate $`\langle g^2FF\rangle `$ and the correlation length $`a`$, are determined by matching low-energy and lattice results, cf. Ref. .)
In Fig. 3 we display – for the transverse $`2S`$-state, $`\lambda =T`$ – both the functions
$`J_p^{(0)}(z,r)`$ $`:=`$ $`{\displaystyle \int _0^{2\pi }}{\displaystyle \frac{d\phi _𝐫}{2\pi }}J_p(z,𝐫,\mathrm{\Delta }_T=0)`$ (3)
$`r\psi _{V(\lambda )}^{*}\psi _{\gamma (Q^2,\lambda )}(r)`$ $`:=`$ $`{\displaystyle \int \frac{dz}{4\pi }\int _0^{2\pi }\frac{d\phi _𝐫}{2\pi }|𝐫|\psi _{V(\lambda )}^{*}\psi _{\gamma (Q^2,\lambda )}(z,𝐫)}`$ (4)
which together, see Eq. (1), essentially determine the leptoproduction amplitude. It is strikingly shown how for decreasing virtuality $`Q^2`$ the outer positive region of the wave functions’ effective overlap $`r\psi _{V(\lambda )}^{*}\psi _{\gamma (Q^2,\lambda )}`$ wins over the inner negative part due to the strong rise with $`r`$ of the dipole-proton interaction amplitude $`J_p^{(0)}`$, which itself is a consequence of the string interaction mechanism discussed above. In practice dipole sizes up to $`2.5`$ fm contribute significantly to the cross section.
Our results for integrated elastic cross sections as functions of $`Q^2`$ are given in Fig. 4. For the $`\rho `$-meson our prediction is about 20–30% below the E665-data . However, we agree with the NMC-experiment which measures some definite superposition of longitudinal and transverse polarization, see Table 3 in Ref. . For the $`2S`$-state, due to the nodes of the wave function, we predict a marked structure; the explicit shape, however, strongly depends on the parametrization of the wave functions.
In Fig. 5 we display the ratio $`R_{LT}(Q^2)`$ of longitudinal to transverse cross sections and find good agreement with experimental data for the $`\rho `$-state. For the $`2S`$-state we again predict a marked structure which is very sensitive to the node positions in the wave functions.
Further results referring to cross sections differential in $`t`$ and the ratio of $`2\pi ^+2\pi ^{-}`$-production via $`\rho ^{\prime }`$ and $`\rho ^{\prime \prime }`$ to $`\pi ^+\pi ^{-}`$-production via $`\rho `$ are given in Ref. .
Acknowledgements
The author thanks H.G. Dosch and H.J. Pirner for collaboration in the underlying work.
Figure 1: Energy per nucleon for symmetric nuclear matter. The dashed curve is the result of the original QMC. The solid curve is for the QMC with short-range correlations (QMCs). The region enclosed by the dotted curves is the empirical equation of state .
ADP-99-6/T351
A quark-meson coupling model with short-range quark-quark correlations
K. Saito (ksaito@nucl.phys.tohoku.ac.jp)
Physics Division, Tohoku College of Pharmacy
Sendai 981-8558, Japan
K. Tsushima (ktsushim@physics.adelaide.edu.au) and A.W. Thomas (athomas@physics.adelaide.edu.au)
Special Research Center for the Subatomic Structure of Matter
and Department of Physics and Mathematical Physics
The University of Adelaide, Adelaide, SA 5005, Australia
## Abstract
Short-range quark-quark correlations are introduced into the quark-meson coupling (QMC) model in a simple way. The effect of these correlations on the structure of the nucleon in dense nuclear matter is studied. We find that the short-range correlations may serve to reduce a serious problem associated with the modified quark-meson coupling model (within which the bag constant is allowed to decrease with increasing density), namely the tendency for the size of the bound nucleon to increase rapidly as the density rises. We also find that, with the addition of correlations, both QMC and modified QMC are consistent with the phenomenological equation of state at high density.
PACS numbers: 21.65.+f, 21.30.-x, 24.10.Jv, 12.39.Ba
Keywords: nuclear matter, quark-meson coupling model, short-range correlations, quark structure effects
About a decade ago, Guichon proposed a relativistic quark model for nuclear matter, in which matter consists of non-overlapping nucleon bags bound by the self-consistent exchange of scalar ($`\sigma `$) and vector ($`\omega `$) mesons in mean-field approximation (MFA). This model has been further developed as the quark-meson coupling (QMC) model, and applied to various phenomena in nuclear physics (for recent reviews, see Ref.). Recently, Jin and Jennings have proposed an alternative version of QMC (called the modified QMC, MQMC), in which the bag constant is allowed to decrease as a function of density.
So far, the use of the QMC model has been limited to the region of small to moderate densities, because it has been assumed that the nucleon bags do not overlap. It is therefore of great interest to explore ways to extend the model to include short-range quark-quark correlations, which may occur when nucleon bags overlap at high density. In this paper we introduce these short-range correlations in a very simple way and calculate their effect on the quark structure of the nucleon in medium. We refer to this model as the quark-meson coupling (or modified quark-meson coupling) model with short-range correlations, QMCs (or MQMCs).
Let us consider uniformly distributed (iso-symmetric) nuclear matter with density $`\rho _B`$. At high density the nucleon bags start to overlap with each other, and a quark in one nucleon may interact with quarks in other nucleons in the overlapping region. Since the interaction between the quarks is short range, it seems reasonable to treat it in terms of contact interactions. An additional interaction term of the form, $`\mathcal{L}_{int}\sim \sum _{ij}\overline{\psi }_q(i)\mathrm{\Gamma }_\alpha \psi _q(i)\overline{\psi }_q(j)\mathrm{\Gamma }^\alpha \psi _q(j)`$, may then be added to the original QMC Lagrangian density . Here $`\psi _q(i)`$ is a quark field in the $`i`$-th nucleon and $`\mathrm{\Gamma }_\alpha `$ stands for $`1,\gamma _5,\gamma _\mu ,\gamma _5\gamma _\mu `$ or $`\sigma _{\mu \nu }`$ (with or without the isospin and color generators). (For the present we consider only u and d quarks.) In MFA most of these terms vanish in a static, spin-saturated, uniform system because of rotational symmetry, parity, etc. We shall retain only the dominant MFA contributions, namely the scalar- and vector-type interactions: $`\mathrm{\Gamma }_\alpha =1`$ and $`\gamma _0`$.
Next we consider the probability for the nucleon bags to overlap, using a simple geometrical approach. Let us first consider a collection of rigid balls with a radius $`R_c`$. In the close-packed structure of nuclear matter we find that the effective volume per ball is given by $`V_c=4\sqrt{2}R_c^3`$. The corresponding density, $`\rho _c`$, is then given by the inverse of $`V_c`$, and hence the radius of the rigid ball is related to the density as $`R_c=1/(4\sqrt{2}\rho _c)^{1/3}`$. Returning to our problem, we see that for a given nuclear density $`\rho _B`$, if the nucleon bag radius $`R`$, which is obtained by solving the nuclear matter problem self-consistently, is larger than $`R_c(=1/(4\sqrt{2}\rho _B)^{1/3})`$, the nucleons will overlap. If $`R\le R_c`$, there is no overlap. Of course, one could build a more sophisticated model, allowing for nucleon motion and nucleon-nucleon correlations , but we believe that the present model is sufficient for an initial investigation.
Now consider two nucleon bags separated by a distance $`d`$ in nuclear matter. They will overlap for $`d<2R`$, and the common volume is then given by $`V_{ov}=V_N(1-3y/4+y^3/16)`$ , where $`V_N`$ is the nucleon volume ($`=4\pi R^3/3`$) and $`y=d/R`$. It is natural to choose the probability of overlap, $`p`$, to be proportional to $`V_{ov}/V_N`$:
$$p(y)=1-\frac{3y}{4}+\frac{y^3}{16}.$$
(1)
Of course, this choice is quite model-dependent. In principle we could use an arbitrary, smooth function, which goes to unity at $`y=0`$ and zero beyond $`y=2`$ and which respects the three dimensional geometry of this problem. In this exploratory study, we take this simple form as an example.
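To make the geometry concrete, the following minimal Python sketch (our own function names; the saturation density and free bag radius are the values used later in this paper) evaluates $`p(y)`$ and the density at which bags of radius $`R`$ begin to touch:

```python
import numpy as np

def overlap_probability(d, R):
    """Overlap probability p(y) of Eq. (1), with y = d/R; p = 1 at full
    overlap (y = 0) and p = 0 once the bags no longer touch (y >= 2)."""
    y = d / R
    return 0.0 if y >= 2.0 else 1.0 - 0.75 * y + y**3 / 16.0

def critical_density(R):
    """Density above which bags of radius R overlap, from the
    close-packing argument: rho_c = 1/(4*sqrt(2)*R^3)."""
    return 1.0 / (4.0 * np.sqrt(2.0) * R**3)

rho0, R = 0.15, 0.8   # saturation density (fm^-3) and free-space bag radius (fm)
print(critical_density(R) / rho0)        # ~2.3: overlap sets in above rho_0
for x in (2.4, 2.7, 3.0):                # rho_B / rho_0
    d = 2.0 / (4.0 * np.sqrt(2.0) * x * rho0) ** (1.0 / 3.0)
    print(x, overlap_probability(d, R))  # p grows slowly past the onset
```

With the free-space radius the onset is near $`2.3\rho _0`$; the slightly different in-medium radii shift it to the values quoted below ($`2.7\rho _0`$ for QMCs, $`1.3\rho _0`$ for MQMCs).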
In mean-field approximation the Dirac equation for a quark field in a nucleon bag is given by
$$[i\gamma \cdot \partial -(m_q-g_\sigma ^q\sigma +f_s^q\langle \overline{\psi }_q\psi _q\rangle )-(g_\omega ^q\omega +f_v^q\langle \psi _q^{\dagger }\psi _q\rangle )\gamma _0]\psi _q=0,$$
(2)
where $`m_q`$ is the bare quark mass, $`\sigma `$ and $`\omega `$ are the mean-field values of the $`\sigma `$ and $`\omega `$ mesons and $`g_\sigma ^q`$ and $`g_\omega ^q`$ are, respectively, the $`\sigma `$\- and $`\omega `$-quark coupling constants in the usual QMC model . The new coupling constants, $`f_{s(v)}^q`$, have been introduced for the scalar (vector)-type short-range correlations, and are given by (see Eq.(1))
$$f_{s(v)}^q=\frac{\overline{f}_{s(v)}^q}{M^2}\times \left(1-\frac{3y}{4}+\frac{y^3}{16}\right)\theta (y)\theta (2-y).$$
(3)
We have also taken $`y=d/R`$, with $`d`$ the average distance between two neighbouring nucleons at a given nuclear density $`\rho _B`$ – i.e., as explained above, $`d=2R_c=2/(4\sqrt{2}\rho _B)^{1/3}`$. Note that since the coupling constants have dimension of (energy)<sup>-2</sup>, we introduce new, dimensionless coupling constants, $`\overline{f}_s^q`$ and $`\overline{f}_v^q`$, by dividing by the free nucleon mass ($`M=939`$ MeV) squared. (If the coupling strength is positive the correlation gives a repulsive force.) In Eq.(2), $`\langle \overline{\psi }_q\psi _q\rangle `$ and $`\langle \psi _q^{\dagger }\psi _q\rangle `$ are, respectively, the average values of the quark scalar density and quark density with respect to the nuclear ground state, which are approximately given by the values at the center of the nucleon in the local density approximation (we will revisit this later).
Now we can solve the Dirac equation, Eq.(2), as in the usual QMC, with the effective quark mass
$$m_q^{*}=m_q-g_\sigma ^q\sigma +f_s^q\langle \overline{\psi }_q\psi _q\rangle ,$$
(4)
instead of the bare quark mass. The Lorentz vector interaction shifts the nucleon energy in the medium:
$$ϵ(\vec{k})=\sqrt{M^{*2}+\vec{k}^2}+3(g_\omega ^q\omega +f_v^q\langle \psi _q^{\dagger }\psi _q\rangle ),$$
(5)
where $`M^{*}`$ is the effective nucleon mass, which is given by the usual bag energy
$$M^{*}=\frac{3\mathrm{\Omega }-z}{R}+\frac{4}{3}\pi BR^3.$$
(6)
Here $`B`$ and $`z`$ are respectively the bag constant and the usual parameter which accounts for zero-point motion and gluon fluctuations . The quark energy, $`\mathrm{\Omega }`$ (in units of $`1/R`$), is defined by $`\sqrt{x^2+(Rm_q^{*})^2}`$, where $`x`$ is the lowest eigenvalue of the quark, given by the usual boundary condition at the bag surface .
The total energy per nucleon at density $`\rho _B`$ is then expressed as
$$E_{tot}=\frac{4}{(2\pi )^3\rho _B}\int ^{k_F}d\vec{k}\,\sqrt{M^{*2}+\vec{k}^2}+3(g_\omega ^q\omega +f_v^q\langle \psi _q^{\dagger }\psi _q\rangle )+\frac{1}{2\rho _B}(m_\sigma ^2\sigma ^2-m_\omega ^2\omega ^2),$$ (7)
where $`k_F`$ is the Fermi momentum, and $`m_\sigma `$ and $`m_\omega `$ are respectively the $`\sigma `$ and $`\omega `$ meson masses. The $`\omega `$ field created by the uniformly distributed nucleons is determined by baryon number conservation: $`\omega =3g_\omega ^q\rho _B/m_\omega ^2=g_\omega \rho _B/m_\omega ^2`$ (where $`g_\omega =3g_\omega ^q`$), while the $`\sigma `$ field is given by the thermodynamic condition: $`(\partial E_{tot}/\partial \sigma )=0`$. This gives the self-consistency condition (SCC) for the $`\sigma `$ field :
$$\sigma =-\frac{4}{(2\pi )^3m_\sigma ^2}\left(\frac{\partial M^{*}}{\partial \sigma }\right)\int ^{k_F}d\vec{k}\,\frac{M^{*}}{\sqrt{M^{*2}+\vec{k}^2}},$$
(8)
where
$$\left(\frac{\partial M^{*}}{\partial \sigma }\right)=-3g_\sigma ^qS_N(\sigma )=-g_\sigma C_N(\sigma ).$$
(9)
Here $`g_\sigma =3g_\sigma ^qS_N(0)`$ and $`C_N(\sigma )=S_N(\sigma )/S_N(0)`$, with the quark scalar charge defined by $`S_N(\sigma )=\int _{bag}d\vec{r}\,\overline{\psi }_q\psi _q`$. We should note that, because the scalar-type correlation does not directly involve the $`\sigma `$ field, the SCC is not modified by it. However, the correlations do affect the $`\sigma `$ field through the quark wave function.
In actual calculations, the quark density $`\langle \psi _q^{\dagger }\psi _q\rangle `$ in the total energy, Eq.(7), may be replaced by $`3\rho _B`$, and the quark scalar density contributing to the effective quark mass $`m_q^{*}`$ is approximately given by $`\langle \overline{\psi }_q\psi _q\rangle =(m_\sigma ^2/g_\sigma )\sigma `$ because of the SCC (see also Ref.).
Now we present the numerical results. First, we choose $`m_q`$ = 5 MeV and the bag radius of the nucleon in free space, $`R_0`$, to be 0.8 fm. We calculate the matter properties using not only QMC but also MQMC. For the latter we take a simple variation of the bag constant in the medium to illustrate the role of the short-range correlations: $`(B/B_0)^{1/4}=\mathrm{exp}(-g_\sigma ^B\sigma /M)`$ with $`g_\sigma ^B`$ = 2.8 and the bag constant in free space, $`B_0`$. In both models, the bag constant (in free space) and the $`z`$ parameter are determined to fit the free nucleon mass with $`R_0`$ = 0.8 fm. We find $`B_0^{1/4}`$ = 170.0 MeV (in QMC, $`B=B_0`$ at all densities) and $`z`$ = 3.295. The coupling constants, $`g_\sigma `$ and $`g_\omega `$, are determined so as to reproduce the binding energy ($`-15.7`$ MeV) at the saturation density ($`\rho _0`$ = 0.15 fm<sup>-3</sup>). We find that $`g_\sigma ^2`$ = 67.80 and $`g_\omega ^2`$ = 66.71 for QMC, and $`g_\sigma ^2`$ = 35.69 and $`g_\omega ^2`$ = 80.68 for MQMC. Note that the matter properties at $`\rho _0`$ in both models with short-range correlations are identical to those of the original models . This is because, in our simple geometric approach, the effect of nucleon overlap sets in only beyond $`\rho _0`$ (see below).
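As a cross-check of this fit, the two free-space conditions $`M(R_0)=939`$ MeV and $`dM/dR|_{R_0}=0`$ can be solved numerically. The following rough Python sketch (our notation) assumes the standard lowest-mode bag boundary condition $`j_0(x)=\beta _qj_1(x)`$ with $`\beta _q=\sqrt{(\mathrm{\Omega }-m_qR)/(\mathrm{\Omega }+m_qR)}`$:

```python
import numpy as np
from scipy.optimize import brentq, fsolve

HBARC = 197.327                 # MeV fm
MQ, R0, MN = 5.0, 0.8, 939.0    # quark mass (MeV), free radius (fm), nucleon mass (MeV)

def lowest_x(mR):
    """Lowest bag eigenvalue x from j0(x) = beta * j1(x)."""
    def f(x):
        om = np.sqrt(x**2 + mR**2)
        beta = np.sqrt((om - mR) / (om + mR))
        return np.sin(x)/x - beta*(np.sin(x)/x**2 - np.cos(x)/x)
    return brentq(f, 1.5, 2.5)   # the massless-quark value is x ~ 2.04

def bag_mass(R, B, z):
    """M* of Eq. (6): (3*Omega - z)/R + (4/3) pi B R^3; B in MeV^4, R in fm."""
    mR = MQ * R / HBARC
    om = np.sqrt(lowest_x(mR)**2 + mR**2)   # quark energy in units of 1/R
    return (3.0*om - z)*HBARC/R + (4.0*np.pi/3.0)*B*R**3/HBARC**3

def conditions(p):
    B14, z = p
    dR = 1e-4
    return [bag_mass(R0, B14**4, z) - MN,          # fit M = 939 MeV
            bag_mass(R0 + dR, B14**4, z)
            - bag_mass(R0 - dR, B14**4, z)]        # stationarity, dM/dR = 0

B14, z = fsolve(conditions, [170.0, 3.3])
print(B14, z)   # should land close to the quoted 170.0 MeV and 3.295
```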
In Figs. 1 and 2, we present the total energies per nucleon for QMC and MQMC, respectively. We determine the coupling constant, $`\overline{f}_v^q`$, so as to reproduce the empirical value of the energy in the region $`\rho _B/\rho _0\approx 2.5`$–4 . This yields the value $`\overline{f}_v^q`$ = 300 for QMCs and $`\overline{f}_v^q`$ = 10 for MQMCs. (Note that in MQMCs the overlap probability is much larger than that in QMCs at the same density, because the bag radius in MQMC increases very rapidly at finite density. This is the reason why the strength of $`\overline{f}_v^q`$ for MQMCs is much weaker than in QMCs.) For the strength of the scalar-type correlation, since we have no definite guideline, we take the same value in both models: $`\overline{f}_s^q`$ = 200 (as an illustration). From the figures we can see that the nucleon overlap starts around $`\rho _B/\rho _0\approx 2.7`$ or 1.3 for QMCs and MQMCs, respectively. The empirical energies at high densities (the region enclosed by the dotted curves in Figs. 1 and 2) are well reproduced in both models (in particular, in MQMCs) once the short-range correlations are included.
In Fig.3, we show the change of the nucleon mass in matter. We can see that the effect of the short-range correlations on the mass is not strong. Correspondingly, the strength of the $`\sigma `$ field in matter is not much altered by the correlations.
In Fig.4, we present the variation of the quark eigenvalue. In QMCs the effect of the short-range correlations is weak, while in MQMCs the effect becomes very large as the nuclear density increases and the nucleons overlap more and more. Since we chose a repulsive scalar-type correlation, the effective quark mass in QMCs or MQMCs becomes larger than that in the original model as the density grows. This leads to a larger eigenvalue in QMCs or MQMCs. As a consequence of this repulsive correlation, a solution for the quark eigenvalue in MQMCs can be found even beyond $`\rho _B/\rho _0\approx 3`$.
Turning next to the size of the nucleon itself, as measured by the bag radius, we see in Fig.5 that the effect of the short-range correlations can be very significant. While the effect is small in QMCs, in MQMCs the bag radius starts to shrink as soon as the nucleons begin to overlap. We find a similar effect on the root mean square radius of the quark wave function. In the original MQMC it is well known that there is a serious problem concerning the bag radius. In particular, it grows rapidly at high density because of the decrease of the bag constant. However, as we can see from the figure, the inclusion of a repulsive (scalar-type) short-range correlation yields a remarkable improvement for the in-medium nucleon size in MQMC.
In summary, we have studied (in mean-field approximation) the effect of short-range quark-quark correlations associated with nucleon overlap. We have found that the empirical equation of state at high density can be very well reproduced using a repulsive vector-type correlation. Furthermore, we have shown that a repulsive scalar-type correlation can counteract the tendency for the in-medium nucleon size to increase in MQMC. This may prove to be a significant improvement, because there are fairly strong experimental constraints on the possible increase in nucleon size in-medium . While our inclusion of correlations has been based on quite simple, geometrical considerations, in the future we hope to formulate the problem in a more sophisticated, dynamical way and to use it to study the properties of finite nuclei (including hypernuclei ).
This work was supported by the Australian Research Council and the Japan Society for the Promotion of Science.
# Disc and secondary irradiation in dwarf and X-ray novae
## Abstract
I present a short review of irradiation processes in close binary systems.
## 1. Introduction
Thermal-viscous instability is widely accepted as the origin of Dwarf Nova and Soft X-ray Transient (X-ray Nova) outbursts. In the ‘standard’ model, one assumes a constant mass-transfer rate, no effects of irradiation are taken into account, and the accretion disc is supposed to extend down to the surface of the accreting body (or to the last stable orbit in the case of accreting black holes and compact neutron stars). This version of the model, however, explains neither all the observed outburst types nor all the observed outburst properties. Generalizations of the ‘standard’ model which take into account some or all of these neglected, but obviously important, effects have been proposed and studied in various investigations. I will briefly discuss some recent results concerning irradiation (Hameury, Lasota & Huré 1997; Dubus et al. 1999, hereafter DLHC; King 1997; King & Ritter 1998).
Since, in the literature on the subject, most articles use incorrect equations to describe irradiated accretion discs, I begin by repeating the simple derivation of the basic equations presented in DLHC.
## 2. A simple introduction to the vertical structure of irradiated discs
In accretion discs the vertical energy conservation equation has the form:
$$\frac{dF}{dz}=Q_{\mathrm{vis}}(R,z)$$
(1)
where $`F`$ is the vertical (in the $`z`$ direction) radiative flux and $`Q_{\mathrm{vis}}(R,z)`$ is the viscous heating rate per unit volume. Eq. (1) states that an accretion disc is not in radiative equilibrium, contrary to a stellar atmosphere. Using the “$`\alpha `$-viscosity” prescription (Shakura & Sunyaev 1973), $`\nu =(2/3)\alpha c_\mathrm{s}^2/\mathrm{\Omega }_\mathrm{K}`$, where $`\alpha `$ is the viscosity parameter ($`\lesssim 1`$), $`\mathrm{\Omega }_\mathrm{K}`$ is the Keplerian angular frequency, $`c_\mathrm{s}=\sqrt{P/\rho }`$ is the sound speed, $`\rho `$ the density, and $`P`$ the pressure, one can write
$$Q_{\mathrm{vis}}(R,z)=(3/2)\alpha \mathrm{\Omega }_\mathrm{K}P$$
(2)
Viscous heating of this form has important implications for the structure of the optically thin layers of accretion discs and may lead to the creation of coronae and winds (Shaviv & Wehrse 1986; 1991). Here, however, we are interested in the effects of irradiation on the inner structure of an optically thick disc, so our results should not depend on the precise form of the viscous heating. We neglect, however, the possible presence of an X-ray-irradiation-generated corona and wind, as described by Idan & Shaviv (1996).
When integrated over $`z`$, the rhs of Eq. (1) is equal to viscous dissipation per unit surface:
$$F_{\mathrm{vis}}=\frac{3}{2}\alpha \mathrm{\Omega }_\mathrm{K}\int _0^{+\infty }P\,dz,$$
(3)
which is close, but not exactly equal, to the surface heating term $`(9/8)\nu \mathrm{\Sigma }\mathrm{\Omega }_\mathrm{K}^2`$ generally used in the literature. The difference between the two expressions may matter in numerical calculations (Hameury et al. 1998) but is of no consequence in the present context. One can rewrite Eq. (1) as
$$\frac{dF}{d\tau }=-f(\tau )\frac{F_{\mathrm{vis}}}{\tau _\mathrm{T}}$$
(4)
where we have introduced a new variable, the optical depth $`d\tau =\kappa _\mathrm{R}\rho dz`$, $`\kappa _\mathrm{R}`$ being the Rosseland mean opacity, and $`\tau _\mathrm{T}=\int _0^{+\infty }\kappa _\mathrm{R}\rho \,dz`$ is the total optical depth. As shown in DLHC, setting $`f(\tau )=1`$ is a good approximation.
At the disc midplane, by symmetry, the flux must vanish: $`F(\tau _\mathrm{T})=0`$, whereas at the surface, ($`\tau =0`$)
$$F(0)\equiv \sigma T_{\mathrm{eff}}^4=F_{\mathrm{vis}}$$
(5)
Equation (5) states that the total flux at the surface is equal to the energy dissipated by viscosity (per unit time and unit surface). The solution of Eq. (4) is thus
$$F(\tau )=F_{\mathrm{vis}}\left(1-\frac{\tau }{\tau _\mathrm{T}}\right)$$
(6)
with $`\tau _\mathrm{T}`$ the total optical depth, as defined above.
To obtain the temperature stratification one has to solve the transfer equation. Here we use the diffusion approximation
$$F(\tau )=\frac{4}{3}\sigma \frac{dT^4}{d\tau },$$
(7)
appropriate for the optically thick discs we are dealing with. The integration of Eq. (7) is straightforward and gives:
$$T^4(\tau )-T^4(0)=\frac{3}{4}\tau \left(1-\frac{\tau }{2\tau _\mathrm{T}}\right)T_{\mathrm{eff}}^4$$
(8)
The upper (surface) boundary condition is:
$$T^4(0)=\frac{1}{2}T_{\mathrm{eff}}^4+T_{\mathrm{irr}}^4$$
(9)
where $`T_{\mathrm{irr}}^4`$ is the irradiation temperature, which depends on $`R`$, the albedo, the height at which the energy is deposited, and on the shape of the disc (see Eq. 13). In Eq. (9) $`T(0)`$ corresponds to the emergent flux and, as mentioned above, $`T_{\mathrm{eff}}`$ corresponds to the total flux, hence the factor 1/2 in front of $`T_{\mathrm{eff}}^4`$. The temperature stratification is thus:
$$T^4(\tau )=\frac{3}{4}T_{\mathrm{eff}}^4\left[\tau \left(1-\frac{\tau }{2\tau _\mathrm{T}}\right)+\frac{2}{3}\right]+T_{\mathrm{irr}}^4.$$
(10)
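One can verify by direct substitution that the stratification of Eq. (10) satisfies the diffusion relation Eq. (7) with the flux of Eq. (6) and the surface condition Eq. (9); a minimal sympy sketch (our variable names):

```python
import sympy as sp

tau, tauT, Teff4, Tirr4, sigma = sp.symbols('tau tau_T Teff4 Tirr4 sigma', positive=True)

T4 = sp.Rational(3, 4)*Teff4*(tau*(1 - tau/(2*tauT)) + sp.Rational(2, 3)) + Tirr4  # Eq. (10)
F  = sigma*Teff4*(1 - tau/tauT)                        # Eq. (6), with F_vis = sigma*Teff^4

print(sp.simplify(F - sp.Rational(4, 3)*sigma*sp.diff(T4, tau)))   # Eq. (7): -> 0
print(sp.simplify(T4.subs(tau, 0) - (Teff4/2 + Tirr4)))            # Eq. (9): -> 0
print(sp.expand(T4.subs(tau, tauT)))   # midplane: (3/8)*tau_T*Teff4 + Teff4/2 + Tirr4,
                                       # i.e. Eq. (11) below once tau_T >> 1
```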
For $`\tau _\mathrm{T}\gg 1`$, the temperature at the disc midplane is
$$T_\mathrm{c}^4\equiv T^4(\tau _\mathrm{T})=\frac{3}{8}\tau _\mathrm{T}T_{\mathrm{eff}}^4+T_{\mathrm{irr}}^4$$
(11)
It is clear, therefore, that for the disc inner structure to be dominated by irradiation and the disc to be isothermal one must have
$$\frac{F_{\mathrm{irr}}}{\tau _\mathrm{T}}\equiv \frac{\sigma T_{\mathrm{irr}}^4}{\tau _\mathrm{T}}\gg F_{\mathrm{vis}}$$
(12)
and not just $`F_{\mathrm{irr}}\gg F_{\mathrm{vis}}`$, as is usually assumed. The difference between the two criteria is important in low-mass X-ray binaries since, for parameters of interest, $`\tau _\mathrm{T}\simeq 10^2`$–$`10^3`$ in the outer disc regions.
The effect of disc irradiation is illustrated in Figs. 1 & 2 (DLHC). Fig. 1 shows the vertical structure of a ring forming part of an unilluminated accretion disc. This ring is on the lower, cool branch of the S-curve, and the energy transport is dominated by convection. The surface temperature is equal to the effective temperature given by viscous dissipation (see Eq. 5). The vertical structure of an irradiated disc is shown in Fig. 2. Although the irradiating flux is 20 times larger than the viscous flux, the disc is not isothermal. Since in the non-irradiated disc $`T_\mathrm{c}\approx 14500`$ K and the surface temperature is $`T_\mathrm{s}\approx 5700`$ K, the optical depth is $`\tau _\mathrm{T}\approx 100`$, and obviously an irradiation temperature higher than 14500 K would be required to make the disc isothermal. Note that irradiation suppresses convection: the irradiated disc is purely radiative.
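A back-of-the-envelope check of this statement, using the numbers just quoted (everything in units of $`T_{\mathrm{eff}}^4`$):

```python
# Fig. 2 parameters quoted above: F_irr = 20 F_vis and tau_T ~ 100
irr_term = 20.0                 # T_irr^4 / T_eff^4, since sigma*T_irr^4 = F_irr
viscous_term = 3.0/8.0 * 100.0  # (3/8) tau_T, the viscous part of Eq. (11)
print(viscous_term, irr_term)   # 37.5 vs 20: the viscous term still dominates the
                                # midplane temperature, so the disc is not isothermal
```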
## 3. Can the outer disc ‘see’ a point source located at the midplane?
The answer is ‘no’ (see e.g. DLHC). The reason is that, contrary to the frequently made assumption, a ‘standard’ accretion disc is convex rather than concave. Images showing flaring discs have nothing to do with reality, at least with the reality described by the standard model of planar discs.
For a point source, the irradiation temperature can be written as
$$T_{\mathrm{irr}}^4=\frac{\eta \dot{M}c^2(1-\epsilon )}{4\pi \sigma R^2}\frac{H_{\mathrm{irr}}}{R}\left(\frac{d\mathrm{ln}H_{\mathrm{irr}}}{d\mathrm{ln}R}-1\right)$$
(13)
where $`\eta `$ is the efficiency of converting accretion power into X-rays, $`\epsilon `$ is the X-ray albedo and $`H_{\mathrm{irr}}`$ is the local height at which irradiation energy is deposited, or the height of the disc “as seen by X-rays”. We use here $`H_{\mathrm{irr}}`$ and not $`H`$, the local pressure scale-height, as is usually written in the literature, because, in general, $`H_{\mathrm{irr}}\ne H`$.
Eq. (13) is usually used (at least in the recent, abundant publications on the subject) with ‘typical’ values of $`\epsilon >0.9`$ and $`H/R=0.2`$ (no distinction is made between $`H_{\mathrm{irr}}`$ and $`H`$; the fact that there is no reason for the photosphere to lie at one (isothermal) scale-height seems to be largely ignored). These values are supposed to be given by ‘observations’. However, when one reads the articles quoted in support of this assertion (Vrtilek et al. 1990; de Jong et al. 1996), one finds nothing of the kind there. These papers assume that an irradiated disc is isothermal. de Jong et al. (1996) fit light-curves with the isothermal model of Vrtilek et al. (1990). Moreover, since de Jong et al. (1996) model lightcurves of neutron-star binaries, the value $`H/R=0.2`$ cannot be applied to black-hole binaries (especially if $`H`$ were the pressure scale-height): since black holes are more massive than neutron stars, $`H/R`$ should in their case be smaller at a given radius (let me add, for the benefit of some readers: this is because gravity is then stronger), as seen in Figs. 6 & 7 of DLHC. In any case, for $`H/R=0.2`$ the vertical hydrostatic equilibrium equation would imply high temperatures at the disc’s outer rim:
$$T\simeq 8\times 10^7\left(\frac{M}{M_{\odot }}\right)\left(\frac{R}{10^{10}\mathrm{cm}}\right)^{-1}\left(\frac{H}{R}\right)^2\mathrm{K},$$
(14)
which is clearly contradicted by observations.
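The coefficient in Eq. (14) follows from vertical hydrostatic equilibrium, $`T\simeq (H/R)^2GM\mu m_\mathrm{p}/(kR)`$; a quick check in cgs units (the mean molecular weight $`\mu \simeq 0.6`$ is our illustrative assumption):

```python
G, k, mp, Msun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33   # cgs constants
M, R, mu = Msun, 1e10, 0.6      # mu ~ 0.6 assumed for ionized solar-composition gas

coeff = mu * G * M * mp / (k * R)   # temperature scale for H/R = 1
print(coeff)            # ~1e8 K, i.e. the ~8e7 K coefficient of Eq. (14)
print(coeff * 0.2**2)   # ~4e6 K at H/R = 0.2, far hotter than observed outer discs
```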
When one calculates self-consistent models of irradiated discs (in the sense that $`H_{\mathrm{irr}}`$ in Eq. (13) is calculated, not assumed), one sees that the outer disc regions whose structure could be affected by irradiation are hidden in the shadow of the convex disc. This ‘self-screening’ results from the same physical process that is at the origin of the dwarf-nova instability: a dramatic change of opacities due to hydrogen recombination. Therefore, although irradiation could stabilize the disc, the unstable disc regions (see Fig. 3) cannot be irradiated by a point source located at the disc mid-plane. Models invoking the stabilizing effects of irradiation (van Paradijs 1996; King & Ritter 1998) therefore have to be revised (DLHC). Since the outer disc regions in low-mass X-ray binaries are clearly irradiated (van Paradijs & McClintock 1995), model revisions must concern the disc-irradiating source geometry. To represent a geometry allowing the disc to see the irradiating source, DLHC assumed that
$$T_{\mathrm{irr}}^4=𝒞\frac{\dot{M}c^2}{4\pi \sigma R^2}$$
(15)
and calculated the irradiated disc structure. The results are shown in Fig. 4. The continuous line represents both a non-irradiated disc and a disc irradiated according to Eq. (13) with a self-consistently calculated $`H_{\mathrm{irr}}`$: there is no difference between the two cases.
## 4. Can irradiation by a hot white dwarf explain the UV–delay in dwarf novae?
As shown by Hameury, Lasota & Dubus (1999), the answer to this question, asked by King (1997), is ‘no’. Irradiation of the disc by the hot white dwarf may, however, be important in a different context (see Sect. 6).
## 5. Can the secondary in a dwarf-nova system be irradiated during outbursts?
The answer is ‘yes’, at least for SS Cyg (Hessman et al. 1984) and WZ Sge (Smak 1993). In these systems it was observed that the secondary’s hemisphere facing the accreting white dwarf was heated, during outburst, to 16,000–17,000 K, which for WZ Sge implies an irradiating flux $`\approx 5000`$ times larger than the star’s intrinsic flux. It is hard to believe that the companion does not increase its mass-transfer rate under such conditions. WZ Sge is a very special system anyway (Lasota, Kuulkers & Charles 1999).
## 6. Can irradiation of the disc and of the secondary determine outburst properties?
Warner (1998) suggested that the outburst properties of SU UMa stars could be explained by the effects of irradiation of both the accretion disc and the secondary. Preliminary results by Hameury & Lasota (1999, in preparation; see also Hameury, these proceedings) seem to confirm that the properties of SU UMa (and ER UMa) stars may be explained in this way.
## Acknowledgement
This article was written during a visit at the Weizmann Institute. I thank Moti Milgrom and the Department of Condensed Matter for hospitality and the Einstein Center for support.
## 7. References
1. de Jong J.A., van Paradijs J., Augusteijn T. 1996, A&A, 314, 484
2. Dubus, G., Lasota, J.-P., Hameury, J.-M., Charles, Ph. 1999, MNRAS, in press
3. Hameury J.-M., Lasota J.-P., Huré J.-M. 1997, MNRAS, 287, 937
4. Hameury J.-M., Lasota J.-P., Dubus G. 1999, MNRAS, in press
5. Hameury J.-M., Menou K., Dubus G., Lasota J.-P., Huré J.-M. 1998, MNRAS, 298, 1048
6. Hessman F.V., Robinson E.L., Nather R.E., Zhang E.-H. 1984, ApJ, 286, 747
7. Idan I., Shaviv G. 1996, MNRAS, 281, 604
8. King A.R. 1997, MNRAS, 288, L16
9. King A.R., Ritter H. 1998, MNRAS, 293, 42
10. Lasota J.-P., Kuulkers, E., Charles, Ph. 1998, MNRAS, submitted
11. van Paradijs J. 1996, ApJ, 464, L139
12. van Paradijs J., McClintock J.E. 1995, in X-ray Binaries, eds. Lewin W.H.G., van Paradijs J., van den Heuvel E.P.J., (Cambridge University Press, Cambridge) p. 58
13. Shakura N.I., Sunyaev R.A. 1973, A&A, 24, 337
14. Shaviv G., Wehrse R. 1986, A&A, 159, L5
15. Shaviv G., Wehrse R. 1991, in Theory of Accretion Disks, eds. Meyer F., Duschl W.J., Frank J., Meyer-Hofmeister E., (Kluwer, Dordrecht) p. 419
16. Smak J. 1993, Acta Astron., 43, 101
17. Tuchman Y., Mineshige S., Wheeler J.C. 1990, ApJ, 359, 164
18. Vrtilek S.D. et al. 1990, A&A, 235, 165
19. Warner B. 1998, in Wild Stars in the Old West: Proceedings of the 13th North American Workshop on Cataclysmic Variables and Related Objects, eds. S. Howell, E. Kuulkers & C. Woodward, (ASP Conf. Ser. 137) p. 2
# OVERVIEW FROM LATTICE QCD
## 1 Introduction
The lattice approach to QCD facilitates the numerical evaluation of expectation values without recourse to perturbative techniques. Although the lattice formulation is almost as old as QCD itself and first simulations of the path integral were performed as early as 1979 , only recently have computers become powerful enough to allow for a determination of the infinite-volume continuum light hadron spectrum in the quenched approximation to QCD within uncertainties of a few per cent. To this accuracy the quenched spectrum deviates from experiment. Some collaborations have started to systematically explore QCD with two flavours of light sea quarks, and the first precision results indeed indicate such differences.
Lattice QCD is a first-principles approach; no parameters apart from those that are inherent to QCD, i.e. a strong coupling constant at a certain scale and $`n`$ quark masses, have to be introduced. In order to fit these $`n+1`$ parameters, $`n+1`$ low-energy quantities are matched to their experimental values: the lattice spacing $`a(g,m_i)`$, which results from given values of the bare lattice coupling $`g`$ and (in un-quenched QCD) quark masses $`m_i`$, can be obtained by fixing $`m_\rho `$ as determined on the Lattice to the experimental value. The lattice parameters that correspond to physical $`m_u\simeq m_d`$ can then be obtained by adjusting $`m_\pi /m_\rho `$; the right $`m_s`$ can be reproduced by adjusting $`m_K/m_\rho `$ or $`m_\varphi /m_\rho `$ to experiment etc.. Once the scale and quark masses have been set, everything else becomes a prediction. Due to the evaluation of path integrals by use of a stochastic process, lattice predictions carry statistical errors which can in principle be made arbitrarily small by increasing the statistics, i.e. the amount of computer time spent. In this sense, it is an exact approach.
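As a minimal illustration of this scale-setting step (the input value below is purely illustrative and not taken from any simulation quoted here): a run at given bare parameters returns the $`\rho `$ mass in lattice units, $`am_\rho `$, and matching it to the physical mass fixes $`a`$.

```python
HBARC = 197.327   # MeV fm

def lattice_spacing(am_rho, m_rho=770.0):
    """Scale setting: a simulation yields a*m_rho in lattice units;
    matching to the physical rho mass (in MeV) gives a in fm."""
    return am_rho * HBARC / m_rho

print(lattice_spacing(0.40))   # illustrative a*m_rho = 0.40  ->  a ~ 0.10 fm
```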
Lattice results have in general to be extrapolated to the (continuum) limit $`a0`$ at fixed physical volume. The functional form of this extrapolation is theoretically well understood and under control. This claim is substantiated by the fact that simulations with different lattice discretisations of the continuum QCD action yield compatible results after the continuum extrapolation. For high energies, an overlap between certain quenched Lattice computations and perturbative QCD has been confirmed too , excluding the possibility of fixed points of the $`\beta `$-function at finite values of the coupling, other than $`g=0`$. After taking the continuum limit an infinite volume extrapolation is performed. Results on hadron masses from quenched evaluations on lattices with spatial extent $`La>2`$ fm are virtually indistinguishable from the infinite volume limit within typical statistical errors in most cases. However, for QCD with sea quarks the available information is not yet sufficient for definite conclusions, in particular as one might expect a substantial dependence of the on-set of finite size effects on the sea quark mass(es).
The effective infinite volume limit of realistically light pions cannot be realised at a reasonable computational cost, neither in quenched nor in full QCD. Therefore, in practice another extrapolation, in the quark mass, is required. This extrapolation to the correct light quark mass limit is theoretically less well under control than those to the continuum and infinite volume limits. The parametrisations used are in general motivated by chiral perturbation theory and the theoretical uncertainties are the dominant source of error in latest state-of-the-art spectrum calculations.
Many important questions are posed in low-energy QCD: is the same set of fundamental parameters (QCD coupling and quark masses) that describes for instance the hadron spectrum consistent with high energy QCD or is there place for new physics? Are all hadronic states correctly classified by the naïve quark model or do glueballs, hybrid states and molecules play a rôle? At what temperatures/densities does the transition to a quark-gluon plasma occur? What are the experimental signatures of quark-gluon matter? Can we solve nuclear physics on the quark and gluon level? Clearly, complex systems like iron nuclei are unlikely ever to be solved from first principles alone but modelling and certain approximations will always be required.
It is desirable to test model assumptions, to gain control over approximations and, eventually, to derive low-energy effective Lagrangians from QCD. Lattice simulations are a very promising tool for this purpose and in the first part of this article I will try to give a flavour of such more theoretically motivated studies before reviewing recent results on glueballs and exotic hybrid mesons as well as discussing the light hadron spectrum.
## 2 The confinement scenario
Two prominent features of QCD, confinement of colour sources and spontaneous breaking of chiral symmetry, are both lacking a proof. They appear to be related, however: in the low temperature phase of broken chiral symmetry, colour sources are effectively confined. Clearly, an understanding of what is going on should help us in developing the methods required to tackle a huge class of non-perturbative problems.
It is worthwhile to consider the simpler pure $`SU(N)`$ gauge theory. In this case confinement can be rigorously defined since the Polyakov line is an order parameter for the de-confinement phase transition that is related to spontaneously breaking a global $`Z_N`$ symmetry. I will present some results that have been obtained in the computationally cheaper $`SU(2)`$ gauge theory whose spectrum shares most qualitative features with that of $`SU(3)`$.
In the past decades, many explanations of the confinement mechanism have been proposed, most of which share the feature that topological excitations of the vacuum play a major rôle. These pictures include, among others, the dual superconductor picture of confinement and the centre vortex model . Depending on the underlying scenario, the excitations giving rise to confinement are thought to be magnetic monopoles, instantons, dyons, centre vortices, etc.. Different ideas are not necessarily exclusive. For instance, all mentioned excitations are found to be correlated with each other in numerical as well as in some analytical studies, such that at present it seems to be rather a matter of personal preference which one to consider as more fundamental.
Recently, the centre vortex model has enjoyed renewed attention . In this picture, excitations that can be classified in accord with the centre group provide the disorder required to produce an area law of the Wegner-Wilson loop and, therefore, confinement. One striking feature is that — unlike monopole currents — centre vortices form gauge invariant two-dimensional objects, such that in four space-time dimensions, a linking number between a Wegner-Wilson loop and centre vortices can unambiguously be defined, providing a geometric interpretation of the confinement mechanism .
I will restrict myself to discussing the superconductor picture, which is based on the concept of electro-magnetic duality after an Abelian gauge projection and was originally proposed by ’t Hooft and Mandelstam . The QCD vacuum is thought to behave analogously to an electrodynamic superconductor but with the rôles of electric and magnetic fields being interchanged: a condensate of magnetic monopoles expels electric fields from the vacuum. If one now puts electric charge and anti-charge into this medium, the electric flux that forms between them will be squeezed into a thin, eventually string-like, Abrikosov-Nielsen-Olesen vortex, which results in linear confinement.
In all quantum field theories in which confinement has been proven, namely in compact $`U(1)`$ gauge theory, the Georgi-Glashow model and SUSY Yang-Mills theories, this scenario is indeed realised. However, before one can apply this simple picture to QCD or $`SU(N)`$ chromodynamics one has to identify the relevant dynamical variables: it is not straight forward to generalise the electro-magnetic duality of a $`U(1)`$ gauge theory to $`SU(N)`$ where gluons carry colour charges. How can one define electric fields and dual fields in a gauge invariant way?
In the Georgi-Glashow model, the $`SO(3)`$ gauge symmetry is broken down to a residual $`U(1)`$ symmetry as the vacuum expectation value of the Higgs field becomes finite. It is currently unknown whether QCD provides a similar mechanism and various reductions of the SU(N) symmetry have been conjectured. In this spirit, it has been proposed to identify the monopoles in a $`U(1)^{N-1}`$ Cartan subgroup of $`SU(N)`$ gauge theory after gauge fixing with respect to the off-diagonal $`SU(N)/U(1)^{N-1}`$ degrees of freedom. After such an Abelian gauge fixing QCD can be regarded as a theory of interacting photons, monopoles and matter fields (i.e. off-diagonal gluons and quarks). One might assume that the off-diagonal gluons do not affect long range interactions. This conjecture is known as Abelian dominance . Abelian as well as monopole dominance are qualitatively realised in Lattice studies of $`SU(2)`$ gauge theory in maximally Abelian (MA) gauge projection, which appears to be the most suitable gauge fixing condition.
In Figure 1, I display the electric field distribution between SU(2) quarks, separated by a distance $`r=15a\approx 1.2`$ fm, that has been obtained within the MA gauge projection. Everything is measured in lattice units $`a\approx 0.081`$ fm, where the physical scale derived from the value $`\sqrt{\kappa }=440`$ MeV for the string tension is intended to serve as a guide to what one might expect in “real” QCD. Indeed an elongated Abrikosov-Nielsen-Olesen vortex forms between the charges. In Fig. 2, I display a cross section through the centre plane of this vortex. While the electric field strength decreases with the distance from the core, the modulus of the dual Ginzburg-Landau (GL) wave function, $`f`$, i.e. the density of superconducting magnetic monopoles, decreases towards the centre of the vortex where superconductivity breaks down. In this study the values $`\lambda =0.15(2)`$ fm and $`\xi =0.25(3)`$ fm have been obtained for the penetration depth and GL coherence length, respectively. The ratio $`\lambda /\xi =0.59(13)<1/\sqrt{2}`$ corresponds to a (weak) type I superconductor, i.e. QCD flux tubes appear to attract each other. For details I refer the reader to Ref.
## 3 String breaking
In the pure gauge theory results presented above, the energy stored in the vortex between charges increases in proportion to their distance ad infinitum. In full QCD with sea quarks, however, the string will break into two parts as soon as this energy exceeds the energy required to create a quark-antiquark pair from the vacuum: inter-quark forces at large separation will be completely screened by sea quarks, and excited $`\mathrm{\Upsilon }`$ states can decay into a $`B\overline{B}`$ meson pair. In Fig. 3, I display a recent comparison between the quenched and $`n_f=2`$ static potential by the $`T\chi L`$ collaboration at a sea quark mass $`m_{ud}\approx m_s/3`$ . Estimates of masses of pairs of static-light mesons into which the static heavy-heavy systems can decay are also included in the figure. The potentials have been matched to each other at a distance $`r=r_0\approx 0.5`$ fm. In the presence of sea quarks anti-screening is weakened and, therefore, starting from the same infra-red value, the effective QCD coupling runs more slowly towards the $`\alpha _s=0`$ ultraviolet limit. This effect explains why at small $`r`$ the unquenched data points lie somewhat below their quenched counterparts: the effective Coulomb force remains stronger. Around $`r=1.2`$ fm, the un-quenched potential is expected to become unstable. However, the data are not yet precise enough to resolve this effect.
Motivated by such QCD simulations, the dynamics of string breaking has recently been analysed in some toy models . First results on interactions between two $`B`$ mesons in quenched QCD have been reported by Pennanen and Michael .
## 4 Glueballs and quark-gluon hybrid states
In Fig. 3, not only the ground state potential but also a so-called hybrid potential is displayed, in which the gluonic component contributes to the angular momentum. Recently, the spectrum of such potentials has been accurately determined by Juge, Kuti and Morningstar . The presence of gluons in bound states should also affect light meson and baryon spectra: one would expect additional excitations that cannot be classified in accord with the naïve constituent quark model. On the Lattice and in experiment it should be easiest to discriminate states with exotic, i.e. quark-model forbidden, quantum numbers from “standard” hadrons. Spin-exotic baryons cannot be constructed, but only mesons and glueballs. First results on light hybrid mesons have been reported by two groups . The lightest spin-exotic particle has quantum numbers $`J^{PC}=1^{-+}`$ and a mass between 1.8 and 2 GeV. Recent investigations incorporating sea quarks confirm these findings. However, at present all experimental candidates have masses smaller than 1.6 GeV. Therefore, in the interpretation of experiment, mixing between spin-exotic mesons and four-quark molecules, such as a $`\pi f_1`$, should be considered. It is certainly worthwhile to investigate this possibility on the Lattice too.
Recent results by Morningstar and Peardon have revolutionised our knowledge of the quenched glueball spectrum. Only in the case of the scalar glueball do they fail to reach the precision of the 1993 state-of-the-art Lattice predictions : finer lattices are required for a safer continuum-limit extrapolation. As can be seen from Fig. 4, the ordering of glueball states has become fairly well established. The fact that the lightest spin-exotic state, the $`2^{+-}`$, lies well above 4 GeV explains why such states have escaped observation so far. Glueballs are quite heavy and spatially rather extended, due to the lack of quarks to tie the flux together. Therefore, these states lend themselves to the use of anisotropic lattices: the size of the temporal lattice spacing is dictated by the heavy mass that one wishes to resolve, while a much coarser spatial spacing can be adapted to resolve the glueball wave function. Introducing this anisotropy was vital for the improvement achieved. Recent results in QCD with sea quarks on the scalar and tensor glueballs are consistent with the quenched findings . Beyond the quenched approximation, glueballs will mix with standard quark-model states. Investigations of such mixing and of the decay rates of the mixed states are challenging questions waiting to be approached by Lattice studies in the near future.
## 5 Light hadrons
In addition to quantities that theorists or experimentalists are interested in, well-known observables can be computed on the Lattice too. The motivation is two-fold: testing QCD and gauging the Lattice methodology. Experimental low energy input like the hadron spectrum is required in the first place to fix the lattice spacing and quark masses. Subsequently, among other predictions, the fundamental parameters $`\alpha _s`$ and quark masses can be converted to, for instance, the $`\overline{MS}`$ scheme that is convenient for perturbative continuum calculations. It is not a priori clear whether the low energy results are compatible with values required to explain high energy QCD phenomenology.
Assuming that QCD is the right theory, the observed states can serve as a guideline to judge the viability of approximations, such as quenching in the absence of high precision full QCD results. Last but not least, quark masses and other parameters can be varied and Lattice results can be confronted with predictions of, for instance, chiral perturbation theory. Indeed, evidence for quenched chiral logarithms has been reported .
In Fig. 5, I display results from a recent state-of-the-art calculation of the quenched light hadron spectrum by the CP-PACS collaboration . For comparison the results from the GF11 collaboration that have set the standard back in 1994 are included (squares). The $`\pi `$ and $`\rho `$ masses that have been used as input values for the lattice spacing $`a`$ and quark mass $`m_u=m_d`$ are omitted from the plot. $`m_s`$ has been set by two methods: forcing the $`K`$ mass to agree with experiment (full circles) and forcing the $`\varphi `$ mass to agree (open circles). Neither of the methods can bring the spectrum completely in line with experiment. However, no mass comes out to be wrong by more than 10 %, indicating that the main effect of sea quarks is to renormalise the over-all value of the coupling, rather than altering mass ratios, despite the fact that all particles displayed, with the exception of the nucleon, become unstable in full QCD. First un-quenched results by the same collaboration indicate an improvement in the direction of the experimental values. Many groups are at present studying quantities which one might expect to be more sensitive towards quenching like the $`\eta ^{}`$ mass, quark masses and the $`\pi N\sigma `$ term .
## Acknowledgements
I have received funding by DFG (grants Ba 1564/3-1 and Ba 1564/3-2). I thank Andrei Afanasjev, Branko Vlahovic, Dubravko Klabucar and Elio Soldi for organising this stimulating conference at Dubrovnik and hope that reviving the international tradition of this former Yugoslav science centre will serve as a starting point to overcome nationalism, separatism and sectarian violence.
# Universal Entropy Bound for Rotating Systems
## Abstract
We conjecture a universal upper bound to the entropy of a rotating system. The entropy bound follows from application of the generalized second law of thermodynamics to an idealized gedanken experiment in which an entropy-bearing rotating system falls into a black hole. This bound is stronger than the Bekenstein entropy bound for non-rotating systems.
One of the most intriguing features of both the classical and the quantum theory of black holes is the striking analogy between the laws of black-hole physics and the universal laws of thermodynamics . In particular, Hawking’s (classical) theorem , “The surface area of a black hole never decreases,” is a property reminiscent of the entropy of a closed system. This striking analogy led Bekenstein to conjecture that the area of a black hole (in suitable units) may be regarded as the black-hole entropy – entropy in the sense of information about the black-hole interior inaccessible to observers outside the black hole. This conjecture is logically related to a second conjecture, known as the generalized second law of thermodynamics (GSL): “The sum of the black-hole entropy (now known to be $`\frac{1}{4}`$ of the horizon’s surface area) and the common (ordinary) entropy in the black-hole exterior never decreases”.
The general belief in the validity of the ordinary second law of thermodynamics rests mainly on the repeated failure over the years of attempts to violate it. There currently exists no general proof of the law based on the known microscopic laws of physics. In the analog case of the GSL considerably less is known since the fundamental microscopic laws of physics, namely, the laws of quantum gravity are not yet known. Hence, one is forced to consider gedanken experiments in order to test the validity of the GSL. Such experiments are important since the validity of the GSL underlies the relationship between black-hole physics and thermodynamics. If the GSL is valid, then it is very plausible that the laws of black-hole physics are simply the ordinary laws of thermodynamics applied to a self-gravitating quantum system. This conclusion, if true, would provide a striking demonstration of the unity of physics. Thus, it is of considerable interest to test the validity of the GSL in various gedanken experiments.
In a classical context, a basic physical mechanism is known by which a violation of the GSL can be achieved: Consider a box filled with matter of proper energy $`E`$ and entropy $`S`$ which is dropped into a black hole. The energy delivered to the black hole can be arbitrarily red-shifted by letting the assimilation point approach the black-hole horizon. As shown by Bekenstein , if the box is deposited with no radial momentum a proper distance $`R`$ above the horizon, and then allowed to fall in such that
$$R<\frac{\hbar S}{2\pi E},$$
(1)
then the black-hole area increase (or equivalently, the increase in black-hole entropy) is not large enough to compensate for the decrease of $`S`$ in common (ordinary) entropy. Bekenstein has proposed a resolution of this apparent violation of the GSL which is based on the quantum nature of the matter dropped into the black hole. He has proposed the existence of a universal upper bound on the entropy $`S`$ of any system of total energy $`E`$ and maximal radius $`R`$ :
$$S\le 2\pi RE/\hbar .$$
(2)
It has been argued , and disputed that this restriction is necessary for enforcement of the GSL; the box’s entropy disappears but an increase in black-hole entropy occurs which ensures that the GSL is respected provided $`S`$ is bounded as in Eq. (2). Other derivations of the universal bound Eq. (2) which are based on black-hole physics have been given by Zaslavskii and by Li and Liu . Few pieces of evidence exist concerning the validity of the bound for self-gravitating systems . However, the universal bound Eq. (2) is known to be true independently of black-hole physics for a variety of systems in which gravity is negligible . In particular, Schiffer and Bekenstein had provided an analytic proof of the bound for free scalar, electromagnetic and massless spinor fields enclosed in boxes of arbitrary shape and topology.
In this paper we test the validity of the GSL in an idealized gedanken experiment in which an entropy-bearing rotating system falls into a stationary black hole. We argue that while the bound Eq. (2) may be a necessary condition for the fulfillment of the GSL, it may not be a sufficient one.
It is not difficult to see why a stronger upper bound must exist for the entropy $`S`$ of an arbitrary system with energy $`E`$, intrinsic angular momentum $`s`$ and (maximal) radius $`R`$: The gravitational spin-orbit interaction (the analog of the more familiar electromagnetic spin-orbit interaction) experienced by the spinning body (which, of course, was not relevant in the above mentioned gedanken experiment) can decrease the energy delivered to the black hole. This would decrease the change in black-hole entropy (area). Hence, the GSL will be violated unless the spinning-system entropy (what disappears from the black-hole exterior) is restricted by a bound stronger than Eq. (2).
Furthermore, there is one disturbing feature of the universal bound Eq. (2). As was pointed out by Bekenstein , Kerr black holes conform to the bound; however, only the Schwarzschild hole actually attains the bound. This uniqueness of the Schwarzschild black hole (in the sense that it is the only black hole which have the maximum entropy allowed by quantum theory and general relativity) among the electrically neutral Kerr-family solutions is somewhat disturbing. Clearly, the unity of physics demands a stronger bound for rotating systems in general, and for black holes in particular (see also ).
In fact, the plausible existence of an upper bound stronger than Eq. (2) on the entropy of a rotating system has nothing to do with black-hole physics. Classically, entropy is a measure of the phase space available to the system in question. Consider a system whose energy is no more than $`E`$. The limitation imposed on $`E`$ amounts to a limitation on the momentum space available to the system’s components (provided the potential energy is bounded from below). Now, if part of the system’s energy is in the form of a coherent (global) kinetic energy (in contrast to random motion of its constituents), then the momentum space available to the system’s components is further limited (part of the energy of the system is irrelevant for the system’s statistical properties). If the system has a finite dimension in space, then its phase space is limited. This amounts to an upper bound on its entropy. This bound evidently decreases with the absolute value of the intrinsic angular momentum of the system. However, our simple argument cannot yield the exact dependence of the entropy bound on the system’s parameters: its energy, intrinsic angular momentum (spin), and proper radius.
In fact, black-hole physics (more precisely, the GSL) provides a concrete universal upper bound for rotating systems. We consider a spinning body of rest mass $`\mu `$, (intrinsic) spin $`s`$ and proper cylindrical radius $`R`$, which is descending into a black hole. We consider plane (equatorial) motions of the body in a Kerr-Newman background , with the (intrinsic) spin orthogonal to the plane (the general motion of a spinning particle in a Kerr-Newman background is very complicated, and has not been analyzed so far). The black-hole (event and inner) horizons are located at
$$r_{\pm }=M\pm (M^2-Q^2-a^2)^{1/2},$$
(3)
where $`M`$, $`Q`$ and $`a`$ are the mass, charge and angular momentum per unit mass of the hole, respectively (we use gravitational units in which $`G=c=1`$). The test-particle approximation implies $`|s|/(\mu r_+)\ll 1`$.
The equation of motion of a spinning body in the equatorial plane of a Kerr-Newman background is a quadratic equation for the conserved energy (energy-at-infinity) $`E`$ of the body
$$\stackrel{~}{\alpha }E^2-2\stackrel{~}{\beta }E+\stackrel{~}{\gamma }=0,$$
(4)
where the expressions for $`\stackrel{~}{\alpha }`$, $`\stackrel{~}{\beta }`$ and $`\stackrel{~}{\gamma }`$ are given in Ref. .
The actual role of buoyancy forces in the context of the GSL is controversial (see e.g., ). Bekenstein has recently shown that buoyancy protects the GSL, provided the floating point (see ) is close to the black-hole horizon. In addition, Bekenstein has proved that one can derive the universal entropy bound Eq. (2) from the GSL when the floating point is near the horizon (this is the relevant physical situation for macroscopic and mesoscopic objects with a moderate number of species in the radiation, which seems to be the case in our world). The entropy bound Eq. (2) is also a sufficient condition for the validity of the GSL. For simplicity, and in the spirit of the original analysis of Bekenstein , we neglect the buoyancy contribution to the energy bookkeeping of the body. As in the case of non-rotating systems, we expect this not to affect the final entropy bound.
The gradual approach to the black hole must stop when the proper distance from the body’s center of mass to the black-hole horizon equals $`R`$, the body’s radius. Thus, in order to find the change in black-hole surface area caused by an assimilation of the spinning body, one should first solve Eq. (4) for $`E`$ and then evaluate it at the point of capture $`r=r_++\delta (R)`$, where $`\delta (R)`$ is determined by
$$\int _{r_+}^{r_++\delta (R)}(g_{rr})^{1/2}dr=R,$$
(5)
with $`g_{rr}=(r^2+a^2\mathrm{cos}^2\theta )\mathrm{\Delta }^{-1}`$, and $`\mathrm{\Delta }=(r-r_{-})(r-r_+)`$. Integrating Eq. (5) one finds (for $`\theta =\pi /2`$ and $`R\ll r_+`$)
$$\delta (R)=(r_+-r_{-})\frac{R^2}{4r_+^2}.$$
(6)
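The integration in Eq. (5) is easily reproduced symbolically; the following sketch (our notation) keeps only the leading near-horizon behaviour, $`\sqrt{g_{rr}}\simeq r_+/\sqrt{(r_+-r_{-})(r-r_+)}`$ at $`\theta =\pi /2`$:

```python
import sympy as sp

R, delta, rp, drpm, u = sp.symbols('R delta r_+ Delta_r u', positive=True)
# drpm stands for r_+ - r_-; u = r - r_+ is the coordinate distance from the horizon

proper = sp.integrate(rp / sp.sqrt(drpm * u), (u, 0, delta))   # lhs of Eq. (5)
print(sp.simplify(proper))                    # 2 r_+ sqrt(delta / Delta_r)

print(sp.solve(sp.Eq(proper, R), delta)[0])   # Delta_r*R**2/(4*r_+**2), i.e. Eq. (6)
```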
Thus, the conserved energy $`E`$ of a body having a radial turning point at $`r=r_++\delta (R)`$ is
$$E=\frac{aJ}{\alpha }-\frac{Js(r_+-r_{-})r_+}{2\mu \alpha ^2}+\frac{R(r_+-r_{-})}{2\alpha }\sqrt{\mu ^2+J^2\frac{r_+^2}{\alpha ^2}},$$
(7)
where the “rationalized area” $`\alpha `$ is related to the black hole surface area $`A`$ by $`\alpha =A/4\pi `$, and $`J`$ is the body’s total angular momentum. The second term on the r.h.s. of Eq. (7) represents the above mentioned gravitational spin-orbit interaction between the orbital angular momentum of the body and its intrinsic angular momentum (spin).
An assimilation of the spinning body by the black hole results in a change $`dM=E`$ in the black-hole mass and a change $`dL=J`$ in its angular momentum. Using the first-law of black hole thermodynamics
$$dM=\frac{\kappa }{8\pi }dA+\mathrm{\Omega }dL,$$
(8)
where $`\kappa =(r_+-r_{-})/2\alpha `$ and $`\mathrm{\Omega }=a/\alpha `$ are the surface gravity ($`2\pi `$ times the Hawking temperature ) and the rotational angular frequency of the black hole, respectively, we find
$$d\alpha =-\frac{2Jsr_+}{\mu \alpha }+2R\sqrt{\mu ^2+J^2\frac{r_+^2}{\alpha ^2}}.$$
(9)
The increase in black-hole surface area Eq. (9) can be minimized if the total angular momentum of the body is given by
$$J=J^{*}\equiv \frac{s\alpha }{Rr_+\sqrt{1-\left(\frac{s}{\mu R}\right)^2}}.$$
(10)
For this value of $`J`$ the area increase is
$$(\mathrm{\Delta }A)_{min}=8\pi \mu R\sqrt{1-\left(\frac{s}{\mu R}\right)^2},$$
(11)
which is the minimal increase in black-hole surface area caused by the assimilation of a spinning body with given parameters $`\mu `$, $`s`$ and $`R`$. Obviously, a minimum exists only for $`s\le \mu R`$. Otherwise, $`\mathrm{\Delta }A`$ can be made (arbitrarily) negative, violating the GSL. Møller’s well-known theorem therefore protects the GSL.
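For readers who wish to check the algebra, the minimization behind Eqs. (10) and (11) can be verified in a few lines of sympy (a sketch in our own variable names; recall $`dA=4\pi d\alpha `$):

```python
import sympy as sp

J, s, mu, R, alpha, rp = sp.symbols('J s mu R alpha r_+', positive=True)

dalpha = -2*J*s*rp/(mu*alpha) + 2*R*sp.sqrt(mu**2 + J**2*rp**2/alpha**2)   # Eq. (9)

print(sp.solve(sp.diff(dalpha, J), J))
# -> alpha*mu*s/(r_+*sqrt(mu**2*R**2 - s**2)), equivalent to Eq. (10);
#    real only for s < mu*R, i.e. Moller's restriction

Jstar = alpha*mu*s/(rp*sp.sqrt(mu**2*R**2 - s**2))
print(sp.simplify(dalpha.subs(J, Jstar)))
# -> 2*sqrt(mu**2*R**2 - s**2) = 2*mu*R*sqrt(1 - (s/(mu*R))**2),
#    so (Delta A)_min = 8*pi*mu*R*sqrt(1 - (s/(mu*R))**2), Eq. (11)
```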
Arguing from the GSL, we derive an upper bound to the entropy $`S`$ of an arbitrary system of proper energy $`E`$, intrinsic angular momentum $`s`$ and proper radius $`R`$:
$$S\leq 2\pi\sqrt{(RE)^2-s^2}/\hbar.$$
(12)
It is evident from this suggestive argument that in order for the GSL to be satisfied $[(\Delta S)_{tot}\equiv(\Delta S)_{bh}-S\geq 0]$, the entropy $S$ of the rotating system should be bounded as in Eq. (12). This upper bound is universal in the sense that it depends only on the system’s parameters (it is independent of the black-hole parameters which were used to suggest it).
It is worth emphasizing an important assumption made in obtaining the upper bound Eq. (12): we have not taken into account second-order interactions between the particle’s angular momentum and the black hole, which are expected to be of order $O(J^2/M^3)$. Taking cognizance of Eq. (10) we learn that this approximation is justified for rotating systems with negligible self-gravity, i.e., rotating systems with $\mu\ll R$.
Although our derivation of the entropy bound is valid only for rotating systems with negligible self-gravity, we conjecture that it might be applicable also for strongly gravitating systems. Positive evidence for the validity of the bound is the fact that any Kerr black hole saturates it, provided the effective radius $R$ is properly defined for the black hole: consider an electrically neutral Kerr black hole. Let its energy and angular momentum be $E=M$ and $s=Ma$, respectively. The black-hole entropy $S_{BH}=A/4\hbar=\pi(r_+^2+a^2)/\hbar$ exactly saturates the entropy bound provided one identifies the effective radius $R$ with $(r_+^2+a^2)^{1/2}$, where $r_+=M+(M^2-a^2)^{1/2}$ is the radial Boyer-Lindquist coordinate of the Kerr black-hole horizon. The identification may be reasonable because $4\pi(r_+^2+a^2)$ is exactly the black-hole surface area.
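The saturation is a short identity; writing it out (our addition), with $R=(r_+^2+a^2)^{1/2}$, $E=M$ and $s=Ma$,

$$(RE)^2-s^2=M^2(r_+^2+a^2)-M^2a^2=M^2r_+^2,$$

so the right-hand side of Eq. (12) becomes $2\pi Mr_+/\hbar=\pi(r_+^2+a^2)/\hbar=S_{BH}$, where the last step uses the Kerr identity $r_+^2+a^2=2Mr_+$.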
Evidently, systems with negligible self-gravity (the rotating system in our gedanken experiment) and systems with maximal gravitational effects (i.e., rotating black holes) both satisfy the upper bound Eq. (12). Thus, this bound appears to be of universal validity. The intriguing feature of our derivation is that it uses a law whose very meaning stems from gravitation (the GSL, or equivalently the area-entropy relation for black holes) to derive a universal bound which has nothing to do with gravitation [written out fully, the bound Eq. (12) would involve $\hbar$ and $c$, but not $G$]. This provides a striking illustration of the unity of physics.
In summary, an application of the generalized second law of thermodynamics to an idealized gedanken experiment, in which an entropy-bearing rotating system falls into a black hole, enables us to conjecture an improved upper bound to the entropy of a rotating system. The bound is stronger than Bekenstein’s bound for non-rotating systems. Moreover, this bound seems to be remarkable from a black-hole physics point of view: provided the effective radius $R$ is properly defined, all Kerr black holes saturate it (although we emphasize again that our specific derivation of the bound is consistent only for systems with negligible self-gravity). This suggests that the Schwarzschild black hole is not unique from a black-hole entropy point of view, removing the disturbing feature of the entropy bound Eq. (2). Thus, all electrically neutral black holes seem to have the maximum entropy allowed by quantum theory and general relativity. This provides a striking illustration of the extreme character displayed by (all) black holes, which is, however, still within the boundaries of more mundane physics.
ACKNOWLEDGMENTS
I thank Jacob D. Bekenstein for helpful discussions. This research was supported by a grant from the Israel Science Foundation.
# Global Phase Diagram of a One-Dimensional Driven Lattice Gas
## Abstract
We investigate the non-equilibrium stationary state of a translationally invariant one-dimensional driven lattice gas with short-range interactions. The phase diagram is found to exhibit a line of continuous transitions from a disordered phase to a phase with spontaneous symmetry breaking. At the phase transition the correlation length is infinite and density correlations decay algebraically. Depending on the parameters which define the dynamics, the transition either belongs to the universality class of directed percolation or to a universality class of a growth model which preserves the local minimal height. Consequences of some mappings to other models, including a parity-conserving branching-annihilation process are briefly discussed.
The interplay of external driving fields and internal repulsive forces between particles can lead to interesting and unexpected phase transitions in the steady states of one-dimensional driven diffusive systems even if the interactions are only short-ranged . Generically, the presence of boundaries or single defects in driven systems leads to shock waves and mutual blocking mechanisms which result in a breakdown of homogeneous particle flow. Thus localized static inhomogeneities are responsible for a variety of phenomena including first- and second-order phase transitions or spontaneous symmetry breaking . These observations are of practical importance for the qualitative understanding of many-body systems in which the dynamic degrees of freedom reduce to effectively one dimension as e.g. in traffic flow , kinetics of protein synthesis , gel-electrophoresis , or interface growth of thin films .
Whether continuous phase transitions can also occur in spatially homogeneous non-equilibrium systems in one dimension is less well understood . In particular, there is a long-standing conjecture that in systems with local interactions the steady states have rapidly decaying correlations and that, as in 1d equilibrium models, no phase transition accompanied by algebraically decaying correlations takes place. On the other hand, recent studies of more complicated driven systems of three or more species of particles in $1d$ have demonstrated that phase separation may take place in these models , thus proving the possibility of long-range order, but leaving open the issue of continuous phase transitions with algebraic decay of correlations. In the absence of a general framework for studying non-equilibrium phase transitions, analyzing specific models could provide useful insight into these complex phenomena.
In this context, several translationally invariant one-dimensional growth models with local interactions which exhibit roughening transitions have recently been introduced. A common feature of these models is that one of the local transition rates which govern their dynamics is set to zero. The resulting roughening transition in one class of models belongs to the universality class of directed percolation . In another class of growth models which preserve the local minimal height, the transition is found to belong to a different universality class . It would be of great interest to put these classes of models within a unifying framework, so that the various types of transitions, the associated crossover phenomena and the global phase diagram could be studied.
In this Letter we introduce a simple homogeneous driven $`1d`$ lattice gas model with local dynamics. It exhibits a phase transition where correlations decay algebraically and which is accompanied by spontaneous symmetry breaking. The model can be mapped onto a growth model where the transition becomes a roughening transition. By varying the parameters which define its dynamics, some types of the transitions discussed above can be realized. The various transitions and the global phase diagram are studied.
We consider a lattice gas which is an asymmetric exclusion process with next-nearest-neighbour interaction. Each lattice site $i\in\{1,2,\ldots,L\}$ of a periodic chain may be either empty ($\emptyset$) or occupied by one particle of a single species, labeled $A$. The model evolves by random sequential updating. Particles hop to the right with constant attempt rate $r$ ($q$) if the right nearest-neighbour site is vacant and the nearest-neighbour site at the left is occupied (empty). The left-hopping mechanism differs from that of the KLS models : a particle hops to the left with rate $p=1-q-r$ only if the next-nearest-neighbour site is empty as well. The model is therefore defined by the transitions
$$\begin{array}{cccccccc}\hfill A& A& \emptyset & \rightarrow & A& \emptyset & A& \text{with rate }r,\hfill \\ \hfill \emptyset & A& \emptyset & \rightarrow & \emptyset & \emptyset & A& \text{with rate }q,\hfill \\ \hfill \emptyset & \emptyset & A& \rightarrow & \emptyset & A& \emptyset & \text{with rate }p.\hfill \end{array}$$
(1)
By identifying vacancies with up-spins and particles with down-spins, these dynamics may be interpreted as a non-equilibrium spin-relaxation process. The choice $`p=0`$ is a special case of the kinetic Ising models of Ref. , with $`r=q=1/2`$ corresponding to the totally asymmetric exclusion process (TASEP) . In yet another mapping one obtains a growth model for a one-dimensional interface (see below).
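To make the dynamics concrete, a minimal random-sequential-update implementation might look as follows (our illustrative sketch, not the code used for the production runs quoted below; the system size, rates and run length are arbitrary choices):

```python
import random

def sweep(n, r, q):
    """One Monte-Carlo sweep of the dynamics (1) on a periodic chain.
    n[i] = 1 denotes a particle (A), n[i] = 0 a vacancy; p = 1 - q - r."""
    L = len(n)
    p = 1.0 - q - r
    for _ in range(L):
        i = random.randrange(L)
        triple = (n[i], n[(i + 1) % L], n[(i + 2) % L])
        u = random.random()
        if triple == (1, 1, 0) and u < r:        # A A 0  ->  A 0 A
            n[(i + 1) % L], n[(i + 2) % L] = 0, 1
        elif triple == (0, 1, 0) and u < q:      # 0 A 0  ->  0 0 A
            n[(i + 1) % L], n[(i + 2) % L] = 0, 1
        elif triple == (0, 0, 1) and u < p:      # 0 0 A  ->  0 A 0
            n[(i + 1) % L], n[(i + 2) % L] = 1, 0

# half-filled random initial condition on the line r = q
L, r, q = 256, 0.1, 0.1
n = [1] * (L // 2) + [0] * (L // 2)
random.shuffle(n)
for _ in range(1000):
    sweep(n, r, q)
# instantaneous staggered magnetization (2/L) sum_i (-1)^i n_i
print(2.0 * sum((-1) ** i * x for i, x in enumerate(n)) / L)
```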
Our interest is in the stationary behavior of the half-filled system, i.e. the asymptotic state of the system reached at very large times. A thorough survey of the phase diagram yields as main features a phase $`(I)`$ with spontaneously broken $`Z_2`$-symmetry between two antiferromagnetic stationary states and a disordered phase $`(II)`$ (Fig. 1). As we shall argue below, the transition line separating the two phases belongs to the universality class of directed percolation except for $`r=0`$, where the universality class is different.
More specifically, we found that the stationary state can be calculated exactly along the four lines $r=0$, $q=0$, $q=1/2$ and $p=0$ (Fig. 1). (i) For $q=1/2$, the system is disordered, and the stationary states are uncorrelated product measures. (ii) For $p=0$, the stationary distribution is that of a one-dimensional Ising model . Correlations are short-ranged with divergent correlation lengths only at the extremal points $q=1$ (phase separation into regimes with complete ferromagnetic order but opposite magnetization) and $r=1$ (complete antiferromagnetic order), respectively. (iii) The stationary state along $q=0$ is also antiferromagnetically ordered, but at $r=p=1/2$ there is an interesting phase transition in the dynamics of the system. Evidently, for small $q$, transitions between the two antiferromagnetic states $A\emptyset A\emptyset A\emptyset\cdots$ and $\emptyset A\emptyset A\emptyset A\cdots$ are possible with finite probability, if the system is finite. However, for $r>1/2$ the flipping time between these two states diverges with a power law in system size $L$, whereas for $r<1/2$ this flipping time diverges exponentially in system size. This is a signature of spontaneous symmetry breaking (and associated ergodicity breaking in the thermodynamic limit), even away from the line $q=0$. (iv) Along $r=0$ the minimal height of the corresponding growth model is conserved . As in the related class of models of Ref. , the dynamics satisfies detailed balance with respect to an energy functional which is proportional to the area under the interface. The point $p=q=1/2$ (corresponding to a change in the sign of the energy $E$) marks the transition from an antiferromagnetic state to a state where complete phase ordering takes place and translational invariance is spontaneously broken. This transition is analogous to the wetting transition of Ref. .
This summary of exact results demonstrates the rich behavior that even rather simple homogeneous lattice gases may show and also indicates a certain degree of universality of these phenomena in 1D non-equilibrium systems. Here, we want to discuss the behavior of the system as it crosses the phase transition line between the broken symmetry phase I and the disordered phase II. We shall focus on the line $`r=q`$ with the limiting cases $`r=q=1/2`$ (usual right hopping TASEP with uncorrelated disordered stationary state) and $`r=q=0`$ (left hopping TASEP with next-nearest-neighbour repulsion and fully ordered stationary states). We performed Monte-Carlo simulations for half-filled periodic systems of size $`L=2^n`$, mostly with $`n=10`$. Expectation values were averaged over $`4000L`$ rounds after a transient period of at least the same duration.
We study the quantity
$$\Delta(t)=\frac{1}{t}\int_0^t dt^{\prime}\,\frac{2}{L}\sum_{i=1}^{L}(-1)^i n_i(t^{\prime}).$$
(2)
where $n_i=0$ corresponds to an empty site $i$ and $n_i=1$ to an occupied one. In the limit $t\to\infty$, it corresponds to the non-conserved order parameter $(2/L)\sum_i(-1)^i n_i$, which is the stationary difference in sublattice particle densities (the ‘staggered magnetization’ in spin language). Because of ergodicity, the stationary value of the order parameter in a finite system vanishes by symmetry. However, as a signature of spontaneously broken symmetry in the thermodynamic limit, one expects an initial decay to some quasi-stationary value $\Delta_0$, before $\Delta$ eventually approaches zero for very long times (exponentially large in system size). On the other hand, in the disordered phase one expects an initially ordered state with $\Delta=1$ to rapidly disorder, i.e. one expects $\Delta$ to decay quickly to zero.
A second quantity of interest is the stationary particle current which, according to the definition (1) of the process, on the line $r=q=(1-p)/2$ is given by $j(q)=q\langle n_i(1-n_{i+1})\rangle-(1-2q)\langle(1-n_{i-1})(1-n_i)n_{i+1}\rangle$. Clearly, $j(0)=0$ and $j(1/2)=1/8$, up to a small finite-size correction of order $1/L$. The presence of spontaneous symmetry breaking suggests $j=0$ for all $q\leq q_c$ (up to exponentially small corrections in system size), since any finite current would lead to a transition between the two degenerate stationary states with $\Delta=\pm\Delta_0$ within a finite time.
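As an illustration (again our sketch, consistent with the update code above), the current can be estimated from a single configuration as:

```python
def current(n, q):
    """Estimator for j(q) on the line r = q, p = 1 - 2q:
    j = q <n_i (1 - n_{i+1})> - (1 - 2q) <(1 - n_{i-1}) (1 - n_i) n_{i+1}>."""
    L = len(n)
    right = sum(n[i] * (1 - n[(i + 1) % L]) for i in range(L)) / L
    left = sum((1 - n[i - 1]) * (1 - n[i]) * n[(i + 1) % L] for i in range(L)) / L
    return q * right - (1.0 - 2.0 * q) * left
```

In practice this estimator is, of course, averaged over many configurations in the stationary state.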
This intuitive picture is well-supported by our Monte Carlo simulations (Fig. 2). The current $`j`$ vanishes in phase I and the order parameter $`\mathrm{\Delta }_0`$ vanishes in phase II. We find a phase transition point $`q_c=0.1515\pm 0.0005`$ for $`r=q`$, above which the current decays with a power law
$$j\sim(q-q_c)^y$$
(3)
with $y\approx 1.7\pm 0.1$. Approaching the critical point $q_c$ from below, $\Delta_0$ decays with a power law
$$\Delta_0\sim(q_c-q)^\theta$$
(4)
with $\theta\approx 0.54\pm 0.04$. To investigate whether this continuous bulk phase transition is accompanied by spatial long-range order—as one would expect in an equilibrium system—we examine the stationary density correlation function $C(k)=4(\langle n_i n_{i+k}\rangle-\langle n_i\rangle\langle n_{i+k}\rangle)$ which turns out to decay to a non-zero value below $q_c$. At the critical point, correlations decay algebraically
$$C(k)\sim k^{-\gamma},$$
(5)
where $\gamma\approx 1.0\pm 0.1$ (Fig. 2) .
We can gain further insight by considering the mapping to an interface model which is described by height difference variables $1-2n_i$ and an additional stochastic variable $h$, representing the absolute height of the interface at some reference point. Each time a particle hops to the right, the local height increases by two units (deposition), whereas hopping to the left describes a height decrease (evaporation) (Fig. 3). Thus the current gives the stationary growth velocity of the interface, while the density correlation function measures height-gradient correlations. Growth occurs at local minima with rate $q$, independently of the precise nature of the immediate environment. However, evaporation of particles does not occur from a “flat” part of the interface: the corresponding process $A\emptyset A\emptyset\rightarrow AA\emptyset\emptyset$ is forbidden. On a coarse-grained scale this means that in a locally flat piece of the interface evaporation is not strong enough to create little craters which could then further grow. A similar situation (with different microscopic dynamics) was investigated by Alon et al. , who found a phase transition between a smooth phase where no current flows and a rough, growing phase in the universality class of the KPZ-equation. The transition is related to directed percolation in 1+1 dimensions and is accompanied by spontaneous symmetry breaking in the height variable $h$.
Here we find similar behavior which is most transparent in the two limiting cases $q=0$ and $q=1/2$, respectively. The limit $q\to 1/2$ corresponds to the TASEP (growing, rough interface), which indeed describes interface growth in the KPZ universality class . In the limit $q\to 0$, there is no current and one has spontaneous symmetry breaking between (macroscopically) flat interfaces on an even or odd height level, respectively. We stress, however, that spontaneous symmetry breaking occurs already on the level of the particle description, i.e. without reference to the extra height variable. Assuming universality, one expects the exponent $y$ to be given by the critical exponent $\nu_{\parallel}\approx 1.73$ of the DP-coherence time and also a logarithmic divergence of the interface width $w=[L^{-1}\sum_i(h_i-L^{-1}\sum_j h_j)^2]^{1/2}$. This is in agreement with our results in Eq. (3) and Fig. 4. Also the value (4) of the order parameter exponent $\theta$ is consistent with the result $\theta=0.55\pm 0.05$ reported in Ref. , thus independently confirming universality. Results on the correlation exponent $\gamma$ have not been reported in earlier work.
The transition at $`r=0`$ is of a different nature. Here the model satisfies detailed balance and the current vanishes both above and below the transition. At the phase transition point $`q_c=1/2`$ the lattice gas is uncorrelated. Using the interface representation of the model one can show that the interface width diverges algebraically with an exponent $`1/3`$ as $`q`$ approaches $`1/2`$ from below (Fig. 4).
The understanding of $`\theta `$ and of the new correlation exponent $`\gamma `$ (which have no conventional interpretation within the framework of directed percolation), and the behavior of these two quantities at the transition at $`r=0`$ have to be addressed in future work. Also the behavior of the system away from half-filling, where preliminary results suggest the disappearance of phase I, is an open issue. Returning to our original question we conclude at this point that the stationary states of homogeneous one-dimensional lattice gas models may exhibit continuous bulk phase transitions with an algebraic decay of correlations even if interactions are short-ranged. In our model, this transition results from dynamical constraints which—unlike in the KLS models—lead to a competition between a disordering dynamics (the right-hopping process) and processes forcing the system into either of two antiferromagnetically ordered states (the restricted left hopping process). For sufficiently strong ordering processes, the stationary current ceases to flow and spontaneous symmetry breaking sets in.
It is interesting to consider yet another mapping of our model, obtained by mapping particles into vacancies and vice versa on one (either even or odd) sublattice. The resulting dynamics are those of a new class of parity-conserving (PC) branching-annihilation processes $\emptyset AA\rightarrow\emptyset\emptyset\emptyset$ and $A\emptyset\emptyset\rightarrow AAA$ with no absorbing state. In addition to particle-parity conservation (particle number modulo 2), there is a $U(1)$ symmetry which results from the particle number conservation of the original hopping process. Generically, one expects parity-conserving branching-annihilation processes not to be in the DP universality class, but in a distinct PC universality class . From our results it appears that, in the presence of additional symmetries, the picture of phase transitions in 1D branching-annihilation processes is more complicated.
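To see the mapping explicitly (our spelled-out check): flip the occupation, $n_i\to 1-n_i$, on the odd sublattice. For even $i$ the rate-$r$ move $AA\emptyset\to A\emptyset A$ on sites $(i,i+1,i+2)$ becomes $A\emptyset\emptyset\to AAA$, while for odd $i$ it becomes $\emptyset AA\to\emptyset\emptyset\emptyset$; the rate-$p$ move transforms into the same two reactions with the roles of the sublattices interchanged. Every elementary transition of the transformed model thus changes the particle number by $\pm 2$, which is why the parity (particle number modulo 2) is conserved.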
We thank M. R. Evans for helpful discussions. G.M.S. and D.H. thank the Weizmann Institute for kind hospitality. We are grateful for financial support by the Einstein Center (G.M.S.), the DFG’s Heisenberg program (D.H., grant no. He 2789/1-1) and support of the Israeli Science Foundation and the Israel Ministry of Science.
# Collapse and the Tritium Endpoint Pileup
## 1 Introduction
What happens at the endpoint of tritium $\beta$-decay? The tritium nucleus decays to a helium nucleus, electron, and antineutrino with energy $E_0\approx 18.6$ keV,
$$\mathrm{H}^3\rightarrow\mathrm{He}^3+e+\overline{\nu}+E_0.$$
(1)
What is the chance that an electron is emitted with energy $E_1$ just a little less than the endpoint energy $E_0$? Given the kinematics of a three-body decay and assuming the neutrino mass and the nuclear recoil energy are negligible, one can show that the probability $P(E_1)dE_1$ that an electron is emitted into a window $dE_1$ at energy $E_1$ is $P(E_1)=Np_1^2(E_0-E_1)^2$ for $0\leq E_1\leq E_0$ and $P(E_1)=0$ for $E_0\leq E_1$, where $p_1$ is the electron momentum and $N$ is a normalization constant: $N=1/\int_0^{E_0}(E_2^2+2m_eE_2)(E_0-E_2)^2\,dE_2$. The formula follows from the phase space available to the decay particles and neglects the existence of excited daughter atomic states, Coulomb corrections, and other complications. To simplify the discussion, we ignore all complications except the ones we need to introduce.
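These quantities are easy to evaluate numerically; the short sketch below (our addition, anticipating Problem 1 of the Appendix) uses a crude Riemann sum:

```python
import numpy as np

E0 = 18.6e3        # endpoint energy [eV]
me = 511.0e3       # electron mass [eV]

E = np.linspace(0.0, E0, 200001)
integrand = (E**2 + 2.0 * me * E) * (E0 - E) ** 2   # p^2 (E0 - E)^2
dE = E[1] - E[0]
N = 1.0 / (integrand.sum() * dE)                    # normalization [eV^-5]

print(N)                        # ~ 9.8e-23 eV^-5
print(E[integrand.argmax()])    # ~ 6210 eV, the most probable electron energy
```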
Just below the endpoint an approximation suffices,
$P(E_1)\approx 2Nm_eE_0(E_0-E_1)^2,$ (2)
where $E_1\approx E_0$ and $m_e$ is the energy equivalent of the electron mass, $m_e=511$ keV. Thus we have
Assumption I. The probability that an electron is emitted with energy $`E_1`$ near the endpoint is given by the probability $`P`$ in (2).
The observed spectrum [2-5] is something else. ‘An anomalous pileup of events at the endpoint’ says the footnote in the Particle Data Group listing for the neutrino mass squared, a comment attributed to Stoeffl and Decman . An antineutrino mass would remove events, dropping the rate below that expected with the probability $P$ in (2), so an antineutrino mass is not the complication that solves the problem. Thus we have
Observation. An anomalous pileup of events is observed at the endpoint of tritium decay.
Suppose we agree that (2) is wrong at the spectrometer by Observation, while by Assumption I the formula is correct at the decay site. This leaves the possibility that complications occur in-flight from decay to spectrometer that explain the change in spectrum from emission to detection.
## 2 In-Flight Complications
Upon decay, the three decay particles form an entangled quantum system that eventually collapses into what we can picture as three separate, independent particle states, see Fig. 1. Some such systems would collapse quicker than others; let $`T`$ be a typical time between emission and collapse.
During time $t<T$ the electron energy is uncertain within $\Delta E$ and $T\approx\hbar/(2\Delta E)$, by one of the uncertainty principles . Nothing can be done to find out what the electron energy is during this time without incurring an Immediate-Collapse penalty. One can certainly not use its eventual energy $E$ observed at the detector to infer the probability of emission $P$. If we knew $P$ at emission, then we would know the electron energy at emission and we would collapse the system immediately upon decay.
Now we have something to work with. The electron observed at the spectrometer with energy $`E`$ is an electron that until time $`T`$ had an energy $`E_1`$ somewhere in a range $`\mathrm{\Delta }E`$ surrounding $`E`$. Thus we have
Uncertainty Deduction. An electron observed to have energy $E\approx E_0$ could have been emitted with any energy $E_1$ in the range
$$E-\frac{\Delta E}{2}\leq E_1\leq E+\frac{\Delta E}{2}.$$
(3)
The spectrum $`P`$ in (2) is expected when $`\mathrm{\Delta }E`$ is negligible and the typical collapse time $`T`$ is very long; $`P`$ describes the decay of isolated tritium nuclei.
The energies in the range (3) have different decay probabilities. For simplicity, we assume that an electron detected with energy $E$ could have originated with equal likelihood from anywhere in the range (3) with no contributions from energies outside the range. If $E_2$ and $E_3$ are both in the interval (3), then $P(E_2)$ and $P(E_3)$ contribute with equal weight to the probability average. By conservation of energy, and no matter what probability average is obtained for an energy $E>E_0$, all detected electrons must have energy $E\leq E_0$.
Assumption II. The probability that an electron arrives at the spectrometer with energy $`E`$ is the average $`\overline{P}(E)`$ over the emission probabilities $`P(E_1)`$ for electrons emitted with energies $`E_1`$ in the range (3).
Energy intervals like $`A`$ in Figs. 2 and 3 that do not contain the endpoint give an average emission into window $`dE`$ with probability $`\overline{P}_A(E)dE`$, where
$\overline{P}_A(E)=\frac{1}{\Delta E}\int_{E-\Delta E/2}^{E+\Delta E/2}P(E_1)\,dE_1\approx P(E)+2Nm_eE_0\frac{\Delta E^2}{12},$ (4)
for $E+\Delta E/2\leq E_0$ and $E_0-E\ll E_0$.
Intervals like $`B`$ in Fig. 2 that do contain the endpoint give
$\overline{P}_B(E)=\frac{1}{\Delta E}\int_{E-\Delta E/2}^{E_0}P(E_1)\,dE_1\approx 2Nm_eE_0\frac{(E_0-E+\Delta E/2)^3}{3\Delta E},$ (5)

for $E\leq E_0\leq E+\Delta E/2$. For intervals like $C$ in Fig. 2 with $E\geq E_0$, the probability must vanish, $\overline{P}_C(E)=0$, to conserve energy.
Let us consider the observations reported in Ref. 2. At $E=18550$ eV, Fig. 2 of Ref. 2 shows that $\overline{P}_{obs}=1.2P$. By (2), this means $\overline{P}_{obs}=P+2Nm_eE_0\times 70\ \mathrm{eV}^2$. By (4), $\overline{P}_A=\overline{P}_{obs}$ when $\Delta E^2/12=70$ eV$^2$. Thus $\Delta E=30$ eV. ($\Delta E=30$ eV also satisfies $\overline{P}_B=\overline{P}_{obs}$, but $E+\Delta E/2=18565$ eV $<E_0$, implying a type A interval; see Fig. 1.)
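Numerically (our sketch; the input numbers are those quoted above):

```python
import math

# From Eq. (4): (Pbar_A - P)/P = (dE**2 / 12) / (E0 - E)**2, and the 20%
# excess of Ref. 2 at E = 18550 eV corresponds to dE**2 / 12 = 70 eV**2.
dE = math.sqrt(12.0 * 70.0)
print(dE)                   # ~ 29 eV, quoted as 30 eV in the text

hbar = 6.582e-16            # eV s
print(hbar / (2.0 * dE))    # ~ 1.1e-17 s, anticipating Eq. (6)
```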
Reconciliation Value. The electron spectrum adjusted for in-flight collapse agrees with the observed spectrum when the energy uncertainty is about $\Delta E=30$ eV. In Fig. 4 we plot the spectrum (2) together with a spectrum with $\Delta E=30$ eV.
## 3 Discussion
The energy uncertainty gives a lower bound for $`T`$,
$$T\geq\frac{\hbar}{2\Delta E}=\frac{6.6\times 10^{-16}\ \mathrm{eV\,s}}{2\times 30\ \mathrm{eV}}=1.1\times 10^{-17}\ \mathrm{s}.$$
(6)
This gives the collapse time for the three particle system that would provide an uncertainty of 30 eV in electron energy.
An electron near the endpoint travels at a speed of $v=c\sqrt{1-m_e^2/(m_e+E_0)^2}=0.26c$. In the average collapse time $T$ the electron travels a distance $x=0.26cT\approx 9\times 10^{-10}$ m $\approx$ 9 atomic diameters and the antineutrino has traveled a distance of $cT\approx 33\times 10^{-10}$ m $\approx$ 33 atomic diameters. The He nucleus recoils more than about $9m_e/m_{\mathrm{He}}=1.6\times 10^{-3}$ of an atomic diameter.
Any of the three particles, electron, He nucleus, or antineutrino, could undergo the interaction or whatever it is that causes the collapse of the entire three particle entangled quantum system. At more than nine atomic diameters from the recoiling nucleus the electron could be interacting with the ambient gas molecules. Likewise, the antineutrino at more than 33 atomic diameters would be out amongst the gas molecules.
If the ambient gas forces collapse then varying the gas population should have an effect on the number of excess counts observed. A higher density makes for a shorter $`T`$ and a larger $`\mathrm{\Delta }E`$. Doubling the gas density might double the pileup at 18550 eV.
Alternatively, the recoiling He nucleus might be detected by the electron(s) originally bound to the tritium nucleus. The number of electrons bound to the tritium varies with the type of source: atomic tritium, molecular tritium, etc. If the bound electrons are involved in the collapse mechanism, then laser light of a frequency that is slightly more than the lowest resonance available to the original tritium sample might make the observed pileup a function of laser parameters.
For simplicity consider an atomic tritium source. Immediately upon decay, the atomic electron remains in the tritium ground state $\psi_\mathrm{H}$. This state is a superposition of helium-3 states, with 70% ($=|\langle\psi_{\mathrm{He}}|\psi_\mathrm{H}\rangle|^2$) of the electrons in the helium-3 ground state $\psi_{\mathrm{He}}$. The change in the atomic electron’s energy remains insignificant for a short time after the nucleus has decayed, by the uncertainty principle.
Assume that the atomic electron detects the changes in the nucleus shortly after the change in its own energy is detectable. The collapse time $T$ would then be given by $T\approx\hbar/(2\delta E)$, where $\delta E$ is the change in the atomic electron’s energy. Let us neglect the longer $T$s for other states and consider only the 70% of atomic electrons that are in the helium-3 ground state after the nuclear decay. For these the energy change is $\delta E=E_{\mathrm{He}}-E_{\mathrm{H}}=40$ eV and $T$ would be $T\approx\hbar/(2\delta E)\approx 0.8\times 10^{-17}$ s.
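Both numbers follow from elementary hydrogenic wavefunctions; a short check (our addition, using the standard 1s overlap $\langle 1s_{Z_1}|1s_{Z_2}\rangle=[2\sqrt{Z_1Z_2}/(Z_1+Z_2)]^3$):

```python
import math

Z1, Z2 = 1.0, 2.0           # tritium and helium-3 nuclear charges
overlap = (2.0 * math.sqrt(Z1 * Z2) / (Z1 + Z2)) ** 3
print(overlap ** 2)         # ~ 0.70: fraction left in the He-3 ground state

Ry = 13.6                   # eV
dE = Z2**2 * Ry - Z1**2 * Ry
print(dE)                   # ~ 40.8 eV: binding-energy change of the electron

hbar = 6.582e-16            # eV s
print(hbar / (2.0 * dE))    # ~ 0.8e-17 s: the inferred collapse time
```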
The source in Ref. 2 is molecular tritium, but let us assume the results would be similar with atomic tritium. If 70% of the tritium decays account for the observed 20% excess at $E=18550$ eV, then the excess must be 30% in the contributing population. Reworking the above calculation with the new excess gives $\Delta E^2/12=70\times(30/20)$ eV$^2$ and $\Delta E$ is now 35 eV. And $T$ decreases slightly to $T\approx\hbar/(2\Delta E)\approx 0.9\times 10^{-17}$ s. The near coincidence of this result deduced from the observations, $T=0.9\times 10^{-17}$ s, and the collapse time deduced from the energy change of the atomic electrons, $T=0.8\times 10^{-17}$ s, implies that the endpoint pileup may be due to the detection of the tritium decays by the electrons bound to the decaying nuclei.
## Appendix A Problems
1. (i) Find the numerical value of the normalization constant $N$ in (2). Also (ii) find the energy $E_{\mathrm{Max}}$ in eV at the maximum probability and (iii) find $P_{\mathrm{Max}}dE$ for a $dE=10$ eV window centered on $E_{\mathrm{Max}}$. [(i) $9.8\times 10^{-23}$ eV<sup>-5</sup>. (ii) 6210 eV. (iii) 0.00096]
2. A formula more accurate than the formula given in (2) is $P(E_1)\approx 2Nm_eE_1(E_0-E_1)^2$. Recalculate (4) and (5) with the more accurate formula and obtain the new uncertainty $\Delta E$ that gives $\overline{P}_{obs}(18550\ \mathrm{eV})=1.2P(18550\ \mathrm{eV})$.
3. For a relativistic particle of mass $m$ and kinetic energy $E$ the momentum has magnitude $p$ satisfying $p^2=E^2+2mE$. Show that this is true.
4. For a given value of $\Delta E=29$ eV, plot the fractional change in the spectrum, $(\overline{P}-P)/P$, from 18340 to 18640 eV as in Fig. 2 of Ref. 2. On the same graph plot the result of averaging $\overline{P}$ with a Gaussian with a 10 eV width at half maximum to simulate a 10 eV spectrometer resolution.
# Attractive Interactions Between Rod-like Polyelectrolytes: Polarization, Crystallization, and Packing
## ACKNOWLEDGMENTS
This work was sponsored by the National Science Foundation, grant DMR9807601.
# High-purity germanium detector ionization pulse shapes of nuclear recoils, $\gamma$-interactions and microphonism
## 1 Introduction
The dark matter in the Galactic halo is assumed to be dominantly composed of WIMPs . A direct detection method is through WIMP interaction with ordinary matter by elastic scattering off nuclei . Direct detection experiments search for the energy deposition produced in a low background detector by a WIMP elastically scattered off a nucleus therein (typically below 100 keV). The most promising future approaches include experiments with scintillation– , cryogenic– and semiconductor detectors . The best results at present are obtained using NaI–crystals and HPGe–detectors . The background of HPGe–detectors in the energy region below 100 keV originates from natural radioactivity and microphonic noise. A further step in reducing the background from natural radioactivity by one order of magnitude or more therefore needs a reliable method to identify microphonic noise. The success of digital pulse shape analysis in discriminating single and multiple scattered events encourages the search for a similar method to identify microphonics in the low energy region of HPGe–detectors. Another way to identify microphonics is the simultaneous use of two different shaping times in the processing of the signal (see and references therein), which is however not the topic of this paper. There are also other applications which use the information from the pulse shapes of the charge current of Ge–detectors . All of the cited papers measure pulse shapes at higher energies (beyond 200 keV).
A first step in developing a pulse shape analysis method is to study pulses of well-defined origin. Nuclear recoil events comparable to those of not yet known particles, WIMPs, can be generated by elastic scattering of neutrons off nuclei in the germanium detector. Gamma interaction events in the energy region below 100 keV can be generated, for example, by irradiating the detector with a <sup>133</sup>Ba source.
We measured the pulse shapes of neutron interactions, of $\gamma$–interactions and of microphonic noise. Simultaneously we measured the ionization efficiency of germanium recoil nuclei inside germanium. Nuclear recoil events have so far been studied by a cryogenic experiment , which demonstrated the difference in ionization and phonon signals produced by nuclear recoil and photon interactions. The ionization efficiency was already measured in the 1960s. With the exception of one early experiment, which measured the endpoint of the energy spectrum from elastically scattered neutrons , the shapes of a peak from inelastically scattered neutrons were studied.
From the good agreement of our measured ionization efficiency with the theory of Lindhard and the previous measurements we conclude that we have indeed measured pulse shapes of Ge recoil events in germanium. To sample the pulse shape of each recoil event we had to perform an event-by-event measurement. For this purpose we built up a coincidence experiment as described in section 2. In section 3 we discuss the measurement of $\gamma$–ray pulse shapes and in section 4 the measurement of microphonic pulses. We give a conclusion and an outlook in section 5.
## 2 Neutron Scattering Experiment
The experimental setup can be seen in Fig. 1. A 3.3 MHz pulsed proton beam with 16 MeV energy, 1 ns duration, and 1.5 nA current was used to produce neutrons in a lithium-coated copper target by p(<sup>7</sup>Li,<sup>7</sup>Be)n, p(<sup>65</sup>Cu,<sup>65</sup>Zn)n and p(<sup>63</sup>Cu,<sup>63</sup>Zn)n reactions. The maximal neutron energies for E<sub>p</sub> = 16 MeV at $30^{\circ}$ are listed in Tab. 1. Due to the different reactions and the large number of excited states in <sup>65</sup>Cu the neutrons had a continuous energy spectrum which was measured at 1.36 m distance by time of flight (TOF).
To select events from elastic scattering of neutrons inside the germanium we placed NE 213 scintillators at 87 and 132 degrees (compare Fig. 1). The scintillators were equipped with n,$\gamma$–discrimination using the differences in pulse rise times of neutron and $\gamma$–interactions in the liquid scintillator . A coincidence between the timing signal of the Ge–detector, one of the scintillators and the proton beam signal was used as the start signal for the time measurements. A coincidence of the delayed start signal and the n,$\gamma$–discrimination reduced the trigger rate down to 1 Hz due to rejection of random coincidences of $\gamma$–interactions in the scintillators and the Ge–detector.
For each event we recorded the energy deposits inside the HPGe and the neutron detectors, the n,$\gamma$–signal of the neutron detectors, the time differences between the beam pulse and each of the three detectors, and the pulse shape of the differentiated HPGe preamplifier output.
Our Ge–detector was an n-type coaxial, closed-ended HPGe detector with a mass of 1.05 kg and a diameter of 5 cm. N-type detectors are more resistant to fast neutron damage than p–types .
The germanium detector was calibrated with a <sup>228</sup>Th source, the TOF measurements with several delays from 2 ns up to 150 ns. The energy resolution of the germanium detector was 1.55 keV at 80 keV. The time resolution of the scintillators was $\approx$ 1.5 ns. In Fig. 2 the energy deposited inside the germanium detector is plotted as a function of the TOF of neutrons detected in one neutron detector. The events from elastically scattered neutrons form a continuous band towards the lower right.
From this measurement the ionization efficiency of germanium atoms in germanium can be calculated :
$$E_R=E_n\frac{2m_{Ge}m_n+m_n^2-m_n^2\cos 2\varphi-m_n\cos\varphi\sqrt{2(2m_{Ge}^2-m_n^2+m_n^2\cos 2\varphi)}}{(m_{Ge}+m_n)^2}$$
(1)
Here E<sub>R</sub> is the energy of the recoil nucleus, E<sub>n</sub> is the incident neutron energy, m<sub>n</sub> and m<sub>Ge</sub> are the masses of the neutron and the germanium nucleus, and $\varphi$ is the laboratory scattering angle of the neutron. The neutron energy E<sub>n</sub> can be calculated as a function of the flight time and of the energy loss of the neutron, which is equal to the recoil energy of the germanium nucleus:
$$t_n=d_1\sqrt{\frac{m_n}{2E_n}}+d_2\sqrt{\frac{m_n}{2(E_n-E_R)}},$$
(2)
where d<sub>1</sub> is the distance from the copper target to the germanium detector and d<sub>2</sub> that from the germanium detector to the neutron detector. Taking both formulae, the recoil energy can be calculated and compared to the ionization energy measured by the germanium detector. The ionization efficiency is given by the ratio of ionization energy to recoil energy. The results are plotted in figure 3. The experimental values are in good agreement with the theory of Lindhard which is also verified by other experiments down to 0.3 keV. Therefore we conclude that we have measured recoil events from elastically scattered neutrons inside the germanium detector.
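A compact numerical sketch of this reconstruction is given below (our illustration, not the analysis code actually used; the neutron-detector distance d2 and the rounded masses are assumptions made here for definiteness):

```python
import math

MN = 939.57e6             # neutron mass [eV] (rounded)
MGE = 72.6 * 931.494e6    # average natural-Ge nucleus mass [eV] (assumed)
C = 2.998e8               # speed of light [m/s]

def recoil_energy(En, phi):
    """Ge recoil energy, Eq. (1), for neutron energy En [eV], lab angle phi [rad]."""
    num = (2.0 * MGE * MN + MN**2 - MN**2 * math.cos(2.0 * phi)
           - MN * math.cos(phi)
           * math.sqrt(2.0 * (2.0 * MGE**2 - MN**2 + MN**2 * math.cos(2.0 * phi))))
    return En * num / (MGE + MN) ** 2

def tof(En, phi, d1, d2):
    """Total flight time, Eq. (2), with non-relativistic neutron velocities."""
    ER = recoil_energy(En, phi)
    v1 = C * math.sqrt(2.0 * En / MN)           # target -> Ge detector
    v2 = C * math.sqrt(2.0 * (En - ER) / MN)    # Ge detector -> neutron detector
    return d1 / v1 + d2 / v2

def neutron_energy(t, phi, d1=1.36, d2=1.0):
    """Invert Eq. (2) for En by bisection; tof() decreases monotonically with En."""
    lo, hi = 0.1e6, 20.0e6                      # bracketing energies [eV]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tof(mid, phi, d1, d2) > t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# ionization efficiency = measured ionization energy / kinematic recoil energy:
# eff = E_ion / recoil_energy(neutron_energy(t, phi), phi)
```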
The pulse shapes of recoil events were obtained by differentiation of the customary integrating preamplifier output with a 20 ns time constant. The signal was also integrated with 20 ns. To reduce the noise level we selected the pulse shapes according to their rise times and calculated mean pulse shapes. The mean pulse shapes were calculated by adding the individual pulses of a given rise time (ca. 100 pulses per rise time) and dividing by the number of pulses. In Fig. 4 we show as an example mean pulse shapes for 80 keV Ge recoil nuclei from neutron scattering (left picture) with rise times of 88 ns, 112 ns and 136 ns.
## 3 $`\gamma `$–Pulse shapes compared to nuclear recoils
Production of nuclear recoil pulses in a low–level experiment for a calibration measurement is not only a great effort but would also contaminate the experiment, since the neutrons would activate the detector and its shielding. Thus one has to think of a different source for generating pulse shapes to calibrate any pulse shape discrimination method. The usefulness of $\gamma$–sources is obvious. We have sampled pulse shapes from a <sup>133</sup>Ba source using the low energy Ba lines at 53 keV and 80 keV and the <sup>133</sup>Cs X–ray lines at 30 keV and 35 keV.
The pulse shapes in a coaxial detector depend on the interaction radius. Therefore we have irradiated the detector through a lead collimator at different radial positions. The resulting rise times as a function of the collimator position can be seen in Tab. 2. The shape of the pulses depends on the motion of the charge carriers in the electric field inside the Ge detector . Shown in the table are also the rise times corrected for the differentiation of the pulses with 50 ns shaping time and the rise times calculated under the assumption of a true coaxial Ge–detector and a constant electron drift velocity of 10<sup>7</sup> cm/s . For the n–type coaxial Ge detector the collection time of the electrons (which are the majority charge carriers and move towards the inner contact) dominates the time response of the detector . Thus interactions at smaller detector radii should show a smaller rise time than interactions which take place in the outer part of the detector. This dependence has in principle been confirmed by the behaviour of the pulses at low energies. The difference between calculated and measured rise times is due to the irradiation of the detector from the top, where the effect of the closed-ended geometry is most visible. Since our aim was to measure pulse shapes of different rise times and not to determine the interaction radius from the measured rise time, this effect is of no importance for this work.
In order to reduce the noise background the pulse shapes from $\gamma$–ray events were selected and summed in the same way as the nuclear recoil pulses described in Section 2. As an example we show in Fig. 4 (right picture) 80 keV pulse shapes from $\gamma$–ray events. The three rise-time classes are the same for nuclear recoil and $\gamma$–ray events. The number of summed pulses was ca. one thousand per rise time.
There is obviously no difference between nuclear recoil and $\gamma$–ray pulses within the timing resolution of Ge–detectors. The mean $\gamma$–pulses are smoother because of the higher statistics of the data accumulated with the $\gamma$–ray source. We conclude that it is not possible to differentiate between $\gamma$–ray and Ge–recoil events by means of the pulse shape.
Thus a relative background suppression method based on pulse shape analysis, as used for NaI scintillators , will not be applicable to Ge–detectors. Consequently, one can calibrate the pulse shape analysis for nuclear recoils with $\gamma$–ray sources. This is an easily handled method which needs no sophisticated experiments and avoids the risk of activating the detector as well as radiation damage.
## 4 Comparison of pulse shapes from microphonic events with $`\gamma `$–interaction pulses
When placed in a low level environment, germanium detectors are very sensitive to microphonic noise. Microphonism constitutes one of the main limitations of germanium detectors in the low energy region and renders the evaluation of low energy spectra ambiguous. The usual way of discriminating against microphonism is to use the timing information of each event . The time distribution of all events in the spectrum is computed and cuts are set on the number of events per certain time interval. This method makes the assumption that microphonic events occur in bursts and leads to run time losses of up to 40%. A method of analysing the pulse shape of each individual event would be much more efficient.
To record a library of typical microphonic pulses we used the small p–type natural Ge–detector of the HDMS experiment situated in the Gran Sasso Underground Laboratory. The detector has an active mass of 202 g and is situated in a low level cryostat with 60 cm distance between FET and preamplifier. To reach a low energy threshold and record the pulse shape of each event we built up a special electronics and trigger system. The preamplifier energy signal is divided, amplified with 2$\mu$s and 4$\mu$s shaping times and measured by 13 bit ADCs. The ADCs deliver fast, so-called peak-detect signals, which are subsequently used for trigger purposes. The faster 2$\mu$s shaped signal serves as a stop signal for the 250 MHz flash ADC which records the pulse shapes. However, the 2$\mu$s shaped signal yields a worse energy resolution and thus a higher energy threshold because of the remaining higher noise level of the baseline. The best energy resolution (1.87 keV at 1332 keV) and threshold (2.5 keV) are obtained using the 4$\mu$s shaping. The preamplifier’s timing output is divided into four branches, then differentiated, integrated and amplified in timing filter amplifiers with different time constants. The differentiation and integration time constants are (50 ns, 50 ns), (100 ns, 100 ns), and (200 ns, 200 ns). The signals are amplified in two different ways to record both low and high energetic pulses. For the purposes of this paper the (50 ns, 50 ns) shaped pulses are most suitable. The obtained pulse shapes are recorded with flash ADCs.
We recorded microphonic pulse shapes with energies up to 60 keV. $\gamma$–ray pulses with comparable energies were measured with an EuTh source. Fig. 5 shows examples of both types of pulse shapes with the same energies. On the left side are microphonic pulses; $\gamma$–ray pulses are on the right side. The patterns of the two kinds of pulses are clearly different and it is obvious that microphonic pulses are not like the baseline noise which is present in the microphonic-free $\gamma$–ray pulses. For a more quantitative comparison between microphonic and $\gamma$–ray pulses we suggest several discrimination methods. One might analyse the power spectra of the pulses, compute the second derivative and count the number of extrema, or compute the integrated signal. Which of the above-mentioned methods, or which combination of them, will deliver the highest rejection efficiency and will be applied has yet to be seen. Moreover, the characteristics of microphonic pulses will depend on the individual detector and its operational environment. Thus it is not reasonable to further investigate the different methods in this paper.
## 5 Conclusion and Outlook
Pulse shapes of recoil events from neutron scattering inside a germanium detector were collected for the first time. The measured ionization efficiencies of recoiling germanium atoms in germanium are in good agreement with the theory of Lindhard and earlier measurements. We use this confirmation as a cross-check for our sample of nuclear recoil pulse shapes. A difference from the pulse shapes of $\gamma$–ray interactions has not been found. For a calibration of the nuclear recoil pulse shape we confirm the reasonable practice of using $\gamma$–ray sources instead of neutrons as a calibration standard. We found instead a relevant difference between nuclear recoil pulse shapes ($\gamma$–pulse shapes) and microphonic pulses. Thus further development of an electronic noise reduction method for dark matter experiments is possible by measuring the pulse shape of each recorded event. This microphonic noise mainly obscures energy spectra for WIMP detection in the most interesting near–threshold energy region. A discrimination method against microphonics would eliminate one of the last systematic uncertainties for Ge–detectors. We are confident of applying such a method in our new dark matter experiment, the Heidelberg Dark Matter Search (HDMS) experiment , which starts operation during this year.
# 1 Introduction
## 1 Introduction
Any ACT telescope operating under the strict condition of no moonlight during observations can reach a theoretical duty cycle of 18% per year (Dawson and Smith, 1996) at a latitude of $40^{\circ}$. If this strict criterion is relaxed to observations under partial moonlight (e.g. up to 70% of the moon illuminated, i.e. a period of nine days before or after new moon), an increase of the duty cycle to 24% is possible. A further increase is possible when observations are extended into twilight time, when similar conditions exist. In view of recent reports of source variability, it is important to monitor a source as long as possible.
Furthermore, with the advent of the next generation of very large, low threshold energy imaging telescopes, any increase of the duty cycle will make the financial investment and scientific yield more attractive. The method should also have a rapid reaction time ($\approx 1$ minute) to enable the observation of GRBs (Gamma Ray Bursts). It is therefore necessary to investigate ways to increase the duty cycle of ACT telescopes in such a way that it can be easily realized without too great a loss in sensitivity.
Earlier, Pare et al. (1991) reported the use of solar blind photomultipliers (PMTs, with sensitivity limited to the UV range) to observe during moonshine. The approach proved successful but the threshold energy was increased by a factor of 3.5 with respect to measurements with normal PMTs during no-moon conditions. Although such an approach is possible, it is expensive and time consuming since a second camera must be available and should be interchanged with the normal one to operate during moonshine.
A second approach is to use a UV sensitive filter in front of normal PMTs to block out most of the scattered light from the moon. Such a system was successfully tested by Chantell et al. (1995). Operations were extended up to full moon and limited to positions more than $10^{\circ}$ from the moon. Successful detection of the Crab was possible after additional software cuts (apart from the normal supercuts). The energy threshold was 3.5 times the no-moon value and 10 times more observation time was needed to reach a specific significance. It is clear that a more modest approach is needed to limit the increase of threshold and time. Bradbury et al. (1996) suggest the use of wavelength shifters which could increase the UV sensitivity and could be used alone or in combination with the abovementioned filter system, partly counterbalancing the increases.
It is known from measurements in the U-band (300 to 400 nm), which represents the most sensitive wavelength band of ACT (Atmospheric Cerenkov Technique) telescopes, that the NSB (night sky brightness) increases by a factor of three to five (see Figure 1) during a half illuminated moon (compared to no-moon conditions), rising to a factor of 30 - 50 during full moon (Dawson and Smith, 1996; Schaefer, 1998). This increase depends on various factors including the telescope altitude, moon angle, zenith angle and atmospheric composition as well as the aerosol content. As most imaging telescopes are operating on a double trigger threshold (i.e. the hardware trigger which is determined by the fluctuations in the NSB, and a much higher image threshold (e.g. $\approx 30$ photo-electrons for the HEGRA CT1 telescope)), we expect that increases in the NSB of up to a factor of 10 will not have a marked effect on the quality of the produced images. A slight increase of the hardware threshold (e.g. through lowering of the PMT high voltage (HV)) due to moonlight does not imply significant changes of the image threshold and thus we expect no major changes in data analysis to occur when we observe during moonshine.
In an attempt to lower the energy threshold, Quinn et al. (1996) increased the HV of the PMTs by 40%, and the images produced in the camera were similar to those obtained under normal conditions. This is an indication that the normal supercuts analysis (to obtain an enhanced $\gamma$-ray signal, see Petry et al. (1996)) is robust to changes in the system gain, provided that the pixel response is uniform. The image parameters do, however, change when a UV filter system is used (Chantell et al., 1995). This is due to a change in the spectral composition of the Cerenkov light.
The above discussion shows that although various techniques were investigated, an effective and simple method to increase observation time is still lacking. In the following we report on such a technique and illustrate it with observations of the Crab Nebula and Mkn 501 with the HEGRA CT1 telescope.
## 2 Observations
### 2.1 Exploratory measurements
Since the differential spectra of the NSB, sun- and moonlight peak in the yellow to red region of the spectrum (e.g. Dobber, 1998), the bulk of this light is not registered by the blue sensitive PMTs used in the ACT. Furthermore, most imaging telescopes are equipped with Winston cones on their cameras, preventing most of the scattered light from atmospheric particles (both Rayleigh and Mie scattering) and the environment from entering the detection system. Scattered light from high altitude haze and ice crystals is also excluded since no observations are conducted under these conditions, as they make the data unreliable (shown e.g. by Snabre et al., 1998). By using an atmospheric extinction program (Schaefer, 1998) which includes Rayleigh and Mie scattering, it is clear from Figure 1 that, apart from an exclusion zone around the moon (varying from $20^{\circ}$ to $40^{\circ}$, depending on the illumination of the moon and the haziness of the sky), a relatively constant NSB level, as a function of zenith angle, may be expected.
All measurements were conducted with the HEGRA CT1 telescope with its 5 m<sup>2</sup> reflector and 127 pixel camera operating at a threshold energy of 1.7 TeV (Mirzoyan et al. (1994) and Rauterberg et al. (1995)). The usual hardware trigger of at least 2 tubes triggering at 15 photo-electrons was used with the low gain PMTs. The 10 stage EMI 9083 A PMTs are operated with only 8 stages and AC coupled fast amplifiers compensate for the reduced gain. This operational mode allows one to circumvent large and damaging anode currents in the PMTs from e.g. the NSB, as well as scattered moon- and sunlight (during dusk and dawn). The measurements were conducted during December 1996 whilst the moon was nine days old ($-12.5$ visual magnitude, 70% illumination). This implied an increase in the NSB by a factor of 20 (see Figure 1). The telescope was pointed in a direction $90^{\circ}$ away from the moon. Without any adjustment to the PMTs, the accidental trigger rate (ATR) and the average PMT current increased as expected.
The HV of the PMTs, and thus the system gain, was lowered to ensure minimal PMT fatigue and to lower the accidental trigger rate (ATR). The small signals causing the increased ATR disappeared rapidly and, at a HV reduction of 4%, the ATR was down to a manageable 0.09 Hz and the average PMT current was 8 $\mu$A. The third magnitude star $\zeta$ Tauri was clearly visible in the camera and cosmic ray events could easily be recognised against a low background. The results of laser calibration runs assured us that the individual pixels were responding linearly. This was borne out by off-line analysis which shows uniform triggering throughout the camera. The raw trigger rate was 65 - 70% of that under no-moon conditions, indicating an estimated increase in the energy threshold from 1.7 to 2.4 TeV. From the gain characteristics of our PMTs ($\Delta g/\Delta U=2/140$V at the operational voltage of 1080 V, averaged over all PMTs), we calculate a gain reduction of 25% for a HV reduction of 4%. This, in turn, resulted in a 40% flux reduction for a power law spectrum with $\alpha=1.6$, which is in good agreement with the observed value of 38%. It should be noted that neither is the PMT current exactly proportional to the NSB photon flux (due to non-linear gain effects and base-line shifts caused by AC coupling), nor is the trigger rate directly proportional to the PMT gain.
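The quoted consistency check can be reproduced in a few lines (our sketch; it assumes the threshold scales inversely with the PMT gain and an integral spectrum $\Phi(>E)\propto E^{-1.6}$, as stated in the text):

```python
gain_factor = 1.0 - 0.25               # 25% gain reduction for a 4% HV reduction
threshold_factor = 1.0 / gain_factor   # threshold rises as the gain drops
alpha = 1.6                            # integral spectral index
rate_factor = threshold_factor ** (-alpha)
print(rate_factor)                     # ~ 0.63, i.e. a ~37% rate reduction
print(1.0 - rate_factor)               # cf. the observed 38% and the quoted ~40%
```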
With the telescope apparently operating normally, it was pointed gradually closer to the moon. No significant changes in the ATR or PMT currents could be detected up to $30^{\circ}$ from the moon, where the NSB started to increase rapidly, in accordance with Figure 1. These preliminary measurements indicated that the telescope could be operated over a large region of the sky without any marked influence from the moon.
Further measurements showed that the abovementioned operating conditions can be maintained until the moon is 85% illuminated (11 days before and after new moon), provided that the moon is not approached closer than $20^{\circ}$. From Figure 1, it is therefore clear that the camera can handle NSB increases of up to a factor of $\approx$50.
### 2.2 Crab observations
Under the operating conditions described above, we observed the Crab Nebula (the only ACT source with a constant flux) during five nights under varying moon conditions (moon age from 5 to 9 days, approaching the moon itself to within $`22^{\circ}`$). Only data at zenith angles smaller than $`30^{\circ}`$ were used (see Table 1 for further details).
A total of 11.7 hours of observations on the Crab was analysed. Applying normal supercuts (Petry et al., 1996) resulted in a significance S (in standard deviations) of 0.8$`\sqrt{t}`$, compared to 1.3$`\sqrt{t}`$ (with $`t`$ in hours) under no-moon conditions. Reaching the same significance as under no-moon conditions therefore requires roughly $`(1.3/0.8)^2\approx 2.6`$ times the exposure; since this observing time would otherwise be lost entirely, the trade-off is clearly worthwhile. The fact that our significance, S, is smaller under moon conditions indicates that the parameters for $`\gamma `$-ray selection do change, pointing to the need for a software optimisation of the analysis of moonshine (MS) data.
In Figure 2 the ALPHA-distribution is shown. It is similar to (though flatter than; see Section 2.3) that observed from other sources under normal dark moon conditions. To determine the $`\gamma `$-ray excess, we used 80 h of background data compiled over the previous two years by CT1 (Petry et al., 1997). The justification for this procedure stems from the excellent agreement between the ON- and OFF-data for ALPHA $`>20^{\circ}`$. It is, however, essential that simultaneous background measurements be made to establish any effect of the increased NSB on the background (see Section 2.3). To check for such possible biases, the number of expected background events was determined, in a second approach, from the region $`20^{\circ}<ALPHA<80^{\circ}`$ of the ON-data instead of the OFF-data (expected background events: N = N<sub>on</sub>($`20^{\circ}<ALPHA<80^{\circ}`$) / 6, the factor 6 being the ratio of the widths of the two ALPHA regions). With this method a slightly more significant result was obtained (see Table 1).
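This second background estimate can be sketched as follows; the numbers are those of Table 1, except the event count in the wide ALPHA band, which is inferred from the quoted expectation (6 x 68):

```python
import numpy as np

# Sketch of the ON-data background estimate described above.
width_ratio = 10.0 / 60.0     # (ALPHA < 10 deg) / (20 deg < ALPHA < 80 deg)
n_on_band   = 408             # ON-events with 20 deg < ALPHA < 80 deg (inferred)
n_on        = 123             # ON-events with ALPHA < 10 deg (Table 1)

n_b    = width_ratio * n_on_band      # expected background: ~68 events
excess = n_on - n_b                   # ~55 excess events
sigma  = np.hypot(11.0, 9.0)          # uncertainties as quoted in Table 1
print(f"excess = {excess:.0f} +/- {sigma:.0f} "
      f"({excess / sigma:.1f} sigma)")   # ~4 sigma, cf. Table 1
```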
The resulting $`\gamma `$-ray rate of 4 h<sup>-1</sup> is 62% of the 6.4 h<sup>-1</sup> seen from the Crab by the HEGRA CT1 telescope during the same period under normal, no-moon conditions at a threshold of 1.7 TeV. This should be compared with only 7% of the no-moon rate reported by Chantell et al. (1995) with their filter system. Our technique thus incurs a smaller increase in threshold and can easily be adopted by any imaging telescope to increase the exposure of sources.
### 2.3 Mkn 501 observations
From March to September 1997 the AGN Mkn 501 showed strong, variable $`\gamma `$-ray emission at an average flux twice that of the Crab (see e.g. Kranich et al., 1997; Protheroe et al., 1998). Having proven that the moonshine technique works, we used this event to increase our exposure of the source and, at the same time, to investigate moonshine observations more extensively. At the end of April 1997 the nominal operating HV of the PMTs was increased by 6% in order to re-establish the sensitivity of 1994; this adjustment was needed because of normal ageing of the PMTs, which had reduced the gain by 20-30% after two years of operation.
With the experience of the Crab observations, we adopted a more conservative and refined observing strategy for the moonshine observations: the PMTs were run at nominal voltage up to a 20% illuminated moon (3 nights), with a 6% reduction in HV up to 70% illumination (5 nights), a 9% reduction up to 90% illumination (2 nights) and a 13% reduction up to 95% illumination (one night). This scaling of the HV reduction follows the increase of the NSB with increasing moon luminosity and excludes only the night of full moon. The strategy is quantified in Table 2, together with the expected threshold energies for the various HV settings. A limit on the average PMT current of 12 $`\mu `$A (20 $`\mu `$A maximum) was set; a further HV reduction was made as soon as this limit was reached, which occurred several times within a single night (e.g. MJD 50615 and 50641 in Figure 4) as the source approached the moon.
With this observing strategy we increased the total exposure of the source by 56% compared to the normal dark moon observations. The final data set consists of 28 nights of MS data, 41 nights with both MS and dark moon data (making a comparison possible) and 52 nights with only dark moon data. Owing to these additional observations, the Mkn 501 data set of the HEGRA CT1 telescope is the most complete of all ACT observations of this event (see Table 2 for details).
The most important characteristics of the observations are summarised in Table 2, from which the following conclusions may be drawn: (i) The quality factor Q, describing the $`\gamma `$/hadron separation capability,
$$Q=\frac{N_{on}/T}{\sqrt{N_b/T}}$$
(with $`N_{on}`$ and $`N_b`$ respectively the ON-source and expected background events with ALPHA $`<10^{\circ}`$), decreases with decreasing HV. This is understandable since the recorded images become smaller with increasing HV reduction, owing to the higher tail cuts required by the increased PMT noise caused by moonlight. Furthermore, the moonshine produces additional noise which cannot be filtered out with the normal supercuts analysis. (ii) Comparing the two sets of HV settings which contain both dark moon and MS data, it is clear that Q is the same, assuring us that the additional light due to moonshine does not have a marked influence on the $`\gamma `$/hadron separation. (iii) The Q-value of the Crab observations fits the general dependence of Q on the HV reduction. (iv) The background rate at nominal HV was 20% higher with moonlight than without; this increased to 55% at a 6% HV reduction. It can therefore be inferred that this rate will continue to increase with larger HV reductions, contributing more and more to the background and possibly diluting the signal. Care should therefore be taken with HV reductions larger than 10%, and Monte Carlo studies are needed to investigate the effect of moonlight on the normal supercuts analysis.
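As a consistency check, the quality factor can be evaluated directly from the entries of Table 2; a minimal sketch:

```python
import numpy as np

# The quality factor defined above, evaluated for two rows of Table 2
# (all numbers are taken directly from the table).
def quality_factor(n_on, n_b, t_hours):
    """Q = (N_on / T) / sqrt(N_b / T)."""
    return (n_on / t_hours) / np.sqrt(n_b / t_hours)

print(f"Q(dark moon, nominal HV) = {quality_factor(5720, 1187, 183.0):.1f}")  # 12.3
print(f"Q(moonshine, -6% HV)     = {quality_factor(2151, 927, 74.9):.1f}")    # 8.2
```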
In Figure 3 the ALPHA-distributions for 244 h of dark moon observations and 134 h of moonshine observations are shown, divided into six subsets according to the applicable HV setting. To maximise the source exposure, we used as OFF-source data a sample taken during dark nights, as discussed in Section 2.2. We also recorded a small sample of OFF-data during moonshine, which was in good agreement with the ALPHA-distribution of the dark night OFF-source data. Comparing the various panels, we conclude the following: (i) The background region (ALPHA $`>20^{\circ}`$) has the same shape in all cases, independent of HV reduction or the presence of moonlight. (ii) Comparing Figure 3(a) with 3(b), as well as 3(c) with 3(d), an increase in the background rate (i.e. ALPHA $`>20^{\circ}`$) is evident as soon as moonlight contributes to the NSB. (iii) It is clear from all six panels that the shape of the ALPHA-excess becomes flatter with decreasing HV. This is attributable to additional noise in all the camera pixels due to moonlight. The effect is confirmed by comparing Figure 2 with Figure 3(f): in both cases the flattening of the ALPHA-excess is similar, illustrating that the flattening is mainly determined by the addition of moonlight to the detector and not by the fact that the Crab has a lower flux than Mkn 501.
We therefore conclude that the supercuts analysis is robust enough for observing during moonshine in the way described. Care should however be taken when the HV is reduced by more than 10%, due to low Q-values. This excludes observations during the last two to three nights before full moon.
Figure 4 shows the light curve for Mkn 501, including the MS/twilight data, up to zenith angles of $`60^{\circ}`$. The fluxes should be considered preliminary due to a shortage of Monte Carlo data at large zenith angles. The errors for the MS data are generally larger because (a) the MS measurements were mostly of shorter duration, and (b) for a coherent presentation the integral fluxes were calculated above 1.5 TeV, i.e. the MS data were extrapolated using a power law coefficient of $`-1.5`$ and assuming the threshold energies in Table 2. During the 216 days of the multiple flare we were able to collect data on every night with clear weather, excluding only the nights of full moon. This is a good example of the value of MS observations.
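The extrapolation in (b) can be sketched as follows; the integral power law and the Table 2 threshold energies are the stated inputs, and the flux value below is only a placeholder:

```python
# Sketch of the flux extrapolation used for Figure 4, assuming an
# integral power law F(>E) ~ E**(-1.5).
def flux_at_1p5_tev(flux_above_eth, e_th_tev):
    """Extrapolate an integral flux measured above e_th down to 1.5 TeV."""
    return flux_above_eth * (1.5 / e_th_tev) ** (-1.5)

# e.g. a moonshine run taken at a 9% HV reduction (E_th = 2.4 TeV, Table 2):
print(flux_at_1p5_tev(1.0, 2.4))   # ~2.0, i.e. the extrapolated flux roughly doubles
```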
During 39 nights we recorded both dark moon and MS data, allowing a comparison between the fluxes for the resulting 57 pairs of moon/no-moon data. Although Mkn 501 showed flux variability on time scales of a few hours, the procedure of selecting pairs from the same night minimises this effect. From Figure 5 a good agreement is apparent, with a correlation coefficient of 0.81. From the fitted line a gradient of 0.63 was calculated; this value increases to 0.83 when only data at the nominal HV are analysed. These two values should be compared with the expected value of unity. The fact that the MS data show larger fluxes than the corresponding no-moon data may be attributable to an overestimation of the effect of the HV reduction on the energy threshold, to the shortage of Monte Carlo simulations at large zenith angles, as well as to the use of non-optimised, normal supercuts. This effect is under further investigation.
### 2.4 Observations during Twilight
With the successful implementation of MS measurements, it was realised that additional observations might be possible during twilight. The conventional practice of ACT telescopes is to start observing after astronomical twilight (when the sun is more than $`18^{\circ}`$ below the horizon). Again applying the atmospheric model of Schaefer (1998), it became clear that observations could indeed be extended into twilight. Depending on the pointing direction of the telescope, observations can start as early as nautical twilight (the sun $`12^{\circ}`$ below the horizon). This can add 20 to 40 minutes of observing time, depending on the latitude of the observatory.
It was decided to make single runs, lasting 20 minutes, just before astronomical twilight, at the nominal HV of the PMTs. Ten such observations were made, with a total exposure of 200 minutes. The results, obtained during the Mkn 501 campaign, are also included in Figure 3. The correlation between these measurements and the data collected immediately afterwards, under normal dark conditions, is excellent. We are therefore confident in adding this twilight time as a further potential observing slot for urgent measurements, e.g. of GRBs.
## 3 Prerequisites for operation at high background light levels
The following precautions have to be taken to minimise the impact of moonlight/twilight and to prevent damage to the PMTs or the production of unreliable data: (i) The PMTs should be operated at a gain of a few times $`10^4`$, followed by low noise AC-coupled preamplifiers; for this purpose 6 - 8 stage PMTs would be ideal. Even under severe background light the anode currents will then be far below the critical values for fatigue and fast ageing. A byproduct of low gain is a lower number of positively charged ions liberated from the last stages; these ions might otherwise be accelerated onto the cathode, creating large pulses and in turn accidental triggers (Mirzoyan and Lorenz, 1996). (ii) For prolonged moonshine operation, care should be taken to use PMTs with a good vacuum ($`<10^{-6}`$ Torr). This prevents permanent damage to the photo-cathode and first dynodes by ions. (iii) Due to the high noise rate the ATR will increase. Besides gain reduction, one can minimise the coincidence overlap time or introduce higher level fast trigger systems such as so-called next-neighbour triggers; the latter have recently been introduced successfully in the HEGRA telescopes. In order to minimise the noise contribution to the data (and the image analysis), the gating time of the pulse height recording system should be minimised; for the observations presented above we used 30 ns gates for the signal-recording ADCs. (iv) Suppression of nearby scattered light can be achieved by using optimised light collectors (Winston cones) in front of the PMTs, so that only light coming from the mirror area (plus a small safety margin) is collected. This is also a standard feature of the HEGRA telescopes. (v) Care should be taken, at large moon angles ($`>90^{\circ}`$), that no direct illumination of the PMTs by the moon occurs. (vi) In order to minimise scattered light from the telescope frame, it should be painted matte black.
Observations in the presence of moonlight increase the PMT anode currents and will therefore accelerate the ageing of the PMTs. The dominant effect is that, due to the intense electron bombardment, the gain of the last dynodes decreases. For the PMTs used here, with CuBe dynodes, it was found that the gain drops by a factor of about 2 for an integrated anode charge of 5 Coul per 10 mm<sup>2</sup> of dynode area. Very similar values were found for the 8<sup>′′</sup> PMTs of the wide angle Cherenkov matrix AIROBICC (Karle, 1995), which integrate the NSB over about 1 sterad and whose dynode area is about a factor of 10 larger than that of the 9083A PMTs. It should be noted that the reduction factor fluctuates considerably from PMT to PMT and is very likely different for PMTs from different manufacturers. From about one year of operation under different light levels it was found that the gain change and trigger rate are strongly correlated, and that the HV-gain correlation can be used to re-establish the gain after ageing in a predictable manner. As mentioned above, to minimise ageing the PMTs should be operated at the lowest possible anode current, i.e. gain.
For the current operation mode of the CT1 camera it can be predicted that the PMTs will have a lifetime exceeding 15 years when operating for about 500 h per year with a half illuminated moon and a source separation of at least $`20^{\circ}`$.
Note that the above arguments also apply, to a lesser extent, to observations around the galactic centre from southern locations: the central area of our galaxy is at least 10 times brighter than the dark celestial regions outside the Milky Way.
## 4 Conclusions
A simple technique with a fast reaction time, which can be used with imaging ACT telescopes to increase the average observation time of an object, was successfully tested. The only adjustment is a uniform decrease of a few percent in the HV of the PMTs (0 to 13% in this case). This can be realised within seconds and does not affect the normal ageing of the PMTs. Any telescope which operates at a moderately high PMT gain of a few times $`10^4`$ may use this technique. Increases of the NSB by up to a factor of 50 can be handled. The technique increases the threshold energy of the telescope by up to a factor of 2.2, depending on the HV reduction. It allows a more effective use of observation time (e.g. the early nights during the waxing moon). Further investigations and refinements are under discussion, and it is believed that this technique could be improved by optimising the supercuts for the various conditions discussed in this paper.
## Acknowledgements
This work is supported in part by the German Ministry for Education and Research, BMBF, and the Spanish research agency CICYT. The HEGRA Collaboration also thanks the Instituto de Astrofísica de Canarias for permission to use the site and for its continuing support. One of us, C. Raubenheimer, thanks the Max Planck Society and the Oppenheimer Trust for financial support to carry out this work.
## References
Bradbury, S.M., et al. (1996), in Towards a Major Atmospheric Cerenkov Detector-IV (Padova), 182
Chantell, M., et al. (1995), 24th Int. Cosmic Ray Conf. (Rome), 2, 544
Dawson, B. and Smith, A. (1996), Technical Report GAP-96-034, University of Adelaide
Dobber, M.R. (1998), ESA preprint
Karle, A., et al. (1995), Astropart. Phys., 3, 321
Kranich, D., et al. (1997), Proc. 4th Compton Symp. (Williamsburg), 2, 1407
Mirzoyan, R., et al. (1994), NIM A, 351, 51
Mirzoyan, R. and Lorenz, E. (1996), in Towards a Major Atmospheric Cerenkov Detector-IV (Padova), 209
Pare, E., et al. (1991), 22nd Int. Cosmic Ray Conf. (Dublin), 1, 492
Petry, D., et al. (1996), Astron. & Astrophys., 311, L13
Petry, D., et al. (1997), private communication
Protheroe, R.J., et al. (1998), 25th Int. Cosmic Ray Conf. (Durban), 9, in press
Quinn, J., et al. (1996), in Towards a Major Atmospheric Cerenkov Detector-IV (Padova), 341
Rauterberg, G., et al. (1995), 24th Int. Cosmic Ray Conf. (Rome), 3, 412
Schaefer, B.E. (1998), Sky & Telescope (May), p. 57 (submitted to Publ. Astron. Soc. Pacific)
Snabre, P., Saka, A., Fabre, B., and Espignat, P. (1998), Astropart. Phys., 8, 159
Table 1: Moonshine Observations of the Crab
(December 1996 - February 1997)
| | OFF-region | Dark Night |
| --- | --- | --- |
| | $`20^{\circ}<ALPHA<80^{\circ}`$ | OFF-data |
| Average zenith angle ($`^{\circ}`$) | 15.43 | 15.43 |
| Observation time (min) | 703 | 703 |
| Raw number of events | 21230 | 21230 |
| Events after all cuts except ALPHA | 668 $`\pm `$ 26 | 668 $`\pm `$ 26 |
| Events after all cuts (ALPHA $`<10^{\circ}`$) | 123 $`\pm `$ 11 | 123 $`\pm `$ 11 |
| Expected background events | 68 $`\pm `$ 9 | 81 $`\pm `$ 9 |
| Excess events | 55 $`\pm `$ 14 | 42 $`\pm `$ 13 |
| Significance of excess ($`\sigma `$) | 4.0 | 3.4 |
| Excess rate (h<sup>-1</sup>) | 4.7 $`\pm `$ 1.2 | 3.6 $`\pm `$ 1.2 |
Table 2: Moonshine/Twilight (MS) and Dark Moon (DM) conditions and observations of Mkn 501 (Zenith $`<59^{\circ}`$, March - September 1997)
| $`I_{moon}`$ | NSB($`30^{}`$) | $`\mathrm{\Delta }(HV)`$ | $`E_{th}`$ | $`N_{on}`$ | $`N_b`$ | Exposure | Signifi- | Q |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (%) | (nLambert) | (%) | (TeV) | | | T (h) | cance ($`\sigma `$) | |
| | 54 | 0 (DM) | 1.2 | 5720 | 1187 | 183.0 | 51.2 | 12.3 |
| $`<20`$ | 94 | 0 (MS<sup>1</sup>) | 1.2 | 647 | 150 | 17.9 | 18.6 | 13.0 |
| | | -6 (DM) | 1.7 | 1306 | 407 | 60.9 | 22.3 | 8.3 |
| 20-70 | 840 | -6 (MS) | 1.7 | 2151 | 927 | 74.9 | 20.1 | 8.2 |
| | | -9 (MS) | 2.4 | 427 | 247 | 25.9 | 9.1 | 5.9 |
| 70-90 | 1800 | -9 (MS<sup>2</sup>) | 2.4 | 123 | 81 | 11.7 | 3.4 | 4.0 |
| 90-95 | 2900 | -13 (MS) | 3.4 | 128 | 79 | 17.0 | 4.0 | 3.5 |
<sup>1</sup> = including 3.3 h twilight observations;
<sup>2</sup> = Crab Nebula observations
$`I_{moon}`$ = moon illumination;
NSB ($`30^{}`$) = visual NSB $`30^{}`$ from moon;
$`\mathrm{\Delta }`$(HV) = PMT voltage reduction
$`E_{th}`$ = threshold energy as calculated from PMT gain characteristics
$`N_{on}`$ = total events with ALPHA $`<10^{\circ}`$;
$`N_b`$ = expected background events with ALPHA $`<10^{\circ}`$ (see text)
Figure 1: The visual (300 - 900 nm) Night Sky Brightness as a function of angular separation between the moon and the observed object (moon angle), and the phase of the moon (expressed in days after new moon). The calculations used the model of Schaefer (1998).
Figure 2: The ALPHA-distribution for the 11.7 hours of moonshine observations of the Crab Nebula. The data were obtained with the HEGRA CT1 telescope at zenith angles smaller than $`30^{\circ}`$. The filled symbols represent the Crab measurements and the open symbols the background as obtained from previous observations. An ALPHA-cut at $`10^{\circ}`$ resulted in a significance of 3.4 $`\sigma `$ at a threshold of $``$2.4 TeV.
Figure 3: ALPHA-distributions for the various categories (as indicated) of Mkn 501 observations during 1997 with the HEGRA CT1 telescope. Full symbols represent the actual ON-source measurements and the open symbols are normalised OFF-source observations from Petry et al. (1997). The normal supercuts analysis was applied to the raw data. A flattening of the ALPHA-excess, with increasing HV reduction, is evident.
Figure 4: The 1997 light curve of Mkn 501 (preliminary), as observed by the HEGRA CT1 telescope at an energy threshold of 1.5 TeV. Open symbols represent moonshine/twilight observations whereas full symbols represent dark moon measurements. For the flux extrapolation, a power law coefficient of $`-1.5`$ has been used.
Figure 5: The correlation between dark moon and moonshine/twilight fluxes at 1.5 TeV for the 57 pairs of measurements on Mkn 501, which occurred during the same night.
# A Robust Classification of Galaxy Spectra: Dealing with Noisy and Incomplete Data
## 1 Introduction
The next generation of spectroscopic surveys, such as the Sloan Digital Sky Survey (SDSS) and the 2 degree Field redshift survey (2dF; Maddox et al. 1998) will provide a wealth of information about the spectral properties of galaxies in the local and intermediate redshift universe. For the first time we will have high signal-to-noise spectrophotometry of large, systematically selected samples of galaxies. The quality of these data will be such that we will not be restricted to measuring just redshifts but will be able to extract the spectral characteristics of individual galaxies. If we can define robust methods for the classification of galaxy spectra we will be able to study the evolution of the spectral properties of galaxies and relate these observations to the physical processes that drive them.
In the light of this a number of statistical techniques have been developed for automated classification of galaxies based on spectral continuum and line properties. One of the most promising of these methods has been the Principal Component Analysis or Karhunen-Loève transform (Karhunen 1947, Loève 1948). The technique has been successfully applied to the classification of galaxy spectra (Connolly et al. 1995, Folkes et al. 1996, Sodre and Cuevas 1997, Bromley et al. 1998, Galaz and de Lapparent, 1998), QSOs (Francis et al. 1992) and stars (Singh et al. 1998).
The underlying basis behind these techniques is that a galaxy spectrum can be described by a small number of orthogonal components (eigenspectra). These eigenspectra are found to correlate strongly with the physical properties of galaxies, such as the star formation rate or age of the stellar population. By projecting a galaxy spectrum onto these orthogonal components we have a measure of the relative contributions of these different stellar types and consequently an estimate for the spectral type of the galaxy. Since the spectral energy distribution of galaxies is the sum of the SEDs of their stellar population, such an approach is quite natural and should give a reasonable description of the galaxies.
Each of these approaches makes the underlying assumption that the galaxy spectra are perfect. In the real world, where surveys will cover a wide range in redshift and luminosity, this will not be the case. The ensemble of galaxy spectra will cover a broad range of rest wavelengths, have variable signal-to-noise, and will contain spectral regions affected by sky lines or artifacts in the spectrographs. This will result in spectra whose wavelength coverage will only be a subset of that of the eigenspectra (i.e. the data will contain missing spectral regions or gaps within the spectra). Applying the standard techniques whereby we project the galaxy spectra onto an eigenbasis with a simple scalar product will introduce biases in to the galaxy classification schemes.
In this paper we address these issues. We extend the KL analysis of spectra to incorporate the effects of gappy data and variable signal to noise. In section 2 we provide the mathematical basis for our analysis and in sections 3 and 4 we show how we can provide an optimal interpolation of galaxy spectra over the missing data. Sections 5 and 6 demonstrate how eigenspectra can be built from noisy and incomplete spectra. Finally in Section 7 we discuss the application of these techniques to the general case of incomplete data and how they might be used in astrophysical problems.
## 2 An Orthogonal Expansion of Censored Data
It has been known for some time that galaxy spectra can be represented by a linear combination of orthogonal spectra (i.e. eigenspectra) and that these eigenspectra can be used for galaxy classification. In an earlier paper (Connolly et al. 1995) we described the technique for applying the Karhunen-Loève transform (KL; also known as Principal Component Analysis) to derive these eigenspectra. In this paper we will discuss the application of the KL transform to the classification of galaxies when we have gaps in the spectra (e.g. due to the removal of sky lines, bad regions on a spectrograph’s CCD camera or galaxies with different rest wavelength coverage).
From the KL transform we can construct an orthonormal eigenbasis such that each galaxy spectrum, $`f_\lambda `$, can be represented as a linear combination of eigenspectra, $`e_{i\lambda }`$.
$$f_\lambda =\sum_i a_ie_{i\lambda },$$
(1)
where $`\lambda `$ represents the wavelength dimension, $`i`$ is the number of the eigenspectrum and $`a_i`$ are the coefficients of the linear combination. If we project a galaxy onto this eigenbasis we can define the set of linear expansion coefficients, $`a_i`$, that fully describe a galaxy spectrum. The eigenspectra $`e_{i\lambda }`$ are defined to be orthogonal such that, using a simple scalar product,
$$\sum_\lambda e_{i\lambda }e_{j\lambda }=\delta _{ij},$$
(2)
where the sum in $`\lambda `$ extends over a pre-defined wavelength range $`(\lambda _1,\lambda _2)`$. The eigenspectra are ranked by decreasing eigenvalues, which in turn reflect the statistical significance of the particular eigenspectrum. For the details of the construction of the eigenbasis see Connolly et al. (1995).
We have shown that, for galaxy spectra, the majority of the information present within the data is contained within the first 3-7 eigencomponents (Connolly et al. 1995). Consequently the expansion of each galaxy spectrum in terms of the eigenbasis can be truncated (i.e. we can retain most of the information present within the galaxy data using only a handful of components). This truncated expansion represents an optimal filtering of the data in the least squares sense. As such it provides a very efficient mechanism for describing galaxy spectra.
From this series of expansion coefficients, whether truncated or not, we can construct a very natural classification scheme. As we have shown (Connolly et al. 1995), the first three coefficients correlate with the amount of a galaxy spectral energy distribution that is dominated by an old stellar population or by active star formation. These coefficients form an approximately one parameter sequence, well correlated with the ages of galaxies and the distribution of these coefficients can be used to separate galaxies into distinct classes (e.g. Folkes et al. 1996, Bromley et al. 1998).
The underlying basis behind these analyses is that a galaxy can be represented as a linear combination of orthogonal spectral components. The orthogonality of the system is important as it means that the expansion coefficients are uncorrelated and, therefore, provide a very simple and general way to separate galaxies into distinct classes. The benefits of the orthogonal expansion only hold if the eigenspectra and the galaxy spectrum are constructed over the same wavelength range. If, as found in real spectra, there are regions of missing data within a galaxy spectrum, due, for example, to the presence of sky lines or to the fact that the rest wavelength coverage of the galaxy spectra differs from that of the eigenspectra, then the orthogonality of the system no longer holds. This loss of orthogonality can be understood if we consider Figure 1. The eigenbasis (as shown by the first 3 eigenspectra) is clearly orthogonal over the full spectral range (the scalar product between the individual eigenspectra is zero). If we exclude those data points with $`\lambda <5000`$ Å the orthogonality no longer holds: the scalar products between the restricted eigenspectra are non-zero and the expansion modes become correlated. This means that the coefficients that would be derived by simply projecting a galaxy spectrum onto this censored basis would be biased. As the wavelength range over which the data can be defined as valid clearly varies as a function of redshift and spectrograph, comparison between the classification of different galaxy populations becomes extremely difficult.
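This loss of orthogonality is easy to demonstrate numerically; in the following sketch a random orthonormal basis stands in for real eigenspectra:

```python
import numpy as np

# Restricting an orthonormal set of "eigenspectra" to a sub-range of
# wavelength bins destroys their orthogonality.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(4500, 3)))
espec = q.T                                   # 3 orthonormal rows

print(np.round(espec @ espec.T, 3))           # identity over the full range

valid = np.arange(4500) >= 1500               # e.g. exclude lambda < 5000 A
print(np.round(espec[:, valid] @ espec[:, valid].T, 3))  # off-diagonals non-zero
```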
For the case of reconstructing faces from gappy data, Everson and Sirovich (1995) have shown that one can account for the gaps within the data and recover unbiased expansion coefficients. We extend here the analysis of Everson and Sirovich (1995) and Connolly and Szalay (1996) to the case of spectral data and generalize the problem to consider an arbitrary weighting of the spectra.
We consider the gappy galaxy spectrum, $`f_\lambda ^o`$, that we wish to project onto the eigenbasis, as consisting of the true spectrum (i.e. without gaps), $`f_\lambda `$, and a wavelength dependent weight, $`w_\lambda `$. The wavelength dependent weight will be zero where there are gaps within the data (corresponding to infinite noise), and $`1/\sigma _\lambda ^2`$ for the remaining spectral range. By applying this weight function we have a general mechanism by which we can down- or up-weight not just bad regions but also particular spectral features (e.g. emission lines) that we wish to emphasize within the data. It is worth noting that in the special case where we do not consider the effects of noise (i.e. the weight values are 0 or 1) then the wavelength dependent weight acts as a window function and $`f_\lambda ^o=w_\lambda f_\lambda `$. In the general case that we discuss below, where we include gaps and noise, the weight function is related to the true spectrum through the $`\chi ^2`$ minimization (Equation 3).
Given the relative weight of each spectral element we wish to derive a set of expansion coefficients that minimize the quadratic deviation between the observed spectrum, $`f_\lambda ^o`$, and its truncated reconstruction, $`_ia_ie_{i\lambda }`$, where the sum over $`i`$ extends to a small number of eigenspectra only. To do this we define the $`\chi ^2`$ statistic such that,
$$\chi ^2=\sum_\lambda w_\lambda \left(f_\lambda ^o-\sum_i a_ie_{i\lambda }\right)^2$$
(3)
and minimize this function with respect to the $`a_i`$’s. This gives the minimal error in the reconstructed spectrum, over the full range in $`\lambda `$, weighted by the variance vector, $`w_\lambda `$.
Solving for $`a_i`$ we get,
$$\sum_\lambda w_\lambda e_{j\lambda }\sum_i a_ie_{i\lambda }=\sum_\lambda w_\lambda f_\lambda ^oe_{j\lambda },$$
(4)
Defining $`M_{ij}=\sum_\lambda w_\lambda e_{i\lambda }e_{j\lambda }`$ and $`F_j=\sum_\lambda w_\lambda f_\lambda ^oe_{j\lambda }`$, this simplifies to
$$a_i=\sum_j M_{ij}^{-1}F_j.$$
(5)
Clearly $`F_j`$ represents the expansion coefficients that we would have derived if we had undertaken a weighted projection of the observed galaxy spectrum $`f_\lambda ^o`$ onto the eigenbasis (i.e. a biased set of coefficients), and $`M_{ij}^{-1}`$ tells us how the eigenbasis is correlated over the censored spectral range. If the weights were all equal (there was no region that was masked or of lower signal-to-noise) then Equation 4 would reduce to the simple scalar-product projection implied by Equations 1 and 2, and $`M_{ij}^{-1}`$ would become the identity matrix. As we introduce gaps into the spectra, the off-diagonal components of $`M_{ij}^{-1}`$ become more significant.
Therefore, by correcting for the correlated nature of the eigenbasis we can determine the values of the expansion coefficients, $`a_i`$ that we would have derived had we had complete spectral coverage and no noise within the observed spectra. As such they are independent (within the errors) of the wavelength range over which we observe a galaxy and can be used to classify galaxy spectra taken over a wide range in redshift and with differing signal-to-noise. This enables an objective comparison of galaxy spectral types using the complete spectral information and free from the wavelength dependent selection biases that may be present in existing analyses.
Associated with the corrected expansion coefficients $`a_i`$ we can define a covariance matrix. This measures the uncertainty in the coefficients due to the correlated nature of the eigensystem. It is straightforward to show that the covariance between the expansion coefficients is simply,
$$\mathrm{Covar}(a_i,a_j)=\langle a_ia_j\rangle -\langle a_i\rangle \langle a_j\rangle =\frac{1}{N}M_{ij}^{-1}$$
(6)
where $`N=\sum_\lambda 1/\sigma _\lambda ^2`$. The size of the uncertainty in the expansion coefficients, after the correction, is proportional to the degree to which the eigenbasis is correlated (as we would expect). From this analysis we therefore have both a correction for the effect of the gaps inherent within real spectra and a measure of the error on the derived values.
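The weighted projection of Equations 3-6 amounts to a few lines of linear algebra; a minimal sketch, in which the array names are placeholders for real data:

```python
import numpy as np

# Weighted projection of a gappy, noisy spectrum onto an eigenbasis.
# `espec` has shape (n_eigen, n_lambda); `spec` and `weight` have shape
# (n_lambda,), with weight = 1/sigma**2 and zero inside gaps.
def corrected_coefficients(spec, weight, espec):
    F = espec @ (weight * spec)        # biased coefficients F_j
    M = (espec * weight) @ espec.T     # M_ij = sum_l w_l e_il e_jl
    Minv = np.linalg.inv(M)
    a = Minv @ F                       # corrected coefficients (Eq. 5)
    covar = Minv / weight.sum()        # Covar(a_i, a_j) (Eq. 6)
    return a, covar
```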
Given this approach we can not only derive the set of corrected coefficients for the classification of galaxy spectra, but we can also use these coefficients to reconstruct the regions of the spectra that are masked. This tells us that if a galaxy spectrum can be reproduced using a handful of components then the spectral features present within the data (e.g. the Balmer series of absorption lines) are correlated (as we would expect from the physics). Therefore, if we have sufficient spectral coverage to detect a feature in the spectrum we can predict the strengths of additional features where we have no data. The gappy KL analysis does this in a mathematically rigorous way, allowing the data themselves to define the inherent correlations.
## 3 Optimal Interpolation of Gaps in the Spectra
Section 2 outlines the basic mathematical and physical reasoning behind the classification of galaxy spectra in the case of gappy data. We now consider the application of these techniques to spectral data. In order to be able to test our technique, we create an eigenbasis using the GISSEL96 model spectral energy distributions of Bruzual and Charlot (1993). We use a simple stellar population model with an instantaneous burst of star formation at zero age and sample the model spectra from 0 to 20 Gyr. In total the sample contains 222 spectra covering the wavelength range 3500 Å to 8000 Å (designed to approximate the spectral coverage of the SDSS data). The choice of our particular Bruzual and Charlot model is somewhat arbitrary as we are only concerned with having a set of spectra that cover a wide range in age and for which we can control the uncertainties within the data.
We construct the eigenbasis as described in Connolly et al. (1995) for the Bruzual and Charlot data after normalizing all spectra to unit scalar product. The diagonalization of the correlation matrix is undertaken using the Singular Value Decomposition algorithms of the Meschach package. In Figure 1 we show the first three eigenspectra and in Figure 2 the corresponding sequence of eigenvalues. The size of the eigenvalue is directly related to the amount of variance (or information) contained within each of the eigenspectra. The eigenvalues decrease rapidly with the first three components containing 99.97% of the total system variance. By the tenth eigenspectrum the eigenvalue (or variance of the system contained within this spectrum) is at the level of $`10^4`$ of a percent. Using just the first three eigencomponents (i.e. truncating the expansion) we should, therefore, be able to reconstruct any given spectrum to an accuracy of better than 0.05%.
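A minimal sketch of this construction follows, with a random stand-in array in place of the 222 Bruzual and Charlot spectra; any SVD routine can substitute for the Meschach package used here:

```python
import numpy as np

# Build the eigenbasis: each row of `spectra` is one model spectrum,
# normalized to unit scalar product.  No mean is subtracted, so the
# first eigenspectrum is close to the mean spectrum.
spectra = np.random.default_rng(1).random((222, 4500))   # stand-in data
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)

U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
eigenspectra = Vt            # Vt[i] is the i-th eigenspectrum
eigenvalues = s**2           # variance carried by each component
print(eigenvalues[:3] / eigenvalues.sum())   # leading components dominate
```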
Considering these eigenspectra in turn, the first eigenspectrum is the mean spectrum and represents the ‘center of mass’ of the Bruzual and Charlot sample of galaxy spectra. The second eigenspectrum has the spectral shape of an O star and describes the star formation component of the galaxy spectral energy distribution. The third component is a mixture of an old G or K star stellar population (with a strong 4000 Å break) and an intermediate age A star population (with strong Balmer lines). From the distribution of eigenvalues (see above) a linear combination of these three stellar spectra can, therefore, reproduce the full range of the Bruzual and Charlot spectral energy distributions to a very high accuracy.
If we project a galaxy onto this eigenbasis the expansion coefficients tell us the relative contributions of each of these components. This provides not only a classification of the galaxy but also a means of reconstructing the underlying spectrum. As we can reproduce the galaxy spectra with a small number of eigenspectra we should be able to use these components to interpolate over regions without data.
For the case of real spectra we might expect the KL reconstruction to require more than just the handful of eigencomponents that we need for the synthetic data (e.g. to account for the distinct spectral signatures of the small number of AGN present within any spectroscopic survey). The techniques we will apply here should be equally applicable to real spectra given the provisos that the eigenbasis will provide a better reconstruction if it is built from similar types of galaxies and that the number of components required may be significantly larger (with the associated increase in computational resources).
### 3.1 Interpolation due to missing data
As we have shown in Section 2 a simple projection of a galaxy spectrum onto the eigenbasis will result in a biased set of expansion coefficients. If, however, we account for the gaps within the data, we can correct the expansion coefficients and use these values to estimate the underlying spectrum. In the following analysis we will consider the case of randomly positioned gaps within a galaxy spectrum. This is designed to simulate the effect of excluding spectral regions due to the position of sky lines or artifacts in a spectrograph. We initially assume that we know the underlying eigenbasis that describes the galaxy populations (in section 6 we will expand this analysis to construct the eigenbasis itself from gappy data) and we ignore the effect of noise.
In Figure 3 we take three representative spectra: a zero age spectrum, an intermediate age spectrum (0.16 Gyr) and an old stellar population (20 Gyr). Each of these spectra has been drawn from the sample of 222 galaxies in the Bruzual and Charlot sample described above. For each of these three spectra we mask the wavelength range 3800 Å to 4000 Å. We project each spectrum onto the eigenbasis over the spectral range 3500 Å to 8000 Å (excluding the masked region) and then correct the derived coefficients for the correlated nature of the eigenbasis (i.e. due to the masked regions). Figure 3 shows the reconstruction of the galaxy spectra within the masked region, when using 3 and 5 eigencomponents respectively. The solid line shows the true spectrum and the dotted line the reconstruction. To compare the accuracy of the reconstruction as a function of galaxy type we define an error that is independent of the overall galaxy flux: the rms deviation between the reconstructed and "true" spectra (over the masked region) when both spectra are normalized to unit scalar product.
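The masking experiment can be sketched as follows, reusing the corrected_coefficients() helper sketched in Section 2; the synthetic orthonormal basis and spectrum here are stand-ins for the real eigenspectra and models:

```python
import numpy as np

# Mask a spectral window, project with gap weights, and reconstruct.
wavelength = np.linspace(3500.0, 8000.0, 4500)
espec, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(4500, 3)))
espec = espec.T
spec_true = np.array([0.9, 0.4, 0.1]) @ espec        # lies in the basis

weight = np.ones_like(wavelength)
weight[(wavelength >= 3800.0) & (wavelength <= 4000.0)] = 0.0   # the mask

a, _ = corrected_coefficients(spec_true, weight, espec)
reconstruction = a @ espec            # interpolates across the mask

gap = weight == 0.0
rms = np.sqrt(np.mean((reconstruction[gap] - spec_true[gap]) ** 2))
print(f"rms error in the masked region: {rms:.2e}")  # ~0 by construction here
```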
For three eigencomponents the reconstruction works well for the 0 Gyr and 0.16 Gyr galaxy spectra. The normalized rms deviation between the true spectrum and the reconstructed data is only 0.0016. For the 20 Gyr model the reconstructed spectrum produces the correct features present within the data (i.e. the absorption lines present in the true spectrum are found in the reconstruction) but their relative amplitudes are inconsistent. Over the masked spectral range the deviation between the reconstructed and true spectrum for this 20 Gyr model is 0.018, a factor of ten worse than the younger stellar types. Given that we are deriving the interpolation based on three spectral components that are constructed over the full wavelength range 3500 Å to 8000 Å it is remarkable that we can reproduce the observed spectra to such high accuracy.
If we incorporate a further two eigencomponents ($`N_{eigen}=5`$) the reconstruction of the 20 Gyr model is substantially improved. The deviation between the true and reconstructed spectrum falls to only 0.0045. The other two spectral types also show improvement in the interpolation though the magnitude of this improvement is not as dramatic as for that of the 20 Gyr spectrum. The first three eigenspectra are, therefore, more sensitive to the spectral features (both continuum and absorption lines) of star forming and intermediate age galaxies. This is not entirely surprising as the Bruzual and Charlot models from which we construct the eigenbasis are dominated by these types of galaxies (over 135 of the 222 spectra come from galaxies with ages less than 1 Gyr). The eigenspectra will, therefore, be weighted more towards these younger galaxy spectra than to the more evolved stellar populations.
### 3.2 Interpolation due to the effect of redshift
While the interpolation of galaxy spectra across narrow spectral intervals may be seen as relatively straightforward a more challenging problem is how to extrapolate a galaxy spectrum. The need for this will arise if we wish to project a galaxy spectrum onto an eigenbasis where the galaxy’s wavelength coverage is only a subset of that of the eigenspectrum (e.g. the eigenspectra and galaxy are at different redshifts).
In Figure 4 we demonstrate that the correction for gappy data can equally apply to the case of extrapolating a galaxy spectrum as well as for simple interpolation. For the 0 Gyr, 0.16 Gyr and 20 Gyr galaxy spectra we exclude the wavelength range 7100 Å to 8000 Å and apply the KL reconstruction as described above. As before, the solid line shows the true spectrum and the dotted line the reconstructed spectrum. The left hand panel shows the reconstruction when using three eigencomponents and the right hand panel for five components.
The 0 Gyr, 0.16 Gyr and 20 Gyr spectra are all accurately reconstructed from three components. The rms uncertainties in the 0 Gyr, 0.16 Gyr and 20 Gyr spectra amount to 0.0016, 0.0006 and 0.001 respectively, although the 0 Gyr and 20 Gyr models are systematically offset by approximately 4% when using three eigencomponents. As found in Section 3.1, increasing the number of eigenspectra used in the reconstruction improves the resulting spectra for all three galaxy types; the most marked improvement occurs for the 0 Gyr and 20 Gyr models, where the deviation drops to less than 0.0002.
The results are, unsurprisingly, similar to those found for the case of simple interpolation. All three spectral types are well described by 3 eigencomponents; of these, the 0 Gyr and 20 Gyr spectra have the largest errors. Increasing the number of components used in the reconstruction reduces the dispersion between the true and corrected spectra. It is, therefore, clear that projecting a galaxy spectrum onto its eigenbasis gives a natural (and optimal in the quadratic sense) interpolation scheme. It utilizes the correlations inherent within the data to determine how individual spectral regions are related.
## 4 Galaxy Classification using Gappy Data
Projecting a galaxy onto its eigenbasis provides a very simple and natural classification scheme. As we have shown in Section 3 the eigenspectra are highly correlated with stellar spectral types (with the second and third eigenspectra correlating with O and K stellar spectral energy distributions). By projecting a galaxy spectrum onto this eigenbasis the expansion coefficients, $`a_i`$, will tell us the contribution of each of these eigenspectra to the overall spectral energy distribution. As has been shown previously, these expansion coefficients can then be used to separate galaxies into distinct spectral classes (Connolly et al. 1995, Folkes et al., 1996, Bromley et al., 1998).
In Figure 5a we demonstrate this effect for the Bruzual and Charlot model. The solid line shows the distribution of the first two expansion coefficients, $`a_1`$ and $`a_2`$ as a function of galaxy age. The galaxies form a simple, one parameter distribution, that transitions from star forming galaxies (bottom of the plot) to quiescent, old stellar populations (top of the plot). For the simple stellar population given by the Bruzual and Charlot model the correlation between expansion coefficients and galaxy age is extremely tight. In the case of real data we find that there is a much larger dispersion in the relation (Bromley et al., 1998).
Some of this dispersion may be due to the failure to correct for the gappy nature of real spectra. As we have shown in section 2, when galaxy spectra are projected onto an eigenbasis without correcting for the gaps within the data the eigenbasis is no longer orthogonal. This means that the eigenspectra become correlated. The consequence of this is that the expansion coefficients are also correlated and any classification scheme based on gappy data will be biased. As we will show, the biasing of the expansion coefficients due to the gappy nature of the data does not just introduce a larger statistical uncertainty into any derived classification, it can also produce systematic errors.
We initially consider the case of spectra with missing spectral regions due to the presence of sky lines or defects in the spectrograph (as in Section 3.1). The gaps within the data will manifest as small wavelength regions where the galaxy spectrum must be masked or interpolated across. We simulate this using the Bruzual and Charlot data by excising ten randomly positioned spectral regions, each of 45 Å in width. The effect of this masking on the derived expansion coefficients is shown in Figure 5a. The coefficients derived from the masked data are shown by crosses.
Masking of these spectral regions introduces a significant dispersion into the classification scheme. For the 10% masking adopted above, the dispersion in the classification correlation is approximately 0.03 in absolute terms, or a 3% error in terms of the sum of the squares of the coefficients (the coefficients sum, in quadrature, to unity due to the scalar product normalization). If we apply the corrections to the expansion coefficients as described in Section 2 we can reconstruct the original unbiased coefficients. In Figure 5a the corrected coefficients are shown by the filled ellipses, whose sizes are given by the three sigma errors on the corrected expansion coefficients as calculated from the variance analysis in Equation 6. We find that by applying the corrections to the coefficients we can recover the underlying true expansion coefficients and thereby derive an unbiased classification.
A more important effect in terms of the classification of galaxies is the effect of redshift. When we analyze an ensemble of galaxies over a range of redshifts, the intrinsic rest wavelengths that we sample will be dependent on the redshift of the galaxy in question. In principle we could just consider those spectral regions that are in common to all galaxies within our sample. In practice, however, the wide range in redshifts that we will be faced with in the 2dF and SDSS surveys may result in very little wavelength overlap for galaxies at the extremes of the redshift distribution (e.g. the redshift distribution for the SDSS is expected to have a significant tail of $`z>0.5`$ galaxies which would reduce the wavelength range common to all galaxies by $``$40%).
In Figure 5b we simulate the effect of redshift on the classification of galaxies. We assume that the galaxies are randomly distributed between redshifts of $`0<z<0.2`$ (a conservative assumption) and mask out those regions of the spectrum that lie beyond the 8000 Å cutoff (see Section 3). The effect of this censoring on the derived coefficients is shown by the crosses in Figure 5b. In contrast to the effect of randomly positioned gaps within a galaxy spectrum (which introduce a random scatter into the classification coefficients), the effect of redshift is to systematically bias the expansion coefficients: the first expansion coefficient, $`a_1`$, is systematically overestimated by approximately 10% and the second component underestimated by approximately 25%. As before, the solid ellipses in Figure 5b show the expansion coefficients once corrected for the missing spectral regions, with sizes reflecting the three sigma uncertainties in the corrected coefficients as determined from Equation 6.
Therefore, if we apply the correction for the gaps within the observed data we can reproduce, to high accuracy, the underlying classification coefficients. Given our current, conservative, simulation where we excise up to 1500 Å we can recover the true coefficients to an accuracy of better than 0.002 in absolute number or 0.2%. Increasing the amount of data that we mask will naturally make the eigenspectra more correlated and the derived coefficients less accurate.
## 5 Reconstructing Spectral Energy Distributions from Noisy Data
In the previous sections we assumed that the observed galaxy spectrum was gappy but free of noise (essentially assuming that the weight function was zero or unity). The general form of Equation 2 enables us to extend these analyses to account for galaxy spectra in the presence of noise. We demonstrate here that the use of the KL expansion provides an optimal filtering of a noisy spectrum and that the correction for gaps within a spectrum is equally applicable in the presence of noise.
In the top panel of Figure 6 we show a 20 Gyr spectrum with a signal-to-noise of approximately 5 per pixel (the noise is constant as a function of wavelength). We project the spectrum onto the first three eigenspectra (as derived in Section 3) and determine the expansion coefficients. From this truncated expansion we can reconstruct the underlying galaxy spectral energy distribution. The reconstructed spectrum is shown in the lower panel of Figure 6 (dotted line) together with the true, noise free, spectrum (solid line). A comparison of the true and reconstructed spectra shows that they are consistent with a total deviation of 0.00039. Even given significant amounts of noise (on a pixel level) we can, therefore, reconstruct the underlying spectral energy distribution with a high level of accuracy.
The reason the reconstruction reproduces the galaxy spectrum so accurately is straightforward to understand if we consider the integrated signal-to-noise of the full spectrum. If each galaxy can be reproduced by 3-5 eigencomponents then we can describe a galaxy spectrum by at most 5 numbers. In principle we should, therefore, only require 5 data points on a spectrum to constrain these eigencomponents (in practice with only a small number of data points the eigenspectra become very correlated and the uncertainty in the derived expansion is large). Even in the case of substantial noise (per pixel) we can co-add the pixels to reduce the overall noise on the expansion. Applying this truncated expansion to real life observations provides an optimal (in the least-squares sense) filtering of the data and should provide a substantial improvement in signal-to-noise.
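As a minimal illustration of this filtering (with a random orthonormal basis standing in for the eigenspectra and a noise level chosen to mimic a signal-to-noise of about 5 per pixel):

```python
import numpy as np

# Truncated KL expansion as a least-squares noise filter.
rng = np.random.default_rng(3)
espec, _ = np.linalg.qr(rng.normal(size=(4500, 3)))
espec = espec.T
spec_true = np.array([0.9, 0.4, 0.1]) @ espec

sigma = np.abs(spec_true).mean() / 5.0          # ~S/N of 5 per pixel
noisy = spec_true + rng.normal(scale=sigma, size=spec_true.size)

a = espec @ noisy            # simple projection: full coverage, no gaps
filtered = a @ espec         # filtered reconstruction

print(np.sqrt(np.mean((noisy - spec_true) ** 2)),      # per-pixel noise
      np.sqrt(np.mean((filtered - spec_true) ** 2)))   # strongly suppressed
```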
For the case of gaps within noisy spectra we can still reconstruct missing spectral regions. In Figure 7 we reproduce the analysis of Section 3.1 for a 20 Gyr spectrum. We exclude those data in the wavelength range 3800 Å to 4000 Å and reduce the input spectrum to a signal-to-noise of 5 (per pixel). The noisy spectrum is then projected onto the first three eigenspectra and the expansion coefficients corrected for the correlated eigenbasis. Using the first three eigenspectra we reconstruct the overall galaxy spectrum. The reconstructed spectrum is shown in Figure 7 by a dotted line and the true spectrum by a solid line. The reconstruction is almost identical to that derived from the noise-free spectrum. In the region 3800 Å to 4000 Å the deviation between the reconstructed spectrum using noisy data and the error free data is 0.019 (comparable to the noise free case).
Combining the optimal filtering of the KL truncated expansion with the correction for gaps within the data we can, therefore, reproduce galaxy spectra at very high signal-to-noise. A natural application for this procedure is the filtering of low signal-to-noise data from spectroscopic redshift surveys. Using the eigenbasis as a template for cross-correlation with the observed galaxy spectrum (Glazebrook et al. 1998) and correcting for the bias in the classification coefficients, we can derive an optimal representation of the underlying spectrum, an estimate of the significance of the correlation and a measure of the classification coefficients (which describe how closely the galaxy is related to the overall distribution of galaxy spectral types). This latter information provides a quality assurance test: if the classification coefficients do not lie within the general distribution then the galaxy in question either has an unusual spectral type (worth further study) or there is a mismatch in the classification (and the redshift is probably incorrect).
How well we will be able to classify and repair galaxy spectra will ultimately be limited by how we construct the eigenbasis itself. If the galaxies from which we construct the eigenspectra do not fully sample the population of galaxies to which we wish to apply the classification, then there could be a significant mismatch between the spectral properties of the eigentemplates and the galaxies being reconstructed (e.g. if we tried to use normal galaxy spectra to classify QSOs and AGNs then the reconstruction would be poor). This problem can, however, be overcome by building the eigenbasis from a subset of galaxies selected to evenly sample the distribution of galaxy types, rather than one weighted by the relative strengths of the different galaxy populations. Even without this approach, the residuals between the observed and reconstructed spectra (within those spectral regions that contain valid data) will provide a measure of how well the eigenbasis can describe a particular galaxy spectrum and, consequently, whether the classification is valid.
## 6 Building Empirical Spectral Energy Distributions
By correcting for the gaps within galaxy spectra we have derived a simple mechanism for classifying galaxies (and interpolating across the regions of missing data) that can be applied to spectra that do not fully overlap. We can now extend the analysis to constructing an eigenbasis from gappy data. The earlier derivation assumed that we knew what the underlying eigenbasis should be, and that the eigenspectra are well constrained over the full spectral range that the galaxies cover. This is a feasible proposition if we use spectral synthesis models to derive a set of eigenspectra and then project observed galaxy spectra onto this basis. This has the advantage that one can relate the coefficients directly to the physical properties of the models (e.g. age or metallicity). Its disadvantage is that we know that the spectral synthesis models cannot yet reproduce the observed colors of galaxies (particularly at high redshift) and so may not describe fully the spectral properties of all galaxy populations. Secondly, the models are generally derived from intermediate resolution spectra (with a dispersion of $``$10 Å) while the new generation of spectroscopic surveys will have a substantially higher resolution (e.g. 3 Å for the SDSS). By restricting ourselves to model spectra we may, therefore, miss important physical information present within the spectral data.
In an ideal case we would want to build the set of eigenspectra directly from the observed spectra. In such a way the data for the eigenbasis and galaxies that we wish to classify would be taken through the same optical system and have the same intrinsic resolution. Unfortunately, relying on observations means that we must construct a set of eigenspectra from data that have different restframe spectral coverage (due to the redshift of the galaxies) and gaps within the spectra (e.g. from the removal of sky lines). If the galaxies occupy a range of redshifts then missing spectral regions will occur at different rest wavelengths. Therefore, for a large ensemble of galaxies over a range of redshifts, all spectral regions will be sampled by a number of galaxies and the eigenspectra can be constructed over the superset of wavelengths that the galaxies cover.
To build the eigenbasis we take an iterative approach in a manner analogous to that described in Everson and Sirovich (1995). After shifting all galaxy spectra to their restframe we linearly interpolate across all gaps within the spectra. This gives the zeroth order correction for each spectrum, $`f_i^0`$. From these corrected data we build the eigenspectra, $`e_i^0`$, and project each of the individual spectra onto this basis. After correcting the expansion coefficients for the gappy nature of the projection we can use the KL basis $`e_i^0`$ to interpolate across the regions of missing data and form a first order corrected spectrum, $`f_i^1`$. This procedure continues until convergence.
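A minimal sketch of this iteration follows; NaNs mark the gaps, and corrected_coefficients() is the helper sketched in Section 2:

```python
import numpy as np

# Iteratively build an eigenbasis from gappy spectra.  `spectra` is an
# (n_gal, n_lambda) array with NaNs in the masked regions.
def build_gappy_basis(spectra, n_eig=3, n_iter=5):
    gaps = np.isnan(spectra)
    # zeroth order: linear interpolation across each gap
    filled = np.array([np.interp(np.arange(s.size),
                                 np.flatnonzero(~g), s[~g])
                       for s, g in zip(spectra, gaps)])
    filled /= np.linalg.norm(filled, axis=1, keepdims=True)
    for _ in range(n_iter):
        # rebuild the eigenbasis from the current repaired spectra
        _, _, Vt = np.linalg.svd(filled, full_matrices=False)
        espec = Vt[:n_eig]
        # re-interpolate each gap using corrected expansion coefficients
        for k, g in enumerate(gaps):
            w = (~g).astype(float)               # zero weight in the gaps
            a, _ = corrected_coefficients(filled[k], w, espec)
            filled[k][g] = (a @ espec)[g]        # repair the gaps only
    return espec, filled
```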
The iterative technique used to construct the eigenbasis and then repair the galaxy spectra is shown in Figure 8. Each galaxy used to create the eigenbasis has had 10 gaps of 100 Å randomly positioned within the spectrum. From this gappy data set we undertake the iterative procedure outlined above. Figure 8 shows the spectrum of a 2 Gyr old galaxy for the wavelength range 3500 Å to 4500 Å. Within this spectrum the wavelength range 3900 Å to 4000 Å has been masked out. The series of panels shows how the reconstruction of this masked region improves as we iteratively improve the underlying eigenbasis (with the true spectrum shown as a solid line and the reconstruction by a dotted line). Figure 8a shows the initial linear interpolation across the spectral region 3900 Å to 4000 Å, from which the zeroth order eigenbasis, $`e_i^0`$, is constructed. The rms dispersion between the true spectrum and the linearly interpolated reconstruction is 0.019 (which should be treated as the fiducial mark against which all other reconstruction techniques are compared). The first order correction, $`f_i^1`$ (for 3 eigencomponents), is shown in Figure 8b. At this point the rms dispersion between the true and reconstructed spectra has already fallen to 0.0095. For a reconstruction using 3 eigencomponents the procedure converges rapidly, with a difference between subsequent iterations of $`<1\%`$ by the fifth iteration (see Figure 8c). At this stage the reconstruction is stable, with an rms uncertainty of 0.0077 (comparable to the values we derived when we knew the underlying eigenbasis). We can, therefore, increase the number of components used in the reconstruction (to improve the eigenbasis and the interpolation). Figure 8d shows this effect where, after iterating five times using 3 components, we increase the number of eigenspectra to five. The dispersion between the reconstructed and true spectrum falls to 0.0013 with five components.
The number of iterations and components used in the construction depends on the information present within the data. With the Bruzual and Charlot models the galaxy spectra can be reconstructed to better than 1% accuracy using only 5 eigencomponents. For real spectra we would expect the number of components to depend on the spectral resolution of the data and the intrinsic wavelength coverage.
## 7 Discussion
We have described a general framework for undertaking spectral classification of galaxy spectra, accounting for gaps within the data and different intrinsic rest wavelength coverage. It is expected that when this technique is applied to galaxy spectra from the next generation of spectroscopic surveys (such as the SDSS and 2dF) we will have a mechanism for measuring the spectral evolution of galaxies in terms of a common classification scheme that is almost independent of redshift.
Standard techniques for classifying galaxy spectra using eigenspectra have either ignored the effect of gaps within the data or restricted their analysis to wavelengths that are common to all galaxies within their sample. For the next generation of redshift surveys we can expect the redshift distribution of the galaxies to be broad and, if we were to restrict our analysis to common wavelengths, the available spectral range on which we could classify a galaxy to be very small. Our technique alleviates this problem. It allows a single eigenbasis to be derived over a very broad intrinsic wavelength range (from the data themselves) and the classification of the galaxy spectra to be corrected for the incomplete coverage. The derivation of the covariance matrix for this correction enables us to determine both a classification and a measure of the uncertainty on this parameter.
We expect that this classification technique will be equally applicable to continuum subtracted data (i.e. absorption line spectra) as it is to the spectrophotometric data we analyze here. The number of eigenspectra required to classify or reconstruct a galaxy spectrum will be dependent on the quality of the data (i.e. resolution and signal-to-noise) and the overall wavelength coverage. As noted previously the immediate application of this technique will be to measuring redshifts from local and high-redshift spectroscopic surveys. The analysis we describe provides an optimal noise suppression that, when combined with the redshift determination, will produce a high signal-to-noise representation of noisy and incomplete data together with an associated error estimate. In a following paper (Connolly and Szalay 1999) we describe the implementation of our techniques for constructing spectral energy distributions and classifying galaxy spectra using redshift survey data.
The application of this technique to astrophysical problems is, however, substantially more general than providing corrections to galaxy classification. It can equally well be applied to galaxy broadband photometry (Csabai et al. 1998) or a combination of spectrophotometry and photometry. In the next few years ground- and space-based instrumentation will provide broadband photometry and spectroscopy for large samples of galaxies covering wavelengths from the ultraviolet through to the far-infrared. The generalization of the construction of galaxy eigenspectra from noisy data and for spectra that do not have complete spectral coverage will enable us to construct a composite eigenbasis, and consequently composite galaxy spectral energy distributions, that cover a broad spectral range.
## 8 Conclusions
The use of a Principal Component Analysis or KL transform to classify galaxy spectra is becoming a standard technique in the analysis of spectroscopic survey data. The current applications of this approach have assumed that the galaxies we wish to classify and the eigenbasis we will use for the classification cover the same wavelength range. For real data this will clearly not be the case. Sky lines that are masked out of spectra and changes in rest wavelength coverage due to the redshift of a galaxy will all introduce gaps into the spectra. Unless we correct for this effect we will introduce systematic errors into our classification scheme. In this paper we derive a generalized form of the KL classification that accounts for the presence of gaps within galaxy spectra. We show that, by applying this technique to simulated spectra, we can determine a robust classification of galaxy spectra (together with an error) that is relatively insensitive to the redshift of the galaxy in question.
We would like to thank Istvan Csabai for many useful discussions on the application and interpretation of the gappy KL analysis and the anonymous referee for comments that helped clarify the technical discussion. We acknowledge partial support from NASA grants AR-06394.01-95A and AR-06337.11-94A (AJC) and an LTSA grant (ASZ). The SVD analysis was undertaken using the Meschach library of linear algebra algorithms from Stewart and Leyk.
# Entangling atoms in photonic crystals
## I Introduction
Quantum entanglement is one of the most remarkable features of quantum mechanics. Coherent control of the entanglement between quantum systems attracts a lot of attention, mainly because of its potential application in quantum information processing. Simultaneously, experimental investigation of entanglement allows us to test basic postulates of quantum mechanics and to answer fundamental epistemological questions. These questions are related to the original Gedanken experiment of Einstein, Podolsky and Rosen which triggered discussions about the non-locality of quantum mechanics and motivated experimental proposals to test whether quantum mechanics is the complete non-local theory. The first experimental confirmation of the violation of Bell’s inequalities was obtained with the help of entangled photons . A weak point of experiments with photons is an insufficient control of the directions of the emitted photons and small detector efficiencies. This problem should be removed in proposals where highly excited (Rydberg) atoms are entangled. Probably the first proposal of such an experiment is described in Ref. . Other proposals have been presented in Refs. . Authors of these schemes proposed techniques for creating entangled atoms in microwave single-mode cavities. Recently, controlled entanglement between atoms separated by approximately $`10`$ mm, interacting with an electromagnetic field in a high-$`Q`$ cavity, has been experimentally realized . In addition, trapped ions have been created in entangled states .
In this paper we propose a simple scheme for entangling atoms in photonic crystals. We recall that photonic crystals are artificially created three-dimensional periodic dielectric materials which exhibit a frequency gap or several gaps in the spectrum of propagating electromagnetic (EM) waves . An EM wave with its frequency in the gap cannot propagate in the structure in any direction. Photonic crystals operating at microwave frequencies were successfully created in laboratories . They consist of a solid dielectric and empty regions. The periodicity of a photonic crystal can be destroyed by removing or adding a piece of material, which creates a defect EM mode in the structure. This mode is spatially localized around the region of the defect. The frequency of the mode and the spatial modulation of its electric-field amplitude depend on the properties of the defect . It means that one can adjust the parameters of the defect mode by creating a suitable defect in the crystal. In particular, the spatial dependence of the mode amplitude can be adjusted to particular needs. In quantum optics, defect modes in photonic crystals can be used similarly to high-$`Q`$ single-mode cavities . The quality factor of a single mode in a metallic cavity can be of the order of $`10^8`$ or more, and similar values can be reached for a single defect mode in a photonic crystal . Today three-dimensional photonic crystals are available only at microwave frequencies. They can be used for experiments with Rydberg atoms, similarly to microwave cavities.
In this paper we consider two interactions via which one can produce entangled atoms. First, we show that it is possible to generate entangled atoms without a defect mode, using the action of the resonant dipole-dipole interaction (RDDI) mediated by off-resonant modes of the photonic band continua. Second, we explore the scheme in which the atoms become mutually entangled due to the interaction with the defect-field mode.
The paper is organized as follows: Basic features of the proposed setup are described in Section II. In Section III we discuss how the atoms in photonic crystals can be entangled via the resonant dipole-dipole interaction. In Section IV we study in detail the entanglement of atoms which interact with a single defect mode in the photonic crystal. In Section V we conclude the paper with some remarks.
## II Setup of the scheme
We consider two mechanisms via which a system of identical atoms can be entangled in photonic crystals. We assume that the atoms are modeled by two-level systems having their transition frequencies in a photonic bandgap (PBG).
The first mechanism is the resonant dipole-dipole interaction (RDDI) mediated by off-resonant modes of the photonic-band continua \[see the Hamiltonian (12)\]. This interaction has been analyzed in detail by Kurizki and John and Wang as well as by John and Tran Quang . These authors have considered a system of two-level atoms. They have shown that if one of the atoms is excited and the other one is in its ground state, then they can exchange excitation in spite of the fact that their transition frequencies are in a PBG and spontaneous emission is nearly totally suppressed. The RDDI can be understood as an energy exchange via the localized field . This light tunneling (or photon-hopping conduction) can be very efficient when the distance between the atoms is much smaller than the light wavelength. The RDDI can occur either in free space or in a cavity. However, in free space the excitation is irreversibly radiated into the continuum of the field modes after a very short time (given by Fermi’s golden rule) and the entanglement between the atoms deteriorates rapidly.
The second mechanism is due to an excitation exchange via a defect mode which is resonant (or nearly resonant) with the atoms. This type of interaction explicitly involves a quantized defect mode and is described by the Hamiltonian (15).
These two interactions can also occur simultaneously. As we will see, the second mechanism is much more efficient and allows a coherent control over the process of entanglement. The first mechanism can be neglected in many cases, especially when the atoms have their transition frequencies near the center of a wide PBG and their distance is not much smaller than the wavelength of the resonant light.
In what follows we describe the basic setup of the proposed experiment in the case when the atoms interact only via the defect mode. Let us assume that one of the three atoms (say, atom $`A`$) is prepared initially in its excited state while the other two atoms ($`B`$ and $`C`$) are initially in their ground states (see Fig.1). After the preparation the atoms are injected into cylindrical void regions of the crystal. We consider a photonic crystal of the geometry designed by Yablonovitch et al. , although other appropriate geometries can be used as well. The void cylinders intersect at the center of the crystal. The defect-field mode (located near the center of the crystal) is initially prepared in its vacuum state. At first the atoms propagate freely in the void cylinders outside the defect region (this is due to the fact that the transition frequencies of the atoms lie inside the wide PBG). When the atoms enter the defect region they start to interact with the single defect-field mode. After they leave the defect region they again evolve freely. If the excited (ground) state of atom $`j`$ ($`j=A,B,C`$) is denoted as $`|e_j\rangle `$ ($`|g_j\rangle `$) and the $`n`$-photon state of the single-mode defect field is denoted as $`|n\rangle `$ then the initial state of the system under consideration can be written as
$$|\mathrm{\Psi }(0)\rangle =|e_A\rangle |g_B\rangle |g_C\rangle |0\rangle \equiv |e_A,g_B,g_C,0\rangle .$$
(1)
Assuming that in the defect region the atom-field interaction is governed by the Hamiltonian in the dipole and rotating-wave approximations (see below), the final state of the system reads
$$|\mathrm{\Psi }(t)\rangle =a(t)|e_A,g_B,g_C,0\rangle +b(t)|g_A,e_B,g_C,0\rangle +c(t)|g_A,g_B,e_C,0\rangle +\gamma (t)|g_A,g_B,g_C,1\rangle ,$$
(2)
where $`t`$ is the time at which we detect the internal states of the atoms at the exit of the crystal. The final values of the amplitudes $`a`$, $`b`$, $`c`$ and $`\gamma `$ depend on the particular setup of the experiment, including the coupling parameters and the velocities of the atoms. For completeness of the description we specify the trajectories $`𝐫_j(t)`$ of the three atoms, which move along the axes of the three void regions:
$$𝐫_j(t)=𝐫_j(0)+𝐯_jt;j=A,B,C$$
(3)
with the vectors $`𝐫_j(0)`$ and $`𝐯_j`$ specified by their components as
$`𝐫_A(0)`$ $`=`$ $`{\displaystyle \frac{L}{4}}\{\mathrm{tan}\mathrm{\Theta },-\sqrt{3}\mathrm{tan}\mathrm{\Theta },-2\},`$ (4)
$`𝐯_A`$ $`=`$ $`{\displaystyle \frac{v_A}{2}}\{-\mathrm{sin}\mathrm{\Theta },\sqrt{3}\mathrm{sin}\mathrm{\Theta },2\mathrm{cos}\mathrm{\Theta }\},`$ (5)
for the atom $`A`$. While for the other two atoms ($`B`$ and $`C`$) we have
$`𝐫_B(0)`$ $`=`$ $`{\displaystyle \frac{L}{4}}\{\mathrm{tan}\mathrm{\Theta },\sqrt{3}\mathrm{tan}\mathrm{\Theta },-2\};`$ (6)
$`𝐯_B`$ $`=`$ $`{\displaystyle \frac{v_B}{2}}\{-\mathrm{sin}\mathrm{\Theta },-\sqrt{3}\mathrm{sin}\mathrm{\Theta },2\mathrm{cos}\mathrm{\Theta }\};`$ (7)
and
$`𝐫_C(0)`$ $`=`$ $`-{\displaystyle \frac{L}{2}}\{\mathrm{tan}\mathrm{\Theta },0,1\};`$ (8)
$`𝐯_C`$ $`=`$ $`v_C\{\mathrm{sin}\mathrm{\Theta },0,\mathrm{cos}\mathrm{\Theta }\}.`$ (9)
Here we place the origin of the coordinates at the center of the cubic crystal of side length $`L`$; $`\mathrm{\Theta }`$ is the angle between the axes of the cylinders and the $`z`$ direction.
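For numerical work it is convenient to encode these trajectories directly. The following is a minimal sketch (in Python) of Eqs. (3)–(9); the helper names are ours. With these signs each atom injected at the bottom face crosses the center of the crystal at $`t_j=L/(2v_j\mathrm{cos}\mathrm{\Theta })`$, which may serve as a consistency check.

```python
import numpy as np

def initial_conditions(L, Theta, vA, vB, vC):
    """Initial positions and velocities of the three atoms, Eqs. (4)-(9)."""
    tn, sn, cs = np.tan(Theta), np.sin(Theta), np.cos(Theta)
    r0 = {"A": (L / 4) * np.array([tn, -np.sqrt(3) * tn, -2.0]),
          "B": (L / 4) * np.array([tn,  np.sqrt(3) * tn, -2.0]),
          "C": -(L / 2) * np.array([tn, 0.0, 1.0])}
    v = {"A": (vA / 2) * np.array([-sn,  np.sqrt(3) * sn, 2 * cs]),
         "B": (vB / 2) * np.array([-sn, -np.sqrt(3) * sn, 2 * cs]),
         "C": vC * np.array([sn, 0.0, cs])}
    return r0, v

def position(r0_j, v_j, t):
    """Trajectory r_j(t) = r_j(0) + v_j * t of Eq. (3)."""
    return r0_j + v_j * t
```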
## III Entanglement via resonant dipole-dipole interaction
In this Section we consider just two identical atoms ($`A`$ and $`B`$) which move in the crystal as it is described above. Here we assume that there is no defect mode in the crystal. The atoms move inside the crystal with constant velocities. The recoil effect due to interaction with electromagnetic field is neglected because the atoms are relatively heavy particles. The interaction between the atoms and the electromagnetic field modes inside the crystal is described by the Hamiltonian in the electric-dipole approximation
$`H`$ $`=`$ $`\hbar \omega {\displaystyle \sum _{j=A,B}}\sigma _z^j+\hbar {\displaystyle \sum _\lambda }\omega _\lambda a_\lambda ^{\dagger }a_\lambda `$ (10)
$``$ $`-{\displaystyle \frac{1}{ϵ_0}}\mu (A)\cdot 𝐃(𝐫_A)-{\displaystyle \frac{1}{ϵ_0}}\mu (B)\cdot 𝐃(𝐫_B),`$ (11)
where $`a_\lambda `$ and $`a_\lambda ^{}`$ are the annihilation and creation operators of the field mode labeled by $`\lambda `$, $`𝐃(𝐫)`$ is the transverse displacement-field operator and $`\mu (A)`$ and $`\mu (B)`$ are the atomic dipole operators. When the atomic transition frequencies are far from abrupt changes in the density of modes the Hamiltonian (11) can be approximated as (for more details see )
$$H_{\mathrm{eff}}=\hbar \omega \sum _{j=A,B}\sigma _z^j+\hbar \left(J_{AB}\sigma _+^A\sigma _{-}^B+J_{BA}\sigma _{-}^A\sigma _+^B\right),$$
(12)
where $`\sigma _\pm ^x`$ are raising and lowering operators of the atoms ($`x=A,B`$) and $`J_{AB}`$ is a matrix element for the effective description of the RDDI . For qualitative estimates, we will use $`J_{AB}`$ evaluated under the assumption that the density of electromagnetic modes is that of free space. In this case we find (for more details see )
$$\hbar J_{AB}=\mu _i^{ge}(A)\mu _j^{eg}(B)\frac{1}{4\pi ϵ_0R^3}\left[(\delta _{ij}-3\widehat{R}_i\widehat{R}_j)(\mathrm{cos}k_AR+k_AR\mathrm{sin}k_AR)-(\delta _{ij}-\widehat{R}_i\widehat{R}_j)k_A^2R^2\mathrm{cos}(k_AR)\right],$$
(13)
where $`R`$ is the distance between the atoms, $`k_A=\omega /c`$, $`\mu ^{eg}`$ is the absolute value of the atomic dipole matrix element, and $`\widehat{R}_i`$ are the components of the unit vector starting at the position of the atom $`A`$ and oriented towards the atom $`B`$. We assume summation over repeated indices. We stress that the above expression for $`J_{AB}`$ is valid in free space, but in the limit $`R\ll \lambda `$ it can also be applied to photonic crystals , i.e. it can be used for an order-of-magnitude description of the RDDI effects in photonic crystals. These effects are most important in the regime $`R\ll \lambda `$, when the free-space expression is valid also in the photonic crystal. We will apply the Hamiltonian (12) with $`J_{AB}`$ given by Eq.(13) also for the description of the propagation of the atoms in the crystal in the case when $`R\lesssim L`$. Even though the expression for $`J_{AB}`$ given by Eq.(13) is not precise, it provides us with a rather good picture of the RDDI effect. We note that in order to find a more appropriate expression for $`J_{AB}`$ we would have to know the electromagnetic eigenmodes of the three-dimensionally periodic structure, and the corresponding derivation of $`J_{AB}`$ is very complicated.
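For estimates it is useful to have Eq. (13) in computable form. The sketch below evaluates $`J_{AB}`$ in SI units for dipole matrix elements given as Cartesian vectors; the function name and unit conventions are ours.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
EPS0 = 8.8541878128e-12  # F / m
C = 2.99792458e8         # m / s

def j_rddi(mu_A, mu_B, r_A, r_B, omega):
    """Free-space RDDI matrix element J_AB of Eq. (13), in rad/s.

    mu_A, mu_B : dipole matrix elements as Cartesian vectors (C m)
    r_A, r_B   : atomic positions (m); omega : transition frequency (rad/s)
    """
    Rvec = np.asarray(r_B, float) - np.asarray(r_A, float)
    R = np.linalg.norm(Rvec)
    Rhat = Rvec / R
    kR = (omega / C) * R   # k_A R with k_A = omega / c
    delta = np.eye(3)
    tensor = ((delta - 3 * np.outer(Rhat, Rhat))
              * (np.cos(kR) + kR * np.sin(kR))
              - (delta - np.outer(Rhat, Rhat)) * kR**2 * np.cos(kR))
    return np.einsum('i,ij,j->', mu_A, tensor, mu_B) / (4 * np.pi * EPS0 * R**3 * HBAR)
```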
In what follows we study the time evolution of the atoms initially prepared in the state $`|\mathrm{\Psi }(0)\rangle =|e_A,g_B\rangle `$, which is governed by the effective Hamiltonian (12) with a time-dependent $`J_{AB}`$ (due to the fact that the atoms are moving through the crystal). We show that the RDDI can in principle be used for controlling the entanglement between atoms. We have solved the corresponding Schrödinger equation numerically, using parameters typical for Rydberg atoms and currently existing photonic crystals. In Fig.2 we plot results for the time-dependent atomic populations. We have chosen the atomic trajectories as specified in the previous section, but we added a small value ($`0.05`$–$`0.3`$ mm) to the initial $`x_A(0)`$ coordinate so that the trajectory of atom A is parallel to, but not identical with, the axis of the cylinder. This prevents the collision of the atoms. The velocities of both atoms are $`200`$ m s<sup>-1</sup>.
Taking into account that the physical conditions are chosen such that the electromagnetic field is adiabatically eliminated from the interaction \[see the effective Hamiltonian (12)\], the two atoms, due to the unitarity of the evolution, remain in a pure state $`|\mathrm{\Psi }(t)\rangle _{AB}=a(t)|e_A,g_B\rangle +b(t)|g_A,e_B\rangle `$ with amplitudes $`a(t)`$ and $`b(t)`$ which depend on the RDDI. From here it follows that due to the RDDI the two atoms become entangled. The degree of entanglement in the present case can be quantified with the help of the von Neumann entropy $`S=\mathrm{Tr}[\widehat{\rho }\mathrm{ln}\widehat{\rho }]`$ of each individual atom, for which we have $`S=-|a(t)|^2\mathrm{ln}|a(t)|^2-|b(t)|^2\mathrm{ln}|b(t)|^2`$ where $`|a(t)|^2=1-|b(t)|^2`$. In other words, the degree of entanglement depends on the population of the internal levels of the atoms, and the highest degree of entanglement is attained for $`|a(t)|^2=|b(t)|^2=1/2`$.
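In the single-excitation sector the effective Hamiltonian (12) reduces to two coupled amplitude equations, $`i\dot{a}=J_{AB}(t)b`$ and $`i\dot{b}=J_{AB}(t)a`$ (for identical atoms $`J_{BA}=J_{AB}`$). A minimal sketch of their numerical integration, with the entropy defined above, might read as follows; the tolerances and output grid are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rddi_evolution(J_of_t, t_span, n_out=2000):
    """Amplitudes a(t), b(t) of |e_A,g_B> and |g_A,e_B> under Eq. (12).

    J_of_t : callable returning the (real) RDDI coupling J_AB at time t
    """
    def rhs(t, y):
        a, b = y
        J = J_of_t(t)
        return [-1j * J * b, -1j * J * a]   # i da/dt = J b ; i db/dt = J a

    sol = solve_ivp(rhs, t_span, [1.0 + 0j, 0.0 + 0j],   # start in |e_A, g_B>
                    t_eval=np.linspace(*t_span, n_out),
                    rtol=1e-9, atol=1e-12)
    a, b = sol.y
    p = np.abs(a) ** 2                       # population of |e_A, g_B>
    q = 1.0 - p
    # von Neumann entropy of either atom; S = ln 2 at maximal entanglement
    S = np.zeros_like(p)
    inside = (p > 0) & (q > 0)
    S[inside] = -p[inside] * np.log(p[inside]) - q[inside] * np.log(q[inside])
    return sol.t, a, b, S
```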
As seen from Fig. 2, the population of the excited state of the atom $`A`$ depends on the minimal distance $`R_{\mathrm{min}}`$ between the atoms during the passage through the crystal. From our numerical investigation it follows that the atoms are most entangled for $`R_{\mathrm{min}}\approx 0.05`$ mm. However, we note that with present techniques the control over the position of atoms in the configuration considered here is about $`\pm 1`$ mm . Consequently, the RDDI is not very suitable for coherent control of entanglement between atoms in photonic crystals. In the following Section we consider entanglement via a defect mode, for which the currently available precision control is sufficient.
## IV Entanglement via a defect mode
Let us consider the interaction of the atoms with a single defect-field mode in the dipole and the rotating-wave approximations. We assume that the distance between the atoms is always sufficiently large so that they do not interact via RDDI. The corresponding Hamiltonian can be written as
$`H`$ $`=`$ $`\hbar \omega {\displaystyle \sum _{j=A,B,C}}\sigma _z^j+\hbar \omega _0a^{\dagger }a`$ (14)
$`+`$ $`\hbar {\displaystyle \sum _{j=A,B,C}}\left[G(𝐫_j)\sigma _+^ja+G^{*}(𝐫_j)\sigma _{-}^ja^{\dagger }\right],`$ (15)
where $`\omega _0`$ is the mode frequency (which we assume to be equal to the atomic transition frequency $`\omega `$), $`\sigma _\pm ^j`$ are atomic raising and lowering operators and $`𝐫_j`$ are the positions of the atoms. The position dependence of the coupling parameters $`G(𝐫_j)`$ can be expressed as
$$G(𝐫_j)=G_0\left(ϵ\cdot 𝒟_j\right)f(𝐫_j),$$
(16)
where $`f(𝐫)`$ is the field-mode amplitude at the position $`𝐫`$, $`ϵ`$ is the electric-field polarization direction of the defect mode and $`𝒟_j`$ is a unit vector in the direction of the atomic dipole matrix element of the atom $`j`$. It is known that the spatial dependence of a defect-mode amplitude is a function which oscillates and decays exponentially . A particular profile of the spatial dependence of the defect mode can be adjusted via a properly generated defect in the periodicity. A rigorous calculation of the electromagnetic field in the presence of a defect in a $`3`$D photonic crystal can be a difficult task. In this paper we use a model profile of the spatial dependence of the electric field. Similar profiles have already been created in existing photonic crystals . We note that for the purpose of the proposed experiment complete information about the mode shape is not needed. The results of the experiment depend only on the shape along the trajectories of the atoms. In what follows we use the profile
$$f(𝐫)=\mathrm{exp}\left[-\frac{|𝐫-𝐑_0|}{R_{\mathrm{def}}}\right]\mathrm{sin}(𝐤\cdot 𝐫+\mathrm{\Phi }),$$
(17)
where $`𝐑_\mathrm{𝟎}`$ is the position around which the mode is localized, $`R_{\mathrm{def}}`$ is a parameter (defect-mode radius) describing the rate of the exponential decay of the mode envelope, $`\mathrm{\Phi }`$ is a phase factor and $`𝐤`$ is the parameter describing spatial oscillations of the field mode. We choose its magnitude to be $`k=\pi /a`$, where $`a`$ is the side of an elementary cubic cell in the photonic crystal. We consider values of the constant $`R_{\mathrm{def}}`$ comparable with $`a`$. We estimate the value of the coupling constant $`G_0`$ from microcavity experiments
$$G_0=\sqrt{\frac{V_{\mathrm{cav}}}{V_{\mathrm{eff}}}}\mathrm{\Omega },$$
(18)
where $`V_{\mathrm{cav}}`$ is the modal volume of the microcavity mode, $`V_{\mathrm{eff}}`$ is the effective modal volume of the defect mode and $`\mathrm{\Omega }`$ is the vacuum Rabi frequency in the microwave experiment. The numerical values are: $`V_{\mathrm{cav}}=11.5\mathrm{cm}^3`$ and $`\mathrm{\Omega }=43`$ kHz. For the transition between the levels $`63P_{3/2}`$ and $`61D_{3/2}`$ of rubidium atoms, the atomic transition frequency is $`\omega /(2\pi )=21506.51`$ MHz. Finally, the effective modal volume can be approximated as
$$V_{\mathrm{eff}}=\frac{4}{3}\pi (2R_{\mathrm{def}})^3.$$
(19)
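Collecting Eqs. (16)–(19), the coupling seen by an atom moving on a trajectory $`𝐫_j(t)`$ can be coded as below. This is only a sketch in SI units; the quoted Rabi frequency of $`43`$ kHz is used as given, and whether it is an angular or an ordinary frequency is left as a user convention.

```python
import numpy as np

def g0_peak(R_def, Omega=43e3, V_cav=11.5e-6):
    """Peak coupling G_0 of Eq. (18); V_eff from Eq. (19). SI units (m, m^3)."""
    V_eff = (4.0 / 3.0) * np.pi * (2.0 * R_def) ** 3
    return np.sqrt(V_cav / V_eff) * Omega

def mode_profile(r, R0, R_def, k, Phi=0.0):
    """Defect-mode amplitude f(r) of Eq. (17)."""
    r = np.asarray(r, float)
    return np.exp(-np.linalg.norm(r - R0) / R_def) * np.sin(np.dot(k, r) + Phi)

def coupling(r, dipole_dir, eps, G0, R0, R_def, k, Phi=0.0):
    """Position-dependent coupling G(r_j) of Eq. (16)."""
    return G0 * np.dot(eps, dipole_dir) * mode_profile(r, R0, R_def, k, Phi)
```

Combined with the trajectories of Section II, `coupling(position(r0_j, v_j, t), ...)` yields the time-dependent couplings $`G_j(t)`$ used below.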
Because the atoms are moving, the coupling parameters depend on time \[in what follows we will use the notation $`G_j(t)`$\]. We consider positions of the atoms given by Eqs.(5) and (7). In some cases we add a small value to $`x_A(0)`$ given by (5) to prevent the atoms from colliding at the center of the crystal. Details of the geometry of the proposed experiment are given in Section II and in Fig. 1.
Once we have specified all model parameters we can solve the Schrödinger equation for the system, which is supposed to be initially prepared in the state $`|\mathrm{\Psi }(0)\rangle =|e_A,g_B,g_C,0\rangle `$. Due to the fact that the number of excitations is an integral of motion, the state vector at time $`t>0`$ has the form (2) and the corresponding Schrödinger equation can be rewritten as a system of linear differential equations. These equations can be solved analytically for time-independent coupling constants $`G_j`$, which is not the case here. Therefore we have to integrate the equations numerically.
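Explicitly, at resonance and in the interaction picture the Hamiltonian (15) gives $`i\dot{a}=G_A(t)\gamma `$, $`i\dot{b}=G_B(t)\gamma `$, $`i\dot{c}=G_C(t)\gamma `$ and $`i\dot{\gamma }=G_A^{*}(t)a+G_B^{*}(t)b+G_C^{*}(t)c`$. A minimal numerical sketch (the tolerances are our own choices) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def three_atom_evolution(G_A, G_B, G_C, t_span):
    """Amplitudes (a, b, c, gamma) of Eq. (2) at exact resonance.

    G_A, G_B, G_C : callables returning the couplings G_j(t) along the
                    atomic trajectories
    """
    def rhs(t, y):
        a, b, c, g = y
        return [-1j * G_A(t) * g,
                -1j * G_B(t) * g,
                -1j * G_C(t) * g,
                -1j * (np.conj(G_A(t)) * a
                       + np.conj(G_B(t)) * b
                       + np.conj(G_C(t)) * c)]

    y0 = [1.0 + 0j, 0.0 + 0j, 0.0 + 0j, 0.0 + 0j]   # |e_A, g_B, g_C, 0>
    return solve_ivp(rhs, t_span, y0, rtol=1e-9, atol=1e-12,
                     dense_output=True)
```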
### A One atom
We start our discussion with the problem in which just a single atom (say, atom $`A`$) passing through the crystal is considered. We assume that the atom is on resonance with the defect mode (i.e., $`\omega =\omega _0`$).
This corresponds to the Jaynes-Cummings model with a time-dependent coupling constant. The general solution of this model for a real coupling parameter was found by Sherman et al. . With the initial condition $`|\mathrm{\Psi }(0)\rangle =|e_A,0\rangle `$ the solution can be expressed as
$`|\mathrm{\Psi }(t)\rangle `$ $`=`$ $`\mathrm{cos}\left[{\displaystyle \int _0^t}G_A(t^{\prime })𝑑t^{\prime }\right]|e_A,0\rangle `$ (20)
$``$ $`-i\mathrm{sin}\left[{\displaystyle \int _0^t}G_A(t^{\prime })𝑑t^{\prime }\right]|g_A,1\rangle .`$ (21)
This implies for the atomic excitation
$$P_\mathrm{e}^{(A)}(t)=\mathrm{cos}^2\left[\int _0^tG_A(t^{\prime })𝑑t^{\prime }\right].$$
(22)
In the case of the defect mode with linear dimensions much smaller than the side of the crystal we can use the approximation
$$\int _0^tG_A(t^{\prime })𝑑t^{\prime }\approx \int _{-\infty }^{\infty }G_A(t^{\prime })𝑑t^{\prime }.$$
(23)
We note that this integral for the given choice of the profile function \[see Eq.(17)\] with the phase of the field mode $`\mathrm{\Phi }=0`$ equals zero. This means that the atom exits the crystal in the same state as it entered it. Obviously the defect mode also remains in its initial (vacuum) state. In Fig.3a we plot the time dependence of the coupling constant between the atom $`A`$ and the defect mode, while in Fig.3b we present the time dependence of the corresponding excited-state probability. It is assumed that the defect is located at the center $`𝐑_0=\mathrm{𝟎}`$ of the crystal. The other parameters are chosen such that $`\mathrm{\Phi }=0`$ rad, $`𝐤=(0,0,k)`$, $`𝒟_A=ϵ=(1,0,0)`$ \[see Eqs. (16),(17)\]. We assume that the atom moves along the axis of the cylindrical cavity. The velocity of the atom is chosen to be $`v_A=500\mathrm{m}\mathrm{s}^{-1}`$. From Fig. 3a we clearly see that the atom on its way through the crystal interacts with the defect mode just around the center of the crystal. The other important feature is seen from Fig.3b: the atom is transiently entangled with the defect mode in the center of the crystal. Nevertheless it leaves the crystal in a pure (unentangled) state. This effect of “spontaneous” disentanglement of the atom from the defect mode is very important when we consider the creation of pure entangled states of two atoms.
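This cancellation is easy to verify numerically: for $`\mathrm{\Phi }=0`$ and a trajectory through the mode center the integrand $`G_A(t)`$ is an odd function of the time measured from the crossing, so the pulse area vanishes. A small sketch (trapezoidal quadrature; the grid size is an arbitrary choice of ours):

```python
import numpy as np

def pulse_area(G_of_t, t0, t1, n=200001):
    """Pulse area of Eq. (23) and the exit excitation P_e of Eq. (22)."""
    t = np.linspace(t0, t1, n)
    G = np.array([G_of_t(ti) for ti in t])
    area = np.trapz(G, t)
    return area, np.cos(area) ** 2
```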
### B Two atoms
Let us consider the situation in which two atoms interact with the same defect mode as in the previous case. The atoms have their dipoles oriented along the direction $`ϵ`$ of the electric-field polarization. The velocity of the atom $`A`$ is $`500\mathrm{m}\mathrm{s}^{-1}`$. The time evolution of the corresponding atomic populations for various velocities of the atom $`B`$ is plotted in Fig. 4.
Firstly we consider both atoms to have the same velocity (see Fig. 4a). In this case we assume that the atom $`A`$ is displaced from the axis of the cylindrical hole through which it flies \[i.e. we add $`0.3`$ mm to $`x_A(0)`$ given by (5)\] to avoid the influence of the RDDI between the atoms and their collision. We see that the atoms strongly interact with the field in the region of the defect. However, after the interaction the initial state of the system is approximately restored (see the “stationary” values of the probability amplitudes $`a(\tau )`$, $`b(\tau )`$ and $`\gamma (\tau )`$ which are displayed in the figures). It is interesting to compare Fig. 3b with Fig. 4a to see how the time evolution of the population of the atom $`A`$ is modified by the presence of the additional atom $`B`$. We see that for the given set of parameters the presence of the atom $`B`$ does not influence the dynamics of the atom $`A`$ significantly.
Now we will study how the level population depends on the velocity of the atom $`B`$. From Fig. 4 we see that for a properly chosen velocity the interaction between the atoms mediated by the defect field can be pronounced. For instance, from Fig. 4b (here $`v_B=490\mathrm{m}\mathrm{s}^{-1}`$) we see that not only can the excitation of the atom $`B`$ be higher than the population of the atom $`A`$, but also the defect mode becomes partially excited and entangled with the atomic system.
When the atom $`B`$ has the velocity $`v_B=515\mathrm{m}\mathrm{s}^{-1}`$ (see Fig. 4c) then the defect mode in the stationary limit is in the vacuum state \[$`\gamma (\tau )\approx 0.0616i`$\] and is (with high precision) completely disentangled from the atomic system. It is interesting to note that in this particular situation the defect mode mediates the transfer of most of the excitation from the atom $`A`$ to the atom $`B`$.
Let us now assume the velocity of the atom $`B`$ to be $`v_B=532.8\mathrm{m}\mathrm{s}^{-1}`$ (see Fig. 4d). In this case the defect mode in the stationary limit is again in the vacuum state and is completely disentangled from the atomic system. Moreover, the amplitudes $`a(\tau )`$ and $`b(\tau )`$ are in this case almost equal, which means that the atoms at the exit from the crystal are in the state $`|\mathrm{\Psi }\rangle =(|e_A,g_B\rangle +|g_A,e_B\rangle )/\sqrt{2}`$, i.e. they are prepared in a pure maximally entangled state.
In the cases presented in Fig. 4 the phase factor $`\mathrm{\Phi }`$ of the defect mode is set to zero, so that the integrals of the coupling constants $`G_A(t)`$ and $`G_B(t)`$ over the trajectories of the atoms are equal to zero. The defect-mode radius is $`R_{\mathrm{def}}=10`$ mm. We have also studied the dynamics for other values of $`\mathrm{\Phi }`$, when the integrals of the coupling constants differ from zero. In this case the disentanglement of the defect mode and the atoms is not so well pronounced, i.e. the defect mode becomes excited. We have also found a general feature: if the integrals of the coupling constants are zero and the coupling constants are small enough, then the defect mode after the interaction is left in the vacuum state. However, if we increase the couplings (by decreasing the mode volume $`V_{\mathrm{eff}}`$) the defect mode can be left in an excited state \[i.e. $`\gamma (\tau )\ne 0`$; see the expression for the state vector (2)\]. Consequently, the atoms are left in a mixed state.
We have also analyzed the situation when the defect mode is not located directly at the center of the crystal. Moreover, we have assumed that $`\mathrm{\Phi }\ne 0`$. It can be shown that even in this case it is possible to find a value of $`v_B`$ at which the atoms exit the crystal in a nearly pure maximally entangled state.
### C Three atoms
Let us consider the same setup as in our previous discussion except that we now assume three atoms flying through the crystal (see Fig. 1). These three two-level Rydberg atoms ($`A`$, $`B`$ and $`C`$) are injected into the holes at the bottom side of the crystal simultaneously. The atom $`A`$ is initially in its upper level $`|e_A\rangle `$ while atoms $`B`$ and $`C`$ are initially in their lower states $`|g_B\rangle `$ and $`|g_C\rangle `$. The single defect mode is initially prepared in its vacuum state $`|0\rangle `$. The atoms move along the axes of the holes and interact with the defect mode in the central region of the crystal. The electric-field amplitude of the mode is given by Eq. (17). We consider a slightly asymmetric position of the defect mode in the crystal (the reason is explained below). In Fig.5 we present plots of the final atomic populations versus the velocities $`v_B`$ and $`v_C`$, while $`v_A`$ is fixed at the value $`500\mathrm{m}\mathrm{s}^{-1}`$. These plots show that by adjusting the atomic velocities we can obtain the required probabilities such that in the final state (2) the probability amplitude $`\gamma (\tau )`$ is equal to zero, which means that the defect mode is decoupled from the atomic system. The atoms are then in a pure superposition state. In particular, if we select the velocities $`v_B=536.4\mathrm{m}\mathrm{s}^{-1}`$ and $`v_C=527.4\mathrm{m}\mathrm{s}^{-1}`$, we obtain a final state with equal probabilities $`|a(\tau )|^2=|b(\tau )|^2=|c(\tau )|^2\approx 0.33`$ (see Fig.6). The square of $`|\gamma (\tau )|`$ gives the probability of a photon in the final state, approximately equal to $`0.02`$. It means that the atomic subsystem is to a good approximation decoupled from the field subsystem.
We have chosen an asymmetric position of the defect mode with respect to the center of the crystal because for the symmetric position we were able to obtain the “symmetric” result $`|a(\tau )|^2=|b(\tau )|^2=|c(\tau )|^2\approx 0.33`$ only when two of the velocities are equal. In this case we face the problem of the collision of the atoms. We expect that a better choice of the defect geometry might produce a final state even more disentangled from the field than the case presented in Fig.6.
We see from Fig.5 that variations of the final atomic populations are rather robust with respect to changes in the velocities, i.e. uncontrolled velocity fluctuations (which in experiments can be reduced to $`0.4\mathrm{m}\mathrm{s}^{-1}`$ ) do not degrade the predicted entanglement.
## V Conclusions
In this paper we have shown that atoms can be entangled in photonic crystals via the dipole-dipole interaction mediated by off-resonant modes or via an interaction with a single defect mode. In the first mechanism (RDDI) the atoms can coherently exchange excitation while only a very small part of this energy is radiated into the field. However, this interaction might not be easy to control in an experiment because it requires high-precision control of the positions of the atoms. The second mechanism (via a single resonant defect mode) is experimentally more promising because it can be realized with currently available microwave photonic crystals and with highly excited Rydberg atoms.
We have shown that atoms can be prepared in pure entangled states and that the probability amplitudes of the generated superposition states of the atoms can be coherently controlled by varying the velocities of the atoms or by varying the orientations of the atomic dipole matrix elements.
In our scheme of entanglement via defect modes in photonic crystals the distance between the entangled atoms at the exit from the medium depends on the size of the medium, the angle between the atomic trajectories, the atomic velocities and the lifetimes of the atomic states. For the parameters used in this paper the distance between the entangled atoms is of the order of tens of centimeters.
Finally, we think that investigation of dynamics of Rydberg atoms in photonic crystals is an interesting complement to current experimental cavity quantum electrodynamics.
DFTUZ 98/34

# Parity and CT realization in QCD
Vicente Azcoiti and Angelo Galante
Departamento de Física Teórica, Facultad de Ciencias, Universidad de Zaragoza,
50009 Zaragoza (Spain).
ABSTRACT
We show that an essential assumption in Vafa and Witten’s theorem on P and CT realization in vector-like theories, the existence of a free energy density in Euclidean space in the presence of any external hermitian symmetry breaking source, does not apply if the symmetry is spontaneously broken. The assumption that the free energy density is well defined requires the previous assumption that the symmetry is realized in the vacuum. Even if Vafa and Witten’s conjecture is plausible, a theorem is still lacking.
A few years ago Vafa and Witten gave an argument against the spontaneous breaking of parity in vector-like parity-conserving theories such as QCD . The main point in their proof was the crucial observation that any hermitian local order parameter $`X`$ constructed from Bose fields should be proportional to an odd power of the four-index antisymmetric tensor $`ϵ^{\mu \nu \rho \eta }`$ and therefore would pick up a factor of $`i`$ under Wick rotation. The addition of an external symmetry breaking field $`\lambda X`$ to the Lagrangian in Minkowski space then becomes a pure phase factor in the path-integral definition of the partition function in Euclidean space. But a pure phase factor in the integrand of a partition function with positive definite integration measure can only increase the vacuum energy density, and their conclusion was that, in such a situation, the mean value of the order parameter should vanish in the limit of vanishing symmetry breaking field.
A weak point in this simple and nice argument is the assumption that the vacuum energy density (equivalently, the free energy density) is well defined when the symmetry breaking external field $`\lambda `$ is not zero.
We want to show here how Vafa and Witten’s argument breaks down if parity is spontaneously broken. In other words, the assumption that the vacuum energy density is well defined at non-vanishing $`\lambda `$ requires the previous assumption that parity is not spontaneously broken.
Before going on with the presentation of our argument, let us say that it is rather surprising that the impossibility of spontaneously breaking a symmetry depends so crucially on the fact that an hermitian order parameter picks up a factor of $`i`$ under Wick rotation. It is well known in Statistical Mechanics that the inclusion of an external symmetry breaking field in the Hamiltonian of a statistical system is a useful tool to analyze spontaneous symmetry breaking. If the symmetry breaking term is finite, i.e. if its contribution to the Hamiltonian is proportional to the number of degrees of freedom, a very small perturbation is enough to select one among the degenerate vacua when the symmetry is spontaneously broken. But it is also known that the addition of a symmetry breaking term to the Hamiltonian is not necessary to analyze spontaneous symmetry breaking. The analysis of the probability distribution function (p.d.f.) of the order parameter in the symmetric model has been extensively used to investigate spontaneous symmetry breaking in spin systems or to analyze, in complex systems such as spin glasses, the structure of equilibrium states not connected by symmetry transformations . The p.d.f. formalism has also been extended to quantum field theories with fermionic degrees of freedom and applied to the analysis of the chiral structure of the vacuum in vector-like gauge theories. In other words, the symmetric action contains enough information on the vacuum structure.
In our approach, in order to work with well defined mathematical objects, we will use the lattice regularization scheme and assume that the lattice regularized action preserves, as for Kogut-Susskind fermions, the positivity of the determinant of the Dirac operator. The other essential assumption we use is that the hermitian P-non-conserving order parameter is a local operator constructed from Bose fields and therefore, as any intensive operator, it does not fluctuate in a pure vacuum state. This property is equivalent to the statement that all connected correlation functions satisfy the cluster property in a pure vacuum state.
The Euclidean path integral formula for the partition function is
$$𝒵=\int 𝑑A_\mu ^a\,𝑑\overline{\psi }\,𝑑\psi \,\mathrm{exp}\left(-\int d^4x\left(L(x)+i\lambda X(x)\right)\right)$$
(1)
where, following Vafa and Witten, we have exhibited the factor of $`i`$ that arises from Wick rotation, i.e. $`X`$ in (1) is real.
Using the p.d.f. of the order parameter $`X`$, we can write the partition function as
$$𝒵(\lambda )=𝒵(0)\int 𝑑\stackrel{~}{X}\,P(\stackrel{~}{X},V)\,e^{i\lambda V\stackrel{~}{X}}$$
(2)
where $`V`$ in (2) is the number of lattice sites, $`P(\stackrel{~}{X},V)`$ is the p.d.f. of $`X`$ at a given lattice volume
$$P(\stackrel{~}{X},V)=\frac{\int 𝑑A_\mu ^a\,𝑑\overline{\psi }\,𝑑\psi \,e^{-\int d^4xL(x)}\,\delta \left(\overline{X}(A_\mu ^a)-\stackrel{~}{X}\right)}{\int 𝑑A_\mu ^a\,𝑑\overline{\psi }\,𝑑\psi \,e^{-\int d^4xL(x)}}$$
(3)
and
$$\overline{X}(A_\mu ^a)=\frac{1}{V}\int d^4x\,X(x)$$
Notice that, since the integration measure in (3) is positive or at least positive semi-definite, $`P(\stackrel{~}{X},V)`$ is a true, well normalized p.d.f.
Let us assume that parity is spontaneously broken. In the simplest case, in which there is no extra vacuum degeneracy due to the spontaneous breakdown of some other symmetry, we will have two vacuum states, as corresponds to a discrete $`Z_2`$ symmetry. Since $`X`$ is an intensive operator, the p.d.f. of $`X`$ will be, in the thermodynamical limit, the sum of two $`\delta `$ distributions:
$$\underset{V\to \infty }{\mathrm{lim}}P(\stackrel{~}{X},V)=\frac{1}{2}\delta (\stackrel{~}{X}-a)+\frac{1}{2}\delta (\stackrel{~}{X}+a)$$
(4)
At any finite volume, $`P(\stackrel{~}{X},V)`$ will be some symmetric function ($`P(\stackrel{~}{X},V)=P(-\stackrel{~}{X},V)`$) developing a two-peak structure at $`\stackrel{~}{X}=\pm a`$ and approaching (4) in the infinite volume limit.
Due to the symmetry of $`P(\stackrel{~}{X},V)`$ we can write the partition function as
$$𝒵(\lambda )=2𝒵(0)\,\mathrm{Re}\int _0^{\infty }P(\stackrel{~}{X},V)e^{i\lambda V\stackrel{~}{X}}\,𝑑\stackrel{~}{X}$$
(5)
and if we pick up a factor of $`e^{i\lambda Va}`$
$$𝒵(\lambda )=2𝒵(0)\,\mathrm{Re}\left(e^{i\lambda Va}\int _0^{\infty }P(\stackrel{~}{X},V)e^{i\lambda V(\stackrel{~}{X}-a)}\,𝑑\stackrel{~}{X}\right)$$
(6)
which after simple algebra reads as follows:
$`𝒵(\lambda )/(2𝒵(0))=`$ $`\mathrm{cos}(\lambda Va){\displaystyle \int _0^{\infty }}P(\stackrel{~}{X},V)\mathrm{cos}\left(\lambda V(\stackrel{~}{X}-a)\right)𝑑\stackrel{~}{X}`$ (7)
$``$ $`-\mathrm{sin}(\lambda Va){\displaystyle \int _0^{\infty }}P(\stackrel{~}{X},V)\mathrm{sin}\left(\lambda V(\stackrel{~}{X}-a)\right)𝑑\stackrel{~}{X}`$
The relevant zeroes of the partition function in $`\lambda `$ can be obtained as the solutions of the following equation:
$$\mathrm{cot}(\lambda Va)=\frac{\int _0^{\infty }P(\stackrel{~}{X},V)\mathrm{sin}\left(\lambda V(\stackrel{~}{X}-a)\right)𝑑\stackrel{~}{X}}{\int _0^{\infty }P(\stackrel{~}{X},V)\mathrm{cos}\left(\lambda V(\stackrel{~}{X}-a)\right)𝑑\stackrel{~}{X}}$$
(8)
Let us assume for a while that the denominator in (8) is constant at large $`V`$. Since the absolute value of the numerator is bounded by 1, the partition function will have an infinite number of zeroes whose distance from the origin ($`\lambda =0`$) shrinks as $`1/V`$. In such a situation the free energy density does not converge in the infinite volume limit.
But this is essentially what happens in the actual case. In fact if we consider the integral in (8)
$$f(\lambda V,V)=\int _0^{\infty }P(\stackrel{~}{X},V)\mathrm{cos}\left(\lambda V(\stackrel{~}{X}-a)\right)𝑑\stackrel{~}{X}$$
(9)
as a function of $`\lambda V`$ and $`V`$, it is easy to check that the derivative of $`f(\lambda V,V)`$ with respect to $`\lambda V`$ vanishes in the large volume limit due to the fact that $`P(\stackrel{~}{X},V)`$ develops a $`\delta (\stackrel{~}{X}-a)`$ in the infinite volume limit. At fixed large volumes $`V`$, $`f(\lambda V,V)`$ as a function of $`\lambda V`$ is an almost constant non-vanishing function (it takes the value $`1/2`$ at $`\lambda V=0`$). The previous result on the zeroes of the partition function in $`\lambda `$ therefore remains unchanged; it generalizes the Lee-Yang theorem on the zeroes of the grand canonical partition function of the Ising model in the complex fugacity plane to any statistical model with a discrete $`Z_2`$ symmetry.
To illustrate this result with an example, let us take for $`P(\stackrel{~}{X},V)`$ a double gaussian distribution
$$P(\stackrel{~}{X},V)=\frac{1}{2}\left(\frac{V}{\pi }\right)^{1/2}\left(e^{-V(\stackrel{~}{X}-a)^2}+e^{-V(\stackrel{~}{X}+a)^2}\right)$$
(10)
which gives for the partition function
$$𝒵(\lambda )=𝒵(0)\mathrm{cos}(\lambda Va)e^{-\frac{1}{4}\lambda ^2V}$$
(11)
and for the mean value of the order parameter
$$\langle iX\rangle =\frac{1}{2}\lambda +a\,\mathrm{tan}(\lambda aV)$$
(12)
The structure of the zeroes of the partition function is evident in (11), and consequently the mean value of the order parameter (12) is not defined in the thermodynamical limit. Notice also that if $`a=0`$ (symmetric vacuum), the free energy density is well defined at any $`\lambda `$ and then Vafa and Witten’s argument applies.
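The accumulation of zeroes at the origin is also transparent numerically. In the sketch below (our notation) the zero of (11) closest to the origin, $`\lambda _1=\pi /(2Va)`$, is seen to shrink as $`1/V`$:

```python
import numpy as np

def partition_ratio(lam, V, a):
    """Z(lambda)/Z(0) for the double-Gaussian p.d.f., Eq. (11)."""
    return np.cos(lam * V * a) * np.exp(-0.25 * lam ** 2 * V)

def smallest_zero(V, a):
    """Zero of cos(lambda V a) closest to the origin: pi / (2 V a)."""
    return np.pi / (2.0 * V * a)

# the smallest zero moves toward lambda = 0 as the volume grows
for V in (10, 100, 1000, 10000):
    lam1 = smallest_zero(V, a=1.0)
    print(V, lam1, partition_ratio(lam1, V, a=1.0))
```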
In conclusion, we have shown that an essential assumption in Vafa and Witten’s theorem on P and CT realization in vector-like theories, namely the existence of a free energy density in Euclidean space in the presence of any external hermitian symmetry breaking source, does not apply if the symmetry is spontaneously broken. The assumption that the free energy density is well defined requires the previous assumption that the symmetry is realized in the vacuum.
To clarify this point let us discuss a simple model which, like vector-like theories, has a positive definite integration measure and, after the introduction of an imaginary order parameter, a complex action: the Ising model in the presence of an imaginary external magnetic field. This model verifies all the conditions of the Vafa-Witten theorem. If we assume that the free energy density exists, we will conclude that the $`Z_2`$ symmetry is not spontaneously broken. This is obviously wrong in the low temperature phase. The solution to this paradox lies in the fact that the free energy density in the low temperature phase and for an imaginary magnetic field is not defined (it is singular on the imaginary axis of the complex magnetic field plane). It is true that this model is not a vector-like gauge model, but in any case it satisfies all the Vafa-Witten conditions except the existence of the free energy density. This example demonstrates that such an assumption is not trivial and, what is more relevant, that assuming the existence of the free energy density is at least as strong as assuming the symmetry to be realized in the vacuum.
A possible way to prove Vafa and Witten’s claim on parity realization in vector-like theories could be to show the existence of a Transfer Matrix connecting the Euclidean formulation with the Hamiltonian approach in the presence of any hermitian symmetry breaking field. A weaker sufficient condition would be the positivity of $`Z(\lambda )`$ around $`\lambda =0`$, even if, from a mathematical point of view, it is not a necessary one for the symmetry to be realized.
The proof of these conditions for any symmetry breaking operator seems very hard, even for the weaker condition. However, in the case of the more standard operator $`F\stackrel{~}{F}`$, associated with the $`\theta `$-vacuum term, the reflection positivity of $`𝒵`$ has been shown for the two-dimensional pure gauge model using the lattice regularization scheme . To our knowledge a generalization of this result to four-dimensional theories and (or) dynamical fermions does not exist. Only arguments suggesting that a consistent Hamiltonian approach could be constructed for the four-dimensional pure gauge Yang-Mills model can be found in the literature . Summarizing, even if Vafa and Witten’s conjecture seems to be plausible, a theorem on the impossibility of spontaneously breaking parity in vector-like theories is still lacking.
Acknowledgements
We thank M. Asorey for useful discussions and E. Witten for a critical reading of the manuscript. This work has been partially supported by CICYT (Proyecto AEN97-1680). A.G. was supported by a Istituto Nazionale di Fisica Nucleare fellowship at the University of Zaragoza.
# Phase Coexistence of Complex Fluids in Shear Flow
## I Introduction
Shear flow induces phase transitions and dynamic instabilities in many complex fluids, including wormlike micelles , liquid crystals , and lamellar surfactant systems which can “roll” into multilamellar vesicles (“onions”) . These instabilities typically manifest themselves in non-monotonic constitutive curves such as those in Fig. 1 , and in several systems, including wormlike micelles and lamellar surfactant solutions, are accompanied by observable coexistence of two macroscopic “phases” of material.
If a mean strain rate forces the system to lie on an unstable part of the flow curve (with negative slope), the system can phase separate into regions (bands) with high and low strain rates and still maintain the applied strain rate . Fig. 1 shows that phase separation can occur at either common stress or common strain rate, depending on the geometry of the bands: bands stacked along the axis of a Couette cell have the same strain rate and different shear stresses, while radial phase separation imposes a uniform shear stress and different strain rates. The shear-thinning wormlike micelle system phase separates radially into common-stress bands, while shear-thickening systems have been observed to separate into bands with either the common strain rate (worms) or common stress (worms and onions) geometry, although the evidence for true steady state phase separation at common stress is not yet firm.
Other systems with flow-induced “phase transitions” include colloidal suspensions of plate-like particles (which shear thicken and sometimes crystallize) or nearly monodisperse spheres , as well as a variety of surfactant-like solutions of diblock copolymers in selective solvents . With so many increasingly detailed and careful experiments on so many systems, it would be nice to have a consistent framework for non-equilibrium transitions. Unfortunately, most systems are sufficiently complicated that none of the observed transitions can be completely described, even qualitatively, by a credible microscopic model. For example, in certain limits the class of shear-thinning wormlike micelle systems which shear band has a mature theory for the linear rheology, which may be extended using successful ideas from entangled polymer dynamics to predict an instability . Unfortunately, a complete description of phase coexistence also requires knowledge of the shear-induced state, as well as details of the concentration dependence and, as we shall see, of the inhomogeneous contribution to the dynamical equations. We are, at present, far from having all of these ingredients, and in many cases we do not have a clear understanding of even the structure of the high shear rate state, much less its dynamics.
Recently, we have studied a well-known model, the Doi model for rigid rod suspensions in shear flow , which, while admittedly the product of many approximations, provides physically well-founded dynamics for both quiescent and shear-induced states. Although the shear rates necessary for inducing a transition are, in practice, typically quite high unless very long rods are used, and physical systems are often susceptible to various dynamical instabilities, this model system is quite helpful for building intuition about how to calculate non-equilibrium “phase diagrams” and how their resulting topologies resemble and differ from their equilibrium counterparts.
A vexing question for non-equilibrium calculations is how to replace the free energy minimization familiar from equilibrium thermodynamics in order to determine the analog of a first order phase transition. In the context of Fig. 1, one needs to determine the selected stress for an imposed strain rate (for phase separation at a common stress). It has emerged that an unambiguous resolution of this problem is to include explicit non-local terms in the dynamical equations and explicitly construct the coexisting state . This reduces to the equilibrium construction in the case of zero shear, and can be shown to yield a single (barring accidental degeneracies) stress (given all other imposed conditions) at which coexistence occurs .
Below we summarize some results of our calculations on the Doi model for rigid rod suspensions in shear flow ; the details will be published elsewhere . This system is surprisingly rich, given its apparent simplicity and fairly obvious coupling of internal order to flow. Then we discuss some aspects of the interface construction for determining coexistence, and how it compares to its equilibrium counterpart.
## II The Doi Model
The modified Doi model describes the dynamics of a suspension of rod-like particles. The orientational degrees of freedom are parametrized by the conventional liquid crystalline order parameter tensor
$$Q_{\alpha \beta }(𝐫)=\langle \nu _\alpha \nu _\beta \rangle -\frac{1}{3}\delta _{\alpha \beta },$$
(1)
where $`\langle \cdots \rangle `$ denotes an average of the second moment of the rod orientations $`𝝂`$ around the point $`𝐫`$. For rigid rods the phase diagram, and in fact the dynamics, can be more conveniently represented by the excluded volume parameter $`u`$, defined by
$$u=\varphi L\alpha ,$$
(2)
where $`\varphi `$ is the rod volume fraction, $`L`$ is the rod aspect ratio and $`\alpha `$ is an $`𝒪(1)`$ prefactor . Beginning from the Smoluchowski equation for a solution of rigid rods, and including a Maier-Saupe–like orientational interaction parameter, Doi was able to derive approximate coupled equations of motion for the dynamics of $`𝑸`$ and the fluid velocity $`𝐯(𝐫)`$, including the liquid crystalline contribution to the fluid stress tensor.
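In a simulation, $`𝑸`$ is obtained directly from sampled rod orientations; a minimal sketch (our notation) is:

```python
import numpy as np

def order_parameter(nu):
    """Q_ab = <nu_a nu_b> - delta_ab / 3 from an (N, 3) array of unit vectors."""
    nu = np.asarray(nu, float)
    return np.einsum('na,nb->ab', nu, nu) / len(nu) - np.eye(3) / 3.0

def excluded_volume_parameter(phi, L, alpha=1.0):
    """u = phi L alpha of Eq. (2); alpha is the O(1) prefactor."""
    return phi * L * alpha
```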
The essential physics is that flow tends to align the rodlike molecules, typically roughly parallel to the flow direction, and hence stabilizes a nematic, or aligned, state. To study other complex fluids we would have a structural variable analogous to $`𝑸`$; *e.g.* in the wormlike micelle system we might need, in addition to the orientation tensor, the dynamics of the mean micellar size. We have augmented the Doi model by allowing for concentration diffusion driven by chemical potential gradients, included the dynamical response to inhomogeneities in liquid crystalline order and concentration, and included the translational entropy of mixing which gives the system a biphasic coexistence regime in the absence of flow. For a given stress or strain rate we determine phase coexistence by explicitly constructing a stable coexisting steady state, which requires inhomogeneous terms in the equations of motion (arising here from free energy terms which penalize inhomogeneities in $`𝑸`$ and $`\varphi `$). This procedure and the model have been documented elsewhere , and the interface construction will be discussed in more detail in Sec. III.
To calculate the phase diagram we solve for the steady state homogeneous solutions to the coupled dynamical equations for $`\{\varphi ,𝑸,𝐕\}`$. This yields a set of solutions which are then candidates for phase coexistence. Coexistence is possible with either common stress or common strain rate in the coexisting phases, depending on geometry, and must be examined for all pairs of stable homogeneous states. We expect coexisting states to have different values for $`\varphi `$, $`𝑸`$, and either the stress or strain rate. As mentioned above, we determine coexistence by finding the locus of control parameters for which a stable interfacial solution between two homogeneous states exists. We parametrize the shear stress and strain rate as
$`\widehat{\dot{\gamma }}`$ $`=`$ $`{\displaystyle \frac{\dot{\gamma }L^2}{6D_{\mathrm{r}0}\nu _1\nu _2^2}}`$ (3)
$`\widehat{\sigma _{xy}}`$ $`=`$ $`{\displaystyle \frac{\sigma _{xy}\nu _2L^3}{3k_BT}},`$ (4)
where $`D_{\mathrm{r}0}`$ is the rotational diffusion coefficient, and $`\nu _1`$ and $`\nu _2`$ are $`𝒪(1)`$ geometric constants.
The Doi model has three stable steady states in shear flow: A weakly-ordered *paranematic* state I, with the major axis of the order parameter in the shear plane; a *flow-aligning* state N, with a larger order parameter and major axis in the shear plane; and a *log-rolling* state L, with major axis in the vorticity direction. Fig. 2 shows homogeneous constitutive relations for the I and N states. As can be seen, the N and L states are successively less viscous than the I phase at the same concentration, with a viscosity which decreases slightly with increasing concentration (reflecting the greater order and hence lower viscosity of more concentrated phases), in contrast to the less-ordered I phases, whose viscosity increases with concentration, as is usual for colloidal suspensions.
### A Common Stress Phase Separation
We first discuss the phase diagrams for common stress coexistence, in which the phase separation is radial in a cylindrical Couette flow. For common stress coexistence of two phases $`I`$ and $`II`$, the fraction $`\zeta `$ in phase $`I`$ is determined by the lever rule,
$`\overline{\varphi }`$ $`=`$ $`\zeta \varphi _I+(1-\zeta )\varphi _{II}`$ (5a)
$`\overline{\dot{\gamma }}`$ $`=`$ $`\zeta \dot{\gamma }_I+(1-\zeta )\dot{\gamma }_{II},`$ (5b)
where $`\overline{\varphi }`$ and $`\overline{\dot{\gamma }}`$ are mean values. Fig. 3 shows the phase diagram calculated for I-N coexistence for $`L=5`$. The tie lines denoting pairs of coexisting phases are horizontal in the $`\sigma _{xy}-u`$ plane, and have positive slopes in the $`(\widehat{\dot{\gamma }}-u)`$ plane because the more concentrated nematic phase flows faster at a given stress. For weak stresses the equilibrium system is slightly perturbed and the tie lines are almost horizontal, while at high stresses the tie lines become steeper as the composition difference between the phases decreases and vanishes at a critical point.
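To make the partitioning concrete, the following minimal sketch evaluates the lever rule of Eq. (5); the coexistence values are illustrative placeholders, not numbers read off Fig. 3.

```python
# Illustrative tie-line data (hypothetical, not taken from Fig. 3):
# compositions and dimensionless strain rates of the two coexisting phases.
phi_I, phi_II = 2.5, 3.1
gdot_I, gdot_II = 0.8, 1.9

def lever_rule(phi_mean):
    """Fraction zeta of the sample in phase I, from Eq. (5a)."""
    return (phi_mean - phi_II) / (phi_I - phi_II)

zeta = lever_rule(2.8)                             # prescribed mean composition
gdot_mean = zeta * gdot_I + (1 - zeta) * gdot_II   # Eq. (5b)
print(f"zeta = {zeta:.2f}, mean strain rate = {gdot_mean:.2f}")
```

Sweeping the stress through the two-phase region, and hence visiting successive tie lines, generates the composite constitutive curves discussed next.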
Mean Constitutive Relations—From the information in Fig. 3b we can calculate the mean constitutive relation that would be measured in an experiment on a system with a given prescribed mean concentration. Upon applying stress at a given concentration, the system traces a vertical path through Fig. 3 until the two-phase region is reached, during which $`\widehat{\sigma }_{xy}(\overline{\dot{\gamma }})`$ varies smoothly. At this stress a tiny band of N phase develops, with composition and strain rate determined by the lever rule; and $`\widehat{\sigma }_{xy}(\overline{\dot{\gamma }})`$ is non-analytic (Fig. 4), exhibiting a change in slope. As the stress, and hence the mean strain rate, increases further the system visits successive tie lines in the $`\dot{\gamma }-u`$ plane, each with a higher stress and mean strain rate and different coexisting concentrations. For $`\overline{\varphi }`$ close to the equilibrium I-N transition (Fig. 4c) the tie lines in the $`(\widehat{\dot{\gamma }}-u)`$ plane are fairly flat and the stress $`\sigma _{xy}`$ changes significantly through the two-phase region. More dilute systems (Fig. 4a,b) have steeper tie lines and straighter and flatter ‘plateaus’ in $`\widehat{\sigma }_{xy}(\overline{\dot{\gamma }})`$.
Controlled strain rate experiments should follow the homogeneous flow curves, except in the coexistence regime. In this case, analogy with equilibrium systems suggests that the system should eventually nucleate into a phase-separated banded state, with a corresponding stress change. Experiments on wormlike micelles display this kind of behavior upon increasing the strain rate above that of the phase boundary. If the mean strain rate is on an unstable part of the flow curve, we expect a ‘spinodal’ (or mechanical) instability. There is a small region (inside the loop in Fig. 3a) where the system is unstable when brought, at controlled strain rate, into this region from either the I or N states. This corresponds to constitutive curves with the shape of curve b in Fig. 2.
For controlled stress experiments, for stresses larger than the minimum coexistence stress and less than the local maximum we expect the system to follow the homogeneous flow curve until a nucleation event occurs (Fig. 4b). The strain rate should then increase to that of either the banded state or the single N phase, depending on the magnitude of the stress. For stresses larger than the I limit of stability we expect, again by analogy with equilibrium, a spinodal-type instability. This simple picture is not quite corroborated in wormlike micelles: Grand et al. reported that a stress within a narrow range above the coexistence stress could be applied, and the system remained on the “metastable” branch indefinitely.
Analogy with equilibrium systems suggests similar behavior upon reducing the stress or strain rate from the high-shear branch. Careful experiments (on any system) are needed to test the idea. For example, it would be interesting to examine whether, upon reducing the strain rate below the upper strain rate for the onset of shear banding while remaining above the limit of stability of the high strain rate branch, the stress would spontaneously increase into the banded state.
Log Rolling Phase— Fig. 5 shows the phase diagram for paranematic-log rolling I-L coexistence. For non-zero stress the biphasic region shifts to higher concentrations, since the stability limit of the L phase shifts to higher concentrations. Since the I and L phases have major axes of alignment in orthogonal directions, there is no critical point; rather, the biphasic region ends when the I phase becomes unstable to the N phase. We have also computed N-L phase coexistence, but cannot resolve this (very concentrated, $`u\gtrsim 3`$) regime accurately and do not present these results here.
Can one observe I-L coexistence? This can only occur for concentrations above that necessary for equilibrium phase separation. One could conceivably prepare an equilibrium I-N mixture with the nematic phase in the log-rolling geometry. Upon applying shear, the system would then maintain coexistence and move through the I-L two-phase region. However, the I phase is, itself, within the two phase region for I-N phase separation, so we expect the prepared coexisting I-L state to be metastable under shear.
### B Common Strain Rate Phase Separation
Common strain rate phase separation can be calculated exactly analogously to common stress phase separation. The resulting phase diagram for I-N coexistence, for $`L=5`$, is shown in Fig. 6. The shear stress and composition are partitioned according to the lever rule in Fig. 6b, with
$$\overline{\sigma }_{xy}=\zeta \sigma _{xyI}+(1-\zeta )\sigma _{xyII}.$$
(6)
In this case tie lines connecting coexisting phases are horizontal in the $`\dot{\gamma }-u`$ plane, and have a negative slope in the $`(\widehat{\sigma }_{xy}-u)`$ plane because the I phase coexists with a denser and less viscous N phase. There is an interesting crossover in the $`(\widehat{\sigma }_{xy}-u)`$ plane. For dilute systems the stress in the N phase immediately outside the biphasic regime is less than the stress just before the system enters the biphasic region (Fig. 6b). Since the stress of the N branch is less than that of the I branch at the same strain rate and composition, we expect a decrease in the stress across the biphasic regime if composition effects are weak, *e.g.* near a critical point. For higher mean compositions the stress *increases* across the biphasic regime, because the width of the biphasic regime overcomes the shear thinning effect.
Mean Constitutive Relations— The mean constitutive relations that could be measured in experiments may be calculated from Fig. 6a, and are shown in Fig. 7. At higher concentrations the plateau has a positive slope while, coinciding with the crossover noted above, for lower concentrations the plateau has a negative slope. A negative slope usually signifies a bulk instability, but here each band lies on a stable branch of its particular constitutive curve and the flow should be stable. Stable ‘negative-slope’ behavior has been seen in shear-thickening systems which phase separate at common stress, although in that case the mean constitutive curve was different, consisting of a backwards S curve, non-monotonic only in the sense of multiple stresses for a given strain rate.
Based on analogies with equilibrium, we naively expect controlled strain rate and controlled stress experiments on concentrations such as those in Fig. 7a-b to yield behavior similar to that for common stress phase separation, with nucleated or spinodal behavior depending on the applied strain rate, and the same caveat applying to decreasing the strain rate from above. The situation for compositions with curves such as Fig. 7a is qualitatively different. Here there is a range of stresses with *three* stable states: homogeneous I and N branches, and a banded intermediate branch. For controlled stress experiments, one possibility is that the I and N branches are favored in their respective domains of stresses. For example, in start-up experiments the system would remain on the I branch until a certain stress, at which point it would nucleate after some time. If the system nucleated onto the coexistence branch, increasing the stress further would return the system to the I branch. Since it nucleated *from* the I branch, it is more likely to jump directly to the N branch. Similar behavior is to be expected upon reducing the stress from the N phase. Another possibility is intrinsic hysteresis: that is, the system never jumps until reaching its limit of stability (from either the I or N side). The present theory cannot address this question. For controlled strain rate experiments, it could, in principle, be possible to maintain a stress in the two-state region, although in practice this would also seem to be quite difficult, and would seem to be mechanically unstable. In the case where stable composite curves with negative slopes were accessed, stress was the control variable.
### C Common Stress or Common Strain Rate?
What about the relative stability of phase separation at common stress or strain rate? While our one-dimensional calculations cannot address this question, we have examined the two phase diagrams in the $`\sigma _{xy}-\mu `$ and $`\dot{\gamma }-\mu `$ planes, where $`\mu `$ is the chemical potential. This can be seen in Fig. 8a-b where, for example, the I boundary for common strain rate phase separation ($`I_\gamma `$) lies in the N region for common stress phase separation, in the $`\mu -\sigma _{xy}`$ plane. This occurs because the stress of the I phase, at common strain rate, is larger than the stress of the N phase, due to the shear thinning nature of the transition. Conversely, the I phase at common stress lies within the I region of the common strain rate phase diagram. Analogy with equilibrium phase transitions suggests that, since the I phase of common strain rate phase separation thus lies on the “wrong” side of the phase boundary for common stress, given by the line in the $`\mu -\sigma _{xy}`$ plane, it would be unstable (or metastable) to phase separation at common stress. Conversely, the I phase at common stress is on the “correct” side of the coexistence line in the $`\mu -\dot{\gamma }`$ plane, and, again based on analogy with equilibrium, might be expected to be stable. Note that if the transition were shear thickening the situation would be reversed, and the arguments above would lead to common stress phase separation being unstable (or metastable) with respect to common strain rate phase separation.
Boundary conditions may also play a role. In a Couette device the slight inhomogeneity of Couette flow induces an asymmetry between the inner and outer cylinders, which has exactly the symmetry of common stress phase separation (Fig. 1). This should enhance the stability of common stress phase separation. Cone-and-plate rheometry induces a similar preference for the common stress geometry.
An alternative possibility is presented in Fig. 8. One may argue that, in steady state, among the possible states compatible with the interface solvability condition, the system selects that with the lowest chemical potential, so that no further diffusive material flux is possible. Based on such a criterion, upon increasing the strain rate for a given mean concentration the stable phase is that with the lowest chemical potential. The thick horizontal arrows in Fig. 8 denote the $`\mu (\sigma _{xy})`$ and $`\mu (\dot{\gamma })`$ paths for the homogeneous high and low shear rate states, in the two phase diagrams. The I branch becomes unstable at A to phase separation at common stress, when the homogeneous path first crosses the phase boundary in the $`\mu -\sigma `$ plane. For higher stresses the system follows the segment AB in Fig. 8b, along the phase coexistence line at common stress, and follows the stress plateau AB in Fig. 8c. In the $`\mu -\dot{\gamma }`$ plane the system phase separates, and the chemical potential as a function of mean strain rate follows the diagonal path AB in Fig. 8a (the dotted lines denote the strain rates of the coexisting phases).
Upon increasing the strain rate beyond point B, the chemical potential of the system can decrease by phase separating at a common strain rate. This reduces the chemical potential, at a given strain rate, from that of the segment BD to that of segment BC. Hence the system would take the path BC along the phase boundary in the $`\mu -\dot{\gamma }`$ plane, as far as point C, at which point the phase boundary crosses the homogeneous curve for the high strain rate phase of the given mean concentration. The path would be the diagonal path BC in the $`\mu -\sigma _{xy}`$ plane, and would correspond to the negative-sloped segment BC in the flow curve, Fig. 8c. Finally the system follows the high strain rate branch, through CD.
Upon increasing the controlled stress, the system would be expected to follow ABD. Upon decreasing the stress or the strain rate, the system is expected to follow DC and then jump from C to the bottom branch. These scenarios follow from minimizing the chemical potential subject to the solvability constraint should phases coexist. Their correctness should, of course, be examined further via the full time evolution of the original dynamical equations.
Experimental Studies— Mather et al. studied a liquid crystalline polymer melt (an aromatic polyester) and determined the lower limit of the I-N phase boundary in the $`\dot{\gamma }-T`$ plane. The studies most relevant to the Doi model for rigid rod suspensions have been on wormlike micellar solutions near their isotropic-nematic coexistence region, where common-stress banding was observed with a plateau stress that became steeper for concentrations closer to the equilibrium I-N phase boundary, in qualitative agreement with our results. Common strain rate banding has not been seen in these systems. Micelles are considerably more complicated than simple rigid rods, because they are not strictly rigid and their length (and hence coupling to flow) is a strong dynamic function of concentration. Experiments on micelles far from an apparent nematic transition exhibit common stress shear banding with nearly flat coexistence plateaus, consistent with a concentration-independent instability (or transition). In kinetics studies the delay time before the transition to a banded (or high strain rate) flow in controlled stress start-up ‘quenches’ diverged for a window of stresses slightly above the banding stress, whereas controlled strain rate ‘quenches’ always decayed, eventually, onto a banded flow state. These interesting behaviors cannot be explained by the topologies of the phase diagrams in Fig. 3. Bonn *et al.* recently studied lamellar surfactant systems and observed slowly coarsening bands in the common strain rate geometry; and for controlled strain rate measurements they found transient constitutive curves analogous to Fig. 7a or 7b, consistent with common-strain rate phase separation. The true steady state behavior was not measured.
## III Interface Construction
Several microscopic and phenomenological models, as well as the apparent underlying flow curves for wormlike micelles, show a degeneracy in the shear stress at which coexistence occurs (in the case of coexistence at a common stress). To resolve this degeneracy we have relied on the presence of inhomogeneous terms in the dynamical equations of motion, and determined the selected stress as that stress which allows a stable interfacial solution. In this section we explore this in more detail using a toy constitutive model. Similar arguments were given by Spenley *et al.* in a different language, and in a recent more rigorous study.
We consider planar flow with a velocity field $`𝐯(𝐫)=v(y)\widehat{𝐱}`$, with $`\dot{\gamma }(y)\equiv \partial v/\partial y`$, and postulate the following constitutive relation for the shear stress:
$$\sigma (\dot{\gamma })=\sigma _h(\dot{\gamma })-D(\dot{\gamma })\,\partial _y^2\dot{\gamma }.$$
(7)
The homogeneous flow curve $`\sigma _h(\dot{\gamma })`$ is non-monotonic, as in Fig. 9; and can be derived for a system with an underlying transition, as for the modified Doi model above, or from phenomenological models, such as the widely-used Johnson-Segalman (JS) model. Gradient terms may come from the diffusion of the stress elements. The flow curve shown in Fig. 9 is for the JS model. In the (somewhat artificial) model where only the shear stress diffusion is considered, the steady flow condition for the JS model has the form of Eq. (7), with $`D(\dot{\gamma })\propto 1/(1+\dot{\gamma }^2)`$.
The steady state condition for planar flow is a uniform shear stress,
$$\sigma _0=\sigma _h(\dot{\gamma })-D(\dot{\gamma })\,\partial _y^2\dot{\gamma },$$
(8)
with $`\sigma _0`$ a constant. In an infinite system, an interfacial shear banding solution at a given stress $`\sigma _0`$ satisfies Eq. (8), with boundary conditions
$`\dot{\gamma }(-\infty )`$ $`=`$ $`\dot{\gamma }_A`$ (9a)
$`\dot{\gamma }(+\infty )`$ $`=`$ $`\dot{\gamma }_B.`$ (9b)
$$\partial _y\dot{\gamma }(\pm \infty )=0$$
(10)
Hence, given the second-order differential equation, the system is overdetermined. A solution is only possible when these conditions are mutually compatible, which may be arranged by varying the stress $`\sigma _0`$. It is straightforward to integrate Eq. 8 to show that a solution is possible when $`\sigma _0`$ satisfies the following condition
$$\int _{\dot{\gamma }_A}^{\dot{\gamma }_B}\frac{\sigma _0-\sigma _h(\dot{\gamma })}{D(\dot{\gamma })}\,d\dot{\gamma }=0.$$
(11)
Note that this is not an equal areas construction, unless $`D(\dot{\gamma })`$ is a constant $`D`$.
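As a concrete illustration, the sketch below solves this solvability condition numerically. The flow curve and gradient coefficient are toy stand-ins (a non-monotonic curve plus the JS-like $`D\propto 1/(1+\dot{\gamma }^2)`$ mentioned above), not the actual Doi-model expressions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy inputs (illustrative stand-ins, not the actual Doi-model expressions).
eps = 0.05
sigma_h = lambda g: g / (1 + g**2) + eps * g
D       = lambda g: 1.0 / (1 + g**2)

def outer_roots(s0):
    """Strain rates gdot_A < gdot_B on the outer branches with sigma_h = s0."""
    r = np.sort(np.roots([eps, -s0, 1 + eps, -s0]).real)
    return r[0], r[-1]

def mismatch(s0):
    gA, gB = outer_roots(s0)
    return quad(lambda g: (s0 - sigma_h(g)) / D(g), gA, gB)[0]   # Eq. (11)

# Bracket the multivalued window by the local extrema of sigma_h, then
# root-find the selected stress at which the weighted areas cancel.
g = np.linspace(1e-3, 30, 30001)
s = sigma_h(g)
i = np.where(np.diff(np.sign(np.diff(s))) != 0)[0] + 1   # extrema indices
s_max, s_min = s[i[0]], s[i[1]]                          # local max, local min
sigma_sel = brentq(mismatch, s_min + 1e-4, s_max - 1e-4)
print(f"selected stress: {sigma_sel:.4f}  (window {s_min:.4f}..{s_max:.4f})")
```

Setting $`D`$ constant in the same script recovers the equal-areas stress; the $`1/(1+\dot{\gamma }^2)`$ weighting shifts the selected value away from it.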
Further insight may be obtained by casting the interface solution in terms of a dynamical system. Defining
$`p`$ $`=`$ $`\dot{\gamma }`$ (12a)
$`q`$ $`=`$ $`\partial _yp\equiv p^{},`$ (12b)
Eq. (8) becomes the following dynamical system, with $`y`$ playing the role of time.
$`p^{}`$ $`=`$ $`q`$ (13a)
$`q^{}`$ $`=`$ $`-{\displaystyle \frac{\sigma _0-\sigma _h(p)}{D(p)}}.`$ (13b)
For $`\sigma _0`$ within the non-monotonic region of the flow curve the system has three fixed points $`p_{*}=\{p_A,p_B,p_C\}`$ on the axis $`q=0`$, corresponding to the strain rates of the three homogeneous flows. Linear stability analysis yields the stable and unstable manifolds of points $`A`$ and $`B`$, with eigenvalues
$$\lambda _\pm =\pm \sqrt{\left[\frac{1}{D(p)}\frac{d\sigma _h}{dp}\right]|_{p=p_{*}}}$$
(14)
and eigenvectors at angles $`\theta =\mathrm{arctan}\lambda `$ with respect to the $`p`$-axis. Point $`C`$ has imaginary eigenvalues and is a center, while $`A`$ and $`B`$ are saddles with stable and unstable directions.
An interfacial solution corresponds to an orbit connecting saddles $`A`$ and $`B`$, and is denoted a saddle connection; it is also called a heteroclinic orbit, since it connects two different fixed points. This set of ordinary differential equations (ODEs) does not generally have a saddle connection for an arbitrary $`\sigma _0`$ in the multi-valued region. It can be shown that, for models (with arbitrary numbers of dynamical variables) in planar shear flow with differential non-local terms, a saddle connection exists, barring accidents, only at isolated points in the control parameter space. Here, the control parameters are $`\sigma _0`$ and parameters which change the shape of $`\sigma _h`$, while for the Doi model above, the control parameters are (for a given set of molecular parameters such as $`L`$) $`\mu `$ and $`\sigma _{xy}`$ for common stress phase separation; and $`\mu `$ and $`\dot{\gamma }`$ for common strain rate phase separation.
Fig. 10 shows the evolution of “orbits” in the $`p-q`$ phase space as the stress is tuned. For $`\sigma =\sigma _0`$ a heteroclinic orbit exists, connecting $`A`$ and $`B`$. This corresponds to an elementary shear band solution, in which one portion of the sample lies on the high strain rate branch $`B`$, another portion lies on the low strain rate branch $`A`$, and a single interface separates the two phases. For $`\sigma \ne \sigma _0`$ there is no heteroclinic orbit or saddle connection, and hence no stationary interface. Fig. 10 (Left) shows a stress slightly greater than $`\sigma _0`$, where a homoclinic orbit connects state $`A`$ to itself. Kramer pointed out in the context of reaction diffusion equations that such a homoclinic orbit corresponds to the critical droplet in a metastable phase of $`A`$ material. Note that, although in real space it goes from $`A`$ at $`y=-\infty `$ to $`A`$ at $`y=+\infty `$, the dominant spatial variation is in fact localized, with a size that vanishes when the stress reaches the maximum of the flow curve in Fig. 9 (at which the fixed points $`A`$ and $`C`$ annihilate). Slightly larger droplets are unstable and, when the full dynamics are restored to the problem, presumably flow to the high strain rate branch $`B`$, while smaller droplets are expected to decay back to $`A`$. By analogy with equilibrium behavior, for $`\sigma >\sigma _0`$ we expect phase $`B`$ to be the long time steady state, if fluctuations (*i.e.* noise, thermal or otherwise) were included.
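The phase-plane picture can be checked directly by integrating Eqs. (13), with $`y`$ playing the role of time, shooting from just off saddle $`A`$ along its unstable eigenvector. A minimal sketch (same toy $`\sigma _h`$ and $`D`$ as in the previous sketch, so again illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same toy flow curve and gradient coefficient as in the previous sketch.
eps = 0.05
sigma_h = lambda p: p / (1 + p**2) + eps * p
D       = lambda p: 1.0 / (1 + p**2)
dsig    = lambda p: (1 - p**2) / (1 + p**2)**2 + eps    # d(sigma_h)/dp

s0 = 0.49     # a stress inside the multivalued window; substitute the selected
              # stress found above to approach a true saddle connection
pA, pC, pB = np.sort(np.roots([eps, -s0, 1 + eps, -s0]).real)

lam = np.sqrt(dsig(pA) / D(pA))    # unstable eigenvalue at saddle A, Eq. (14)
d0 = 1e-6                          # small step along the unstable eigenvector

leave = lambda y, s: (s[0] - 0.5 * pA) * (1.5 * pB - s[0])
leave.terminal = True              # stop once the orbit clearly escapes

sol = solve_ivp(lambda y, s: [s[1], (sigma_h(s[0]) - s0) / D(s[0])],
                [0, 60], [pA + d0, lam * d0], events=leave,
                max_step=0.02, rtol=1e-10, atol=1e-12)
print(f"orbit reaches p = {sol.y[0, -1]:.3f}; saddle B sits at p = {pB:.3f}")
# At the selected stress the orbit lands on B (the heteroclinic connection);
# away from it, the trajectory turns back toward A or overshoots B.
```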
## IV Conclusion
We have outlined the phenomenology of phase separation of rigid rod suspensions in shear flow, using the modified Doi model. Phase separation may occur with common stress *or* strain rate, corresponding to the different coexistence geometries. We have calculated coexistence among three phases (paranematic, flow-aligning nematic, and log-rolling), while only two equilibrium phases exist. That is, the full rotational symmetry of an equilibrium nematic is broken by the biaxial shear flow, leaving two possible stable nematic orientations (the in-plane I and N states, and the out of plane L state). The shear thinning nature of the transition suggests that common stress phase separation is stable; while appealing to a minimization of the chemical potential, subject to the interface solvability condition, predicts a curious crossover from common stress to common strain rate phase separation. We do not know which of these, or other, possibilities is the physical one. The composite stress-strain curves depend on the coupling to composition, and can exhibit an apparent unstable constitutive relation, which would be mechanically unstable under controlled strain rate conditions. Although there have been few experiments on true lyotropic rigid rod systems in flow, wormlike micelles can have a flow-induced nematic phase at higher concentrations, and our results appear to qualitatively describe many aspects of these experiments. See, for example, the phase diagrams in Ref. .
We have also shown schematically how our construction for coexistence can be cast as an equivalent dynamical system, for which coexistence corresponds to a heteroclinic saddle connection. In the most general case stress selection depends on the nature of the gradient terms in the dynamics, while in equilibrium systems the gradient terms can be exactly integrated to yield a coexistence condition independent of them. The dynamical systems picture also yields an analogy with a critical droplet, which may prove useful in understanding the non-equilibrium analogs of nucleation and growth.
Acknowledgments We are grateful to M. Cates, B. L. Hao, R. Ball, and O. Radulescu for fruitful conversations.
# 𝑒⁺𝑒⁻ pairs from 𝜋⁻A reactions

Work supported by DFG, BMBF, and GSI Darmstadt.
## I Introduction
The spectroscopy of vector mesons $`(\rho ,\omega ,\varphi )`$ by their dileptonic decay in finite nuclei or dense nuclear matter is of great interest and new spectrometers are currently being built. Whereas dileptons from nucleus-nucleus collisions are complicated to interpret due to the complex dynamical evolution, $`e^+e^{-}`$ pairs from photon-nucleus, proton-nucleus or pion-nucleus reactions essentially probe vector meson properties at normal nuclear matter density provided that appropriate cuts on the (low) momentum-spectrum of the dileptons are applied.
In Ref. dilepton production in pion-nucleus reactions has been calculated within the framework of a BUU transport model. For the production and propagation of vector mesons a ’perturbative’ scheme was imposed where the perturbative particles were treated differently from the non-perturbative ones. In particular, the finite width of the $`\rho `$-meson was neglected in the production part and only taken into account for the dilepton spectrum by means of a formfactor. Meanwhile we have developed, starting from the very same transport model, a computer algorithm which incorporates the properties of perturbative particles in a dynamical way in line with our treatment of non-perturbative particles. Within this model we have calculated photoproduction of dileptons in nuclei in the energy range from 500 MeV to 2.2 GeV. Since this model, which also contains a number of other improvements, gives different results for pion induced dilepton production than those previously published, we want to discuss these differences in this article.
## II The model
For a complete description of the underlying model we refer to Ref. . Here we only briefly describe the main differences with respect to the earlier calculations:
* For the elementary meson-nucleon interaction we have meanwhile adopted all resonance parameters from Manley et al. including some additional high-mass resonances. In particular, the decay channel $`R\to \mathrm{\Delta }\rho `$ is now included.
* The finite widths of the $`\rho `$\- and $`\omega `$-mesons are taken into account dynamically. In-medium changes of their spectral functions due to collisional broadening are treated analogously to our description of baryonic resonances .
* The production and absorption of $`\rho `$-mesons are now consistently described within the resonance model of Manley et al. .
* For the electromagnetic decay of the $`\rho `$-meson to $`e^+e^{-}`$ we now use a width proportional to $`M^{-3}`$, as resulting from vector meson dominance (VMD), instead of one proportional to $`M`$ from extended VMD, with $`M`$ being the invariant mass of the $`\rho `$-meson. For our calculations this is more appropriate since we neglect a direct coupling of the virtual photon and cannot treat the resulting interference terms properly within a semi-classical transport approach.
## III Results
In Fig. 1 we show the results of our calculations for $`e^+e^{-}`$-production in $`\pi ^{-}`$C and $`\pi ^{-}`$Pb reactions at a kinetic energy of 1.3 GeV. Here neither collisional broadening nor an in-medium mass shift of the vector mesons is taken into account. In the figure the various contributions to the total dilepton yield stemming from ($`\pi ^0,\eta ,\omega ,\mathrm{\Delta }`$) Dalitz decays as well as from vector meson decays ($`\rho ^0,\omega `$) are displayed. Compared to the previous calculations from Ref. , but also to those of Ref. , our calculations give results which are up to an order of magnitude larger at intermediate invariant masses $`M`$ for both the light and the heavy system. The contributions from the $`\rho `$-meson and the $`\mathrm{\Delta }`$-resonance are very different in size and in shape. The $`\rho `$-meson contribution is shifted to lower energies and much broader. This is basically due to three reasons. Firstly, the modified dilepton decay width introduces a factor $`(M_\rho /M)^4`$ which, for example, at $`M=0.5`$ GeV gives a factor 5.6. Secondly, in our new calculations some of the higher-lying resonances, especially the $`D_{35}(1930)`$ and the $`F_{37}(1950)`$, decay strongly into the $`\mathrm{\Delta }\rho `$-channel. These decays give predominantly low-mass $`\rho `$’s and lead to a stronger contribution of the $`\mathrm{\Delta }`$-resonance. Thirdly, secondary pions can, especially through the $`D_{13}(1520)`$-resonance, more easily contribute to $`\rho `$-production in the low mass tail. In the earlier calculations this was strongly suppressed because $`\rho `$’s could only be produced with their pole mass.
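For orientation, the quoted factor follows from normalizing the two width parametrizations to each other at the pole mass, $`M_\rho \simeq 770`$ MeV:

```latex
\frac{\Gamma_{e^+e^-}(M)\big|_{\propto M^{-3}}}{\Gamma_{e^+e^-}(M)\big|_{\propto M}}
  = \left(\frac{M_\rho}{M}\right)^{4}
  = \left(\frac{0.770\,\mathrm{GeV}}{0.5\,\mathrm{GeV}}\right)^{4} \approx 5.6 .
```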
The deviations of the new calculations from the earlier ones are therefore mainly related to different descriptions of the elementary $`\pi N\to e^+e^{-}X`$ process for which neither experimental data nor a reliable theoretical prediction exist. In Fig. 2 we, therefore, show the dilepton spectrum for elementary $`\pi ^{-}`$p and $`\pi ^{-}`$n collisions which enters our calculations as input. The $`\rho ^0`$-contribution on the neutron is very different from that on the proton. This is due to the fact that, because of isospin, on the neutron only the $`\mathrm{\Delta }\rho `$-channel contributes while on the proton the $`N\rho `$-channel is dominant. The discontinuity of the spectrum at the two-pion mass is caused by our neglect of off-shell $`\rho `$-mesons with invariant masses below the two-pion mass.
However, it is questionable whether the contributions coming from the $`\mathrm{\Delta }\rho `$-decay of some resonances are realistic since in the analysis of Manley et al. only data for exclusive one- and two-pion production were taken into account and the channel $`\mathrm{\Delta }\rho `$ was only included in order to absorb inelasticity. One should note that the incoherent resonance contributions to the reaction $`\pi ^+p\to p\pi ^+\pi ^+\pi ^{-}`$ via intermediate $`\mathrm{\Delta }^{++}\rho ^0`$-states already exceed the experimental data by about a factor of 2. In Fig. 3 (upper part) we, therefore, show the result of a calculation for which we replaced the $`\mathrm{\Delta }\rho `$ decay by the channel $`\mathrm{\Delta }\sigma `$ where the $`\sigma `$-meson parametrizes a scalar, isoscalar two-pion state with mass $`M=0.8`$ GeV and width $`\mathrm{\Gamma }=0.8`$ GeV. This gives a reduction of the dilepton yield at intermediate masses by about a factor 3.
In Fig. 3 (upper part) we also show the result of a calculation where we used an $`e^+e^{-}`$-width of the $`\rho `$-meson proportional to $`M`$ instead of the more consistent $`M^{-3}`$. This also gives a result which differs by more than a factor of 2 for dilepton masses around 500 MeV.
Apart from the uncertainties discussed above it is questionable whether our description of dilepton production in elementary pion-nucleon collisions is valid since we neglect interference terms between the different contributions as well as all processes that cannot be described by a two-step process. There might, for example, be a large contribution from so-called $`\pi N`$ bremsstrahlung, where the dilepton couples to the incoming pion.
In view of all these uncertainties in the theoretical description of the elementary cross section it is necessary that the inclusive cross sections for dilepton production on the nucleon are measured. Until then the following results for dilepton production on nuclei are only an – although state of the art – ’educated guess’.
During the last two years the $`D_{13}(1520)`$-resonance has received great interest in connection with medium-modifications of the $`\rho `$-meson . In our calculations this resonance contributes to the production of low-mass $`\rho `$-mesons as well as to their absorption. About 30% of the $`\rho `$-mesons in our calculations are produced via an intermediate $`D_{13}`$-resonance. In Fig. 3 (upper part) we show the result of a calculation where we excluded the $`D_{13}`$-resonance. Here we get a slight enhancement of the dilepton yield because absorption through this resonance is even more important than production.
In Fig. 3 (lower part) we show the result of a calculation where we assumed ’dropping masses’ for the $`\rho `$- and $`\omega `$-mesons. We find a reduction of the vector meson peak around 770 MeV by about a factor 2. The enhancement of the dilepton yield for masses around 600 MeV is quite small because we already started from a quite flat $`\rho `$-meson contribution due to our implementation of the $`\pi ^{-}`$n channel and neglected medium modifications of the $`N\rho `$-widths of the baryonic resonances. Therefore the total cross section for elementary $`\rho `$-meson production remains unchanged.
In Ref. we describe in full detail how we implement the collisional broadening of the $`\rho `$- and $`\omega `$-mesons in our transport calculations in a dynamical way. In Fig. 3 (lower part) we show the result of a calculation in which we took into account collisional broadening in addition to the mass shift. One sees that the effect of collisional broadening is small.
In Fig. 3 (lower part) we also present the result of a calculation with a momentum dependent potential for the vector mesons instead of the constant mass shift. This potential gives the previously used mass shift for $`p=0`$, increases linearly with momentum and crosses zero for $`p=1`$ GeV; for details see Ref. . The result for the dilepton spectrum is quite close to the calculation without medium modifications because the vector mesons are produced with rather large momenta in pion-nucleon collisions.
In order to discriminate between these ’scenarios’ of in-medium modification it is helpful to look at the spectra for different momenta of the dilepton pair. In Fig. 4 we show the results of our calculations for four different momentum bins. For low momenta ($`p<300`$ MeV) the ’dropping mass’ scenario leads to a complete disappearance of the vector meson peak around 780 MeV because a large fraction of the $`\omega `$-mesons with small momenta decays inside the nucleus. With increasing momentum the fraction of $`\omega `$-mesons decaying outside the nucleus increases and therefore the ’vacuum peak’ becomes more pronounced in the ’dropping mass’ scenarios. The calculation with a momentum dependent potential gets closer to the calculation without medium modifications for larger momenta since the momentum dependent potential vanishes for $`p=1`$ GeV.
In our calculations we assume an isotropic production of the vector mesons in the pion-nucleon center of mass system because experimental data on the angular distribution exist only at higher energies. The spectra shown in Fig. 4 depend strongly on the angular distribution in the elementary production step since different angles in the pion-nucleon center of mass system correspond to different momenta in the laboratory frame. However, a different angular distribution would primarily rescale the spectra and hardly influence the qualitative effects of the medium modifications.
## IV Summary
We have presented a calculation of dilepton production in $`\pi ^{-}`$C and $`\pi ^{-}`$Pb collisions at 1.3 GeV and compared our results to previously published calculations. We have discussed the uncertainties concerning the elementary $`\pi N\to e^+e^{-}X`$ cross section and want to stress the importance of an experimental measurement of the elementary process as a prerequisite for reliable calculations in nuclei. The results shown in Fig. 2, for example, could be checked with the new spectrometer HADES, presently under construction at GSI. Here it would be quite desirable if measurements could also be performed at lower pion energies since the contributions from secondary pions are important for pion-nucleus collisions.
We have, furthermore, investigated the effects of different scenarios of in-medium modifications for the vector mesons $`\rho `$ and $`\omega `$. Cuts on the momentum of the dilepton pair might be helpful to distinguish between different scenarios.
# Fractionation of polydisperse systems: multi-phase coexistence
## Abstract
The width of the distribution of species in a polydisperse system is employed in a small-variable expansion, to obtain a well-controlled and compact scheme by which to calculate phase equilibria in multi-phase systems. General and universal relations are derived, which determine the partitioning of the fluid components among the phases. The analysis applies to mixtures of arbitrarily many slightly-polydisperse components. An explicit solution is approximated for hard spheres.
It is vital to gain an understanding of polydispersity, due to its ubiquity in both synthetic and biological complex fluids. A polydisperse substance is a mixture of infinitely many components, and can, in general, separate into arbitrary numbers of coexisting phases. These properties typically engender great mathematical complexity, and have been a stumbling block to the concise formulation of polydisperse thermodynamics. Experimental and simulational studies of polydisperse polymeric fluids and colloidal suspensions have catalogued diverse behavior and intricate phase diagrams. Until recently, theoretical treatments of polydispersity relied on uncontrolled approximations or idealised models , while generic schemes and fundamental understanding remained elusive.
The phase behavior of pure (i.e. monodisperse) systems is (in principle at least) relatively straightforward to analyse. The standard method, formulated last century , involves integrating (by various approximate methods) the Boltzmann factor over all configurations to construct the Helmholtz free energy as a function of temperature, density and volume. From this, the densities of coexisting phases can be calculated and the phase diagram deduced. One source of difficulty in analysing the phase equilibria of polydisperse systems is that the density alone does not fully characterize a phase. Instead, we wish to know the entire composition of each phase. That requires the evaluation of an infinite set of variables.
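For contrast, the monodisperse version of this calculation amounts to a two-equation root-find, equating chemical potential and pressure in the two phases. A minimal sketch with a toy free energy density (illustrative only, not a model of any specific substance):

```python
import numpy as np
from scipy.optimize import fsolve

# Toy free energy density (units of k_B T): ideal-gas part plus a quadratic
# attraction and a cubic repulsion, giving a van der Waals-like loop.
f  = lambda r: r * (np.log(r) - 1) - 2.5 * r**2 + r**3
mu = lambda r: np.log(r) - 5.0 * r + 3.0 * r**2        # df/drho
P  = lambda r: r * mu(r) - f(r)                        # pressure

def coexistence(x):
    r1, r2 = x
    return [mu(r1) - mu(r2), P(r1) - P(r2)]   # equal mu and P in both phases

# The initial guess must straddle the loop to avoid the trivial root r1 = r2.
r1, r2 = fsolve(coexistence, [0.15, 0.7])
print(f"coexisting densities: {r1:.3f}, {r2:.3f}")
```

The polydisperse problem replaces this pair of conditions by the functional constraints derived below, one for every species.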
Two systematic schemes were developed recently to solve the polydisperse phase equilibria problem. The powerful ‘annealed moments’ method of Sollich and Cates and Warren applies to a large subset of model systems, and will not be further discussed here. The second scheme, which applies to real systems, was developed by the present author . It uses the width of the distribution of species as a small expansion parameter, and is therefore valid for slightly polydisperse systems. In other words, the scheme is applicable whenever the polydisperse property (e.g. the size or charge of a particle) varies only a little throughout the system. (N.B. Here, ‘particle’ is used to denote the polydisperse fluid elements, which could be polymer molecules, colloidal latices, etc.) The method was used to find the complete distributions of species in two coexisting phases of any slightly polydisperse system, resulting in a universal law of fractionation. In this paper, the method is applied to coexistence between arbitrary numbers of phases — a situation of importance to many polydisperse substances .
A slightly polydisperse system (one with a narrow distribution of species) is in principle very different from a truly pure one, which has no mixing entropy and whose distribution is a Dirac delta function. Nevertheless, one would expect the physical properties of the two systems to be very similar. That similarity motivates this study, since the pure system is vastly simpler to analyse than its polydisperse counterpart. To exploit that simplicity, a formalism is required which treats mono- and poly-disperse systems on an equal footing. Such a formalism is now derived (extending the method for two-phase equilibrium in Ref. ).
Let us first define a number $`\epsilon _i`$ to characterize each of the $`N`$ particles in the system (with $`i=1,\mathrm{},N`$) . For size polydispersity, this is the fractional difference $`\epsilon _i(R_iR_0)/R_0`$ of the particle’s radius $`R_i`$ from some reference length $`R_0`$ (with the obvious generalization to charge polydispersity etc.). Henceforth, $`\epsilon `$ shall be referred to as the size parameter, for definiteness. The population of species in the system is characterized by a continuous distribution $`f(\epsilon )`$, which is unnormalized so
$$\int _{-\infty }^{\infty }f(\epsilon )\,d\epsilon =N.$$
In general, the free energy $`F`$ of a polydisperse system is a complicated functional of $`f(\epsilon )`$. It will be expressed in units of $`k_BT`$ where $`k_B`$ is Boltzmann’s constant and $`T`$ is temperature. For a polydisperse ideal gas, the free energy $`F^{\mathrm{id}}`$ is easily shown to be, per unit volume,
$$\frac{F^{\mathrm{id}}}{V}=\int d\epsilon \frac{f(\epsilon )}{V}\left[\mathrm{ln}\frac{f(\epsilon )}{V}-1\right]$$
(1)
which is the usual ideal gas free energy, summed over all species. As this expression contains the mixing entropy, it is useful to write the free energy of a non-ideal system as
$$FF^{\mathrm{id}}+F^{\mathrm{ex}}.$$
(2)
Here $`F^{\mathrm{ex}}`$ is the ‘excess’ part of the free energy (over and above the ideal part), deriving from interactions.
Let us consider a system whose ‘initial’ population (before phase separation) is known. This will be called the ‘parent’ distribution $`f_P(\epsilon )`$. In a system where this parent is partitioned into $`ℳ`$ coexisting phases, we wish to determine the distribution $`f(\epsilon )_𝒜`$ in each phase $`𝒜=1,\dots ,ℳ`$. By conservation of matter,
$$\sum _{𝒜=1}^{ℳ}f(\epsilon )_𝒜=f_P(\epsilon ).$$
(3)
At equilibrium, the chemical potential is equal in all coexisting phases. This statement applies for each species of particles, so the equation
$$\mu (\epsilon )_𝒜=\mu (\epsilon )_ℬ\qquad \forall \epsilon $$
(4)
represents an uncountable infinity of thermodynamic constraints for any pair of phases $`𝒜`$ and $`ℬ`$. Since there is a continuum of species, the chemical potential is a functional derivative of the free energy
$$\mu (\epsilon )\equiv \frac{\delta F[f(\epsilon )]}{\delta f(\epsilon )}.$$
(5)
From Eq. 2, $`\mu (\epsilon )`$ can be written in two parts
$$\mu (\epsilon )=\mu ^{\mathrm{id}}(\epsilon )+\mu ^{\mathrm{ex}}(\epsilon ).$$
(6)
Functional differentiation of Eq. 1 yields
$$\mu ^{\mathrm{id}}(\epsilon )=\mathrm{ln}\left[\frac{f(\epsilon )}{V}\right].$$
(7)
Collecting together Eqs. 4, 6 and 7 gives the ratios of densities in any two of the $`ℳ`$ coexisting phases
$$\frac{f(\epsilon )_ℬ/V_ℬ}{f(\epsilon )_𝒜/V_𝒜}=\mathrm{exp}\left(\mu ^{\mathrm{ex}}(\epsilon )_𝒜-\mu ^{\mathrm{ex}}(\epsilon )_ℬ\right)$$
(8)
in terms of the excess parts of the chemical potentials. Thus, all but one distribution can be eliminated from Eq. 3, yielding the solution for any given phase
$$f(\epsilon )_𝒜=f_P(\epsilon )/\sum _{ℬ=1}^{ℳ}\frac{V_ℬ}{V_𝒜}\mathrm{exp}\left(\mu ^{\mathrm{ex}}(\epsilon )_𝒜-\mu ^{\mathrm{ex}}(\epsilon )_ℬ\right)$$
(10)
$$\text{where }\mu ^{\mathrm{ex}}(\epsilon )\equiv \frac{\delta F^{\mathrm{ex}}[f(\epsilon )]}{\delta f(\epsilon )}.$$
(11)
Given a knowledge of $`F^{\mathrm{ex}}`$, which specifies the interactions in the system (and of the phase volumes), Eqs. 10 and 11 represent a complete solution to the problem. However, they constitute an uncountable infinity of non-linear simultaneous equations. This is the source of the mathematical complexity mentioned earlier.
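Numerically, Eqs. 10 and 11 can be attacked by discretizing $`\epsilon `$ and iterating: guess the partitioning, evaluate the excess chemical potentials, redistribute the species via Eq. 10, and repeat. A minimal two-phase sketch using a toy excess chemical potential and fixed phase volumes (both hypothetical, chosen only to make the iteration concrete):

```python
import numpy as np

# Discretized size parameter and a narrow Gaussian parent distribution.
eps = np.linspace(-0.3, 0.3, 601)
de = eps[1] - eps[0]
fP = np.exp(-eps**2 / (2 * 0.05**2))
fP /= fP.sum() * de                     # normalize so that integral of f_P = N = 1

VA, VB = 0.4, 0.6                       # assumed (fixed) phase volumes

def mu_ex(f, V):
    """Toy excess chemical potential mu_ex(eps) = 3*rho_0 + 8*rho_1*eps.
    A hypothetical stand-in for the functional derivative of F_ex."""
    rho0 = f.sum() * de / V
    rho1 = (eps * f).sum() * de / V
    return 3.0 * rho0 + 8.0 * rho1 * eps

fA = 0.5 * fP                           # initial guess for the partitioning
for _ in range(2000):                   # fixed-point iteration of Eq. (10)
    fB = fP - fA
    ratio = (VB / VA) * np.exp(mu_ex(fA, VA) - mu_ex(fB, VB))
    fA = 0.9 * fA + 0.1 * fP / (1.0 + ratio)   # under-relaxed update
fB = fP - fA
# This convex toy F_ex drives the iteration to the trivial equal-density
# split f_A/V_A = f_B/V_B; genuine fractionation needs a non-convex F_ex.
```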
Some simplification is achieved by making a change of variables. Rather than expressing a thermodynamic state in terms of the densities of the individual species of particles, $`f(\epsilon )/V`$, let us use moments of this distribution (as in ). The thermodynamic variables
$$\rho _\alpha \equiv \int _{-\infty }^{\infty }\epsilon ^\alpha \frac{f(\epsilon )}{V}\,d\epsilon ;\qquad \alpha =0,1,\dots ,\infty $$
(12)
will be called ‘moment densities’. Note that $`\rho _\alpha =\overline{\epsilon ^\alpha }\rho `$, so that $`\rho _0`$ is the overall number density $`\rho `$. \[Mean powers of the size parameter, $`\overline{\epsilon ^\alpha }`$, are moments of the normalized distribution $`p(\epsilon )f(\epsilon )/N`$.\] Each moment density, being a linear combination of conserved species densities, is itself conserved and, accordingly, respects the usual equilibrium conditions. For instance, each ‘moment chemical potential’, defined by $`\mu _\alpha \equiv \partial (F/V)/\partial \rho _\alpha `$, is equal in coexisting phases. This is clear from expanding the species chemical potential in partial derivatives
$$\mu (\epsilon )\equiv \frac{\delta F}{\delta f(\epsilon )}=\sum _{\alpha =0}^{\infty }\frac{\partial F}{\partial \rho _\alpha }\frac{\delta \rho _\alpha }{\delta f(\epsilon )}=\sum _{\alpha =0}^{\infty }\mu _\alpha \epsilon ^\alpha .$$
(13)
Thus, equality of $`\mu (\epsilon )`$ in coexisting phases requires equality of each $`\mu _\alpha `$.
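The change of variables is numerically trivial; a short sketch evaluating the first few moment densities of a tabulated distribution via Eq. 12 (the Gaussian is an arbitrary example):

```python
import numpy as np

eps = np.linspace(-0.3, 0.3, 601)          # size-parameter grid
de = eps[1] - eps[0]
f_over_V = np.exp(-eps**2 / (2 * 0.05**2)) # f(eps)/V, an arbitrary example

rho = np.array([(eps**a * f_over_V).sum() * de for a in range(4)])
# rho[0] is the overall number density; rho[a]/rho[0] is the mean of eps^a,
# so here rho[1] ~ 0 and rho[2]/rho[0] is the variance of the distribution.
print(rho)
```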
We now have a discrete set of thermodynamic variables, and can substitute the power series expression (Eq. 13) for $`\mu (\epsilon )`$ into Eq. 10, yielding
$$f(\epsilon )_𝒜=f_P(\epsilon )/\sum _{ℬ=1}^{ℳ}\frac{V_ℬ}{V_𝒜}\mathrm{exp}\left(\sum _{\alpha =0}^{\infty }(\mu _{\alpha 𝒜}^{\mathrm{ex}}-\mu _{\alpha ℬ}^{\mathrm{ex}})\epsilon ^\alpha \right)$$
(15)
$$\text{with }\mu _\alpha ^{\mathrm{ex}}\equiv \frac{\partial (F^{\mathrm{ex}}/V)}{\partial \rho _\alpha }$$
(16)
which, with Eq. 12, form a countable infinity of simultaneous equations. The excess free energy is now a function $`F^{\mathrm{ex}}(\rho _0,\rho _1,\dots )`$ of the moment densities.
The equations thus far are perfectly general, but the advantages of this formalism become apparent when we consider a narrow distribution of sizes, i.e. a system which is close to monodisperse. If the origin for the parameter $`\epsilon `$ is chosen (by fixing the reference $`R_0`$) to be close to the centre of the narrow distribution, then $`\epsilon `$ is a small number for most if not all particles. Hence in Eq. 15, $`f_P(\epsilon )`$ vanishes for large $`\epsilon `$, so the power series in the denominator becomes a well-controlled expansion. The results have a particularly simple form if the origin is chosen to be the mean of the parent distribution, so that $`\overline{\epsilon }_P=0`$. Henceforth this choice is assumed.
The solution to Eqs. 12, 15 and 16 is now calculated to first order in $`\epsilon `$. This will yield the exact phase equilibria in the limit of a narrow parent, $`\overline{\epsilon ^2}_P\to 0`$. Expanding Eq. 15 to first order and integrating over $`\epsilon `$ gives
$$\sum _{ℬ=1}^{ℳ}\frac{V_ℬ}{V_𝒜}\mathrm{exp}(\mu _{0𝒜}^{\mathrm{ex}}-\mu _{0ℬ}^{\mathrm{ex}})=\frac{N}{N_𝒜}\left[1+O(\epsilon ^2)\right].$$
To zeroth order, Eq. 8 gives
$$\frac{V_ℬ}{V_𝒜}\mathrm{exp}(\mu _{0𝒜}^{\mathrm{ex}}-\mu _{0ℬ}^{\mathrm{ex}})=\frac{N_ℬ}{N_𝒜}\left[1+O(\epsilon )\right].$$
Note the different orders of expansion. Substituting these expressions back into Eq. 15 yields
$$\frac{f(\epsilon )_𝒜}{N_𝒜}=\frac{f_P(\epsilon )}{N}\left[1-\epsilon \mu _{1𝒜}^{\mathrm{ex}}+\frac{\epsilon }{N}\sum _{ℬ=1}^{ℳ}N_ℬ\mu _{1ℬ}^{\mathrm{ex}}+O(\epsilon ^2)\right].$$
(17)
To obtain Eq. 17, the prefactor of $`f_P(\epsilon )`$ in Eq. 15 is expanded to first order in $`\epsilon `$, but the distribution $`f_P(\epsilon )`$ itself remains exact. Thus, other than narrowness, no limitations are put on the form of $`f_P(\epsilon )`$. Any distribution can be treated, however asymmetric or discontinuous. The population may even contain finite amounts of some components, contributing delta functions to $`f_P(\epsilon )`$.
In Eq. 17 we see that the distribution in any given phase $`𝒜`$ depends, as one would expect, on the properties of all the other $`ℳ-1`$ phases with which it coexists. However, taking the difference (denoted $`\mathrm{\Delta }`$) between the normalized distributions in any two of the $`ℳ`$ coexisting phases, we find the strikingly simple expression
$$\mathrm{\Delta }p(\epsilon )=-\epsilon p_P(\epsilon )\mathrm{\Delta }\mu _1^{\mathrm{ex}}$$
(18)
in the limit as $`\overline{\epsilon ^2}_P\to 0`$, where $`p_P(\epsilon )`$ is the normalized parent distribution. \[Note that the solution for each phase (Eq. 17) is recoverable from the neater sum (Eq. 3) and difference (Eq. 18) equations.\] Surprisingly we have found that, in the multi-phase system, the difference in compositions of any pair of phases is identical to the expression found earlier for two-phase coexistence. Thus the same universal laws follow, relating any pair of phases. This is not an obvious result, since the parent appearing in Eq. 18 is the combined population of the whole system, not just of the two phases in question as it is in the two-phase coexistence problem.
Equation 18 is very generally applicable. It is valid for any system with a narrow distribution (that is, narrower than the range of linearizability of the fugacity), whatever particles or interactions it comprises. Furthermore, recall that $`\epsilon `$ need not parameterize size deviations, but could represent charge, mass or any other sole polydisperse quantity. By analysing multi-phase coexistence, we have found that Eq. 18 does not even depend on $`ℳ`$, the number of phases present.
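Equation 18 is easily checked numerically against the first-order solution: build the daughter distributions of Eq. 17 for several phases, using assumed (hypothetical) values of $`\mu _1^{\mathrm{ex}}`$ and of the number fractions, and compare any pairwise difference with $`-\epsilon p_P(\epsilon )\mathrm{\Delta }\mu _1^{\mathrm{ex}}`$:

```python
import numpy as np

eps = np.linspace(-0.2, 0.2, 401)
de = eps[1] - eps[0]
pP = np.exp(-eps**2 / (2 * 0.04**2))
pP /= pP.sum() * de                      # normalized parent with zero mean

mu1 = np.array([0.5, -0.2, 0.9])         # hypothetical mu_1^ex in three phases
w   = np.array([0.3, 0.5, 0.2])          # hypothetical number fractions N_A/N
shift = (w * mu1).sum()

daughters = [pP * (1 - eps * m + eps * shift) for m in mu1]   # Eq. (17)

lhs = daughters[0] - daughters[1]        # Delta p between phases 1 and 2
rhs = -eps * pP * (mu1[0] - mu1[1])      # Eq. (18)
print(np.abs(lhs - rhs).max())           # agrees to machine precision here
# The instructive point is that the difference depends only on the pair of
# phases considered, through Delta mu_1^ex, and not on the other phases.
```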
We have considered a system in which a slightly polydisperse fluid component is partitioned among several phases. The coexistence of more than two phases may be the result of tuning the temperature to the triple point of the monodisperse reference system. Alternatively, the slightly polydisperse particles may be in the presence of other, dissimilar components which, by the Gibbs phase rule (which states that an $`n`$-component mixture can exhibit up to $`n+1`$ coexisting phases at arbitrary temperature), can induce multi-phase coexistence. Within such a multi-component system, a particular, slightly polydisperse component will respect the above relations, which may be tested by an experimental probe which is ‘blind’ to the other components. For instance, near-monodisperse colloidal particles in the presence of ‘depletant’ species exhibit multiple phases. Light scattered only from the near-monodisperse colloid contains information on its fractionation, which should obey Eq. 18. As an illustration, a multi-phase colloidal sample with the composition shown in the figure will obey the above relations, applied only to those particles in range $`X`$, with the origin of $`\epsilon `$ defined at its centre. The relations are equally applicable to particles in range $`Y`$, if we are blind to all other particles (e.g. they may be made invisible by matching their refractive index to that of the solvent), and redefine $`\epsilon =0`$ appropriately.
The form of the solution in Eq. 18 is of interest in itself, not least for the non-appearance of $`ℳ`$. However, one quantity remains unknown: the constant of proportionality $`\mathrm{\Delta }\mu _1^{\mathrm{ex}}`$. That constant is system-dependent. For some substances, $`\mu _1^{\mathrm{ex}}`$ can be calculated using thermodynamic perturbation theory. Unfortunately, this is not possible for a system of hard spheres, as its Hamiltonian is non-differentiable. Since the hard-sphere system is of great practical interest for modelling systems with repulsive interactions, the constant of proportionality is now calculated for that case.
The excess part of the free energy of the polydisperse hard-sphere fluid can be Taylor expanded in the small size parameter of each of the $`N`$ particles of interest thus
$$F^{\mathrm{ex}}=F_{\mathrm{mono}}^{\mathrm{ex}}+\sum _{i=1}^{N}\epsilon _i\frac{\partial F^{\mathrm{ex}}}{\partial \epsilon _i}|_{\epsilon _i=0}+O(\epsilon ^2)$$
where $`F_{\mathrm{mono}}^{\mathrm{ex}}`$ is the excess free energy of the reference component of monodisperse hard spheres (in the presence of the rest of the system — see Fig. 1). In the reference component, all particles are alike, so the differentiation may be performed on particle number 1 only, without loss of generality, giving
$$F^{\mathrm{ex}}=F_{\mathrm{mono}}^{\mathrm{ex}}+N\overline{\epsilon }\frac{\partial F^{\mathrm{ex}}}{\partial \epsilon _1}|_{\epsilon _1=0}+O(\epsilon ^2).$$
The change in the identity (the species) of particle 1 when its size is varied affects only $`F^{\mathrm{id}}`$. The excess free energy contains the physical effect of the particle’s size on the rest of the system. By its presence in the container, particle 1 simply excludes other particles from a volume $`V_{\mathrm{excl}}`$, given that its interactions are purely hard and repulsive. Thus, increasing its size reduces the effective system volume, so
$$\frac{\partial F^{\mathrm{ex}}}{\partial \epsilon _1}=-\frac{\partial F^{\mathrm{ex}}}{\partial V}\frac{dV_{\mathrm{excl}}}{d\epsilon _1}.$$
(19)
In fact the volume from which particle 1 excludes other particles, $`V_{\mathrm{excl}}`$, depends on their species, so the quantity in Eq. 19 is a net effective value, defined by the equation. For the special case of an almost pure hard sphere system (not in the presence of other, dissimilar components), $`V_{\mathrm{excl}}=\frac{4}{3}\pi \overline{R}_P^3(2+\epsilon _1)^3`$ at low density (correct up to second virial coefficient). At high density, the geometry of high-order inter-particle interactions modifies this. In any case, $`dV_{\mathrm{excl}}/d\epsilon _1`$ is of order a particle volume. The resulting excess free energy density of a polydisperse hard sphere fluid is
$$\frac{F^{\mathrm{ex}}}{V}=\frac{F_{\mathrm{mono}}^{\mathrm{ex}}}{V}+12\rho _1P^{\mathrm{ex}}V^{\mathrm{eff}}+O(\epsilon ^2)$$
(20)
where $`V^{\mathrm{eff}}`$ is some (unknown) effective volume, of order the volume of an average sphere, and exactly that for a near-pure, low-density system. Applying Eq. 16 yields
$$\mu _1^{\mathrm{ex}}=12P^{\mathrm{ex}}V^{\mathrm{eff}}$$
(21)
in terms of the system’s excess pressure $`P^{\mathrm{ex}}`$ over an ideal gas. Since coexisting phases have the same total pressure, it follows that $`\mathrm{\Delta }\mu _1^{\mathrm{ex}}=-12V^{\mathrm{eff}}\mathrm{\Delta }P^{\mathrm{id}}`$. So the constant of proportionality in Eq. 18 is
$$\mathrm{\Delta }\mu _1^{\mathrm{ex}}=-12V^{\mathrm{eff}}\mathrm{\Delta }\rho $$
(22)
for hard spheres in ergodic (fluid) phases. This calculation contains the lowest-order effects of polydispersity. Once the polydispersity is sufficient to alter the mode of packing (e.g. small particles preferentially filling the gaps between big ones), higher-order analysis is needed.
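Two quick checks make this result concrete. First, the factor of 12 in Eqs. 20–22 is simply the derivative of the low-density excluded volume quoted above, evaluated at the reference size:

```latex
\left.\frac{dV_{\mathrm{excl}}}{d\epsilon_1}\right|_{\epsilon_1=0}
  = \frac{4}{3}\pi\overline{R}_P^{\,3}\cdot 3\,(2+\epsilon_1)^2\Big|_{\epsilon_1=0}
  = 16\pi\overline{R}_P^{\,3}
  = 12\times\frac{4}{3}\pi\overline{R}_P^{\,3},
```

i.e. twelve times the volume of an average sphere, which is why $`V^{\mathrm{eff}}`$ is of order a particle volume. Second, inserting illustrative numbers (not taken from the source) into Eqs. 18 and 22 gauges the strength of the fractionation between two fluid phases:

```python
# Relative enrichment of species eps between coexisting hard-sphere fluids:
# Delta p / p_P = -eps * Delta mu_1^ex = 12 * eps * V_eff * Delta rho.
# Taking V_eff as one particle volume turns V_eff * Delta rho into a
# volume-fraction difference Delta phi (illustrative numbers throughout).
eps_particle = 0.05   # a particle 5% larger than the mean radius
d_phi = 0.05          # assumed volume-fraction gap between the two phases
print(12 * eps_particle * d_phi)   # -> 0.03: ~3% enrichment in the denser phase
```

For these illustrative numbers, a 5% oversized particle is only a few percent more abundant in the denser phase, consistent with treating fractionation perturbatively.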
It is apparent that combining a moment description with a small-variable expansion in the distribution’s width is a productive way to analyse polydisperse systems. While the applications of this study are clearly wide-ranging, it is intended to extend its scope by analysing correlation functions and multiply-polydisperse systems . In addition, some work is required, using higher-order analysis, to establish the radius of convergence of the expansion, and quantify more precisely the method’s regime of validity.
Acknowledgements Many thanks for informative discussions go to Michael Cates, Peter Sollich, David Fairhurst, Patrick Warren and Wilson Poon. The work was funded by the EPSRC (GR/K56025) and a Royal Society of Edinburgh SOEID Research Fellowship.
# The VLT observations of the HDF–S NICMOS field: photometric catalog and high redshift galaxy candidates
## 1 Introduction
Deep imaging of extragalactic fields has long been recognized to be a powerful tool to understand galaxy evolution (see Ellis 1997 for an extensive review). Although faint galaxy counts have been of paramount importance to show that galaxies do evolve with redshift, the overall scenario and the physical processes that led to galaxy evolution are still debated. A new approach developed in recent years uses deep multicolor surveys to study galaxies at fainter magnitudes: deep multi-band images are taken with a complete set of broad–band filters, in order to cover the overall spectrum of the galaxy and to discriminate the populations at different redshifts. The Hubble Deep Field North (Williams et al. 1996) is the best–known example of this kind of observation, but ground–based images have also been used, mainly to define sharp color criteria that select high–redshift galaxy candidates (Steidel et al. 1995, Giallongo et al. 1998, Arnouts et al. 1998).
This paper analyses deep observations of the Hubble Deep Field-South (HDF-S, Williams et al 1999) obtained in five colors ($`U`$, $`B`$, $`V`$, $`R`$ and $`I`$) in August 1998 as part of the Science Verification phase of the first VLT 8.2m telescope (UT1). A description of the Science Verification programme at the VLT is to be found at
http://www.eso.org/paranal/sv/ .
The data have been taken for a field centered on $`\alpha =22^\mathrm{h}32^\mathrm{m}51.7^\mathrm{s}`$, $`\delta =-60^{\circ }38^{\prime }48.2^{\prime \prime }`$ (J2000), thus providing the optical complement to the near-IR $`J`$, $`H`$ and $`K`$ band images obtained with the HST NICMOS instrument (Fruchter et al. 1999, see also
http://www.stsci.edu/ftp/science/hdfsouth/hdfs.html), since the area covered by the optical observations is somewhat more extended than the NICMOS field of view.
This paper is organized as follows: in Section 2 we describe the data reduction procedures, while in Section 3 we discuss how the photometric catalogs were constructed. In Section 4 we derive the photometric redshifts for all the objects with $`R\le 26.5`$ and discuss briefly their distribution at high redshift.
## 2 The Data Sample
### 2.1 Observations and data reduction
The data used here were retrieved from the ESO public archive, and refer to the observations that were obtained in the period August 17–September 1, 1998, using the VLT Test Camera (VLTTC) at the Cassegrain focus of the UT1. The VLTTC is a simple imaging camera which reimages the focal plane onto a $`2048^2`$, 24 $`\mu `$m pixel, thinned SITe CCD. In the $`2\times 2`$ binned mode which has been used for these observations the scale is 0.092 arcsec/pixel, giving a total field of 92$`\times `$92 arcsec<sup>2</sup>. For this program a set of several exposures through the standard $`UBVRI`$ Johnson-Cousins filters was obtained, with single-exposure integration times ranging from 600 to 1200 seconds. The airmass of individual exposures was always $`\le 1.4`$, with a median value of 1.25. Observations were obtained following the standard criteria for deep imaging, i.e. applying a slight offset between individual pointings to allow the removal of the detector imprints. Full details on the Test Camera, the CCD detector, the filter curves and the individual exposures are available on the Web site
http://www.hq.eso.org/paranal/sv.
The data reduction has been carried out with two different software pipelines, at ESO and at the Rome Observatory. Although the two pipelines used completely independent environments and tools, they are quite similar in concept. Most of the steps are identical to those described elsewhere (e.g. Giallongo et al. 1998; Arnouts et al. 1998), and will not be repeated here. It is worth noting that flat-fielding is particularly critical in the VLTTC, since its CCD suffers from a very large blemish near the center and other lesser defects over the whole area. The central blemish is wavelength dependent, while variations from night to night caused by moving dust grains are also noticeable in the flats. Nevertheless, a satisfactory solution was finally found by constructing a separate “super-flat” as the median image of the unregistered images for each observing night. The final accuracy in flat–fielding is estimated to be better than 1%.
Only frames with seeing better than 1 arcsec were used in the final coaddition, without applying any drizzling algorithm (Fruchter and Hook 1998) since the VLTTC sampling is always much smaller than the seeing. As is customary in dithered multiple exposures, the edges of the final images are of poorer quality, since only a limited number of frames contribute to the observed flux. In the present observations the problem is noticeable because a large dithering pattern had to be applied to remove the blemishes and because of the small size of the field of view. Moreover, the central pointing of the coadded frames was not exactly the same in the different bands, which has further reduced the area common to all frames. We therefore used two different sets of images. In the first - that was part of the ESO public release – we trimmed the outer regions independently, keeping only the inner regions that were covered with 100% of the observations. These images were used to extract independent catalogs in each band.
We also produced a set of coadded images that are trimmed and aligned to the central field of the $`R`$ frame, and these images were used to prepare the multicolor catalog. In practice, about 20% of this field is not covered by all the $`U`$ and $`B`$ frames, and therefore the coadded images are slightly shallower in these bands. The recently available images obtained by NICMOS with the F110W, F160W and F222M filters (Fruchter et al 1999) were also rebinned and aligned to the VLTTC $`R`$ frame.
Table 1 summarizes the observational data that have been used, the FWHM of the PSF of the coadded single color images, their area, the photometric zero point, together with the formal $`5\sigma `$ limiting magnitudes. These were conservatively computed by taking the $`\sigma `$–clipped standard deviation of the sky counts in an aperture 2$`\times `$FWHM wide, taken at random positions on the images.
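As a rough illustration of how such limits are obtained, the sketch below (ours, not part of the original reduction pipeline; the aperture placement and clipping parameters are indicative) measures the $`\sigma `$–clipped scatter of blank-sky aperture fluxes and converts $`5\sigma `$ into a magnitude:

```python
import numpy as np

def limiting_magnitude(image, zero_point, fwhm_pix, n_apertures=500, seed=None):
    """Estimate the formal 5-sigma limiting magnitude of a coadded image.

    Sums the counts in circular apertures 2*FWHM wide placed at random
    positions, sigma-clips the distribution of aperture sums to reject
    apertures falling on real objects, and converts 5 times the residual
    standard deviation into a magnitude.
    """
    rng = np.random.default_rng(seed)
    radius = fwhm_pix                      # 2*FWHM diameter -> FWHM radius
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sums = []
    for _ in range(n_apertures):
        x0 = rng.uniform(radius, nx - radius)
        y0 = rng.uniform(radius, ny - radius)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        sums.append(image[mask].sum())
    sums = np.asarray(sums)
    for _ in range(5):                      # iterative 3-sigma clipping
        mean, std = sums.mean(), sums.std()
        sums = sums[np.abs(sums - mean) < 3.0 * std]
    return zero_point - 2.5 * np.log10(5.0 * sums.std())
```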
### 2.2 Photometric calibration
The photometric calibrations were obtained by reducing a series of standard stars from the Landolt (1992) sample. Self-consistent photometric solutions to the standard Johnson-Cousin system were derived for several individual nights, along with average solutions that use all the data from photometric nights. The zero points, color terms and extinction coefficients derived are listed at
http://www.hq.eso.org/paranal/sv/html/data/photom.txt. We emphasize however that the coadded images presented here cannot be calibrated directly using the average coefficients, since they are the average of different exposures taken under various conditions.
A few isolated, relatively bright stars have been selected in the field. Then, images obtained during each night for which a photometric solution exists have been taken, and accurate magnitudes of the selected stars have been measured in each of these images. The resulting instrumental magnitudes have been converted into Johnson magnitudes using the photometric solution for the given night. Finally, each star has been assigned its average magnitude, after a $`\sigma `$–clipping removal of the discordant values.
The same photometry has been applied to the summed images, and the final zero point has been chosen in order to reproduce the magnitudes of the selected stars. We estimate that the final accuracy of this procedure is $`\simeq 0.05`$ mags in each band. Finally, the zero points were corrected for galactic absorption with $`E(B-V)=0.02`$ (Burstein & Heiles 1982), with $`\delta U=0.095`$, $`\delta B=0.084`$, $`\delta V=0.063`$, $`\delta R=0.05`$, $`\delta I=0.038`$.
## 3 The Photometric Catalog
The analysis of the two sets of images (the individual coadded images and those trimmed to the $`R`$ band image) was performed using the SExtractor image analysis package (Bertin & Arnouts 1996).
### 3.1 Galaxy Counts
Within SExtractor, images were smoothed with a gaussian filter matching the seeing, and the detection threshold was chosen at $`3\sigma `$ of the background intensity in a contiguous area of 1 FWHM. Following Djorgovski et al. (1995), for each object both isophotal and aperture magnitudes (in a 2 FWHM aperture) were computed. The isophotal magnitude was used for the larger/brighter objects, i.e. for those objects where the isophotal area is larger than the aperture one. For fainter objects, an aperture correction to $`5^{\prime \prime }`$ has been estimated on bright stars and applied to correct the 2 FWHM aperture magnitude. This procedure is strictly valid for star-like objects only, but has been shown to be a good approximation on deep images (Smail et al. 1995).
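For reference, a minimal SExtractor configuration implementing these detection and photometry choices might look as follows (a sketch of ours: the convolution kernel file, minimum area and zero point are placeholders to be matched to the per-band seeing and the calibration of Table 1; only the 14-pixel aperture is taken from the text):

```
# Detection: 3-sigma threshold after smoothing with a gaussian
# filter matched to the seeing
DETECT_THRESH    3.0
DETECT_MINAREA   30                    # ~ area of 1 FWHM (placeholder)
FILTER           Y
FILTER_NAME      gauss_3.0_7x7.conv    # kernel matched to the seeing
# Photometry: isophotal plus a fixed 2 FWHM circular aperture
PHOT_APERTURES   14                    # diameter in pixels (R frame)
MAG_ZEROPOINT    30.0                  # placeholder; per-band value
```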
Bright stars have been excluded from the catalogs using the CLASS\_STAR parameter provided by SExtractor. A threshold CLASS\_STAR$`<0.9`$ has been set, on the basis of the comparison between ground–based and HST data (Arnouts et al. 1998). At fainter magnitudes (e.g. $`R\gtrsim 24`$) the neural network classifier no longer works properly, but stars are not expected to dominate the counts and have therefore been ignored. All the single-color catalogs are available on the web site
http://www.mporzio.astro.it/HIGHZ.
A correction for incompleteness has been estimated in each band as in Arnouts et al. (1998), accounting both for false detections and for the non–detection of real objects. We warn the reader that the correction for incompleteness has to be taken with some caution, because of the very limited size of the sample used as a reference. The raw and corrected counts in each band are shown in Fig 1, and compared to the most recent data from the literature. It is noteworthy that the counts derived from these data are the deepest ever obtained with a single ground-based telescope, thanks to the sub-arcsecond image quality, the relatively long exposure time and the large collecting area of the VLT.
### 3.2 The Multicolor Catalog
The multicolor catalog has been obtained from the set of aligned images, taking the $`R`$ frame as reference. Object detection and the measurement of the $`R`$ magnitude have been performed on the $`R`$ frame exactly as described in the previous section. Colors have then been measured in a fixed circular aperture of 14 pixels (corresponding to 2 FWHM of the $`R`$ frame), keeping the object position found on the $`R`$ frame. To allow for the seeing difference among the coadded images, colors have been measured on images degraded to the $`0.82^{\prime \prime }`$ seeing of the $`V`$ band image. The $`R\le 26.5`$ subsample used for the following analysis consists of 91 galaxies and is available on the WEB
http://www.mporzio.astro.it/HIGHZ. The $`R=26.5`$ threshold has been set in order to ensure good photometric accuracy and meaningful colors on the whole sample. Obvious stars have been excluded using the CLASS\_STAR parameter (see above) in the R and - when available - in the NICMOS images.
Fig 2 shows the color distribution of the $`R\le 26.5`$ sample in the $`V-I`$ vs $`U-V`$ plane. Overplotted on the observed colors are the evolutionary tracks of a few galaxy templates as a function of redshift. They have been chosen to broadly encompass the most common spectral types, and are based on the synthetic models of Bruzual and Charlot (GISSEL library, 1996). It is clearly seen that a significant fraction of the faint galaxies have colors typical of star-forming galaxies at $`z\gtrsim 2`$, as expected in these very deep images (Metcalfe et al. 1995b). Though most of these objects are probably too faint for a spectroscopic confirmation with FORS or ISAAC at the VLT, their nature and redshifts can be investigated further by means of a photometric redshift analysis.
## 4 The Photometric Redshifts of the $`R\le 26.5`$ galaxies
The multicolor catalog has been used to derive photometric redshifts for all the 91 galaxies brighter than $`R=26.5`$, using a technique extensively described elsewhere (Giallongo et al. 1998). In brief, we have computed the expected galaxy colors as a function of redshift for synthetic models in the GISSEL library, for an extensive variety of combinations of age, metallicity, IMF, and e-folding star formation time-scale. The reddening produced by internal dust in star-forming galaxies has then been added using the SMC extinction law (Pei 1992), along with the absorption produced by hydrogen in the intergalactic medium (Madau 1995). Finally, at any redshift galaxies are allowed to have any age smaller than the Hubble time at that redshift ($`\mathrm{\Omega }=1`$ and $`H_0=50`$ km s<sup>-1</sup>Mpc<sup>-1</sup> have been adopted throughout the paper).
The resulting large dataset includes $`5\times 10^5`$ “simulated galaxies”, and a classical $`\chi ^2`$–minimization procedure has been applied to find the best-fitting spectral template to the observed colors.
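A minimal sketch of such a $`\chi ^2`$ fit (our own illustration; the array names are hypothetical, and the free overall flux normalization of each template is minimized analytically) is:

```python
import numpy as np

def photometric_redshift(f_obs, f_err, model_fluxes, model_redshifts):
    """Chi^2 template fit to broad-band fluxes.

    f_obs, f_err    : observed fluxes and errors in the N bands
    model_fluxes    : (n_models, N) synthetic fluxes of the template grid
    model_redshifts : (n_models,) redshift attached to each template

    The overall normalization `a` of each template is free; minimizing
    chi^2 = sum w (f_obs - a f_mod)^2 analytically in `a` gives the
    expression below, after which the best template is the chi^2 minimum.
    """
    w = 1.0 / f_err ** 2
    a = (w * f_obs * model_fluxes).sum(axis=1) / (w * model_fluxes ** 2).sum(axis=1)
    chi2 = (w * (f_obs - a[:, None] * model_fluxes) ** 2).sum(axis=1)
    best = np.argmin(chi2)
    return model_redshifts[best], chi2[best]
```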
This procedure has been tested on 108 galaxies with spectroscopically confirmed redshifts in the HDF-N (Cowie 1997; Cohen et al. 1997; Dickinson et al. 1997; Lowenthal et al. 1997; Fernandez–Soto et al. 1998), obtaining an accuracy $`\sigma _z\simeq 0.1`$ in the redshift interval $`z=0`$–$`3.5`$ (Giallongo et al. 1998; Fontana et al. 1999).
The resulting redshift distribution is shown in Fig 3. A peak is clearly seen at $`z=0.5`$–$`1`$, with a well populated tail extending to higher redshifts: indeed, about 28% of the galaxies turn out to lie at $`z\gtrsim 2`$.
A very red object was already identified as a $`z\gtrsim 2`$ candidate from the preliminary NICMOS observations, being detected in the $`H`$ band (F160W) and undetected in CTIO 4m telescope $`R`$ and $`I`$ band images, with $`(R-H)_{\mathrm{AB}}>3.9`$ and $`(I-H)_{\mathrm{AB}}>3.5`$ (Treu et al. 1998). This object (named VLT 154 in our catalog) is detected in all the VLT images except the $`U`$ band, and our photometric redshift analysis indicates a redshift $`z\simeq 1.8`$ (see Fig 4). The redshift accuracy is limited by the lack of major spectral features, and acceptable solutions in the range $`1.5<z<2.1`$ can be found using different combinations of the parameters involved, consistent with the result of Stiavelli et al. (1998). The $`z=1.8`$ best-fit is obtained with solar metallicity, $`E(B-V)=0.15`$ (with a SMC extinction law), star–formation timescale $`\tau =0.3`$ Gyr and an age of 2 Gyr. Adopting the Calzetti (1997) extinction law we obtain $`z=1.65`$ with $`E(B-V)=0.2`$. Assuming no reddening from dust, we obtain an even higher best–fit redshift of $`z=2.05`$. At $`z=1.8`$, this object would have $`M_K=-25.67`$ and $`M_B=-21.84`$ ($`k`$–corrections are computed exactly from the best–fitting spectrum). Spectroscopic follow–up with ISAAC will hopefully reveal whether this object is indeed a high redshift elliptical galaxy undergoing passive evolution, as suggested by the fit parameters, a result that would have wide cosmological relevance.
The list of the objects at $`z\gtrsim 2.5`$ is given in Table 2, while Fig. 4 shows the best fitting spectra of five of them. Two of these objects are unresolved in the VLT images, although they are too faint for the CLASS\_STAR parameter to be reliable. Since they also fall outside the HST-STIS image overlapping the HDF-S NICMOS field, we are not able to exclude the possibility that they are actually galactic stars, which are the major source of interlopers in the Steidel et al. (1996) sample.
Before comparing these results to the HDF-N, it is worth noting that the HDF F300W filter is significantly wider and bluer than the Johnson $`U`$ used here, with only a small overlapping region. As a result, the redshift range sampled by the HDF is wider and centered at a lower redshift ($`2<z<3.2`$) than in the present work ($`2.5<z<3.4`$), and the surface density of “$`U`$-dropout” galaxies in the HDF-N with $`V_{606}\le 26.5`$ is significantly higher than found here, at $`\sim 18`$ arcmin<sup>-2</sup> (Pozzetti et al. 1998).
Only one galaxy candidate at $`z\simeq 4`$ results from the redshift distribution in Fig. 3. This is consistent with the number density of 0.8 arcmin<sup>-2</sup> of “$`B`$-dropout” galaxies detected in the HDF-N (Pozzetti et al. 1998), and lower than a similar prediction for the NTT Deep Field (2.7 arcmin<sup>-2</sup> at $`r\le 26`$; Arnouts et al. 1998).
Given the small size of the field studied with the VLT-TC, these results are obviously of limited statistical significance. They demonstrate however the potential of future, wide-field VLT instruments like FORS-1 and FORS-2, now planned to become operational in 1999 and 2000. These instruments will allow us to explore to an unprecedented depth the distribution and evolutionary status of galaxies in the early universe.
###### Acknowledgements.
We thank B. Leibundgut for obtaining most of the observations, W. Freudling for providing a preliminary NICMOS image of the field, and G. De Marchi, F. Natali and V. Testa for their help in the data analysis. R.F. is affiliated to the Astrophysics Division, Space Science Department, European Space Agency.
# Primordial Black Hole Formation from Inflaton
## Abstract
Measurements of the distances to SNe Ia have produced strong evidence that the Universe is really accelerating, implying the existence of a nearly uniform component of dark energy whose simplest explanation is a cosmological constant. In this paper a small, slowly changing cosmological term is proposed, which is a function of a slow-rolling scalar field. With it, the properties of de Sitter primordial black holes, for both the charged and the uncharged case, are carefully examined, and the relationship between black hole formation and the energy transfer of the inflaton within this cosmological term is elucidated.
There is now prima facie evidence supporting two basic tenets of the hot big bang universe paradigm: inflation and dark (matter and energy) components. Measurements of the distances to SNe Ia have produced strong evidence that the Universe is indeed accelerating, which indicates that most of the critical density exists in the form of a nearly uniform and positive dark energy. This component is poorly understood, so the identification and elucidation of the mysterious dark-energy component is naturally a very pressing question for present-day physics. Vacuum energy is only the simplest possibility for the smooth dark component; there are other possibilities: frustrated topological defects, or a slowly rolling scalar field, or quintessence. Independent evidence for the existence of this dark energy, e.g., from CMB anisotropy, the SDSS and 2dF surveys, or gravitational lensing, is crucial for verifying the accounting of matter and energy in the Universe. Additional and more precise measurements of SNe Ia could help shed light on the precise nature of the dark energy. The dark energy problem is of great importance not only for cosmology, but for fundamental physics as well. Whether it is vacuum energy or quintessence, it is still a puzzle for fundamental physics and possibly a clue about the unification of the forces and particles.
Since the identity of the dark matter is also not yet clear, primordial black holes (PBHs) are one of the possible cold dark matter candidates, the cold dark matter being believed to make up the majority of the matter content of the Universe. PBHs may form in the early universe when pre-existing adiabatic density fluctuations enter the cosmological horizon and recollapse. That is, primordial overdensities, seeded for instance by inflation, may collapse to primordial black holes during early eras if they exceed a critical threshold. Thus, it is quite reasonable to discuss PBHs by connecting the mysterious dark energy problem to PBH formation in de Sitter spacetimes, where a cosmological term is essential to describe the PBHs’ properties.
In this paper we propose a tiny, slowly changing cosmological term dependent on a slow-rolling scalar field, which may come from a supersymmetric particle physics model at a higher energy scale, just as some classes of quintessence may originate from the dynamical supersymmetry breaking of a supersymmetric particle theory with a flat direction. We discuss its relation to de Sitter primordial black hole formation, for both the charged and uncharged cases, as well as the black holes’ properties.
Studies of black hole formation from the gravitational collapse of a (massless) scalar field have revealed interesting nonperturbative and non-linear energy (mass) conversion phenomena at the threshold of black hole formation. Specifically, starting from the spherically symmetric de Sitter black hole spacetimes with charge q and mass m,
$$ds^2=-a(t,r)dt^2+a^{-1}(t,r)dr^2+r^2d\mathrm{\Omega }^2,$$
(1)
where $`d\mathrm{\Omega }^2\equiv d\theta ^2+\mathrm{sin}^2\theta d\phi ^2`$, and $`\{x^\mu \}=\{t,r,\theta ,\phi \}`$ are the usual spherical coordinates,
$$a(t,r)=1-2m/r-\mathrm{\Lambda }r^2/3+q^2/r^2$$
(2)
The $`\mathrm{\Lambda }`$ is taken to have the form given by Peebles and Vilenkin, with the caveat that here we only consider a single-component field for simplicity; the reduced Planck mass is set to $`M_p=(8\pi G)^{-1/2}=1`$, as are $`c=\hbar =1`$
$$\mathrm{\Lambda }(\varphi )=b(\varphi ^4+M^4)$$
(3)
where the constant energy parameter is assumed to dominate and the self-coupling constant is $`b=1\times 10^{-14}`$, from the condition that present-day large scale structure grows from quantum fluctuations frozen into $`\varphi `$ during inflation, as well as the consideration that the present density parameter in matter is $`\mathrm{\Omega }_m\simeq 0.3`$, with $`\mathrm{\Omega }_\varphi =1-\mathrm{\Omega }_m\simeq 0.7`$ in the inflaton. Besides, we also require that the cosmological term satisfy the flatness conditions at the very early stage of the Universe’s evolution, to ensure its slowly changing property,
$$\dot{\varphi }\simeq -\mathrm{\Lambda }^{\prime }(\varphi )/H$$
(4)
where $`H`$ is the Hubble parameter, itself a function of the inflaton $`\varphi `$ during the very early period of the Universe’s evolution; the overhead dot denotes a derivative with respect to time and the prime a derivative with respect to the inflaton $`\varphi `$. The two flatness conditions are
$$(\mathrm{\Lambda }^{\prime }(\varphi )/\mathrm{\Lambda }(\varphi ))^2/2=8\varphi ^6/(M^4+\varphi ^4)^2\ll 1$$
(5)
and
$$\mathrm{\Lambda }^{\prime \prime }(\varphi )/\mathrm{\Lambda }(\varphi )=12\varphi ^2/(M^4+\varphi ^4)\ll 1$$
(6)
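Both conditions follow directly from Eq. (3): with $`\mathrm{\Lambda }^{\prime }=4b\varphi ^3`$ and $`\mathrm{\Lambda }^{\prime \prime }=12b\varphi ^2`$,
$$\frac{1}{2}\left(\frac{\mathrm{\Lambda }^{\prime }}{\mathrm{\Lambda }}\right)^2=\frac{16b^2\varphi ^6}{2b^2(M^4+\varphi ^4)^2}=\frac{8\varphi ^6}{(M^4+\varphi ^4)^2},\qquad \frac{\mathrm{\Lambda }^{\prime \prime }}{\mathrm{\Lambda }}=\frac{12b\varphi ^2}{b(M^4+\varphi ^4)}=\frac{12\varphi ^2}{M^4+\varphi ^4},$$
so the coupling $`b`$ drops out, and both quantities are small either for $`\varphi \ll M`$ or for $`\varphi `$ much larger than both $`M`$ and unity (in reduced Planck units).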
Generally, for a controllable theory, the effective potential is only valid at scales lower than the Planck energy scale.
We can obtain this quartic potential from a simple superpotential of Wess-Zumino type,
$$W=c\varphi ^3$$
(7)
(where $`c`$ is a self-coupling constant) with the following U(1) R-symmetry,
$$\varphi \rightarrow \mathrm{exp}(i\beta /3)\varphi $$
(8)
where $`\beta `$ is the transformation parameter and
$$W\rightarrow \mathrm{exp}(i\beta )W,$$
(9)
plus a cosmological-constant-like energy term. This symmetry thus forbids the other, higher-order terms in $`\varphi `$, and the resulting potential possesses a $`Z_4`$ symmetry. Generally, if we require the system to have a $`Z_2`$ symmetry, the potential should also include the $`\varphi ^2`$ term, that is, a mass term.
Following the usual treatment in the literature, we define the parameter $`z=r/m`$ and the “charge-mass-ratio” parameter $`\alpha =q/m`$, with $`m>0`$ and $`r>0`$, and
$$y=3(z^2-2z+\alpha ^2)/z^4$$
(10)
It is easy to find that, when $`a(t,r)=0`$,
$$\mathrm{\Lambda }m^2=y$$
(11)
Generally, equation (2) with $`a(t,r)=0`$ possesses four non-degenerate roots, and the equation
$$dy/dz=0$$
(12)
has two, of which, in our case, only the smaller one is relevant to the following analysis; that is,
$$0<\alpha ^2<9/8$$
(13)
and
$$z_{-}=3/2-(9/4-2\alpha ^2)^{1/2}.$$
(14)
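Indeed, differentiating Eq. (10) gives
$$\frac{dy}{dz}=-\frac{6(z^2-3z+2\alpha ^2)}{z^5},$$
so $`dy/dz=0`$ at $`z_{\pm }=[3\pm (9-8\alpha ^2)^{1/2}]/2`$; the two stationary points are real precisely when $`\alpha ^2\le 9/8`$, which is the content of Eq. (13), and Eq. (14) is the smaller root $`z_{-}`$.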
For the charged de Sitter black hole spacetimes we then have the following cases (we take $`\mathrm{\Lambda }`$ to be the value of the potential):
a. Two horizons, provided $`\mathrm{\Lambda }m^2>y(z_{-})`$;
b. One horizon, if $`\mathrm{\Lambda }m^2=y(z_{-})`$; and
c. No horizon, when $`\mathrm{\Lambda }m^2<y(z_{-})`$.
There is thus a family of parameters, say $`S[q,m,M,b]`$, such that for values of $`\mathrm{\Lambda }`$ not less than the critical value $`y(z_{-})/m^2`$ black holes are formed, while otherwise no black holes are formed.
In case (b), the critical solution is universal with respect to the family of initial data considered,
$$b(M^4+\varphi ^4)m^2=y(z_{-})$$
(15)
and
$$y(z_{-})=3\frac{(3-\sqrt{9-8\alpha ^2})^2/4-3+(9-8\alpha ^2)^{1/2}+\alpha ^2}{[(3-(9-8\alpha ^2)^{1/2})/2]^4}$$
(16)
That is, the right-hand side of Eq. (15) is only a function of the “charge-mass-ratio”:
$$b(M^4+\varphi ^4)m^2=f(\alpha )$$
(17)
In the standard scenario of inflation the inflationary expansion lasts about a Planck time, with some 50 e-foldings, to solve mainly the original monopole, geometric flatness and physical horizon problems. After that the inflaton executes oscillations around the minimum of the inflation potential, converting its stored energy into the physical world to be created during the reheating period. In the charged de Sitter black hole case that we discuss, the inflaton energy transferred to form a black hole is constrained mathematically by the black hole’s charge-mass-ratio $`\alpha `$, which determines the energy-transfer rate for black hole formation. Of course, the details of the energy-transfer mechanism need more physical input and theoretical consideration, especially as to where the charges specifically come from, beyond proposals such as PBH pair production or parametric resonance in the preheating era if the inflaton couples to other bosonic or charged fermionic fields. It is similar and straightforward to discuss black holes in de Sitter spacetime without charge, which we do in the following by taking q=0. In this case the parameter set consists of only three elements, $`S[m,M,b]`$, and, with some differences from the charged case, there are still the following cases:
a. No horizon, provided $`3\mathrm{\Lambda }^{1/2}>1/m`$ (the root is negative),
b. One degenerate horizon, merging from three horizons, if $`3\mathrm{\Lambda }^{1/2}=1/m`$, with the horizon at $`r=1/\mathrm{\Lambda }^{1/2}`$; that is, the horizon increases with energy input, which is obvious in the simplest Schwarzschild metric case, and the energy transfer in the uncharged case satisfies
$$[b(M^4+\varphi ^4)]^{1/2}m=1/3$$
(18)
c. Two distinct horizons, when $`3\mathrm{\Lambda }^{1/2}<1/m`$, with horizons at $`r_1=2\mathrm{cos}(\delta /3)/\mathrm{\Lambda }^{1/2}`$ and $`r_2=2\mathrm{cos}(\delta /3+4\pi /3)/\mathrm{\Lambda }^{1/2}`$ respectively, under the condition $`\mathrm{cos}(\delta )=-3m\mathrm{\Lambda }^{1/2}`$. The third root of the cubic turns out to be negative.
In this simpler case it is easy to see that an energy-transfer relation similar to Eq. (11) still holds, but without the additional parameter constraint of the charged case, and the classical thermodynamic quantities to be calculated depend on the slowly changing inflaton, which is very interesting. We will detail the tedious computations and publish the results elsewhere.
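As a quick numerical check of the horizon structure in case (c) (a sketch of ours; units are those of the text, with horizon function $`a(r)=1-2m/r-\mathrm{\Lambda }r^2/3`$):

```python
import numpy as np

def sds_horizons(m, lam):
    """Horizons of Schwarzschild-de Sitter, a(r) = 1 - 2m/r - lam*r^2/3.

    For 3*m*sqrt(lam) < 1 the cubic lam*r^3/3 - r + 2m = 0 has two
    positive roots (black hole and cosmological horizons), given by the
    trigonometric formulas quoted in the text; the third root is negative.
    """
    x = 3.0 * m * np.sqrt(lam)
    if x >= 1.0:
        return None                      # cases (a)/(b): no pair of horizons
    delta = np.arccos(-x)                # cos(delta) = -3 m sqrt(lam)
    r_cosmo = 2.0 * np.cos(delta / 3.0) / np.sqrt(lam)
    r_bh = 2.0 * np.cos(delta / 3.0 + 4.0 * np.pi / 3.0) / np.sqrt(lam)
    return r_bh, r_cosmo

# sanity check: both returned radii are roots of a(r) = 0
m, lam = 1.0, 0.05
for r in sds_horizons(m, lam):
    assert abs(1.0 - 2.0 * m / r - lam * r ** 2 / 3.0) < 1e-10
```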
The form of the cosmological term, chosen as the one given by Peebles and Vilenkin, is motivated by two reasons. One is the connection to the tree-level hybrid inflation model of the very early Universe era, which is a very promising theory for confronting all the astrophysical observations we have known so far. The other is the convincing and consistent results of recent analyses of gravitational lensing, SNe Ia and large-scale structure observations, together with theoretical physics considerations, whose predictions disfavor some classes of quintessence models or the simplest cosmological constant interpretation.
Acknowledgements
Xin He Meng is indebted for valuable discussions on this topic to Laura Covi, Robert Brandenberger, Ilia Gogoladze, Christopher Kolda, David Lyth, Leszek Roszkowski, Lewis Ryder, Goran Senjanovic, R. Tung and Xinmin Zhang during his stay at Lancaster University, UK, and at ICTP, Italy. He also thanks Profs. S. Randjbar-Daemi, Goran Senjanovic and A. Smirnov for the kind invitation for a one-month visit to ICTP. The authors would all like to express their gratitude to the Abdus Salam ICTP for the hospitality extended to them while this work was being completed. This work is partly supported by grants from the National Education Ministry and the Natural Science Foundation of P.R. China.
# Impurity effects on the spin excitation spectra in a d-wave superconductor
## I INTRODUCTION
The spin excitation spectra Im$`\chi `$ of high-$`T_c`$ superconductors have been extensively studied by inelastic neutron scattering (INS), and a consistent picture has emerged in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (YBCO). A remarkable feature, in both underdoped and highly doped YBCO, is a sharp neutron resonance peak observed in the superconducting (SC) state at the 2D wave vector Q=$`(\pi ,\pi )`$ \[1-4\]. Also, in the SC state, Im$`\chi `$ is restricted to a small energy range limited at low frequencies by a doping-dependent energy gap (the spin gap), and has a sinusoidal dependence on $`q_z`$, the wave vector perpendicular to the CuO<sub>2</sub> planes. Both the resonance peak and the spin gap disappear in the normal state, and the resonance energy E<sub>r</sub> is found to increase monotonically with the superconducting transition temperature $`T_c`$; they therefore appear to be correlated with the superconductivity.
A number of theories have been proposed to account for this magnetic resonance. Leaving aside differences of detail, one may basically divide the explanations into two classes. First, the resonance may result from spin-flip quasiparticle scattering across the SC gap, causing an enhancement of the electronic spin susceptibility at a specific energy which compensates for the loss of spectral weight below the gap. Second, it may be a consequence of a collective mode in the particle-particle channel which couples to neutron scattering through the particle-hole ($`ph`$) mixing in the d-wave SC state. More particularly, the first class includes i) a BCS gap function with strong Coulomb correlations, or a non-BCS gap function resulting from the interlayer pair tunneling theory of high-$`T_c`$ superconductivity, in the framework of a d-wave pairing model; ii) an s-wave order parameter with opposite signs in the bonding and antibonding bands formed within the CuO<sub>2</sub> bilayer.
Experimentally, it was shown that the superconducting properties are modified by nonmagnetic impurities; in particular, $`T_c`$ is rapidly suppressed by the substitution of the copper ions by zinc ions. A possible interpretation of these results is made in terms of a $`d_{x^2-y^2}`$ order parameter affected by nonmagnetic scattering, assuming that Zn acts as a strong resonant scatterer. In this case, nonmagnetic impurities have a strong pair-breaking effect and will modify the spin excitation spectra observed in the SC state.
The purpose of this paper is to study the modifications of the resonance peak and the spin gap upon the introduction of nonmagnetic impurities in a BCS $`d_{x^2-y^2}`$ superconductor. We treat the impurities in the dilute limit using the self-consistent $`t`$-matrix approximation. Both the impurity self-energy effects and the impurity vertex corrections are considered in our calculations. In the pure system, a sharp magnetic peak is observed and the spectrum is limited at low frequencies by the spin gap. When the impurity self-energy corrections are considered, we find that the peak is broadened and its position is shifted to lower energies. Meanwhile, the magnitude of the spin gap decreases, but only a negligible contribution to the spin excitation spectrum is found inside the gap due to the impurity self-energy corrections. On the other hand, the vertex corrections alone induce a broad spectral weight in the spin gap at impurity concentrations where no clear resonance peak is observed, and have only a slight effect on both the peak and the magnitude of the spin gap.
The paper is organized as follows. In Sec. II, we discuss the model and study the self-energy corrections. Sec. III contains the effect of the vertex corrections. We present the conclusion in Sec. IV.
## II THE SELF-ENERGY CORRECTION
To consider the modulation of the spin susceptibility along the $`c`$ axis, we investigate a bilayer system with an interlayer hopping $`t_{\perp }`$ and the same $`d_{x^2-y^2}`$ order parameter in the two layers. The effects of antiferromagnetic correlations within and between the layers are considered in the random-phase approximation (RPA) form of the susceptibility. The Nambu Green’s function of single particles for the pure system in the SC state is given by,
$$\widehat{g}_0^{(i)}(𝐤,i\omega _n)=\frac{i\omega _n\widehat{\sigma }_0+\mathrm{\Delta }_k\widehat{\sigma }_1+\xi _k^{(i)}\widehat{\sigma }_3}{(i\omega _n)^2-\mathrm{\Delta }_k^2-(\xi _k^{(i)})^2},$$
(1)
where $`\widehat{\sigma }_i(\widehat{\sigma _0}=\widehat{\mathrm{𝟏}})`$ are the Pauli matrices, and $`i=a`$ or $`b`$ labels the antibonding or bonding band. For the quasiparticle dispersion, we use $`\xi _k^{(a/b)}=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)-4t^{^{}}\mathrm{cos}k_x\mathrm{cos}k_y-2t^{^{\prime \prime }}[\mathrm{cos}(2k_x)+\mathrm{cos}(2k_y)]\pm t_{\perp }-\mu `$, with $`t^{^{}}/t=-0.2,t^{^{\prime \prime }}/t=0.25,t_{\perp }/t=0.44,\mu /t=-1.11`$, corresponding to a fit to the angle-resolved photoemission data on optimally doped YBCO, as used before. The order parameter is chosen as $`\mathrm{\Delta }_𝐤=\mathrm{\Delta }_0(\mathrm{cos}k_x-\mathrm{cos}k_y)/2`$, where $`\mathrm{\Delta }_0=4T_{c0}`$ and $`T_{c0}`$ is the SC transition temperature.
The nonmagnetic impurities are modeled by a zero-range potential $`V`$, and the scattering is treated in the self-consistent $`t`$-matrix approximation. In this approach, two parameters are introduced to describe the scattering process: $`c=1/(\pi N_0V)`$ and $`\mathrm{\Gamma }=n_i/\pi N_0`$, where $`N_0`$ and $`n_i`$ are respectively the density of states at the Fermi level and the impurity concentration. The impurity-averaged Nambu Green’s function $`\widehat{g}(𝐤,i\omega _m)`$ for single particles can be written formally as,
$$\widehat{g}^{(i)}(𝐤,i\omega _n)=\frac{i\stackrel{~}{\omega }_n^{(i)}\widehat{\sigma }_0+\mathrm{\Delta }_k\widehat{\sigma _1}+\stackrel{~}{\xi }_k^{(i)}\widehat{\sigma }_3}{(i\stackrel{~}{\omega }_n^{(i)})^2-\mathrm{\Delta }_k^2-(\stackrel{~}{\xi }_k^{(i)})^2}.$$
(2)
The tilde symbol represents inclusion of the impurity self-energy corrections,
$$\stackrel{~}{\omega }_n^{(i)}=\omega _n-\mathrm{\Sigma }_0^{(i)}(\omega _n),\stackrel{~}{\xi }_p^{(i)}=\xi _p^{(i)}+\mathrm{\Sigma }_3^{(i)}(\omega _n),$$
(3)
where we have used the fact that the off-diagonal self-energy $`\mathrm{\Sigma }_1^{(i)}`$ vanishes for a $`d_{x^2-y^2}`$ symmetry of the gap function. In the single-site approximation, the self-energy is given by $`\mathrm{\Sigma }_j^{(i)}=\mathrm{\Gamma }T_j^{(i)}`$. The impurity-scattering $`t`$-matrix $`T_j^{(i)}`$ can be calculated from,
$$T_0^{(i)}=\frac{G_0^{(i)}(\omega )}{c^2-[G_0^{(i)}(\omega )]^2},T_3^{(i)}=\frac{-c}{c^2-[G_0^{(i)}(\omega )]^2},$$
(4)
with $`G_0^{(i)}(\omega )=(1/\pi N_0)\sum _k\mathrm{Tr}[\widehat{\sigma }_0\widehat{g}^{(i)}(𝐤,\omega )]`$. The following calculations are carried out in the unitary limit, $`c=0`$, so only the $`\mathrm{\Sigma }_0`$ contribution remains. The order parameter $`\mathrm{\Delta }(\mathrm{\Gamma },0)`$ and the SC transition temperature $`T_c`$ in the presence of impurities are determined from the gap equation. In the weak-coupling limit, it has been shown that $`\mathrm{\Delta }(\mathrm{\Gamma },0)/\mathrm{\Delta }_0`$ and $`T_c/T_{c0}`$ trace almost the same curve as a function of $`\mathrm{\Gamma }`$, i.e., $`\mathrm{\Delta }(\mathrm{\Gamma },0)/\mathrm{\Delta }_0\simeq T_c/T_{c0}`$. The temperature dependence of $`\mathrm{\Delta }(\mathrm{\Gamma },T)`$ is taken to be,
$$\mathrm{\Delta }(\mathrm{\Gamma },T)=\mathrm{\Delta }(\mathrm{\Gamma },0)\mathrm{tanh}(2\sqrt{(T_c/T)-1}),$$
(5)
where $`T_c`$ is given by the Abrikosov-Gor’kov formula,
$$\mathrm{ln}(\frac{T_c}{T_{c0}})=\psi (\frac{1}{2})-\psi (\frac{1}{2}+\frac{\mathrm{\Gamma }}{2\pi T_c}),$$
(6)
with $`\psi (x)`$ the digamma function.
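Since Eq. (6) is transcendental, $`T_c(\mathrm{\Gamma })`$ must be found numerically; a minimal sketch (ours, not from the paper) that brackets the root is:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def tc_suppression(gamma, tc0=1.0):
    """Solve ln(Tc/Tc0) = psi(1/2) - psi(1/2 + Gamma/(2*pi*Tc)) for Tc."""
    def f(tc):
        return (np.log(tc / tc0) - digamma(0.5)
                + digamma(0.5 + gamma / (2.0 * np.pi * tc)))
    # Tc falls monotonically from tc0 and vanishes at the critical
    # pair-breaking strength Gamma_c = (pi/2) e^{-gamma_E} Tc0 ~ 0.88 Tc0
    return brentq(f, 1e-6 * tc0, tc0)

for g in (0.05, 0.1, 0.3, 0.5):
    print(f"Gamma/Tc0 = {g:4.2f}  ->  Tc/Tc0 = {tc_suppression(g):.3f}")
```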
The spin susceptibility for Matsubara frequencies is calculated from,
$$\chi _0^{(ij)}(𝐪,i\omega _m)=T\sum _n\sum _k\mathrm{Tr}[\frac{1}{2}\widehat{g}^{(i)}(𝐤,i\omega _n)\widehat{g}^{(j)}(𝐤+𝐪,i\omega _m+i\omega _n)].$$
(7)
Its analytic continuation to real frequencies, giving $`\chi _0^{(ij)}(𝐪,\omega )`$, is performed using Padé approximants. When $`\widehat{g}^{(i)}`$ is replaced by $`\widehat{g}_0^{(i)}`$, Eq.(7) gives the result for the pure system. The antiferromagnetic correlations in the plane, $`J_{\parallel }`$, and between the planes, $`J_{\perp }`$, would renormalize $`\chi _0^{(ij)}`$. This effect is considered in the RPA approximation,
$$\chi ^{(ij)}(𝐪,\omega )=\frac{\chi _0^{(ij)}(𝐪,\omega )}{1+J^+(𝐪)\chi _0^{(ij)}(𝐪,\omega )},$$
(8)
with $`J^+(𝐪)=J(𝐪)-J_{\perp }`$ and $`J(𝐪)=J_{\parallel }(\mathrm{cos}q_x+\mathrm{cos}q_y)`$. We note that the susceptibility described above comes from the $`ph`$ excitations of quasiparticles within and between the bonding and antibonding bands. However, the susceptibility $`\chi ^{ph}(𝐪,\omega )`$ observed in neutron scattering is related to the excitations of quasiparticles within and between the layers. The relation between them can be obtained using the transformation matrix between the states in the layer and band representations. It gives,
$$\chi ^{(11)}=\chi ^{(22)}=\frac{1}{4}[\chi ^++\chi ^-],\chi ^{(12)}=(\chi ^{(21)})^{*}=\frac{e^{iq_zd}}{4}[\chi ^+-\chi ^-],$$
(9)
where $`d`$ is the distance between the two layers, $`\chi ^+=\chi ^{(aa)}(𝐪,\omega )+\chi ^{(bb)}(𝐪,\omega )`$ and $`\chi ^-=\chi ^{(ab)}(𝐪,\omega )+\chi ^{(ba)}(𝐪,\omega )`$. Then we have,
$$\chi ^{ph}(𝐪,\omega )=\chi ^{(11)}+\chi ^{(12)}+\chi ^{(21)}+\chi ^{(22)}=\chi ^+\mathrm{cos}^2\frac{q_zd}{2}+\chi ^-\mathrm{sin}^2\frac{q_zd}{2}.$$
(10)
Eq.(10) implies that the experimentally observed $`\mathrm{sin}^2(q_zd/2)`$ modulation of the INS comes from the transitions of quasiparticles between the respective bands.
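Explicitly, summing the four components of Eq. (9) and using $`e^{iq_zd}+e^{-iq_zd}=2\mathrm{cos}(q_zd)`$,
$$\chi ^{ph}=\frac{1}{2}(\chi ^++\chi ^-)+\frac{\mathrm{cos}(q_zd)}{2}(\chi ^+-\chi ^-)=\chi ^+\mathrm{cos}^2\frac{q_zd}{2}+\chi ^-\mathrm{sin}^2\frac{q_zd}{2},$$
where the last step uses $`1+\mathrm{cos}(q_zd)=2\mathrm{cos}^2(q_zd/2)`$ and $`1-\mathrm{cos}(q_zd)=2\mathrm{sin}^2(q_zd/2)`$.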
In the following evaluations, the summations over $`𝐤`$ and $`n`$ are performed by dividing the Brillouin zone into a 1024$`\times `$1024 lattice and by summing from $`n=-100`$ to $`n=100`$ in the Matsubara frequencies $`\omega _n=\pi T(2n-1)`$, respectively. The number of input points in the Padé approximant is chosen to be 100, and $`J^+(𝐐)`$ to be $`-0.85`$ in units of $`t`$ (we will use this unit in the following). In addition, we take $`T_{c0}=0.1`$ and $`T=0.1T_{c0}`$.
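For reference, the Padé continuation can be done with the standard continued-fraction recursion of Vidberg and Serene; a compact sketch of ours follows, where `z` holds the Matsubara points $`i\omega _n`$ and `u` the values of $`\chi _0`$ there, and real frequencies should be passed with a small positive imaginary part:

```python
import numpy as np

def pade_coefficients(z, u):
    """Continued-fraction Pade coefficients a_p such that
    C(w) = a_0/(1 + a_1(w-z_0)/(1 + a_2(w-z_1)/(1 + ...)))
    interpolates u_i at the points z_i."""
    z = np.asarray(z, dtype=complex)
    g = np.array(u, dtype=complex)
    n = len(z)
    a = np.empty(n, dtype=complex)
    a[0] = g[0]
    for p in range(1, n):
        # g_p(z_i) = (g_{p-1}(z_{p-1}) - g_{p-1}(z_i)) / ((z_i - z_{p-1}) g_{p-1}(z_i))
        g[p:] = (a[p - 1] - g[p:]) / ((z[p:] - z[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def pade_eval(a, z, w):
    """Evaluate the continued fraction at w, e.g. w = omega + 1e-4j."""
    t = 0.0
    for p in range(len(a) - 1, 0, -1):
        t = a[p] * (w - z[p - 1]) / (1.0 + t)
    return a[0] / (1.0 + t)
```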
Results for Im$`\chi ^{ph}(𝐐,\omega )`$ versus $`\omega `$ are shown in Fig.1. The continuous line corresponds to the pure system, which reproduces the observed INS features in the SC state. The dashed (dotted) lines are results with the impurity self-energy corrections. To understand the impurity effect, let us first address the origin of the peak for the pure system within the $`d`$-wave BCS framework, which has been studied before. For a qualitative statement, let $`T=0`$ and set the coherence factor to unity; then one has Im$`\chi _0^{(ij)}(𝐐,\omega )=\pi \sum _k\delta (\omega -E_k^{(i)}-E_{k+Q}^{(j)})`$. The energy $`E^{(ij)}(𝐤)=E_k^{(i)}+E_{k+Q}^{(j)}`$, which is a function of the 2D wave vector $`𝐤`$, has a minimum at $`E_{min}^{(ij)}(𝐤)\simeq 2\mathrm{\Delta }_0=0.8`$, corresponding to both $`𝐤`$ and $`𝐤+𝐐`$ near the crossings of the Fermi surface and the magnetic Brillouin zone. At the minimum Im$`\chi _0^{(ij)}(𝐐,\omega )`$ has a step, and correspondingly Re$`\chi _0^{(ij)}(𝐐,\omega )`$ has a logarithmic singularity. In the realistic calculations, this divergence exhibits a maximum, as shown in Fig.2, and causes a resonant peak through the RPA renormalization of Eq.(8). Meanwhile, there is a saddle point at $`(0,\pi )`$ in the quasiparticle dispersion, and it leads to a logarithmic divergence in Im$`\chi _0^{(ij)}(𝐐,\omega )`$. It arises from the transitions between the occupied states located at $`(0,\pi )`$ and empty states above the SC gap; thus the peak position lies at $`E_{sp}^{(i)}=\mathrm{\Delta }_0+\sqrt{\mathrm{\Delta }_0^2+(\xi _{vH}^{(i)})^2}`$. For the dispersion of quasiparticles considered here, the van Hove singularity of the antibonding band at $`(0,\pi )`$ lies at an energy $`\xi _{vH}^{(a)}=-0.25`$ relative to the Fermi level, and that of the bonding band at $`\xi _{vH}^{(b)}=-1.12`$, due to the splitting of the two bands. This gives $`E_{sp}^{(a)}\simeq 0.87`$, which is close to the energy where Re$`\chi _0^{(ij)}(𝐪,\omega )`$ is divergent, and therefore enhances the peak. In fact, these two effects are indistinguishable in the calculations, and only one peak is exhibited in Im$`\chi _0^{(ij)}(𝐐,\omega )`$, as can be seen in Fig.2. Now, we turn to the impurity effects on the spin excitation spectra. In a $`d`$-wave superconductor with resonant nonmagnetic impurity scattering, the SC gap $`\mathrm{\Delta }_0`$ is suppressed, which shifts the peak, whose position is basically equal to $`2\mathrm{\Delta }_0`$ as discussed above, to lower frequencies. Meanwhile, the impurity scattering causes the decay of quasiparticle states and leads to the damping of the spin excitations associated with Im$`\chi _0^{(ij)}(𝐐,\omega )`$. This gives rise to the broadening of the peak. Exactly this behavior is observed in Fig.1. One can also see from Fig.2 that the peak in Im$`\chi _0^{(ij)}(𝐐,\omega )`$ disappears gradually upon the introduction of impurities, because the impurity scattering washes out the van Hove singularity. Consequently, no clear resonance peak is observed at large impurity concentrations (e.g. $`\mathrm{\Gamma }/\mathrm{\Delta }_0=0.08`$), due to this effect and the damping of the spin excitations. Another feature in Fig.1 is that no significant excitation spectral weight is found in the spin gap. The origin of the spin gap in the SC state is the lack of thermally excited $`ph`$ pairs across the SC gap with transition wavevector $`𝐐`$ when the exciting energy is lower than the threshold $`E_{th}\simeq 2\mathrm{\Delta }_0`$.
So, though the impurity self-energy produces an increase in the quasiparticle scattering rate, it may not be strong enough to cause an observable enhancement of the $`ph`$ excitations across the SC gap. We note that the impurity vertex corrections entering Im$`\chi _0^{(ij)}(𝐐,\omega )`$ consist of the $`ph`$ ladder diagrams connected by the impurity-scattering lines, which may allow strong scattering and lead to a significant modification of the spin gap.
## III VERTEX CORRECTION
In the above calculations, the self-energy from the impurity scattering is considered to include the multiple scattering of quasiparticles from the same impurity in the noncrossing manner. Because the dynamical susceptibility measured in magnetic neutron scattering is believed here to come from the $`ph`$ pair excitations, the multiple scattering of particles and holes from the same impurity should also be examined. That is, we must include the vertex corrections due to the impurity scattering, which are displayed diagrammatically in Fig.3. The single- and double-arrowed solid lines in Fig.3 stand for the normal and pairing Green’s functions of particles and holes renormalized by the impurity self-energy. The dashed line is the impurity interaction, and the impurity is represented by a cross $`\times `$. The multiple scatterings in the form of ladder diagrams and the multiple scatterings of quasiparticles from the same impurity can be explicitly seen in Fig.3 (b) and (c), respectively. The vertex-corrected spin susceptibility can be written as a $`4\times 4`$ matrix equation,
$$\widehat{\chi }_0^{(ij)}(𝐪,i\omega _m)=T\sum _n\frac{\widehat{M}^{(ij)}(𝐪,i\omega _m,i\omega _n)}{\widehat{1}-I^{(ij)}(i\omega _m,i\omega _n)\widehat{M}^{(ij)}(𝐪,i\omega _m,i\omega _n)},$$
(11)
where $`\mathrm{\Gamma }(i\omega _m,i\omega _n)=[\widehat{1}-I^{(ij)}(i\omega _m,i\omega _n)\widehat{M}^{(ij)}(𝐪,i\omega _m,i\omega _n)]^{-1}`$ is the dressed vertex and the impurity-scattering lines are given by,
$$I^{(ij)}(i\omega _m,i\omega _n)=\frac{n_i}{[\pi N_0]^2}T_0^{(i)}(i\omega _m+i\omega _n)T_0^{(j)}(i\omega _n).$$
(12)
The spin-triplet particle-particle channel, which enters the $`ph`$ bubbles by transforming e.g. a spin-down particle into a spin-up hole and vice versa via the mixing with the SC condensate, is not included here, because its contribution to the RPA-renormalized spin susceptibility is zero when one considers the AF correlations in the form of those in the $`t-J`$ model. Thus, the components of $`\widehat{M}`$ are,
$$\widehat{M}_{11}^{(ij)}(𝐪,i\omega _m,i\omega _n)=\widehat{M}_{22}^{(ij)}(𝐪,i\omega _m+2i\omega _n,i\omega _n)=\widehat{M}_{33}^{(ij)}(𝐪,i\omega _m-2i\omega _n,i\omega _n)$$
(13)
$$=\widehat{M}_{44}^{(ij)}(𝐪,i\omega _m,i\omega _n)=\int \frac{d^2p}{(2\pi )^2}G^{(i)}(𝐩+𝐪,i\omega _m+i\omega _n)G^{(j)}(𝐩,i\omega _n),$$
(14)
$$\widehat{M}_{14}^{(ij)}(𝐪,i\omega _m,i\omega _n)=\widehat{M}_{23}^{(ij)}(𝐪,i\omega _m,i\omega _n)=\int \frac{d^2p}{(2\pi )^2}F^{(i)}(𝐩+𝐪,i\omega _m+i\omega _n)F^{(j)}(𝐩,i\omega _n),$$
(15)
where $`G^{(i)}(𝐪,i\omega _n)`$ and $`F^{(i)}(𝐪,i\omega _n)`$ are the normal and pairing Green’s functions of quasiparticles, which have been renormalized by the impurity self-energy.
Equations (11), (13) and (14) are calculated using the same method described in Sec.II. Results for Im$`\chi ^{ph}(𝐐,\omega )`$ are shown in Fig.4 for the same impurity concentrations as those in Fig.1. In contrast to the effect of the self-energy corrections, an apparent contribution to the spin excitation spectra is observed in the spin gap at large impurity concentrations, where no clear resonance peak is observed. In order to separate the contribution of the vertex corrections from the self-energy corrections, we have carried out similar calculations in which the Green’s functions of the impurity-free system are used in $`\widehat{M}^{(ij)}`$. The result shows that the signal in the spin gap is solely due to the vertex corrections, while the magnitude of the spin gap, as well as the position and the width of the peak, remains unchanged, except for a slight enhancement of the peak height in the presence of only vertex corrections. The broad contribution in the spin gap may be understood as arising from the strong scattering involved in the impurity vertex, which allows for the multiple scatterings of the ladder diagrams. To address the reason why this strong scattering takes effect mainly at low frequencies, we show in Fig.5 the decay rates of the quasiparticles due to the impurity self-energy, given by $`1/\tau _{imp}^{(i)}(\omega )=-2\mathrm{I}\mathrm{m}\mathrm{\Sigma }_0^{(i)}(\omega )=-2\mathrm{\Gamma }\mathrm{Im}T_0^{(i)}(\omega )`$. A similar result has been obtained by Quinlan and Scalapino. We can see that the decay rates increase as the frequency decreases and reach their maximum at $`\omega =0`$. Because the impurity-scattering lines in the vertex corrections are directly related to $`\mathrm{\Sigma }_0^{(i)}(\omega )`$, as expressed in Eq.(12), this enhancement is amplified by the multiple scatterings in the form of ladder diagrams. We note that, in the absence of the vertex corrections, this enhancement is not strong enough to lead to an apparent spectral weight in the gap, as discussed in Sec.II. From Fig.4, we can also see that the gaplike region at low frequencies still persists in the impurity-doped system. We may ascribe this to the fact that the off-diagonal impurity self-energy $`\mathrm{\Sigma }_1`$ vanishes identically for a $`d_{x^2-y^2}`$ order parameter, so that the angular (e.g. nodal) structure of the SC gap is not changed. According to these features, we find that the overall modifications of the spin excitation spectra upon the doping of nonmagnetic impurities are qualitatively consistent with the INS measurements on YBa<sub>2</sub>(Cu<sub>1-y</sub>Zn<sub>y</sub>)<sub>3</sub>O<sub>6+x</sub>. However, the spectral weight in the spin gap is still not large enough to account quantitatively for the experimental result. We note that a nonmagnetic impurity such as Zn in the CuO<sub>2</sub> planes is believed to induce a local magnetic moment and lead to additional spin-flip scattering. From the above discussion, we think that this scattering may lead to a more significant spectral weight in the spin gap than that given here. Nevertheless, a detailed investigation of this effect is required and will be carried out in future work.
## IV CONCLUSION
We have calculated the spin excitation spectra below $`T_c`$ for a model $`d_{x^2-y^2}`$-wave superconductor with resonant impurity scattering. The impurity self-energy corrections shift the position of the resonance peak to lower frequencies and broaden the peak. As the impurity concentration increases, the resonance peak disappears gradually. When no clear resonance peak is observed, the impurity vertex corrections cause a broad contribution to the excitation spectra in the spin gap, but a memory of the spin gap still remains. Thus, impurity scattering, together with the vertex corrections, accounts qualitatively for the experimental measurements on Zn-doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>.
## ACKNOWLEDGMENTS
One of the authors (J.X.Li) acknowledges support by the National Natural Science Foundation of China.
## FIGURE CAPTIONS
Fig.1. Imaginary parts of the renormalized susceptibility Im$`\chi ^{ph}(𝐐,\omega )`$ versus frequency $`\omega `$ for various impurity concentrations $`\mathrm{\Gamma }/\mathrm{\Delta }_0`$. The solid line represents the result of the pure system. Only the self-energy corrections are considered.
Fig.2. Imaginary part Im$`\chi _0^{(ba)}(𝐐,\omega )`$ and real part Re$`\chi _0^{(ba)}(𝐐,\omega )`$ of the bare susceptibility defined in Eq.(7), versus frequency $`\omega `$, for various impurity concentrations $`\mathrm{\Gamma }/\mathrm{\Delta }_0`$. Here $`a`$ and $`b`$ represent the antibonding and bonding bands respectively. The result for $`\chi _0^{(ab)}(𝐐,\omega )`$ is very similar to $`\chi _0^{(ba)}(𝐐,\omega )`$.
Fig.3. Diagrammatic representation of the impurity vertex corrections to the spin susceptibility in the dilute limit (see text).
Fig.4. Imaginary parts of the renormalized susceptibility Im$`\chi ^{ph}(𝐐,\omega )`$ versus frequency $`\omega `$ for various impurity concentrations $`\mathrm{\Gamma }/\mathrm{\Delta }_0`$. The solid line represents the result of the pure system. Both the self-energy and vertex corrections are considered.
Fig.5. Decay rates of quasiparticles in the antibonding band $`1/\tau _{imp}^{(a)}(\omega )`$ due to impurity self-energy corrections in the unitary limit. Results are shown for various impurity concentrations $`\mathrm{\Gamma }/\mathrm{\Delta }_0`$. The result for the bonding band is very similar to that shown here.
# Lyman break galaxies as young spheroids
## 1 Introduction
Colour selection techniques based on the Lyman limit break of the spectral energy distribution, caused by neutral hydrogen absorption, have been used for many years in surveys for distant QSOs (e.g. Warren et al. 1987). Guhathakurta et al. (1990) and Songaila, Cowie & Lilly (1990) used this method to set limits on the number of star-forming galaxies at $`z\sim 3`$ in faint galaxy samples. More recently, Steidel & Hamilton (1992, 1993) and Steidel, Hamilton & Pettini (1995), using this method, designed a broad-band filter set (the $`U_nG\mathcal{R}`$ system), which allowed them to discover a widespread population of star forming galaxies at redshift $`z\sim 3`$, the Lyman break galaxies (LBGs). Spectroscopic confirmation of their redshifts was first presented by Steidel et al. (1996), and WFPC2 images of select LBGs were published by Giavalisco, Steidel & Macchetto (1996).
An important recent advance in the study of LBGs was the availability of the first results from a program of near-infrared spectroscopy aimed at studying the familiar rest-frame optical emission lines from H II regions of LBGs (Pettini et al. 1998b, hereafter P98). The program was successful in detecting Balmer and \[O III\] emission lines in five LBGs. The nebular luminosities imply star formation rates (SFRs) larger than those deduced from the UV continuum, which suggests significant dust reddening. In four LBGs the velocity dispersion of the emission lines is $`\sigma _{em}\simeq 70`$ km s<sup>-1</sup>, while the fifth system has $`\sigma _{em}\simeq 200`$ km s<sup>-1</sup>. The relative redshifts of the interstellar absorption, nebular emission, and Lyman $`\alpha `$ emission lines differ by several hundred km s<sup>-1</sup>, a similar effect to that found in nearby HII galaxies (Kunth et al. 1998), indicating that large-scale outflows may be a common characteristic of both starbursts and LBGs.
On the other hand, we have developed a chemodynamical model (Friaça & Terlevich 1994; Friaça & Terlevich 1998, hereafter FT) for the formation and evolution of spheroids, which are suspected to be the $`z=0`$ counterparts of LBGs (Steidel et al. 1996). Our chemodynamical model combines multi-zone chemical evolution with 1-D hydrodynamics to follow in detail the evolution and radial behaviour of gas and stars during the formation of a spheroid. The star formation and the subsequent stellar feedback regulate episodes of wind, outflow, and cooling flow. The knowledge of the radial gas flows in the galaxy allows us to trace metallicity gradients, and, in particular, the formation of a high-metallicity core in ellipticals. The first $`\sim 1`$ Gyr of our model galaxies shows striking similarities to the LBGs: intense star formation, compact morphology, the presence of outflows, and significant metal content. We now proceed to examine these similarities, and, in particular, to consider the implications of the recent near-infrared observations of P98. We demonstrate that our model supports the scenario in which LBGs are the progenitors of the present-day bright spheroids. In this paper, the SFRs, luminosities and sizes quoted by P98 are converted to the cosmology adopted here ($`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0=0.5`$).
## 2 Lyman break galaxies as young spheroids
There are several pieces of evidence in favour of the LBGs being the high-redshift counterparts of the present-day spheroidal component of luminous galaxies (Steidel et al. 1996, Giavalisco et al. 1996): their comoving space density is at least 25 % of that of luminous ($`L\gtrsim L^{}`$) present-day galaxies; the widths of the UV interstellar absorption lines in their spectra imply velocity dispersions of $`180-320`$ km s<sup>-1</sup>, typical of the potential well depth of luminous spheroids; and they have enough binding energy to remain relatively compact despite the very high SN rate implied by their SFRs. In addition, the population of LBGs shows strong clustering, in concentrations which may be the precursors of the present rich clusters of galaxies at a time when they were beginning to decouple from the Hubble flow (Steidel et al. 1998). In the context of Cold Dark Matter models of structure formation, the LBGs must be associated with very large halos, of mass $`\sim 10^{12}`$ $`\text{M}_{\odot }`$, in order to have developed such strong clustering at $`z\sim 3`$.
Assuming a Salpeter IMF, P98 inferred from the Balmer emission lines values for the SFR (uncorrected for dust) of their LBGs in the range $`19-210h_{50}^{-2}`$ $`\text{M}_{\odot }`$ yr<sup>-1</sup>. These values are typically a factor of several larger than those deduced from the UV continuum, and indicate that the correction for dust is typically 1-2 magnitudes at 1500 Å. Dickinson (1998), for a large sample of LBGs, deduced from the UV continuum SFRs in the range $`3-60h_{50}^{-2}`$ $`\text{M}_{\odot }`$ yr<sup>-1</sup>. Assuming 1 Gyr of continuous star formation, he used the $`G`$ colours to compute corrections for dust extinction to the SFR. With a Calzetti (1997) attenuation law, after correction for dust extinction, the SFR range becomes $`3-1500`$ $`\text{M}_{\odot }`$ yr<sup>-1</sup>. These levels of star formation are remarkably close to the values of the SFR exhibited in the early evolution of the chemodynamical models of FT.
FT built a sequence of chemodynamical models reproducing the main properties of elliptical galaxies. The calculations begin with a gaseous protogalaxy of initial baryonic mass $`M_G`$. Intense star formation during the early stages of the galaxy builds up its stellar body, and during the evolution of the galaxy, gas and stars exchange mass through star formation and stellar gas return. Owing to the inflow and galactic wind episodes occurring during the galaxy evolution, its present stellar mass is $`15-70`$% higher than $`M_G`$. Gas and stars are embedded in a dark halo of core radius $`r_h`$ and mass $`M_h`$ (we set $`M_h=3M_G`$). The models are characterised by $`M_G`$, $`r_h`$, and a star formation prescription. The SFR is given by a Schmidt law $`\nu _{SF}\propto \rho ^{n_{SF}}`$ ($`\rho `$ is the gas density and $`\nu _{SF}=SFR/\rho `$ is the specific SFR). Here we consider the standard star formation prescription of FT, in which the normalization of $`\nu `$ is $`\nu _0=10`$ Gyr<sup>-1</sup> (in order to reproduce the suprasolar \[Mg/Fe\] ratio of giant ellipticals), $`n_{SF}=1/2`$, and the stars form with a Salpeter IMF from 0.1 to 100 $`\text{M}_{\odot }`$. A more detailed account of the models can be found in FT. Figure 1 shows the evolution of the SFR for models with $`M_G`$ in the range $`5\times 10^9-5\times 10^{11}`$ $`\text{M}_{\odot }`$ ($`r_h=0.85`$ kpc). During the maximum of the SFR, the stellar velocity dispersions of these models, $`55-220`$ km s<sup>-1</sup>, bracket the $`\sigma _{em}=55-190`$ km s<sup>-1</sup> range of the P98 LBGs. The corresponding present-day (age of 13 Gyr) luminosities are $`0.05L^{}-1.4L^{}`$ ($`M_B=-17.6`$ to $`-21.3`$). For our models, the typical range of the SFR averaged over the first Gyr, $`10-700`$ $`\text{M}_{\odot }`$ yr<sup>-1</sup>, reproduces well the SFRs found for LBGs, deduced both from the Balmer lines and from the UV continuum corrected for dust extinction. In addition, the SFR drops dramatically after 1.5-2 Gyr, falling below the lowest SFRs found for the LBGs. The similarity of the SFRs of our models to those of the LBGs allows us to identify the LBGs with young ($`1-2`$ Gyr) spheroids.
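A purely illustrative, one-zone caricature of this behaviour (ours — it ignores the multi-zone hydrodynamics and stellar feedback of the chemodynamical models; the return fraction is a placeholder) integrates the Schmidt law above and shows the SFR collapsing on a $`\sim `$Gyr timescale:

```python
# Toy closed-box Schmidt law: specific SFR nu = nu0 * (rho/rho0)**0.5,
# so the SFR density is nu0 * rho**1.5 in units of the initial density.
nu0, R = 10.0, 0.3        # nu0 in Gyr^-1; R = return fraction (placeholder)
rho, dt = 1.0, 1e-3       # gas density (initial units), timestep in Gyr
for step in range(int(3.0 / dt) + 1):
    sfr = nu0 * rho ** 1.5            # SFR per unit initial gas density
    if step % int(0.5 / dt) == 0:
        print(f"t = {step * dt:3.1f} Gyr   SFR/SFR(0) = {sfr / nu0:.4f}")
    rho -= (1.0 - R) * sfr * dt       # gas consumed, net of stellar return
```

With these numbers the SFR falls by more than two orders of magnitude within 2 Gyr, in line with the rapid decline quoted above.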
It is important to note that the moderately high SFRs of the LBGs seem to be difficult to reconcile with the predictions of the simplistic one-zone (or monolithic) models of formation of elliptical galaxies for supra-$`L^{*}`$ systems. The monolithic models of formation of early-type galaxies were worked out in the early 1970s (e.g. Larson 1975) and are successful at reproducing the supra-solar \[Mg/Fe\] of bright ellipticals (Matteucci & Tornambé 1987; Hamann & Ferland 1993), but the required short star formation time scale ($`\sim 10^8`$ yr) implies extremely high SFRs during the formation of $`L>L^{*}`$ ellipticals. As a matter of fact, in the one-zone model, a gaseous protogalaxy with $`5\times 10^{10}`$$`\text{M}_{\odot}`$ would have a peak SFR of $`\sim 5000`$$`\text{M}_{\odot}`$yr<sup>-1</sup> and a present-day $`M_B=-21.1`$. At least at redshift $`3\lesssim z\lesssim 3.5`$, such a SFR is excluded by the properties of the population of LBGs. By contrast, in the chemodynamical model, the metallicity and abundance ratios of the central region of the young elliptical are explained without the need for the whole galaxy to undergo a global starburst coordinated with the central one, which avoids the excessively high SFRs of the one-zone model. The most massive model here ($`M_G=5\times 10^{11}`$$`\text{M}_{\odot}`$; present-day $`M_B=-21.3`$) has a peak SFR of 1050 $`\text{M}_{\odot}`$yr<sup>-1</sup>, consistent, after correction for dust extinction, with the highest SFRs derived from the UV continuum of LBGs (Dickinson 1998). Note that, as we show below, because the observed rest-frame UV colours limit the amount of dust extinction to $`\sim 3`$ mag at most, we cannot invoke dust to hide a 5000 $`\text{M}_{\odot}`$ yr<sup>-1</sup> starburst in an LBG at $`z\sim 3`$.
HST optical imaging, which probes the rest frame UV between 1400 and 1900 Å, has revealed that the LBGs are generally compact, with a typical half-light radius of 1.4–2.1$`h_{50}^{-1}`$ kpc (Giavalisco et al. 1996). The observed LBGs do not seem to have disk morphology, with the exception of a few objects without central concentration. In addition, some objects have a light profile following an $`r^{1/4}`$ law over a large radial range, which supports the identification of LBGs with young spheroids. Near-infrared imaging has yielded half-light radii in the range 1.7–2.3$`h_{50}^{-1}`$ kpc (P98). The similarity of the near-infrared sizes to those obtained by the HST suggests that the optical morphology follows the UV morphology. As shown in the next section, the compact appearance of the LBGs, both in the UV and in the optical, is reproduced by our young spheroid models.
Note that, due to the strong fading of surface brightness with redshift (the $`(1+z)^4`$ dimming), the outer parts ($`r\gtrsim 10`$ kpc) of the galaxy, with milder star formation rates ($`\nu _{SF}\sim 1`$ Gyr<sup>-1</sup> or less), would be missed in high redshift observations. The difficulty in observing the outer regions of the galaxy would only be compounded if there is some dust extinction. There is an analogy between the LBGs and nearby HII galaxies, in which we observe only the brightest part of the galaxy, superposed on a much more extended low surface brightness object that is revealed only when deeper exposures become available (Telles & Terlevich 1997; Telles, Melnick & Terlevich 1997). Additional support for the LBG–starburst connection comes from the fact that the LBGs in the P98 sample fall on the extrapolation to higher luminosities of the $`L_{\mathrm{H}\beta }`$–$`\sigma `$ correlation found for local H II galaxies by Melnick, Terlevich & Moles (1988) (Terlevich 1998).
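The size of this selection effect is easy to quantify. The following sketch (ours) converts the $`(1+z)^4`$ dimming into magnitudes per arcsec<sup>2</sup>:

```python
import numpy as np

# Surface-brightness dimming: S_obs = S_rest/(1+z)^4,
# i.e. 2.5*log10((1+z)^4) = 10*log10(1+z) magnitudes of dimming.
for z in (1.0, 2.0, 3.0, 3.317):
    print(f"z = {z:5.3f}:  {10.0*np.log10(1.0+z):5.2f} mag/arcsec^2 of dimming")
```

At $`z\sim 3`$ the penalty is $`\sim 6`$ mag arcsec<sup>-2</sup>, which is why the low-$`\nu _{SF}`$ outskirts fall below typical detection limits.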
## 3 DSF 2237+116 C2, a young $`L^{*}`$ spheroid?
It is of interest to compare the predictions of our models with the observational data for DSF 2237+116 C2, the most massive LBG (the LBG with the largest $`\sigma _{em}`$) in the P98 sample. The properties of this object are successfully described by the fiducial model of FT ($`M_G=2\times 10^{11}`$$`\text{M}_{\odot}`$ and $`r_h=3.5`$ kpc). Its present-day stellar mass, $`2.4\times 10^{11}`$$`\text{M}_{\odot}`$, corresponds to $`L_B=0.7L^{*}`$, which allows us to identify DSF 2237+116 C2 with an $`L^{*}`$ spheroid seen during its early evolution, characterised by intense star formation. For the fiducial model, Figure 1 shows the evolution of the SFR within several radii. The initial stage of violent star formation lasts $`\sim 1`$ Gyr, and exhibits a maximum SFR of $`\sim 500`$$`\text{M}_{\odot}`$ yr<sup>-1</sup> at 0.6 Gyr. After the galactic wind is established (at $`t=1.17`$ Gyr), the SFR plummets and practically all star formation within 10 kpc is concentrated inside the inner kpc. The late central star formation, characterised by a moderate SFR ($`\sim `$ a few $`\text{M}_{\odot}`$yr<sup>-1</sup>), is fed by a cooling flow towards the galactic centre. The stagnation point separating the wind and the inflow moves inwards until it reaches the galactic core at $`t=1.8`$ Gyr, when a total wind is present throughout the galaxy. After this time, marking the end of the star-forming stage, only very low levels of star formation are present in the galaxy. The early stage of star formation, during which the stellar body of the galaxy is formed (the stellar mass reaches 50% of its present value at $`t=3.9\times 10^8`$ yr), resembles the LBGs. The average SFR during the first Gyr, 328 $`\text{M}_{\odot}`$ yr<sup>-1</sup>, is very similar to the SFR of 210$`h_{50}^{-2}`$$`\text{M}_{\odot}`$ yr<sup>-1</sup> of DSF 2237+116 C2 inferred from its H$`\beta `$ luminosity. In addition, the SFR is concentrated in the inner 2-3 kpc, which gives our model galaxy the compact appearance typical of LBGs.
Figure 1 also shows $`L_{1500}`$, the luminosity at 1500 Å, which allows a more direct comparison with the imaging data. Note that our models reproduce the compact appearance of LBGs, the light being concentrated in the inner $`\sim 3`$ kpc until the maximum of the SFR and in the inner $`\sim 2`$ kpc after that time. The luminosities predicted during the first Gyr are around $`3\times 10^{42}`$ erg s<sup>-1</sup> Å<sup>-1</sup>. This value is higher than the observed $`L_{1500}`$ of $`4.1\times 10^{41}h_{50}^{-1}`$ erg s<sup>-1</sup> Å<sup>-1</sup> found for DSF 2237+116 C2. Note that the $`(1+z)^4`$ dimming of the surface brightness with redshift makes it difficult to detect the outer regions of the galaxy. However, considering only the UV emission inside a projected radius of 10 kpc reduces the UV luminosity only slightly ($`L_{1500}(r<10\mathrm{kpc})=2.5\times 10^{42}`$ erg s<sup>-1</sup> Å<sup>-1</sup>). On the other hand, a simple comparison between the SFR deduced from the H$`\beta `$ line, assuming that the extinction at H$`\beta `$ is negligible, and the SFR deduced from the UV continuum indicates for DSF 2237+116 C2 a correction factor for dust between 7 and 48 (P98). These very high correction factors should not be taken at face value, since this simplistic approach furnishes some unphysical results, such as negative extinctions for some objects. It would be interesting to consider a dust extinction index based on the UV part of the spectrum, the part most easily accessible to observations of LBGs. The effect of dust is to flatten the spectrum, and the colour $`G-\mathcal{R}`$ provides a reliable measure of the UV slope (at $`z\sim 3`$, the effective wavelengths of the two filters, 4740 and 6850 Å, are translated to 1190 and 1710 Å in the rest frame). The comparison of the observed $`(G-\mathcal{R})_{obs}`$ colours to the $`(G-\mathcal{R})_{calc}`$ colours predicted by an unreddened continuous star formation model with absorption by the Lyman $`\alpha `$ forest allowed P98 to deduce dust correction factors between $`\sim 1`$ and $`\sim 10`$ for the UV luminosities of the LBGs in their sample. In the case of DSF 2237+116 C2, a value of $`L_{1500}=3.9(1.8)\times 10^{42}`$ erg s<sup>-1</sup> Å<sup>-1</sup> is obtained after a correction for dust extinction assuming a Calzetti attenuation law and continuous star formation $`10^7(10^9)`$ years old.
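The conversion between such correction factors and an equivalent UV attenuation is direct, since $`A_{1500}=2.5\mathrm{log}_{10}(SFR_{\text{H}\beta }/SFR_{UV})`$ when the extinction at H$`\beta `$ is neglected. A minimal sketch (ours):

```python
import numpy as np

def a1500_from_sfr_ratio(factor):
    """UV attenuation implied by SFR(H-beta)/SFR(UV), assuming no
    extinction at H-beta (the same simplification criticised above)."""
    return 2.5 * np.log10(factor)

for factor in (7.0, 48.0):
    print(f"correction factor {factor:4.1f} -> A_1500 = {a1500_from_sfr_ratio(factor):4.2f} mag")
```

Factors of 7–48 thus correspond to $`A_{1500}\approx 2.1`$–4.2 mag, showing how extreme this method's estimates are compared to the colour-based factors of $`\sim 1`$–10.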
In view of the importance of the $`G-\mathcal{R}`$ colour in checking for star formation and estimating the dust extinction, Figure 1 also shows the $`(G-\mathcal{R})_{calc}`$ predicted for DSF 2237+116 C2 ($`z=3.317`$) by the fiducial model, obtained as follows: in the first place, the integrated SED is calculated for several apertures, using the Bruzual & Charlot (1998) models; then the SED is redshifted to $`z=3.317`$, reddened by the Lyman $`\alpha `$ forest opacity (Madau 1995), and convolved with the filter transmission curves. Finally, when $`(G-\mathcal{R})_{calc}`$ is bluer than the observed colour of the galaxy ($`(G-\mathcal{R})_{obs}=1.13`$), we calculate, assuming a Calzetti attenuation curve, the value of $`A_{1500}`$ needed to match $`(G-\mathcal{R})_{obs}`$. Since $`G-\mathcal{R}`$ becomes redder with time, we can use the condition $`(G-\mathcal{R})_{calc}<(G-\mathcal{R})_{obs}`$ to set an upper limit on the age of the galaxy, beyond which $`A_{1500}`$ becomes formally negative. This limit is 1.00 Gyr for an aperture $`r<10`$ kpc, and 1.52 Gyr if the aperture encompasses the whole galaxy. The predicted colours are bluer for the larger aperture because: 1) metallicities are typically $`\sim 0.1`$ solar for $`r>10`$ kpc, implying bluer colours for the stellar population; and 2) there is some star formation in the outer parts of the galaxy as the gas driven by the galactic wind is compressed on its way out of the galaxy. At the peak of the SFR, $`A_{1500}`$ reaches $`\sim 2.15`$, within the range $`A_{1500}=1.58`$–2.44 deduced by P98 for continuous star formation lasting from $`10^9`$ to $`10^7`$ yr. Figure 1 also shows the observed value of $`L_{1500}`$ corrected for dust extinction using the time-dependent value of $`A_{1500}`$ obtained as above, and also the values corrected as in P98. The agreement with the predictions of our models, both for the galaxy as a whole and for the inner 10 kpc, is excellent. Therefore, if our model galaxy were at a redshift $`\sim 3`$, it would easily be seen as an LBG.
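For the colour-based estimate, the attenuation at 1500 Å follows from the colour excess once an attenuation curve is fixed. The sketch below (ours) uses the analytic Calzetti (2000) starburst curve as a stand-in for the Calzetti (1997) law cited in the text, and extrapolates it slightly below its formal 1200 Å limit to reach the rest-frame wavelength probed by $`G`$; the numbers are therefore indicative only:

```python
def k_calzetti(lam_um):
    """Calzetti (2000) attenuation curve for 0.12-0.63 um (R_V = 4.05);
    a stand-in assumption for the Calzetti 1997 law used in the text."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509*x - 0.198*x**2 + 0.011*x**3) + 4.05

lam_G, lam_R, lam_uv = 0.119, 0.171, 0.150   # rest-frame microns at z ~ 3

# E(G-R) = A(1190 A) - A(1710 A) and A(lam) = A_1500 * k(lam)/k(1500 A), so:
ratio = k_calzetti(lam_uv) / (k_calzetti(lam_G) - k_calzetti(lam_R))
print(f"A_1500 ~ {ratio:.1f} x [(G-R)_obs - (G-R)_calc]")
print(f"e.g. a colour excess of 0.5 mag -> A_1500 ~ {0.5*ratio:.1f} mag")
```

With this curve one magnitude of excess reddening in $`G-\mathcal{R}`$ translates into roughly 4 magnitudes of attenuation at 1500 Å, so the modest colour excesses of the LBGs already imply the $`A_{1500}\sim 2`$ values quoted above.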
To exploit the recent availability of infrared imaging, which traces the rest-frame optical light, Figure 2 shows the blue luminosity of the fiducial model inside several projected radii. The similarity of rest-frame optical and UV sizes, indicated by the optical and near-infrared observations, is reproduced by the predictions of our model: the half-light radii at the maximum of the SFR, at 1500 Å and in the blue band, are 1.64 and 1.51 kpc, respectively. Due to the possibility of missing light from the outer parts of the galaxy, it is useful to consider the half-light radii with respect only to the inner 10 kpc of the galaxy. In this case, the half-light radii at the SFR peak are 1.46 and 1.44 kpc, for 1500 Å and blue light, respectively. Therefore, the optical morphology follows the UV morphology, and the galaxy remains compact in the optical band. Note, however, that the light does not trace the mass. At the maximum of the SFR, the half-mass radius (7.5 kpc) is much larger than the half-light radius. The star formation does not follow the stellar mass; instead it is regulated by the gas flows (e.g., the star formation within the inner kpc is fed by the cooling flow towards the galaxy centre). Nor is the star formation coordinated across the galaxy: $`\nu _{SF}`$ in the inner kpc reaches several $`\times 10`$ Gyr<sup>-1</sup>, whereas the $`\nu _{SF}`$ averaged over the whole galaxy is only slightly larger than 1 Gyr<sup>-1</sup>. In view of this, estimating the mass from the half-light radius will seriously underestimate the galaxy mass. P98 suspected that they had underestimated the mass of DSF 2237+116 C2 (the value they derive is $`5.5\times 10^{10}`$$`\text{M}_{\odot}`$). Here we quantify their suspicion, suggesting that the mass underestimate could be a factor of 4–5. In fact, at the SFR peak, our model predicts not only half-light radii (whatever their definition) that are very similar to the 1.7$`h_{50}^{-1}`$ kpc found for DSF 2237+116 C2, but also a stellar velocity dispersion of 179 km s<sup>-1</sup>, essentially identical to the observed $`\sigma _{em}=190\pm 25`$ km s<sup>-1</sup>, whereas the stellar mass of our galaxy model is $`2\times 10^{11}`$$`\text{M}_{\odot}`$ at this time.
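The contrast between half-light and half-mass radii can be illustrated with any pair of centrally concentrated light and extended mass profiles. The profiles in the sketch below (ours) are purely illustrative assumptions chosen to mimic the situation described above:

```python
import numpy as np

def half_radius(r, sigma):
    """Projected radius enclosing half of the total of 2*pi*r*sigma(r)."""
    cumul = np.cumsum(2.0 * np.pi * r * sigma) * (r[1] - r[0])
    return np.interp(0.5 * cumul[-1], cumul, r)

r = np.linspace(0.01, 30.0, 3000)                  # kpc
light = np.exp(-r / 0.9)                           # compact starburst (assumption)
mass = (1.0 + (r / 5.0)**2) ** -1.5                # extended stellar body (assumption)

print(f"half-light radius ~ {half_radius(r, light):4.2f} kpc")
print(f"half-mass  radius ~ {half_radius(r, mass):4.2f} kpc")
```

A dynamical mass estimate of the form $`M\sigma ^2r/G`$ evaluated at the half-light radius instead of the half-mass radius is then low by the ratio of the two radii, of the same order as the factor of 4–5 suggested above.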
The metal lines in the spectra of LBGs, with origin in stellar photospheres, interstellar absorption, and nebular emission, indicate metallicities anywhere between 0.01 solar and solar (Steidel et al. 1996). On the other hand, the strong correlation between the UV spectral index and metallicity in local starbursts would suggest a broad range in metallicity, from substantially subsolar to solar or higher (Heckman et al. 1998). In order to make predictions for the metal content of LBGs, Figure 2 also shows the average metallicity of the stellar population inside several spherical zones. The inner region reaches solar metallicity (at $`1.11\times 10^8`$ yr and $`1.56\times 10^8`$ yr for the inner kpc and for the $`1<r<2`$ kpc region, respectively) much earlier than the maximum of the SFR. Therefore, when the galaxy becomes visible as an LBG (i.e. as a star-forming galaxy), its metallicity inside a typical half-light radius ($`\sim 1.5`$ kpc) will be solar or suprasolar. On the other hand, substantial abundance gradients are built up. The metallicity approaches 3 $`\text{Z}_{\odot}`$ in the inner kpc, while it is typically $`\sim 0.1`$$`\text{Z}_{\odot}`$ for $`r>10`$ kpc.
Another important success of our models is the prediction of strong outflows during the stage of intense star formation, which could account for the outflow at a velocity of 500–1000 km s<sup>-1</sup> in the interstellar medium of DSF 2237+116 C2 suggested by the relative velocities of the Lyman $`\alpha `$ emission lines and of the interstellar absorption lines. As a matter of fact, following the maximum of the SFR, an outflow appears at intermediate radii, between 2 and 10 kpc. As we can see from Figure 2, once the outflow in the intermediate region is established, outflow velocities of 500–1000 km s<sup>-1</sup> are achieved for $`t\sim 1`$ Gyr. After 1.17 Gyr, when the outflow reaches the galaxy tidal radius (i.e. the onset of the galactic wind), the wind velocity increases up to about 1900 km s<sup>-1</sup>. However, during the late galactic wind stage, the density in the outflowing gas drops dramatically, making it difficult to obtain any signature of the outflow via interstellar absorption lines and emission lines. The flow structure is complex, because at inner radii there is a highly subsonic (inflow velocity $`\sim 10`$ km s<sup>-1</sup>) cooling flow, and through the outer tidal radius there is infall of low density gas proceeding at 60 km s<sup>-1</sup>. Therefore, the high density, high velocity outflowing gas in the intermediate region just after the peak in the SFR explains the large scale outflows with velocities of $`\sim 500`$ km s<sup>-1</sup>, deduced from the relative redshifts of the interstellar absorption and Lyman $`\alpha `$ emission lines, which are a common feature of LBGs (P98).
The success of our model for an $`L^{*}`$ spheroid or elliptical galaxy in reproducing several properties of the LBG DSF 2237+116 C2 gives additional support to the scenario in which LBGs are the progenitors of present-day bright spheroids. High angular resolution spectroscopy will in the future provide important information regarding the velocity field and angular momentum of LBGs and help us to discern whether they are young bulges or young ellipticals.
## 4 Discussion
The agreement of the fiducial model with the properties of DSF 2237+116 C2 suggests that the mass range of LBGs does include present-day $`L^{*}`$ objects. Note that the present model not only accounts for this particularly massive LBG but also successfully predicts the properties of the ensemble of the LBGs, within the scenario in which they are the progenitors of the present-day spheroids with $`0.1L^{*}\lesssim L_B\lesssim L^{*}`$. Our models also reproduce the main properties of the four LBGs with lower $`\sigma _{em}`$'s in P98. This is illustrated in Figure 1, in which the SFRs deduced for these LBGs are similar to those of models with $`M_G=10^{10}`$–$`5\times 10^{10}`$$`\text{M}_{\odot}`$, for $`t\lesssim 1`$–1.5 Gyr (present-day $`M_B=-18.4`$ to $`-19.7`$).
These models were chosen because during the period $`0.2\lesssim t\lesssim 1.5`$ Gyr (the lower limit on time guarantees that a significant stellar component has already been formed, while for times later than the upper limit the SFR has probably decreased below levels typical of LBGs) their stellar velocity dispersions coincide with the values of $`\sigma _{em}`$ of the 4 low-$`\sigma _{em}`$ LBGs of P98 (which are in the range $`55\pm 15`$ to $`85\pm 15`$ km s<sup>-1</sup>).
One of the central aspects of our modelling is that it follows in detail the impact of gas flows on the early evolution of the galaxies. Besides the importance of galactic winds in galaxy evolution, as already highlighted in the pioneering work of Larson (1974), cooling flows also play a central role in galaxy evolution: feeding a central AGN hosted in the galactic core, building up metallicity gradients (FT), and maintaining a moderate level of star formation in the inner regions of the galaxy at late times, i.e. when the major stellar population of the elliptical galaxy has already been formed (Jimenez et al. 1998).
As a matter of fact, the flow structure is complex, exhibiting, for instance, a partial wind during a considerable span of the galaxy's evolution, with inflow in the inner parts of the galaxy and outflow in the outskirts.
Moreover, the flow structure varies with time, and the same star-forming galaxy can exhibit a variety of flow profiles, depending on the evolutionary stage picked up by the observation. As can be seen from Figure 2, in the fiducial model the outflow does not occur during the whole period of intense star formation. Outflow velocities of $`\sim 500`$ km s<sup>-1</sup> are achieved only $`\sim 0.3`$ Gyr after the maximum of the SFR, and of $`\sim 1000`$ km s<sup>-1</sup> only $`\sim 0.4`$ Gyr after the maximum. The delay between the maximum of the SFR and the onset of the outflow reflects the time needed for the energy input by SNe into the ISM to overcome the gravitational binding energy of the gas. It is thus possible to observe LBGs, i.e. galaxies with high SFRs, both in the outflow phase and before the onset of the outflow. At earlier times, there are global inflows, reaching velocities of up to a few hundred km s<sup>-1</sup>. Therefore, we expect a large dispersion in the relative redshifts of the interstellar absorption, nebular emission and Lyman $`\alpha `$ emission lines of LBGs. As we discuss below, this seems to be the case.
We can see from Figure 3 that for the models with $`M_G`$ in the range $`10^{10}`$–$`5\times 10^{10}`$$`\text{M}_{\odot}`$, which describe well the four $`\sigma _{em}\lesssim 70`$ km s<sup>-1</sup> LBGs in P98, the evolution of the radial flows is qualitatively similar to that of the fiducial model. The main difference is that the outflow happens earlier, and, once the outflow is established, velocities higher than 1000 km s<sup>-1</sup> are reached faster. This is a result of the shallower potential wells of these galaxies. Note, however, that the final wind velocities are somewhat lower than in the fiducial model.
Assuming that the Balmer and \[O III\] emission lines are at the galaxy systemic redshift, the velocity shifts of the interstellar absorption, nebular emission, and Lyman $`\alpha `$ emission lines found by P98 indicate that large velocity fields are a common feature of LBGs. It is important to note that exactly the same result has been found in nearby HII galaxies (Kunth et al. 1998). In all cases where Lyman $`\alpha `$ emission was detected, the line peak is shifted by $`\sim 1000`$ km s<sup>-1</sup> relative to the metal absorption lines. In two out of three cases (Q0000−263 D6 and B2 0902+343 C6) the nebular lines are at intermediate velocities, with Q0000−263 D6 exhibiting a conspicuous P-Cygni profile. The most straightforward interpretation of these characteristics is the presence of large scale outflows with velocities of $`\sim 500`$ km s<sup>-1</sup> in the interstellar media of the galaxies observed: the Lyman $`\alpha `$ emission is suppressed by resonant scattering and the only Lyman $`\alpha `$ photons escaping unabsorbed in our direction are those back-scattered from the far side of the expanding nebula, whereas in absorption against the stellar continuum we see the approaching part of the outflow. Within this scenario, the relative velocities of the three line sets of these LBGs are consistent with our model, since velocities of $`\sim 500`$ km s<sup>-1</sup> are easily reached in the low mass models, at a galaxy age of 0.8–0.9 Gyr.
However, this simple symmetric picture probably does not account for all the variety of possible situations, since in the third LBG with Lyman $`\alpha `$ emission (the high-$`\sigma _{em}`$ DSF 2237+116 C2) the H$`\beta `$ and \[O III\] emission are apparently at roughly the same velocity as the absorption lines, even though the Lyman $`\alpha `$ emission is redshifted by $`\sim 1000`$ km s<sup>-1</sup>. This case too could be explained by our models, with the Lyman $`\alpha `$ emission originating in the high velocity shell receding at the back of the galaxy (in the fiducial model, at $`t=1.1`$ Gyr this shell is at $`r\sim 10`$ kpc with a velocity of 1250 km s<sup>-1</sup>), while the interstellar absorption lines could arise in the few central kpc, with much lower expansion velocities. See the interesting discussion regarding the escape of Lyman $`\alpha `$ photons in HII galaxies by Kunth et al. (1999).
In the P98 sample, 2 of the 5 objects do not show Lyman $`\alpha `$ emission. In one of these systems (Q0201+113 B13) the interstellar absorption lines are redshifted by $`\sim 250`$ km s<sup>-1</sup> with respect to the H$`\alpha `$ emission (this object is at $`z\sim 2.2`$ and, as a consequence, H$`\alpha `$ is observed instead of H$`\beta `$). If this velocity difference is real (no error is quoted by P98 for the relative velocity of this object), then, within the scenario depicted by our model, we could be observing the galaxy during its early global inflow. The remaining object in the P98 sample, also with no Lyman $`\alpha `$ emission, Q0201+113 C6, shows a $`\sim 3200`$ km s<sup>-1</sup> difference between emission and absorption lines, too large to be accounted for by any of our models. As a matter of fact, this difference is so large that the line set identified as interstellar absorption could in reality be an intervening absorption line system. We would like to point out that our predictions regarding the importance of the gas velocity field in the escape of Lyman $`\alpha `$ photons are similar to those of Kunth and collaborators for nearby HII galaxies (Kunth et al. 1998, 1999).
It is of interest to compare the results of the present model with the predictions for LBGs of the semi-analytic models of Baugh et al. (1998) and Mo, Mao & White (1998), which are based on disk formation models. These models are simpler than the present model and, therefore, their predictions are more limited. For instance, Mo et al. (1998) do not discuss gas flows. However, in principle, their model could allow for some global characterization of the infall associated with the formation of a disk. Note that our prediction of outflows allows one to distinguish the present model from disk models, since disk models include infall but do not exhibit outflows.
One success of the models of Baugh et al. (1998) and Mo et al. (1998) is the good description of the clustering properties of the LBGs within the framework of the most popular hierarchical models for structure formation. Note that this success refers in fact to the clustering of halos, independently of whether they host a spheroidal or a disk-like star-forming central galaxy. Mo et al. (1998) claim to explain also the moderate SFRs, the small sizes and the velocity dispersions of the LBGs. However, as pointed out by Mo et al. themselves, their calculation of the SFR is very sketchy. In addition, their correct prediction of the compact size of the LBGs is a consequence of identifying the LBGs with low angular momentum objects, and so with small sizes for their mass. In connection with this last point, one of the parameters of the individual galaxies in Mo et al. (1998) is $`\lambda `$, the spin parameter of the halo, and for systems with $`\lambda `$ smaller than some critical $`\lambda _{crit}`$ the gas cannot settle into a centrifugally supported disk without first becoming self-gravitating. The final configuration is probably spheroidal rather than disk-like. In view of this, Mo et al. (1998) admit that a sizeable fraction of LBGs could be spheroids (see their Section 3.7), although their model was initially designed for the formation of disks. A population of disk objects may well exist among the LBGs, although the morphological studies of LBGs suggest that disks are a minority in the LBG population. HST imaging (filters F606W to F814W, probing the rest frame UV range 1400–1900 Å) has shown that the LBGs do not seem to have disk morphology, with the exception of a few objects without central concentration, for which an exponential profile provides a good fit to their surface brightness distribution (Giavalisco et al. 1996). In addition, some objects have a light profile following an $`r^{1/4}`$ law over a large radial range. Near-infrared HST NICMOS imaging (sensitive to the rest-frame optical light) provides a similar picture (Giavalisco, private communication). Future imaging observations would be very useful to establish which fraction of the LBGs are disk-like systems.
With respect to the velocity dispersions of the LBGs, Mo et al. (1998) reproduce the median values of the LBGs, but run into some problems with the $`\sigma \sim 200`$ km s<sup>-1</sup> of DSF 2237+116 C2, since their predicted stellar velocity dispersions are typically around 70 km s<sup>-1</sup> for their $`\mathrm{\Omega }_0=1`$ cosmology and 120 km s<sup>-1</sup> for their $`\mathrm{\Omega }_0=0.3`$ flat cosmology. In the distribution of velocity dispersions predicted by Mo et al. (their Figure 8), the highest velocity dispersion is $`\sim 170`$ km s<sup>-1</sup> for the $`\mathrm{\Omega }_0=1`$ cosmology and, even for the $`\mathrm{\Omega }_0=0.3`$ flat cosmology (which predicts higher velocity dispersions), values of $`\sigma \sim 200`$ km s<sup>-1</sup> are very rare.
One interesting possibility offered by our modelling of the $`G-\mathcal{R}`$ colours is obtaining constraints on the age of the LBGs within the young spheroid scenario. One minimal constraint is that the galaxy cannot be older than $`\sim 1.5`$ Gyr, otherwise $`G-\mathcal{R}`$ would be redder than observed. If, in addition, we require that the SFRs predicted by our model be consistent with the SFR derived from the H$`\beta `$ emission, $`SFR_{\text{H}\beta }`$, the age of the LBG is constrained to be not older than $`\sim 1`$ Gyr. In this case, the predicted values for $`A_{1500}`$, 2.9–1.5 (for $`t`$ from 0.1 to 1 Gyr), are comparable to those obtained by P98, because the evolution of the SFR of the fiducial model, with the constraint $`t\lesssim 1`$ Gyr and the strong decrease of the SFR beyond this time, includes their simple models of continuous star formation lasting for $`10^7`$ yr and $`10^9`$ yr. On the other hand, P98 estimated a value of $`A_{1500}=4.2`$ from comparing the SFRs deduced from the H$`\beta `$ and UV luminosities, assuming a Calzetti attenuation law and continuous star formation for $`10^9`$ yr. This value would represent a discrepancy with the values of $`A_{1500}`$ predicted by our models, were it not for the fact that the simple comparison of H$`\beta `$ and UV luminosities can lead to unreliable estimates of $`A_{1500}`$, as illustrated by some unphysical results, e.g. the negative extinctions found by P98 for some objects. In determining $`A_{1500}`$ from the $`SFR_{\text{H}\beta }/SFR_{UV}`$ ratio, one should be aware of the uncertainties in converting $`L_{1500}`$ to SFR, and of the fact that the long wavelength baseline involved in this estimate increases the uncertainty of assuming one particular extinction law.
Finally, the relatively high metal abundances obtained by our models could have been expected, with some hindsight, given the high star formation rates derived for the LBGs, together with the continued (for periods $`\sim 1`$ Gyr long) star formation favoured in previous works (e.g. Steidel et al. 1996). One of the important consequences of these relatively high metallicities, coupled to the large outflow velocities, is that the halo of the galaxy will be enriched in metals in $`\sim 10^8`$ yr, and the circumgalactic environment out to distances of several hundred kpc from the galaxy will be contaminated with metals in $`\sim 1`$ Gyr or less. This fast chemical enrichment mechanism for the galactic halo and the intergalactic medium could explain the chemical abundances of quasar absorption line systems, taken as probes of the gaseous galactic halo and of the intergalactic medium, as, for instance, in Lyman limit systems (Viegas & Friaça 1995) and in the Lyman $`\alpha `$ forest (Friaça, Viegas & Gruenwald 1998).
## Acknowledgments
We thank Gustavo Bruzual for making the GISSEL code for evolutionary stellar population synthesis available to us. We thank Mauro Giavalisco for supplying us with the filter transmission curves of the $`U_nG\mathcal{R}`$ system. A.C.S.F. acknowledges support from the Brazilian agencies FAPESP, CNPq, and FINEP/PRONEX. We would like to thank the anonymous referee, whose suggestions greatly improved this paper.
## 1 Introduction
Over the last few years the idea of string duality has led to much greater understanding of the non–perturbative features of string theory, to the extent that we can now visualize the various string theories as being different points in a larger moduli space. Most notably we have learned about the interplay of geometric features in compactification, especially those based on the rich area of Calabi-Yau manifolds.
In particular we have learned to relate strongly coupled type IIB superstrings in a background of 24 7–branes to heterotic string theory through elliptically fibered K3 surfaces. However, to date the majority of this work has been very mathematical in nature, with little attention being paid to the explicit duality map. In this letter we will address this issue. There are several well known and phenomenologically interesting methods of constructing heterotic theories in less than ten dimensions that have been known for some time, viz.: covariant lattices, free fermionic constructions, and asymmetric orbifolds. It can be shown that these are all essentially equivalent, with each method having its own benefits and drawbacks.
Nevertheless, it is not trivial to determine a map between F–theory and the heterotic string compactifications directly, because it is not known to what extent the moduli spaces overlap. For example, F–theory on K3 has a fixed amount of supersymmetry; however, it is relatively easy to construct heterotic models in eight dimensions with less supersymmetry. Since F–theory is non–perturbative, understanding the map should provide an interesting relationship between perturbative and non–perturbative aspects of the heterotic string.
In section 2, the basic results needed from F–theory on how to read off gauge groups, with the necessary substructure indicating non–perturbative contributions, are outlined. In section 3, we look at the prescription to recover the purely perturbative heterotic theory and discuss how to construct the dictionary between the two via the moduli space of Type I theory. In section 4, we conclude the paper with a brief look at how the work of this paper relates to the recent work being done on NS9–branes, which are problematic in string theory but appear to be required by duality arguments.
## 2 F–theory
F–theory is not a string theory per se, though attempts have been made to define it as a 12-dimensional theory with two time dimensions. A much more satisfactory approach is to consider it as the Type IIB superstring compactified on a sphere (the complex projective line) in the background of twenty four 7–branes. Type IIB superstring theory in ten dimensions has an $`SL(2,𝐙)`$ self–duality and hence has an associated torus. When this torus is fibered over the sphere in the brane background an elliptically fibered K3 surface is formed, the properties of which are well known. The moduli space is
$$\mathcal{M}=SO(2,18;𝐙)\backslash SO(2,18)/SO(2)\times SO(18)$$
(1)
and is the same as that of a heterotic string compactified on $`T^2`$ using a Narain lattice.
Since the K3 surface has an elliptic structure, its singularity structure can be easily read off from the Weierstrass equation. These singularities have been classified by Kodaira in a way corresponding to the A–D–E classification of Lie algebras. It is standard to accept this correspondence as being exact, i.e. the singularity type corresponds to a gauge group of the same Lie algebra type in the physical theory. However, Witten has shown that this is not necessarily true, though how this works from a heterotic point of view is not yet fully understood. Other algebras can also be constructed using various configurations of mutually non–local 7–branes, but as they do not coalesce to a single point they are not of interest here.
Recently there has been much work done in understanding how the gauge groups arise from the K3 surface through the theory of string networks. The singularities of the K3 surface correspond to the positions of the twenty four 7–branes forming the background in F–theory. We know from work on the related theories of D–branes that perturbatively there should be only sixteen D7–branes. Using Seiberg–Witten theory, Sen showed within F–theory that the orientifold planes can be formed out of 7–branes which have different charges with respect to the Ramond and Neveu–Schwarz sectors. In references it is shown how to combine these mutually non-local branes to provide the states needed to fill out the gauge groups corresponding to the respective singularities on the K3 surface.
The 7–branes are classified by the RR and NS–NS charges, $`[p,q]`$, they carry; 7–branes with different values of $`[p,q]`$ are said to be mutually non–local. From here on we will only be concerned with three types of mutually non–local 7–branes, denoted $`A,B,C`$. If we have $`n`$ 7–branes of the same type at a singularity then the associated gauge algebra is $`su(n)`$. If there are different types of branes at a singularity, $`A^{n_a}B^{n_b}C^{n_c}`$, the associated algebra is $`su(n_a)\oplus su(n_b)\oplus su(n_c)`$. This is actually a maximal subalgebra of a larger simply laced algebra, since extra massless BPS states also appear in representations of the subalgebra and fill out the adjoint of the larger group in a manner analogous to free fermion constructions. A $`D`$ type singularity corresponds to a 7–brane configuration of the form $`A^nBC`$; the subalgebra is $`su(n)\oplus u(1)`$, which is enhanced to a $`D_n`$ algebra. Similarly an $`E`$ type singularity has a 7–brane configuration of the form $`A^nBC^2`$; the subalgebra is $`su(n)\oplus u(1)\oplus su(2)`$, which is enhanced to an $`E_{n+1}`$ algebra (as the rank count requires, and consistently with the $`E_{n+1}`$ notation used in section 2.1 below). It is the maximal subalgebras that we are most interested in since they encode non–perturbative features and point out where BPS states should be in the heterotic spectrum.
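The bookkeeping implied by these rules is summarized in the toy lookup below (ours); it only encodes the counting, since the actual enhancement is generated by junction states stretched between the mutually non-local branes:

```python
def algebra(n_a, n_b, n_c):
    """Enhanced algebra and 7-brane count for A^n_a B^n_b C^n_c,
    following the rules quoted in the text."""
    if (n_b, n_c) == (0, 0):
        name = f"su({n_a})"
    elif (n_b, n_c) == (1, 1):
        name = f"so({2*n_a})   [D-type, subalgebra su({n_a})+u(1)]"
    elif (n_b, n_c) == (1, 2):
        name = f"E_{n_a+1}   [subalgebra su({n_a})+u(1)+su(2)]"
    else:
        name = "configuration not covered by the rules above"
    return name, n_a + n_b + n_c

for n_a, n_b, n_c in [(4, 0, 0), (6, 1, 1), (5, 1, 2), (7, 1, 2)]:
    name, count = algebra(n_a, n_b, n_c)
    print(f"A^{n_a} B^{n_b} C^{n_c}: {name}  ({count} 7-branes)")
```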
### 2.1 The Orientifold Limit
Since the 24 7–branes are non-perturbative they will not feature directly in string models, so we need to find a limit which relates them to a perturbative regime. A method of going between F–theory on K3 and the heterotic string on $`T^2`$ is to use Type I and Type I′ models as an intermediate step. From this point of view the gauge groups in the $`D_n`$ series are formed by placing $`n`$ D–branes on an orientifold 7–plane, $`𝒪`$. That is, in going from F–theory to Type I theory we have the 7–branes behaving as:
$$A^nBC\to A^n𝒪$$
(2)
In collapsing the $`BC`$ branes to an orientifold, the NS–NS charges cancel whilst the RR charges combine to give the correct value for orientifold planes in eight dimensions. The maximal subalgebra enlarges as
$$su(n)\oplus u(1)\to so(2n)$$
(3)
The effect of this limit on an $`E`$ singularity is
$$A^nBC^2\to A^n𝒪+C$$
(4)
with the maximal subalgebra reorganizing itself as
$$su(n)\oplus u(1)\oplus su(2)\to so(2n)\oplus u(1)$$
(5)
Thus models dual to F–theory constructions should have enhanced gauge groups built from these maximal subgroups. The extra states should also be BPS. A corollary is that the corresponding heterotic string we are going to be interested in is HSO, since it is this theory which is S–dual to Type I.
In the following we will use HSO to denote the heterotic string compactified on the Narain lattice $`\mathrm{\Gamma }_{2,2}\oplus \mathrm{\Gamma }_{16}`$ while HE8 denotes $`\mathrm{\Gamma }_{2,2}\oplus \mathrm{\Gamma }_8\oplus \mathrm{\Gamma }_8`$. Though they are the same theories on compactification, they have different Wilson lines when it comes to embedding other gauge groups.
## 3 Heterotic String on $`T^2`$
We now turn to building heterotic string models. From duality there are conditions to be satisfied; as already pointed out, there can be no supersymmetry breaking. The moduli space is equivalent to that of compactification on a Narain lattice, prompting the restriction to rank-preserving compactifications, i.e. total rank 18, and switching off background anti–symmetric tensor fields. We will assume that all rank 18 gauge groups appearing on the F–theory side are acceptable, i.e. have a heterotic dual, and that we are embedding our Wilson lines in a lattice of the form $`\mathrm{\Gamma }_{2,2}\oplus \mathrm{\Gamma }_{16}`$ as opposed to $`\mathrm{\Gamma }_{2,2}\oplus \mathrm{\Gamma }_8^2`$; a priori this is due to the $`D_n`$ structure of the maximal subalgebra in the orientifold limit.
We compactify on the two dimensional torus
$$T^2=𝐑^2/2\pi \mathrm{\Lambda }$$
(6)
where $`\mathrm{\Lambda }`$ is a lattice with basis vectors $`\underset{¯}{e}_i`$, $`|e_i|=\frac{1}{R_i}`$ for $`i=1,2`$, where $`R_i`$ are the radii of the circles. Generically $`R_1\ne R_2`$. Winding is denoted $`\underset{¯}{\omega }=n^i\underset{¯}{e}_i`$, $`n^i\in 𝐙`$, while momentum is given by $`\underset{¯}{p}=m_i\underset{¯}{e}^i`$ where $`\underset{¯}{e}^i`$ is a basis vector of the dual lattice $`\mathrm{\Lambda }^{*}`$. In the lattice frame, the background gauge fields are $`\underset{¯}{A}_\mu ^I=a_i^I(\underset{¯}{e}^i)_\mu `$ with $`I=1,\mathrm{},16`$ labelling coordinates in $`\mathrm{\Gamma }_{16}`$ and $`\mu `$ the spacetime dimensions. $`V`$ is a vector in $`\mathrm{\Gamma }_{16}`$. The momenta $`(𝐩_𝐋;𝐩_𝐑)`$, defined as
$`𝐩_𝐋`$ $`=`$ $`(V+\underset{¯}{A}\underset{¯}{\omega },\frac{1}{2}\underset{¯}{p}-\frac{1}{2}V^K\underset{¯}{A}^K-\frac{1}{4}\underset{¯}{A}^K(\underset{¯}{A}^K\underset{¯}{\omega })+\underset{¯}{\omega })`$ (7)
$`𝐩_𝐑`$ $`=`$ $`(\frac{1}{2}\underset{¯}{p}-\frac{1}{2}V^K\underset{¯}{A}^K-\frac{1}{4}\underset{¯}{A}^K(\underset{¯}{A}^K\underset{¯}{\omega })-\underset{¯}{\omega })`$ (8)
form a self–dual Lorentzian lattice. The mass of a state is given by
$$\frac{1}{4}M^2=(N_L+\frac{1}{2}𝐩_𝐋^2-1)+(N_R+\frac{1}{2}𝐩_𝐑^2-c)$$
(9)
$`N_L,N_R`$ are the left and right moving oscillator numbers and $`c=0,\frac{1}{2}`$ depending on the periodicity of the right moving fermions. The level matching condition is
$$N_L+\frac{1}{2}𝐩_𝐋^2-1=N_R+\frac{1}{2}𝐩_𝐑^2-c$$
(10)
Applying this to equation (9) and then imposing the condition $`N_R=c`$ gives the mass formula for BPS states
$$\frac{1}{4}M^2=𝐩_𝐑^2$$
(11)
The massless vectors belonging to the roots of the underlying gauge group have $`N_L=0`$ along with $`𝐩_𝐑^2=0`$, $`𝐩_𝐋^2=2`$. When the winding number is zero this gives the subgroup of the $`SO(32)`$ surviving breaking by the Wilson lines. However, for certain values of $`R_i`$ further massless gauge bosons can appear so as to enhance the gauge group. Writing out $`𝐩_𝐑`$ in component form we get
$$𝐩_𝐑=\underset{¯}{e}^i\left(\frac{1}{2}m_i-\frac{1}{2}V^Ka_i^K-\frac{1}{4}a_i^Ka_j^Kn^j\right)-n^i\underset{¯}{e}_i$$
(12)
Note that $`i`$ is now a label and not a component as far as the $`a_i^K`$ are concerned. The third term in the expansion looks problematic as it has the potential to cause coupling between the Wilson lines. However, our choices of values for the $`a_i^K`$ will actually give zero for the expression $`a_i^Ka_j^K`$, $`i\ne j`$, and allow us to decouple the two cases. With this choice
$$𝐩_𝐑=\underset{¯}{e}^i\left(\frac{1}{2}m_i-\frac{1}{2}V^Ka_i^K-\frac{1}{4}(a_i)^2n^i\right)-n^i\underset{¯}{e}_i$$
(13)
where there is no summing over $`i`$ and $`(a_i)^2`$ is the squared length of the shortest vector of the form $`a_i+\lambda `$, where $`\lambda \in \mathrm{\Gamma }_{16}`$. If the radius of compactification is, for each $`i`$, $`R_i^2=1-(a_i)^2/2`$, then extra massless modes appear allowing an enhancement. What has actually occurred here is that the two-dimensional case has been split up into two copies of a one-dimensional compactification. The extra states are also automatically BPS.
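For a concrete Wilson line the critical radius is easy to evaluate. The sketch below (ours) computes $`(a_i)^2`$ for the $`a_1`$ of equation (15); for Wilson lines whose components are all 0 or $`\frac{1}{2}`$ the minimization over the integer part of $`\mathrm{\Gamma }_{16}`$ decouples component by component, and we check the all-half-integer (spinor-like) shift separately; this shortcut is an assumption valid only for such simple Wilson lines:

```python
import numpy as np
from fractions import Fraction

def shortest_sq_length(a):
    """Squared length of the shortest vector a + lambda, lambda in Gamma_16,
    for Wilson lines with components in {0, 1/2} (see caveat above)."""
    frac = lambda x: min(x % 1, 1 - (x % 1)) ** 2
    integer_shift = sum(frac(x) for x in a)
    spinor_shift = sum(frac(x - Fraction(1, 2)) for x in a)
    return min(integer_shift, spinor_shift)

n = 5
a1 = [Fraction(1, 2)] * n + [Fraction(0)] * (16 - n)   # eq. (15) with n = 5
a_sq = shortest_sq_length(a1)
print(f"(a_1)^2 = {a_sq},  critical radius R_1 = {np.sqrt(1 - float(a_sq)/2):.4f}")
```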
There are two mechanisms of gauge enhancement: $`(i)`$ the standard D–brane approach of clustering branes on top of each other; in the above notation this means identifying coordinates within the bulk of the fundamental cell so that the generic group $`U(1)^{18}`$ becomes $`G_{16}\times U(1)^2`$, with the $`U(1)^2`$ dependent only on the structure of $`\mathrm{\Lambda }`$; $`(ii)`$ when the relationship $`R_i^2=1-(a_i)^2/2`$ is satisfied, with $`(a_i)`$ the shortest length of the Wilson line relative to the cluster of D–branes we wish to enhance. However, in the Type I and Type I′ dual models the second mechanism is non–trivial and requires the $`\chi `$ string discussed in . Its position in the moduli space is arbitrary except at gauge-enhancing points, where its position satisfies the above relation relating the radii to the lengths of the Wilson lines. The $`\chi `$ string can be related to the string junctions, as its origin in nine dimensions is the presence of a $`D0`$–brane which can couple to $`D8`$–branes and orientifold planes. It satisfies the condition that the number of Neumann–Dirichlet boundaries on the string is eight, i.e. $`ND=8`$. When we compactify down to eight dimensions the $`D0`$–$`D8`$ system becomes $`D1`$–$`D7`$, which still satisfies $`ND=8`$ and is similar to the string junction system used in the F–theory duals.
In the work of , an investigation of the $`D0`$–$`D8`$ system was made in nine dimensions, starting with an arbitrary number $`n`$ of $`D0`$–branes in the presence of $`D8`$–branes and $`O8`$–planes. It was then shown that $`n=1`$ is required for stable configurations such as those responsible for gauge enhancement. When there is further compactification down to eight dimensions we have $`n=2`$ in the decoupled case (it remains to be verified whether this will still be the case when the Wilson lines are coupled). Decoupling basically allows us to take two copies of the nine-dimensional case, since we can treat the axes as independent except at the non–trivial point of the origin where they intersect, which is only significant if there are branes placed at that point.
In taking the orientifold limit of the $`E_n`$ series there was a C–brane “left over”. Nevertheless, it contributes states necessary for the gauge enhancement and does so in a manner analogous to the states contributed by the $`\chi `$ string. We now make the tentative identification that the string junction states related to the C–branes are dual to the states due to the $`\chi `$ string, and hence that the C–brane is itself dual to a $`D1`$–$`D7`$–brane set-up in Type I theory (T–dual to a $`D0`$–$`D8`$–brane set-up in nine dimensions). Note that there appears to be a choice between which C–brane we should identify with the orientifold and which with the $`\chi `$ string. However, in the $`D0`$–$`D8`$ set-up gauge enhancement occurs when the $`D0`$–brane is attached to an orientifold, and likewise here the “left over” C–brane is still at the position of the associated orientifold, so it is not possible to separate their overall effects in this picture and the choice does not have to be made. We will return to the C–brane later when we discuss NS branes in heterotic theory.
For gauge groups of rank 18 there are only a finite number of ways of combining the $`E_n`$ groups for $`n\ge 6`$. The only one with three exceptional groups is $`E_6^3`$, which has been handled already in . It also does not satisfy the decoupling feature, but we will return to it later. The rest of the possibilities we can cluster together as $`E_N\times E_M\times 𝒢`$ or $`E_N\times 𝒢`$, where $`𝒢`$ is of sufficient rank to make the total 18 (in the former case the rank of $`𝒢`$ will always be less than or equal to 6; when it equals 6 we ignore the possibility that it could be $`E_6`$). For simplicity we make $`𝒢`$ an $`SO(2n)`$ Lie group with no further breaking.
Looking at the first case with two exceptional groups, we can make the decomposition in the orientifold limit:
$$E_{n+1}\times E_{m+1}\times D_{16-n-m}\to D_n\times D_m\times U(1)^2\times D_{16-n-m}$$
(14)
We can now see that we can associate the two $`U(1)`$ components with the two circles making up the compactification torus, each dimension being made responsible for the enhancement of a particular $`D_n`$ or $`D_m`$. The one-dimensional case has already been dealt with in . For the sake of convenience we associate the $`D_n`$ group with $`i=1`$ and $`D_m`$ with $`i=2`$. Then we can give the Wilson lines as
$`a_1`$ $`=(\tfrac{1}{2}^n\,0^m\,0^{16-n-m})`$
$`a_2`$ $`=(0^n\,\tfrac{1}{2}^m\,0^{16-n-m})`$ (15)
These trivially satisfy the condition that they decouple the $`(p_R)_i`$, as their product is always zero. Generalizations to $`𝒢\subset D_{16-n-m}`$ are straightforward.
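Both claims are easy to verify explicitly for, say, $`n=m=5`$: the componentwise product of the Wilson lines vanishes, and the $`SO(32)`$ roots they leave unbroken assemble into exactly $`D_5\times D_5\times D_6`$, which the critical radii then enhance to $`E_6\times E_6\times D_6`$. The check below (ours) uses the standard rule that a root $`P`$ survives when $`a_iP\in 𝐙`$; the extra winding states responsible for the enhancement itself are not counted here:

```python
from itertools import combinations
from fractions import Fraction

half = Fraction(1, 2)
n, m = 5, 5
a1 = [half]*n + [0]*m + [0]*(16 - n - m)      # eq. (15)
a2 = [0]*n + [half]*m + [0]*(16 - n - m)

# Decoupling check: a_1^K a_2^K vanishes component by component.
print("a1.a2 components all zero:", all(x*y == 0 for x, y in zip(a1, a2)))

# A root P of SO(32) has entries +-1 in two slots; it survives the Wilson
# lines if a1.P and a2.P are both integers.
survivors = 0
for i, j in combinations(range(16), 2):
    for si in (1, -1):
        for sj in (1, -1):
            d1 = si*a1[i] + sj*a1[j]
            d2 = si*a2[i] + sj*a2[j]
            if d1 == int(d1) and d2 == int(d2):
                survivors += 1

expected = sum(2*k*(k - 1) for k in (n, m, 16 - n - m))   # root counts of D_k
print(f"surviving roots: {survivors}, expected for D_5 x D_5 x D_6: {expected}")
```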
When the gauge group is of the form $`E_N\times 𝒢`$ then one merely has to move the appropriate radius away from the critical point of enhancement in the previous case, or alter one of the $`a_i`$ depending on the form desired for $`𝒢`$. The other cases of particular interest with regard to enhancement, $`SO(36)`$ and $`SU(19)`$, follow as in the one-dimensional case with one of the Wilson lines set entirely to zero.
### 3.1 Coupled Solutions
We can use the duality of HSO with Type I to learn more about the structure of the moduli space of heterotic Wilson lines. First examine the group $`E_6\times E_6\times E_6`$.
This is somewhat anomalous, as $`E_6\to D_5\times U(1)`$ requires three $`U(1)`$'s. The solution is of the form given in equation (15) with $`n=m=5`$. However, in order to get the third $`U(1)`$ for the enhancement the Wilson lines have the components $`a_1^{16}=a_2^{16}=\frac{1}{2}`$. This violates the decoupling argument above but provides a solution nevertheless. Hence there exist other solutions where the Wilson lines do not decouple.
This is the generic case, though the $`E_6^3`$ one is the only case of enhancement to an exceptional group that cannot be made to decouple. In this notation the Wilson lines act as the coordinates of a square moduli space with axes $`a_1,a_2`$ such that $`0\le a_i\le \frac{1}{2}`$. Each pair of components of the Wilson lines, $`(a_1,a_2)^K`$, now forms the coordinate of the Kth D–brane when it is mapped to a dual Type I model in eight dimensions. The orientifold planes are represented by the corners $`(0,0),(0,\frac{1}{2}),(\frac{1}{2},0),(\frac{1}{2},\frac{1}{2})`$, though they only have an effect if there are D–branes on them; decoupled solutions lie purely on the axes. However, this space is only a fundamental cell of a larger lattice, and extra massless states can arise, as in $`E_6^3`$, when D-branes are located at special points outside the fundamental cell where other winding modes become massless. These situations will break the $`Z_4`$ symmetry of the Wilson lines that exists when the D–branes all lie within the fundamental cell.
Coupled solutions lie within the bulk and represent the relative difficulty of finding the solution as the positions here are arbitrary, the solutions giving rise to gauge groups lying on loci as opposed to particular points. For many cases the loci of solutions will intersect with the boundary and the decoupled form of the Wilson lines can be recovered.
A final set of gauge groups of interest are those of the form $`D_n^xD_m^y`$ with $`nx+my=18`$. It can be shown that if $`x+y\ge 4`$ then there will always be more than 24 branes required on the F–theory side. That is, if $`x+y>4`$ then some of the gauge groups would have to have rank less than 4 and would thus be in the $`A`$ series as opposed to the $`D`$, in line with the fact that in the Type I picture there are only four orientifold planes. In the case $`x+y=4`$ the gauge group is $`D_4^2D_5^2`$, which from the F–theory side is not acceptable as it would require 26 7–branes. On the HSO side it would be constructed by placing 4 D–branes on each orientifold plane and choosing the appropriate radii to enhance two of the $`D_4`$ to $`E_5`$, which would give us back the $`D_5`$ gauge groups. The Wilson lines are:
$`a_1`$ $`=(0^4\mathrm{\hspace{0.25em}0}^4{\displaystyle \frac{1}{2}}^4{\displaystyle \frac{1}{2}}^4)`$
$`a_2`$ $`=(0^4{\displaystyle \frac{1}{2}}^4\mathrm{\hspace{0.25em}0}^4{\displaystyle \frac{1}{2}}^4)`$ (16)
The resolution lies in the fact that it is not possible to single out only the two $`D_4`$'s we want to enhance, so the construction breaks down from the heterotic point of view as well. There are no other cases such as this, so we are justified in the assumption that all rank 18 gauge groups which can be constructed from F–theory on K3 have an appropriate heterotic dual.
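The counting behind this argument follows from section 2: a $`D_n`$ factor requires $`n+2`$ branes ($`A^nBC`$), so a product of $`k`$ $`D`$-type factors of total rank 18 uses $`18+2k`$ of the 24 available 7–branes, forcing $`k\le 3`$. A minimal sketch (ours):

```python
def branes_needed(ranks):
    """7-branes for a product of D_n factors, at n+2 branes each."""
    return sum(n + 2 for n in ranks)

for ranks in [(5, 5, 8), (4, 4, 5, 5)]:
    label = " x ".join(f"D_{n}" for n in ranks)
    total = branes_needed(ranks)
    verdict = "fits in 24" if total <= 24 else "needs more than 24: excluded"
    print(f"{label} (rank {sum(ranks)}): {total} 7-branes, {verdict}")
```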
So far we have been concentrating purely on the HSO type models. In theory we should be able to construct a dictionary for embedding in HE8, since it has the same moduli space and is related to HSO by T–duality. However, there is no simple orientifold limit as for HSO and the Wilson lines are non–trivial. A case in point is $`E_6^3`$: to build up the third $`E_6`$ in $`\mathrm{\Gamma }_{2,2}\oplus \mathrm{\Gamma }_8^2`$, we require an $`su(3)^3`$ maximal subalgebra, the brane structure for which is not apparent. The generic $`SL(2,𝐙)`$ transformation to convert the standard $`A^{n_a}B^{n_b}C^{n_c}`$ configuration to a form where the maximal subalgebra reflects the embedding in $`E_8\times E_8`$ has not been constructed yet. If the T–duality is modified as discussed in the next section, then it may be the case that this transformation does not exist.
## 4 Comparison with NS9–Branes and Conclusion
In recent papers Hull has discussed the existence of NS9-branes in non–perturbative HSO theory. NS9–branes are the S–duals of the D9–branes in Type I theory and can also be deduced from M9-branes in M–theory. Their existence, implied by duality, gives the same brane structure in the heterotic string that has proved to be so rich in Type I theory. In particular it is not hard to see that the heterotic Wilson lines should now correspond to the positions of the 16 NS9–branes. This in turn provides evidence that the map between the Wilson lines of Type I and HSO is exact under S–duality; the RR charge of the orientifolds will change to NS–NS in HSO models.
In deriving the map between F–theory on K3 and HSO on $`T^2`$ we have implicitly assumed that the A–branes of F–theory have the same charge as $`D7`$–branes. This is not strictly necessary, as what mattered in the above construction was the correct cancellation and summing of the overall charge along with the maximal subalgebra. This is an intrinsic feature of the F–theory construction, as various brane configurations are considered equivalent if they can be related by an $`SL(2,𝐙)`$ transformation. The $`SL(2,𝐙)`$ self–duality of the parent Type IIB theory includes an S–duality. Thus the standard $`A^{n_a}B^{n_b}C^{n_c}`$ configuration can be related to another configuration of charged 7–branes, $`\stackrel{~}{A}^{n_a}\stackrel{~}{B}^{n_b}\stackrel{~}{C}^{n_c}`$, so that the gauge structure is exactly the same but the NS–NS and RR charges are interchanged. Hence, when examining the perturbative string models we can only tell whether we are dealing with HSO or Type I from the charges of the F–theory configuration we started with. In terms of the moduli space, it provides evidence that HSO and Type I with all their permitted Wilson line configurations are equivalent up to the gauge group/Kodaira classification, and that the subgroup structure discussed above persists under an S–duality transformation. This ties in nicely with the use of truncation techniques on the parent Type IIB spectrum in ten dimensions to obtain the Type I and HSO theories performed in .
In this paper we have constructed an explicit map taking us from the Kodaira classification of singularities in F–theory compactified on elliptically fibered K3 surfaces to the Wilson lines in the Heterotic $`SO(32)`$ string compactified on $`T^2`$, and discussed some issues arising out of gauge enhancement.
Acknowledgments. We would like to extend our thanks to D.C. Dunbar and M.R. Gaberdiel for explanation of their work. We are also grateful to P. Aspinwall, L. Bonora, M. Gross, C. Johnson, H. Skarke and the Swansea theory group for a series of useful conversations and communications. This work was supported by PPARC.
As this manuscript was in preparation, ref. appeared, the results of which overlap with this paper.
## 1 Overview
Spherical field theory is a new non-perturbative method for studying quantum field theory. It was introduced in to describe scalar boson interactions. The method utilizes the spherical partial wave expansion to reduce a general $`d`$-dimensional Euclidean functional integral into a set of coupled one-dimensional, radial systems. The coupled one-dimensional systems are then converted to differential equations which can then be solved using standard numerical methods (for free field theory these equations can be solved exactly, as we demonstrate here for gauge fields). The extension of the spherical field method to fermionic systems was described in . In that analysis it was shown that the formalism avoids several difficulties which appear in the lattice treatment of fermions. These include fermion doubling, missing axial anomalies, and computational difficulties arising from internal fermion loops. This finding suggests that the spherical formalism could provide a useful method for studying gauge theories, especially those involving fermions. As a small but important initial step in this direction, we contribute the present work in which we introduce and discuss the spherical field method for free gauge fields.
The basic formalism for spherical boson fields was described in . In this paper we build on those results, with most of our attention devoted to new features resulting from the intrinsic spin of the gauge field. We discuss the operator structure of the spherical Hamiltonian in detail, using two-dimensional Euclidean gauge fields as an explicit example. As in standard field theory, gauge-fixing is essential in spherical field theory, and we have chosen to consider general covariant gauge and radial gauge (see and references therein for a discussion of radial gauge). In each case we derive the spherical Hamiltonian and use the corresponding evolution equations to calculate the two-point correlators for the gauge field and the gauge-invariant field strength. Free gauge fields in higher dimensions can be described by a straightforward generalization of the methods presented here. The application of spherical field theory to non-perturbative interacting gauge systems and related issues are the subject of current research.
## 2 Covariant gauge
In this section we derive the spherical field Hamiltonian for general covariant gauge. We will use both polar and cartesian coordinates with the following conventions:
$$\vec{t}=(t\mathrm{cos}\theta ,t\mathrm{sin}\theta )=(t^1,t^2)=(x,y).$$
(1)
In general covariant gauge the Euclidean functional integral is given by
$$\left(\prod_i𝒟A^i\right)\mathrm{exp}\left[\int_0^{\infty }dt\,L\right]$$
(2)
where
$$L=\int d\theta \,t\left[-\frac{1}{2}F^{12}F^{12}-\frac{1}{2\alpha }(\partial _iA^i)^2\right].$$
(3)
We can write the field strength $`F^{12}`$ as
$`F^{12}`$ $`=\frac{1}{2i}\left[\left(\frac{\partial }{\partial x}+i\frac{\partial }{\partial y}\right)(A^x-iA^y)-\left(\frac{\partial }{\partial x}-i\frac{\partial }{\partial y}\right)(A^x+iA^y)\right]`$ (4)
$`=\frac{1}{\sqrt{2}i}\left[e^{i\theta }\left(\frac{\partial }{\partial t}+\frac{i}{t}\frac{\partial }{\partial \theta }\right)A^{+1}-e^{-i\theta }\left(\frac{\partial }{\partial t}-\frac{i}{t}\frac{\partial }{\partial \theta }\right)A^{-1}\right]`$
where
$$A^x\mp iA^y=\sqrt{2}A^{\pm 1}.$$
(5)
We now decompose $`A^{\pm 1}`$ into partial waves (in our notation, $`A_n^{\pm 1}`$ carries total spin quantum number $`n\pm 1`$):
$$A^{\pm 1}=\frac{1}{\sqrt{2\pi }}\underset{n=0,\pm 1,\mathrm{}}{}A_n^{\pm 1}e^{in\theta }.$$
(6)
Returning to our expression for the field strength, we have
$$F^{12}=\frac{1}{\sqrt{2\pi }}\frac{1}{\sqrt{2}i}\underset{n=0,\pm 1,\mathrm{}}{}e^{in\theta }\left(F_n^{+1}-F_n^{-1}\right),$$
(7)
where
$`F_n^{+1}`$ $`=\frac{\partial A_{n-1}^{+1}}{\partial t}-\frac{n-1}{t}A_{n-1}^{+1}`$ (8)
$`F_n^{-1}`$ $`=\frac{\partial A_{n+1}^{-1}}{\partial t}+\frac{n+1}{t}A_{n+1}^{-1}.`$ (9)
We can also express the gauge-fixing term in terms of $`F_n^{\pm 1}`$,
$$\partial _iA^i=\frac{1}{\sqrt{2\pi }}\frac{1}{\sqrt{2}}\underset{n=0,\pm 1,\mathrm{}}{}e^{in\theta }\left(F_n^{+1}+F_n^{-1}\right).$$
(10)
With these changes the Lagrangian, $`L`$, is
$$\frac{t}{4}\underset{n=0,\pm 1,\mathrm{}}{}\left(F_n^{+1}-F_n^{-1}\right)\left(F_{-n}^{+1}-F_{-n}^{-1}\right)-\frac{t}{4\alpha }\underset{n=0,\pm 1,\mathrm{}}{}\left(F_n^{+1}+F_n^{-1}\right)\left(F_{-n}^{+1}+F_{-n}^{-1}\right).$$
(11)
In the spherical Hamiltonian for the scalar field was found by direct application of the Feynman-Kac formula. This is also possible here, but in view of the number of mixed terms (a result of the intrinsic spin degrees of freedom) we find it easier to use the method of canonical quantization. Let us define the conjugate momenta to the gauge fields,
$`\pi _{n-1}^{+1}`$ $`={\displaystyle \frac{\delta L}{\delta \frac{\partial A_{n-1}^{+1}}{\partial t}}}=\frac{t}{2}\left[(1-\frac{1}{\alpha })F_{-n}^{+1}+(-1-\frac{1}{\alpha })F_{-n}^{-1}\right]`$ (12)
$`\pi _{n+1}^{-1}`$ $`={\displaystyle \frac{\delta L}{\delta \frac{\partial A_{n+1}^{-1}}{\partial t}}}=\frac{t}{2}\left[(-1-\frac{1}{\alpha })F_{-n}^{+1}+(1-\frac{1}{\alpha })F_{-n}^{-1}\right].`$ (13)
Following through with the canonical quantization procedure, we find a Hamiltonian of the form
$$H=\underset{n=0,\pm 1,\mathrm{}}{}H_n,$$
(14)
where
$`H_n`$ $`=-\frac{1}{4t}\left(\pi _{n+1}^{-1}-\pi _{n-1}^{+1}\right)\left(\pi _{-n-1}^{+1}-\pi _{-n+1}^{-1}\right)`$ (15)
$`-\frac{\alpha }{4t}\left(\pi _{n+1}^{-1}+\pi _{n-1}^{+1}\right)\left(\pi _{-n-1}^{+1}+\pi _{-n+1}^{-1}\right)+\frac{n-1}{t}A_{n-1}^{+1}\pi _{n-1}^{+1}-\frac{n+1}{t}A_{n+1}^{-1}\pi _{n+1}^{-1}.`$
We obtain the corresponding Schrödinger time evolution generator by making the replacements
$$A_n^{\pm 1}\to z_n^{\pm 1},\pi _n^{\pm 1}\to \frac{\partial }{\partial z_n^{\pm 1}}.$$
(16)
We then find
$`H_n`$ $`=-\frac{1}{4t}\left(\frac{\partial }{\partial z_{n+1}^{-1}}-\frac{\partial }{\partial z_{n-1}^{+1}}\right)\left(\frac{\partial }{\partial z_{-n-1}^{+1}}-\frac{\partial }{\partial z_{-n+1}^{-1}}\right)`$ (17)
$`-\frac{\alpha }{4t}\left(\frac{\partial }{\partial z_{n+1}^{-1}}+\frac{\partial }{\partial z_{n-1}^{+1}}\right)\left(\frac{\partial }{\partial z_{-n-1}^{+1}}+\frac{\partial }{\partial z_{-n+1}^{-1}}\right)+\frac{n-1}{t}z_{n-1}^{+1}\frac{\partial }{\partial z_{n-1}^{+1}}-\frac{n+1}{t}z_{n+1}^{-1}\frac{\partial }{\partial z_{n+1}^{-1}}.`$
For the sake of future numerical calculations, it is convenient to re-express $`H`$ in terms of real variables. Let us define
$`u_n`$ $`=\frac{1}{2}\left(z_{n+1}^{-1}+z_{-n-1}^{+1}+z_{n-1}^{+1}+z_{-n+1}^{-1}\right)`$ (18)
$`v_n`$ $`=\frac{1}{2}\left(z_{n+1}^{-1}+z_{-n-1}^{+1}-z_{n-1}^{+1}-z_{-n+1}^{-1}\right)`$ (19)
$`x_n`$ $`=\frac{1}{2i}\left(z_{n+1}^{-1}-z_{-n-1}^{+1}-z_{n-1}^{+1}+z_{-n+1}^{-1}\right)`$ (20)
$`y_n`$ $`=\frac{1}{2i}\left(z_{n+1}^{-1}-z_{-n-1}^{+1}+z_{n-1}^{+1}-z_{-n+1}^{-1}\right)`$ (21)
for $`n>0`$ and
$`u_0`$ $`=\frac{1}{\sqrt{2}}\left(z_1^{-1}+z_{-1}^{+1}\right)`$ (22)
$`x_0`$ $`=\frac{1}{\sqrt{2}i}\left(z_1^{-1}-z_{-1}^{+1}\right).`$ (23)
The variables $`u_n`$ and $`y_n`$ correspond with combinations of radially polarized gauge fields with total spin $`\pm n,`$ while $`v_n`$ and $`x_n`$ correspond with tangentially polarized gauge fields with total spin $`\pm n`$. In terms of these new variables,
$$H=H_0+\underset{n=1,2,\mathrm{}}{}H_n^{\prime },$$
(24)
where
$$H_0=-\frac{1}{2t}\frac{\partial ^2}{\partial x_0^2}-\frac{\alpha }{2t}\frac{\partial ^2}{\partial u_0^2}-\frac{1}{t}\left(u_0\frac{\partial }{\partial u_0}+x_0\frac{\partial }{\partial x_0}\right),$$
(25)
and
$`H_n^{\prime }=`$ $`-\frac{1}{2t}\left(\frac{\partial ^2}{\partial v_n^2}+\frac{\partial ^2}{\partial x_n^2}\right)-\frac{\alpha }{2t}\left(\frac{\partial ^2}{\partial u_n^2}+\frac{\partial ^2}{\partial y_n^2}\right)`$ (26)
$`-\frac{n}{t}\left(u_n\frac{\partial }{\partial v_n}+v_n\frac{\partial }{\partial u_n}+x_n\frac{\partial }{\partial y_n}+y_n\frac{\partial }{\partial x_n}\right)`$
$`-\frac{1}{t}\left(u_n\frac{\partial }{\partial u_n}+v_n\frac{\partial }{\partial v_n}+x_n\frac{\partial }{\partial x_n}+y_n\frac{\partial }{\partial y_n}\right).`$
Despite the somewhat complicated form, we know that the spectrum of $`H`$ should resemble that of harmonic oscillators. To see this explicitly let us define the following ladder operators:
$$U_n^+=-\frac{1}{\sqrt{2}}\left(\frac{\partial }{\partial u_n}+\frac{\partial }{\partial v_n}\right),U_n^{-}=\frac{1}{\sqrt{2}}\left(\frac{2\alpha -n+\alpha n}{4(n+1)}\frac{\partial }{\partial u_n}+\frac{2+n-\alpha n}{4(n+1)}\frac{\partial }{\partial v_n}+u_n+v_n\right),$$
(27)
$$V_n^+=\frac{1}{\sqrt{2}}\left(\frac{-2\alpha -n+\alpha n}{4(n-1)}\frac{\partial }{\partial u_n}+\frac{2-n+\alpha n}{4(n-1)}\frac{\partial }{\partial v_n}+u_n-v_n\right),V_n^{-}=\frac{1}{\sqrt{2}}\left(\frac{\partial }{\partial u_n}-\frac{\partial }{\partial v_n}\right),$$
(28)
$$X_n^+=-\frac{1}{\sqrt{2}}\left(\frac{\partial }{\partial y_n}+\frac{\partial }{\partial x_n}\right),X_n^{-}=\frac{1}{\sqrt{2}}\left(\frac{2\alpha -n+\alpha n}{4(n+1)}\frac{\partial }{\partial y_n}+\frac{2+n-\alpha n}{4(n+1)}\frac{\partial }{\partial x_n}+y_n+x_n\right),$$
(29)
$$Y_n^+=\frac{1}{\sqrt{2}}\left(\frac{-2\alpha -n+\alpha n}{4(n-1)}\frac{\partial }{\partial y_n}+\frac{2-n+\alpha n}{4(n-1)}\frac{\partial }{\partial x_n}+y_n-x_n\right),Y_n^{-}=\frac{1}{\sqrt{2}}\left(\frac{\partial }{\partial y_n}-\frac{\partial }{\partial x_n}\right),$$
(30)
for $`n>1`$ and
$`U_0^+`$ $`=-\frac{1}{\sqrt{2}}\frac{\partial }{\partial u_0},U_0^{-}=\frac{\alpha }{\sqrt{2}}\frac{\partial }{\partial u_0}+\sqrt{2}u_0,`$ (31)
$`X_0^+`$ $`=-\frac{1}{\sqrt{2}}\frac{\partial }{\partial x_0},X_0^{-}=\frac{1}{\sqrt{2}}\frac{\partial }{\partial x_0}+\sqrt{2}x_0.`$ (32)
We have constructed these operators so that
$$[U_n^{-},U_n^+]=[V_n^{-},V_n^+]=[X_n^{-},X_n^+]=[Y_n^{-},Y_n^+]=1$$
(33)
for $`n>1`$ and
$$[U_0^{-},U_0^+]=[X_0^{-},X_0^+]=1.$$
(34)
All other commutators involving these operators vanish. We can now rewrite $`H_0`$ and $`H_n^{\prime }`$ (for $`n>1`$) as
$$H_0=\frac{1}{t}U_0^+U_0^{-}+\frac{1}{t}X_0^+X_0^{-}+\frac{2}{t},$$
(35)
$$H_n^{\prime }=\frac{n+1}{t}U_n^+U_n^{-}+\frac{n-1}{t}V_n^+V_n^{-}+\frac{n+1}{t}X_n^+X_n^{-}+\frac{n-1}{t}Y_n^+Y_n^{-}+\frac{2n+2}{t}.$$
(36)
We now see that $`H_0`$ is equivalent to two harmonic oscillators with level-spacing $`\frac{1}{t},`$ and $`H_n^{\prime }`$ is equivalent to four harmonic oscillators, two with spacing $`\frac{n+1}{t}`$ and two with spacing $`\frac{n-1}{t}`$. It is important to note that $`H_1^{\prime }`$, however, is quite different. The $`s`$-wave configurations are independent of $`\theta `$, and so $`H_1^{\prime }`$ does not have terms of the form $`z_0^{+1}\frac{\partial }{\partial z_0^{+1}}`$ and $`z_0^{-1}\frac{\partial }{\partial z_0^{-1}}`$. Furthermore the gauge fields are massless and so $`H_1^{\prime }`$ depends only on the derivatives of $`z_0^{+1}`$ and $`z_0^{-1}`$. One consequence of this is that the spectrum of $`H_1^{\prime }`$ is continuous, and calculations involving $`H_1^{\prime }`$ and relevant boundary conditions must be done carefully. We will discuss these issues later in our analysis of correlation functions.
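The ladder algebra and the oscillator decomposition above are easy to check mechanically. The following is a minimal symbolic sketch in Python with sympy (our choice of tool, not part of the original analysis), restricted to the $`(u,v)`$ sector; the $`(y,x)`$ sector is identical in structure.

```python
import sympy as sp

u, v, t = sp.symbols('u v t', real=True)
n, alpha = sp.symbols('n alpha', positive=True)
f = sp.Function('f')(u, v)

du = lambda g: sp.diff(g, u)
dv = lambda g: sp.diff(g, v)

# coefficients of Eqs. (27)-(28)
A = (2*alpha - n + alpha*n) / (4*(n + 1))
B = (2 + n - alpha*n) / (4*(n + 1))
C = (-2*alpha - n + alpha*n) / (4*(n - 1))
D = (2 - n + alpha*n) / (4*(n - 1))

Up = lambda g: -(du(g) + dv(g)) / sp.sqrt(2)
Um = lambda g: (A*du(g) + B*dv(g) + (u + v)*g) / sp.sqrt(2)
Vp = lambda g: (C*du(g) + D*dv(g) + (u - v)*g) / sp.sqrt(2)
Vm = lambda g: (du(g) - dv(g)) / sp.sqrt(2)

# [U^-, U^+] f = f and [V^-, V^+] f = f, i.e. Eq. (33)
print(sp.simplify(Um(Up(f)) - Up(Um(f))))   # -> f(u, v)
print(sp.simplify(Vm(Vp(f)) - Vp(Vm(f))))   # -> f(u, v)

# (u,v) part of H'_n, Eq. (26), against the oscillator form of Eq. (36)
H = (-dv(dv(f))/(2*t) - alpha*du(du(f))/(2*t)
     - n*(u*dv(f) + v*du(f))/t - (u*du(f) + v*dv(f))/t)
Hosc = (n + 1)*Up(Um(f))/t + (n - 1)*Vp(Vm(f))/t + (n + 1)*f/t
print(sp.simplify(H - Hosc))                # -> 0
```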
In preparation for our calculation of gauge-field correlators, let us now couple an external source $`\stackrel{}{𝒥}`$ to the gauge field,
$$L=𝑑\theta t\left[-\frac{1}{2}F^{12}F^{12}-\frac{1}{2\alpha }(\partial _iA^i)^2+\stackrel{}{A}\cdot \stackrel{}{𝒥}\right].$$
(37)
The new spherical field Hamiltonian is then
$$H(\stackrel{}{𝒥})=H-\underset{n=0,\pm 1,\mathrm{}}{}\left(z_n^{+1}𝒥_{-n}^{-1}+z_n^{-1}𝒥_{-n}^{+1}\right).$$
(38)
As noted in the vacuum persistence amplitude is given by
$$Z(\stackrel{}{𝒥})\propto \underset{\begin{array}{c}t_{\mathrm{min}}\to 0^+\\ t_{\mathrm{max}}\to \mathrm{}\end{array}}{lim}\langle b\left|T\mathrm{exp}\left[-_{t_{\mathrm{min}}}^{t_{\mathrm{max}}}𝑑tH(\stackrel{}{𝒥})\right]\right|a\rangle ,$$
(39)
where $`|a\rangle `$ and $`|b\rangle `$ are any states satisfying certain criteria. These criteria are that $`|a\rangle `$ is constant with respect to the s-wave variables $`z_0^{-1}`$, $`z_0^{+1}`$ and has non-zero overlap with the ground state of $`H`$ as $`t\to 0^+`$, and $`|b\rangle `$ has non-zero overlap with the ground state of $`H`$ as $`t\to \mathrm{}`$.<sup>7</sup><sup>7</sup>7One caveat here is that $`|a\rangle `$ must lie in a function space over which the spectrum of $`H`$ is bounded below. Because the spectrum of $`H_1^{\prime }`$ is continuous, in numerical computations it is useful to include a small regulating mass, $`\mu `$, for the gauge fields and then take the limit $`\mu \to 0`$.
## 3 Radial gauge
We now derive the spherical field Hamiltonian for radial gauge. We take the gauge-fixing reference point, $`\stackrel{}{t}_0`$, to be the origin,
$$(\stackrel{}{t}-\stackrel{}{t}_0)\cdot \stackrel{}{A}=\stackrel{}{t}\cdot \stackrel{}{A}=0.$$
(40)
We expect this gauge-fixing scheme to be convenient in spherical field theory calculations for several reasons. One is that non-abelian ghost fields in radial gauge decouple, as they do in axial gauge. In contrast with axial gauge, however, radial gauge also preserves rotational symmetry. As we will see, the spherical Hamiltonian and correlation functions in radial gauge are relatively simple. Since
$$\stackrel{}{t}\cdot \stackrel{}{A}=\frac{t}{\sqrt{2}}\left[e^{i\theta }A^{+1}+e^{-i\theta }A^{-1}\right],$$
(41)
we can impose the gauge-fixing condition by setting
$$A^{-1}=-e^{2i\theta }A^{+1}.$$
(42)
With this constraint we express the field strength as
$$F^{12}=\frac{1}{i\sqrt{\pi }}\underset{n=0,\pm 1,\mathrm{}}{}e^{in\theta }\left[\frac{\partial A_{n-1}^{+1}}{\partial t}+\frac{1}{t}A_{n-1}^{+1}\right].$$
(43)
The radial-gauge Lagrangian is then
$`L`$ $`={\displaystyle 𝑑\theta t\left[-\frac{1}{2}F^{12}F^{12}\right]}`$ (44)
$`=t{\displaystyle \underset{n=0,\pm 1,\mathrm{}}{}}\left[\frac{\partial A_{n-1}^{+1}}{\partial t}+\frac{1}{t}A_{n-1}^{+1}\right]\left[\frac{\partial A_{-n-1}^{+1}}{\partial t}+\frac{1}{t}A_{-n-1}^{+1}\right].`$
We again follow the canonical quantization procedure. The conjugate momenta to the gauge fields are
$$\pi _{n-1}^{+1}=\frac{\delta L}{\delta \frac{\partial A_{n-1}^{+1}}{\partial t}}=2t\left[\frac{\partial A_{-n-1}^{+1}}{\partial t}+\frac{1}{t}A_{-n-1}^{+1}\right],$$
(45)
and the radial-gauge Hamiltonian has the form
$$H=\underset{n=0,\pm 1,\mathrm{}}{}\left[\frac{1}{4t}\pi _{n-1}^{+1}\pi _{-n-1}^{+1}-\frac{1}{t}A_{n-1}^{+1}\pi _{n-1}^{+1}\right].$$
(46)
In the Schrödinger language the Hamiltonian becomes
$$H=\underset{n=0,\pm 1,\mathrm{}}{}\left[\frac{1}{4t}\frac{\partial }{\partial z_{n-1}^{+1}}\frac{\partial }{\partial z_{-n-1}^{+1}}-\frac{1}{t}z_{n-1}^{+1}\frac{\partial }{\partial z_{n-1}^{+1}}\right].$$
(47)
As before we now define real variables,
$`x_n`$ $`=\frac{i}{\sqrt{2}}\left(z_{n-1}^{+1}+z_{-n-1}^{+1}\right)`$ (48)
$`v_n`$ $`=\frac{1}{\sqrt{2}}\left(z_{n-1}^{+1}-z_{-n-1}^{+1}\right)`$ (49)
for $`n>0`$ and
$$x_0=iz_{-1}^{+1}.$$
(50)
Our Hamiltonian can be re-expressed as
$$H=H_0+\underset{n=1,2,\mathrm{}}{}H_n^{\prime },$$
(51)
where
$$H_0=-\frac{1}{4t}\frac{\partial ^2}{\partial x_0^2}-\frac{1}{t}x_0\frac{\partial }{\partial x_0},$$
(52)
$$H_n^{\prime }=-\frac{1}{4t}\left[\frac{\partial ^2}{\partial x_n^2}+\frac{\partial ^2}{\partial v_n^2}\right]-\frac{1}{t}\left[x_n\frac{\partial }{\partial x_n}+v_n\frac{\partial }{\partial v_n}\right].$$
(53)
Let us now define ladder operators
$$X_n^+=\frac{1}{2}\frac{\partial }{\partial x_n},X_n^{-}=-\frac{1}{2}\frac{\partial }{\partial x_n}-2x_n,$$
(54)
for $`n\geq 0`$ and
$$V_n^+=\frac{1}{2}\frac{\partial }{\partial v_n},V_n^{-}=-\frac{1}{2}\frac{\partial }{\partial v_n}-2v_n,$$
(55)
for $`n>0.`$ These ladder operators satisfy the relations
$$[X_n^{-},X_n^+]=[V_n^{-},V_n^+]=1,$$
(56)
while all other commutators vanish. In terms of these operators we have
$$H_0=\frac{1}{t}X_0^+X_0^{-}+\frac{1}{t},$$
(57)
$$H_n^{\prime }=\frac{1}{t}X_n^+X_n^{-}+\frac{1}{t}V_n^+V_n^{-}+\frac{2}{t}.$$
(58)
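As a quick consistency check on the signs above, Eqs. (54), (52) and (57) can be verified symbolically. The sketch below (Python with sympy, our own illustration) confirms the commutator and the oscillator form of $`H_0`$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
f = sp.Function('f')(x)

Xp = lambda g: sp.diff(g, x) / 2               # X^+ of Eq. (54)
Xm = lambda g: -sp.diff(g, x) / 2 - 2*x*g      # X^- of Eq. (54)

# [X^-, X^+] f = f, i.e. Eq. (56)
print(sp.simplify(Xm(Xp(f)) - Xp(Xm(f))))      # -> f(x)

# H_0 of Eq. (52) equals (1/t) X^+ X^- + 1/t, Eq. (57)
H0 = -sp.diff(f, x, 2)/(4*t) - x*sp.diff(f, x)/t
print(sp.simplify(H0 - (Xp(Xm(f))/t + f/t)))   # -> 0
```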
We note the radial gauge constraint has removed the continuous spectrum from the $`s`$-wave sector, and the spectrum of $`H`$ is purely discrete. Furthermore, the splitting between energy levels of $`H_n^{\prime }`$ is independent of $`n`$.<sup>8</sup><sup>8</sup>8The reason is that in radial gauge the gauge fields are tangentially polarized, and so purely tangential excitations in two dimensions do not contribute to the field strength $`F^{12}`$.
We now couple an external source $`\stackrel{}{𝒥}`$ to the gauge field,
$$L=𝑑\theta t\left[-\frac{1}{2}F^{12}F^{12}+\stackrel{}{A}\cdot \stackrel{}{𝒥}\right].$$
(59)
The new Hamiltonian is then
$$H(\stackrel{}{𝒥})=H-\underset{n=0,\pm 1,\mathrm{}}{}z_n^{+1}\left(𝒥_{-n}^{-1}+𝒥_{-n-2}^{+1}\right).$$
(60)
The vacuum persistence amplitude in radial gauge is given by
$$Z(\stackrel{}{𝒥})\propto \underset{\begin{array}{c}t_{\mathrm{min}}\to 0^+\\ t_{\mathrm{max}}\to \mathrm{}\end{array}}{lim}\langle b\left|T\mathrm{exp}\left[-_{t_{\mathrm{min}}}^{t_{\mathrm{max}}}𝑑tH(\stackrel{}{𝒥})\right]\right|a\rangle ,$$
(61)
where $`|a\rangle `$ has non-zero overlap with the ground state of $`H`$ as $`t\to 0^+`$, and $`|b\rangle `$ has non-zero overlap with the ground state of $`H`$ as $`t\to \mathrm{}`$. Since the level-spacing of $`H_1^{\prime }`$ diverges as $`t\to 0^+`$ only the ground state projection of $`H_1^{\prime }`$ at $`t=0`$ contributes, and $`|a\rangle `$ no longer needs to be constant with respect to the $`s`$-wave variables $`z_0^{\pm 1}`$. This is a rather important point since, as we recall from , the constant $`s`$-wave boundary condition at $`t=0`$ follows from the fact that the value of the field at the origin is not constrained. This is, however, not true in radial gauge as a result of the gauge-fixing constraint.
## 4 Gauge-field correlators
In this section we calculate two-point gauge-field correlators using the spherical field formalism. This calculation can be done in several ways, including numerically. For future applications, however, it is useful to have exact expressions for use in perturbative calculations (e.g., for evaluating counterterms). Here we will obtain results by decomposing the fields as a combination of the ladder operators. Let us start with radial gauge. We have
$`z_{n-1}^{+1}`$ $`=\frac{1}{2\sqrt{2}}\left(iX_n^++iX_n^{-}-V_n^+-V_n^{-}\right)`$ (62)
$`z_{-n-1}^{+1}`$ $`=\frac{1}{2\sqrt{2}}\left(iX_n^++iX_n^{-}+V_n^++V_n^{-}\right)`$ (63)
for $`n>0`$ and
$$z_{-1}^{+1}=\frac{i}{2}\left(X_0^++X_0^{-}\right).$$
We would like to calculate the correlation function,
$$f_{n-1,n^{\prime }-1}^{rad}(t,t^{\prime })=\langle 0\left|A_{n-1}^{+1}(t)A_{n^{\prime }-1}^{+1}(t^{\prime })\right|0\rangle _{rad}.$$
(65)
By angular momentum conservation $`f_{n-1,n^{\prime }-1}^{rad}`$ vanishes unless $`n^{\prime }=-n`$. Also
$$f_{n-1,n^{\prime }-1}^{rad}(t,t^{\prime })=f_{n^{\prime }-1,n-1}^{rad}(t^{\prime },t),$$
(66)
and so without loss of generality it suffices to consider $`f_{n-1,-n-1}^{rad}`$ for $`n\geq 0`$. For typographical convenience let us define
$$\left\{F\right\}_t\left\{G\right\}_{t^{\prime }}=\theta (t-t^{\prime })\frac{\langle b\left|U(\mathrm{},t)FU(t,t^{\prime })GU(t^{\prime },0)\right|a\rangle }{\langle b\left|U(\mathrm{},0)\right|a\rangle }+\theta (t^{\prime }-t)\frac{\langle b\left|U(\mathrm{},t^{\prime })GU(t^{\prime },t)FU(t,0)\right|a\rangle }{\langle b\left|U(\mathrm{},0)\right|a\rangle },$$
(67)
where
$$U(t_2,t_1)=T\mathrm{exp}\left[-_{t_1}^{t_2}𝑑tH\right].$$
(68)
For $`n>0`$, we have
$$f_{n-1,-n-1}^{rad}=\frac{1}{8}\left\{iX_n^++iX_n^{-}-V_n^+-V_n^{-}\right\}_t\left\{iX_n^++iX_n^{-}+V_n^++V_n^{-}\right\}_{t^{\prime }}.$$
(69)
Using the commutation properties of these ladder operators with $`H(t)`$, we find
$$f_{n-1,-n-1}^{rad}=\frac{1}{4}\left[\theta (t^{\prime }-t)\frac{t}{t^{\prime }}+\theta (t-t^{\prime })\frac{t^{\prime }}{t}\right].$$
(70)
For the special case $`n=0,`$ we obtain
$$f_{-1,-1}^{rad}=\frac{1}{4}\left\{iX_0^++iX_0^{-}\right\}_t\left\{iX_0^++iX_0^{-}\right\}_{t^{\prime }}=\frac{1}{4}\left[\theta (t^{\prime }-t)\frac{t}{t^{\prime }}+\theta (t-t^{\prime })\frac{t^{\prime }}{t}\right].$$
(71)
These correlation functions are in agreement with results we obtain by decomposing the following expression into partial waves:
$`\langle 0\left|A^i(\stackrel{}{x})A^j(\stackrel{}{y})\right|0\rangle _{rad}`$ (72)
$`=-\frac{1}{4\pi }\underset{\epsilon \to 0^+}{lim}\left[\delta ^{ij}\mathrm{log}\frac{\left(\stackrel{}{x}-\stackrel{}{y}\right)^2}{L^2}-\partial _i^x{\displaystyle _0^1}𝑑sx^j\mathrm{log}\frac{\left(s\stackrel{}{x}-\stackrel{}{y}\right)^2+\epsilon }{L^2}-\partial _j^y{\displaystyle _0^1}𝑑ty^i\mathrm{log}\frac{\left(\stackrel{}{x}-t\stackrel{}{y}\right)^2+\epsilon }{L^2}+\partial _i^x\partial _j^y{\displaystyle _0^1}𝑑s{\displaystyle _0^1}𝑑t\stackrel{}{x}\cdot \stackrel{}{y}\mathrm{log}\frac{\left(s\stackrel{}{x}-t\stackrel{}{y}\right)^2+\epsilon }{L^2}\right].`$ (75)
The length scale $`L`$ is used to render the argument of the logarithm dimensionless. Its purpose, however, is only cosmetic since the gauge-field correlator is not infrared divergent in radial gauge and the dependence on $`L`$ cancels.<sup>9</sup><sup>9</sup>9The radial gauge constraint $`A^{-1}=-e^{2i\theta }A^{+1}`$ pairs $`s`$-wave configurations with $`d`$-wave configurations. Since the $`d`$-wave is not infrared divergent, neither is the $`s`$-wave.
These same methods can be applied to gauge-field correlators in covariant gauge. We find
$`\langle 0\left|A_{n-1}^{+1}(t)A_{-n-1}^{+1}(t^{\prime })\right|0\rangle _{cov}`$ $`=\langle 0\left|A_{n+1}^{-1}(t)A_{-n+1}^{-1}(t^{\prime })\right|0\rangle _{cov}`$ (76)
$`=\frac{\alpha -1}{4}\delta _{n,0}\left[\theta (t-t^{\prime })\frac{t^{\prime }}{t}+\theta (t^{\prime }-t)\frac{t}{t^{\prime }}\right]`$
$`-\frac{\alpha -1}{4}\left[\theta (t-t^{\prime })\delta _{n,1}+\theta (t^{\prime }-t)\delta _{n,-1}\right].`$
and for $`n\ne 1`$,
$$\langle 0\left|A_{n-1}^{+1}(t)A_{-n+1}^{-1}(t^{\prime })\right|0\rangle _{cov}=-\frac{\alpha +1}{4\left|n-1\right|}\left[\theta (t-t^{\prime })\left(\frac{t^{\prime }}{t}\right)^{\left|n-1\right|}+\theta (t^{\prime }-t)\left(\frac{t}{t^{\prime }}\right)^{\left|n-1\right|}\right].$$
(77)
In covariant gauge the $`s`$-wave correlator is infrared divergent,
$$\langle 0\left|A_0^{+1}(t)A_0^{-1}(t^{\prime })\right|0\rangle _{cov}=-\frac{\alpha +1}{2}\left[\theta (t-t^{\prime })\mathrm{log}\frac{t}{L}+\theta (t^{\prime }-t)\mathrm{log}\frac{t^{\prime }}{L}\right],$$
(78)
where $`\mathrm{log}L`$ is infinite. This divergence is specific to two-dimensional gauge fields and does not occur in higher dimensions. If we include a regulating gauge-field mass, $`\mu `$, we find that $`L`$ scales as $`1/\mu `$ as $`\mu \to 0`$. These correlation functions are in agreement with the results we obtain by decomposing the following known expression into partial waves:
$$\langle 0\left|A^i(\stackrel{}{x})A^j(\stackrel{}{y})\right|0\rangle _{cov}=-\frac{1}{4\pi }\left[\frac{\alpha +1}{2}\delta ^{ij}\mathrm{log}\frac{\left(\stackrel{}{x}-\stackrel{}{y}\right)^2}{L^2}+\frac{(\alpha -1)(x-y)^i(x-y)^j}{\left(\stackrel{}{x}-\stackrel{}{y}\right)^2}\right].$$
(79)
## 5 Gauge-invariant correlators
Let us now consider the two-point correlator of the gauge-invariant field strength $`F^{12}`$. We can calculate the $`F^{12}`$ correlator by differentiating the gauge-field correlators calculated in the previous section, but it is instructive to redo the calculation by coupling a source to $`F^{12}`$. This time we describe the calculation in detail for covariant gauge. The same calculation can be done for radial gauge using similar methods. Let us start by quoting the result we expect. From free field theory we know
$$\langle 0\left|F^{12}(\stackrel{}{t})F^{12}(\stackrel{}{t}^{\prime })\right|0\rangle =\delta ^2(\stackrel{}{t}-\stackrel{}{t}^{\prime }).$$
(80)
The $`F^{12}`$ correlator has a simple local structure, a consequence of the fact that in two dimensions gauge fields can be decomposed into scalar and longitudinal polarizations (borrowing Minkowski space terminology). There are no transverse polarizations to produce non-local contributions to the $`F^{12}`$ correlator. We will see this happen explicitly in the calculations to follow.
The two-point correlator for partial waves of $`F^{12}`$ is given by
$$\langle 0\left|F_n^{12}(t)F_{n^{\prime }}^{12}(t^{\prime })\right|0\rangle =\frac{1}{2\pi }𝑑\theta 𝑑\theta ^{\prime }e^{-in\theta }e^{-in^{\prime }\theta ^{\prime }}\langle 0\left|F^{12}(\stackrel{}{t})F^{12}(\stackrel{}{t}^{\prime })\right|0\rangle ,$$
(81)
and we deduce
$$\langle 0\left|F_n^{12}(t)F_{n^{\prime }}^{12}(t^{\prime })\right|0\rangle =\frac{1}{t}\delta _{n,-n^{\prime }}\delta (t-t^{\prime }).$$
(82)
Let us now reproduce this result using the spherical field method. We return to the covariant gauge Lagrangian and couple a source $`𝒦`$ to $`F^{12}`$,
$$L=𝑑\theta t\left[-\frac{1}{2}F^{12}F^{12}-\frac{1}{2\alpha }(\partial _iA^i)^2+F^{12}𝒦\right].$$
(83)
The conjugate momenta are now
$`\pi _{n-1}^{+1}`$ $`=\frac{t}{2}\left[(1-\frac{1}{\alpha })F_{-n}^{+1}+(-1-\frac{1}{\alpha })F_{-n}^{-1}-i\sqrt{2}𝒦_{-n}\right]`$ (84)
$`\pi _{n+1}^{-1}`$ $`=\frac{t}{2}\left[(-1-\frac{1}{\alpha })F_{-n}^{+1}+(1-\frac{1}{\alpha })F_{-n}^{-1}+i\sqrt{2}𝒦_{-n}\right],`$ (85)
and the new Hamiltonian is
$$H(𝒦)=\underset{n=0,\pm 1,\mathrm{}}{}H_n(𝒦),$$
(86)
where
$$H_n(𝒦)=H_n(0)+\frac{i}{\sqrt{2}}𝒦_{-n}\pi _{n-1}^{+1}-\frac{i}{\sqrt{2}}𝒦_{-n}\pi _{n+1}^{-1}-\frac{t}{2}𝒦_n𝒦_{-n}.$$
(87)
In the Schrödinger language we have
$$H_n(𝒦)=H_n(0)+\frac{i}{\sqrt{2}}𝒦_{-n}(\frac{n}{\left|n\right|}\frac{\partial }{\partial v_{\left|n\right|}}+i\frac{\partial }{\partial x_{\left|n\right|}})-\frac{t}{2}𝒦_n𝒦_{-n}$$
(88)
for $`n\ne 0`$ and
$$H_0(𝒦)=H_0(0)-𝒦_0\frac{\partial }{\partial x_0}-\frac{t}{2}𝒦_0𝒦_0.$$
(89)
The vacuum persistence amplitude is
$$Z(𝒦)=\langle b\left|T\mathrm{exp}\left[-_0^{\mathrm{}}𝑑tH(𝒦)\right]\right|a\rangle ,$$
(90)
and we evaluate the $`F_n^{12}`$ correlator by functional differentiation with respect to $`𝒦`$,
$$\langle 0\left|F_n^{12}(t)F_{n^{\prime }}^{12}(t^{\prime })\right|0\rangle =\frac{1}{Z(0)}\frac{\delta }{\delta t𝒦_{-n}(t)}\frac{\delta }{\delta t^{\prime }𝒦_{-n^{\prime }}(t^{\prime })}Z(𝒦)|_{𝒦=0}.$$
(91)
It is clear from angular momentum conservation that this correlator vanishes unless $`n^{\prime }=-n`$, and so it suffices to compute
$$\langle 0\left|F_n^{12}(t)F_{-n}^{12}(t^{\prime })\right|0\rangle .$$
(92)
Since (92) is symmetric under the interchange $`n,t\leftrightarrow -n,t^{\prime }`$, we can also restrict to $`n\geq 0`$. Differentiating with respect to the sources, we obtain, for $`n>0`$,
$$\langle 0\left|F_n^{12}(t)F_{-n}^{12}(t^{\prime })\right|0\rangle =-\frac{1}{2tt^{\prime }}\left\{\frac{\partial }{\partial v_n}+i\frac{\partial }{\partial x_n}\right\}_t\left\{-\frac{\partial }{\partial v_n}+i\frac{\partial }{\partial x_n}\right\}_{t^{\prime }}+\frac{1}{t}\delta (t-t^{\prime }).$$
(93)
When $`n>1`$ we can write $`\pm \frac{\partial }{\partial v_n}+i\frac{\partial }{\partial x_n}`$ as a linear combination of $`U_n^+`$, $`V_n^{-}`$, $`X_n^+`$, and $`Y_n^{-}`$. These ladder operators are, however, acting in four different spaces. The matrix element of the operator
$$U(\mathrm{},t)(\pm \frac{\partial }{\partial v_n}+i\frac{\partial }{\partial x_n})U(t,t^{\prime })(\mp \frac{\partial }{\partial v_n}+i\frac{\partial }{\partial x_n})U(t^{\prime },0)$$
(94)
from the ground state at $`t=0`$ to the ground state at $`t=\mathrm{}`$ vanishes.<sup>10</sup><sup>10</sup>10As noted before, this is due to the fact that in two dimensions there are no transverse polarizations. Consequently for $`n>1`$ only the delta function contributes to the correlation function in (93). The same arguments apply for the case $`n=0,`$ and only the delta function contributes here as well.
We now turn to the special case $`n=1.`$ The relevant part of the Hamiltonian is
$`H_1^{\prime }`$ $`=-\frac{1}{2t}\left(\frac{\partial ^2}{\partial v_1^2}+\frac{\partial ^2}{\partial x_1^2}\right)-\frac{\alpha }{2t}\left(\frac{\partial ^2}{\partial u_1^2}+\frac{\partial ^2}{\partial y_1^2}\right)`$ (95)
$`-\frac{1}{t}\left[\left(u_1+v_1\right)\left(\frac{\partial }{\partial u_1}+\frac{\partial }{\partial v_1}\right)+\left(x_1+y_1\right)\left(\frac{\partial }{\partial x_1}+\frac{\partial }{\partial y_1}\right)\right].`$
The combinations $`u_1-v_1`$ and $`x_1-y_1`$ correspond with linear combinations of the s-wave variables $`z_0^{+1}`$ and $`z_0^{-1}.`$ The initial configuration $`|a\rangle `$ at $`t=0`$ is constant with respect to $`z_0^{+1}`$ and $`z_0^{-1}`$, and therefore constant with respect to $`u_1-v_1`$ and $`x_1-y_1.`$ We note that when $`H_1^{\prime }`$ and/or $`\pm \frac{\partial }{\partial v_1}+i\frac{\partial }{\partial x_1}`$ acts upon $`|a\rangle `$, the result is again a state constant in $`u_1-v_1`$ and $`x_1-y_1`$. It therefore suffices to compute the correlator restricted to the subspace which is constant in $`u_1-v_1`$ and $`x_1-y_1`$. In this space $`H_1^{\prime }`$ has the form
$`H_1^{\prime }`$ $`\to -\frac{\alpha +1}{8t}\left(\frac{\partial }{\partial u_1}+\frac{\partial }{\partial v_1}\right)^2-\frac{\alpha +1}{8t}\left(\frac{\partial }{\partial x_1}+\frac{\partial }{\partial y_1}\right)^2`$ (96)
$`-\frac{1}{t}\left[\left(u_1+v_1\right)\left(\frac{\partial }{\partial u_1}+\frac{\partial }{\partial v_1}\right)+\left(x_1+y_1\right)\left(\frac{\partial }{\partial x_1}+\frac{\partial }{\partial y_1}\right)\right].`$
Comparing with (25), we see that this is analogous with the previous case for $`n=0`$. We again find the result
$$\langle 0\left|F_1^{12}(t)F_{-1}^{12}(t^{\prime })\right|0\rangle =\frac{1}{t}\delta (t-t^{\prime }).$$
(97)
## 6 Summary
In this work we applied the methods of spherical field theory to free gauge fields. We analyzed two-dimensional gauge fields in general covariant gauge and radial gauge. In the process we have discussed several new features which result from the spin degrees of freedom as well as the masslessness of the gauge field. As we have seen, polarization mixing complicates the structure of the spherical field Hamiltonian. Nevertheless, in radial gauge we were able to decompose the spherical field Hamiltonian as a sum of harmonic oscillators. We did the same for covariant gauge, but found that the $`s`$-wave part of the Hamiltonian has a continuous spectrum. In relation to these differences, we also discussed issues regarding the $`s`$-wave boundary condition at $`t=0.`$ We then used the spherical field evolution equations to calculate two-point correlators for the gauge fields and field-strength tensors $`F^{12}`$. Our presentation here is intended as a first introduction to the application of spherical field methods to gauge theories. Free gauge fields in higher dimensions can be treated by a straightforward generalization of these methods. Interacting gauge systems, however, include many interesting theoretical and computational issues not discussed here, and these are the subject of active research.
# Solving for the dynamics of the universe
## 1 Introduction
The Friedmann–Lemaitre–Robertson–Walker (FLRW) solutions to the Einstein field equations of general relativity are a cornerstone in the development of modern cosmology. The FLRW metric describes a spatially homogeneous and isotropic universe satisfying the Copernican principle, and is the starting point for studying dynamical models of the universe. To this end, one must specify the nature of the matter which is the source of gravitation by assigning the corresponding equation of state, which varies during different epochs of the history of the universe.
The FLRW metric is given, in comoving coordinates $`(t,r,\theta ,\phi )`$, by the line element
$$ds^2=-dt^2+a^2(t)\left[dr^2+f^2(r)\left(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2\right)\right],$$
(1.1)
where the three possible elementary topologies are classified according to a normalized curvature index $`K`$ which assumes the values $`0,\pm 1`$, and
$$f(r)=\mathrm{sinh}r(K=-1)$$
(1.2)
for the open universe,
$$f(r)=r(K=0)$$
(1.3)
for the critical universe, and
$$f(r)=\mathrm{sin}r(K=+1)$$
(1.4)
for the closed universe. Other topologies are possible (see e.g. Ref. 1, p. 725). The function $`a(t)`$ of the comoving time $`t`$ (the “scale factor”) is determined by the Einstein–Friedmann dynamical equations<sup>2-7</sup>
$$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\rho +3P\right),$$
(1.5)
$$\left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi G\rho }{3}-\frac{K}{a^2},$$
(1.6)
where $`\rho `$ and $`P`$ are, respectively, the energy density and pressure of the material content of the universe, which is assumed to be a perfect fluid. An overdot denotes differentiation with respect to the comoving time $`t`$, $`G`$ is Newton’s constant and units are used in which the speed of light in vacuum assumes the value unity. It is further assumed that the cosmological constant vanishes.
One can solve the Einstein–Friedmann equations (1.5) and (1.6) once a barotropic equation of state $`P=P(\rho )`$ is given. In many situations of physical interest, the equation of state assumes the form
$$P=\left(\gamma -1\right)\rho ,\gamma =\text{constant}.$$
(1.7)
This assumption reproduces important epochs in the history of the universe. For $`\gamma =1`$ one obtains the “dust” equation of state $`P=0`$ of the matter–dominated epoch; for $`\gamma =4/3`$, the radiation equation of state $`P=\rho /3`$ of the radiation–dominated era; for $`\gamma =0`$, the vacuum equation of state $`P=-\rho `$ of inflation. For $`\gamma =2/3`$, the curvature–dominated coasting universe with $`a(t)=a_0t`$ is obtained.
When the universe is spatially flat ($`K=0`$), the integration of the dynamical equations (1.5), (1.6) with the assumption (1.7) is straightforward<sup>2</sup> for any value of the constant $`\gamma `$:
$$a(t)=a_0t^{\frac{2}{3\gamma }}(\gamma \ne 0),$$
(1.8)
$$a(t)=a_0\text{e}^{Ht},\dot{H}=0(\gamma =0).$$
(1.9)
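The power-law solution (1.8) is easy to confirm numerically. The following minimal sketch (Python with scipy — our choice of tooling, using illustrative units in which $`8\pi G/3=1`$ and $`\rho =a^{3\gamma }`$) integrates Eq. (1.6) for dust, $`\gamma =1`$, and compares with the exact solution, which in these units is $`a(t)=(3\gamma t/2)^{2/(3\gamma )}`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0                                   # dust, P = 0
rhs = lambda t, a: a**(1.0 - 1.5*gamma)       # da/dt from Eq. (1.6), K = 0

t0, t1 = 1.0, 10.0
a0 = (1.5*gamma*t0)**(2.0/(3.0*gamma))        # start on the exact solution
sol = solve_ivp(rhs, (t0, t1), [a0], rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(t0, t1, 5)
exact = (1.5*gamma*t)**(2.0/(3.0*gamma))      # a proportional to t^(2/(3 gamma))
print(np.max(np.abs(sol.sol(t)[0]/exact - 1.0)))   # -> ~1e-10
```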
However, for $`K=\pm 1`$, the solution is given (explicitly or in parametric form) only for the special values $`\gamma =1,4/3`$ in old and recent textbooks<sup>1-7</sup>.
It is actually not difficult to derive a solution for any nonzero value of the constant $`\gamma `$ using a standard procedure<sup>3</sup> which is general, i.e. it does not depend upon the assumption $`P=\left(\gamma -1\right)\rho `$. Section 2 presents a summary of the usual method for deriving the scale factor $`a(t)`$. An alternative method, which consists in reducing the Einstein–Friedmann equations to a Riccati equation, and is applicable when Eq. (1.7) holds, is explained in Sec. 3. From the mathematical point of view, the latter method avoids the consideration of the energy conservation equation
$$\dot{\rho }+3\left(P+\rho \right)\frac{\dot{a}}{a}=0$$
(1.10)
and the calculation of an indefinite integral; rather, one solves a nonlinear Riccati equation. The alternative approach is more direct than the standard one and the gain in clarity of exposition makes it more suitable for an introductory cosmology course. Section 4 presents a brief discussion of the equation of state $`P=\left(\gamma -1\right)\rho `$ and the conclusions.
## 2 The standard derivation of the scale factor
The standard method to obtain the scale factor $`a`$ of the universe proceeds as follows<sup>3</sup>. The energy conservation equation (1.10) yields
$$3\mathrm{ln}a=-∫\frac{d\rho }{P+\rho }+\text{constant}$$
(2.1)
for $`\gamma 0`$. Upon use of the conformal time $`\eta `$ defined by
$$dt=a(\eta )d\eta ,$$
(2.2)
the Einstein–Friedmann equations (1.5), (1.6) yield
$$\eta =\pm ∫\frac{da}{a\sqrt{\frac{8\pi G}{3}\rho a^2-K}}.$$
(2.3)
In order to obtain $`a(\eta )`$, one prescribes the equation of state $`P=P(\rho )`$, solves Eq. (2.1) and inverts it obtaining $`\rho =\rho (a)`$. Further substitution into Eq. (2.3) and inversion provide $`a=a(\eta )`$. Integration of Eq. (2.2) provides the comoving time $`t(\eta )`$ as a function of conformal time. The scale factor is then expressed in parametric form $`(a(\eta ),t(\eta ))`$. Sometimes it is possible to eliminate the parametric dependence on $`\eta `$ and obtain the expansion factor as an explicit function $`a(t)`$ of comoving time.
The method is quite general; as a particular case, it can be applied when the equation of state is of the form $`P=\left(\gamma -1\right)\rho `$ with constant nonvanishing $`\gamma `$. Equations (2.1) and (2.3) then yield
$$a^3\rho ^{1/\gamma }=\text{constant},$$
(2.4)
$$\eta =\pm ∫\frac{da}{a\sqrt{\frac{8\pi G}{3}C_1a^{2-3\gamma }-K}}$$
(2.5)
for $`\gamma \ne 0,2/3`$, where $`C_1`$ is an integration constant. By introducing the variable
$$x=\left(\frac{8\pi GC_1}{3}\right)^{\frac{1}{2-3\gamma }}a,$$
(2.6)
and using
$$∫\frac{dx}{x\sqrt{x^n+1}}=\frac{1}{n}\mathrm{ln}\left(\frac{\sqrt{x^n+1}-1}{\sqrt{x^n+1}+1}\right),$$
(2.7)
$$∫\frac{dx}{x\sqrt{x^n-1}}=\frac{2}{n}\text{arcsec}\left(x^{n/2}\right),$$
(2.8)
one integrates and inverts Eq. (2.5) to obtain
$$a(\eta )=a_0\mathrm{sinh}^{1/c}(c\eta ),$$
(2.9)
$$t(\eta )=a_0_0^\eta 𝑑\eta ^{\prime }\mathrm{sinh}^{1/c}(c\eta ^{\prime }),$$
(2.10)
for $`K=-1`$. $`a_0`$ and $`c`$ are constants, with
$$c=\frac{3}{2}\gamma -1,$$
(2.11)
and the boundary condition $`a\left(\eta =0\right)=0`$ has been imposed.
Similarly, for $`K=+1`$, one obtains
$$a(\eta )=a_0\left[\mathrm{cos}\left(c\eta +d\right)\right]^{1/c},$$
(2.12)
$$t(\eta )=a_0_0^\eta 𝑑\eta ^{\prime }\left[\mathrm{cos}\left(c\eta ^{\prime }+d\right)\right]^{1/c}.$$
(2.13)
For $`\gamma =2/3`$ and $`K=-1`$ one obtains a curvature–dominated universe for which Eq. (1.6) is approximated by $`\left(\dot{a}/a\right)^2\approx -K/a^2`$. In this case, Eq. (2.3) yields $`a=a_0\mathrm{exp}(\beta \eta )`$ ($`\beta =`$ constant), and $`t=t_0`$e<sup>βη</sup> gives $`a=a_0t`$. It is easier to obtain this form of the scale factor directly from Eq. (1.5), which reduces to $`\ddot{a}=0`$.
The solutions (2.9), (2.10) and (2.12), (2.13) for the scale factor are presented in the textbooks only for the special values $`1`$ and $`4/3`$ of the constant $`\gamma `$. For $`\gamma =4/3`$ one eliminates the parameter $`\eta `$ to obtain
$$a(t)=a_0\left[1-\left(1-\frac{t}{t_0}\right)^2\right]^{1/2}$$
(2.14)
for $`K=+1`$, and
$$a(t)=a_0\left[\left(1+\frac{t}{t_0}\right)^2-1\right]^{1/2}$$
(2.15)
for $`K=1`$.
The standard solution method of the Einstein–Friedmann equations has the virtue of being general; it does not rely upon the assumption (1.7). However, the need to invert Eqs. (2.1) and (2.3) and to compute the indefinite integrals (2.7), (2.8) detracts from the elegance and clarity that is possible when the ratio $`P/\rho `$ is constant. The latter condition is satisfied in many physically important situations.
## 3 An alternative method
There is an alternative procedure to derive the scale factor for a general value of $`\gamma `$ when the equation of state of the universe’s material content is given by $`P=\left(\gamma -1\right)\rho `$ with $`\gamma =`$ constant, which covers many cases of physical interest. Being straightforward, this new method is valuable for pedagogical purposes and proceeds as follows: Eqs. (1.5), (1.6) and (1.7) yield
$$\frac{\ddot{a}}{a}+c\left(\frac{\dot{a}}{a}\right)^2+\frac{cK}{a^2}=0,$$
(3.1)
with $`c`$ given by Eq. (2.11). For $`K=0`$, Eq. (3.1) is immediately integrated to give Eqs. (1.8), (1.9). For $`K=\pm 1`$, Eq. (3.1) is rewritten as
$$\frac{a^{\prime \prime }}{a}+(c-1)\left(\frac{a^{\prime }}{a}\right)^2+cK=0,$$
(3.2)
by making use of the conformal time $`\eta `$, and where a prime denotes differentiation with respect to $`\eta `$. By employing the variable
$$u\equiv \frac{a^{\prime }}{a},$$
(3.3)
Eq. (3.2) becomes
$$u^{\prime }+cu^2+Kc=0,$$
(3.4)
which is a Riccati equation. The Riccati equation, which has the general form
$$\frac{dy}{dx}=a(x)y^2+b(x)y+c(x)$$
(3.5)
where $`y=y(x)`$, has been the subject of many studies in the theory of ordinary differential equations, and can be solved explicitly<sup>8,9</sup>. The solution is found by introducing the variable $`w`$ defined by
$$u=\frac{1}{c}\frac{w^{\prime }}{w},$$
(3.6)
which changes Eq. (3.4) to
$$w^{\prime \prime }+Kc^2w=0,$$
(3.7)
the solution of which is trivial. For $`K=+1`$, one finds the solutions (2.12), (2.13), while for $`K=-1`$, one recovers (2.9), (2.10).
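The whole chain (3.4)–(3.7) can be verified symbolically. The sketch below (sympy, our choice of tool) checks that $`u=w^{\prime }/(cw)`$ with $`w^{\prime \prime }+Kc^2w=0`$ solves the Riccati equation, and that for $`K=-1`$ it reproduces the open-universe scale factor (2.9):

```python
import sympy as sp

eta = sp.symbols('eta', positive=True)
c = sp.symbols('c', positive=True)
K = -1                                # open universe
w = sp.sinh(c*eta)                    # solves w'' + K c^2 w = 0 for K = -1

u = sp.diff(w, eta)/(c*w)             # Eq. (3.6)
print(sp.simplify(sp.diff(u, eta) + c*u**2 + K*c))   # -> 0, Eq. (3.4)

# u = a'/a integrates to a = a_0 w^(1/c) = a_0 sinh^(1/c)(c eta), Eq. (2.9)
a = w**(1/c)
print(sp.simplify(sp.diff(a, eta)/a - u))            # -> 0
```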
The alternative method is applicable when the equation of state assumes the form $`P=\left(\gamma -1\right)\rho `$ and the cosmological constant vanishes. With these conditions satisfied, the alternative solution procedure is more direct than the general method. It is now appropriate to comment on the equation of state $`P=\left(\gamma -1\right)\rho `$, $`\gamma =`$ constant, which reproduces several situations of significant physical interest.
## 4 Discussion
The assumption that the equation of state is of the form $`P=\left(\gamma -1\right)\rho `$ with constant $`\gamma `$ is justified in many important situations which describe the standard big–bang cosmology. However, it is important to realize that Eq. (1.7) is a strong assumption and by no means yields the most general solution for the scale factor of a FLRW universe. To make a physically interesting example, consider the inflationary (i.e. $`\ddot{a}>0`$) epoch of the early universe in the $`K=0`$ case. Many inflationary scenarios are known<sup>10</sup>, corresponding to different concave shapes of the scale factor $`a(t)`$; the assumption (1.7) allows the solutions
$$a(t)=a_0t^{\frac{2}{3\gamma }}$$
(4.1)
(“power–law inflation”) for $`0<\gamma <2/3`$, and
$$a=a_0\text{e}^{Ht},\dot{H}=0$$
(4.2)
for $`\gamma =0`$. Vice–versa, using the dynamical equations (1.5), (1.6), it is straightforward to prove that the latter solutions imply $`P/\rho =`$ constant. The assumption (1.7) reproduces only exponential expansion (the prototype of inflation) and power–law inflation. All the other inflationary scenarios correspond to a $`\gamma (t)`$ which changes with time during inflation. A time–dependent $`\gamma (t)`$ can also be used to describe a non–interacting mixture of dust and radiation; however the method of solution of the Einstein–Friedmann equations (1.5), (1.6) presented in Sec. 3 applies only when $`\gamma `$ is constant, and when the cosmological constant vanishes. If the cosmological constant is nonzero, Eq. (3.1) does not reduce to a Riccati equation.
The limitations of the assumption $`\gamma =`$ constant are thus made clear: while this new approach to the Einstein–Friedmann equations does not replace the standard approach, it is more direct and is preferable in an introduction to cosmology. Its value lies in the ease of demonstration of the solutions, which is crucial for students to grasp the basic concepts of cosmology.
## Acknowledgments
The author thanks L. Niwa for reviewing the manuscript and two anonymous referees for useful comments.
<sup>1</sup> C.W. Misner, K.S. Thorne and J.A. Wheeler, Gravitation (Freeman, San Francisco, 1973), pp. 733–742.
<sup>2</sup> S. Weinberg, Gravitation and Cosmology (J. Wiley & Sons, New York, 1972), pp. 475–491.
<sup>3</sup> L.D. Landau and E.M. Lifschitz, The Classical Theory of Fields (Pergamon Press, Oxford, 1989), pp. 363–367.
<sup>4</sup> R.M. Wald, General Relativity (Chicago University Press, Chicago, 1984), chap. 5.
<sup>5</sup> E.W. Kolb and M.S. Turner, The Early Universe (Addison–Wesley, Mass., 1990), pp. 58–60.
<sup>6</sup> T. Padmanabhan, Cosmology and Astrophysics Through Problems (Cambridge University Press, Cambridge, 1996), pp. 79–89.
<sup>7</sup> R. D’Inverno, Introducing Einstein’s Relativity (Clarendon Press, Oxford, 1992), pp. 334–344.
<sup>8</sup> E. Hille, Lectures on Ordinary Differential Equations (Addison–Wesley, Reading, Mass., 1969), pp. 273–288.
<sup>9</sup> E.L. Ince, Ordinary Differential Equations (Dover, New York, 1944), pp. 23–25.
<sup>10</sup> A.R. Liddle and D.H. Lyth, The Cold Dark Matter Density Perturbation, Physics Reports 231, 1–105 (1993).
Comment on “Adsorption of Polyelectrolyte onto a Colloid of Opposite Charge”
In a recent Letter, Gurovitch and Sens studied the adsorption of a weakly charged polyelectrolyte chain onto an oppositely charged colloidal particle. By using a variational technique they found that the colloidal particle can adsorb a polymer of higher charge than its own, and thus be “overcharged.” I argue that the observed overcharging by a factor of $`16/5`$ is indeed an artifact of the approximations involved in the study. Moreover, I show that the existence of overcharging for a pointlike colloidal particle depends crucially on the choice of the trial wave function, contrary to their claim.
To study the adsorption, they use a restricted class of trial wave functions $`\psi _z(𝐫)`$ based on the assumption that the polyelectrolyte is uniformly confined in space to a sphere of size $`1/z`$, and treat $`z`$ as a variational parameter. A finite value for $`1/z`$ that minimizes the free energy, called $`1/z^{\ast }`$, would then mean complete adsorption, whereas an infinite $`1/z^{\ast }`$ would imply instabilities in the form of dangling segments stretching away from the core. I use a larger class of trial wave functions
$$\psi ^2(𝐫)=\alpha \psi _z^2(𝐫)+\frac{(1-\alpha )}{V},$$
(1)
that assumes a fraction $`\alpha `$ of the chain is confined to a sphere of size $`1/z`$, while the rest fills up a considerably larger space (of volume $`V`$), and treat $`z`$ and $`\alpha `$ as parameters. This class of trial functions clearly contains that used in Ref. as a subclass ($`\alpha =1`$), and can thus be used to check the robustness of their results.
One can argue why the above choice for the trial wave function is physically more appropriate. The polyelectrolyte can be either adsorbed to the oppositely charged colloid, or stretched out due to self-repulsion. This suggests that an effective two dimensional phase space is more suitable to describe the state of the system. Any configuration of the chain can then effectively be described as a decomposition into various segments, each of which occupying one of the two states, in this simplified picture. The natural question to ask is then the “occupation” ratio of each state, which is determined by minimization of the free energy.
Consider a chain of length $`N`$ with a fraction $`f`$ of its monomers being charged, which is adsorbed to a colloid of charge $`Q`$. Using $`\psi _z(𝐫)=(z^3/\pi )^{1/2}e^{-zr}`$ as in Ref. , one obtains the total free energy per unit charge as
$$\frac{E(z,\alpha )}{k_BT}=c_0a^2z^2\alpha -c_1Ql_bz\alpha +c_2fNl_bz\alpha ^2,$$
(2)
in which $`l_b=q^2/ϵk_BT`$ is the Bjerrum length, $`a^2=b^2/f`$ where $`b`$ is the monomer size, and the numerical coefficients are given as $`c_0=1/6`$, $`c_1=1`$, and $`c_2=5/16`$ for the above choice of trial function. Minimizing with respect to $`z`$ and $`\alpha `$ yields a “confinement radius” $`1/z^{\ast }=3c_0a^2/c_1l_bfQ`$, and a “charging fraction”
$$\alpha ^{\ast }=\left(\frac{c_1}{3c_2}\right)\times \frac{Q}{fN}.$$
(3)
Note that within this class of trial functions one always obtains a finite value for $`1/z^{\ast }`$.
The amount of charge that can be adsorbed by the colloid is given by $`\alpha ^{\ast }fN=(c_1/3c_2)Q`$, and is equal to $`(16/15)Q\approx 1.07Q`$ for the above choice of trial function, which indeed suggests an overcharging, although considerably smaller than reported in Ref. . However, one can see that this prediction is strongly dependent on the choice of the trial wave function, and thus not robust. For example, a Gaussian wave function of the form $`\psi _z(𝐫)=(z^2/2\pi )^{3/4}e^{-z^2r^2/4}`$ yields the above results with $`c_0=1/32\sqrt{2}`$, $`c_1=2\sqrt{2}/\sqrt{\pi }`$, and $`c_2=1/\sqrt{\pi }`$. In this case, an adsorption of $`(2\sqrt{2}/3)Q\approx 0.943Q`$ charges is predicted, which indicates an “undercharging!”
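The sensitivity to the trial function enters only through the ratio $`c_1/(3c_2)`$, which takes seconds to evaluate numerically (a trivial Python illustration of our own, using the coefficients quoted above):

```python
from math import sqrt, pi

# adsorbed charge in units of Q:  alpha* f N / Q = c1 / (3 c2), Eq. (3)
exponential = 1.0 / (3 * 5.0/16.0)                    # c1 = 1, c2 = 5/16
gaussian = (2*sqrt(2)/sqrt(pi)) / (3 * 1.0/sqrt(pi))  # c1 = 2*sqrt(2)/sqrt(pi), c2 = 1/sqrt(pi)
print(exponential, gaussian)   # 1.0667 (overcharged), 0.9428 (undercharged)
```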
Finally, I note that a finite size of the colloidal particle (a hard core), and end effects due to the finite length of the chain, have recently been shown to lead to overcharging . The overcharging is reduced as the size of the particle decreases, and, interestingly, changes into a slight undercharging as it goes to zero .
In conclusion, I have shown that a variational approach can not be used to unambiguously determine the degree of charging of a pointlike colloidal particle by an oppositely charged flexible polyelectrolyte.
It is a pleasure to acknowledge stimulating discussions with A. Yu. Grosberg, C. Jeppesen, E. Mateescu, and P. Pincus. This research was supported in part by the National Science Foundation under Grants No. PHY94-07194, and DMR-93-03667.
Ramin Golestanian
Institute for Theoretical Physics
University of California
Santa Barbara, CA 93106-4030
# Probing the Interstellar Medium using HI absorption and emission toward the W3 HII region
## 1 Introduction
While atomic hydrogen (HI), generally seen in emission, is certainly a pervasive component of the Galaxy, our knowledge of its optical depth ($`\tau `$) and spin temperature ($`T_s`$) close to the Galactic plane is sparse. The main reason for this is the difficulty inherent in disentangling the emission along most sight-lines (Kulkarni & Heiles 1988). Use of absorption towards extragalactic sources has proved somewhat successful but has not been entirely satisfactory at low Galactic latitudes.
The plane is peppered with bright HII regions which could potentially be used. Wendker & Wrigge (1996) studied the HI absorption spectrum towards DR 7 as seen using the Dominion Radio Astrophysical Observatory’s (DRAO) Synthesis Telescope (ST) and argued for the usefulness of such observations for the determination of optical depth and spin temperature as a function of radial velocity (or distance). The authors suggested that the careful study of many lines of sight will contribute to the quasi mapping of $`\tau `$ and $`T_s`$, although for HI at velocities corresponding to gas in the proximity of the HII regions, it is important to bear in mind that some of the absorbing material may have been dissociated by the central stars, thus presenting local enhancements which are not typical of the general surroundings. This paper contributes a second line-of-sight towards a strong, extended continuum source within our galaxy: the W3 HII complex, shown in the accompanying figure.
The W3 region has been studied in detail repeatedly and at many frequencies (see e.g. Roberts, Crutcher & Troland 1997 for molecular line observations; Tieftrunk et al. 1997 for a multiwavelength radio continuum study; Roelfsema & Goss 1991 for radio recombination lines; Campbell et al. 1995 for a far-infrared look at subcomponents; and Hofner & Churchwell 1997 for X-ray). It houses active star formation and can be subdivided into multiple HII regions for which the nomenclature varies somewhat with the type of observations. Table 1 defines the terminology used in this paper.
HI studies of sight-lines towards W3 have concentrated on searching for atomic gas associated with the HII regions. While the limited spatial and spectral resolution of data presented by Sullivan & Downes (1973) precluded the detection of variations in HI opacities, observations with the Nançay telescope led Crovisier et al. (1975) to conclude that the HI absorption near –40 km s$`^1`$ (all velocities in this paper are with respect to the Local Standard of Rest) is due to atomic hydrogen related to W3. Read (1981) carried out a detailed study of the HI in this region in both emission and absorption; his data had similar spatial resolution to the DRAO data presented here and somewhat poorer velocity resolution (4 km s$`^1`$). He suggested that photodissociation of molecular gas near the compact HII regions is responsible for several observed HI concentrations. Goss et al. (1983) and van der Werf & Goss (1990) used the Westerbork Synthesis Radio Telescope (WSRT) and so had better spatial resolution ($`30\mathrm{}`$ and $`15\mathrm{}`$) than obtainable with the smaller DRAO array, as well as better velocity resolution (1.24 km s$`^1`$ and 1.03 km s$`^1`$), but they do not provide full visibility plane coverage and so are not sensitive to emission on all angular scales. They also concluded that much of the HI at $`40\text{ km s}^1`$ is due to dissociation and delineate shells associated with various compact HII regions.
The above authors presented studies of the W3 complex itself and therefore concentrated on HI which may be related to it, i.e. their focus was the HI absorption near –40 km s$`^1`$. The present paper uses sight lines towards W3 to probe the interstellar medium between us and this HII region, evaluating optical depths and spin temperatures, and placing it in a Galactic perspective. Section 2 briefly describes the data sets which were used in this analysis. Section 3 gives an overview of the HI profiles towards W3, placing their features in the context of rotation curves and spiral shocks, as well as outlining the method used to derive the optical depths and spin temperatures. The fourth section provides a more detailed discussion of optical depths and spin temperatures at different velocities. And finally, the last section summarises the findings.
## 2 The data
Radio continuum data at both 408 MHz and 1420 MHz, as well as 21cm spectral line data were obtained at the DRAO as part the Canadian Galactic Plane Survey (CGPS) pilot project. The pilot project covered an $`8\mathrm{°}\times 6\mathrm{°}`$ area of the sky, encompassing all of the W3/W4/W5/HB3 Galactic complex. Observations were carried out in June, July, November and December of 1993. The synthesis telescope is a 7-element interferometer with four fixed and three movable antennas, each with a diameter of $`\sim `$9 m. In the course of twelve 12-hour periods, observations are taken for all baselines from 12.858 to 604.336 m by increments of 4.286 m. At 1420 MHz, the small dish size results in a field of view of 78 arcmin (at 20% attenuation of the primary beam which is approximated by a Gaussian), the longest spacing gives a spatial resolution of $`1.00\times 1.00\mathrm{csc}\delta `$ arcmin<sup>2</sup> (EW $`\times `$ NS), and the shortest baseline means that the instrument is not sensitive to structures greater than 0.5° in extent.
The spectral line data were collected by a 128-channel spectrometer with a channel width of 2.64 km s$`^{-1}`$ and a channel separation of 1.649 km s$`^{-1}`$. The sensitivity at field centre for the ST data is 3.0 K and degrades with distance from the centre as the inverse of the primary beam. For channels where the continuum emission from W3 is strongly absorbed there are processing artefacts from W3. Information about the lowest spatial frequencies (i.e. large angular sizes) was provided by the DRAO’s 26m telescope, and as a result the images are sensitive to structures on all scales. Relevant observational parameters are outlined in Table 2. Details of the observations and data reduction are described by Normandeau et al. (1997).
Continuum emission was subtracted from the HI images. While the DRAO synthesis telescope collects continuum data at 1420 MHz using four 7.5 MHz bands, two to each side of the line frequency, the resulting image was not used to subtract the continuum. Instead an average was taken of channels where there was no apparent HI , the first and last twenty channels corresponding to velocities between +55 and +24 km s$`^1`$ and between –137 and –154 km s$`^1`$. This allows for a more precise subtraction of the continuum emission for three reasons: 1) the DRAO spectrometers are equipped with automatic level control which adjusts the gain in order to minimise information loss due to quantization whereas the continuum system is not, and the uncertainty of the gain correction factor would decrease the reliability of HI images; 2) the difference in bandwidth between a spectrometer channel and the continuum receiver results in differing amounts of bandwidth smearing in the images, making the subtraction of continuum sources progressively worse with distance from field centre; 3) using the end channels insures that the visibility plane coverage is identical for all the maps involved in the operation, thus allowing a more accurate subtraction of any artefacts related to strong sources. However, the continuum brightness temperature values used in the optical depth calculations (see below) were from the true continuum image.
## 3 Overview of HI spectra towards W3
The most prominent absorption features in the CGPS pilot project data are those associated with W3 which can be seen out to $`v_{\mathrm{LSR}}=-50\text{ km s}^{-1}`$. While the profiles towards different points within the W3 region differ significantly from each other, mostly for velocities near –40 km s$`^{-1}`$, they all have the same global properties which are well illustrated in the accompanying figure. This shows the average absorption profiles associated with W3-N and W3-W for all positions where the continuum brightness temperature is greater than 50 K. As well, an average emission spectrum from forty nearby positions (see §3.1.2) is plotted.
### 3.1 Optical depth and spin temperature
#### 3.1.1 General considerations
Spectra towards the source (“on”), which show absorption, and towards nearby positions (“off”), showing emission, can be combined to yield the optical depth and spin temperature. For the on-source spectrum,
$$T_{\mathrm{b},\mathrm{on}}(v)=T_s(v)(1-e^{-\tau _v})+T_ce^{-\tau _v}-T_c$$
(1)
where $`T_c`$ is the continuum brightness temperature of the background source. The dependence on frequency and therefore radial velocity, $`v`$, is indicated. The off-source spectrum is given by the first term of the above equation:
$$T_{\mathrm{b},\mathrm{off}}(v)=T_s(v)(1-e^{-\tau _v}).$$
(2)
Assuming that the off-spectrum has been carefully chosen to be representative of what would be seen at the position of the source if no absorption were occurring, one can solve for the optical depth:
$$\tau _v=-\mathrm{ln}\left(1-\frac{T_{\mathrm{b},\mathrm{off}}(v)-T_{\mathrm{b},\mathrm{on}}(v)}{T_c}\right).$$
(3)
Once the optical depth has been obtained from the observed spectra and continuum brightness temperature, the spin temperature can be calculated using Eq. (2). It should be noted that an over- or underestimate of $`T_{\mathrm{b},\mathrm{off}}`$ will result in an over- or underestimate of $`\tau `$.
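In practice Eqs. (1)–(3) amount to a few lines of array arithmetic. The following schematic Python/numpy function (our own illustration, not part of the original reduction pipeline) recovers $`\tau _v`$ and $`T_s(v)`$ from an on-source spectrum, an off-source spectrum and the continuum brightness temperature:

```python
import numpy as np

def tau_and_ts(T_on, T_off, T_c):
    """Optical depth and spin temperature from Eqs. (1)-(3).

    T_on, T_off : brightness-temperature spectra (K), continuum subtracted
    T_c         : continuum brightness temperature of the source (K)
    """
    tau = -np.log(1.0 - (T_off - T_on) / T_c)   # Eq. (3)
    T_s = T_off / (1.0 - np.exp(-tau))          # Eq. (2) inverted
    return tau, T_s
```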
#### 3.1.2 The case of W3
As indicated above, the off-spectrum must be carefully chosen to be representative of the on-source emission. This proves difficult for sight-lines towards W3 as there is much variation. For forty positions within a few arcminutes of W3, 20 near W3-N and 20 near W3-W as defined by their 50 K contours, being careful to avoid the image artefacts and the slight absorption associated with W3-E, for velocities ranging from +8 km s$`^{-1}`$ to –91 km s$`^{-1}`$ which includes all of the Galactic emission of note, the standard deviation was as high as 18 K, though the average deviation in both cases was $`\sim `$7 K. The average spectra, for W3-N and W3-W, are shown in the accompanying figure, which also includes the average$`\pm `$1$`\sigma `$ profiles.
For areas where the absorption is almost total, the optical depth calculation depends very sensitively on $`T_{\mathrm{b},\mathrm{off}}-T_{\mathrm{b},\mathrm{on}}`$. Therefore, for the present purposes, whenever the value of $`T_{\mathrm{b},\mathrm{off}}(v)-T_{\mathrm{b},\mathrm{on}}(v)`$ came within $`2(\sigma _{\mathrm{off}}(v)+\sigma _{\mathrm{on}})`$ of $`T_c`$ it was replaced by $`T_c-2(\sigma _{\mathrm{off}}(v)+\sigma _{\mathrm{on}})`$ in the calculation, where $`\sigma _{\mathrm{off}}(v)`$ is the standard deviation of the appropriate off-spectrum, and $`\sigma _{\mathrm{on}}=3.5\mathrm{K}/P(\theta )`$, the uncertainty at field centre for the continuum subtracted HI images adjusted for the effect of the primary beam correction. This provides a lower limit value for the optical depth. It affects approximately 36% of the values of $`\tau `$ for W3-W from –33.51 km s$`^{-1}`$ to –50.00 km s$`^{-1}`$, and $`\sim `$41% of those for W3-N in this same interval. Away from this deep trough, none of the values for W3-N needed to be calculated in this manner, and less than $`\sim `$2% of the values for W3-W at more positive velocities are affected.
The spin temperature calculations are highly dependent on the value of $`T_{\mathrm{b},\mathrm{off}}`$ which is the greatest source of uncertainty. Therefore, in addition to indicating limits when only limits were calculated for $`\tau `$, results for $`T_s`$ will be considered trustworthy only when $`T_s>2\sigma _{T_s}`$.
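The lower-limit substitution and the $`T_s`$ acceptance criterion just described can be expressed as a small extension of the same calculation (again schematic, with the factor of 2 and the $`\sigma `$ terms taken from the text):

```python
import numpy as np

def tau_lower_limited(T_on, T_off, T_c, sig_off, sig_on):
    # differences within 2*(sigma_off + sigma_on) of T_c are floored,
    # turning the corresponding tau values into lower limits
    diff = T_off - T_on
    floor = T_c - 2.0*(sig_off + sig_on)
    is_limit = diff > floor
    tau = -np.log(1.0 - np.minimum(diff, floor) / T_c)
    return tau, is_limit

def trusted_ts(T_s, sig_Ts):
    # keep T_s only where T_s > 2*sigma_Ts, as in the text
    return np.where(T_s > 2.0*sig_Ts, T_s, np.nan)
```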
Plots of $`\tau _v`$ and $`T_s(v)`$ for sight lines towards W3-N and W3-W are presented in the accompanying figures. These were calculated using the average spectrum of all positions where the continuum emission is greater than 50 K (i.e. the on-source spectra shown in the accompanying figure) and the average of 20 nearby spectra for the emission (i.e. the emission spectra shown in the same figure). Calculations were also done on a pixel by pixel basis for each region.
### 3.2 Main features placed in a Galactic context
There are three main absorption troughs along this line-of-sight: one corresponding to Local gas, one corresponding to the bulk of the Perseus arm and an intermediate trough centred on $`-22\text{ km s}^{-1}`$. There is HI emission at velocities between the intermediate and Perseus arm absorption troughs, at $`v\approx -30\text{ km s}^{-1}`$. Three different scenarios are considered to explain this and are illustrated in the accompanying figure.
#### 3.2.1 Standard rotation curve without gas displacement
One possibility does not require that any of the gas be displaced relative to the standard rotation curve, nor that the emitting gas at –30 km s$`^{-1}`$ be behind W3 (panel a of the accompanying figure).
If the opacity of the gas at –30 km s$`^{-1}`$ were low enough, there would be too little absorption for it to be noticeable. In this framework the values of $`\tau `$ calculated from the observed spectra would imply spin temperatures in excess of 10<sup>4</sup> K for this interarm gas, higher than expected for the Warm Neutral Medium ($`T\sim 8000`$ K, Kulkarni & Heiles 1988; direct measurements towards Cyg A by Carilli et al. (1998) yielded $`6000\pm 1700`$ K and $`4800\pm 1600`$ K).
This feature of the spectra extends approximately from –24 km s$`^{-1}`$ to –33 km s$`^{-1}`$. Assuming a flat rotation curve with an Oort constant $`A=14\text{ km s}^{-1}\mathrm{kpc}^{-1}`$ and R<sub>0</sub> = 8.5 kpc, this implies a path length of 0.6 kpc within which there is no cold neutral hydrogen. This seems unlikely considering that the average off-source emission is approximately 55 K and that there is significant absorption at more positive velocities (–20 km s$`^{-1}`$) which also correspond to the interarm region for the standard rotation curve.
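For reference, the quoted path length follows from the first-order Oort relation $`v_r\approx Ad\mathrm{sin}2l`$ (our assumption for the exact relation used, with $`l\approx 133.7\mathrm{°}`$ for W3):

```python
from math import sin, radians

A = 14.0            # Oort constant, km/s/kpc
l = 133.7           # Galactic longitude of W3, degrees (approximate)
dv = 33.0 - 24.0    # velocity width of the emission gap, km/s

d = dv / (A * abs(sin(radians(2*l))))
print(d)            # ~0.64 kpc, consistent with the ~0.6 kpc quoted above
```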
#### 3.2.2 Standard rotation curve with –30 km s$`^1`$ gas displaced
In the most intuitive picture, the lack of absorption at –30 km s$`^1`$ places the related gas behind the W3 region.
It could be argued that the –30 km s$`^{-1}`$ gas has been displaced in velocity (panel b of the accompanying figure), that it is not following the rotation curve of the Galaxy. In this case, the less negative velocity indicates that, while it is approaching us, it is receding relative to the general movement of the Galactic HI at a velocity of at least 19 km s$`^{-1}`$, as determined from the lower velocity edge of the –40 km s$`^{-1}`$ absorption trough and the upper velocity edge of the –30 km s$`^{-1}`$ trough.
#### 3.2.3 TASS model, –40 km s$`^{-1}`$ gas accelerated
For the Perseus arm there are known to be large deviations from the circular motion normally assumed for the Galaxy (see Roberts 1972 and references therein); the location of the optical arm and that of the radio (HI) arm as determined using a standard Galactic velocity curve do not coincide, with the radio arm being apparently further than the optical arm. Roberts (1972) developed the “two-armed spiral shock” model (TASS) to explain these discrepancies. According to this model a shock develops along the inner edge of the Perseus arm where the gas encounters a minimum in the gravitational potential of the density wave. The shock has an amplitude of approximately 20 km s$`^{-1}`$. In this framework the main ridge of Perseus arm emission seen from –40 km s$`^{-1}`$ to –50 km s$`^{-1}`$, depending on the longitude (c.f. Figure 6 of Normandeau et al. 1997), has been displaced from circular velocity by approximately 20 km s$`^{-1}`$. In this context the absorption gap in the W3 spectrum at –30 km s$`^{-1}`$ would be due to the undisturbed gas behind W3, the HII region itself being within or just past the layer of shocked gas at $`-40`$ km s$`^{-1}`$ (scenario c of the corresponding figure).
This is reminiscent of the streaming motion seen in other spiral galaxies, e.g. M83 (Lord & Kenney 1991) and M51 (Rand 1993 and references therein). In M51, the strong density wave has concentrated both the diffuse and dense gas along the inner edge of the spiral arm, coincident with the dust lane, in the collision front. The velocity shifts are quite large, as much as 60–90 km s$`^{-1}`$ in the plane of the galaxy (Tilanus & Allen 1991). The density wave is much weaker in M83, resulting in less pronounced streaming motions ($`\sim `$12 km s$`^{-1}`$ perpendicular to the spiral arm and $`\sim `$0 km s$`^{-1}`$ parallel to it; Lord & Kenney 1991). The dust lane again lies along the inner edge of the spiral arm but in this case the molecular ridge is offset, some 300 pc downstream. Lord & Kenney speculate that the diffuse gas is compressed at the shock front, producing the dust lane, but that the molecular clouds pass through the front to form a broad distribution in the arm.
The Perseus arm lies somewhere between these two cases: the 20 km s$`^{-1}`$ offset is essentially perpendicular to the spiral arm, implying a stronger shock than in M83 but weaker than in M51 where the perpendicular component is approximately 64 km s$`^{-1}`$ (Lord & Kenney 1991). Heyer & Terebey (1998) studied the CO and infrared emission in the W3/W4/W5 region and found the bulk of the CO emission to be at velocities near –45 km s$`^{-1}`$. They estimate that the minimum transit time for the interarm region requires an exceedingly long cloud lifetime (3–6 $`\times `$ 10<sup>9</sup> yr), and, when combined with the arm-interarm contrast which they measure to be 28:1 for CO, this implies that the gas which enters the spiral arms is in the atomic phase. All this is more akin to the situation in M51 than in M83, with a pile-up of molecular material along the shock front, though the shock is weaker for the Perseus arm than in M51. It should be pointed out, however, that Heyer & Terebey discounted the possibility that the gas at –40 km s$`^{-1}`$ was in fact showing streaming motion.
Frail & Hjellming (1991) presented an absorption spectrum towards LSI+61$`\mathrm{°}`$303, a Perseus arm object located just east of W4. It is very similar in appearance to the one presented here for W3, and they also called upon the TASS model to explain their observations. A large scale phenomenon such as a spiral density wave seems the most likely explanation for the similar velocity displacement of gas separated by some 78 pc (assuming a distance of 2.3 kpc). Frail & Hjellming also show a spectrum toward an extragalactic source, BG 0237+61, which is 15 arcmin from LSI+61$`\mathrm{°}`$303, where there is absorption in the –20 km s$`^{-1}`$ to –40 km s$`^{-1}`$ velocity interval; this argues against the possibility outlined in §3.2.1. For these reasons, the explanation involving the TASS model is the one favoured here.
## 4 Discussion of optical depth and spin temperature
### 4.1 Absorption near 0 km s$`^{-1}`$
Of the HI studies towards W3 mentioned in the introduction, only the earlier ones cover a wide-enough velocity range to include the absorption by Local gas. Crovisier et al. found that the optical depths and widths of the troughs for W3 N and W3 “main” were comparable. The present data show slight differences. The optical depth towards W3-W rises and falls smoothly, attaining a maximum of $`0.66\pm 0.08`$ at $`-0.53\text{ km s}^{-1}`$, whereas towards W3-N there is a brief plateau, extending over some 8 km s$`^{-1}`$, at a level of 0.6–0.7. Read also found a peak optical depth of $`\sim `$0.6. Within the two subregions there is little variation.
The spin temperature towards W3-N attains slightly lower values than towards W3-W and remains at this level over a wider velocity interval, corresponding to the 8 km s$`^{-1}`$ plateau mentioned above. Both sight-lines show slightly lower $`T_s`$ (as low as $`99\pm 19`$ K towards W3-N and $`115\pm 20`$ K towards W3-W) than found by Wendker & Wrigge towards DR 7 (generally around 140 K). Considering the uncertainties, the discrepancy is not large. Nonetheless a possible explanation for this could be that the presence of relatively small, colder HI “clumps” would have a much greater impact on the spin temperatures measured towards W3 than towards DR 7. This is because for each channel only a mean brightness temperature is measured and, due to Galactic rotation, the same channel width (velocity width) corresponds to a greater path length for the DR 7 case than for W3. For DR 7 sight-lines each channel would therefore include contributions from much warm gas as well as from the postulated small, cold clumps. As an illustration, for $`v_{\mathrm{LSR}}=-10\text{ km s}^{-1}`$ and assuming a flat rotation curve with an Oort constant $`A=14\text{ km s}^{-1}\mathrm{kpc}^{-1}`$ and R<sub>0</sub> = 8.5 kpc, one finds d<sub>kin</sub> = 4.5 kpc for the longitude of DR 7, but only 0.7 kpc for sight-lines towards W3<sup>1</sup><sup>1</sup>1Kinematic distances can be calculated for this local gas because it is unaffected by the Perseus streaming motions..
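The kinematic distances just quoted can be reproduced with a short numerical sketch. It assumes the flat rotation curve stated above (so that $`\mathrm{\Theta }_0=2AR_0`$) and approximate Galactic longitudes of $`l\approx 134\mathrm{°}`$ for W3 and $`l\approx 80\mathrm{°}`$ for DR 7; the longitudes are rounded values adopted here for illustration.

```python
import numpy as np
from scipy.optimize import brentq

A, R0 = 14.0, 8.5              # Oort constant (km/s/kpc) and R_sun (kpc), as in the text
THETA0 = 2.0 * A * R0          # flat rotation curve: A = Theta0 / (2 R0)

def v_lsr(d, l_deg):
    """LSR velocity (km/s) at distance d (kpc) along Galactic longitude l."""
    l = np.radians(l_deg)
    R = np.sqrt(R0**2 + d**2 - 2.0 * R0 * d * np.cos(l))
    return THETA0 * (R0 / R - 1.0) * np.sin(l)

def d_kin(v, l_deg, d_max=15.0):
    """Nearest kinematic distance (kpc) for a given v_LSR, ignoring streaming motions."""
    return brentq(lambda d: v_lsr(d, l_deg) - v, 1e-6, d_max)

print(d_kin(-10.0, 133.7))     # ~0.7 kpc towards W3 (l ~ 134 deg assumed)
print(d_kin(-10.0, 80.0))      # ~4.5 kpc towards DR 7 (l ~ 80 deg assumed)
```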
### 4.2 Between the 0 km s$`^{-1}`$ and –20 km s$`^{-1}`$ absorption troughs
In the interval between the troughs at 0 and –20 km s$`^{-1}`$, the optical depth plots indicate that there is some absorption, though not to the extent seen in the deep troughs. The optical depth calculations reach minima of $`0.14\pm 0.03`$ towards W3-W and $`0.08\pm 0.04`$ towards W3-N. It is reasonable to wonder if this is truly due to interarm absorption or if it is simply attributable to the overlap of wings from the distributions corresponding to the 0 and –20 km s$`^{-1}`$ troughs. Fitting a Gaussian to each of these shows that their wings cannot account for the amount of absorption seen at intervening velocities.
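The wing-overlap test can be made concrete with a simple two-Gaussian fit; the synthetic $`\tau (v)`$ below (two troughs plus a weak interarm component) and all initial guesses are illustrative stand-ins for the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, a, v0, s):
    return a * np.exp(-0.5 * ((v - v0) / s) ** 2)

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    return gauss(v, a1, v1, s1) + gauss(v, a2, v2, s2)

# synthetic stand-in for a measured tau(v): troughs near 0 and -20 km/s
# plus a weak interarm component (all numbers illustrative)
v = np.linspace(-35.0, 10.0, 200)
tau = (two_gaussians(v, 0.66, -0.5, 2.5, 1.6, -20.3, 2.5)
       + gauss(v, 0.12, -11.0, 3.0))

p0 = (0.7, 0.0, 3.0, 1.5, -20.0, 3.0)      # guesses for the two main troughs
popt, _ = curve_fit(two_gaussians, v, tau, p0=p0)

mask = (v > -15.0) & (v < -7.0)            # intervening velocities
excess = tau[mask] - two_gaussians(v[mask], *popt)
print(excess.max())                        # absorption the trough wings cannot explain
```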
For both sight-lines, the spin temperature increases upon leaving the Local Arm. The rise and fall of $`T_s`$ in this interarm region is fairly smooth towards W3-W, reaching $`286\pm 79`$ K when using the average absorption spectrum. Towards W3-N the interarm spin temperature shows a double peak, though it is smooth within uncertainties.
The above temperatures are, of course, weighted mean values for a given channel. They correspond to what Kulkarni & Heiles (1988) dubbed the naively derived spin temperature. For a single, isothermal cloud, it is equal to that cloud’s spin temperature; however, for the more complicated and realistic case where there are contributions from two optically thin components, it becomes the column-density-weighted harmonic mean temperature. Assuming the canonical values of 80 K and 8 000 K for the Cold Neutral Medium (CNM) and the Warm Neutral Medium (WNM) respectively (Kulkarni & Heiles 1988) and using 300 K as the value for the weighted harmonic mean spin temperature, one derives a column density that is three times as high for the WNM as for the CNM. From the measured values of the spin temperature and the optical depth in the –8.78 km s$`^{-1}`$ to –13.72 km s$`^{-1}`$ velocity interval, the total column density of the HI is $`3.4\times 10^{20}`$ cm<sup>-2</sup>, implying a column density of $`8.4\times 10^{19}`$ cm<sup>-2</sup> for the CNM and $`2.5\times 10^{20}`$ cm<sup>-2</sup> for the WNM with the above assumptions. If one then uses as a path length the difference between the kinematic distances associated with the velocities given above (0.63 kpc and 0.96 kpc), one finds mean densities of 0.08 cm<sup>-3</sup> for the CNM and 0.24 cm<sup>-3</sup> for the WNM. The ISM pressure in the plane is thought to be approximately $`\frac{P}{k}\sim 4000`$ cm<sup>-3</sup> K (Kulkarni & Heiles 1988), implying $`n_{\mathrm{W}\mathrm{N}\mathrm{M}}=0.5`$ cm<sup>-3</sup> and $`n_{\mathrm{C}\mathrm{N}\mathrm{M}}=50`$ cm<sup>-3</sup>. Taking the ratio of the mean densities for the interarm velocities considered here and the expected densities in the plane, one finds volume filling factors of $`\sim `$50% for the WNM and $`\sim 0.2\%`$ for the CNM. The WNM filling factor is comparable to the value of $`\sim `$40% which Carilli et al. (1998) found for the average of two interarm regions along a line of sight towards Cyg A, though using only their data for the interarm region between the local gas and the Perseus arm results in a somewhat lower value, closer to 30%.
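A compact numerical version of this two-phase bookkeeping, using only the assumptions just stated (the kpc-to-cm conversion is the one added ingredient), is the following:

```python
# two-phase decomposition of the naively derived spin temperature
T_CNM, T_WNM = 80.0, 8000.0          # canonical temperatures (K), Kulkarni & Heiles 1988
T_naive = 300.0                      # column-weighted harmonic-mean T_s (K)
N_tot = 3.4e20                       # total HI column density (cm^-2)

# 1/T_naive = f/T_CNM + (1 - f)/T_WNM, with f the CNM column fraction
f_cnm = (1.0 / T_naive - 1.0 / T_WNM) / (1.0 / T_CNM - 1.0 / T_WNM)
N_cnm, N_wnm = f_cnm * N_tot, (1.0 - f_cnm) * N_tot   # ~0.8e20 and ~2.5e20 cm^-2

L = (0.96 - 0.63) * 3.086e21         # interarm path length (kpc -> cm)
n_cnm, n_wnm = N_cnm / L, N_wnm / L                   # ~0.08 and ~0.24 cm^-3

P_k = 4000.0                         # assumed ISM pressure, P/k (cm^-3 K)
print(n_wnm / (P_k / T_WNM))         # WNM filling factor, ~0.5
print(n_cnm / (P_k / T_CNM))         # CNM filling factor, ~0.002
```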
The CNM and WNM filling factors calculated above are very uncertain, relying as they do on many assumptions. The temperatures adopted for the two components may be incorrect. Carilli et al. measured temperatures for the WNM down to 4800$`\pm `$1600 K. While using this lower value would not noticeably change the mean densities derived for the two components in the interarm region, it would greatly affect the expected density for the plane and would imply a filling factor of $`\sim `$30% for the WNM. The CNM filling factor would, of course, be unaffected. The temperature chosen for the CNM has a greater impact on the implied relative column densities of the two phases; for $`T_{\mathrm{C}\mathrm{N}\mathrm{M}}=20`$ K, the filling factor of the WNM becomes $`\sim `$60%, while for the CNM it is then below 0.1%. Deriving path lengths from the kinematic distances can only give rough estimates. Small differences in this number will result in widely differing estimates of the filling factor for the WNM ($`\sim `$30% for $`\ell =0.5`$ kpc and $`\sim `$80% for $`\ell =0.2`$ kpc), though the CNM filling factor remains at the 0.1–0.2% level. Clearly, the greatest uncertainty comes from the densities assumed for the two components in the plane. These are affected by the temperatures as noted above, but also depend on the assumption that the phases of the ISM are in pressure equilibrium<sup>2</sup><sup>2</sup>2The expression for the pressure was derived for the Warm Ionized Medium and can only be applied to other components if there is pressure equilibrium., as well as on the reliability of the equation. The quantitative results quoted in the preceding paragraph are therefore not to be blindly trusted, particularly for the WNM; however, the calculations clearly and reliably show that the CNM occupies a negligible fraction of the interarm region. Nonetheless this small amount of cold gas dominates the absorption signal.
### 4.3 Absorption near –20 km s$`^{-1}`$
Crovisier et al. interpreted their observations at these velocities as due to the presence of a large inhomogeneous cloud at a distance of approximately 1.5 kpc. In the context of the TASS model favoured here, the difference between sight-lines towards W3-N and W3-W is due to inhomogeneities in the gas on the near side of the Perseus Arm.
Towards W3-N the average optical depth rises and falls quickly, reaching only $`0.49\pm 0.07`$ at –20.32 km s$`^{-1}`$. For the W3-W line-of-sight, the rise is more gradual than the decline and the maximum attained is much greater, $`1.66\pm 0.26`$ at –20.32 km s$`^{-1}`$. Read found a peak optical depth of $`\sim `$1.4 for this trough, in agreement with the value above. Accordingly, sight-lines towards W3-W reach lower spin temperatures ($`69\pm 13`$ K for the average profile) than do those towards W3-N ($`119\pm 23`$ K for the average profile).
The optical depth is fairly uniform over the individual regions. A few anomalously high values (up to 2.6) do result from the calculations but these are correlated with steep gradients in the continuum image and are therefore likely to be artefacts. Two factors which could have affected these pixels are: 1) slight misalignments of the continuum and spectral line images, which would have a great impact on calculations in these steep gradient regions, are not impossible as the continuum image was self-calibrated; 2) the gridding of the continuum image and of the spectral cube were done separately and both have been regridded, a procedure which could have caused mismatches between the two data sets in steep gradient regions. Because the affected region is very near the field centre, bandwidth smearing is not a likely cause of these errors.
### 4.4 Absorption near –40 km s$`^{-1}`$
As indicated in the introduction, the absorption at velocities near –40 km s$`^{-1}`$ has been studied with the WSRT at higher spatial and spectral resolutions (Goss et al. 1983, and van der Werf & Goss 1990). A new detailed analysis of the HI associated with the various compact HII regions using the DRAO data is therefore not warranted, but a few aspects shall be addressed and discussed.
For the W3-N region, the lower limit for the maximum value using the average spectrum is 1.6. The optical depth varies little over the region; for the pixel by pixel calculations, a lower limit of 2.9 on the maximum optical depth was found at a velocity of –40.11 km s$`^{-1}`$.
For this region Goss et al. calculated values up to 2.5 whereas Read found $`\tau `$ up to $`\sim `$8. The former are inconsistent with the present data. The latter may be overestimated due to poor continuum subtraction leading to an underestimate of $`T_{\mathrm{b},\mathrm{o}\mathrm{n}}`$ (Read did not have channels perfectly devoid of continuum at his disposal for continuum subtraction and estimated that this may have resulted in a 10 K oversubtraction). It is possible that the Westerbork results have underestimated the optical depth. The WSRT has a shortest baseline of 36 m and, as a result, the largest scale structure that can be imaged is $`\sim `$10 arcmin in extent. This filtering property of the array highlights small scale knots, and therefore would not affect the absorption spectra because the structure of the absorbing sources is on scales smaller than 10 arcmin (c.f. the continuum images). If all the HI emission was on scales greater than 10 arcmin then there would be no surrounding HI emission which would need to be accounted for through an off-source spectrum. However if the interferometer has not filtered out all of the HI emission, then neglecting it, as did Goss et al., will result in an underestimate of $`\tau `$. The W3-N opacity images published by van der Werf & Goss show values of $`\sim 4`$; their calculations were done differently than those of Goss et al., circumventing the possible difficulty with $`T_{\mathrm{b},\mathrm{o}\mathrm{f}\mathrm{f}}`$ (see van der Werf & Goss 1989 for details of the method).
It should also be noted that for these velocities, it is likely that the optical depths derived here are also underestimated. This is because there is probably local HI associated with the W3 region itself which will not be accounted for in the “off” spectrum. The latter will therefore contain less HI emission at these velocities than would a spectrum towards the source if there were no absorption.
Spatial variations are much more marked in W3-W which, contrary to W3-N, is made up of several distinct HII regions. In particular, the highest values are attained towards W3 core (lower limit of 4.7 at –38.46 km s$`^{-1}`$ whereas the lower limit for the maximum from the average spectrum is 2.4), and lesser but marked enhancements are seen towards W3 K and W3 J as well as in the southern section of NGC 896. Again Goss et al. obtained slightly lower values, with a maximum of 4.0 being detected towards W3-A and W3-B. However van der Werf & Goss found $`\tau >5.0`$ towards all continuum sources and explained the discrepancy in terms of beam dilution; this reconciles the two Westerbork data sets, though beam dilution should cause the optical depth evaluated with the DRAO data to be even lower. As for Read, once again he quotes a higher value (lower limit of 9.0), but the same warning applies as for W3-N.
The average spin temperature is fairly constant over the entire velocity range of the absorption trough, for both W3-N and W3-W, at somewhat less than 100 K. This temperature is consistent with expected values for cores of Giant Molecular Clouds (20 – 100 K, Turner 1988).
## 5 Summary and conclusions
HI spectra towards W3-N and W3-W for velocities ranging from +55 km s$`^{-1}`$ to –154 km s$`^{-1}`$ have been presented. These were combined with an average spectrum for nearby positions to calculate the optical depth and spin temperature for velocities where there was absorption of the continuum emission.
There is a lack of absorption around –30 km s$`^{-1}`$ which may be indicative of temperatures in excess of $`10^4`$ K in the interarm region or which might correspond to gas behind the W3 region even though there is absorption out to –50 km s$`^{-1}`$. In the latter case, it could be that the –30 km s$`^{-1}`$ gas has been displaced by $`19\text{ km s}^{-1}`$ from the standard rotation curve, or that it is the gas showing absorption near –40 km s$`^{-1}`$ that has been accelerated by a spiral shock in accordance with the Two Armed Spiral Shock model (Roberts 1972). Considering that Frail & Hjellming (1991) observe a very similar spectrum towards LSI+61$`\mathrm{°}`$303, a source east of W4, the explanation wherein the –30 km s$`^{-1}`$ gas has been displaced seems the least probable. The explanation involving the TASS model is favoured here because of the unlikely absence of cold HI over the velocity span of the –30 km s$`^{-1}`$ feature.
For the Local Arm, the sight-lines presented here yield lower spin temperature values than reported by Wendker & Wrigge (1996) towards DR 7. This discrepancy may be due to the longer path length corresponding to each channel width for the earlier study. Additional investigations of this nature, towards other Galactic plane HII regions seen in the CGPS data, will clarify the matter. For the interarm region, values on the order of 300 K are found, from which one can estimate volume filling factors of $`\sim `$50% for the WNM and $`<1\%`$ for the CNM; the calculations require many assumptions and the number quoted for the warm HI cannot be said to be reliable, but the result for the CNM filling factor is quite robust. The –20 km s$`^{-1}`$ absorption trough which shows lower temperatures is part of the Perseus arm in the TASS model, not the interarm region.
The study of this second line of sight towards a Galactic plane HII region, following on work by Wendker & Wrigge towards DR7, confirms the usefulness of such studies both in determining characteristics of the ISM and in examining elements of Galactic structure. The many HII regions within the 73° longitude span of the CGPS should help us map the temperature and optical depth in the Galaxy.
The author is grateful to H.J. Wendker and C. Heiles for useful comments on previous drafts, as well as to an anonymous referee for comments helpful in the preparation of the final manuscript. The Dominion Radio Astrophysical Observatory’s synthesis telescope is operated by the National Research Council of Canada as a national facility. The Canadian Galactic Plane Survey is a Canadian project with international partners, and is supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
# Current self-oscillations, spikes and crossover between charge monopole and dipole waves in semiconductor superlattices
## Abstract
Self-sustained current oscillations in weakly-coupled superlattices are studied by means of a self-consistent microscopic model of sequential tunneling which incorporates boundary conditions naturally. Well-to-well hopping and recycling of charge monopole domain walls produce current spikes –high frequency modulation– superimposed on the oscillation. For highly doped injecting contacts, the self-oscillations are due to the dynamics of monopoles. As the contact doping decreases, a lower-frequency oscillatory mode due to recycling and motion of charge dipoles is predicted. For low contact doping, this mode dominates and monopole oscillations disappear. At intermediate doping, both oscillation modes coexist as stable solutions and hysteresis between them is possible.
73.40.Gk, 73.50.Fq, 73.50.Mx
Solid state electronic devices presenting negative differential conductance, such as resonant tunneling diodes, Gunn diodes or Josephson junctions , are nonlinear dynamical systems with many degrees of freedom. They display typical nonlinear phenomena such as multistability, oscillations, pattern formation or bifurcation to chaos. In particular, vertical transport in weakly coupled semiconductor doped superlattices (SLs) has been shown to exhibit electric field domain formation , multistability , self-sustained current oscillations , and driven and undriven chaos . Stationary electric field domains appear in voltage biased SLs if the doping is large enough . When the carrier density is below a critical value, self-sustained oscillations of the current may appear. They are due to the dynamics of the domain wall (which is a charge monopole accumulation layer or, briefly, a monopole) separating the electric field domains. This domain wall moves through the structure and is periodically recycled. The frequencies of the corresponding oscillation depend on the applied bias and range from the kHz to the GHz regime. Self-oscillations persist even at room temperature, which makes these devices promising candidates for microwave generation . Theoretical and experimental work on these systems has gone hand in hand. Thus the paramount role of monopole dynamics has been demonstrated by theory and experiments. Monopole motion and recycling can be experimentally shown by counting the spikes –high frequency modulation– superimposed on one period of the current self-oscillations: current spikes correspond to well-to-well hopping of a domain wall through the SL. In typical experiments the number of spikes per oscillation period is clearly less than the number of SL wells . It is known that monopoles are nucleated well inside the SL so that the number of spikes tells over which part of the SL they move. Other possible waves, such as the charge dipole waves appearing in the well-known Gunn effect, are nucleated at the emitter contact . Had they been mediating the self-oscillation, the number of current spikes would be comparable to that of SL wells.
In this letter we study the non-linear dynamics of SLs by numerically simulating the model proposed in Ref. . Our simulations show self-sustained oscillations of the current and current spikes reflecting the motion of the domain wall as observed experimentally. Furthermore, when contact doping is diminished, we predict a crossover from monopole to dipole self-oscillations resembling those in the Gunn effect . Indeed, our results show for the first time that there is an intermediate range of contact doping and a certain interval of external dc voltage for which monopole and dipole self-oscillations with different frequencies are both stable. Hysteretic phenomena then exist.
1. Model and superlattice sample. Our self-consistent microscopic model of sequential tunneling includes a detailed electrostatic description of the contact regions and SL . It consists of a system of $`3N+8`$ equations for the Fermi energies and potential drops at the $`N`$ wells, the potential drops at the barriers and at the emitter and contact layers, the widths thereof, the charge at the emitter and the total current density. These equations comprise Ampère current density balance and Poisson equations, conservation of the global charge, and the overall voltage bias condition. Dynamics enters the model through Ampère’s law for the total current density $`J=J(t)`$,
$`J=J_{i-1,i}+{\displaystyle \frac{ϵ}{d}}{\displaystyle \frac{dV_i}{dt}},`$ (1)
which is equivalent to a local charge continuity equation . Here $`J_{i-1,i}`$ is the tunneling current density through the $`i`$th barrier of thickness $`d`$, evaluated by using the Transfer Hamiltonian approach . The last term in (1) is the displacement current at the $`i`$th barrier where the potential drop is $`V_i`$ and $`ϵ`$ is the static permittivity.
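To make the role of Eq. (1) concrete, note that summing it over all barriers with the bias condition $`_iV_i=V_{\mathrm{bias}}`$ held fixed gives $`J(t)`$ as the average of the tunneling currents. The toy sketch below integrates exactly this structure for a phenomenological N-shaped local characteristic; all names and parameter values are illustrative, and the full model additionally couples the wells through Poisson’s equation and the microscopic tunneling currents.

```python
import numpy as np

NB, eps_over_d, V_bias = 51, 1.0, 10.0        # barriers, eps/d, dc bias (arbitrary units)

def j_tunnel(V):
    # phenomenological N-shaped current-field curve (illustrative only)
    return V / (1.0 + V**2) + 0.05 * V

def step(V, dt):
    J = np.mean(j_tunnel(V))                  # total current from the bias constraint
    dV = dt * (J - j_tunnel(V)) / eps_over_d  # Eq. (1) solved for dV_i/dt at each barrier
    return V + dV, J

V = np.full(NB, V_bias / NB)
V[:2] += [0.2, -0.2]                          # seed an inhomogeneity; the bias is conserved
J_trace = []
for _ in range(20000):
    V, J = step(V, dt=0.01)
    J_trace.append(J)                         # J(t), to inspect for oscillations and spikes
```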
Our numerical simulations (of the $`3N+8`$ coupled equations) have been performed for a 13.3 nm GaAs/2.7 nm AlAs SL at zero temperature consisting of 50 wells and 51 barriers, as described in . The doping densities in the wells and in the contacts are $`N_w=2\times 10^{10}`$ cm<sup>-2</sup> and $`N_c=2\times 10^{16}`$ cm<sup>-3</sup> respectively. Notice that the typical experimental value is $`N_c=2\times 10^{18}`$ cm<sup>-3</sup> . For this value, we find current self-oscillations due to monopole dynamics with very small superimposed current spikes. Since the origin of such spikes is the same as for smaller $`N_c`$ (for which spikes are larger and bistability of oscillations is possible), we choose not to present data comparable to experiments in this paper (see Ref. for the relevant experimental data).
2. Monopole-mediated self-oscillations of the current. Fig. 1(a) depicts the current as a function of time for a dc bias voltage of 5.5 V on the second plateau of the SL $`I`$–$`V`$ characteristic curve. $`J(t)`$ oscillates periodically at 20 MHz. Between each two peaks of $`J(t)`$, we observe 18 additional spikes. The electric field profile is plotted in Fig. 1(b) at the four different times of one oscillation period marked in Fig. 1(a). There are two domains of almost constant electric field separated by a moving domain wall of (monopole) charge accumulation (which is extended over a few wells). Monopole recycling and motion occur on a limited region of the SL (between the 30th and the 50th well) and accompany the current oscillation . Well-to-well hopping of the domain wall is reflected by the current spikes until it reaches the 46th well which is close to the collector. Then, owing to the strong influence of the contact, no additional spikes appear. Instead the current rises sharply triggering the formation of a new monopole closer to the emitter contact but well inside the SL; see Figs. 1(a) and (b). The number of wells traversed by the domain wall (almost) coincides with the number of spikes per oscillation period, a feature not found in previous models. Fig. 1(b) shows the recycling of a monopole: between times (1) and (3) there is a single monopole propagating towards the collector; at (4) a new monopole is generated at the middle of the structure and the old one collapses at the collector. It is interesting to realize that the region near the emitter does not have a constant electric field profile due to the large doping there (its Fermi level is well above the first resonant level of the first well). This produces a large accumulation layer.
3. Current spikes. What is remarkable in Fig. 1(a) (as compared to previous studies) are the spikes superimposed near the minima of the current oscillations. Such spikes have been observed experimentally and attributed to well-to-well hopping of the domain wall . They are a cornerstone for interpreting the experimental results and in fact support the theoretical picture of monopole recycling in part (about $`40\%`$) of the SL during self-oscillations. The identification between the number of spikes and the number of wells traversed by the monopole rests on voltage turn-on measurements supported by numerical simulations of simple models during early stages of stationary domain formation . These models do not predict spikes superimposed on current self-oscillations due to monopole motion . To predict large spikes, a time delay in the tunneling current or random doping in the wells has to be added. Unlike these models, ours reproduces and explains spikes naturally thereby supporting their use to interpret experimental results.
Fig. 2(a) depicts a zoom of the spikes in Fig. 1(a). They have a frequency of about 500 MHz and an amplitude of 2.5 $`\mu `$A. Fig. 2(b) shows the charge density profile at four different times of a current spike marked in Fig. 2(a). Notice that the electron density in Fig. 2(b) is larger than the well doping at only three wells (40, 41 and 42) during the times recorded in Fig. 2(a). The maximum of electron density moves from well 40 to well 41 during this time interval so that: (i) tunneling through the 41st barrier (between wells 40 and 41) dominates when the total current density is increasing, whereas (ii) tunneling through barriers 41 and 42 is important when $`J(t)`$ decreases. The contributions of tunneling and displacement currents to $`J(t)`$ in Eq. (1) are depicted in Figures 2(c) and (d).
More generally, the spikes reflect the two-stage hopping motion –fast time scale– of the domain wall: at time (1) (minimum of the current), the charge accumulates mainly at the i-th well. As time elapses, electrons tunnel from this well to the next one, the (i+1)-st, where most of the charge is located at time (3) (maximum of the current). This corresponds to a hop of the monopole. As the monopole moves, it leaves a lower potential drop in its wake. The reason is that the electrostatic field at the (i+1)-st well and barrier become abruptly flat between times (1) and (3), as they pass from the high to the low field domain. This means that a negative displacement current has its peak at the (i+1)-st barrier, near the wells where most of the charge is. Between times (1) and (3), the tunneling current is maximal where the displacement current is minimal and the total current increases. After that, some charge flows to the next well \[time (4)\] but both the tunneling and displacement currents are smaller than previously. This occurs because the potential drop at barrier (i+2) (in the high field domain) is larger than at barrier (i+1). There is then a smaller overlap between the resonant levels of nearby wells, so the tunneling current decreases, followed by the displacement current and, eventually, $`J(t)`$ itself. This stage lasts until well i is drained, and most of the charge is concentrated at wells (i+1) (the local maximum of charge) and (i+2) (slightly smaller charge). Then the next current spike starts.
4. Dipole self-oscillations of the current. An advantage of our present model over other discrete ones is our microscopic modeling of boundary conditions at the contact regions. Thus we can study what happens when contact doping is changed. The result is that there appear dipole-mediated self-oscillations as the emitter doping is lowered below a certain value. There is a range of voltages for which dipole and monopole oscillations coexist as stable solutions. This range changes for different plateaus. When the emitter doping is further lowered, only the dipole self-oscillations remain. Fig.3 presents data in the crossover range (below $`N_c=4.1\times 10^{16}`$cm<sup>-3</sup> and above $`N_c=1.7\times 10^{16}`$cm<sup>-3</sup> for the second plateau), for the same sample, doping and bias as in Figs. 1 and 2. Except for the presence of spikes of the current, dipole recycling and motion in SLs are similar to those observed in models of the Gunn effect in bulk GaAs . These self-oscillations have not been observed so far in experiments due to the high values of the contact doping adopted in all the present experimental settings. Notice that current spikes appear differently than in the monopole case, Fig. 1(a). The main difference is that now there are many more current spikes, 36, for the dipoles recycle at the emitter and traverse the whole SL. See Figs. 3(b) and (c). Charge transfer and balance between tunneling and displacement current during a spike are similar to those occurring in monopole oscillations. For a simpler model the velocity of a charge accumulation layer (belonging to a monopole or a dipole) has been shown to approximately obey an equal area rule. Then monopole and dipole velocities are similar but a monopole traverses a smaller part of the SL than a dipole does. Therefore dipole oscillations have a lower frequency than monopole ones. Our results agree with this: the frequency of the dipole oscillations discussed above is about 8 MHz, 40% the frequency of monopole oscillations.
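A rough consistency check: if monopoles and dipoles move at comparable velocities, the oscillation frequency should scale inversely with the distance each wave traverses,

$$\frac{f_{\mathrm{dip}}}{f_{\mathrm{mon}}}\approx \frac{\ell _{\mathrm{mon}}}{\ell _{\mathrm{dip}}}\approx \frac{20\text{ wells}}{50\text{ wells}}=0.4,$$

in accord with the computed ratio of 8 MHz to 20 MHz.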
Dipole self-oscillations have also been predicted to occur in weakly-coupled SLs as the result of assuming a linear current–field relation at the injecting contact in a simpler model . Since such an ad hoc boundary condition has no clear relation to contact doping, no crossover between different oscillation types could appear in that work.
5. Multistability. Monopole and dipole waves coexist in both the first and the second plateaus. The time-averaged current as a function of dc voltage in the first plateau (whose crossover range is below $`N_c=2.1\times 10^{16}`$ cm<sup>-3</sup> and above $`N_c=1.5\times 10^{16}`$ cm<sup>-3</sup>) has been plotted in Fig. 4. Notice that the average current of dipole oscillations is lower than that of monopole oscillations. Previous studies for Gunn oscillations found that large dipole waves appear only for small current values, whereas monopole recycling requires current values near the maximum of the current-field characteristic curve. Let us start at a bias of 0.5 V (for which the stationary state is stable) and adiabatically increase the voltage. The result is that we go smoothly from the stationary state to the fast monopole self-oscillation at about $`1.3`$ V. This branch of oscillatory states eventually disappears at about 2.6 V. If we now adiabatically lower the bias, we reach a slow dipole self-oscillation at about 2.4 V. There is a small hysteresis loop between dipole oscillations and the stationary state between 2.4 V and 2.6 V: the former may start as a subcritical Hopf bifurcation. At about 0.8 V the dipole oscillation disappears and we are back at the stable stationary state. We therefore find the hysteresis loops marked by arrows in Fig. 4.
In conclusion, we have dealt with self-sustained oscillations of the current in SLs whose main mechanism is sequential tunneling. Depending on contact doping, these oscillations may be due to recycling and motion of two different charge density waves: monopoles and dipoles. Experimentally, only the monopole oscillations have been observed, since the contact doping is usually set to values which are too high. The dipole-like oscillations could be observed by constructing samples with lower doping at the contacts. In fact, as the doping of the contacts is reduced, we predict current oscillations due to dipole charge waves. The crossover between both types of self-oscillations occurs at intermediate emitter doping values for which stable monopole and dipole oscillations coexist. Then the diagram of average current versus voltage is multivalued, presenting hysteresis cycles and multistability between monopole and dipole oscillations (and between oscillatory and stationary states). The time-resolved current in the oscillatory modes presents a number of sharp spikes. They occur because well-to-well hopping of charge accumulation layers occurs in two stages: during the stage where the current rises, charge is mainly transferred through a single barrier. The charge is transferred through two adjacent barriers at the stage in which the current decreases. All these properties form the basis for possible applications of SLs working as multifrequency oscillators in a wide range of frequencies. Quantitative description of such multifrequency oscillators requires calculation of typical output power characteristics and noise levels. This is the purpose of a future work.
Acknowledgments. We thank Rosa López for helpful discussions. This work has been supported by the DGES (Spain) grants PB97-0088, PB95-1203 and PB96-0875, by the European Union TMR contracts ERB FMBX-CT97-0157 and FMRX-CT98-0180 and by the Community of Madrid, project 07N/0026/1998.
# X-ray photoemission characterization of La0.67(CaxSr1-x)0.33MnO3 films
## I Introduction
The effect of dopants on the photoemission from the lanthanum based manganese oxides has been an area of intense research activity, but usually the studies have focused on varying the Mn<sup>3+</sup>/Mn<sup>4+</sup> ratio by changing the trivalent/divalent ion ratio. While a great deal of work on studying magnetic, transport and structural properties of these compounds has looked at fixing the Mn<sup>3+</sup>/Mn<sup>4+</sup> ratio and varying the size of the dopant atoms, little work has been done on photoemission studies. In this paper we present an in situ x-ray photoemission spectroscopy (XPS) study of thin films of La<sub>0.67</sub>(Ca<sub>x</sub>Sr<sub>1-x</sub>)<sub>0.33</sub>MnO<sub>3</sub> (LCSMO) where the Mn<sup>3+</sup>/Mn<sup>4+</sup> ratio is fixed while the tolerance factor, which is defined as $`t=d_{\mathrm{La-O}}/(\sqrt{2}\,d_{\mathrm{Mn-O}})`$, is varied by the replacement of Ca with Sr. A similar system was studied by Hwang et al., where the variations in the resistivity peak and Curie temperatures for bulk samples of the CMR materials A<sub>0.7</sub>A’<sub>0.3</sub>MnO<sub>3</sub> (where A is a trivalent rare earth ion and A’ is a divalent rare earth ion) were related to the variation in the tolerance factor. In that work, the system La<sub>0.7</sub>(Ca<sub>x</sub>Sr<sub>1-x</sub>)<sub>0.3</sub>MnO<sub>3</sub> was studied, and exhibited a change in $`t`$ from $`\sim `$ 0.915 to 0.93 and Curie temperature from 250 to 365 K as x went from 1 to 0.
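As a rough illustration of how $`t`$ responds to the Ca/Sr substitution, the sketch below evaluates the tolerance factor from tabulated Shannon ionic radii. The radii (and therefore the absolute scale of $`t`$, which depends on the coordination numbers adopted) are assumptions made here for illustration; only the trend with x should be read off.

```python
# Goldschmidt tolerance factor from ionic radii (angstroms); Shannon values
# for one particular choice of coordination numbers -- illustrative only
r_O = 1.40                       # O^2-
r_La = 1.36                      # La^3+ (XII)
r_Ca, r_Sr = 1.34, 1.44          # Ca^2+ (XII), Sr^2+ (XII)
r_Mn3, r_Mn4 = 0.645, 0.530      # Mn^3+, Mn^4+ (VI)

def tolerance(x, y=0.33):
    """t for La_{1-y}(Ca_x Sr_{1-x})_y MnO3; the Mn3+/Mn4+ ratio is fixed by y."""
    r_A = (1.0 - y) * r_La + y * (x * r_Ca + (1.0 - x) * r_Sr)
    r_B = (1.0 - y) * r_Mn3 + y * r_Mn4
    return (r_A + r_O) / (2.0 ** 0.5 * (r_B + r_O))

for x in (0.0, 0.5, 1.0):
    print(x, round(tolerance(x), 4))   # t decreases as Ca replaces Sr
```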
## II Sample preparation and characterization
The samples were grown by off-axis cosputtering onto (100) oriented neodymium gallate (NdGaO<sub>3</sub>) substrates using composite targets of LCMO and LSMO material under conditions similar to those of our previous work. The LCMO target was radio frequency (rf) sputtered and the LSMO target was direct current (dc) sputtered, giving deposition rates of $`\sim `$ 170-500 Å/hr, with film thicknesses being typically 1000 Å. After deposition, the samples were cooled in 100 Torr of oxygen, and when the samples had cooled to below 100 °C, they were moved into an XPS analytical chamber. The chamber pressure during XPS measurements was below 2 x 10<sup>-9</sup> Torr.
The XPS spectra were taken at room temperature with a Vacuum Generators CLAM 100 analyzer using Mg K$`\alpha `$ radiation, under the same conditions and with the same analysis as in our earlier work. Photoemission data for all the samples were collected around the Mn 2p doublet, the La 4d and 3d doublets, the O 1s, Sr 3d, Ca 2p, C 1s, Mn 3p lines, and the valence region. In the following figures of XPS spectra, the core level spectra are shown along with their deconvolution into different Gaussian contributions.
In addition to the XPS studies, the samples were characterized by standard $`\theta 2\theta `$ x-ray diffraction scans along the growth direction, atomic force microscopy, electrical resistivity measurements (using the van der Pauw method), and magnetization measurements at low fields using a Quantum Design SQUID Magnetometer.
## III Results and Discussion
In Fig. 1 we show an atomic force microscopy image for an LCSMO film with 61% Ca fraction. For pure LSMO and LCMO films we typically see surface roughness values of $`\sim `$ 16 Å, while for the mixtures the value is typically 28 Å. Grain sizes are $`\sim `$ 500 Å for the mixtures, while for pure LCMO and LSMO films they are closer to 1000 Å. Standard x-ray diffraction along the growth direction for LCMO films shows only the presence of peaks from NdGaO<sub>3</sub>, indicating the excellent lattice match of the (100) oriented LCMO to (100) NdGaO<sub>3</sub>. This match is to be expected with the in-plane lattice constants of the pseudo-cubic unit cells of the two materials being 3.87 and 3.86 Å, respectively. For LSMO films, however, we find that the lattice match is not as good, with an out-of-plane lattice constant of 3.89 Å. This will give a strain in the films with low values of calcium doping. This strain is seen in the increased coercive fields at 10 K for our LSMO films (170 Oe) compared to LCMO films (20 Oe), as seen in Fig. 2.
In Fig. 3 we present the Curie temperatures ($`T_C`$) determined from magnetization data for the samples (taken at 400 Oe) as a function of the calcium fraction. We do not see a smooth variation in the Curie temperature, as was seen in the bulk work. Instead, there seems to be a clustering of the values around 300 K, with a slight increase as the Sr content is increased, which would increase the tolerance factor. We also observe a decrease in the Curie temperature for x=0.25 (where x is the calcium fraction in the sample) which is not expected. We believe this drop is due to disorder in the samples, which is seen in the higher resistivity for this sample (not shown here) and would be consistent with increased strain in the films for lower values of x.
In Figs. 4 and 5, we show scans around the location of the core lines and low energy region for the LCSMO films for different values of the calcium concentration, x. The curves are offset for clarity. We observe no indication of carbon for these in-situ samples, just as in our previous measurements. For the core lines of Mn, O and La, we see little systematic variation as the Ca/Sr ratio is varied (of the order of 0.1 eV), with the trend being that the peak positions for the core lines of LCMO have a slightly lower energy than those of LSMO. Peak positions and widths are similar to those seen in earlier XPS studies of LCMO films. The lack of variation in peak position as the Ca/Sr ratio is changed is not surprising, in view of the XPS work on La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, which showed only slight variations ($`\sim `$ 0.3 eV in the metallic regime) in the core lines as the Sr fraction changed. Scans of the Ca and Sr core lines show the expected variation in intensity as the Ca/Sr ratio is varied, but again, there is no systematic variation in the peak positions, which for the Sr 3d<sub>5/2</sub> is 132.0 eV. The peak ratios are similar to those seen in our earlier study on LCMO, implying that for these systems the terminating layer is MnO<sub>2</sub>.
In Fig. 5d we show the valence structure for the samples as a function of the calcium fraction. As the calcium fraction decreases, and the samples go from insulating to metallic at room temperature, we observe no change near the Fermi edge, which is similar to that seen in studies on La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>. In previous studies of the valence band of LSMO using XPS, a peak at 5.8 eV was observed, which did not change position as the Sr fraction was increased. Instead, a reduction in intensity near 3-4 eV was observed, which was interpreted as being due to changes in doping. For our system, we are not changing the filling of the $`e_g`$ band, since we are keeping the doping fixed. However, we are changing the overlap integral as the value of t is varied. We also find we can fit the valence structure to a doublet structure, as shown in Fig. 5. As the samples go from pure Sr to pure Ca doping, we find the low and high energy peaks change from 2.7 to 2.9 eV, and 5.6 to 6.0 eV, respectively. This trend in peak positions is opposite to that seen for the core lines, which tend to lower their energy in going from LSMO to LCMO. We do not observe any systematic changes in spectral weight in the valence region as the samples go from pure LSMO to LCMO.
## IV Conclusions
Thin film alloy mixtures of (100) oriented La<sub>0.67</sub>(Ca<sub>x</sub>Sr<sub>1-x</sub>)<sub>0.33</sub>MnO<sub>3</sub> have been grown and have somewhat rougher surfaces and smaller grain sizes than seen for pure LSMO or LCMO. The Curie temperature for the films does not follow the expected smooth variation as the calcium fraction is changed, which we interpret as being due to disorder. We believe this disorder is caused by strain in the films which is observed in both X-ray diffraction and coercive field measurements. The core lines in XPS measurements behave as linear combinations of individual LCMO and LSMO films, with no significant change in position as the Ca/Sr ratio is varied. The intensity ratios of the peaks is similar to previous work indicating that the terminating layer for the films is MnO<sub>2</sub>. The low energy valence structure exhibits a doublet character, whose peak positions decrease as the Ca fraction is reduced. We interpret this change as being due to variations in the overlap integral between the Mn 3d and O 2p orbitals.
## V Acknowledgments
We would like to gratefully acknowledge the assistance of Michael Miller for the AFM measurements and Andrew Patton in the production of the films.
# Gravitation, Thermodynamics, and Quantum Theory
## 1 Introduction
At the end of a century—particularly one marking the end of a millennium—it is natural to attempt to take stock of the status of a field of endeavor from as broad a perspective as possible. In the field of physics, two theories—general relativity and quantum mechanics—were developed during the first quarter of the present century. These theories revolutionized the way we think about the physical world. Despite enormous progress during the remainder of this century in exploring their consequences and in the application of these theories to construct successful “standard models” of cosmology and particle physics, at the end of this century we are still struggling to reconcile general relativity and quantum theory at a fundamental level.
The revolutions in physics introduced by general relativity and quantum theory were accompanied by major changes in the language and concepts used to describe the physical world. In general relativity, it is recognized that space and time meld into a single entity—spacetime—and that the structure of spacetime is described by a Lorentz metric, which has a dynamical character. Consequently, simple Newtonian statements such as “two particles are a distance $`d`$ apart at time $`t`$” become essentially meaningless in general relativity until one states exactly how $`t`$ and $`d`$ are defined for the particular class of spacetime metrics under consideration. Furthermore, concepts such as the “local energy density of the gravitational field” are absent in general relativity. The situation is considerably more radical in quantum theory, where the existence of “objective reality” itself is denied, i.e., observables cannot, in general, consistently be assigned definite values.
I believe that the proper description of quantum phenomena in strong gravitational fields will necessitate revolutionary conceptual changes in our view of the physical world at least comparable to those that occurred in the separate developments of general relativity and quantum theory. At present, the greatest insights into the physical nature of quantum phenomena in strong gravitational fields comes from the analysis of thermodynamic properties associated with black holes. This analysis also provides strong hints that statistical physics may be deeply involved in any fundamental conceptual changes that accompany a proper description of quantum gravitational phenomena.
At the present time, string theory is the most promising candidate for a theory of quantum gravity. One of the greatest successes of string theory to date has been the calculation of the entropy of certain classes of black holes. However, the formulation of string theory is geared much more towards the calculation of scattering matrices in asymptotically flat spacetimes rather than towards providing a local description of physical phenomena in strong gravitational fields. Within the framework of string theory, it is difficult to imagine even how to pose (no less, how to calculate the answer to) local physical questions like, “What would an observer who falls into a black hole experience as he approaches what corresponds classically to the spacetime singularity within the black hole?” Thus, string theory has not yet provided us with new conceptual insights into the physical nature of phenomena occurring in strong gravitational fields that are commensurate with some of its mathematical successes. It may well be that—even assuming it is a correct theory of nature—string theory will bear a relationship to the “ultimate theory of everything” that is similar to the relationship between “old quantum theory” and quantum theory. Therefore, I feel that it is very encouraging that, at the present time, intensive efforts are underway toward providing reformulations of string theory. However, to date, these efforts have mainly been concerned with obtaining a formulation that would unify the (different looking) versions of string theory, rather than achieving new conceptual foundations for describing local quantum phenomena occurring in strong gravitational fields.
Thus, at present, most of the physical insights into quantum phenomena occurring in strong gravitational fields arise from classical and semiclassical analyses of black holes in general relativity. In this article, I will review classical and quantum black hole thermodynamics and then discuss some unresolved issues and puzzles, which may provide some hints as to the new conceptual features that may be present in the quantum description of strong gravitational fields. In the discussion, I will not attempt to provide a balanced account of research in this area, but rather will merely present my own views.
## 2 Classical black hole mechanics
Undoubtedly, one of the most remarkable and unexpected developments in theoretical physics to have occurred during the latter portion of this century was the discovery of a close relationship between certain laws of black hole physics and the ordinary laws of thermodynamics. It was first pointed out by Bekenstein that the area nondecrease theorem of classical general relativity is analogous to the ordinary second law of thermodynamics, and he proposed that the area, $`A`$, of a black hole (times a constant of order unity in Planck units) should be interpreted as its physical entropy. A short time later, Bardeen, Carter, and Hawking extended the analogy between black holes and thermodynamics considerably further by proving black hole analogs of the zeroth and first laws of thermodynamics. In this section, I will give a brief review of the laws of classical black hole mechanics.
First, we review the definition of a black hole and some properties of stationary black holes. In physical terms, a black hole is a region where gravity is so strong that nothing can escape. In order to make this notion precise, one must have in mind a region of spacetime to which one can contemplate escaping. For an asymptotically flat spacetime $`(M,g_{ab})`$ (representing an isolated system), the asymptotic portion of the spacetime “near infinity” is such a region. The black hole region, $``$, of an asymptotically flat spacetime, $`(M,g_{ab})`$, is defined as
$$ℬ=M-I^{-}(ℐ^+),$$
(1)
where $`ℐ^+`$ denotes future null infinity and $`I^{-}`$ denotes the chronological past. The event horizon, $`ℋ`$, of a black hole is defined to be the boundary of $`ℬ`$. In particular, $`ℋ`$ is a null hypersurface. Note that the entire future history of the spacetime must be known before the location of $`ℋ`$ can be determined, i.e., $`ℋ`$ possesses no distinguished local significance.
If an asymptotically flat spacetime $`(M,g_{ab})`$ contains a black hole, $`ℬ`$, then $`ℬ`$ is said to be stationary if there exists a one-parameter group of isometries on $`(M,g_{ab})`$ generated by a Killing field $`t^a`$ which is unit timelike at infinity. The black hole is said to be static if it is stationary and if, in addition, $`t^a`$ is hypersurface orthogonal. The black hole is said to be axisymmetric if there exists a one parameter group of isometries which correspond to rotations at infinity. A stationary, axisymmetric black hole is said to possess the “$`t`$–$`\varphi `$ orthogonality property” if the 2-planes spanned by $`t^a`$ and the rotational Killing field $`\varphi ^a`$ are orthogonal to a family of 2-dimensional surfaces.
A null surface, $`𝒦`$, whose null generators coincide with the orbits of a one-parameter group of isometries (so that there is a Killing field $`\xi ^a`$ normal to $`𝒦`$) is called a Killing horizon. There are two independent results (usually referred to as “rigidity theorems”) that show that in wide variety of cases of interest, the event horizon, $``$, of a stationary black hole must be a Killing horizon. The first, due to Carter , states that for a static black hole, the static Killing field $`t^a`$ must be normal to the horizon, whereas for a stationary-axisymmetric black hole with the $`t\varphi `$ orthogonality property there exists a Killing field $`\xi ^a`$ of the form
$$\xi ^a=t^a+\mathrm{\Omega }\varphi ^a$$
(2)
which is normal to the event horizon. The constant $`\mathrm{\Omega }`$ defined by eq.(2) is called the angular velocity of the horizon. Carter’s result does not rely on any field equations, but leaves open the possibility that there could exist stationary black holes without the above symmetries whose event horizons are not Killing horizons. The second result, due to Hawking (see also ), directly proves that in vacuum or electrovac general relativity, the event horizon of any stationary black hole must be a Killing horizon. Consequently, if $`t^a`$ fails to be normal to the horizon, then there must exist an additional Killing field, $`\xi ^a`$, which is normal to the horizon, i.e., a stationary black hole must be nonrotating (from which staticity follows , ) or axisymmetric (though not necessarily with the $`t\varphi `$ orthogonality property). Note that Hawking’s theorem makes no assumptions of symmetries beyond stationarity, but it does rely on the properties of the field equations of general relativity.
Now, let $`𝒦`$ be any Killing horizon (not necessarily required to be the event horizon, $`ℋ`$, of a black hole), with normal Killing field $`\xi ^a`$. Since $`\nabla ^a(\xi ^b\xi _b)`$ also is normal to $`𝒦`$, these vectors must be proportional at every point on $`𝒦`$. Hence, there exists a function, $`\kappa `$, on $`𝒦`$, known as the surface gravity of $`𝒦`$, which is defined by the equation
$$\nabla ^a(\xi ^b\xi _b)=-2\kappa \xi ^a$$
(3)
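Equivalently, since Killing’s equation gives $`\nabla ^a\xi ^b=-\nabla ^b\xi ^a`$, eq.(3) may be rewritten in the perhaps more familiar form

$$\xi ^b\nabla _b\xi ^a=\kappa \xi ^a,$$

so that $`\kappa `$ measures the failure of the Killing parameter to be an affine parameter along the null generators of $`𝒦`$.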
It follows immediately that $`\kappa `$ must be constant along each null geodesic generator of $`𝒦`$, but, in general, $`\kappa `$ can vary from generator to generator. It is not difficult to show (see, e.g., ) that
$$\kappa =\mathrm{lim}(Va)$$
(4)
where $`a`$ is the magnitude of the acceleration of the orbits of $`\xi ^a`$ in the region off of $`𝒦`$ where they are timelike, $`V\equiv (-\xi ^a\xi _a)^{1/2}`$ is the “redshift factor” of $`\xi ^a`$, and the limit as one approaches $`𝒦`$ is taken. Equation (4) motivates the terminology “surface gravity”. Note that the surface gravity of a black hole is defined only when it is “in equilibrium”, i.e., stationary, so that its event horizon is a Killing horizon.
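As a standard illustration of eq.(4), for a Schwarzschild black hole of mass $`M`$ (in units with $`G=c=1`$), where $`\xi ^a`$ is the static Killing field, one has

$$V=\left(1-\frac{2M}{r}\right)^{1/2},\qquad a=\frac{M}{r^2}\left(1-\frac{2M}{r}\right)^{-1/2},\qquad \kappa =\lim _{r\rightarrow 2M}Va=\frac{M}{(2M)^2}=\frac{1}{4M}.$$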
In parallel with the two independent “rigidity theorems” mentioned above, there are two independent versions of the zeroth law of black hole mechanics. The first, due to Carter (see also ), states that for any black hole which is static or is stationary-axisymmetric with the $`t`$–$`\varphi `$ orthogonality property, the surface gravity, $`\kappa `$, must be constant over its event horizon $`ℋ`$. This result is purely geometrical, i.e., it involves no use of any field equations. The second, due to Bardeen, Carter, and Hawking states that if Einstein’s equation holds with the matter stress-energy tensor satisfying the dominant energy condition, then $`\kappa `$ must be constant on any Killing horizon. Thus, in the second version of the zeroth law, the hypothesis that the $`t`$–$`\varphi `$ orthogonality property holds is eliminated, but use is made of the field equations of general relativity.
A bifurcate Killing horizon is a pair of null surfaces, $`𝒦_A`$ and $`𝒦_B`$, which intersect on a spacelike 2-surface, $`𝒞`$ (called the “bifurcation surface”), such that $`𝒦_A`$ and $`𝒦_B`$ are each Killing horizons with respect to the same Killing field $`\xi ^a`$. It follows that $`\xi ^a`$ must vanish on $`𝒞`$; conversely, if a Killing field, $`\xi ^a`$, vanishes on a two-dimensional spacelike surface, $`𝒞`$, then $`𝒞`$ will be the bifurcation surface of a bifurcate Killing horizon associated with $`\xi ^a`$ (see for further discussion). An important consequence of the zeroth law is that if $`\kappa \ne 0`$, then in the “maximally extended” spacetime representing a stationary black hole, the event horizon, $`ℋ`$, comprises a branch of a bifurcate Killing horizon . This result is purely geometrical—involving no use of any field equations. As a consequence, the study of stationary black holes which satisfy the zeroth law divides into two cases: “degenerate” black holes (for which, by definition, $`\kappa =0`$), and black holes with bifurcate horizons.
The first law of black hole mechanics is simply an identity relating the changes in mass, $`M`$, angular momentum, $`J`$, and horizon area, $`A`$, of a stationary black hole when it is perturbed. To first order, the variations of these quantities in the vacuum case always satisfy
$$\delta M=\frac{1}{8\pi }\kappa \delta A+\mathrm{\Omega }\delta J.$$
(5)
In the original derivation of this law , it was required that the perturbation be stationary. Furthermore, the original derivation made use of the detailed form of Einstein’s equation. Subsequently, the derivation has been generalized to hold for non-stationary perturbations , , provided that the change in area is evaluated at the bifurcation surface, $`𝒞`$, of the unperturbed black hole. More significantly, it has been shown that the validity of this law depends only on very general properties of the field equations. Specifically, a version of this law holds for any field equations derived from a diffeomorphism covariant Lagrangian, $`L`$. Such a Lagrangian can always be written in the form
$$L=L(g_{ab};R_{abcd},\nabla _aR_{bcde},\dots ;\psi ,\nabla _a\psi ,\dots )$$
(6)
where $`\nabla _a`$ denotes the derivative operator associated with $`g_{ab}`$, $`R_{abcd}`$ denotes the Riemann curvature tensor of $`g_{ab}`$, and $`\psi `$ denotes the collection of all matter fields of the theory (with indices suppressed). An arbitrary (but finite) number of derivatives of $`R_{abcd}`$ and $`\psi `$ are permitted to appear in $`L`$. In this more general context, the first law of black hole mechanics is seen to be a direct consequence of an identity holding for the variation of the Noether current. The first law then takes the general form
$$\delta M=\frac{\kappa }{2\pi }\delta S_{\mathrm{bh}}+\mathrm{\Omega }\delta J+\cdots ,$$
(7)
where the “…” denote possible additional contributions from long range matter fields, and where
$$S_{\mathrm{bh}}\equiv -2\pi \oint _𝒞\frac{\delta L}{\delta R_{abcd}}n_{ab}n_{cd}.$$
(8)
Here $`n_{ab}`$ is the binormal to the bifurcation surface $`𝒞`$ (normalized so that $`n_{ab}n^{ab}=-2`$), and the functional derivative is taken by formally viewing the Riemann tensor as a field which is independent of the metric in eq.(6). For the case of vacuum general relativity, where $`L=R\sqrt{-g}`$, a simple calculation yields
$$S_{\mathrm{bh}}=A/4$$
(9)
and eq.(7) reduces to eq.(5).
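For the reader’s convenience, the “simple calculation” can be spelled out (restoring the Einstein–Hilbert normalization $`L=(1/16\pi )R\sqrt{-g}`$, whose factor of $`16\pi `$ is otherwise absorbed into the choice of units). Formally differentiating with respect to the Riemann tensor gives

$$\frac{\delta L}{\delta R_{abcd}}=\frac{1}{32\pi }(g^{ac}g^{bd}-g^{ad}g^{bc})\sqrt{-g},$$

and contracting with the binormals, using $`n_{ab}n^{ab}=-2`$ and $`n_{ab}=-n_{ba}`$, gives $`(g^{ac}g^{bd}-g^{ad}g^{bc})n_{ab}n_{cd}=-4`$. Substituting into eq.(8) then yields $`S_{\mathrm{bh}}=(-2\pi )\times (-1/8\pi )\times A=A/4`$, in agreement with eq.(9).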
As already mentioned at the beginning of this section, the black hole analog of the second law of thermodynamics is the area theorem . This theorem states that if Einstein’s equation holds with matter satisfying the null energy condition (i.e., $`T_{ab}k^ak^b\geq 0`$ for all null $`k^a`$), then the surface area, $`A`$, of the event horizon of a black hole can never decrease with time. In the context of more general theories of gravity, the nondecrease of $`S_{\mathrm{bh}}`$ also has been shown to hold in a class of higher derivative gravity theories, where the Lagrangian is a polynomial in the scalar curvature , but, unlike the zeroth and first laws, no general argument for the validity of the second law of black hole mechanics is known. However, there are some hints that the nondecrease of $`S_{\mathrm{bh}}`$ may hold in a very general class of theories of gravity with positive energy properties .
Taken together, the zeroth, first, and second laws<sup>1</sup><sup>1</sup>1It should be noted that I have made no mention of the third law of thermodynamics, i.e., the “Planck-Nernst theorem”, which states that $`S\rightarrow 0`$ (or a “universal constant”) as $`T\rightarrow 0`$. The analog of this law fails in black hole mechanics, since there exist “extremal” black holes of finite $`A`$ which have $`\kappa =0`$. However, I believe that the “Planck-Nernst theorem” should not be viewed as a fundamental law of thermodynamics but rather as a property of the density of states near the ground state in the thermodynamic limit, which is valid for commonly studied materials. Indeed, examples can be given of ordinary quantum systems that violate the “Planck-Nernst theorem” in a manner very similar to the violations of the analog of this law that occur for black holes . of black hole mechanics in general relativity are remarkable mathematical analogs of the corresponding laws in ordinary thermodynamics. It is true that the nature of the proofs of the laws of black hole mechanics in classical general relativity is strikingly different from the nature of the arguments normally advanced for the validity of the ordinary laws of thermodynamics. Nevertheless, as discussed above, the validity of the laws of black hole mechanics appears to rest upon general features of the theory (such as general covariance) rather than the detailed form of Einstein’s equation, in a manner similar to the way the validity of the ordinary laws of thermodynamics depends only on very general features of classical and quantum dynamics.
In comparing the laws of black hole mechanics in classical general relativity with the laws of thermodynamics, the role of energy, $`E`$, is played by the mass, $`M`$, of the black hole; the role of temperature, $`T`$, is played by a constant times the surface gravity, $`\kappa `$, of the black hole; and the role of entropy, $`S`$, is played by a constant times the area, $`A`$, of the black hole. The fact that $`E`$ and $`M`$ represent the same physical quantity provides a strong hint that the mathematical analogy between the laws of black hole mechanics and the laws of thermodynamics might be of physical significance. However, in classical general relativity, the physical temperature of a black hole is absolute zero, so there can be no physical relationship between $`T`$ and $`\kappa `$. Consequently, it also would be inconsistent to assume a physical relationship between $`S`$ and $`A`$. As we shall now see, this situation changes dramatically when quantum effects are taken into account.
## 3 Quantum black hole thermodynamics
The physical temperature of a black hole is not absolute zero. As a result of quantum particle creation effects , a black hole radiates to infinity all species of particles with a perfect black body spectrum, at temperature (in units with $`G=c=\hbar =k=1`$)
$$T=\frac{\kappa }{2\pi }.$$
(10)
Thus, $`\kappa /2\pi `$ truly is the physical temperature of a black hole, not merely a quantity playing a role mathematically analogous to temperature in the laws of black hole mechanics.
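Restoring physical constants, eq.(10) for a Schwarzschild black hole reads

$$T=\frac{\hbar c^3}{8\pi GMk}\approx 6.2\times 10^{-8}\left(\frac{M_{\odot }}{M}\right)\mathrm{K},$$

so the Hawking temperature of any stellar-mass or larger black hole lies far below that of the cosmic microwave background; the effect is utterly negligible astrophysically but of great conceptual importance.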
In fact, there are two logically independent results which give rise to the formula (10). Although these results are mathematically very closely related, it is important to distinguish clearly between them. The first result is the original thermal particle creation effect discovered by Hawking . In its most general form, this result may be stated as follows (see for further discussion): Consider a classical spacetime $`(M,g_{ab})`$ describing a black hole formed by gravitational collapse, such that the black hole “settles down” to a stationary final state. By the zeroth law of black hole mechanics, the surface gravity, $`\kappa `$, of the black hole final state will be constant over its event horizon. Consider a quantum field propagating in this background spacetime, which is initially in any (non-singular) state. Then, at asymptotically late times, particles of this field will be radiated to infinity as though the black hole were a perfect black body<sup>2</sup><sup>2</sup>2If the black hole is rotating, the spectrum seen by an observer at infinity corresponds to what would emerge from a “rotating black body”. at the Hawking temperature, eq. (10). It should be noted that this result relies only on the analysis of quantum fields in the region exterior to the black hole, and it does not make use of any gravitational field equations.
The second result is the Unruh effect and its generalization to curved spacetime. In its most general form, this result may be stated as follows (see , for further discussion): Consider a classical spacetime $`(M,g_{ab})`$ that contains a bifurcate Killing horizon, $`𝒦=𝒦_A\cup 𝒦_B`$, i.e., there is a one-parameter group of isometries whose associated Killing field, $`\xi ^a`$, is normal to $`𝒦`$. Consider a free quantum field on this spacetime. Then there exists at most one globally nonsingular state of the field which is invariant under the isometries. Furthermore, in the “wedges” of the spacetime where the isometries have timelike orbits, this state (if it exists) is a KMS (i.e., thermal equilibrium) state at temperature (10) with respect to the isometries.
Note that in Minkowski spacetime, any one-parameter group of Lorentz boosts has an associated bifurcate Killing horizon, comprised by two intersecting null planes. The unique, globally nonsingular state which is invariant under these isometries is simply the usual (“inertial”) vacuum state, $`|0>`$. In the “right and left wedges” of Minkowski spacetime defined by the Killing horizon, the orbits of the Lorentz boost isometries are timelike, and, indeed, these orbits correspond to worldlines of uniformly accelerating observers. If we normalize the boost Killing field, $`b^a`$, so that Killing time equals proper time on an orbit with acceleration $`a`$, then the surface gravity of the Killing horizon is $`\kappa =a`$. An observer following this orbit would naturally use $`b^a`$ to define a notion of “time translation symmetry”. Consequently, when the field is in the inertial vacuum state, a uniformly accelerating observer would describe the field as being in a thermal equilibrium state at temperature
$$T=\frac{a}{2\pi }$$
(11)
as originally found by Unruh .
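The mathematical content of this result can be made explicit in a standard form (stated here for a single field mode; see the references cited above for the general construction). In terms of Rindler (“boost”) modes of frequency $`\omega `$ in the left and right wedges, the Minkowski vacuum takes the two-mode squeezed form

$$|0>=\sqrt{1-e^{-2\pi \omega /a}}\sum _{n=0}^{\infty }e^{-\pi n\omega /a}|n>_L|n>_R,$$

so that tracing over the unobservable left wedge leaves the right-wedge mode in a thermal density matrix with populations proportional to $`e^{-2\pi n\omega /a}`$, i.e., a thermal state at $`T=a/2\pi `$. The same decomposition underlies the entanglement entropy considerations discussed further below.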
Although there is a close mathematical relationship between the two results described above, it should be emphasized that these results refer to different states of the quantum field. In the Hawking effect, the asymptotic final state of the quantum field is a state in which the modes of the quantum field that appear to a distant observer to have propagated from the black hole region of the spacetime are thermally populated at temperature (10), but the modes which appear to have propagated in from infinity are unpopulated. This state (usually referred to as the “Unruh vacuum”) would be singular on the white hole horizon in the analytically continued spacetime containing a bifurcate Killing horizon. On the other hand, in the Unruh effect and its generalization to curved spacetimes, the state in question (usually referred to as the “Hartle-Hawking vacuum”) is globally nonsingular, and all modes of the quantum field in the “left and right wedges” are thermally populated.<sup>3</sup><sup>3</sup>3The state in which none of the modes in the region exterior to the black hole are populated is usually referred to as the “Boulware vacuum”. The Boulware vacuum is singular on both the black hole and white hole horizons.
It also should be emphasized that in the Hawking effect, the temperature (10) represents the temperature as measured by an observer near infinity. For any observer following an orbit of the Killing field, $`\xi ^a`$, normal to the horizon, the locally measured temperature of the modes which appear to have propagated from the direction of the black hole is given by
$$T=\frac{\kappa }{2\pi V},$$
(12)
where $`V=(-\xi ^a\xi _a)^{1/2}`$. In other words, the locally measured temperature of the Hawking radiation follows the Tolman law. Now, as one approaches the horizon of the black hole, the modes which appear to have propagated from the black hole dominate over the modes which appear to have propagated in from infinity. Taking eq.(4) into account, we see that $`T\rightarrow a/2\pi `$ as the black hole horizon, $`ℋ`$, is approached, i.e., in this limit eq.(12) corresponds to the flat spacetime Unruh effect.
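To make this limit explicit, consider again the Schwarzschild case as an illustration. There $`\kappa =1/4M`$ and $`V=(1-2M/r)^{1/2}`$, and the proper distance from the horizon behaves as $`\rho \approx 2[2M(r-2M)]^{1/2}`$ as $`r\to 2M`$, so that $`V\approx \kappa \rho `$. Equation (12) then gives

$$T=\frac{\kappa }{2\pi V}\approx \frac{1}{2\pi \rho },$$

while the proper acceleration of a static observer at proper distance $`\rho `$ approaches $`a\approx 1/\rho `$, so $`T\approx a/2\pi `$: the locally measured temperature diverges like the inverse proper distance to the horizon.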
Equation (12) shows that when quantum effects are taken into account, a black hole is surrounded by a “thermal atmosphere” whose local temperature as measured by observers following orbits of $`\xi ^a`$ becomes divergent as one approaches the horizon. As we shall see explicitly below, this thermal atmosphere produces important physical effects on quasi-stationary bodies near the black hole. On the other hand, for a macroscopic black hole, observers who freely fall into the black hole would not notice any important quantum effects as they approach and cross the horizon.
The fact that $`\kappa /2\pi `$ truly represents the physical temperature of a black hole provides extremely strong evidence that the laws of black hole mechanics are not merely mathematical analogs of the laws of thermodynamics, but rather that they in fact are the ordinary laws of thermodynamics applied to black holes. If so, then $`A/4`$ must represent the physical entropy of a black hole in general relativity. What is the evidence that this is the case?
Although quantum effects on matter fields outside of a black hole were fully taken into account in the derivation of the Hawking effect, quantum effects of the gravitational field itself were not, i.e., the Hawking effect is derived in the context of semiclassical gravity, where the effects of gravitation are still represented by a classical spacetime. As discussed further below, a proper accounting of the quantum degrees of freedom of the gravitational field itself undoubtedly would have to be done in order to understand the origin of the entropy of a black hole. Nevertheless, as I shall now describe, even in the context of semiclassical gravity, I believe that there are compelling arguments that $`A/4`$ must represent the physical entropy of a black hole.
Even within the semi-classical approximation, conservation of energy requires that an isolated black hole must lose mass in order to compensate for the energy radiated to infinity by the particle creation process. If one equates the rate of mass loss of the black hole to the energy flux at infinity due to particle creation, one arrives at the startling conclusion that an isolated black hole will radiate away all of its mass within a finite time. During this process of black hole “evaporation”, $`A`$ will decrease, in violation of the second law of black hole mechanics. Such an area decrease can occur because the expected stress-energy tensor of quantum matter does not satisfy the null energy condition—even for matter for which this condition holds classically—in violation of a key hypothesis of the area theorem. Thus, it is clear that the second law of black hole mechanics must fail when quantum effects are taken into account.
On the other hand, there is a serious difficulty with the ordinary second law of thermodynamics when black holes are present: One can simply take some ordinary matter and drop it into a black hole, where, classically at least, it will disappear into a spacetime singularity. In this latter process, one loses the entropy initially present in the matter, but no compensating gain of ordinary entropy occurs, so the total entropy, $`S`$, of matter in the universe decreases.
Note, however, that in the black hole evaporation process, although $`A`$ decreases, there is a significant amount of ordinary entropy generated outside the black hole due to particle creation. Similarly, when ordinary matter (with positive energy) is dropped into a black hole, although $`S`$ decreases, by the first law of black hole mechanics, there will necessarily be an increase in $`A`$. These considerations motivated the following proposal , . Perhaps in any process, the total generalized entropy, $`S^{\prime }`$, never decreases
$$\mathrm{\Delta }S^{\prime }\geq 0,$$
(13)
where $`S^{\prime }`$ is defined by
$$S^{\prime }\equiv S+A/4.$$
(14)
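An order-of-magnitude example shows why the black hole term dominates (14) whenever a black hole is present. In Planck units a Schwarzschild black hole has

$$S_{\mathrm{bh}}=A/4=4\pi M^2\approx 1\times 10^{77}\left(\frac{M}{M_{\odot }}\right)^2,$$

whereas the ordinary thermal entropy of the Sun itself is only of order $`10^{58}`$ (both in units of Boltzmann’s constant). The collapse of a star to a black hole thus increases the generalized entropy by roughly nineteen orders of magnitude.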
It is not difficult to see that the generalized second law holds for an isolated black hole radiating into otherwise empty space. However, it is not immediately obvious that it holds if one carefully lowers a box containing matter with entropy $`S`$ and energy $`E`$ toward a black hole. Classically, if one lowers the box sufficiently close to the horizon before dropping it in, one can make the increase in $`A`$ as small as one likes while still getting rid of all of the entropy, $`S`$, originally in the box. However, it is here that the quantum “thermal atmosphere” surrounding the black hole comes into play. The temperature gradient in the thermal atmosphere (see eq.(12)) implies that there is a pressure gradient and, consequently, a buoyancy force on the box. As a result of this buoyancy force, the optimal place to drop the box into the black hole is no longer the horizon but rather the “floating point” of the box, where its weight is equal to the weight of the displaced thermal atmosphere. The minimum area increase given to the black hole in the process is no longer zero, but rather it turns out to be an amount just sufficient to prevent any violation of the generalized second law from occurring . A number of other analyses , , also have given strong support to the validity of the generalized second law.
The generalized entropy (14) and the generalized second law (13) have obvious interpretations: Presumably, for a system containing a black hole, $`S^{\prime }`$ is nothing more than the “true total entropy” of the complete system, and (13) is then nothing more than the “ordinary second law” for this system. If so, then $`A/4`$ truly is the physical entropy of a black hole.
I believe that the above semi-classical considerations make a compelling case for the merger of the laws of black hole mechanics with the laws of thermodynamics. However, if one is to obtain a deeper understanding of why $`A/4`$ represents the entropy of a black hole in general relativity, it clearly will be necessary to go beyond semi-classical considerations and attain an understanding of the quantum dynamical degrees of freedom of a black hole. Thus, one would like to calculate the entropy of a black hole directly from a quantum theory of gravity. There have been many attempts to do so, most of which fall within the following categories: (i) Calculations that are mathematically equivalent to the classical calculation described in the previous section. (ii) Calculations that ascribe a preferred local significance to the horizon. (iii) State counting calculations of configurations that can be associated with black holes.
The most prominent of the calculations in category (i) is the derivation of black hole entropy in Euclidean quantum gravity, originally given by Gibbons and Hawking . Here one starts with a formal, functional integral expression for the partition function in Euclidean quantum gravity and evaluates it for a black hole in the “zero loop” (i.e., classical) approximation. As shown in , the mathematical steps in this procedure are in direct correspondence with the purely classical determination of the entropy from the form of the first law of black hole mechanics. Thus, although this derivation gives some intriguing glimpses into possible deep relationships between black hole thermodynamics and Euclidean quantum gravity, the Euclidean derivation does not appear to provide any more insight than the classical derivation into accounting for the quantum degrees of freedom that are responsible for black hole entropy. Similar remarks apply to a number of other entropy calculations that also can be shown to be equivalent to the classical derivation (see ).
Within category (ii), a key approach has been to attribute the entropy of the black hole to “entanglement entropy” resulting from quantum field correlations between the exterior and interior of the black hole (see, in particular, ). As a result of these correlations, the state of the field when restricted to the exterior of the black hole is mixed, and its von Neumann entropy, $`-\mathrm{tr}[\widehat{\rho }\mathrm{ln}\widehat{\rho }]`$, would diverge in the absence of a short distance cutoff. If one now inserts a short distance cutoff of the order of the Planck scale, one obtains a von Neumann entropy of the order of the horizon area, $`A`$. A closely related idea is to attribute the entropy of the black hole to the ordinary entropy of its thermal atmosphere (see, in particular, ). Since $`T`$ diverges near the horizon in the manner specified by eq.(12), the entropy of the thermal atmosphere diverges, but if one puts in a Planck scale cutoff, one gets an entropy of order $`A`$. Indeed, this calculation is really the same as the entanglement entropy calculation, since the state of a quantum field outside of the black hole at late times is thermal, so its von Neumann entropy is equal to its thermodynamic entropy.
These and other approaches in category (ii) provide a natural way of accounting for why the entropy of a black hole is proportional to its surface area, although the constant of proportionality typically depends upon a cutoff or other free parameter and is not calculable. However, it is far from clear why the black hole horizon should be singled out for such special treatment of the quantum degrees of freedom in its vicinity, since, for example, similar quantum field correlations will exist across any other null surface. Indeed, as discussed further at the end of the next section, it is particularly puzzling why the local degrees of freedom associated with the horizon should be singled out since, as already noted above, the black hole horizon at a given time is defined in terms of the entire future history of the spacetime and thus has no distinguished local significance. Finally, for approaches in category (ii) that do not make use of the gravitational field equations—such as the ones described above—it is difficult to see how one would obtain a black hole entropy proportional to eq.(8) (rather than proportional to $`A`$) in a more general theory of gravity.
By far, the most successful calculations of black hole entropy to date are ones in category (iii) that obtain the entropy of certain extremal and nearly extremal black holes in string theory. It is believed that at “low energies”, string theory should reduce to a 10-dimensional supergravity theory. If one treats this supergravity theory as a classical theory involving a spacetime metric, $`g_{ab}`$, and other classical fields, one can find solutions describing black holes. On the other hand, one also can consider a “weak coupling” limit of string theory, wherein the states are treated perturbatively about a background, flat spacetime. In the weak coupling limit, there is no literal notion of a black hole, just as there is no notion of a black hole in linearized general relativity. Nevertheless, certain weak coupling states can be identified with certain black hole solutions of the low energy limit of the theory by a correspondence of their energy and charges. (Here, it is necessary to introduce “D-branes” into string perturbation theory in order to obtain weak coupling states with the desired charges.) Now, the weak coupling states are, in essence, ordinary quantum dynamical degrees of freedom in a flat background spacetime, so their entropy can be computed by the usual methods of flat spacetime statistical physics. Remarkably, for certain classes of extremal and nearly extremal black holes, the ordinary entropy of the weak coupling states agrees exactly with the expression for $`A/4`$ for the corresponding classical black hole states; see for a review of these results.
Since the formula for entropy has a nontrivial functional dependence on energy and charges, it is hard to imagine that this agreement between the ordinary entropy of the weak coupling states and black hole entropy could be the result of a random coincidence. Furthermore, for low energy scattering, the absorption/emission coefficients (“gray body factors”) of the corresponding weak coupling states and black holes also agree . This suggests that there may be a close physical association between the weak coupling states and black holes, and that the dynamical degrees of freedom of the weak coupling states are likely to at least be closely related to the dynamical degrees of freedom responsible for black hole entropy. However, it seems hard to imagine that the weak coupling states could be giving an accurate picture of the local physics occurring near (and within) the region classically described as a black hole. Thus, it seems likely that in order to attain additional new conceptual insights into the nature of black hole entropy in string theory, further significant progress will have to be made toward obtaining a proper local description of strong gravitational field phenomena.
## 4 Some unresolved issues and puzzles
I believe that the results described in the previous two sections provide a remarkably compelling case that black holes are localized thermal equilibrium states of the quantum gravitational field. Although none of the above results on black hole thermodynamics have been subject to any experimental or observational tests, the theoretical foundation of black hole thermodynamics is sufficiently firm that I feel that it provides a solid basis for further research and speculation on the nature of quantum gravitational phenomena. Indeed, it is my hope that black hole thermodynamics will provide us with some of the additional key insights that we will need in order to gain a deeper understanding of quantum gravitational phenomena. In this section, I will raise and discuss four major, unresolved issues in quantum gravitational physics that black hole thermodynamics may help shed light upon.
I. What is the nature of singularities in quantum gravity?
The singularity theorems of classical general relativity assert that in a wide variety of circumstances, singularities must occur in the sense that spacetime cannot be geodesically complete. However, classical general relativity should break down prior to the formation of a singularity. One possibility is that in quantum gravity, these singularities will be “smoothed over”. However, it also is possible that at least some aspects of the singularities of classical general relativity are true features of nature, and will remain present in quantum gravitational physics.
Black hole thermodynamics provides a strong argument that the singularity inside of a black hole in classical general relativity will remain present in at least some form in quantum gravity. In classical general relativity, the matter responsible for the formation of the black hole propagates into a singularity in the deep interior of the black hole. Suppose that the matter which forms the black hole possesses quantum correlations with matter that remains far outside of the black hole. Then it is hard to imagine how these correlations could be restored during the process of black hole evaporation; indeed, if anything, the Hawking process should itself create additional correlations between the exterior and interior of the black hole as it evaporates (see for further discussion). However, if these correlations are not restored, then by the time that the black hole has evaporated completely, an initial pure state will have evolved to a mixed state, i.e., “information” will have been lost. In the semiclassical picture, such information loss does occur and is ascribable to the propagation of the quantum correlations into the singularity within the black hole. If pure states continue to evolve to mixed states in a fully quantum treatment of the gravitational field, then at least the aspect of the classical singularity as a place where “information can get lost” must remain present in quantum gravity. This issue is frequently referred to as the “black hole information paradox”, and its resolution would tell us a great deal about the existence and nature of singularities in quantum gravity.
II. Is there a relationship between singularities and the second law?
The usual arguments for the validity of the second law of thermodynamics rest upon having very “special” (i.e., low entropy) initial conditions. Such special initial conditions in systems that we presently observe trace back to even more special conditions at the (classically singular) big bang origin of the universe. Thus, the validity of the second law of thermodynamics appears to be intimately related to the nature of the initial singularity . On the other hand, the arguments leading to the area increase theorem for black holes in classical general relativity would yield an area decrease theorem if applied to white holes. Thus, the applicability (or, at least, the relevance) of the second law of black hole mechanics appears to rest upon the fact that black holes can occur in nature but white holes do not. This, again, could be viewed as a statement about the types of singularities that can occur in nature . If, as argued here, the laws of black hole mechanics are the laws of thermodynamics applied to a system containing a black hole, then it seems hard to avoid the conclusion that a close relationship must exist between the second law of thermodynamics and the nature of what we classically describe as singularities.
III. Are statistical probabilities truly distinct from quantum probabilities?
Even in classical physics, probabilities come into play in statistical physics as ensembles representing our ignorance of the exact state of the system. On the other hand, in quantum physics, probabilities enter in a much more fundamental way: Even if the state of a system is known exactly, one can only assign a probability distribution to the value of observables. In quantum statistical physics, probabilities enter in both of these ways, and it would seem that these two ways should be logically distinguishable. However, density matrices have the odd feature of entering quantum statistical physics in two mathematically equivalent ways: (i) as an exact description of a particular (mixed) quantum state, and (ii) as a statistical ensemble of a collection of pure quantum states. In particular, one may choose to view a thermal density matrix either as a single, definite (mixed) state of the quantum system, or as a statistical ensemble of pure states. In the former case, the probability distribution for the values of observables would be viewed as entirely quantum in origin, whereas in the latter case, it would be viewed as partly statistical and partly quantum in origin; indeed, for certain observables (such as the energy of the system), the probabilities in the second case would be viewed as entirely statistical in origin. The Unruh effect puts this fact into a new light: When a quantum field is in the ordinary vacuum state, $`|0>`$, it is in a pure state, so the probability distribution for any observable would naturally be viewed by an inertial observer to be entirely quantum in origin. On the other hand, for an accelerating observer, the field is in a thermal state at temperature (11), and the probability distribution for “energy” (conjugate to the notion of time translation used by the accelerating observer) would naturally be viewed as entirely statistical in origin. Although there are no physical or mathematical inconsistencies associated with these differing viewpoints, they seem to suggest that there may be some deep connections between quantum probabilities and statistical probabilities; see for further exploration of these ideas.
IV. What is the definition/meaning of entropy in general relativity?
The issue of how to assign entropy to the gravitational field has been raised and discussed in the literature (see, in particular, ), although it seems clear that a fully quantum treatment of the degrees of freedom of the gravitational field will be essential for this issue to be resolved. However, as I will emphasize below, even the definition and meaning of the entropy of “ordinary matter” in general relativity raises serious issues of principle, which have largely been ignored to date.
First, it should be noted that underlying the usual notion of entropy for an “ordinary system” is the presence of a well defined notion of “time translations”, which are symmetries of the dynamics. The total energy of the system is then well defined and conserved. The definition and meaning of the usual notion of entropy for classical systems is then predicated on the assumption that generic dynamical orbits “sample” the entire energy shell, spending “equal times in equal volumes”; a similar assumption underlies the notion of entropy for quantum systems (see for further discussion). Now, an appropriate notion of “time translations” is present when one considers dynamics on a background spacetime whose metric possesses a suitable one-parameter group of isometries, and when the Hamiltonian of the system is invariant under these isometries. However, such a structure is absent in general relativity, where no background metric is present.<sup>4</sup><sup>4</sup>4Furthermore, it is clear that gross violations of any sort of “ergodic behavior” occur in classical general relativity on account of the irreversible tendency for gravitational collapse to produce singularities, from which one cannot then evolve back to uncollapsed states—although the semiclassical process of black hole evaporation suggests the possibility that ergodic behavior could be restored in quantum gravity. The absence of any “rigid” time translation structure can be viewed as being responsible for making notions like the “energy density of the gravitational field” ill defined in general relativity. Notions like the “entropy density of the gravitational field” are not likely to fare any better. It may still be possible to use structures like asymptotic time translations to define the notion of the total entropy of an (asymptotically flat) isolated system. (As is well known, total energy can be defined for such systems.) However, for a closed universe, it seems highly questionable that any meaningful notion will exist for the “total entropy of the universe” (including gravitational entropy).
The comments in the previous paragraph refer to serious difficulties in defining the notions of gravitational entropy and total entropy in general relativity. However, as I now shall explain, even in the context of quantum field theory on a background spacetime possessing a time translation symmetry—so that the “rigid” structure needed to define the usual notion of entropy of matter is present—there are strong hints from black hole thermodynamics that even our present understanding of the meaning of the “ordinary entropy” of matter is inadequate.
Consider the “thermal atmosphere” of a black hole. As discussed in Section 3 above, since the locally measured temperature is given by eq.(12), if we try to compute its ordinary entropy, a new ultraviolet catastrophe occurs: The entropy is infinite unless we put in a cutoff on the contribution from short wavelength modes.<sup>5</sup><sup>5</sup>5Since a field has infinitely many degrees of freedom, it threatens to make an infinite contribution to entropy. The old ultraviolet catastrophe—which plagued physics at the turn of the previous century—was resolved by quantum theory, which, in effect, provides a cutoff on the entropy contribution of modes with energy greater than $`kT`$, so that, at any $`T`$, only finitely many degrees of freedom are relevant. The new ultraviolet catastrophe arises because, on account of arbitrarily large redshifts, there now are infinitely many modes with energy less than $`kT`$. To cure it, it is necessary to have an additional cutoff (presumably arising from quantum gravity) on short wavelength modes. As already noted in Section 3, if we insert a cutoff of the order of the Planck scale, then the thermal atmosphere contributes an entropy of order the area, $`A`$, of the horizon (in Planck units). Note that the bulk of the entropy of the thermal atmosphere is highly localized in a “skin” surrounding the horizon, whose thickness is of order of the Planck length. The presence of this thermal atmosphere poses the following puzzle:
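The order of magnitude can be checked directly (a rough estimate in the spirit of ’t Hooft’s “brick wall” computation, in Planck units). By eq.(12) and the near-horizon behavior discussed in section 3, the local temperature at proper distance $`\rho `$ from the horizon is $`T\approx 1/2\pi \rho `$, and the entropy density of thermal radiation scales as $`s\sim T^3`$. Integrating over the thermal atmosphere down to a cutoff at proper distance $`\ell _\mathrm{c}`$ then gives

$$S\sim A\int _{\ell _\mathrm{c}}\frac{d\rho }{(2\pi \rho )^3}\sim \frac{A}{\ell _\mathrm{c}^2},$$

which is of order $`A`$ for a Planck-scale cutoff $`\ell _\mathrm{c}\sim 1`$, and which is dominated by the Planck-thickness “skin” nearest the horizon, as stated above.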
Puzzle: What is the “physical entropy” of the thermal atmosphere?
One possibility is that the thermal atmosphere should be assigned an entropy of order the area of the horizon, as indicated above. As discussed in Section 3, this would then account (in order of magnitude) for the entropy of black holes. However, this also would mean that there would be no room left to assign entropy to any “internal degrees of freedom” of the black hole, i.e., all of the entropy of a black hole would be viewed as residing in a Planck scale skin surrounding the horizon. To examine the implications of this view in a more graphic manner, consider the collapse of a very massive spherical shell of matter, say of mass $`M=10^{11}M_{\odot }`$. Then, as the shell crosses its Schwarzschild radius, $`R\approx 3\times 10^{11}\mathrm{km}`$, the spacetime curvature outside of the shell is much smaller than that at the surface of the Earth, and it will take more than another week before the shell collapses to a singularity. An unsophisticated observer riding on the shell would have no idea that any doom awaits him, and he would notice nothing of any significance occurring as the Schwarzschild radius is crossed. Nevertheless, within a time of order the Planck time after crossing of the Schwarzschild radius, the “skin” of thermal atmosphere surrounding the newly formed black hole will come to equilibrium with respect to the notion of time translation symmetry for the static Schwarzschild exterior. Thus, if entropy is to be assigned to the thermal atmosphere as above, then the degrees of freedom of the thermal atmosphere—which previously were viewed as irrelevant vacuum fluctuations making no contribution to entropy—suddenly become “activated” by the passage of the shell for the purpose of counting their entropy. A momentous change in the entropy of matter in the universe has occurred, and all of this entropy increase is localized near the Schwarzschild radius of the shell, but the observer riding the shell sees nothing.<sup>6</sup><sup>6</sup>6Similarly, if the entropy of the thermal atmosphere is to be taken seriously, then it would seem that during a period of uniform acceleration, an observer in Minkowski spacetime should assign an infinite entropy (since the horizon area is infinite) to a Planck sized neighborhood of a pair of intersecting null planes lying at a distance $`c^2/a`$ from him. Observers near these null planes presumably would be quite surprised by the assignment of a huge entropy density to an ordinary, empty region of Minkowski spacetime.
Another possibility is that the infinite (prior to the imposition of a cutoff) entropy of the thermal atmosphere is simply another infinity of quantum field theory that needs to be properly “renormalized”; when a proper renormalization has been done, the thermal atmosphere will make a negligible contribution to the total entropy. This view would leave room to attribute black hole entropy to “internal degrees of freedom of the black hole”, and would avoid the difficulties indicated in the previous paragraph. However, it raises serious new difficulties of its own. Consider a black hole enclosed in a reflecting cavity which has come to equilibrium with its Hawking radiation. Surely, far from the black hole, the entropy of the thermal radiation in the cavity should not be “renormalized away”. But this radiation is part of the thermal atmosphere of the black hole. Thus, one would have to postulate that at some distance from the black hole, the renormalization effects begin to become important. In order to avoid the difficulties of the previous paragraph, this would have to occur at a distance much larger than the Planck length. But, then, what happens to the entropy in a box of ordinary thermal matter as it is slowly lowered toward the black hole? By the time it reaches its “floating point”, its contents are indistinguishable from the thermal atmosphere. Thus, if the floating point is close enough to the black hole for the renormalization to have occurred, the entropy in the box must have disappeared, despite the fact that an observer inside the box still sees it filled with thermal radiation. Furthermore, if one lowers (or, more accurately, pushes) an empty box to the same distance from the black hole, it will have an entropy less than the box filled with radiation. Therefore, the empty box would have to be assigned a negative entropy.
I believe that the above puzzle suggests that we presently lack the proper conceptual framework with which to think about entropy in the context of general relativity. In any case, it is my belief that the resolution of the above issues will occupy researchers well into the next century, if not well into the next millennium.
This research was supported in part by NSF grant PHY 95-14726 to the University of Chicago.
# ANALYSIS OF LINE CANDIDATES IN GAMMA-RAY BURSTS OBSERVED BY BATSE
## Abstract
ABSTRACT. A comprehensive search of BATSE Spectroscopy Detector data from 117 GRBs has uncovered 13 statistically significant line candidates. The case of a candidate in GRB 930916 is discussed. In the data of SD 2 there appears to be an emission line at 46 keV; however, the line is not seen in the data of SD 7. Simulations indicate that the lack of agreement between the results from SD 2 and SD 7 is implausible but not impossible.
1) Department of Physics, Univ. of Alabama in Huntsville, Huntsville, AL 35899; 2) Center for Astrophysics and Space Science, Univ. of Calif., San Diego, CA 92093
KEYWORDS: gamma-ray bursts; spectra.
1. INTRODUCTION
A primary goal of adding the Spectroscopy Detectors (SDs) to BATSE was the detection of low-energy spectral features in gamma-ray bursts. At the time, the reported low-energy lines were interpreted as resonant cyclotron scattering in intense magnetic fields of neutron stars in our Galaxy. While the former theoretical explanation of spectral features is now untenable unless there are two populations of GRB sources, the observational status of spectral features is still important.
Each of the eight BATSE modules contains one SD, which consists of a 12.7 cm diameter by 7.6 cm thick crystal of NaI(Tl) viewed by a single 12.7 cm photomultiplier tube. Compared to the BATSE Large Area Detectors, the SDs have better energy resolution and a higher probability of full-energy absorption of incident gamma-rays, but a smaller area.
After the failure of our manual search to discover a single line , we developed an automatic computer search to comprehensively search the data of bright bursts . Because we do not a priori know the energy, starting time, or duration of spectral features, the procedure searches a wide range of centroid energies and timescales. Many combinations of consecutive spectra are examined: all singles, pairs, triples, and groups of 4, 5, 7, … spectra, up to the entirety of the high time resolution SD data. The presence of a line is tested by fitting each spectrum twice, first with Band’s “GRB” continuum function, then with the same continuum function plus a narrow line. A change in $`\chi ^2`$ of more than 20 identifies a line candidate. The present search is limited to low-energy features, so a closely spaced grid of trial centroids extending up to 100 keV is used. The LLD (lower-level discriminator) is typically just below 20 keV and, after requiring a continuum interval below the first search centroid, lines are tested starting above 20 keV.
The search was applied to 120,700 spectra from 117 GRBs. Because of the examination of trial spectra with a sliding starting time and a wide range of durations, many of these spectra have substantial overlap. Additionally, most of the spectra have very low signal-to-noise ratios and consequently a real spectral feature could not have been detected; these spectra were searched as controls. Thus the number of independent spectra with sufficient photons to support the detection of a real feature is much lower, below about 1000.
2. RESULTS
The comprehensive search identified 13 candidates. The $`\chi ^2`$ improvement from adding a line ranges from 20, the candidate threshold, to 50, corresponding to chance probabilities of $`4\times 10^{-5}`$ to $`10^{-11}`$. The probabilities are calculated for adding two parameters (line intensity and centroid; the intrinsic width is assumed to be narrow) to the spectral fit to a single spectrum. The energy range searched contains about five resolution elements; the number of independent spectra of sufficient intensity searched is below about 1000. Consequently at most one of these candidates might be a chance fluctuation in the ensemble.
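These probabilities follow from standard $`\chi ^2`$ statistics for two additional fit parameters (a sketch of the conversion, assuming $`\mathrm{\Delta }\chi ^2`$ follows a $`\chi ^2`$ distribution with two degrees of freedom):

$$P=e^{-\mathrm{\Delta }\chi ^2/2},$$

so that $`\mathrm{\Delta }\chi ^2=20`$ gives $`P=4.5\times 10^{-5}`$ and $`\mathrm{\Delta }\chi ^2=50`$ gives $`P=1.4\times 10^{-11}`$, reproducing the quoted range before any correction for the number of trials.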
An advantage of BATSE is the observation of bursts by several detectors, thereby enabling further tests of the reality of a candidate. Is the candidate detected with high statistical significance in the second detector? This would be confirmation of the reality of the feature. Confirmation might not be achieved for several reasons. If the feature is not highly significant in the second detector but a sensitivity analysis shows that this is reasonable based upon the line strength and the viewing angle of the detector, then the data are consistent. However, a contradiction obtains if a sensitivity analysis indicates that the feature should have been detected in a second detector but the feature was not detected.
We have previously reported details on two of the candidates. For GRB 940703 (trigger 3057), only SD 5 had a suitable gain and viewing angle. A highly significant 44 keV emission line ($`\mathrm{\Delta }\chi ^2`$ = 31.2, $`P=2\times 10^{-7}`$) was observed in a portion of that burst . Because of the gains or viewing angles of the other SDs, no consistency tests are possible for this candidate. The other candidate previously described, GRB 941017 (trigger 3245), was usefully observed by SDs 0 and 5 . An apparent emission line at 43 keV was discovered in the data of SD 0. A less significant feature appears in SD 5 at a strength consistent with that feature seen in SD 0. This appears to be one of the best cases for detecting a line: the data from two detectors are consistent and a joint fit of their data has $`\mathrm{\Delta }\chi ^2=28.6`$ ($`P=6\times 10^{-7}`$) for adding a line.
An interval of data from SD 2 for GRB 930916 (BATSE trigger 2533) which contains the peak and trailing portion of the event appears to have a significant line ($`\mathrm{\Delta }\chi ^2=24.1`$, $`P=6\times 10^{-6}`$). Because of the coarser time resolution of the data from SD 7, a slightly different interval must be used to compare the results from SD 2 and 7. Using the revised interval, the significance of the feature in SD 2 is slightly reduced to $`\mathrm{\Delta }\chi ^2=23.1`$ (Fig. 1).
There is no evidence for the feature in the data of SD 7 (Fig. 2). Not only does adding a line to the model result in no improvement in $`\chi ^2`$, imposing a line at the strength expected according to SD 2 results in a $`\chi ^2`$ increase of 9.7 (Fig. 2, right).
A quantitative sensitivity or consistency analysis is required to decide whether the failure to detect the candidate in SD 7 is reasonable because of some difference between the detectors. The two detectors have the same gain, and viewing angles of $`31^{\circ }`$ for SD 2 and $`64^{\circ }`$ for SD 7. Because the detectors are almost as thick as their diameters, the effective area has only a small dependence on burst angle.
We have performed simulations to quantitatively test the consistency of the two detectors. We use the joint fit to the data of the detectors that viewed the burst as the best compromise between the line strengths preferred from the data of each detector. Then, using the parameters of the joint fit photon model and the detector response model, 1000 simulated count rate datasets are made for each detector. These simulated spectra are fit to determine the range of line significances expected. A simulated significance above the observed significance for SD 2 ($`\mathrm{\Delta }\chi ^2=23.1`$) is obtained in 9% of the simulations, indicating that the observed significance is slightly better than expected. However, a simulated significance $`\mathrm{\Delta }\chi ^2`$ below 0.1 is obtained in only 2% of the simulations of SD 7, indicating that the observation is only marginally consistent with expectations.
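For concreteness, the following is a minimal, hypothetical Python sketch of such a consistency simulation. The continuum shape, line parameters, channel grid and count levels are illustrative placeholders rather than the actual BATSE SD models, which require the full detector response matrices.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
energies = np.linspace(20.0, 100.0, 40)        # keV channel centres (illustrative)

def continuum(e, amp, index):
    """Simple power-law continuum (a stand-in for the 'GRB' function)."""
    return amp * (e / 50.0) ** (-index)

def cont_plus_line(e, amp, index, line_amp):
    """Continuum plus a narrow Gaussian line fixed at 46 keV."""
    gauss = line_amp * np.exp(-0.5 * ((e - 46.0) / 3.0) ** 2)
    return continuum(e, amp, index) + gauss

def delta_chi2(counts, errors):
    """Chi^2 improvement from adding the line to the continuum fit."""
    pc, _ = curve_fit(continuum, energies, counts, p0=[counts.max(), 2.0],
                      sigma=errors, maxfev=10000)
    pl, _ = curve_fit(cont_plus_line, energies, counts,
                      p0=[counts.max(), 2.0, 0.0], sigma=errors, maxfev=10000)
    chi2 = lambda f, p: np.sum(((counts - f(energies, *p)) / errors) ** 2)
    return chi2(continuum, pc) - chi2(cont_plus_line, pl)

# "Joint fit" model used to generate the simulated datasets (example values)
true_rate = cont_plus_line(energies, 400.0, 2.0, 60.0)
sims = []
for _ in range(1000):
    counts = rng.poisson(true_rate).astype(float)
    errors = np.sqrt(np.maximum(counts, 1.0))
    sims.append(delta_chi2(counts, errors))
sims = np.array(sims)

# Fraction of simulations at least as significant as the SD 2 detection
print("fraction with dchi2 > 23.1:", np.mean(sims > 23.1))
```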
3. CONCLUSIONS
Consistency between the results obtained from several detectors is required for a believable result. In the case of GRB 930916, the consistency is marginal. The event could be understood as a 9% probable fluctuation towards high significance in SD 2 and a 2% probable fluctuation towards insignificance in SD 7. If this were the only such case, such an explanation would be plausible. There are several other such cases among the 13 candidates, raising the possibility of a systematic error that invalidates the statistical significances. We have performed many tests of the reliability of the SDs and for systematics in the line analysis: the SDs and the data pass all tests. Until we have a better understanding of these apparent inconsistencies between the data collected from different detectors, the reality of all of the BATSE line candidates is unclear.
A key lesson is the power of observations from more than one detector for testing the reality of a possible line feature. Agreement would be powerful confirmation; disagreement might indicate a systematic error.
REFERENCES
1. Band, D. L., Ryder, S., Ford, et al. 1996, ApJ, 458, 746
2. Briggs, M. S., Band, D. L., et al. 1996, in Gamma-Ray Bursts, AIP Conf. Proc. 384, 153
3. Briggs, M. S., Band, D. L., et al. 1998, in Gamma-Ray Bursts, AIP Conf. Proc. 428, 299
4. Briggs, M. S., et al., 1998, ApJ, in preparation
5. Paciesas, W. S., Briggs, M. S., et al. 1996, in Gamma-Ray Bursts, AIP Conf. Proc. 384, 213
# Disc instability models, evaporation and radius variations
## Abstract
We show that the outcome of disc instability models is strongly influenced by boundary conditions such as the position of the inner and outer disc edges. We discuss other sources of uncertainties, such as the tidal torque, and we conclude that disc illumination, disc size variations and a proper prescription for the tidal torque must be included in models if one wishes to extract meaningful physical information on e.g. viscosity from the comparison of predicted and observed lightcurves.
## 1. Introduction
The bases of the thermal-viscous accretion disc instability were established 25 years ago (see Smak in these proceedings for an historical overview), and despite the successes of this model in explaining dwarf nova (DN) outbursts and, to a lesser extent, soft X-ray transients (SXTs), there are still a number of observational facts that conflict with the predictions of the standard version of the model. One can mention, for example, the detection of a very significant quiescent X-ray flux in both DNs and SXTs at a level that exceeds theoretical predictions by up to six orders of magnitude (Lasota, 1996). One could also quote the existence of steady bright X-ray sources which should be unstable, or of ER UMa systems in which the supercycle appears to be too short to be accounted for by the tidal-thermal instability model. Finally, one should mention the fact that, with physically plausible parameters, most authors produce light curves (often not published) that bear no resemblance whatsoever to any observed one.
Some of these discrepancies have already received an explanation: the X-ray detection of systems in quiescence requires the inner disc radius be larger than that of the compact object as a result of either evaporation or of the presence of a magnetosphere; the stability of the accretion disc in bright X-ray sources is influenced by illumination from the central X-ray source. Others remain, such as the difficulty of reproducing similar peak luminosities in systems exhibiting various outburst shapes. One should also keep in mind that, despite recent progress, the very nature of viscosity in accretion discs remains largely unknown and that the $`\alpha `$ prescription is a mere parameterization of our ignorance. It is thus extremely important to disentangle numerical and physical effects when modeling accretion discs if one wishes to infer a refined viscosity prescription from observations of accretion disc outbursts.
In this paper, we describe our code, giving its advantages and limitations; we then discuss the influence of physical effects such as variations of the inner and outer disc radius. We show that all these effects (together with irradiation) must be accounted for; these might play a major role in superoutbursts of SU UMa and ER UMa systems.
## 2. Numerical modeling
### 2.1. Basics
The numerical code used here is described in detail in Hameury et al. (1998); we briefly recall its main characteristics. We solve the standard mass continuity and angular momentum conservation equations of a geometrically thin Keplerian disc, in which we take into account the tidal torque, assumed to be of the form (Smak 1984; Papaloizou & Pringle 1977):
$$T_{\mathrm{tidal}}=c\omega r\nu \mathrm{\Sigma }\left(\frac{r}{a}\right)^5$$
(1)
where $`\omega `$ is the angular velocity of the binary motion, $`r`$ the distance to the compact object, $`\nu `$ the viscosity, $`\mathrm{\Sigma }`$ the surface density in the disc, and $`a`$ the orbital separation. The constant $`c`$ is a free parameter that is chosen so as to give the required average outer disc radius. The thermal equation is taken from Cannizzo (1993):
$$\frac{\partial T_\mathrm{c}}{\partial t}=\frac{2(Q^+-Q^-+J)}{C_P\mathrm{\Sigma }}-\frac{\mathcal{R}T_\mathrm{c}}{\mu C_P}\frac{1}{r}\frac{\partial (rv_\mathrm{r})}{\partial r}-v_\mathrm{r}\frac{\partial T_\mathrm{c}}{\partial r},$$
(2)
where $`T_\mathrm{c}`$ is the central disc temperature, $`Q^+`$ and $`Q^-`$ are the surface heating and cooling rates respectively, $`C_P`$ is the heat capacity at constant pressure, $`\mathcal{R}`$ is the gas constant, $`\mu `$ is the mean molecular weight, $`v_\mathrm{r}`$ is the radial velocity and $`J`$ is a radial energy flux which we assume here to be carried by turbulent eddies, and which takes the form:
$$J=\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{3}{2}\nu C_P\mathrm{\Sigma }\frac{\partial T_\mathrm{c}}{\partial r}\right).$$
(3)
The boundary conditions for the energy equation are unimportant as this equation is dominated by the $`Q^+`$ and $`Q^-`$ terms except in the transition fronts; we take $`\partial T/\partial r=0`$ to minimize numerical problems. We have as usual $`\mathrm{\Sigma }=0`$ at the inner edge, and we assume that matter is added at the very outer edge, whose position $`r_{\mathrm{out}}`$ is not specified, which gives two conditions at the outer edge:
$$\dot{M}_{\mathrm{tr}}=2\pi r\mathrm{\Sigma }(\dot{r}_{\mathrm{out}}-v)$$
(4)
and
$$\dot{M}_{\mathrm{tr}}\left[1-\left(\frac{r_\mathrm{k}}{r_{\mathrm{out}}}\right)^{1/2}\right]=3\pi \nu \mathrm{\Sigma },$$
(5)
These equations, in which $`r_\mathrm{k}`$ denotes the circularization radius of the matter transferred from the secondary, are solved with their boundary conditions using a fully implicit code in which the equations are discretized on an adaptive grid defined so as to resolve the temperature and density gradients. As a result, heating and cooling fronts are always resolved. Moreover, as the code is implicit, large time steps can be used, whatever the spatial resolution, contrary to explicit codes for which the time step has to be smaller than the thermal time ($`1/\alpha `$ times the dynamical time) at the inner edge. This allows us to cover a large number of cycles, so that the memory of the initial conditions has been lost.
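As an illustration of why an implicit scheme lifts the explicit stability constraint, the following toy Python sketch advances a one-dimensional diffusion equation by one backward-Euler step on a nonuniform grid. It is a schematic analogue only, not the disc code: the grid, equation and boundary conditions are illustrative assumptions.

```python
# Toy backward-Euler step for dS/dt = d2S/dx2 on a nonuniform grid,
# illustrating stability for time steps far beyond the explicit CFL limit.
import numpy as np

def implicit_step(S, x, dt):
    """Advance S by one backward-Euler step of dS/dt = d2S/dx2."""
    n = len(x)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        dxm, dxp = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = -2.0 * dt / (dxm * (dxm + dxp))   # coupling to i-1
        A[i, i + 1] = -2.0 * dt / (dxp * (dxm + dxp))   # coupling to i+1
        A[i, i] = 1.0 - A[i, i - 1] - A[i, i + 1]       # diagonal term
    A[0, 0] = A[-1, -1] = 1.0                           # fixed boundaries
    return np.linalg.solve(A, S)

x = np.linspace(0.0, 1.0, 101) ** 2   # nonuniform grid, refined near x = 0
S = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial narrow pulse
S = implicit_step(S, x, dt=1.0)       # stable even though explicit stability
                                      # would demand dt ~ dx^2 ~ 1e-8 here
```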
The results obtained using this code do reproduce a number of dwarf nova outburst characteristics, such as the occurrence of inside-out outbursts for low mass transfer rates, and outside-in outbursts for larger values of $`\dot{M}_{\mathrm{tr}}`$. However, we also obtain small, inside-out, intermediate outbursts for high primary masses (i.e. low inner disc radius). Such weak outbursts are not observed, which means that we might be missing an essential ingredient when modeling these outbursts.
### 2.2. Heating and cooling fronts
Because we use an adaptive grid, we are able to resolve both the heating and cooling fronts. We confirm (Menou, Hameury & Stehle 1998) that the propagation velocity of these fronts is of the order of $`\alpha c_s`$, where $`c_s`$ is the sound speed in the hot gas, with heating fronts propagating more rapidly than cooling fronts. Moreover, whereas the general properties of cooling fronts do not vary from one outburst to the other, those of heating fronts depend sensitively upon the actual profile of $`\mathrm{\Sigma }`$ as a function of $`r`$.
We find that the width $`w`$ of the heating and cooling fronts is proportional to the disc scale height $`h`$, contrary to Cannizzo, Chen & Livio (1995) who obtain $`w\propto (hr)^{1/2}`$; we get the same order of magnitude, but a different radius dependence. $`w`$ is a few times $`h`$ for the heating fronts, and much larger for cooling fronts, so that the thin disc equations still apply.
We also confirm the self-similarity of the inner, hot disc during the cooling phase that was found by Vishniac (1997), but we describe this disc property in a slightly different way. $`\mathrm{\Sigma }`$ is found to scale naturally with $`\mathrm{\Sigma }_{\mathrm{min}}`$, the minimum value of $`\mathrm{\Sigma }`$ on the hot upper branch in the $`\mathrm{\Sigma }`$–$`T_{\mathrm{eff}}`$ diagram. In this regime, as already noted by Vishniac (1997), the inner hot disc empties essentially by transferring matter to the outer disc that has returned to the cold state, and not by accretion onto the compact object. As a consequence, there is a density jump in the passage of a cooling front; in the self-similar regime, we find that $`\mathrm{\Delta }\mathrm{log}(\mathrm{\Sigma })`$ is a constant. If this constant is larger than $`\mathrm{log}(\mathrm{\Sigma }_{\mathrm{max}}/\mathrm{\Sigma }_{\mathrm{min}})`$, where $`\mathrm{\Sigma }_{\mathrm{max}}`$ is the maximum density on the cool branch, self-similarity can never be reached, and reflares are obtained. This happens for high primary masses (a few $`\mathrm{M}_{\odot }`$) typical of black hole X-ray transients, as well as in cases where the compact object is so hot that the inner portion is stabilized as a result of illumination, as proposed by King (1997). In this case, $`\mathrm{\Sigma }_{\mathrm{max}}=\mathrm{\Sigma }_{\mathrm{min}}`$ at the transition point between the hot inner disc and the outer disc that can be subject to the thermal-viscous instability, and many reflares are observed (Hameury, Lasota & Dubus, 1999).
## 3. Variations of the inner disc radius: evaporation, magnetospheres
Both dwarf novae and soft X-ray transients emit a significant flux in X-rays during quiescence (typically 10<sup>32</sup> erg s<sup>-1</sup> for SXTs, and more than 10<sup>30</sup> erg s<sup>-1</sup> for dwarf novae). As the whole disc must stay on the cool branch during quiescence, the local mass transfer rate $`\dot{M}`$ must be smaller than the critical value $`\dot{M}_{\mathrm{crit}}`$ above which this stable cool branch disappears:
$$\dot{M}<\dot{M}_{\mathrm{crit}}=4\times 10^{15}\alpha _\mathrm{c}^{0.04}M_1^{-0.89}r_{10}^{2.67}\mathrm{g}\mathrm{s}^{-1}$$
(6)
where $`\alpha _\mathrm{c}`$ is the viscosity parameter on the cool branch, $`M_1`$ is the primary mass and $`r_{10}`$ is the radius in 10<sup>10</sup> cm. This can be written as a constraint on the inner radius $`r_{\mathrm{in}}`$:
$$r_{\mathrm{in}}>8\times 10^8\left(\frac{L_\mathrm{X}}{10^{32}\mathrm{erg}\mathrm{s}^{-1}}\right)^{0.38}\left(\frac{M_1}{\mathrm{M}_{\odot }}\right)^{0.33}\left(\frac{\eta }{0.1}\right)^{-0.38}\mathrm{cm}$$
(7)
where $`\eta `$ is the efficiency of accretion; this is much larger than the neutron star radius or the radius of the innermost stable orbit for SXTs, and is also usually larger than the radius of the white dwarf in DNs.
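As a numerical illustration, Eq. (7) follows from inverting Eq. (6) with $`\dot{M}=L_\mathrm{X}/\eta c^2`$; the minimal sketch below (Python) does this inversion under stated assumptions ($`\eta =0.1`$, the $`\alpha _\mathrm{c}`$ factor set to unity), and reproduces the quoted lower limit to within a factor of order unity.

```python
# Sketch: invert Eq. (6), with Mdot = L_X / (eta c^2), to get the minimum
# quiescent inner disc radius of Eq. (7).  Assumptions: the alpha_c factor
# is set to 1 and eta = 0.1; c^2 ~ 9e20 erg/g.

def r_in_min(L_X, M1=1.0, eta=0.1):
    """Lower limit on r_in (in cm) for quiescent X-ray luminosity L_X (erg/s)."""
    mdot = L_X / (eta * 9e20)                       # accretion rate in g/s
    r10 = (mdot * M1**0.89 / 4e15) ** (1.0 / 2.67)  # Eq. (6) solved for r_10
    return r10 * 1e10

# SXT-like quiescence (L_X ~ 1e32 erg/s): a few 1e8 cm, i.e. far larger
# than a neutron star radius (~1e6 cm)
print(r_in_min(1e32, M1=1.4))
```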
Several mechanisms may cause the inner disc to be truncated; if the compact object is a magnetized white dwarf, the Alfvén radius exceeds that of the white dwarf in quiescence as soon as the magnetic moment of the white dwarf is larger than 10<sup>30</sup> G cm<sup>3</sup>. Evaporation (see e.g. Meyer & Meyer-Hofmeister 1994; Liu, Meyer & Meyer-Hofmeister 1998) leading to winds or to the formation of an ADAF is also an attractive possibility, but evaporation rates are quite uncertain.
These mechanisms can be accounted for either by introducing a mass loss rate in the mass conservation equation, or by assuming that the inner disc radius is a specified function of the mass accretion rate onto the compact object; both prescriptions lead to qualitatively similar results. This differs significantly from a situation in which a hot, geometrically thin and optically thick disc would form close to the compact object as a result of e.g. illumination (King, 1997).
### 3.1. The case of WZ Sge
WZ Sge is a dwarf nova which exhibits long outbursts with a recurrence time of about 30 years. Such a long recurrence time, together with the fact that the amount of mass transferred during an outburst is large, requires a very small value of the viscosity parameter $`\alpha `$ (typically $`\alpha \sim 10^{-5}`$$`10^{-3}`$) in the framework of the standard model (Smak, 1993; Osaki, 1995; Meyer-Hofmeister, Meyer & Liu, 1998), but the reason for such a low $`\alpha `$ in this particular system is left unexplained.
Hameury, Lasota & Huré (1997a) proposed an alternative possibility with a “normal” value of $`\alpha `$ ($`\alpha =0.01`$) in which the inner part of the accretion disc is disrupted by either a magnetic field (Lasota, Kuulkers & Charles 1999) or evaporation, so that the disc is stable (or very close to being stable) in quiescence, as the mass transfer rate is very low and the disc can sit on the cool lower branch of the thermal equilibrium curve. Outbursts are triggered by an enhanced mass transfer which renders the disc unstable. The resulting outburst is strongly affected by the irradiation of the secondary star, as the effective temperature of the irradiated hemisphere is observed to reach 16,000 – 17,000 K at maximum, i.e. it increases by a factor of 10 with respect to quiescence. This results in a large increase of the mass transfer rate; a straightforward application of the formulae given in Hameury, King & Lasota (1986) shows that $`\dot{M}`$ can increase by 2 to 3 orders of magnitude, and become comparable to the mass accretion rate onto the white dwarf.
Numerical models in which one includes both evaporation (required by the long recurrence time) and illumination (required to account for the total amount of mass transferred) reproduce the overall light curve of WZ Sge.
### 3.2. The case of GRO J1655-40
GRO J1655-40 is a soft X-ray transient in which a 6 day delay between the optical and X-ray rises to outburst was observed (Orosz et al. 1997). Hameury et al. (1997b) showed that the quiescent optical and X-ray observations of this source indicate the presence of an ADAF which extends to a radius of about 10<sup>10</sup> cm, with a mass transfer rate through the ADAF of a few times 10<sup>16</sup> g s<sup>-1</sup>.
Hameury et al. (1997b) also showed that the presence of such an ADAF, which disrupts the standard accretion disc in the vicinity of the black hole, is required to account for the X-ray delay and the optical rise of the outburst. If the disc extended down to the last stable orbit, one would expect inside-out outbursts that do not produce any X-ray delay at all. If on the other hand one assumes that the $`\mathrm{\Sigma }(r)`$ density profile in the disc is for some unspecified reason far from being relaxed, one could imagine that large amounts of matter accumulate at the outer edge, and that outside-in outbursts can occur. One can then reproduce the observed X-ray delay, but the disc brightens much too fast in the optical.
### 3.3. The case of SS Cyg
Whereas the interpretation of UV delays relies on a precise modeling of the spectra of accretion discs, and hence on the way viscous heating is distributed vertically in the disc, the EUV flux, being emitted by the boundary layer, can be directly translated into a mass accretion rate onto the compact object. EUVE observations of SS Cyg by Mauche (1996) showed that there can be a delay as large as 1 day between the EUV and the optical. Such a delay can be easily accounted for if the inner parts of the disc evaporate at a rate $`\dot{M}_{\mathrm{ev}}=1.5\times 10^{16}(r/10^9\mathrm{cm})^2`$ g s<sup>-1</sup> (Hameury et al., 1999), in contrast with what would happen if the sole effect of irradiation of the disc by the white dwarf were to heat it up. In the latter case, the EUV delay is indeed longer than in the standard case, but cannot be as long as one day; moreover, as mentioned earlier, reflares are inevitable.
## 4. Outer radius variations
Hameury et al. (1998) have shown that numerical codes in which the outer radius is allowed to vary produce results which are qualitatively different from those in which $`r_{\mathrm{out}}`$ is kept fixed. When $`r_{\mathrm{out}}`$ is kept fixed, one frequently obtains a large number of small outbursts between major ones; in addition, it is extremely difficult to get outside-in outbursts even for large mass transfer rates. The basic reasons for these differences are (i) the fact that during an outburst, matter is pushed to larger radii, implying a decrease of $`\mathrm{\Sigma }`$ at the outer edge of the disc, so that the cooling wave starts earlier and the disc is less depleted, and (ii) the contraction of the disc in quiescence under the effect of the tidal torque, which causes an increase of the local mass transfer rate to values that can exceed the mass loss rate from the secondary. Both effects facilitate the accumulation of matter in the outer region of the disc, and hence lead to outside-in outbursts.
Such disc radius variations are observed during outburst cycles in dwarf novae, with a rapid rise during outburst and a slow contraction of $`\sim `$ 20% during decline. However, radius variations are governed by the tidal torque $`T_{\mathrm{tidal}}(r)`$, for which no realistic analytic expression is available at present, in particular close to the 3:1 resonance radius. This uncertainty is particularly worrying for SU UMa systems, as $`T_{\mathrm{tidal}}(r)`$ is an essential ingredient of the tidal-thermal instability (Osaki 1996).
## 5. Conclusion
Disruption of the inner parts of the disc by either evaporation or a magnetic field can easily account for the observed X-ray/EUV delays. Such holes in a disc are also required by X-ray observations of quiescent systems, and by the observed long term cycles, which do not exhibit the small inside-out outbursts that naturally appear in models when the inner radius of the disc is small. It is also very important to let the outer radius of the disc vary, and a better estimate of the tidal torque than presently available would be required. Finally, illumination by both the compact object and the disc itself is important, even in the case of cataclysmic variables. We show in Fig. 1 an example of light curves in which all these effects have been included; as can be seen, this curve is quite reminiscent of ER UMa systems, and has been obtained without assuming any tidal instability. This does not mean that the tidal-thermal instability model for superoutbursts is incorrect; it does imply that effects that have been neglected until now must be included in the models.
## 6. References
Cannizzo J.K. 1993, ApJ 419, 318
Cannizzo J.K., Chen W., Livio M. 1995, ApJ 454, 880
Hameury J.-M., Lasota J.-P., Huré J.-M. 1997a, MNRAS 287, 937
Hameury J.-M., Lasota J.-P., McClintock J.E., Narayan R. 1997b, ApJ 489, 234
Hameury J.-M., Lasota J.-P., Dubus G. 1999, MNRAS, in press
Hameury J.-M., Menou K., Dubus G., Lasota J.-P., Huré J.-M. 1998, MNRAS 298, 1048
King A.R. 1997, MNRAS 288, L16
Lasota J.-P. 1996, in Proceedings of IAU Symposium 165, Compact Stars in Binaries, J. van Paradijs et al. (eds.), p. 43
Lasota J.-P., Kuulkers E., Charles P.A. 1999, MNRAS, submitted
Liu B.F., Meyer F., Meyer-Hofmeister E. 1998, A&A, submitted
Mauche C.W. 1996, in Astrophysics in the Extreme Ultraviolet, IAU Coll. 152, Bowyer S., Malina R.F. (eds.), Kluwer, Dordrecht, p. 317
Menou K., Hameury J.-M., Stehle R. 1998, MNRAS, in press
Meyer F., Meyer-Hofmeister E. 1994, A&A 288, 175
Meyer-Hofmeister E., Meyer F., Liu B. 1998, A&A, in press
Orosz J.A., Remillard R.A., Bailyn C.D., McClintock J.E. 1997, ApJ 478, L83
Osaki Y. 1995, PASJ 47, 47
Osaki Y. 1996, PASP 108, 39
Papaloizou J., Pringle J.E. 1977, MNRAS 181, 441
Smak J. 1984, Acta Astr. 34, 161
Smak J. 1993, Acta Astr. 43, 101
Vishniac E.T. 1997, ApJ 482, 414
# Long range Néel order in the triangular Heisenberg model
\[
## Abstract
We have studied the Heisenberg model on the triangular lattice using several Quantum Monte Carlo (QMC) techniques (up to 144 sites), and exact diagonalization (ED) (up to 36 sites). By studying the spin gap as a function of the system size we have obtained robust evidence for a gapless spectrum, confirming the existence of long range Néel order. Our best estimate is that in the thermodynamic limit the order parameter $`m^{\dagger }=0.41\pm 0.02`$ is reduced by about 59% from its classical value, and the ground state energy per site is $`e_0=-0.5458\pm 0.0001`$ in units of the exchange coupling. We have identified the important ground state correlations at short distance.
Historically, the antiferromagnetic spin-1/2 Heisenberg model on the triangular lattice was the first Hamiltonian proposed for a microscopic realization of a spin liquid ground state (GS):
$$\widehat{H}=J\sum _{\langle i,j\rangle }\widehat{𝐒}_i\cdot \widehat{𝐒}_j,$$
(1)
where $`J`$ is the nearest-neighbors antiferromagnetic exchange and the sum runs over nearest-neighbor pairs of spin-$`1/2`$ operators. At the classical level the minimum energy configuration is the well known $`120^{\circ }`$ Néel state. The question whether the combined effect of frustration and quantum fluctuations favors disordered gapped resonating valence bonds (RVB) or long range Néel type order is still under debate. In fact, there has been a considerable effort to elucidate the nature of the GS, and the results of numerical and analytical works are controversial. From the numerical point of view, ED, which is limited to small lattice sizes, provides a very important feature: the spectra of the lowest energy levels order with increasing total spin, reminiscent of the Lieb-Mattis theorem for bipartite lattices, and are consistent with the symmetry of the classical order parameter . However, other attempts to perform a finite size scaling study of the order parameter indicate a scenario close to a critical one, or no magnetic order at all.
The variational Quantum Monte Carlo (VMC) method allows one to extend the numerical calculations to fairly large system sizes, at the price of making some approximations, which are determined by the quality of the variational wavefunction (WF). Many WFs have been proposed in the literature, and the lowest GS energy estimate was obtained with the long range ordered type. In particular, starting from the classical Néel state, Huse and Elser introduced important two and three spin correlation factors in the WF:
$$|\psi _\mathrm{V}\rangle =\sum _x\mathrm{\Omega }(x)\mathrm{exp}\left(\frac{\gamma }{2}\sum _{i,j}v(ij)S_i^zS_j^z\right)|x\rangle ,$$
(2)
where $`|x\rangle `$ is an Ising spin configuration specified by assigning the value of $`S_i^z`$ for each site, and
$$\mathrm{\Omega }(x)=T(x)\mathrm{exp}\left[i\frac{2\pi }{3}\left(\sum _{i\in \mathrm{B}}S_i^z-\sum _{i\in \mathrm{C}}S_i^z\right)\right]$$
(3)
represents the three-sublattice (say A, B and C) classical Néel state in the $`xy`$-plane, multiplied by the three spin term
$$T(x)=\mathrm{exp}\left(i\beta \sum _{i,j,k}\gamma _{ijk}S_i^zS_j^zS_k^z\right),$$
(4)
defined by the coefficients $`\gamma _{ijk}=0,\pm 1`$, appropriately chosen to preserve the symmetries of the classical Néel state, and by an overall factor $`\beta `$ as discussed in Ref. . Since the Hamiltonian is real and commutes with the $`z`$-component of the total spin, $`\widehat{S}_{\mathrm{tot}}^z`$, a better variational WF on a finite size is obtained by taking the real part of Eq. (2) projected onto the $`S_{\mathrm{tot}}^z=0`$ subspace.
For the two body Jastrow potential $`v(r)`$ it is also possible to work out an explicit Fourier transform $`v_q`$, based on consistency with linear spin wave (SW) results and a careful treatment of the singular modes coming from the $`SU(2)`$ symmetry breaking assumption. This analysis gives $`v_q=1-\sqrt{(1+2\gamma _q)/(1-\gamma _q)}`$ for $`q\ne 0`$ and $`0`$ otherwise, where $`\gamma _q=\left[\mathrm{cos}(q_x)+2\mathrm{cos}(q_x/2)\mathrm{cos}(\sqrt{3}q_y/2)\right]/3`$ and the $`q`$-momenta are the ones allowed in a finite size with $`N`$ sites. For a better control of the finite size effects we have chosen to work with clusters having all the spatial symmetries of the infinite system .
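As a small illustration, the sketch below (Python) evaluates $`\gamma _q`$ and the Jastrow potential $`v_q`$ quoted above at a representative momentum; it is only a numerical rendering of these formulas, with the $`v_{q=0}=0`$ convention put in by hand.

```python
import numpy as np

def gamma_q(qx, qy):
    return (np.cos(qx) + 2.0*np.cos(qx/2.0)*np.cos(np.sqrt(3.0)*qy/2.0)) / 3.0

def v_q(qx, qy):
    # v_{q=0} is set to zero separately, as stated in the text
    g = gamma_q(qx, qy)
    return 1.0 - np.sqrt((1.0 + 2.0*g) / (1.0 - g))

# at the corner K = (4*pi/3, 0) of the Brillouin zone, gamma_q = -1/2 and v_q = 1
print(gamma_q(4.0*np.pi/3.0, 0.0), v_q(4.0*np.pi/3.0, 0.0))
```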
In the square lattice antiferromagnet (AF) the classical part by itself determines exactly the phases (signs) of the GS in the chosen basis, the so-called Marshall sign. For the triangular case the exact phases are unknown and the classical part is not enough to fix them correctly. Therefore, one has to introduce the three-body correlations of Eq. (4). Although these do not provide the exact answer, they allow one to adjust the signs of the WF in a non trivial way without changing the underlying classical Néel order. In this respect it is useful to define an average sign of the variational WF relative to the normalized exact GS $`|\psi _0\rangle `$ as
$$s=\sum _x|\psi _0(x)|^2\mathrm{sgn}\left(\psi _\mathrm{V}(x)\psi _0(x)\right),$$
(5)
with $`\psi (x)=\langle x|\psi \rangle `$.
We have compared the variational calculation with the exact GS obtained by ED on the $`N=36`$ cluster. For completeness we have considered the more general Hamiltonian with exchange easy-plane anisotropy $`\alpha `$, ranging from the $`XY`$ case ($`\alpha =0`$) to the standard spin isotropic case ($`\alpha =1`$). As shown in Tab. I, in the variational approach the most important parameter, particularly for $`\alpha \simeq 1`$, is the one, $`\beta `$, controlling the triplet correlations. Though the overlap of our best variational WF with the exact GS is rather poor, the average sign $`s`$ is in general very much improved by the triplet term. Our interpretation is that short range many body correlations are very important to reproduce the relative phases of the GS on each Ising configuration. The optimal parameters for our initial guess $`\psi _\mathrm{V}`$ of the GS $`\psi _0`$ are expected to be very weakly size-dependent, but they are very difficult to determine accurately for large sizes. For $`\alpha =1`$ and $`N=36`$, where ED is still possible, our best guess for the GS WF - with the maximum overlap and average sign - is slightly different from the one determined by optimization of the energy. Since the forthcoming calculations, which significantly improve the VMC, are more sensitive to the accuracy of the WF than to that of the GS energy, henceforth we have chosen to work with $`\beta =0.23`$ for all the system sizes.
One way to get more accurate GS properties is to use the Green Function MC technique (GFMC). As in the fermionic case, for frustrated spin systems this numerical method is plagued by the well-known sign problem. Recently, to alleviate the above mentioned instability, the Fixed-Node (FN) GFMC scheme has been introduced as a variational technique, typically much better than the conventional VMC. As shown in Fig. 1, and as also pointed out in Ref. , for frustrated spin systems this technique does not represent a significant advance compared to VMC, leading therefore to results biased by the variational ansatz.
In order to overcome this difficulty we have used a recently developed technique: GFMC with Stochastic Reconfiguration (SR) , which allows one to release the FN approximation in a controlled but approximate way, yielding, as shown in Fig. 1, much lower energies, even for the largest sizes where ED is not possible. In the appropriate limit of a large number of walkers and a high SR frequency, the residual bias introduced by the SR depends only on the number $`p`$ of operators used to constrain the GFMC Markov process. These constraints, analogously to the FN one, allow simulations without numerical instabilities. In principle the exact answer can be obtained, within statistical errors, provided $`p`$ equals the huge Hilbert space dimension. In practice it is necessary to work with small $`p`$, and an accurate selection of physically relevant operators is crucial. As can be easily expected, the short range correlation functions $`\widehat{S}_i^z\widehat{S}_j^z`$ and $`(\widehat{S}_i^+\widehat{S}_j^{-}+\widehat{S}_i^{-}\widehat{S}_j^+)`$ contained in the Hamiltonian give a sizable improvement of the FN GS energy when they are put in the SR procedure. In order to be systematic we have included in the SR the short range correlations generated by $`\widehat{H}^2`$ (see Fig. 2), averaged over all spatial symmetries commuting with the Hamiltonian. These local correlations are particularly important to obtain quite accurate and reliable estimates not only of the GS energy but also of the mixed average of the total spin squared $`\widehat{S}_{\mathrm{tot}}^2`$ and of the order parameter $`m^2`$ (defined as in Ref. ). These quantities are easily estimated within the GFMC technique and compared with the exact values computed by ED for $`N=36`$ in Tab. II. In particular it is interesting that, starting from a variational WF with no definite spin, the GS singlet is systematically recovered by means of the SR technique. Furthermore, as shown in Fig. 1, the quality of our results is similar to the variational one obtained by P. Sindzingre et al. , using a long range ordered RVB wavefunction. The latter approach is almost exact for small lattices, but the sign problem is already present at the variational level, and the calculation has not been extended to high statistical accuracy or to $`N>48`$.
Having obtained an estimate for the GS energy at least an order of magnitude more accurate than our best variational guess, it appears possible to obtain physical features, such as a gap in the spin spectrum, that are not present at the variational level. For instance, in the frustrated $`J_1-J_2`$ spin model, with the same technique and a similar accuracy, a gap in the spin spectrum was found in the thermodynamic limit, starting with a similarly ordered and therefore gapless variational WF .
In the isotropic triangular AF, the gap to the first spin excitation is rather small. Furthermore, for the particular choice of the guiding WF (2), the translational symmetry of the Hamiltonian is preserved only if projected onto subspaces with total $`S_{\mathrm{tot}}^z`$ a multiple of three. Such an $`S=3`$ excitation belongs to the low-lying states of energy $`E_S`$ and spin $`S`$ of the ordered quantum AF, behaving as $`E_S-E_0\propto S(S+1)/N`$. If instead $`E_S-E_0`$ remains finite for $`S=3`$ and $`N\to \mathrm{\infty }`$, this implies a disordered GS. For all the above reasons we have studied the gap to the spin $`S=3`$ excitation as a function of the system size. As shown in Fig. 3, for the lattice sizes for which a comparison with ED data is possible, the spin gap estimated with the SR technique is nearly exact. The importance of extending the numerical investigation to clusters large enough to allow a more reliable extrapolation is particularly evident in the same figure, in which the $`N=12`$ and 36 exact data extrapolate linearly to a large finite value. This behavior is certainly a finite size effect, and it is corrected by the SR data for $`N\ge 48`$, strongly suggesting a gapless excitation spectrum.
As we have seen, GFMC allows one to obtain a very high statistical accuracy on the GS energy, but does not allow one to compute directly GS expectation values $`\langle \psi _0|\widehat{O}|\psi _0\rangle `$. A straightforward way is to perturb the Hamiltonian with a term $`\lambda \widehat{O}`$, calculate the energy $`E(\lambda )`$ in presence of the perturbation and, by the Hellmann-Feynman theorem, estimate $`\langle \psi _0|\widehat{O}|\psi _0\rangle =dE(\lambda )/d\lambda |_{\lambda =0}`$ with a few computations at different small $`\lambda `$’s. A further complication for non exact calculations like the FN or SR is that if the off-diagonal matrix elements $`O_{x^{},x}`$ of the operator $`\widehat{O}`$ (in the chosen basis) have the opposite sign of the product $`\psi _\mathrm{V}(x^{})\psi _\mathrm{V}(x)`$, they cannot be handled exactly within FN, because these matrix elements change the nodes of $`\psi _\mathrm{V}`$. A way to circumvent this difficulty is to split the operator $`\widehat{O}`$ into three contributions: $`\widehat{O}=\widehat{D}+\widehat{O}^++\widehat{O}^{-}`$, where $`\widehat{O}^+`$ ($`\widehat{O}^{-}`$) is the operator with the same off-diagonal matrix elements of $`\widehat{O}`$ when they have the same (opposite) signs of $`\psi _\mathrm{V}(x^{})\psi _\mathrm{V}(x)`$, and zero otherwise, whereas $`\widehat{D}`$ is the diagonal part of $`\widehat{O}`$. Then we can add to the Hamiltonian a contribution that does not change the nodes: $`\widehat{H}(\lambda )=\widehat{H}-\lambda (\widehat{D}+2\widehat{O}^+)`$ for $`\lambda >0`$ and $`\widehat{H}(\lambda )=\widehat{H}-\lambda (\widehat{D}+2\widehat{O}^{-})`$ for $`\lambda <0`$. It is easy to show that $`\mathrm{lim}_{\lambda \to 0}(E(-\lambda )-E(\lambda ))/2\lambda =\langle \psi _0|\widehat{O}|\psi _0\rangle `$.
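The bookkeeping of this splitting and of the two-sided derivative can be summarized in a few lines; the sketch below (Python, dense toy matrices) is only an illustration; in an actual GFMC code the energies $`E(\pm \lambda )`$ are of course computed stochastically.

```python
import numpy as np

def split_operator(O, psi_V):
    """Split O = D + O_plus + O_minus according to sgn(O_{x'x} psi(x') psi(x))."""
    D = np.diag(np.diag(O))
    off = O - D
    same_sign = (off * np.outer(psi_V, psi_V)) > 0
    return D, np.where(same_sign, off, 0.0), np.where(~same_sign, off, 0.0)

def hf_expectation(E, lam=1e-4):
    # E(+lam) computed with H - lam*(D + 2*O_plus), E(-lam) with
    # H - lam*(D + 2*O_minus); neither perturbation changes the nodes
    return (E(-lam) - E(+lam)) / (2.0 * lam)
```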
We plot in Fig. 4 $`m^2`$ estimated with this method using the FN and SR techniques. For the order parameter, the inclusion of many short range correlations in the SR is not very important (see Tab. II). Then, in order to minimize the numerical effort, we have chosen to put in the SR conditions the first four correlation functions shown in Fig. 2, the order parameter itself and $`\widehat{S}_{\mathrm{tot}}^2`$. While the FN data extrapolate to a value not much lower than the variational result, the SR calculation provides a much more reliable estimate of the order parameter, with no apparent loss of accuracy with increasing sizes. In this way we obtain for $`m^{\dagger }`$ a value well below the linear and the second order (which actually has a positive correction ) SW predictions. This is partially in agreement with the conclusions of the finite temperature calculations suggesting a GS with a small but nonzero long range AF order, and with series expansions indicating the triangular antiferromagnetic Heisenberg model to be likely ordered but close to a critical point. However, in our simulation, which to our knowledge represents a first attempt to perform a systematic finite size scaling analysis of the order parameter, the value of $`m^{\dagger }`$ remains sizable and finite, consistent with a gapless spectrum. These features could also be verified experimentally on the K/Si(111):B interface, which has recently turned out to be the first realization of a truly two-dimensional triangular AF.
Though there is classical long range order, both the VMC and the SR approach show the crucial role of GS correlations defined on the smallest four spin clusters: in the variational calculation they are important to determine the correct relative phases of the GS WF, whereas in the latter, more accurate approach these correlations allow one to obtain very accurate results for the energy and the spin gap and to restore the spin rotational invariance of the finite size GS.
Useful communications with M. Boninsegni and P. W. Leung are acknowledged. We also thank M. Calandra and A. Parola for help and fruitful discussions. This work was supported in part by INFM (PRA HTSC and PRA LOTUS), CINECA grant and CONICET (A.E.T.).
Figure 1: Specific entropy $`S/A`$ vs. impact parameter $`b`$. Circles denote the $`S/A`$ values from the 3-fluid model. Diamonds and squares denote the ratios of pions to baryons and deuterons to protons (logarithmic), resp., as calculated from the $`S/A`$ values. Triangles show the ‘$`S/A`$’ values parametrized by $`S/A=3.945-\mathrm{ln}(d/p)+4(\pi /n_p)`$.
# Impact parameter dependencies in Pb(160 AGeV)+Pb reactions – hydrodynamical vs. cascade calculations
J. Brachmann<sup>1</sup>, M. Reiter<sup>1</sup>, M. Bleicher<sup>1</sup>, A. Dumitru<sup>2</sup>, J.A. Maruhn<sup>1</sup>, H. Stöcker<sup>1</sup>, W. Greiner<sup>1</sup>
<sup>1</sup> Institut für Theoretische Physik, Universität Frankfurt a.M., Germany
<sup>2</sup> Department of Physics, Yale University, New Haven, Connecticut, USA
January 18, 1999
Particle ratios are an appropriate tool to study the characteristics of entropy production in heavy-ion collisions, as shown in Fig. 1. We investigate the impact parameter dependence of the $`S/A`$ ratio (entropy $`S`$ per net participating baryon $`A`$) by means of the $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio, and will find it to be a tool to distinguish between chemical equilibrium, as assumed in hydrodynamics (here: the 3-fluid model ), and chemical non-equilibrium, as in microscopic models such as the UrQMD model .
In the 3-fluid hydrodynamical model an EoS with a first order phase transition to a QGP is used. We employ that model to calculate $`S/A`$ during the initial stage of the reaction as a function of impact parameter b (cf. ), as shown in Fig. 1. To show the behaviour of the $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio for the chemical equilibrium case, the creation of a fireball, composed of all hadrons up to mass $`m=2`$ GeV, with a uniform $`S/A`$ ratio and net baryon density $`\rho `$ is assumed. The hadron ratios are calculated assuming chemical freeze-out at a net baryon density $`\rho =\rho _0/2`$ and a temperature $`T=160`$ MeV .
Within the UrQMD model , the non-equilibrium dynamics are treated in a microscopic hadronic scenario. Baryon-baryon, meson-baryon and meson-meson collisions lead to the formation and decay of resonances and color flux tubes. The produced particles, as well as the incoming ones, rescatter in the further evolution of the system.
Fig. 2 shows the $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio for different impact parameters . In the 3-fluid approach the $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio stays constant with b. In contrast, the hadronic UrQMD model yields a strong dependence of this ratio on impact parameter b. The $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio drops rapidly with increasing b from 1.3 to 0.5.
In the microscopic UrQMD model, there is an interplay between particle production and subsequent annihilation. In peripheral collisions the $`\overline{\mathrm{\Lambda }}`$ production is basically the same as in p+p reactions. Due to the mass difference between strange and up/down quarks the production of (anti-)strange quarks is suppressed, which results in a suppression of $`\overline{\mathrm{\Lambda }}`$ relative to $`\overline{p}`$ by a factor of 2. In central Pb+Pb encounters meson-baryon and meson-meson interactions act as additional sources for anti-hyperon and anti-proton production, and additional rescattering has to be taken into account in the hot and dense medium. Anti-baryons are strongly affected by the comoving baryon density and annihilate, while, according to the additive quark model, the annihilation probability for $`\overline{\mathrm{\Lambda }}`$’s is smaller, leading to an increase of the $`\overline{\mathrm{\Lambda }}/\overline{p}`$ ratio above 1. Thus, in the UrQMD model chemical equilibrium may only be established in very central reactions, where enough secondary collisions drive the system into chemical equilibrium. In contrast, the fluids of the hydrodynamical calculation are, by definition, in chemical equilibrium.<sup>1</sup><sup>1</sup>1In the beginning of the reaction, kinetic equilibrium between the fluids is not assumed, but chemical equilibrium is established by the assumption of an EoS.
# Geodesic Motions in 2+1 Dimensional Charged Black Holes
## I Introduction
Since Bañados, Teitelboim and Zanelli (BTZ) reported the three dimensional black hole as a series of solutions in $`2+1`$ dimensional anti-de Sitter gravity , it has become one of the most exciting problems in theoretical gravity. Black hole thermodynamics and statistical properties of BTZ black holes have been representative topics . Recently the importance of BTZ-type black holes has been emphasized because of the demonstrated duality between gravity in $`N+1`$ dimensional anti-de Sitter space and conformal field theory in $`N`$ dimensions . Among the various branches of black hole research, the simplest but most basic topic is to investigate the classical geodesic motions in $`2+1`$ dimensional BTZ black holes. Though exact solutions of the geodesic motions were found for Schwarzschild- and Kerr-type BTZ black holes , no such solutions are known for a charged BTZ black hole. The situation is similar in other research areas, e.g., black hole thermodynamics . This has been attributed to the following reason: the metric of a charged BTZ black hole involves both a logarithmic term and the square of the radial coordinate. In this note, we find a class of exact geodesic motions for a charged BTZ black hole despite the above obstacle. In addition, all other possible geodesic motions are categorized by examining the orbit equation, and analyzed by use of numerical methods.
In the next section, we briefly recapitulate charged BTZ black holes, and discuss both null and time-like geodesics. We obtain a class of exact geodesic motions for a massless test particle when the ratio of its energy to its angular momentum is given by the square root of the absolute value of the negative cosmological constant. We conclude in Sec. III with a brief discussion.
## II Geodesic Motions
A static $`2+1`$ dimensional metric with rotational symmetry has the form:
$$\mathrm{d}s^2=B(r)e^{2N(r)}\mathrm{d}t^2-B^{-1}(r)\mathrm{d}r^2-r^2\mathrm{d}\theta ^2.$$
(1)
If there exists an electric point charge at the origin, the electrostatic field is given by $`E_r=q/r`$, and the diagonal components of the energy-momentum tensor are non-vanishing, i.e., $`T_t^t=T_r^r=-T_\theta ^\theta =E_r^2/2e^{2N(r)}`$. Then the Einstein equations become
$$\frac{1}{r}\frac{\mathrm{d}N(r)}{\mathrm{d}r}=0,$$
(2)
$$\frac{1}{r}\frac{\mathrm{d}B(r)}{\mathrm{d}r}=2|\mathrm{\Lambda }|-\frac{8\pi Gq^2}{r^2e^{2N(r)}}.$$
(3)
Static solutions of Eqs. (2) and (3) are
$$N(r)=N_0,$$
(4)
$$B(r)=|\mathrm{\Lambda }|r^2-8\pi Gq^2\mathrm{ln}r-8GM,$$
(5)
where we have two integration constants, $`N_0`$ and $`M`$. Note that the integration constant $`N_0`$ can be absorbed by a rescaling of the time variable, so that one can set it to zero. The other constant, $`M`$, is identified with the mass of the BTZ black hole. The obtained solutions are categorized into three classes characterized by the value of the mass parameter $`M`$ for a given value of the charge $`q`$: (i) When $`M<\left(\pi q^2/2\right)\left[1-\mathrm{ln}\left(4\pi Gq^2/|\mathrm{\Lambda }|\right)\right]`$, the spatial manifold does not contain a horizon. (ii) When $`M=\left(\pi q^2/2\right)\left[1-\mathrm{ln}\left(4\pi Gq^2/|\mathrm{\Lambda }|\right)\right]`$, it has one horizon at $`r=\sqrt{4\pi Gq^2/|\mathrm{\Lambda }|}`$, and it corresponds to the extremal case of a charged BTZ black hole. (iii) When $`M>\left(\pi q^2/2\right)\left[1-\mathrm{ln}\left(4\pi Gq^2/|\mathrm{\Lambda }|\right)\right]`$, there are two horizons of a charged BTZ black hole.
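The three classes can be checked numerically, since $`B(r)`$ has its minimum at $`r=\sqrt{4\pi Gq^2/|\mathrm{\Lambda }|}`$; a minimal sketch (Python, in units with $`G=|\mathrm{\Lambda }|=1`$ and an illustrative charge; both choices are assumptions for the example):

```python
import numpy as np
from scipy.optimize import brentq

G, Lam, q2 = 1.0, 1.0, 0.05          # units with G = |Lambda| = 1; q2 = q^2

def B(r, M):
    return Lam*r**2 - 8*np.pi*G*q2*np.log(r) - 8*G*M

def horizons(M):
    r_min = np.sqrt(4*np.pi*G*q2/Lam)          # location of the minimum of B
    if B(r_min, M) > 0:
        return ()                               # class (i): no horizon
    return (brentq(B, 1e-8, r_min, args=(M,)),  # inner horizon
            brentq(B, r_min, 1e6, args=(M,)))   # outer horizon

M_ext = (np.pi*q2/2)*(1 - np.log(4*np.pi*G*q2/Lam))   # extremal mass, class (ii)
print(horizons(M_ext + 0.1))   # class (iii): two horizons
print(horizons(M_ext - 0.1))   # class (i): none
```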
Let us consider the geodesic equations around the charged BTZ black hole. There are two constants of motion, $`\gamma `$ and $`L`$, associated with the two Killing vectors:
$`B(r){\displaystyle \frac{\mathrm{d}t}{\mathrm{d}s}}=\gamma ,`$ (6)
$`r^2{\displaystyle \frac{\mathrm{d}\theta }{\mathrm{d}s}}=L.`$ (7)
Geodesic equation for radial motions is read from the Lagrangian for a test particle:
$$B\left(\frac{\mathrm{d}t}{\mathrm{d}s}\right)^2\frac{1}{B}\left(\frac{\mathrm{d}r}{\mathrm{d}s}\right)^2r^2\left(\frac{\mathrm{d}\theta }{\mathrm{d}s}\right)=m^2$$
(8)
where $`m=0`$ corresponds to null (photon) geodesics and $`m>0`$ to time-like geodesics, so that $`m(>0)`$ can be set to $`1`$ without loss of generality. Inserting Eqs. (6) and (7) into Eq. (8), we have the first-order equation
$$\frac{1}{2}\left(\frac{\mathrm{d}r}{\mathrm{d}s}\right)^2=-\frac{1}{2}\left\{B(r)\left(\frac{L^2}{r^2}+m^2\right)-\gamma ^2\right\}.$$
(9)
Then, all possible geodesic motions are classified by the shape of the effective potential read off from the right-hand side of Eq. (9):
$$V(r)=\frac{1}{2}\left\{B(r)\left(\frac{L^2}{r^2}+m^2\right)-\gamma ^2\right\}.$$
(10)
From Eqs. (7) and (9), the orbit equation is
$$\left(\frac{\mathrm{d}r}{\mathrm{d}\theta }\right)^2=-B(r)r^2\left(1+\frac{m^2}{L^2}r^2\right)+\frac{\gamma ^2}{L^2}r^4.$$
(11)
From now on let us examine the orbit equation (11) and analyze all possible geodesic motions for various parameters. In the case of a photon without angular momentum ($`m=0`$ and $`L=0`$), the effective potential (10) becomes a constant:
$$V(r)=-\frac{\gamma ^2}{2}.$$
(12)
For the regular case, all possible geodesic motions resemble those of a free particle. These solutions depend on neither the electric charge $`q`$ nor the black hole mass $`M`$. For a black hole, the geodesic motions are similar to those of the regular case far away from the horizon; however, the existence of the black hole horizons should be taken into account. Specifically, the photon also has a free particle motion near the horizon, but a redshift is detected outside the black hole.
When a test photon carries angular momentum ($`m=0`$ and $`L\ne 0`$), the effective potential (10) is
$$V(r)=\frac{1}{2}B(r)\left(\frac{L^2}{r^2}\right)-\frac{\gamma ^2}{2},$$
(13)
and the corresponding orbit equation is written as
$$\mathrm{d}\theta =\frac{\mathrm{d}r}{r\sqrt{4\pi Gq^2\mathrm{ln}r^2+8GM+r^2\left(\frac{\gamma ^2}{L^2}-|\mathrm{\Lambda }|\right)}}.$$
(14)
Several well-known analytic solutions of the geodesic equation for Schwarzschild- or Kerr-type BTZ black holes have been reported, because the corresponding orbit equations include only powers of the radial coordinate. Once we look at the form of the orbit equation in Eq. (14), with both the square of the radial coordinate and a logarithmic term, we may easily accept the non-existence of analytic solutions when the electric charge $`q`$ is nonzero. However, a simple but careful investigation shows a way out when $`\gamma /L=\sqrt{|\mathrm{\Lambda }|}`$, in addition to the trivial Schwarzschild-type BTZ black hole limit of zero electric charge ($`q=0`$): the coefficient of the $`r^2`$-term in the integrand vanishes, and a set of explicit orbit solutions exists. We will show that this is indeed the case.
As shown in FIGs. 1 and 2, all the geodesic motions of a photon in a charged BTZ black hole are categorized by five cases: (i) When $`\gamma /L<\left(\gamma /L\right)_{\mathrm{cr}}`$, there is no allowed motion. Every orbit is allowed only when $`\gamma /L`$ is equal to or larger than the critical value $`\left(\gamma /L\right)_{\mathrm{cr}}`$:
$$\left(\frac{\gamma }{L}\right)_{\mathrm{cr}}=\sqrt{|\mathrm{\Lambda }|-\mathrm{exp}\left(\frac{2M}{\pi q^2}+\mathrm{ln}(4\pi Gq^2)-1\right)}.$$
(15)
(ii) When $`\gamma /L=\left(\gamma /L\right)_{\mathrm{cr}}`$, this condition gives a circular motion. The radius of this circular motion is
$$r_{\mathrm{cir}}=\sqrt{\frac{4\pi Gq^2}{|\mathrm{\Lambda }|-\left(\frac{\gamma }{L}\right)_{\mathrm{cr}}^2}}.$$
(16)
Both the critical value and the radius of this circular motion are obtained from Eq. (13). (iii) When $`\left(\gamma /L\right)_{\mathrm{cr}}<\gamma /L<\sqrt{|\mathrm{\Lambda }|}`$, the photon has elliptic motions, but for the charged BTZ black hole with two horizons, the lower bound is replaced by zero. (iv) When $`\gamma /L=\sqrt{|\mathrm{\Lambda }|}`$, the geodesic equation becomes integrable. These geodesic motions are unbounded spiral motions at large scales. (v) When $`\gamma /L>\sqrt{|\mathrm{\Lambda }|}`$, the geodesic motions are unbounded. Note that for any charged BTZ black hole, $`(\gamma /L)_{\mathrm{cr}}`$ in Eq. (15) becomes imaginary, and then the circular orbit is not allowed. This is also true for the extremal charged BTZ black hole.
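A quick numerical check of Eqs. (15) and (16) (Python; same illustrative units and charge as above, with a mass below the black-hole threshold so that $`(\gamma /L)_{\mathrm{cr}}`$ is real) confirms that the critical circular orbit sits at a vanishing minimum of the photon potential:

```python
import numpy as np

G, Lam, q2, M = 1.0, 1.0, 0.05, 0.05    # illustrative; M below the BH threshold

goL_cr2 = Lam - np.exp(2*M/(np.pi*q2) + np.log(4*np.pi*G*q2) - 1.0)   # Eq. (15)
r_cir = np.sqrt(4*np.pi*G*q2/(Lam - goL_cr2))                          # Eq. (16)

def W(r):
    # photon potential of Eq. (13) at gamma/L = (gamma/L)_cr, rescaled by 2/L^2
    B = Lam*r**2 - 8*np.pi*G*q2*np.log(r) - 8*G*M
    return B/r**2 - goL_cr2

print(W(r_cir))                             # ~0: the minimum of the potential
print(W(1.001*r_cir) - W(0.999*r_cir))      # ~0: it is indeed an extremum
```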
We have already mentioned that Eq. (14) becomes integrable when $`\gamma /L=\sqrt{|\mathrm{\Lambda }|}`$. The explicit form of the integrable orbits is
$$r=\mathrm{exp}\left(2\pi Gq^2\theta ^2-\frac{M}{\pi q^2}\right),$$
(17)
and FIGs. 3 and 4 show an example. FIG. 4 shows representative trajectories, which change with the mass parameter at fixed charge, $`q^2/|\mathrm{\Lambda }|=1`$. All possible motions are spiral at large scales (see Fig. 3-(a)). As $`M/|\mathrm{\Lambda }|`$ becomes sufficiently large, the radius of the inner horizon approaches zero and that of the outer horizon goes to infinity. In this limit, the mass parameter determines the black hole dominantly and the charge does not have much effect. Therefore, it leads to the Schwarzschild-type black hole. When the black hole mass converges to that of the extremal black hole case, i.e., $`M\to (\pi q^2/2)\left[1-\mathrm{ln}(4\pi Gq^2/|\mathrm{\Lambda }|)\right]`$, the radii of the inner and outer horizons merge into one;
$$r_H^{\mathrm{ext}}=\sqrt{\frac{4\pi Gq^2}{|\mathrm{\Lambda }|}}.$$
The perihelion of these analytically-obtained orbits in Eq. (17) is trivially obtained
$$r_{\mathrm{ph}}=\mathrm{exp}\left(-\frac{M}{\pi q^2}\right),$$
(18)
and, for an extremal charged BTZ black hole, it becomes
$$r_{\mathrm{ph}}^{\mathrm{ext}}=\mathrm{exp}\left[\frac{1}{2}\left(\mathrm{ln}\frac{4\pi Gq^2}{|\mathrm{\Lambda }|}-1\right)\right].$$
(19)
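The exactness of Eq. (17) can be verified directly: differentiating gives $`(\mathrm{d}r/\mathrm{d}\theta )^2=16\pi ^2G^2q^4\theta ^2r^2`$, which coincides with $`r^2\left[4\pi Gq^2\mathrm{ln}r^2+8GM\right]`$ along the orbit. A short numerical confirmation (Python; illustrative parameters):

```python
import numpy as np

G, q2, M = 1.0, 0.05, 0.05                     # illustrative values

theta = np.linspace(0.5, 3.0, 200)
r = np.exp(2*np.pi*G*q2*theta**2 - M/(np.pi*q2))      # the orbit of Eq. (17)
lhs = np.gradient(r, theta, edge_order=2)**2          # (dr/dtheta)^2
rhs = r**2 * (4*np.pi*G*q2*np.log(r**2) + 8*G*M)      # radicand of Eq. (14) times r^2
print(np.max(np.abs(lhs/rhs - 1.0)))   # small, limited only by finite differencing
```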
As mentioned previously, there remain two classes of solutions: (i) $`\left(\gamma /L\right)_{\mathrm{cr}}<\gamma /L<\sqrt{|\mathrm{\Lambda }|}`$ and (ii) $`\gamma /L>\sqrt{|\mathrm{\Lambda }|}`$. When $`\gamma /L\ne \sqrt{|\mathrm{\Lambda }|}`$, the orbit equation (14) is not integrable, and numerical analysis is a useful tool for those geodesic motions. For the first case ($`\left(\gamma /L\right)_{\mathrm{cr}}<\gamma /L<\sqrt{|\mathrm{\Lambda }|}`$), all orbits are bounded between aphelion and perihelion. Two representative examples of elliptic geodesic motions are shown in FIG. 5. For the second case ($`\gamma /L>\sqrt{|\mathrm{\Lambda }|}`$), FIGs. 1 and 2 show that there also exists a perihelion, but it cannot be obtained analytically.
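For case (i) the turning points and the apsidal angle can still be obtained by direct quadrature of Eq. (14); a minimal sketch (Python; same illustrative parameters as above, with $`(\gamma /L)^2=0.8`$ chosen between $`(\gamma /L)_{\mathrm{cr}}^2`$ and $`|\mathrm{\Lambda }|`$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

G, Lam, q2, M = 1.0, 1.0, 0.05, 0.05
goL2 = 0.8                                 # between (gamma/L)_cr^2 and Lambda

def f(r):
    # radicand of Eq. (14); orbits live where f(r) >= 0
    return 4*np.pi*G*q2*np.log(r**2) + 8*G*M + r**2*(goL2 - Lam)

r_ph = brentq(f, 1e-3, 1.2)                # perihelion
r_ap = brentq(f, 1.8, 50.0)                # aphelion
# apsidal angle; the inverse-square-root endpoint singularities are integrable
dtheta, _ = quad(lambda r: 1.0/(r*np.sqrt(f(r))), r_ph, r_ap)
print(r_ph, r_ap, dtheta)
```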
For the motions of a massive particle ($`m=1`$), all allowed motions are bounded since the asymptotic structure of spacetime is not flat, but anti-de Sitter.
In the case of a massive test particle with zero angular momentum ($`m=1`$ and $`L=0`$), the effective potential becomes
$$V(r)=\frac{1}{2}B(r)-\frac{\gamma ^2}{2}.$$
(22)
There is no allowed motion below the critical energy $`\gamma _{\mathrm{cr}}`$. When the minimum value of the effective potential is zero, the critical energy of the test particle is computed:
$$\gamma _{\mathrm{cr}}=\sqrt{4\pi Gq^2\left(1-\mathrm{ln}\frac{4\pi Gq^2}{|\mathrm{\Lambda }|}\right)-8GM}.$$
(23)
When $`\gamma =\gamma _{\mathrm{cr}}`$, the test particle remains at rest. Above the critical energy, the radial motion is an oscillation between perihelion and aphelion. For the black hole case, the motion of a test particle is also oscillatory, but its range is restricted by the horizons.
In the case of a massive test particle with angular momentum ($`m=1`$ and $`L\ne 0`$), the effective potential becomes
$$V(r)=\frac{1}{2}B(r)\left(\frac{L^2}{r^2}+1\right)-\frac{\gamma ^2}{2}.$$
(24)
FIG. 6 and FIG. 7 depict effective potentials for various values of $`\gamma `$: FIG. 6 corresponds to a regular spacetime and FIG. 7 to a charged BTZ black hole.
For the regular case, the dashed line shows that the minimum of $`V(r)`$ is positive, so that there is no allowed motion below this critical energy $`\gamma _{\mathrm{cr}}`$, as in the $`L=0`$ case. When $`\gamma =\gamma _{\mathrm{cr}}`$, the minimum of $`V(r)`$ is zero and there exists a circular motion at $`r=r_{\mathrm{cir}}`$ (see the solid line in FIG. 6). The effective potential given by the dotted line in FIG. 6 supports an elliptic motion with aphelion $`r_{\mathrm{ap}}`$ and perihelion $`r_{\mathrm{ph}}`$. The effective potentials for a charged BTZ black hole are shown in FIG. 7. As shown in FIG. 7, motions outside the horizons are allowed only when $`\gamma >0`$. The only allowed motion for the extremal charged BTZ black hole is the particle at rest at the degenerate horizon, which means in effect that no motion is allowed. Two examples of the trajectories of a massive test particle are depicted in FIG. 8.
## III Conclusion
In this paper we have studied the geodesic motions in charged BTZ black holes. We found a class of exact geodesic solutions for a massless test particle when the ratio of its energy to its angular momentum is equal to the square root of the absolute value of the negative cosmological constant. The obtained geodesics describe unbounded spiral motions. Though we have some exact geodesic motions, it seems impossible to extend our coordinates to Kruskal-Szekeres coordinates or a Penrose diagram, which would provide a basis for further research. We categorized the possible geodesic motions of massive and massless test particles as circular, elliptic, unbounded spiral, and unbounded motions. Several typical examples were analyzed by numerical work. Many works in various fields, e.g., black hole thermodynamics, have been done for Schwarzschild- or Kerr-type BTZ black holes . On the other hand, such studies have been limited in the case of charged BTZ black holes, which differ from $`3+1`$ dimensional Reissner-Nordström black holes. We hope that our simple work provides a building block for further research on charged BTZ black holes and related topics.
###### Acknowledgements.
The authors would like to thank Yoonbai Kim for helpful discussions. This work was supported by KRF(1998-015-D00075) and KOSEF through Center for Theoretical Physics, SNU.
# Faint galaxies, extragalactic background light, and the reionization of the Universe
## Introduction
There is little doubt that the last few years have been exciting times in galaxy formation and evolution studies. The remarkable progress in our understanding of faint galaxy data made possible by the combination of HST deep imaging W96 and ground-based spectroscopy Li96 , El96 , S96 has permitted to shed new light on the evolution of the stellar birthrate in the universe, to identify the epoch $`1\lesssim z\lesssim 2`$ where most of the optical extragalactic background light was produced, and to set important constraints on galaxy evolution scenarios M98 , S98 , Bau98 , Gui97 . The explosion in the quantity of information available on the high-redshift universe at optical wavelengths has been complemented by the detection of the far-IR/sub-mm background by DIRBE and FIRAS Ha98 , Fix98 . The IR data have revealed the optically ‘hidden’ side of galaxy formation, and shown that a significant fraction of the energy released by stellar nucleosynthesis is re-emitted as thermal radiation by dust. The underlying goal of all these efforts is to understand the growth of cosmic structures and the mechanisms that shaped the Hubble sequence, and ultimately to map the transition from the cosmic ‘dark age’ to an ionized universe populated with luminous sources. While one of the important questions recently emerged is the nature (starbursts or AGNs?) and redshift distribution of the ultraluminous sub-mm sources discovered by SCUBA Hu98 , B98 , Li98 , of perhaps equal interest is the possible existence of a large population of faint galaxies still undetected at high-$`z`$, as the color-selected ground-based and Hubble Deep Field (HDF) samples include only the brightest and bluest star-forming objects. In hierarchical clustering cosmogonies, high-$`z`$ dwarfs and/or mini-quasars (i.e. an early generation of stars and accreting black holes in dark matter halos with circular velocities $`v_c\approx 50\mathrm{km}\mathrm{s}^{-1}`$) may actually be one of the main sources of UV photons and heavy elements at early epochs MR98 , HL97 , HL98 .
In this talk I will focus on some of the open issues and controversies surrounding our present understanding of the history of the conversion of cold gas into stars within galaxies, and of the evolution with cosmic time of luminous sources in the universe. An Einstein-de Sitter (EdS) universe ($`\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) with $`h=H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}=0.5`$ will be adopted in the following.
## Optical/FIR background
The extragalactic background light (EBL) is an indicator of the total luminosity of the universe. It provides unique information on the evolution of cosmic structures at all epochs, as the cumulative emission from galactic systems and AGNs is expected to be recorded in this background. Figure 1 shows the optical EBL from known galaxies together with the recent COBE results. The value derived by integrating the galaxy counts Pozz98 down to very faint magnitude levels [because of the flattening at faint magnitudes of the $`N(m)`$ differential counts, most of the contribution to the optical EBL comes from relatively bright galaxies] implies a lower limit to the EBL intensity in the 0.3–2.2 $`\mu `$m interval of $`I_{\mathrm{opt}}\approx 12\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$.<sup>1</sup><sup>1</sup>1Note that the direct detection of the optical EBL at 3000, 5500, and 8000 Å derived from HST data by RAB implies values that are about a factor of two higher than the integrated light from galaxy counts. When combined with the FIRAS and DIRBE measurements ($`I_{\mathrm{FIR}}\approx 16\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$ in the 125–5000 $`\mu `$m range), this gives an observed EBL intensity in excess of $`28\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$. The correction factor needed to account for the residual emission in the 2.2 to 125 $`\mu `$m region is probably $`\lesssim 2`$ Dwe98 . We shall see below how a population of dusty AGNs could make a significant contribution to the FIR background. In the rest of this talk I will adopt a conservative reference value for the total EBL intensity associated with star formation activity over the entire history of the universe of $`I_{\mathrm{EBL}}=40I_{40}\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$.
## Cosmic star formation
It has become familiar to interpret recent observations of high-redshift sources via the comoving volume-averaged history of star formation. This is the mean over cosmic time of the stochastic, possibly short-lived star formation episodes of individual galaxies, and follows a relatively simple dependence on redshift. Its latest version, uncorrected for dust extinction, is plotted in Figure 2 (left). The measurements are based upon the rest-frame UV luminosity function (at 1500 and 2800 Å), assumed to be from young stellar populations M96 . The prescription for a ‘correct’ de-reddening of these values has been the subject of an ongoing debate. Dust may play a role in obscuring the UV continuum of Canada-France Redshift Survey (CFRS, $`0.3<z<1`$) and Lyman-break ($`z\approx 3`$) galaxies, as their colors are too red to be fitted with an evolving stellar population and a Salpeter initial mass function (IMF) M98 . The fiducial model of M98 had an upward correction factor of 1.4 at 2800 Å, and 2.1 at 1500 Å. Much larger corrections have been argued for by RR97 ($`\times 10`$ at $`z=1`$), Meu97 ($`\times 15`$ at $`z=3`$), and Sa98 ($`\times 16`$ at $`z>2`$). As noted already by M96 and M98 , a consequence of such large extinction values is the possible overproduction of metals and red light at low redshifts. Most recently, the evidence for more moderate extinction corrections has included measurements of star-formation rates (SFR) from Balmer lines by TM98 ($`\times 2`$ at $`z=0.2`$), Gla98 ($`\times 3.1\pm 0.4`$ at $`z=1`$), and Max98 ($`\times 2`$–$`6`$ at $`z=3`$). ISO follow-up of CFRS fields F98 has shown that the star-formation density derived from FIR fluxes ($`\times 2.3\pm 0.7`$ at $`0\le z\le 1`$) is about 3.5 times lower than in RR97 . Figure 2 (right) depicts an extinction-corrected (with $`A_{1500}=1.2`$ mag, 0.4 mag higher than in M98 ) version of the same plot. The best-fit cosmic star formation history (shown by the dashed line) with such a universal correction produces a total EBL of $`37\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$. About 65% of this is radiated in the UV$`+`$optical$`+`$near-IR between 0.1 and 5 $`\mu `$m; the total amount of starlight that is absorbed by dust and reprocessed in the far-IR is $`13\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$. Because of the uncertainties associated with the incompleteness of the data sets, the photometric redshift technique, dust reddening, and the UV-to-SFR conversion, these numbers are only meant to be indicative. On the other hand, this very simple model is not in obvious disagreement with any of the observations, and is able, in particular, to provide a reasonable estimate of the galaxy optical and near-IR luminosity density.
## Stellar baryon budget
With the help of some simple stellar population synthesis tools it is possible at this stage to make an estimate of the stellar mass density that produced the integrated light observed today. The total bolometric luminosity of a simple stellar population (a single generation of coeval stars) having mass $`M`$ can be well approximated by a power-law in time for all ages $`t\gtrsim 100`$ Myr,
$$L(t)=1.3L_{\odot }\frac{M}{M_{\odot }}\left(\frac{t}{1\mathrm{Gyr}}\right)^{-0.8}$$
(1)
(cf. Bu95 ), where we have assumed solar metallicity and a Salpeter IMF truncated at 0.1 and 125 $`M_{\odot }`$. In a stellar system with an arbitrary star-formation rate per unit cosmological volume, $`\dot{\rho }_{\star }`$, the comoving bolometric emissivity at time $`t`$ is given by the convolution integral
$$\rho _{\mathrm{bol}}(t)=\int _0^tL(\tau )\dot{\rho }_{\star }(t-\tau )𝑑\tau .$$
(2)
The total background light observed at Earth ($`t=t_H`$) is
$$I_{\mathrm{EBL}}=\frac{c}{4\pi }\int _0^{t_H}\frac{\rho _{\mathrm{bol}}(t)}{1+z}𝑑t,$$
(3)
where the factor $`(1+z)`$ in the denominator accounts for the energy lost to cosmic expansion when converting from observed to radiated (comoving) luminosity density. From the above equations it is easy to derive, in an EdS cosmology,
$$I_{\mathrm{EBL}}=740\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}\frac{\dot{\rho }_{\star }}{\mathrm{M}_{\odot }\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}}\left(\frac{t_H}{13\mathrm{Gyr}}\right)^{1.87}.$$
(4)
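The $`t_H^{1.87}`$ scaling can be traced as a worked intermediate step: for a constant $`\dot{\rho }_{\star }`$, and using $`1+z=(t/t_H)^{-2/3}`$ in EdS, Eqs. (1)–(3) give
$$\rho _{\mathrm{bol}}(t)\propto \int _0^t\tau ^{-0.8}𝑑\tau \propto t^{0.2},\qquad I_{\mathrm{EBL}}\propto \int _0^{t_H}t^{0.2}\left(\frac{t}{t_H}\right)^{2/3}𝑑t\propto t_H^{0.2+2/3+1}=t_H^{1.87}.$$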
The observations shown in Figure 1 therefore imply a “fiducial” mean star formation density of $`\dot{\rho }_{\star }=0.054I_{40}\mathrm{M}_{\odot }\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$. In the instantaneous recycling approximation, the total stellar mass density observed today is
$$\rho _{\star }(t_H)=(1-R)\int _0^{t_H}\dot{\rho }_{\star }(t)𝑑t\approx 5\times 10^8I_{40}\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}$$
(5)
(corresponding to $`\mathrm{\Omega }_{\star }=0.007I_{40}`$), where $`R`$ is the mass fraction of a generation of stars that is returned to the interstellar medium, $`R\approx 0.3`$ for a Salpeter IMF. The optical/FIR background therefore requires that about 10% of the nucleosynthetic baryons today Bur98 are in the form of stars and their remnants. The predicted stellar mass-to-blue light ratio is $`M/L_B\approx 5`$. These values are quite sensitive to the lower-mass cutoff of the IMF, as very-low mass stars can contribute significantly to the mass but not to the integrated light of the whole stellar population. A lower cutoff of 0.5 $`\mathrm{M}_{\odot }`$ instead of the 0.1 $`\mathrm{M}_{\odot }`$ adopted would decrease the mass-to-light ratio (and $`\mathrm{\Omega }_{\star }`$) by a factor of 1.9 for a Salpeter function.
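These numbers can be verified with one-line arithmetic; the minimal sketch below (Python) reproduces Eq. (5) and the quoted $`\mathrm{\Omega }_{\star }`$ (the critical density value is our assumption for the stated EdS cosmology with $`h=0.5`$):

```python
# Consistency check of the stellar baryon budget (EdS, h = 0.5).
t_H  = 13e9                 # Hubble time in yr
sfr  = 0.054                # fiducial Msun/yr/Mpc^3 for I_40 = 1
R    = 0.3                  # returned mass fraction (Salpeter IMF)

rho_star = (1 - R) * sfr * t_H          # Eq. (5): ~4.9e8 Msun/Mpc^3
rho_crit = 2.78e11 * 0.5**2             # critical density in Msun/Mpc^3 (assumed)
print(rho_star, rho_star / rho_crit)    # ~5e8 and Omega_star ~ 0.007
```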
## Two simple models
Based on the agreement between the $`z\approx 3`$ and $`z\approx 4`$ luminosity functions at the bright end, it has been recently argued Ste98 that the decline in the luminosity density of faint HDF Lyman-break galaxies observed in the same redshift interval M96 may not be real, but simply due to sample variance in the HDF. When extinction corrections are applied, the emissivity per unit comoving volume due to star formation may then remain essentially flat for all redshifts $`z\gtrsim 1`$ (see Fig. 2). While this has obvious implications for hierarchical models of structure formation, the epoch of first light, and the reionization of the intergalactic medium (IGM), it is also interesting to speculate on the possibility of a constant star-formation density at all epochs $`0\le z\le 5`$, as recently advocated by Pasc98 . Figure 3 shows the time evolution of the blue and near-IR rest-frame luminosity density of a stellar population characterized by a Salpeter IMF, solar metallicity, and a (constant) star-formation rate of $`\dot{\rho }_{\star }=0.054\mathrm{M}_{\odot }\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$ (needed to produce the observed EBL). The predicted evolution appears to be a poor match to the observations: it overpredicts the local $`B`$ and $`K`$-band luminosity densities, and underpredicts the 1 $`\mu `$m emissivity at $`z\approx 1`$ from the CFRS survey.<sup>2</sup><sup>2</sup>2The near-IR light is dominated by near-solar mass evolved stars, the progenitors of which make up the bulk of a galaxy’s stellar mass, and is more sensitive to the past star-formation history than the blue light.
At the other extreme, we know from stellar population studies that about half of the present-day stars are contained in spheroidal systems, i.e. elliptical galaxies and spiral galaxy bulges, and that these stars formed early and rapidly Bern . The expected rest-frame blue and near-IR emissivity of a simple stellar population with formation redshift $`z_{\mathrm{on}}=5`$ and total mass density equal to the mass in spheroids observed today (see below) is shown in Figure 3. HST-NICMOS deep observations may be able to test similar scenarios for the formation of elliptical galaxies at early times.
## Type II AGNs
Recent dynamical evidence indicates that supermassive black holes reside at the center of most nearby galaxies. The available data (about 30 objects) show a strong correlation (but with a large scatter) between bulge and black hole mass Mag98 , with $`M_{\mathrm{bh}}=0.006M_{\mathrm{bulge}}`$ as a best-fit. The total mass density in spheroids today is $`\mathrm{\Omega }_{\mathrm{bulge}}=0.0036_{-0.0017}^{+0.0024}`$ Fuk98 , implying a mean mass density of dead quasars
$$\rho _{\mathrm{bh}}=1.5_{-0.7}^{+1.0}\times 10^6\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}.$$
(6)
Noting that the observed energy density from all quasars is equal to the emitted energy divided by the average quasar redshift Zol82 , the total contribution to the EBL from accretion onto black holes is
$$I_{\mathrm{bh}}=\frac{c^3}{4\pi }\frac{\eta \rho _{\mathrm{bh}}}{\langle 1+z\rangle }\approx 18\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}\eta _{0.1}\langle 1+z\rangle ^{-1},$$
(7)
where $`\eta _{0.1}`$ is the efficiency for transforming accreted rest-mass energy into radiation (in units of 10%). A population of AGNs at (say) $`z\approx 1.5`$ could then make a significant contribution to the FIR background if dust-obscured accretion onto supermassive black holes is an efficient process Hae98 , Fab . It is interesting to note in this context that a population of AGNs with strong intrinsic absorption (Type II quasars) is actually invoked in many current models for the X-ray background Mad94 , Com95 .
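The normalization of Eq. (7) can be recovered with a back-of-the-envelope evaluation; the sketch below (Python, SI units, $`\eta =0.1`$, with $`\langle 1+z\rangle `$ factored out) gives about 22 nW m<sup>-2</sup> sr<sup>-1</sup>, within ~20% of the quoted coefficient; the residual difference presumably reflects rounding in $`\rho _{\mathrm{bh}}`$.

```python
import math

# Back-of-the-envelope check of Eq. (7); SI units, eta = 0.1, <1+z> factored out.
Msun, Mpc, c = 1.989e30, 3.086e22, 2.998e8
rho_bh = 1.5e6 * Msun / Mpc**3          # Eq. (6), converted to kg m^-3
u = 0.1 * rho_bh * c**2                 # radiated energy density, J m^-3
I = c * u / (4 * math.pi)               # isotropic intensity, W m^-2 sr^-1
print(I * 1e9)                          # ~22 nW m^-2 sr^-1, close to the quoted 18
```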
## Sources of ionizing radiation
The application of the Gunn-Peterson constraint on the amount of smoothly distributed neutral material along the line of sight to distant objects requires the hydrogen component of the diffuse IGM to have been highly ionized by $`z\approx 5`$ SSG , and the helium component by $`z\approx 2.5`$ DKZ . From QSO absorption studies we also know that neutral hydrogen at early epochs accounts for only a small fraction, $`\lesssim 10\%`$, of the nucleosynthetic baryons LWT . It thus appears that substantial sources of ultraviolet photons were present at $`z\gtrsim 5`$, perhaps low-luminosity quasars HL98 or a first generation of stars in virialized dark matter halos with $`T_{\mathrm{vir}}\sim 10^4`$–$`10^5`$ K OG96 , HL97 , MR98 .
Early star formation provides a possible explanation for the widespread existence of heavy elements in the IGM Cow95 , while reionization by QSOs may produce a detectable signal in the radio extragalactic background at meter wavelengths Mad97 . Establishing the character of cosmological ionizing sources is an efficient way to constrain competing models for structure formation in the universe, and to study the collapse and cooling of small mass objects at early epochs.
What keeps the universe ionized at $`z=5`$? The problem can be simplified by noting that the breakthrough epoch (when all radiation sources can see each other in the Lyman continuum) occurs much later in the universe than the overlap epoch (when individual ionized zones become simply connected and every point in space is exposed to ionizing radiation). This implies that at high redshifts the ionization equilibrium is actually determined by the instantaneous UV production rate MHR . The fact that the IGM is rather clumpy and still optically thick at overlapping, coupled to recent observations of a rapid decline in the space density of radio-loud quasars and of a large population of star-forming galaxies at $`z\gtrsim 3`$, has some interesting implications for rival ionization scenarios and for the star formation activity at $`3<z<5`$.
The existence of a decline in the space density of bright quasars at redshifts beyond $`3`$ was first suggested by O82 , and has been since then the subject of a long-standing debate. In recent years, several optical surveys have consistently provided new evidence for a turnover in the QSO counts HS90 , WHO , Sc95 , KDC . The interpretation of the drop-off observed in optically selected samples is equivocal, however, because of the possible bias introduced by dust obscuration arising from intervening systems. Radio emission, on the other hand, is unaffected by dust, and it has recently been shown Sha that the space density of radio-loud quasars also decreases strongly for $`z>3`$. This argues that the turnover is indeed real and that dust along the line of sight has a minimal effect on optically-selected QSOs (Figure 4, left). The QSO emission rate (corrected for incompleteness) of hydrogen ionizing photons per unit comoving volume is shown in Figure 4 (right) MHR .
Galaxies with ongoing star-formation are another obvious source of Lyman continuum photons. Since the rest-frame UV continuum at 1500 Å (redshifted into the visible band for a source at $`z\approx 3`$) is dominated by the same short-lived, massive stars which are responsible for the emission of photons shortward of the Lyman edge, the needed conversion factor, about one ionizing photon every 10 photons at 1500 Å, is fairly insensitive to the assumed IMF and is independent of the galaxy history for $`t\gg 10^7`$ yr. Figure 4 (right) shows the estimated Lyman-continuum luminosity density of galaxies at $`z\approx 3`$. (At all ages $`\gtrsim 0.1`$ Gyr one has $`L(1500)/L(912)\approx 6`$ for a Salpeter mass function and constant SFR BC98 . This number neglects any correction for intrinsic H I absorption.) The data point assumes a value of $`f_{\mathrm{esc}}=0.5`$ for the unknown fraction of ionizing photons which escapes the galaxy H I layers into the intergalactic medium. A substantial population of dwarf galaxies below the detection threshold, i.e. having star-formation rates $`<0.3\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$, and with a space density in excess of that predicted by extrapolating to faint magnitudes the best-fit Schechter function, may be expected to form at early times in hierarchical clustering models, and has been recently proposed by MR98 and MHR as a possible candidate for photoionizing the IGM at these epochs. One should note that, while highly reddened galaxies at high redshifts would be missed by the dropout color technique (which isolates sources that have blue colors in the optical and a sharp drop in the rest-frame UV), it seems unlikely that very dusty objects (with $`f_{\mathrm{esc}}\ll 1`$) would contribute in any significant manner to the ionizing metagalactic flux.
## Reionization of the IGM
When an isolated point source of ionizing radiation turns on, the ionized volume initially grows in size at a rate fixed by the emission of UV photons, and an ionization front separating the H II and H I regions propagates into the neutral gas. Most photons travel freely in the ionized bubble, and are absorbed in a transition layer. The evolution of an expanding H II region is governed by the equation
$$\frac{dV_I}{dt}-3HV_I=\frac{\dot{N}_{\mathrm{ion}}}{\overline{n}_\mathrm{H}}-\frac{V_I}{\overline{t}_{\mathrm{rec}}},$$
(8)
where $`V_I`$ is the proper volume of the ionized zone, $`\dot{N}_{\mathrm{ion}}`$ is the number of ionizing photons emitted by the central source per unit time, $`\overline{n}_\mathrm{H}`$ is the mean hydrogen density of the expanding IGM, $`H`$ is the Hubble constant, and $`\overline{t}_{\mathrm{rec}}`$ is the hydrogen mean recombination timescale,
$$\overline{t}_{\mathrm{rec}}=[(1+2\chi )\overline{n}_\mathrm{H}\alpha _BC]^{-1}\approx 0.3\,\mathrm{Gyr}\left(\frac{\mathrm{\Omega }_bh^2}{0.02}\right)^{-1}\left(\frac{1+z}{4}\right)^{-3}C_{30}^{-1}.$$
(9)
One should point out that the use of a volume-averaged clumping factor, $`C`$, in the recombination timescale is only justified when the size of the H II region is large compared to the scale of the clumping, so that the effect of many clumps (filaments) within the ionized volume can be averaged over (see Figure 5). Across the I-front the degree of ionization changes sharply on a distance of the order of the mean free path of an ionizing photon. When $`\overline{t}_{\mathrm{rec}}\ll t`$, the growth of the H II region is slowed down by recombinations in the highly inhomogeneous medium, and its evolution can be decoupled from the expansion of the universe. Just like in the static case, the ionized bubble will fill its time-varying Strömgren sphere after a few recombination timescales,
$$V_I=\frac{\dot{N}_{\mathrm{ion}}\overline{t}_{\mathrm{rec}}}{\overline{n}_\mathrm{H}}(1-e^{-t/\overline{t}_{\mathrm{rec}}}).$$
(10)
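As a numerical illustration of Eqs. (8)-(10), the sketch below integrates Eq. (8) with a forward-Euler step, dropping the $`3HV_I`$ expansion term (a reasonable simplification when $`\overline{t}_{\mathrm{rec}}\ll H^{-1}`$). The source rate and IGM density are assumed, illustrative values, not numbers from the text.

```python
# A minimal sketch: grow an H II region by Euler-stepping Eq. (8).
N_DOT = 1.0e54                 # ionizing photons s^-1 (assumed source)
N_H   = 2.0e-7                 # mean hydrogen density, cm^-3 (assumed)
T_REC = 0.3 * 3.156e16         # 0.3 Gyr in seconds, cf. Eq. (9)

V, dt = 0.0, T_REC / 1000.0
for _ in range(5 * 1000):                      # evolve for five recombination times
    V += dt * (N_DOT / N_H - V / T_REC)        # Eq. (8) without the expansion term

print(f"V / V_Stromgren = {V * N_H / (N_DOT * T_REC):.3f}")   # ~0.993, cf. Eq. (10)
```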
In analogy with the individual H II region case, it can be shown that the hydrogen component in a highly inhomogeneous universe is completely reionized when the number of photons emitted above 1 ryd in one recombination time equals the mean number of hydrogen atoms MHR . At any given epoch there is a critical value for the photon emission rate per unit cosmological comoving volume,
$$\dot{𝒩}_{\mathrm{ion}}(z)=\frac{\overline{n}_\mathrm{H}(0)}{\overline{t}_{\mathrm{rec}}(z)}=(10^{51.2}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-3})C_{30}\left(\frac{1+z}{6}\right)^3\left(\frac{\mathrm{\Omega }_bh^2}{0.02}\right)^2,$$
(11)
independently of the (unknown) previous emission history of the universe: only rates above this value will provide enough UV photons to ionize the IGM by that epoch. One can then compare our estimate of $`\dot{𝒩}_{\mathrm{ion}}`$ to the estimated contribution from QSOs and star-forming galaxies. The uncertainty on this critical rate is difficult to estimate, as it depends on the clumpiness of the IGM (scaled in the expression above to the value inferred at $`z=5`$ from numerical simulations GO97 ) and the nucleosynthesis constrained baryon density. The evolution of the critical rate as a function of redshift is plotted in Figure 4 (right). While $`\dot{𝒩}_{\mathrm{ion}}`$ is comparable to the quasar contribution at $`z\gtrsim 3`$, there is some indication of a deficit of Lyman continuum photons at $`z=5`$. For bright, massive galaxies to produce enough UV radiation at $`z=5`$, their space density would have to be comparable to the one observed at $`z\approx 3`$, with most ionizing photons being able to escape freely from the regions of star formation into the IGM. This scenario may be in conflict with direct observations of local starbursts below the Lyman limit showing that at most a few percent of the stellar ionizing radiation produced by these luminous sources actually escapes into the IGM Le95 . (Note that, at $`z=3`$, Lyman-break galaxies would radiate more ionizing photons than QSOs for $`f_{\mathrm{esc}}\gtrsim 30\%`$.)
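For reference, Eq. (11) is straightforward to tabulate. The sketch below evaluates the critical rate at $`z=3`$ and $`z=5`$ for the fiducial parameters quoted in the text.

```python
# A minimal sketch tabulating the critical emission rate of Eq. (11).
import math

def log_n_dot_crit(z, C30=1.0, omega_b_h2=0.02):
    """log10 of the critical rate in photons s^-1 Mpc^-3, Eq. (11)."""
    return 51.2 + math.log10(C30 * ((1.0 + z) / 6.0)**3 * (omega_b_h2 / 0.02)**2)

for z in (3.0, 5.0):
    print(f"z = {z:.0f}: log10(N_dot_crit) = {log_n_dot_crit(z):.2f}")
# -> 50.67 at z = 3 and 51.20 at z = 5 for the fiducial parameters
```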
It is interesting to convert the derived value of $`\dot{𝒩}_{\mathrm{ion}}`$ into a “minimum” SFR per unit (comoving) volume, $`\dot{\rho }_{*}`$ (hereafter we assume $`\mathrm{\Omega }_bh^2=0.02`$ and $`C=30`$):
$$\dot{\rho }_{*}(z)=\dot{𝒩}_{\mathrm{ion}}(z)\times 10^{-53.1}f_{\mathrm{esc}}^{-1}\approx 0.013f_{\mathrm{esc}}^{-1}\left(\frac{1+z}{6}\right)^3\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}.$$
(12)
The star formation density given in the equation above is comparable with the value directly “observed” (i.e., uncorrected for dust reddening) at $`z\approx 3`$ M98 . The conversion factor assumes a Salpeter IMF with solar metallicity, and has been computed using a population synthesis code BC98 . It can be understood by noting that, for each $`1\,\mathrm{M}_{\odot }`$ of stars formed, 8% goes into massive stars with $`M>20\,\mathrm{M}_{\odot }`$ that dominate the Lyman continuum luminosity of a stellar population. At the end of the C-burning phase, roughly half of the initial mass is converted into helium and carbon, with a mass fraction released as radiation of 0.007. About 25% of the energy radiated away goes into ionizing photons of mean energy 20 eV. For each $`1\,\mathrm{M}_{\odot }`$ of stars formed every year, we then expect
$$\frac{0.08\times 0.5\times 0.007\times 0.25\times M_{\odot }c^2}{20\,\mathrm{eV}}\times \frac{1}{1\,\mathrm{yr}}\approx 10^{53}\,\mathrm{phot}\,\mathrm{s}^{-1}$$
(13)
to be emitted shortward of 1 ryd.
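The arithmetic behind Eq. (13) can be verified directly; the sketch below simply carries through the factors quoted in the text.

```python
# A minimal sketch reproducing the order-of-magnitude count of Eq. (13).
M_SUN_C2_EV = 1.989e30 * (2.998e8)**2 / 1.602e-19   # rest energy of 1 M_sun, eV
YEAR_S      = 3.156e7                               # seconds per year

photons_per_msun = 0.08 * 0.5 * 0.007 * 0.25 * M_SUN_C2_EV / 20.0
rate = photons_per_msun / YEAR_S          # per (M_sun / yr) of star formation
print(f"~{rate:.1e} ionizing photons s^-1, cf. the ~1e53 of Eq. (13)")
```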
# Comment on “Correlation between Compact Radio Quasars and Ultrahigh Energy Cosmic Rays”
In the paper “Correlation between Compact Radio Quasars and Ultrahigh Energy Cosmic Rays,” Farrar and Biermann argue that there is a strong correlation between the direction of the five highest-energy cosmic-ray events and compact, radio-loud quasars. Because this result, if true, would have profound implications, it has been widely reported in the popular scientific press . This Comment shows that the analysis in Ref. contains several inconsistencies and errors so that the significance of any such correlation is certainly greatly overestimated and perhaps nonexistent.
A proton, nucleus or photon above 10<sup>20</sup> eV is exceedingly unlikely to travel further than 50 Mpc due to interactions with the cosmic microwave background: this is the Greisen-Zatsepin-Kuzmin limit . The compact, radio-loud quasars (CQSOs) in question have redshifts ranging from $`0.29<z<2.18`$. Thus new physics is required if the observed high-energy cosmic rays did originate in CQSOs. The authors suggest two possibilities:
1. The existence of a new neutral long-lived hadron with a mass of a few GeV, such as a light gluino , and
2. Neutrinos with a mass of a few eV interacting resonantly with dark-matter neutrinos clustered in the halo of either our galaxy or the local cluster of galaxies producing a $`Z`$-boson. This results in an ultrahigh-energy hadron or photon that, in turn, produces the observed extensive air shower .
In both of these scenarios, the high-energy cosmic ray should point back to its source. It is this hypothesis that is tested in Ref. , with the conclusion that the five high-energy cosmic-ray events examined are all aligned with CQSOs, and that the probability that these alignments are coincidental is 0.005.
The authors chose to examine all high-energy cosmic-ray events whose energy is at least 1$`\sigma `$ above $`8\times 10^{19}`$ eV and whose direction is known with a solid angle resolution, $`\mathrm{\Delta }\mathrm{\Omega }`$, of 10 deg<sup>2</sup> or better. While only qualitative reasons are given for these specific criteria, it is striking that the energy cut barely excludes two events from Haverah Park with energies of $`(1.02\pm 0.3)\times 10^{20}`$ eV and $`(1.05\pm 0.3)\times 10^{20}`$ eV, and one from Yakutsk with an energy of $`(1.1\pm 0.4)\times 10^{20}`$ eV . Remarkably, one of the events used in the analysis of Ref. also fails the cut: this is the event labeled Ag110, which has a measured energy of $`1.10\times 10^{20}`$ eV $`\pm `$ 30%.
It is unclear what Farrar and Biermann mean by “solid angle resolution.” In Table I of Ref. , the $`\mathrm{\Delta }\mathrm{\Omega }`$s given for two events (Ag210 and Ag110) contain the true event directions with 68% probability, while the $`\mathrm{\Delta }\mathrm{\Omega }`$s given for the other three events (FE320, HP120, HP105) contain the true event direction with only $`12`$% probability. In addition, the values of $`\mathrm{\Delta }\mathrm{\Omega }`$ in Table I of Ref. for FE320 and HP120 are incorrectly calculated.
These discrepancies call into question whether the events used in further analysis have been selected in a fair, unbiased manner, and whether the subsequent analysis, in which the probability that randomly distributed objects with a given surface density would appear aligned with the five events is calculated, is correct.
Even if the analysis of Ref. is correct, the statistical significance given for the alignment (0.005) is not. The authors formulate the hypothesis of correlations between the cosmic-ray directions and CQSOs (as opposed to other possible astrophysical objects) because a correlation with a CQSO had already been noted for the 320 TeV Fly’s Eye event (FE320) . An event used to formulate a hypothesis may not be used to test that hypothesis. Eliminating FE320 from the analysis lowers the statistical significance of the alignment to 0.03. It is this number, not 0.005, that correctly assesses the evidence for the hypothesis that the high-energy cosmic-ray events point back to compact quasars, assuming that the selection criteria are unbiased and that the rest of the analysis in Ref. has been done correctly.
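The sensitivity of such joint-alignment probabilities to the number of events retained can be illustrated with a toy estimate. The source surface density and error-box size below are invented for illustration and are not the values used in Ref. ; the point is only that dropping one event from a small sample weakens the joint chance probability by roughly a factor $`1/p_{\mathrm{single}}`$.

```python
# Toy estimate (not the analysis of Ref.): probability that N isotropic
# event directions each have a catalogued source inside their error box,
# for an assumed source surface density.  All numbers are illustrative.
import math

density  = 0.01                     # sources per deg^2 (assumed)
d_omega  = 6.0                      # error box per event, deg^2 (assumed)
p_single = 1.0 - math.exp(-density * d_omega)   # Poisson chance alignment

for n_events in (5, 4):             # with and without the hypothesis-forming event
    print(f"N = {n_events}: P(all aligned by chance) = {p_single**n_events:.1e}")
```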
I gratefully acknowledge illuminating conversations with C. Sinnis. This work was partially supported by the U.S. Department of Energy, Los Alamos National Laboratory, and the University of California.
# Sur le lemme fondamental pour les groupes unitaires: le cas totalement ramifié et homogène (On the fundamental lemma for unitary groups: the totally ramified and homogeneous case)
G. Laumon and J.-L. Waldspurger
Paper withdrawn
There is a gap in the proof of Proposition 4.3 (its conclusion “d’où la proposition.” is incorrect). Therefore theorems 1.1 and 3.1, which were the main results of the paper, are not proved.
# Chaos and Maps in Relativistic Dynamical Systems
TAUP 2541-99
17 December, 1998
L.P. Horwitz<sup>a,b</sup> and Y. Ashkenazy<sup>b</sup>
<sup>a</sup>Raymond and Beverly Sackler Faculty of Exact Sciences
School of Physics, Tel Aviv University, Ramat Aviv 69978, Israel
<sup>b</sup>Department of Physics
Bar Ilan University, Ramat Gan 52900, Israel
Abstract: The basic work of Zaslavskii et al showed that the classical non-relativistic electromagnetically kicked oscillator can be cast into the form of an iterative map on the phase space; the resulting evolution contains a stochastic flow to unbounded energy. Subsequent studies have formulated the problem in terms of a relativistic charged particle in interaction with the electromagnetic field. We review the structure of the covariant Lorentz force used to study this problem. We show that the Lorentz force equation can be derived as well from the manifestly covariant mechanics of Stueckelberg in the presence of a standard Maxwell field, establishing a connection between these equations and mass shell constraints. We argue that these relativistic generalizations of the problem are intrinsically inaccurate due to an inconsistency in the structure of the relativistic Lorentz force, and show that a reformulation of the relativistic problem, permitting variations (classically) in both the particle mass and the effective “mass” of the interacting electromagnetic field, provides a consistent system of classical equations for describing such processes.
1. INTRODUCTION
Zaslavskii et al have studied the behavior of particles in the field of a wave packet of an electric field in the presence of a static magnetic field. For a broad wave packet with sufficiently uniform spectrum, the problem can be stated in terms of an electrically kicked harmonic oscillator. They find that for rational ratios between the frequency of the kicking field and the Larmor frequency associated with the magnetic field, the phase space of the system is covered by a mesh of finite thickness; inside the filaments of the mesh, the dynamics of the particle is stochastic and outside (in the cells of stability), the dynamics is regular. This structure is called a stochastic web. It was found that this pattern covers the entire phase plane, permitting the particle to diffuse arbitrarily far into the region of high energies (a process analogous to Arnol’d diffusion ).
Since the stochastic web leads to unbounded energies, several authors have considered the corresponding relativistic problem. Longcope and Sudan studied this system (in effectively $`1+1/2`$ dimensions) and found that for initial conditions close to the origin of the phase space there is a stochastic web, which is bounded in energy, of a form quite similar, in the neighborhood of the origin, to the non-relativistic case treated by Zaslavskii et al . Karimabadi and Angelopoulos studied the case of an obliquely propagating wave, and showed that under certain conditions, particles can be accelerated to unlimited energy through an Arnol’d diffusion in two dimensions.
The equations used by Longcope and Sudan and Karimabadi and Angelopoulos are derived from the well-known covariant Lorentz force
$$f^\mu =m\frac{d^2x^\mu }{ds^2}=\frac{e}{c}F_\nu ^\mu \frac{dx^\nu }{ds},$$
$`(1.1)`$
where $`ds`$ is usually taken to be the “proper time” of the particle. Multiplying both sides by $`dx_\mu /ds`$ and summing over $`\mu `$ (we use the Einstein summation convention that adjacent indices are summed unless otherwise indicated, and the metric is taken to be $`(-,+,+,+)`$ for the indices $`(0,1,2,3)`$, distinguishing upper and lower indices), one obtains
$$\frac{dx_\mu }{ds}\frac{d^2x^\mu }{ds^2}=\frac{1}{2}\frac{d}{ds}\left(\frac{dx_\mu }{ds}\frac{dx^\mu }{ds}\right)=0;$$
$`(1.2)`$
taking the usual value for the constant (in $`s`$), we have that
$$\frac{dx_\mu }{ds}\frac{dx^\mu }{ds}=-c^2.$$
$`(1.3)`$
This result provides a consistent identification of the parameter $`s`$ on the particle trajectory (world-line) as the “proper time”:
$$\begin{array}{cc}\hfill ds^2& =-\frac{1}{c^2}dx_\mu dx^\mu \hfill \\ & =dt^2-\frac{1}{c^2}d𝐱^2\hfill \\ & =dt^2\left(1-\frac{v^2}{c^2}\right)\hfill \end{array}$$
$`(1.4)`$
so that
$$dt=\frac{ds}{\sqrt{1-\frac{v^2}{c^2}}}\equiv \gamma ds,$$
$`(1.5)`$
the Lorentz transformation of the time interval for a particle at rest to the interval observed in a moving frame. This formula has been used almost universally in calculations of the dynamics of relativistic charged particles . The Lorentz transformation, however, applies only to inertial frames. Phenomena occurring in two inertial frames in relative motion are, according to the theory of special relativity, related by a Lorentz transformation. An accelerating frame, as pointed out by Landau and Lifshitz, induces a more complicated form of metric than the flat space $`(-,+,+,+)`$. Mashoon has emphasized that the use of a sequence of instantaneous inertial frames, as has also often been done, is not equivalent to an accelerating frame. He cites the example for which a charged particle at rest in an inertial frame does not radiate, while a similar particle at rest in an accelerating frame does. As another example, consider again the first of $`(1.4)`$. If we transform to the inertial frame of a particle with constant acceleration along the $`x`$ direction,
$$x^{}=x+\frac{1}{2}at^2,$$
then $`(1.4)`$ becomes (as in the discussion of rotating frames in )
$$ds^2=\left(1-\frac{1}{c^2}a^2t^2\right)dt^2+\frac{2}{c^2}at\,dx^{\prime }dt-\frac{1}{c^2}(dx^{\prime 2}+dy^2+dz^2).$$
In the frame in which $`dx^{\prime }=dy=dz=0`$, $`dt`$ is the interval of proper time, and it is not equal to $`ds`$. For short times, or small acceleration, the effect is small. We shall discuss this problem further in Section 3.
Continuing for now in the standard framework, Eq. $`(1.3)`$ effectively eliminates one of the equations of $`(1.1)`$. We may write
$$\begin{array}{cc}\hfill \frac{d^2𝐱}{ds^2}& =\left(\frac{dt}{ds}\right)\frac{d}{dt}\left(\frac{dt}{ds}\frac{d𝐱}{dt}\right)\hfill \\ & =\frac{e}{m}\frac{dt}{ds}(𝐄+\frac{1}{c}𝐯\times 𝐇).\hfill \end{array}$$
Cancelling $`\gamma =dt/ds`$ from both sides, one obtains
$$\frac{d}{dt}(\gamma 𝐯)=\frac{e}{m}(𝐄+\frac{1}{c}𝐯\times 𝐇),$$
$`(1.6)`$
the starting point for the analysis of Longcope and Sudan and Karimabadi and Angelopoulos . A discrete map can be constructed from $`(1.6)`$ just as was done for the nonrelativistic equations of Zaslavskii et al . As we have remarked above, the stochastic web is found at low energies; it deteriorates at high energies due to the $`\gamma `$ factor.
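For illustration, a minimal version of such a discrete map can be written down directly: between kicks the momentum gyrates through the relativistic angle $`\mathrm{\Omega }T/\gamma `$, and each kick increments one momentum component. The kick strength, rotation angle and initial momentum below are illustrative choices, not the parameters of the studies cited above.

```python
# A minimal relativistic "web map" sketch built from Eq. (1.6): impulsive
# kick K*sin(v), then free gyration through Omega*T/gamma.
import math

K, omega_T = 1.5, 2.0 * math.pi / 4.0   # kick strength, non-relativistic angle
u, v = 0.1, 0.0                         # momentum components in units of m*c

for _ in range(10000):
    u += K * math.sin(v)                             # kick
    gamma = math.sqrt(1.0 + u * u + v * v)
    a = omega_T / gamma                              # relativistic gyration angle
    u, v = (u * math.cos(a) + v * math.sin(a),
            -u * math.sin(a) + v * math.cos(a))

print(f"gamma after 10^4 kicks: {math.sqrt(1.0 + u*u + v*v):.2f}")
```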
The time component of $`(1.1)`$ is
$$c\frac{d^2t}{ds^2}=\frac{e}{mc}𝐄\cdot 𝐯\frac{dt}{ds}$$
$`(1.7)`$
or
$$\frac{d\gamma }{dt}=\frac{e}{mc^2}𝐄\cdot 𝐯.$$
$`(1.8)`$
Landau and Lifshitz comment that this is a reasonable result, since the “energy” of the particle is $`\gamma mc^2`$, and $`e𝐄\cdot 𝐯`$ is the work done on the particle by the field. It is important for what we have to say in the following that Eq. $`(1.7)`$ is not interpretable in terms of the geometry of Lorentz transformations. The second derivative corresponds to an acceleration of the observed time variable relative to the “proper time”; the Lorentz transformation affects only the first derivative, as in $`(1.4)`$. We understand this equation as an indication that the observed time emerges as a dynamical variable. Mendonça and Oliveira e Silva have studied the relativistic kicked oscillator by introducing a “super Hamiltonian”, resulting in a symplectic mechanics of Hamiltonian form, which recognizes that the variables $`t`$ and $`E`$ are dynamical variables of the same type as $`𝐱`$ and $`𝐩`$. This manifestly covariant formulation is equivalent to that of Stueckelberg and Horwitz and Piron , which we shall discuss in the next section.
We have computed solutions to the Lorentz force equation for the case of the kicked oscillator (see fig. 1), using methods slightly different from that of Longcope and Sudan and Karimabadi and Angelopoulos . At low velocities, the stochastic web found by Zaslavskii et al occurs; the system diffuses in the stochastic region to unbounded energy, as found by Karimabadi and Angelopoulos . The velocity of the particle is light speed limited by the dynamical equations, in particular, by the suppression of the action of the electric field at velocities approaching the velocity of light .
The rapid acceleration of the charged particles of the kicked oscillator further suggests that radiation can be an important correction to the motion. The counterexample of Mashoon was based on the phenomenon of radiation. It has been shown by Abraham , Dirac , Rohrlich and Sokolov and Ternov that the relativistic Lorentz force equation in the presence of radiation reaction is given by the Lorentz-Abraham equation
$$m\ddot{x}^\mu =\frac{e}{c}F^{\mu \nu }\dot{x}_\nu +\frac{2}{3}\frac{r_0}{c}m\left(\frac{d}{ds}\ddot{x}^\mu -\frac{1}{c^2}\dot{x}^\mu \ddot{x}_\nu \ddot{x}^\nu \right),$$
$`(1.9)`$
where $`r_0=e^2/mc^2`$, the classical electron radius, and the dots refer here, as in $`(1.1)`$, to derivatives with respect to $`s`$. Note that from the identity $`(1.3)`$, it follows (by differentiation with respect to $`s`$) that
$$\dot{x}_\mu \ddot{x}^\mu =0,\dot{x}_\mu \frac{d}{ds}\ddot{x}^\mu +\ddot{x}_\mu \ddot{x}^\mu =0,$$
$`(1.10)`$
and hence $`(1.9)`$ can be written as
$$m\ddot{x}^\mu =\frac{e}{c}F^{\mu \nu }\dot{x}_\nu +\frac{2}{3}\frac{r_0}{c}m\frac{d}{ds}\ddot{x}^\nu (\delta _\nu ^\mu +\frac{1}{c^2}\dot{x}^\mu \dot{x}_\nu ).$$
$`(1.11)`$
The last factor on the right is a projection orthogonal to $`\dot{x}^\mu `$ (if $`\dot{x}^\mu \dot{x}_\mu =-c^2`$), and therefore $`(1.11)`$ is consistent with conservation of $`\dot{x}^\mu \dot{x}_\mu `$. Sokolov and Ternov state that this conservation law follows automatically from $`(1.9)`$, but it is apparently only consistent. Radiation reaction therefore also implies that the connection between proper time and the Lorentz invariant interval may be subject to question.
We have calculated the motion of the kicked oscillator using the form $`(1.9)`$ of the Lorentz force, corrected for radiation reaction, undoubtedly a good approximation under certain conditions, and will report on this in another paper in this volume .
2. THE STUECKELBERG FORMULATION
As we have remarked above, Mendonça and Oliveira e Silva have used a “super Hamiltonian” formulation to control the covariance of the electromagnetically kicked oscillator. Their formulation of the problem is equivalent to the theory of Stueckelberg and Horwitz and Piron ; we shall therefore use the notation of the latter formulation. We first explain the physical basis of this theory, and then derive systematically the covariant Lorentz force from a model Hamiltonian.
The original thought experiment of Einstein discussed the generation of a sequence of signals in a frame $`F`$, according to a clock imbedded in that frame, and their detection by apparatus in a second frame $`F^{}`$ in uniform motion with respect to the first. The time of arrival of the signals in $`F^{}`$ must be recorded with a clock of the same construction, or there would be no basis for comparison of the intervals between signals sent and those received. It is essential to understand that the clocks in both $`F`$ and $`F^{}`$ run at the same rate. The relation of the interval $`\mathrm{\Delta }\tau `$ between pulses emitted in $`F`$ and the interval between signals $`\mathrm{\Delta }\tau ^{}`$ received in $`F^{}`$, according to the (equivalent) clock in $`F^{}`$ is, from the special theory of relativity, given by
$$\mathrm{\Delta }\tau ^{}=\frac{\mathrm{\Delta }\tau }{\sqrt{1-\frac{v^2}{c^2}}}.$$
$`(2.1)`$
This time interval, measured on a “standard” time scale established by these equivalent clocks, is identified with the interval $`\mathrm{\Delta }t^{}`$, the time interval between signals in the first frame, observed in the second, and called simply the time by Einstein. One sees that this Einstein time is subject to distortion due to motion. In general relativity, it is subject to distortion due to the gravitational field as well, and in this case the distortion is called the gravitational red-shift. We see that there are essentially two types of time, one corresponding to the time intervals at which signals are emitted, and the second, according to the time intervals for which they are detected. The first type, associated with signals that are pre-programmed, is not a dynamical variable, but a given sequence (as for the Newtonian time), and the second, associated with the time at which signals are observed (the Einstein time), is to be understood as a dynamical variable both in classical and quantum theories .
Stueckelberg noted that for a free particle, the signals emitted at regular intervals would be recorded at regular intervals in a laboratory, since the free particle would be in motion with respect to the laboratory with the same relation as between $`F`$ and $`F^{}`$; the motion would then be recorded as a straight line (within the light cone) on a spacetime diagram. In the presence of forces, however, this line could curve. A sufficient deviation from the straight line could make it begin to go backward in time, and then the coordinate $`t`$ would no longer be adequate to parametrize the motion. He therefore introduced an invariant parameter $`\tau `$ along the curve, so that there would be a one-to-one correspondence between this parameter and the spacetime coordinates. He proposed a Hamiltonian for a free particle of the form (the parameter $`M`$ provides a dimensional scale, for example, in $`(2.5)`$; it may also be considered as the Galilean target mass for the variable $`(1/c)\sqrt{E^2-c^2𝐩^2}`$)
$$K=\frac{p^\mu p_\mu }{2M}$$
$`(2.2)`$
for which the Hamilton equations (generalized) give
$$\frac{dx^\mu }{d\tau }=\frac{\partial K}{\partial p_\mu }=\frac{p^\mu }{M}.$$
$`(2.3)`$
It is clear that such a theory is intrinsically “off-shell”; the variables $`𝐩`$ and $`p^0=E/c`$ are independent, as are the observables $`𝐱`$ and $`t`$, so that the phase space is eight-dimensional. Dividing the equation for the space indices by the equation for the time index, one obtains
$$𝐯=\frac{d𝐱}{dt}=c^2\frac{𝐩}{E},$$
$`(2.4)`$
precisely the Einstein formula for velocity. Furthermore, for the time component,
$$\frac{dt}{d\tau }=\frac{E}{Mc^2};$$
$`(2.5)`$
in case the particle is “on-shell”, so that $`Mc^2=\sqrt{E^2-c^2𝐩^2}`$, $`(2.5)`$ reads
$$\frac{dt}{d\tau }=\frac{1}{\sqrt{1-\frac{𝐯^2}{c^2}}},$$
coinciding with $`(2.1)`$.
Stueckelberg then considered adding a potential term $`V(x)`$, to treat one-body mechanics, and the gauge substitution $`p^\mu \rightarrow p^\mu -eA^\mu (x)`$ for the treatment of problems with electromagnetic interaction. He proposed a quantum theory, for which the Hamiltonian generates a Schrödinger type evolution
$$i\hbar \frac{\partial }{\partial \tau }\psi _\tau (x)=K\psi _\tau (x).$$
$`(2.6)`$
Horwitz and Piron generalized the Stueckelberg theory for application to many body problems. They assumed that the standard clocks constitute a universal time, as for the Robertson-Walker time (the Hubble time) of general relativity , so that separate subsystems are correlated in this time. In this framework, it became possible to solve, for example, the two body problem in both classical and quantum theory .
The equations $`(1.1)`$ are not generally derived rigorously from a well-defined Lagrangian or Hamiltonian. They result from a relativistic generalization of the nonrelativistic Lorentz force (which is derivable from a nonrelativistic Hamiltonian). In the following, we shall derive these equations rigorously from the Stueckelberg theory, to emphasize more strongly the nature of the problem we have discussed above, and to clarify some important points.
The Hamiltonian form for a particle with electromagnetic interaction proposed by Stueckelberg is
$$K=\frac{(p^\mu -\frac{e}{c}A^\mu (x))(p_\mu -\frac{e}{c}A_\mu (x))}{2M}.$$
$`(2.7)`$
The equation of motion for $`x^\mu `$ is (we use the upper dot from now on to denote differentiation by $`\tau `$, the universal invariant time)
$$\dot{x}^\mu =\frac{\partial K}{\partial p_\mu }=\frac{p^\mu -\frac{e}{c}A^\mu (x)}{M}$$
$`(2.8)`$
and we see that then
$$\frac{dx^\mu }{d\tau }\frac{dx_\mu }{d\tau }=-c^2\left(\frac{ds}{d\tau }\right)^2=\frac{(p^\mu -\frac{e}{c}A^\mu (x))(p_\mu -\frac{e}{c}A_\mu (x))}{M^2},$$
$`(2.9)`$
a quantity proportional to $`K`$, and therefore strictly conserved. In fact, this quantity is the gauge invariant mass-squared:
$$(p^\mu -\frac{e}{c}A^\mu (x))(p_\mu -\frac{e}{c}A_\mu (x))=-m^2c^2,$$
$`(2.10)`$
where we define $`m`$ as the dynamical mass, a constant of the motion. It then follows that
$$c^2\left(\frac{ds}{d\tau }\right)^2=c^2\left(\frac{dt}{d\tau }\right)^2-\left(\frac{d𝐱}{d\tau }\right)^2=\frac{m^2c^2}{M^2}$$
$`(2.11)`$
and, extracting a factor of $`(dt/d\tau )`$,
$$\left(\frac{dt}{d\tau }\right)^2=\frac{m^2/M^2}{1-\frac{v^2}{c^2}}.$$
$`(2.12)`$
Up to a constant factor, the Stueckelberg theory therefore maintains the identity $`(1.3)`$.
We now derive the Lorentz force from the Hamilton equation (this derivation has also been carried out by C. Piron ). The Hamilton equations for energy momentum are
$$\begin{array}{cc}\hfill \frac{dp^\mu }{d\tau }& =-\frac{\partial K}{\partial x_\mu }=-\frac{(p^\nu -\frac{e}{c}A^\nu )}{M}\left(-\frac{e}{c}\frac{\partial A_\nu }{\partial x_\mu }\right)\hfill \\ & =\frac{e}{c}\frac{dx^\nu }{d\tau }\frac{\partial A_\nu }{\partial x_\mu }.\hfill \end{array}$$
$`(2.13)`$
Since $`p^\mu =M\frac{dx^\mu }{d\tau }+\frac{e}{c}A^\mu `$, the left hand side is ($`A^\mu `$ is evaluated on the particle world line $`x^\nu (\tau )`$)
$$\frac{dp^\mu }{d\tau }=M\frac{d^2x^\mu }{d\tau ^2}+\frac{e}{c}\frac{\partial A^\mu }{\partial x^\nu }\frac{dx^\nu }{d\tau },$$
$`(2.14)`$
and hence
$$M\frac{d^2x^\mu }{d\tau ^2}=\frac{e}{c}\left(\frac{\partial A_\nu }{\partial x_\mu }-\frac{\partial A^\mu }{\partial x^\nu }\right)\frac{dx^\nu }{d\tau },$$
or
$$M\frac{d^2x^\mu }{d\tau ^2}=\frac{e}{c}F_\nu ^\mu \frac{dx^\nu }{d\tau },$$
$`(2.15)`$
where $`(\partial ^\mu \equiv \partial /\partial x_\mu )`$
$$F^{\mu \nu }=\partial ^\mu A^\nu -\partial ^\nu A^\mu .$$
$`(2.16)`$
The form of $`(2.15)`$ is identical to that of $`(1.1)`$, but the temporal derivative is not with respect to the variable $`s`$, the Minkowski distance along the particle trajectory, but with respect to the universal evolution parameter $`\tau `$.
One might argue that these should be equal, or at least proportional by a constant, since the proper time is equal to the time which may be read on a clock on the particle in its rest frame. For an accelerating particle, however, one cannot transform by a Lorentz transformation, other than instantaneously, to the particle rest frame. It appears, therefore, that the formula $`(2.15)`$ could have a more reliable interpretation. The parameter of evolution $`\tau `$ does not require a Lorentz transformation to achieve its meaning.
Since $`m^2`$ is absolutely conserved by the Hamiltonian model $`(2.7)`$, however, we have the constant relation
$$ds=\frac{m}{M}d\tau ,$$
$`(2.17)`$
assuming the positive root (as we shall also do for the root of $`(2.12)`$; we do not wish to discuss the antiparticle solutions here). Eq. $`(2.15)`$ can therefore be written exactly as $`(1.1)`$.
We see that the Stueckelberg formulation in terms of an absolute time does not avoid the serious problem of consistency that we have pointed out before. It is clear that the difficulty is associated with the fact that the Stueckelberg Hamiltonian, as we have written it, preserves the mass-shell, and we therefore understand the identity $`(1.3)`$ as a mass-shell relation.
Returning to the Stueckelberg-Schrödinger equation $`(2.6)`$, we see that the gauge invariant replacement $`p^\mu \rightarrow p^\mu -\frac{e}{c}A^\mu (x)`$ is not adequate. The additional derivative on the left hand side of $`(2.6)`$ must also be replaced by a gauge covariant term. The possibility of $`\tau `$ dependence in the gauge transformation implies that the gauge fields themselves may depend on $`\tau `$. The gauge covariant equation should then be
$$i\frac{\partial }{\partial \tau }\psi _\tau (x)=\left\{\frac{1}{2M}(p_\mu -\frac{e_0}{c}a_\mu )(p^\mu -\frac{e_0}{c}a^\mu )-\frac{e_0}{c}a_5\right\}\psi _\tau (x),$$
$`(2.18)`$
where the fields $`a_\alpha ,\alpha =(0,1,2,3,5)`$, with $`\partial _5\equiv \partial /\partial \tau `$, change under the gauge transformation $`\psi \rightarrow \mathrm{exp}(-i\frac{e_0}{c}\mathrm{\Lambda })\psi `$ according to $`a_\alpha \rightarrow a_\alpha -\partial _\alpha \mathrm{\Lambda }`$. It follows from this equation, in a way analogous to the Schrödinger non-relativistic theory, that there is a current
$$j_\tau ^\mu =-\frac{i}{2M}\{\psi _\tau ^{*}(\partial ^\mu -i\frac{e_0}{c}a^\mu )\psi _\tau -\psi _\tau (\partial ^\mu +i\frac{e_0}{c}a^\mu )\psi _\tau ^{*}\},$$
$`(2.19)`$
which, with
$$\rho _\tau \equiv j_\tau ^5=|\psi _\tau (x)|^2,$$
satisfies
$$\partial _\tau \rho _\tau +\partial _\mu j_\tau ^\mu \equiv \partial _\alpha j^\alpha =0.$$
$`(2.20)`$
We see that for $`\rho _\tau \rightarrow 0`$ pointwise as $`\tau \rightarrow \pm \infty `$ (with $`\int \rho _\tau (x)d^4x=1`$ for any $`\tau `$),
$$J^\mu (x)=\int _{-\infty }^{\infty }j_\tau ^\mu (x)\,d\tau $$
$`(2.21)`$
satisfies
$$\partial _\mu J^\mu (x)=0,$$
$`(2.22)`$
and can be a source for the standard Maxwell fields. Since the field equations are linear, with source $`j^\alpha `$, one identifies the integral $`\int d\tau \,a^\mu (x,\tau )`$ (or, alternatively, the $`0`$-mode) with the Maxwell potentials . It then follows that the so-called pre-Maxwell fields $`a^\alpha `$ have dimension $`L^2`$, and that the charge $`e_0`$ has dimension $`L`$. The Lagrangian density for the fields, quadratic in the field strengths ($`\alpha ,\beta =0,1,2,3,5`$)
$$f^{\alpha \beta }=\partial ^\alpha a^\beta -\partial ^\beta a^\alpha ,$$
which has dimension $`L^3`$, must carry a dimensional parameter, say $`\lambda `$, and from the field equations $`\lambda \partial _\alpha f^{\beta \alpha }=e_0j^\beta `$, one sees that the Maxwell charge is $`e=e_0/\lambda `$ .
We understand the operator on the right hand side of $`(2.18)`$ as the quantum form of a classical evolution function
$$K=\frac{1}{2M}(p_\mu -\frac{e_0}{c}a_\mu )(p^\mu -\frac{e_0}{c}a^\mu )-\frac{e_0}{c}a_5.$$
$`(2.23)`$
It follows from the Hamilton equations that
$$\frac{dx^\mu }{d\tau }=\frac{p^\mu -\frac{e_0}{c}a^\mu }{M}$$
$`(2.24)`$
and
$$\frac{dp^\mu }{d\tau }=\frac{e_0}{c}\frac{dx^\nu }{d\tau }\frac{\partial a_\nu }{\partial x_\mu }+\frac{e_0}{c}\frac{\partial a_5}{\partial x_\mu }.$$
Hence,
$$M\frac{d^2x^\mu }{d\tau ^2}=\frac{e_0}{c}\frac{dx^\nu }{d\tau }f_\nu ^\mu +\frac{e_0}{c}\left(\frac{\partial a_5}{\partial x_\mu }-\frac{\partial a^\mu }{\partial \tau }\right).$$
$`(2.25)`$
If we define $`x^5\equiv \tau `$, the last term can be written as $`\partial ^\mu a_5-\partial _5a^\mu =f_5^\mu `$ , so that
$$M\frac{d^2x^\mu }{d\tau ^2}=\frac{e_0}{c}\frac{dx^\nu }{d\tau }f_\nu ^\mu +\frac{e_0}{c}f_5^\mu .$$
$`(2.26)`$
Note that in this equation, the last term appears in the place of the radiation correction terms of $`(1.9)`$. It plays the role of a generalized electric field. Furthermore, we see that the relation $`(1.3)`$, consistent with the standard Maxwell theory, no longer holds as an identity; the Stueckelberg form of this result $`(2.11)`$ in the presence of standard Maxwell fields, where $`m^2`$ is conserved, is also not generally valid. We now have
$$\frac{d}{d\tau }\frac{1}{2}M\left(\frac{dx^\mu }{d\tau }\frac{dx_\mu }{d\tau }\right)=\frac{e_0}{c}\frac{dx^\mu }{d\tau }f_{\mu 5},$$
$`(2.27)`$
which does not vanish in general. The right hand side corresponds to mass transfer from the field to the particle.
As for the method of Longcope and Sudan , we may transform the derivatives with respect to $`\tau `$ to derivatives with respect to $`t`$ in the equation $`(2.26)`$ as follows. Defining $`\zeta =dt/d\tau `$, it follows from $`(2.26)`$ that there is an additional term in the analogous form of the rate of change of $`\zeta `$ (we use lower case to denote the pre-Maxwell field strengths),
$$\frac{d\zeta }{dt}=\frac{e_0}{Mc^2}(𝐞\cdot 𝐯)+\frac{e_0}{\zeta Mc^2}f_5^0$$
$`(2.28)`$
The space components of $`(2.26)`$ can be written as
$$\begin{array}{cc}\hfill \frac{d^2x^j}{dt^2}& =\frac{e_0}{\zeta M}\left[e^j+\frac{1}{c}(𝐯\times 𝐡)^j-\frac{v^j}{c^2}(𝐞\cdot 𝐯)\right]\hfill \\ & +\frac{e_0}{Mc\zeta ^2}\left[f_5^j-\frac{v_j}{c}f_5^0\right].\hfill \end{array}$$
$`(2.29)`$
To illustrate some of the properties of this system of equations, we treat a simple example in Appendix A. The effective additional forces include not only the term associated with the work done by the field, but also terms associated specifically with the $`\tau `$-dependence of the fields, and the fifth (scalar) field $`a_5`$. Given the fields $`f_\beta ^\alpha `$, Eqs. $`(2.28)`$ and $`(2.29)`$ form a nonlinear coupled system of equations for the particle motion.
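As a sketch of how this system can be treated numerically, the following forward-Euler integration of Eqs. $`(2.28)`$ and $`(2.29)`$ uses constant, invented field components purely for illustration; the drift of $`\zeta `$ and $`|𝐯|`$ exhibits the mass transfer discussed above.

```python
# A minimal Euler sketch of the coupled system (2.28)-(2.29); all field
# values are invented placeholders, not a physical configuration.
import numpy as np

c, k = 1.0, 0.1                          # units with c = 1; k = e0/(M c)
e  = np.array([0.01, 0.0, 0.0])          # electric-like components e^j
h  = np.array([0.0, 0.0, 1.0])           # magnetic-like components h^j
f5 = np.array([0.005, 0.0, 0.0])         # f^j_5
f50 = 0.002                              # f^0_5

zeta, v = 1.0, np.array([0.3, 0.0, 0.0])
dt = 1e-3
for _ in range(200000):
    ev = float(e @ v)
    zeta += dt * (k / c * ev + k / (zeta * c) * f50)                 # Eq. (2.28)
    v = v + dt * (k * c / zeta * (e + np.cross(v, h) / c - v * ev / c**2)
                  + k / zeta**2 * (f5 - v * f50 / c))                # Eq. (2.29)

print(f"zeta = {zeta:.4f}, |v| = {np.linalg.norm(v):.4f}")
```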
For a gauge (generalized Lorentz) in which $`\partial _\alpha a^\alpha =0`$, the field equations
$$\partial _\alpha f^{\beta \alpha }=ej^\beta $$
become
$$\partial _\alpha \partial ^\alpha a^\beta =-ej^\beta ,$$
where, classically, $`j^\beta =\dot{x}^\beta \delta ^4(x-x(\tau ))`$, $`\rho =\delta ^4(x-x(\tau ))`$, and $`x^\mu (\tau )`$ is the world line. The analysis of these equations is in progress.
It has recently been shown, with the help of the Green’s functions for the wave equations of the fields in $`x^\mu ,\tau `$, that the self-reaction derived from the contributions on the right hand side of $`(2.26)`$ is precisely of the form of the radiation reaction terms in the Abraham-Lorentz equations $`(1.9)`$ in the limit that the theory is constrained to mass shell, i.e., that $`(1.3)`$ is enforced . The off-shell corrections provided by $`(2.26)`$ make the system of equations consistent, and should therefore provide a basis for computing problems involving the interaction of radiation with relativistic particles in a consistent way.
CONCLUSIONS
We have shown that the standard relativistic Lorentz force equations are not consistent since they imply the mass-shell constraint $`\dot{x}^\mu \dot{x}_\mu =-c^2`$, a relation that can be valid only for a charged particle moving at constant velocity. The corrections are generally small for short times or small accelerations, and therefore calculations made with this Lorentz force are, in many applications, quite acceptable. However, for very large accelerations ($`at`$ large compared to $`c`$), they could become inaccurate.
A consistent theory may be constructed from a fully gauge covariant form of the Stueckelberg manifestly covariant dynamics, a theory which introduces a fifth gauge field . The Lorentz invariant force equation derived from this theory contains an additional term which enters in a way analogous to the radiation reaction term in the Abraham-Lorentz-Dirac equation (the self reaction force derived from this generalized equation in the mass-shell limit coincides with the radiation reaction term obtained by quite different methods for the Abraham-Lorentz-Dirac equation; it contains contributions from both terms on the right hand side ).
It appears that the consistency of the classical equations governing the interaction of charged particles with electromagnetic radiation requires that both the particles and the fields must be permitted to move “off-shell”, as in the vertices of quantum field theory.
Acknowledgements
We thank J. Beckenstein, E. Comay and F. Rohrlich for discussions.
Appendix A
The purpose of the following example is to show that in some cases the fifth field $`f_5^\mu `$ can cause an effect which is very similar to the radiation effect calculated from the Lorentz-Dirac equation. The fact that the mass is not conserved (the off-mass-shell case) is equivalent, in the case of radiation, to loss of energy through the radiation process. The particular example that we treat here is that of a charged particle in the presence of a uniform magnetic field in the $`z`$ direction ($`𝐁=(0,0,B_0)`$).
As for the radiation reaction term of the Lorentz-Dirac equation, we choose the fifth field term as follows. (It appears that for the usual form of the radiation reaction, in an example with the field magnitudes that we shall choose, the $`\frac{d\ddot{x}^\mu }{d\tau }`$ term seems to be negligible, and the $`\ddot{x}^\mu \ddot{x}_\mu `$ may be approximated by a constant number; one is left with the $`\dot{x}^\mu `$ term. We choose the fifth field term to have a similar structure. This choice is appropriate due to the close relation of these terms to the radiation reaction terms of the usual theory .)
$$f_5^\mu =(C_1\dot{t},C_2\dot{x},C_2\dot{y},0),$$
$`(A.1)`$
where the dot indicates derivative with respect to $`\tau `$. The Lorentz force $`(2.26)`$ can be written as a set of differential equations,
$$M\frac{d^2t}{d\tau ^2}=\frac{e}{c}C_1\frac{dt}{d\tau }$$
$`(A.2)`$
$$M\frac{d^2x}{d\tau ^2}=\frac{eB_0}{c}\frac{dy}{d\tau }+\frac{e}{c}C_2\frac{dx}{d\tau }$$
$`(A.3)`$
$$M\frac{d^2y}{d\tau ^2}=-\frac{eB_0}{c}\frac{dx}{d\tau }+\frac{e}{c}C_2\frac{dy}{d\tau }.$$
$`(A.4)`$
The solution of Eq. $`(A.2)`$ is
$$\dot{t}=\dot{t}_0e^{\alpha _1\tau },$$
$`(A.5)`$
where $`\alpha _1=\frac{eC_1}{Mc}`$. Using the complex coordinate $`u=\dot{x}+i\dot{y}`$, Eqs. $`(A.3)`$ and $`(A.4)`$ can be written as
$$\frac{du}{d\tau }=-i\mathrm{\Omega }u+\alpha _2u,$$
$`(A.6)`$
where $`\alpha _2=\frac{eC_2}{Mc}`$ and $`\mathrm{\Omega }=\frac{eB_0}{Mc}`$ (the Larmor frequency). The solution is
$$u=u_0e^{\alpha _2\tau }e^{-i\mathrm{\Omega }\tau }.$$
$`(A.7)`$
Using $`u(\tau )`$ one finds that,
$$\begin{array}{cc}\hfill \dot{x}& =e^{\alpha _2\tau }(\dot{x}_0\mathrm{cos}(\mathrm{\Omega }\tau )+\dot{y}_0\mathrm{sin}(\mathrm{\Omega }\tau ))\hfill \\ \hfill \dot{y}& =e^{\alpha _2\tau }(-\dot{x}_0\mathrm{sin}(\mathrm{\Omega }\tau )+\dot{y}_0\mathrm{cos}(\mathrm{\Omega }\tau )).\hfill \end{array}$$
$`(A.8)`$
As expected, the radiation effect is determined by the constants $`\alpha _1`$ and $`\alpha _2`$.
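The closed form $`(A.8)`$ is easily checked against a direct integration of Eqs. $`(A.3)`$ and $`(A.4)`$; the values of $`\mathrm{\Omega }`$ and $`\alpha _2`$ below are illustrative.

```python
# A minimal check of (A.8) against an Euler integration of (A.3)-(A.4).
import math

OMEGA, A2 = 1.0, -0.05          # Larmor frequency and alpha_2 (damping assumed)
xd, yd = 1.0, 0.0               # initial dx/dtau, dy/dtau
dtau, tau_end = 1e-4, 2.0

for _ in range(int(tau_end / dtau)):
    xd, yd = (xd + dtau * (OMEGA * yd + A2 * xd),       # Eq. (A.3)
              yd + dtau * (-OMEGA * xd + A2 * yd))      # Eq. (A.4)

exact = math.exp(A2 * tau_end) * math.cos(OMEGA * tau_end)   # (A.8) with y0 = 0
print(f"numeric {xd:.5f} vs closed form {exact:.5f}")        # agree to O(dtau)
```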
It is possible to calculate the actual velocities by dividing Eqs. $`(A.8)`$ by $`\dot{t}`$; this results in
$$\begin{array}{cc}\hfill \frac{dx}{dt}& =e^{-\alpha \tau }\left(\left(\frac{dx}{dt}\right)_0\mathrm{cos}(\mathrm{\Omega }\tau )+\left(\frac{dy}{dt}\right)_0\mathrm{sin}(\mathrm{\Omega }\tau )\right)\hfill \\ \hfill \frac{dy}{dt}& =e^{-\alpha \tau }\left(-\left(\frac{dx}{dt}\right)_0\mathrm{sin}(\mathrm{\Omega }\tau )+\left(\frac{dy}{dt}\right)_0\mathrm{cos}(\mathrm{\Omega }\tau )\right),\hfill \end{array}$$
$`(A.9)`$
where $`\alpha =\alpha _1-\alpha _2`$. Notice that when $`\alpha _1=\alpha _2`$, there is apparent radiation (decrease of amplitude) as a function of $`\tau `$ but not as a function of $`t`$; in terms of $`t`$ (which is redshifted) the particle appears to be circling forever on the same circle. This remarkable illustration is somewhat analogous to the phenomenon in which there is an infinite time required for a particle to arrive at the Schwarzschild radius in the Schwarzschild coordinate $`t`$, but a finite interval in the proper time of the particle.
The magnitude of the ($`t`$-) velocity of the particle is
$$v=v_0e^{-\alpha \tau }.$$
$`(A.10)`$
When $`\alpha =\frac{1}{\tau _0}`$, where $`\tau _0=\frac{1}{\gamma _0\mathrm{\Omega }^2}`$ ($`\gamma _0`$ is the radiation constant of the Lorentz-Dirac equation), Eq. $`(A.10)`$ is exactly the solution which was obtained using the Lorentz-Dirac equation . This result is consistent with the approximations we have made in constructing the example.
References
G.M. Zaslavskii, M.Yu. Zakharov, R.Z. Sagdeev, D.A. Usikov, and A.A. Chernikov, Zh. Eksp. Teor. Fiz 91, 500 (1986) \[Sov. Phys. JEPT 64, 294 (1986)\].
V.I. Arnol’d, Dokl. Akad. Nauk. SSSR 159, 9 (1964).
D.W. Longcope and R.N. Sudan, Phys. Rev. Lett. 59, 1500 (1987); See also, A.J. Lichtenberg and M.A. Lieberman, Regular and Chaotic Dynamics 2nd ed., (Springer-Verlag, New York, 1992).
H. Karimabadi and V. Angelopoulos, Phys. Rev. Lett. 62, 2342 (1989).
We thank T. Goldman for a discussion of this point.
L.D. Landau and E.M. Lifshitz,The Classical Theory of Fields 4th ed., (Pergamon Pr., Oxford, 1975).
F. Rohrlich, Classical Charged Particles, Addison Wesley, Reading, (1965).
J.T. Mendonça and L. Oliveira e Silva, Phys. Rev E 55, 1217 (1997).
E.C.G. Stueckelberg, Helv. Phys. Acta 14, 322 (1941); 14, 588 (1941).
L.P. Horwitz and C. Piron, Helv. Phys. Acta 46, 316 (1973).
B. Mashoon, Proc. VII Brazilian School of Cosmology and Gravitation, Editions Frontières (1994); see also, Phys. Lett. A 145, 147 (1990) and Phys. Rev. A47, 4498 (1993). We thank J. Beckenstein for bringing these references to our attention.
M. Abraham, Theorie der Elektrizität, vol. II, Springer, Leipzig (1905). See ref. for a discussion of the origin of these terms.
P.A.M. Dirac, Proc. Roy. Soc. London Ser. A, 167, 148(1938).
A.A. Sokolov and I.M. Ternov, Radiation from Relativistic Electrons, Amer. Inst. of Phys. Translation Series, New York (1986).
Y. Ashkenazy and L.P. Horwitz, Discrete Dyn. in Nature and Soc., this volume.
A. Einstein, Phys. Z. 12, 509 (1911). See also W. Pauli, Theory of Relativity, Dover, N.Y. (1981).
L.P. Horwitz, R.I. Arshansky and A. Elitzur, Found. Phys. 18, 1159 (1988).
For example, S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, Wiley, N.Y. (1972).
R.I. Arshansky and L.P. Horwitz, Jour. Math. Phys. 30, 66, 380, (1989).
C. Piron, personal communication.
D. Saad, L.P. Horwitz and R.I. Arshansky, Found. of Phys. 19, 1125 (1989); M.C. Land, N. Shnerb and L.P. Horwitz, Jour. Math. Phys. 36, 3263 (1995); N. Shnerb and L.P. Horwitz, Phys. Rev A48, 4068 (1993).
O. Oron and L.P. Horwitz, in preparation.
Figure Caption
fig. 1. A typical relativistic stochastic web.
# The laced interactive games and their a posteriori analysis
## §1. The laced interactive games
In this section we shall briefly introduce some general concepts of the theory of interactive games.
### 1.1. The differential interactive games
###### Definition Definition 1
An interactive system (with $`n`$ interactive controls) is a control system with $`n`$ independent controls coupled with unknown or incompletely known feedbacks (the feedbacks, which are called the behavioral reactions, as well as their couplings with controls, are of so complicated a nature that they cannot be described completely). An interactive game is a game with interactive controls of each player.
Below we shall consider only deterministic and differential interactive systems. For simplicity we suppose that $`n=2`$. In this case the general interactive system may be written in the form:
$$\dot{\phi }=\mathrm{\Phi }(\phi ,u_1,u_2),$$
$`1`$
where $`\phi `$ characterizes the state of the system and $`u_i`$ are the interactive controls:
$$u_i(t)=u_i(u_i^{}(t),[\phi (\tau )]|_{\tau \le t}),$$
i.e. the independent controls $`u_i^{}(t)`$ coupled with the feedbacks on $`[\phi (\tau )]|_{\tau \le t}`$.
$$u_i(t)=u_i(u_i^{}(t),\phi (t),\dot{\phi }(t),\ddot{\phi }(t),\mathrm{},\phi ^{(n)}(t)).$$
A reduction of the general integrodifferential case to the differential one via an introduction of the intention fields was considered in .
### 1.2. The phase lacing integrals
###### Definition Definition 2
Let us consider a differential interactive game with two players, $`\phi `$ is the state of the system, $`u_i`$ are the interactive controls whereas $`u_i^{}(t)`$ are the pure controls ($`i=1,2`$). The magnitude $`K=K(\stackrel{}{u}(t),\stackrel{}{u}^{}(t),\phi (t),\dot{\phi }(t))`$ is called the phase lacing integral iff it is constant in time for all possible parties of the game. Here $`\stackrel{}{u}(t)=(u_1(t),u_2(t))`$, $`\stackrel{}{u}^{}(t)=(u_1^{}(t),u_2^{}(t))`$.
The definition of the phase lacing integral may be generalized to games with an arbitrary number of players.
Let us consider some special classes of the phase lacing integrals:
###### Remark Remark 1
The evolution equations of the interactive game supply us with the dynamical lacing integrals $`K_i=\dot{\phi }_i(t)-\mathrm{\Phi }_i(\phi ,\stackrel{}{u})`$, where the indices $`i`$ denote the components of the magnitudes in any coordinate system.
###### Remark Remark 2
The phase lacing integrals are closed under natural operations (summation, multiplication by any real number or by other phase lacing integrals, functional transformations, etc). The configuration and dynamical lacing integrals possess the same property.
The phase lacing integrals may be considered for any differential interactive system. They are analogs of ordinary integrals for dynamical systems.
### 1.3. The laced interactive games
###### Definition Definition 3
Let us consider a differential interactive game with two players, $`\phi `$ is the state of the system, $`u_i`$ are the interactive controls whereas $`u_i^{}(t)`$ are the pure controls ($`i=1,2`$), each $`u_i`$ and $`u_i^{}`$ has $`n`$ degrees of freedom. The game is called the laced interactive game iff it admits $`2n`$ functionally independent over $`\stackrel{}{u}`$ phase lacing integrals $`K_\alpha (\stackrel{}{u},\stackrel{}{u}^{},\phi ,\dot{\phi })`$ ($`\alpha =1,2,\mathrm{},2n`$).
The functional independence of $`K_\alpha `$ over $`\stackrel{}{u}`$ means that for any fixed values of $`\stackrel{}{u}^{}`$, $`\phi `$ and $`\dot{\phi }`$ the magnitudes $`K_\alpha `$ are (locally) functionally independent as functions of $`\stackrel{}{u}`$.
Here and below we shall suppose that the phase lacing integrals depend smoothly on their arguments. In this situation the (local) functional independence is equivalent to the nonvanishing of the Jacobian of the mapping $`\stackrel{}{u}\mapsto (K_1,\dots ,K_{2n})`$.
Note that the evolution equations provide us with $`m`$ dynamical lacing integrals (see Remark 1), where $`m`$ is the number of degrees of freedom of the system, i.e. the number of coordinates which describe the state $`\phi `$. If all such dynamical lacing integrals are functionally independent, we should look for at least $`2n-m`$ other phase lacing integrals. Often we are able to choose them from the configuration lacing integrals.
## §2. A posteriori analysis of the laced interactive games
### 2.1. A posteriori determination of feedbacks
A posteriori determination of feedbacks means the expression of $`\stackrel{}{u}`$ via $`\stackrel{}{u}^{}`$, $`\phi `$ and $`\dot{\phi }`$ using the phase lacing integrals $`K_\alpha `$. This determination presupposes the knowledge of such a posteriori data as $`\dot{\phi }`$.
###### Theorem 1
Let us consider a laced interactive game, $`\phi `$ is the state of the system, $`u_i`$ are the interactive controls whereas $`u_i^{}(t)`$ are the pure controls of two players ($`i=1,2`$), each $`u_i`$ and $`u_i^{}`$ has $`n`$ degrees of freedom, $`K_1,\mathrm{}K_{2n}`$ are the phase lacing integrals. In this case the interactive controls $`u_i`$ may be expressed (locally) via pure controls $`u_i^{}`$, the state $`\phi `$, its time derivative $`\dot{\phi }`$ and the phase lacing integrals $`K_\alpha `$ as known constants.
###### Demonstration Proof
One should use the fact that the phase lacing integrals are known constants and that the Jacobian of the mapping $`\stackrel{}{u}\mapsto (K_1,\dots ,K_{2n})`$ is not equal to zero.
###### Remark Remark 3
Note that each interactive control $`u_1`$ and $`u_2`$ is expressed via both $`u_1^{}`$ and $`u_2^{}`$.
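Numerically, the a posteriori determination of Theorem 1 amounts to root finding: one solves $`K_\alpha (\stackrel{}{u},\mathrm{})=\mathrm{const}`$ for $`\stackrel{}{u}`$. The sketch below inverts a toy pair of lacing integrals, invented for illustration; any smooth pair with nonzero Jacobian over $`\stackrel{}{u}`$ would do.

```python
# A toy sketch of Theorem 1: recover the interactive controls from known
# phase lacing integrals by root finding.  K1, K2 are invented examples.
from scipy.optimize import fsolve

def lacing(u, u_pure, phi, phi_dot):
    """Toy phase lacing integrals with nonzero Jacobian over u (assumed)."""
    u1, u2 = u
    K1 = phi_dot - (u1 + 0.5 * u2 * phi)      # a dynamical-type integral
    K2 = u2 + 0.2 * u1**3 - u_pure[1]         # a configuration-type integral
    return [K1, K2]

u_pure, phi, phi_dot = (0.3, 0.7), 1.2, 0.9
u_true = (0.4, 0.5)                            # the feedback we pretend acted
K_obs = lacing(u_true, u_pure, phi, phi_dot)   # constants along the party

u_rec = fsolve(lambda u: [a - b for a, b in
                          zip(lacing(u, u_pure, phi, phi_dot), K_obs)],
               x0=[0.0, 0.0])
print("recovered controls:", u_rec)            # ~ (0.4, 0.5)
```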
### 2.2. Virtual a posteriori decomposition of a collective control
The virtual a posteriori decomposition of a collective control means a simultaneous transformation of both the interactive and pure controls of the two players into controls of virtual players such that the a posteriori determined feedback of each virtual player does not contain a dependence on the controls of the other player.
###### Theorem 2
Let us consider a laced interactive game, $`\phi `$ is the state of the system, $`u_i`$ are the interactive controls whereas $`u_i^{}(t)`$ are the pure controls of two players ($`i=1,2`$), each $`u_i`$ and $`u_i^{}`$ has $`n`$ degrees of freedom, $`K_1,\mathrm{}K_{2n}`$ are the phase lacing integrals. Let the interactive controls $`u_i`$ be expressed (locally) via pure controls $`u_i^{}`$, the state $`\phi `$, its time derivative $`\dot{\phi }`$ and the phase lacing integrals $`K_\alpha `$ as known constants:
$$u_i=u_i(\stackrel{}{u}^{},\phi ,\dot{\phi };K_\alpha ).$$
Suppose that $`\stackrel{}{u}^{}=0`$ implies that $`\stackrel{}{u}=0`$ (i.e. $`0`$ is a stationary point of the mapping $`\stackrel{}{u}^{}\mapsto \stackrel{}{u}`$) and also that the Jacobi matrix of the mapping $`\stackrel{}{u}^{}\mapsto \stackrel{}{u}`$ is nondegenerate and diagonalizable in a certain neighbourhood of zero. Then there exists a function $`\zeta (\stackrel{}{x};\phi ,\dot{\phi },K_\alpha )`$, defined locally in some neighbourhood of zero, such that if we transform both interactive and pure controls $`\stackrel{}{u}`$ and $`\stackrel{}{u}^{}`$ according to it:
$`\stackrel{}{w}=`$ $`\zeta (\stackrel{}{u};\phi ,\dot{\phi },K_\alpha )`$
$`\stackrel{}{w}^{}=`$ $`\zeta (\stackrel{}{u}^{};\phi ,\dot{\phi },K_\alpha )`$
then in the new variables $`\stackrel{}{w}`$ and $`\stackrel{}{w}^{}`$ the a posteriori determined feedbacks contain dependences only between the corresponding components of the controls $`\stackrel{}{w}`$ and $`\stackrel{}{w}^{}`$, i.e.
$`w_1=`$ $`w_1(w_1^{},\phi ,\dot{\phi };K_\alpha ),`$
$`w_2=`$ $`w_2(w_2^{},\phi ,\dot{\phi };K_\alpha ).`$
###### Demonstration Proof
The function $`\zeta `$ is constructed explicitly starting from the point $`\stackrel{}{u}^{}=0`$ to provide the claimed conditions on the mapping $`\stackrel{}{w}^{}\mapsto \stackrel{}{w}`$.
###### Remark 4
The controls $`w_i`$ and $`w_i^{\prime }`$ ($`i=1,2`$) may be interpreted as the interactive and pure controls of two virtual players. The transformation from the real players to the virtual players depends on the state $`\phi `$ of the game and on its a posteriori determined time derivative $`\dot{\phi }`$.
###### Remark 5 (some psychological interpretations)
The virtual a posteriori decomposition of a collective control may be regarded as an identification of two virtual players in the area of collective subconscious behavioral reactions. A posteriori analysis of the laced interactive games allows one to perform this procedure with mathematical precision. It is remarkable that the decomposition depends on the state $`\phi `$ and on its time derivative $`\dot{\phi }`$ (hence, on the intensity of the real controls).
## §3. The retarded control approximation
A posteriori analysis of the laced interactive games allows one to approximate an interactive game by ordinary differential games in real time; the resulting series of approximating ordinary differential games may be used to formulate predictions for processes in the considered interactive game.
### 3.1. The frozen feedback approximation
The simplest approximation of a laced interactive game by ordinary differential games is the frozen feedback approximation. Consider a fixed moment of time $`t_0`$ and the observed values of $`\phi (t_0)`$ and $`\dot{\phi }(t_0)`$. The knowledge of the phase lacing integrals $`K_\alpha `$ then allows us to express the interactive controls $`\vec{u}(t_0)`$ via the pure controls $`\vec{u}^{\prime }(t_0)`$:
$$\vec{u}(t_0)=\vec{u}(\vec{u}^{\prime }(t_0),\phi (t_0),\dot{\phi }(t_0);K_\alpha ).$$
This a posteriori determined feedback may be frozen and substituted into the evolution equations:
$$\dot{\phi }(t)=\mathrm{\Phi }(\phi (t),\vec{u}(\vec{u}^{\prime }(t),\phi (t_0),\dot{\phi }(t_0);K_\alpha )).$$
Thus we obtain an ordinary differential game, which is just the frozen feedback approximation of the initial laced interactive game.
### 3.2. The retarded control approximation
The retarded control approximation of a laced interactive game is constructed in the following manner. Consider an arbitrary $`\mathrm{\Delta }t>0`$. For any moment of time $`t`$, the knowledge of the phase lacing integrals $`K_\alpha `$ allows us to perform the a posteriori determination of feedbacks, i.e. to express the interactive controls $`\vec{u}(t)`$ via the pure controls $`\vec{u}^{\prime }(t)`$:
$$\vec{u}(t)=\vec{u}(\vec{u}^{\prime }(t),\phi (t),\dot{\phi }(t);K_\alpha ).$$
Let us now replace the state $`\phi (t)`$ and its time derivative $`\dot{\phi }(t)`$ by their retarded (delayed) values $`\phi (t-\mathrm{\Delta }t)`$ and $`\dot{\phi }(t-\mathrm{\Delta }t)`$. This approximate feedback may be substituted into the evolution equations:
$$\dot{\phi }(t)=\mathrm{\Phi }(\phi (t),\vec{u}(\vec{u}^{\prime }(t),\phi (t-\mathrm{\Delta }t),\dot{\phi }(t-\mathrm{\Delta }t);K_\alpha )).$$
Thus we obtain an ordinary differential game (with retarded, delayed arguments), which is the retarded control approximation of the initial laced interactive game.
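Numerically, the retarded control approximation is an ordinary delay-differential integration. Below is a minimal sketch for a toy one-dimensional game; the evolution law `Phi`, the a posteriori determined feedback and the pure control are assumed toy data, and a plain Euler step is used.

```python
import numpy as np

dt, Dt = 0.01, 0.1                 # integration step and delay Delta t
steps, lag = 2000, int(Dt / dt)

def u_pure(t):                     # pure control of the player (assumed)
    return np.cos(t)

def feedback(up, phi, dphi, K=0.5):
    """A posteriori determined feedback (assumed toy form)."""
    return up + K * phi * dphi

def Phi(phi, u):                   # evolution law (assumed)
    return -phi + u

phi, dphi = np.zeros(steps), np.zeros(steps)
for n in range(steps - 1):
    m = max(n - lag, 0)            # retarded argument t - Delta t
    u = feedback(u_pure(n * dt), phi[m], dphi[m])
    dphi[n] = Phi(phi[n], u)
    phi[n + 1] = phi[n] + dt * dphi[n]
# Using phi[0], dphi[0] instead of phi[m], dphi[m] throughout would
# give the frozen feedback approximation of Sec. 3.1.
print(phi[-1])
```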
Sometimes the frozen feedback approximation and the retarded control approximation coincide, but this is not so in general.
###### Remark 6
The retarded control approximations form a series of approximations parametrized by the initial time $`t_0`$, such that the states $`\phi (t)`$ of the considered laced interactive game at $`t<t_0`$ are the initial data for the approximation. One may describe more subtle effects by considering the mutual correlations of the retarded control approximations with variable $`t_0`$.
I suspect that the ideological intuition of the nonlinear geometric algebra (see also ) may be essential for the analysis of series of retarded control approximations; for instance, a verification of algebraic correlations between the different retarded control approximations may be important both for the analysis of concrete games and for the axiomatic separation of interesting classes of laced interactive games, or for a classification of various types of interactivity.
Note that the procedure of virtualization may be considered not only for the interactive systems but also for the interactive games.
###### Remark 7
The approximating series of games may be used to formulate predictions for processes in the initial interactive game. Such a formulation may be analytic; however, it is difficult to perceive and to interpret the obtained results in real time, so it is rather reasonable to use some visual representation of the series of approximating games. In this way we construct an enlargement of the interactive game, in which the players interactively observe the visual predictions for the game in real time. Certainly, such an enlargement may strongly transform the structure of interactivity of the game (i.e. change the feedbacks entering into the interactive controls of the players). Note that the aim of virtualization is to restore the past of the interactive processes, whereas the goal of the proposed enlargement is to correct their future. In some sense they are two complementary faces of a single general procedure.
## §4 Conclusions
Thus, an important class of differential interactive games, the laced interactive games, was considered. A posteriori analysis of such games (including the virtual a posteriori decomposition of a collective control) was discussed, and approximations of the laced interactive games by ordinary differential games, the frozen feedback approximation and the retarded control approximation, were constructed.
# On the frequency and remnants of Hypernovae
## 1 Introduction
Recent studies of Gamma-Ray Burst afterglows (van Paradijs et al 1997; Costa et al 1997; Frail et al 1997; Kulkarni et al 1998a, Galama et al 1998) and the determination of some host or intervening galaxy redshifts (Metzger et al 1997, Kulkarni et al 1998b; Djorgovski et al 1998) have indicated the presence of one or more new classes of astrophysical explosion. The tentative identification of extragalactic star-formation regions as the site of these events (Paczynski 1998; Djorgovski et al 1998) suggests a link of such events with massive star evolution. In this letter I shall consider the possibility that these events represent unusually energetic stellar explosions, termed ‘hypernovae’ in the literature.
Given a new class of astrophysical object or event, several natural questions arise. What is the frequency with which it occurs (in our Galaxy and others)? What, if any, are the observable manifestations of such an event other than that of its discovery?
## 2 Rate Estimates
Estimates of the cosmological Gamma-Ray Burst (or GRB) rate range from $`10^{-6}\mathrm{yr}^{-1}`$ per Galaxy for a constant comoving rate (Cohen & Piran 1995) to $`10^{-8}\mathrm{yr}^{-1}`$ per Galaxy (Totani 1997; Wijers et al 1998) for a population that follows the cosmological star formation rate (Lilly et al 1996; Madau et al 1996). These estimates assume an isotropically emitting source. The estimated observed $`\gamma `$-ray energy of GRB 971214 is $`3\times 10^{53}`$ ergs (Kulkarni et al 1998b), 300 times the total energy output of an average supernova. This may be reduced drastically if the emission is strongly beamed, but the corresponding event rate increases by the same factor as the energy is reduced. Thus, one may scale the event rate according to the true energy output of the average GRB,
$$R\sim 10^{-7}\mathrm{yr}^{-1}\left(\frac{3\times 10^{52}\mathrm{ergs}}{ϵE}\right)$$
(1)
where I have used the star formation rate estimate (the comoving rate would be 100 times larger). $`E`$ is the total energy release and $`ϵ`$ is the efficiency of conversion to observed $`\gamma `$-ray energy.
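A quick numerical reading of Eq. (1) (a sketch; the normalization is simply read off the equation above):

```python
def grb_rate(E_ergs, eps=1.0, comoving=False):
    """Rate per galaxy per year implied by Eq. (1) for true energy
    release E and gamma-ray efficiency eps; the constant-comoving-rate
    estimate is ~100 times larger."""
    R = 1e-7 * 3e52 / (eps * E_ergs)
    return 100 * R if comoving else R

print(grb_rate(3e53))   # isotropic GRB 971214 energy -> ~1e-8 /yr/galaxy
print(grb_rate(3e49))   # strong beaming, E ~ 3e49 erg -> ~1e-4 /yr/galaxy
```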
Observations of the supernova 1998bw associated with GRB 980425 (Galama et al 1998; Kulkarni et al 1998a) suggest that there exists a second class of GRB events which are indeed associated with the explosion of a massive star and with a $`\gamma `$-ray energy output ($`10^{48}`$ ergs) significantly less than that of the few other bursts with known distances. Thus, bursts of this type can only be detected out to distances $`\sim 100`$ Mpc (Bloom et al 1998), as opposed to the Gpc distances of the other detected bursts. Hence the detectable volume for such events is $`10^{-3}`$ of that for the cosmological bursts and, given that as much as $`10\%`$ of observed bursts could belong to this second class (Bloom et al 1998), the intrinsic comoving event rate is almost certainly higher.
The association of the type Ib/c SN1998bw with GRB 980425 has prompted some (Wang & Wheeler 1998) to suggest an association between GRB and all type Ib/c supernovae. This is disputed by several authors (Bloom et al 1998; Kippen et al 1998; Graziani et al 1998), who point out the lack of other convincing associations as well as the unusually bright nature of SN1998bw. Although event rates based on a single event are necessarily uncertain, I shall adopt a hypernova Ib rate $`10\%`$ that of the supernova Ib rate. This is based on the fact that Kippen et al present a catalogue of 160 bright supernovae (selection effects are claimed to be less important for this subset) since 1991 (the BATSE era), which contains 11 type Ib supernovae. The 10$`\%`$ hypernova fraction is high enough to allow the detection of at least one hypernova from samples of this size while remaining consistent with the lack of other convincing associations (Bloom et al 1998; Kippen et al 1998; Graziani et al 1998). The supernova Ib rate is approximately half that of type II supernovae (van den Bergh & Tammann 1991). Thus, I shall adopt a rate of $`10^{-3}\mathrm{yr}^{-1}`$ in the Galaxy as the hypernova rate.
## 3 A Possible Class of Hypernova Remnants
The offspring of supernovae are believed to be neutron stars (Baade & Zwicky 1934). This is supported by the association of some young pulsars with supernova remnants and by the approximate agreement of the pulsar birthrate with the supernova rate (Helfand & Becker 1984; Weiler & Sramek 1988; Gaensler & Johnston 1998). In some scenarios, the offspring of a hypernova is an isolated black hole (Woosley 1993; Paczynski 1998), which powers the GRB either from the binding energy of accreted material or by magnetic field extraction of rotational energy. In the latter case, the rotational energy required implies that the massive stellar core of the pre-collapse giant star must be spinning rapidly with respect to the overlying envelope, contrary to some evolutionary calculations (Spruit & Phinney 1998 and references therein). Others (Wang & Wheeler 1998; Cen 1998) have suggested that the high mean velocities of the pulsars (Lyne & Lorimer 1994) result from hypernova-like processes. However, these authors claim associations between all supernovae of type Ib/c and GRB, which seems unlikely (Bloom et al 1998; Kippen et al 1998; Graziani et al 1998).
Here I suggest a modified version of the above scenario. Recent work on the distribution of pulsar velocities, incorporating information from different sources such as pulsar-supernova remnant associations (Kaspi 1996) and X-ray binary properties (Brandt & Podsiadlowski 1995; Kalogera, King & Kolb 1998), as well as improved treatments of the proper motion data (Hansen & Phinney 1997; Hartman et al 1997; Cordes & Chernoff 1998), all favour a lower median velocity of $`200`$–$`300\mathrm{km}\mathrm{s}^{-1}`$. However, there is dramatic evidence in some individual cases for very high pulsar velocities (e.g. Cordes, Romani & Chernoff 1993; Cordes & Chernoff 1998). The detailed statistical studies indicate that the fraction of pulsars with velocities $`>800\mathrm{km}\mathrm{s}^{-1}`$ is less than 20% (Hansen & Phinney 1997; Cordes & Chernoff 1998). This curious bimodal distribution is, as yet, unexplained.
If we associate the pulsars in the lower-velocity, majority component with the ordinary supernovae (birthrate $`10^{-2}\mathrm{yr}^{-1}`$), then the high velocity pulsars represent a population with a birthrate appropriate to that of the hypernovae, $`10^{-3}\mathrm{yr}^{-1}`$. Furthermore, modelling of the SN1998bw lightcurve (Iwamoto et al 1998; Woosley, Eastman & Schmidt 1998) suggests an energy release $`\sim 3\times 10^{52}`$ ergs, approximately 30 times that of a traditional supernova. If we regard this extra energy as the defining characteristic of a hypernova, we might expect a consequently higher velocity for the remnant as well. Indeed, if the fraction of the total energy channelled into pulsar kinetic energy is constant, the median velocity of pulsars born from hypernovae is $`\sqrt{30}\times 200\mathrm{km}\mathrm{s}^{-1}\approx 1100\mathrm{km}\mathrm{s}^{-1}`$, appropriate for the observed fast pulsar population. However, the exact link between energy release and pulsar velocities is still unknown. Proposed scenarios range from hydrodynamic or global instabilities and asymmetric collapse (Burrows, Hayes & Fryxell 1995; Janka & Müller 1996; Goldreich, Lai & Sahrling 1998) to various anisotropic radiation (‘rocket’) mechanisms (Harrison & Tademaru 1975; Chugai 1984; Vilenkin 1995; Kusenko & Segre 1996; Horowitz & Li 1997; Lai & Qian 1998).
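As a quick arithmetic check of this scaling (the 200 km/s baseline and the factor-of-30 energy ratio are the values quoted above):

```python
import math

v_sn = 200.0                     # km/s, median pulsar velocity from ordinary SNe
boost = 3e52 / 1e51              # hypernova/supernova energy ratio, ~30
print(math.sqrt(boost) * v_sn)   # ~1100 km/s, the observed fast population
```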
What of the spins of the hypernova pulsar offspring? Spruit & Phinney (1998) have conjectured that the velocities and spins of the observed pulsars may have the same origin (since an off-centre kick will generate both linear and angular momentum). If that holds true in this case as well, we expect the spins of the high velocity pulsars to also be particularly rapid. The initial spins of pulsars are difficult to determine, since young pulsars undergo rapid spin-down, but the fastest rotating young X-ray pulsar has a period of 16 ms (Marshall et al 1998). This suggests that initial spins $`<3`$ ms may be possible in hypernova events. I shall return to this point in the next section.
Hypernovae may also find application in the study of the production of r-process material. The most widely accepted site of r-process production is the neutrino-heated ejecta of hot protoneutron stars (Woosley & Baron 1992; Meyer et al 1992). However, Qian, Vogel & Wasserburg (1998) find that the production of $`{}^{129}\mathrm{I}`$ and $`{}^{180}\mathrm{Hf}`$ for the protosolar nebula requires at least two different production sites, with different ratios of neutrons to seed nuclei. They find that the two hypothetical processes have to occur at different rates, with the less frequent events occurring $`\sim 1/10`$ as often as the more frequent events and with lower ratios of neutrons to seed nuclei. If we associate the high-rate option with traditional core collapse supernovae, then our inferred hypernova rate is appropriate for the second kind of event. Furthermore, the r-process operates on neutrino diffusion timescales $`\sim `$1–10 s and on length scales corresponding to the neutrino-heated ‘hot bubble’ surrounding the nascent neutron star, $`10`$–$`50`$ km, where the mostly dissociated material yields a high neutron/seed-nuclei ratio (Meyer et al 1992). A neutron core moving at velocities $`\sim 1000\mathrm{km}\mathrm{s}^{-1}`$ will cross this bubble length within $`\sim 0.1`$ seconds. The neutrino-heated wind velocities on these scales are also $`100`$–$`1000\mathrm{km}\mathrm{s}^{-1}`$ (Qian & Woosley 1996). Thus, the velocity of the neutron star is likely to have a significant effect on the nucleosynthetic yield. If hypernovae result in black hole remnants, they will not contribute to this process, as they swallow most of their heavy element production (Timmes, Woosley & Weaver 1996 and references therein).
Finally, it is worth noting that the distance ($`\sim 40`$ Mpc) of SN1998bw/GRB 980425 is approximately the value of the Greisen-Zatsepin-Kuz’min cutoff, estimated to be $`\sim 30`$ Mpc (Protheroe & Johnson 1995). Thus, if GRB events are responsible for the generation of Ultra-High Energy cosmic rays (Milgrom & Usov 1995; Waxman 1995), there are reasonable prospects for the detection of cosmic rays associated with this event. Recall that delays of $`\sim 1`$ year are expected due to Galactic magnetic fields.
## 4 Magnetars and Cosmological Bursts
If the spins of pulsars are determined by the kicks they receive during their birth, as suggested by Spruit & Phinney (1998), then the spins of pulsars born from hypernovae will be particularly fast. Indeed, if spins reach $`<1`$ ms, the conditions for efficient field amplification by proto-neutron star convection are met (Duncan & Thompson 1992) and the remnant will most likely be a magnetar (the existence of such objects has recently been demonstrated by Kouveliotou et al 1998), i.e. a neutron star with magnetic field $`10^{15}`$–$`10^{16}`$ G. However, there is likely to be a distribution of spins, and some normal-field pulsars must result, since many of the known high velocity pulsars have average magnetic field strengths.
If some fraction of hypernovae do yield magnetars, these events may power the cosmological GRB as well, providing a common origin for the two observed classes. Several authors (Usov 1992; Fatuzzo & Melia 1993; Thompson 1994; Blackman, Yi & Field 1996) have discussed powering cosmological bursts using high field neutron stars, although the usual scenario invokes accretion induced collapse of a strongly magnetic white dwarf. The scenario presented here plumbs a different energy source to power the burst in that the rotational energy of the magnetar arises from the same mechanism that taps the explosion to provide the kick velocity.
The estimated birthrate of magnetars in the Galaxy (see Kouveliotou et al 1998 and references therein) suggests a rate of the same order of magnitude as the hypernova rate. Thus, the fraction of hypernovae that yield magnetars is $`f_m\sim 0.1`$–$`1`$. Assuming that one requires $`P<1`$ ms to generate a magnetar (Duncan & Thompson 1992), cosmological GRB should then tap an energy reservoir $`E>2\times 10^{52}`$ ergs in this scenario. If we wish to match a rate $`f_m\times 10^{-3}\mathrm{yr}^{-1}\sim 10^{-4}\mathrm{yr}^{-1}`$ with the rate in equation (1), then we need only a total energy in the beam of $`\sim 3\times 10^{49}`$ ergs, corresponding to a beaming angle of $`\sim 3`$ degrees and an efficiency of conversion of rotational to beamed energy of $`\sim 10^{-3}`$. If we use the constant comoving rate estimate, then the beaming angle is $`\sim 30`$ degrees and the conversion efficiency $`\sim 0.1`$. Thus, this scenario can easily generate sufficient events with appropriate energies and beaming angles. If only a fraction of magnetar births generate GRB, then we require a greater efficiency and a larger beaming angle. Further constraints on the beaming are possible by studying the effects of afterglows in other wavebands (Perna & Loeb 1998). The high spins appropriate to the magnetars may also help to explain the variation in durations between bursts via the competition between gravitational and electromagnetic radiation (Blackman & Yi 1998).
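The bookkeeping of this paragraph can be checked in a few lines; the two-sided-cone geometry (beaming fraction $`f_b\approx \theta ^2/2`$) is an assumed convention.

```python
import math

R_true = 1e-4                 # magnetar birthrate, f_m * 1e-3 per yr per galaxy
R_iso  = 1e-7                 # Eq. (1) apparent rate for E = 3e52 erg, eps = 1
f_b    = R_iso / R_true       # required beaming fraction, ~1e-3

E_beam = f_b * 3e52           # energy actually in the beam
theta  = math.degrees(math.sqrt(2 * f_b))   # half-angle of two opposite cones
eff    = E_beam / 2e52        # drawn from the rotational reservoir E > 2e52 erg
print(f"E_beam ~ {E_beam:.0e} erg, theta ~ {theta:.1f} deg, eff ~ {eff:.1e}")
# the comoving estimate (R_iso = 1e-5) gives theta ~ 26 deg and eff ~ 0.15
```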
## 5 Constraints and Predictions
The scenarios I have described above invoke the release of energy in the core collapse of a massive star to power them. As such, there is little dependence on the composition of the stellar envelope. Thus, just as we believe type II and type Ib supernovae to correspond to core collapse of stars with and without hydrogen envelopes respectively, we must expect hypernovae to occur in both hydrogen-rich and hydrogen-poor forms. SN1998bw is believed to be a type Ib hypernova, and we might ask whether there exist any candidates for type II hypernovae. One possible candidate for such an event would be SN1979C (Branch et al 1981), which outshone most other type II supernovae by 2–3 magnitudes. Furthermore, this was a supernova of type II-L, a class which Gaskell (1992) claims is the hydrogen-rich equivalent of the type Ib events, in that this subset seems to be much closer to standard ‘bombs’ than the full, rather heterogeneous, type II sample. Gaskell’s estimate for the fraction of unusually luminous type II-L is $`4`$–$`8\%`$, consistent with our assumption that the overenergetic fraction of all core collapse explosions is $`<20\%`$.
How many hypernovae generate GRB? Let us first consider the SN1998bw/GRB 980425 class. Kulkarni et al (1998a) find evidence in the radio emission for a relativistic shock preceding the main shock. It is thought that this decelerating shock may have generated the gamma-rays at an earlier time by an as yet poorly understood mechanism. To generate such a shock, a significant amount of energy, $`>10^{49}`$ ergs, must have been coupled to the outer $`10^{-5}M_{\odot }`$ of the stellar envelope. As such, it is likely that this type of GRB will be associated only with type Ib hypernovae (by virtue of the smaller envelope mass and steeper density gradient).
If we believe that this model can also explain the more energetic cosmological GRB, the scenario requires the beaming of energy from a young magnetar. Whether such events can occur in type II hypernovae will depend on whether the jet can penetrate the overlying hydrogen envelope while still avoiding the baryon loading problem. If not, we do not expect a GRB to be associated with such an event. However, the rotational energy released, $`\sim 10^{52}`$ ergs, is still a substantial fraction of the hypernova energy and may perhaps result in observable asymmetry in the explosion. Such events should be detected in high-$`z`$ supernova or direct optical transient searches.
If we consider the possibility that GRB may be associated with cosmological type II hypernovae, what are the chances of observing such an association? Let us consider the detectability of a bright supernova such as SN1979C in each of the well-studied cosmological afterglow cases. I model the peak flux of this event as a diluted black body of effective temperature $`\sim 13000`$ K, as inferred from the parameters presented in Schmidt, Kirshner & Eastman (1992) and Cappellaro, Turatto & Fernley (1995). The maximum brightness may be compared to the observed afterglow or host galaxy emission at the appropriately redshifted time of maximum light ($`\sim 7`$ days in the rest frame for SN1979C). In all three cases with redshift information (Metzger et al 1997; Kulkarni et al 1998a; Djorgovski et al 1998), the peak R magnitude is larger than the afterglow or host magnitude at the appropriate time. Furthermore, the sensitivity to extinction is large, since the observed emission is from the rest-frame UV. The detectability of type Ib events ($`\sim 1.5`$ magnitudes fainter at peak) is even harder. At limiting magnitudes $`R\sim 25`$, type II hypernovae are detectable out to $`z\sim 1`$ even for reasonable extinctions, but may be dwarfed by the GRB afterglow itself.
The connection between high velocities and spins proposed by Spruit & Phinney (1998) and the connection between rapid spins and strong magnetic fields proposed by Duncan & Thompson (1992) naturally lead to a halo of magnetars and neutron stars about our Galaxy and others. This is, in fact, the GRB scenario proposed by Duncan & Thompson, which sought to explain GRB as magnetic reconnection events in the Galactic magnetar halo. Although I now invoke their births as the source of the GRB, it is possible that there is a third class of GRB event waiting to be discovered (perhaps some of the bursts with no observable optical afterglow could arise in this extended halo).
## 6 Conclusions
In this paper I have presented circumstantial evidence from pulsar velocity and r-process nucleosynthesis studies which supports the existence of another class of astrophysical explosion besides the supernovae, with a rate and properties similar to those inferred for the hypernovae. Such links are highly speculative but, given the complexity of the theory underlying these phenomena, any suggestion or hint of corroborating evidence is invaluable. Furthermore, the conditions that are likely to result in a hypernova are appropriate for the production of magnetars, which could generate the cosmological GRB as well.
An important point to note here is that the connection between kicks and spins proposed by Spruit & Phinney provides a new source of rotational energy to power cosmological GRB. This scenario is essentially the inverse of that proposed by Cen (1998) or Wang & Wheeler (1998), in that we invoke the kick mechanism (whatever that may be) to provide the energy source of the burst (rather than a momentum imbalance in the burst jet emission to provide the kicks). It may also serve to alleviate the problems associated with strong core-envelope coupling in collapsar progenitor models for GRB.
Note also that this model rests on an (as yet) unknown mechanism for generating $`20`$–$`30`$ times the canonical $`10^{51}`$ ergs of mechanical energy in a core collapse explosion. Under this hypothesis, hypernovae should occur in both hydrogen-rich and hydrogen-poor form, just as core-collapse supernovae do. However, it may be more difficult to generate GRB if there is a massive hydrogen envelope to penetrate. Nevertheless, both events should appear in optical transient searches that don’t trigger on gamma rays.
This model provides an explanation for the curious bimodality in the pulsar velocity distribution, given the reasonable assumption that the contribution to kinetic energy is an approximately constant fraction of the collapse energy release. However, it must be noted that this is based on a sample of $`\sim 100`$ objects, and pulsar surveys are bedevilled by myriad selection effects. Ongoing observational programs will add to the data in forthcoming years and should conclusively address the veracity of the velocity bimodality.
# Metal-insulator transition in spatially-correlated random magnetic field system
## Abstract
We reexamine the problem of delocalization of two-dimensional electrons in the presence of random magnetic field. By introducing spatial correlations among random fluxes, a well-defined metal-insulator transition characterized by a two-branch scaling of conductance has been demonstrated numerically. Critical conductance is found non-universal with a value around $`e^2/h`$. Interesting connections of this system with the recently observed $`B=0`$ two-dimensional metallic phase (Kravchenko et al., Phys. Rev. B 50, 8039 (1994)) are also discussed.
Whether two-dimensional (2D) electrons can become delocalized in the presence of a random magnetic field (RMF) is still controversial. This is a very important issue, related to many interesting systems such as the half-filled quantum Hall effect (QHE), the gauge-field description of high-$`T_c`$ superconductors, and so on. By using the standard transfer-matrix method, a number of numerical calculations have been performed for a non-interacting 2D electron system subject to spatially-uncorrelated RMF. The results indicate that electrons are always localized near the band edge, while there is a dramatic enhancement of the localization length as one moves towards the band center. However, the interpretation of the latter is rather conflicting, ranging from the claim that all states are still localized, with an extremely large localization length close to the band center, to the existence of a critical region with divergent localization length. Even if a critical region characterized by wavefunctions with fractional dimensionality could exist here, a metallic phase seems to be ruled out by those numerical calculations, since a two-branch scaling, the hallmark of a metal-insulator transition (MIT), has never been found. Analytically, while a study based on a perturbative nonlinear sigma model approach pointed to the localization of all states, the existence of extended states was shown to be possible in the presence of a long-range logarithmic interaction of the topological density (due to the fluctuating Hall conductance), which is supported by direct numerical calculations of the topological Chern number for the case of spatially-uncorrelated RMF with reduced field strength.
In contrast to spatially-uncorrelated RMF, however, magnetic flux fluctuations in realistic systems may be much smoother, with finite-range spatial correlations. Such smoothness can significantly reduce the random scattering effects while still retaining the delocalization effect introduced by magnetic fluxes. In this paper, we demonstrate numerically for the first time the existence of an MIT characterized by a two-branch scaling of the conductance in the presence of spatially-correlated RMF. The critical conductance itself is non-universal, with a value around $`e^2/h`$ which generally increases as the Fermi energy shifts towards the band center. With a much reduced error bar, the present numerical algorithm is also applied to an uncorrelated (white-noise limit) RMF case, and the results unambiguously show that all states are localized, without a critical region, at strong RMF. Possible connections of the present RMF system to the zero-magnetic-field ($`B=0`$) 2D metal are also discussed at the end of the paper.
We consider a tight-binding lattice model of noninteracting electrons under RMF. The Hamiltonian is defined as follows:
$$H=\sum _{<ij>}e^{ia_{ij}}c_i^+c_j+\sum _iw_ic_i^+c_i$$
(1)
Here $`c_i^+`$ is a fermionic creation operator, and $`<ij>`$ refers to two nearest-neighboring sites. $`w_i`$ is an uncorrelated random potential (white-noise limit) with strength $`|w_i|\le W`$. The magnetic flux per plaquette is given by $`\varphi (k)=\sum _{\square }a_{ij}`$, where the summation runs over the four links around the plaquette labeled by $`k`$. We are interested in the case where the $`\varphi (k)`$ at different $`k`$’s are correlated, which can be generated in the following way:
$$\varphi (k)=\frac{h_0}{\lambda _f^2/4}\sum _if_ie^{-|R_k-R_i|^2/\lambda _f^2}$$
(2)
where $`R_k`$ ($`R_i`$) denotes the spatial position of a given plaquette $`k`$ ($`i`$). $`h_0`$ and $`\lambda _f`$ are the characteristic strength and the correlation length of the RMF, respectively. $`f_i`$ is a random number distributed uniformly between $`(-1,+1)`$.
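As an illustration of Eq. (2), a minimal sketch generating such correlated fluxes on a finite lattice might look as follows (the open boundaries, lattice size and random seed are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
L, h0, lam = 32, 1.0, 5.0        # lattice size, flux strength, correlation length

f = rng.uniform(-1.0, 1.0, size=(L, L))      # white-noise amplitudes f_i
X, Y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

phi = np.zeros((L, L))
for i in range(L):               # Gaussian smearing of each source, Eq. (2)
    for j in range(L):
        phi += f[i, j] * np.exp(-((X - i) ** 2 + (Y - j) ** 2) / lam**2)
phi *= h0 / (lam**2 / 4.0)

print(phi.mean(), phi.std())     # typical correlated flux per plaquette
```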
We employ the following numerical algorithm to calculate the longitudinal conductance $`G_{xx}`$. Based on the Landauer formula, $`G_{xx}`$ for a square sample $`𝒩=L\times L`$ can be determined as a summation over contributions from all the Lyapunov exponents of the Hermitian transfer matrix product $`T^+T`$. To reduce the boundary effect of a finite-size system, we connect $`M`$ different square samples together to form a very long stripe along the $`x`$ direction \[of size $`L\times (LM)`$\]. Typically $`M`$ is chosen to be larger than $`5000`$, even for the largest sample size ($`L=200`$) in this work. In this way, the statistical error bar of our results is significantly reduced (to about $`1.5\%`$). In most earlier numerical calculations, the finite-size localization length was computed instead, for which the statistical fluctuation is usually quite large (especially near the band center) compared to a direct calculation of the finite-size longitudinal conductance as in the present algorithm.
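The essential steps of such a computation can be sketched compactly. The following is a schematic, self-contained version, assuming unit hopping, hard-wall transverse boundaries, fluxes gauged onto the transverse links, and the Pichard-Landauer sum $`G=(e^2/h)\sum _n\mathrm{cosh}^{-2}(x_n)`$ over the positive Lyapunov exponents; it is an illustration, not the production code behind the figures.

```python
import numpy as np

rng = np.random.default_rng(1)
Ly, Lx, E, Wd = 8, 400, 1.0, 1.0          # strip width/length, Fermi energy, disorder
phi = rng.uniform(-np.pi, np.pi, size=(Lx, Ly - 1))   # uncorrelated plaquette fluxes
a_y = np.cumsum(phi, axis=0)              # transverse link phases realizing the fluxes

I = np.eye(Ly)
Q = np.eye(2 * Ly, dtype=complex)
lyap = np.zeros(2 * Ly)
for n in range(Lx):
    t = np.exp(1j * a_y[n])
    H = np.diag(rng.uniform(-Wd, Wd, Ly)).astype(complex)   # on-site disorder w_i
    H += np.diag(t, 1) + np.diag(t.conj(), -1)              # transverse hopping
    T = np.block([[E * I - H, -I], [I, np.zeros((Ly, Ly))]])
    Q, R = np.linalg.qr(T @ Q)            # QR-stabilized transfer matrix product
    lyap += np.log(np.abs(np.diag(R)))

x = np.sort(lyap)[Ly:]                    # the Ly non-negative exponents
G = np.sum(4 * np.exp(-2 * x) / (1 + np.exp(-2 * x)) ** 2)  # sum of cosh^-2(x_n)
print(f"G ~ {G:.3e} e^2/h")
```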
As a test, we have first re-studied the case in which the flux $`\varphi (k)`$ is randomly distributed between $`-\pi `$ and $`\pi `$ without spatial correlations, the situation investigated previously, as mentioned at the beginning of the paper. We find that $`G_{xx}`$ decreases monotonically with the sample size $`L`$ at all strengths of the on-site disorder, from $`W=0`$ to $`W=4`$, and extrapolates to zero in the large-sample-size limit, as shown in Fig. 1 at a fixed Fermi energy $`E_f=1`$. In the insert of Fig. 1, $`G_{xx}`$ is shown as a function of the disorder strength $`W`$ at different sample sizes, $`L=24`$, $`80`$, and $`200`$; even at $`W=0`$ the conductance decreases monotonically with increasing $`L`$, indicating that the dominant role of the random flux here is similar to that of the random potential in causing localization of electrons. The one-parameter scaling of $`G_{xx}`$ can be obtained by choosing a scaling variable $`\xi `$ at each random potential strength $`W`$. As plotted in Fig. 2, all data can then be collapsed onto a single curve of $`L/\xi `$, where $`\xi `$ is given in the insert of Fig. 2. Clearly $`\xi `$ is always finite, although it becomes extremely large in the weak-disorder limit. This is consistent with the conclusion that all electrons are localized, and it excludes the possibility of a critical region, since the error bar in our calculation is much less than the variation of the conductance itself. Notice that in the weak-disorder limit $`\xi `$ may no longer be interpreted as a localization length, which characterizes the exponential decay of the conductance with sample size in the strongly localized regime.
Now let us focus on RMF with smooth spatial correlations as defined in (2). With correlation length $`\lambda _f=5.0`$ (in units of the lattice constant) and flux strength $`h_0=1`$, $`G_{xx}`$ as a function of the disorder strength $`W`$ is computed at a given Fermi energy $`E_f=1`$, as shown in Fig. 3. Curves at different sample sizes ($`L=16`$–$`200`$) all cross at a fixed point $`W=W_c`$, which is independent of the lattice size $`L`$ within the statistical error bars. This is qualitatively different from the behavior of $`G_{xx}`$ in the spatially-uncorrelated RMF case discussed above. At $`W>W_c`$, $`G_{xx}`$ continuously decreases with increasing sample size and can be extrapolated to zero at large $`L`$, corresponding to an insulating phase. On the other hand, at $`W<W_c`$, $`G_{xx}`$ increases monotonically with lattice size, a typical metallic behavior. The insert of Fig. 3 shows the critical conductance $`G_c`$ (corresponding to $`W=W_c`$) at different Fermi energies and $`h_0`$’s. The data of $`G_{xx}`$ in Fig. 3 can be collapsed onto a two-branch curve as a function of the scaling variable $`L/\xi `$, as shown in Fig. 4, for $`W>W_c`$ and $`W<W_c`$ respectively. The insert of Fig. 4 shows the scaling variable $`\xi `$ vs. $`W`$, which diverges at the critical point $`W_c`$. In the metallic phase at $`W<W_c`$, $`G_{xx}`$ can be approximately fitted by the following form: $`G_{xx}=G_s-c_0\mathrm{exp}(-L/\xi _0)`$. Here $`G_s`$ is the saturated conductance at $`L\rightarrow \infty `$, which is non-universal and depends on the disorder strength $`W`$ as well as on the correlation length $`\lambda _f`$ of the random fluxes.
The introduction of spatial correlations in the random fluxes is crucial for such a metal-insulator transition. We also found a well-defined MIT at an even shorter correlation length, $`\lambda _f=2.0`$; but the larger $`\lambda _f`$ is, the stronger the metallic behavior becomes, with a larger saturated conductance. The previously discussed RMF in the white-noise limit may thus be only a very special case, in which the localization effect of the strong randomness of the fluxes overwrites their delocalization effect. We would like to point out that even in such an uncorrelated random flux case, delocalization may still be enhanced if one reduces the strength of the RMF. Earlier topological Chern number calculations clearly indicate a delocalization transition as the maximum strength of $`\varphi (k)`$ is reduced to around $`\pi /2`$. We have computed the conductance in this case using the present method at much larger sample sizes and indeed found a slight increase of the conductance with sample size at $`W<W_c`$, which is opposite to the strong random flux limit, where the conductance always decreases with increasing sample size (Fig. 1), although the two-branch scaling here is not as clear-cut as in the spatially-correlated RMF case shown in Fig. 4.
As mentioned above, the critical conductance $`G_c`$ varies from $`0.5e^2/h`$ to around $`2e^2/h`$ as the Fermi energy shifts from the band edge towards the band center (insert of Fig. 3). It is interesting to note that the $`G_c`$ obtained here is in the same range as the experimental data found in the recent $`B=0`$ 2D MIT system. In the following, we would like to point out a possible deeper connection between the two systems.
In a recent experiment on a p-type GaAs/AlGaAs heterostructure, the evolution of delocalized states was studied continuously from the QHE regime at strong magnetic field to the zero-field limit, where the $`B=0`$ MIT is recovered. The authors found that the critical density of the lowest extended level in the QHE regime flattens out, instead of floating up towards infinity, as the magnetic field is reduced, and can be extrapolated to the critical density of the $`B=0`$ MIT in such a material. A similar result has also been observed in Si-MOSFET samples. At first sight, it is tempting to think that the lowest extended level of the QHE somehow survives at $`B=0`$, but physically this does not make much sense, because QHE extended states carry quantized Hall conductance, known as the Chern number, whereas at $`B=0`$ the total Hall conductance must be zero in the absence of time-reversal symmetry breaking. In fact, experiments indicated that before $`B`$ vanishes, the extended levels of the QHE may already merge with a different kind of extended level (called the QHE/Insulator boundary in Ref. ) which carries the opposite sign of Hall conductance. Theoretically, it has previously been found that QHE extended states can indeed mix with a boundary extended level moving down from the high-energy side at strong disorder or in the weak magnetic field limit, which carries negative Chern number in a lattice model. When those extended states with different signs of Chern numbers mix together in the weak magnetic field limit, there can be two consequences. One is that no states eventually carry non-zero Chern number, due to the cancellation, so that all of them become localized; this is what happens in the non-interacting system. The second possibility is that individual states may still carry nonzero Chern numbers and form a delocalized region, even though the average Hall conductance still vanishes at $`B=0`$. Such a system is then physically related to the RMF system, where the delocalization mechanism is also due to the fluctuating Hall conductance. Below we give a heuristic argument for how a strong Coulomb interaction may lead to such a realization.
At strong Coulomb interaction, with $`r_s\gg 1`$ (here $`r_s`$ is the ratio of the strength of the Coulomb interaction to the Fermi energy), the 2D electron state is very close to a Wigner glass phase, where the low-lying spin degrees of freedom may be described by an effective spin Hamiltonian $`H_s`$ given in Ref. . The low-lying charge degrees of freedom may be regarded as “defects” which can hop on the “lattice”, governed by a generalized $`t`$-$`J`$-like model. Based on many studies of the $`t`$-$`J`$ model in the high-$`T_c`$ problem, especially the gauge-field description, charge carriers moving on a magnetic spin background can generally acquire fictitious fluxes. Such fluxes can usually be treated as random magnetic fields with some finite-range spatial correlations. According to the numerical results presented above, such a system can indeed have an MIT at $`B=0`$. Of course, further model study is needed in order to fully explore this connection, which is beyond the scope of the present paper.
In conclusion, we have numerically demonstrated the existence of a metal-insulator transition characterized by a two-branch scaling for 2D electrons in the presence of spatially-correlated random magnetic fields. In contrast to the usual three-dimensional metal, where the conductance scales to infinity, this 2D metal has a saturated, non-universal conductance. The range of the critical conductance is very similar to that found in the $`B=0`$ 2D metal-insulator transition. We briefly discussed a possible connection between a 2D interacting electron system at $`r_s\gg 1`$ and the spatially-correlated random-magnetic-field problem, based on both experimental and theoretical considerations.
Acknowledgments -The authors would like to thank C. S. Ting, X. G. Wen, and especially S. V. Kravchenko for stimulating and helpful discussions. The present work is supported by Texas ARP grant No. 3652707, a grant from Robert Welch foundation, and by the State of Texas through the Texas Center for Superconductivity at University of Houston.
Fig. 1 The evolution of the conductance $`G_{xx}`$ (in units of $`e^2/h`$) with sample width $`L`$ at different disorder strengths $`W`$ for the spatially-uncorrelated RMF case. The insert: $`G_{xx}`$ as a function of $`W`$ at different $`L`$’s. The Fermi energy is fixed at $`E_f=1`$.
Fig. 2. The data of $`G_{xx}`$ at different $`L`$’s and $`W`$’s all collapse onto a scaling curve as a function of $`L/\xi `$. The insert: $`\xi `$ versus $`W`$.
Fig. 3 $`G_{xx}`$ versus $`W`$ at different sample sizes ($`L=16,24,32,48,64,80,120,200`$). $`W_c`$ is the critical disorder strength. The Fermi energy is chosen at $`E_f=1`$. The insert: the critical conductance $`G_c`$ as a function of the Fermi energy $`E_f`$.
Fig. 4. Two-branch scaling curve of $`G_{xx}`$ as a single function of $`L/\xi `$ for different $`L`$’s and $`W`$’s. The insert: $`\xi `$ versus $`W`$.
# 1 Matrix String Theory
## 1 Matrix String Theory
This talk is based on Ref. , but it also includes some new results and, we hope, a better understanding of the results already contained in the original paper. This work has its origin in an old puzzle dating back to 1993 and in some recent developments. The puzzle is the following: while studying the functional integral approach to the quantization of YM2 on a torus (see Ref. ), some of us noticed that, in order to get the correct result for the partition function on the torus in the gauge where the field strength $`F`$ is diagonal (unitary gauge), we would have been obliged to neglect some contributions from twisted sectors that seem to arise naturally in that gauge. The problem was put aside and ascribed to some lack of understanding on our part. Recently, however, the same type of contributions were considered in the Matrix String theory model of Ref. , where they give rise to string configurations of different lengths. To be more specific, let us consider the Matrix String theory action:
$$S=\frac{1}{2\pi }\int d\sigma d\tau \mathrm{tr}\left((D_\mu X^M)^2+\theta ^TD/\theta +g_s^2F_{\mu \nu }^2-\frac{1}{g_s^2}[X^M,X^S]^2+\frac{1}{g_s}\theta ^T\gamma _M[X^M,\theta ]\right)$$
(1)
where the fields $`X^M`$ ($`M=1,2,\dots ,8`$) are $`N\times N`$ hermitian matrices, as are the 8 fermionic fields $`\theta _L^\alpha `$ and $`\theta _R^{\dot{\alpha }}`$. The two-dimensional world sheet is an infinite cylinder parametrized by coordinates $`(\sigma ,\tau )`$, with $`\sigma `$ ranging from $`0`$ to $`2\pi `$. This action can be obtained from ten-dimensional super Yang-Mills theory by dimensional reduction, and it features the same set of fields as the Green-Schwarz action for the type II superstring, except that here the fields are matrices. In the limit where the string coupling constant $`g_s`$ goes to zero, the fields $`X`$ and $`\theta `$ commute and can be simultaneously diagonalized. The eigenvalues $`x_i^M`$ ($`i=1,2,\dots ,N`$) of $`X^M`$ can be identified with string coordinates describing the world sheets of a gas of $`N`$ Green-Schwarz light-cone strings. The key point is that the eigenvalues $`x_i^M`$ can be interchanged as one goes around the compact dimension $`\sigma `$:
$$x^M(\sigma +2\pi )=Px^M(\sigma )P^{-1},$$
(2)
where $`P`$ is an element of $`S_N`$. In conclusion, the fields $`x_i^M(\sigma )`$ take values in the orbifold space $`S^N𝐑^8`$, with strings of different lengths associated to the cycles of $`P`$, and the corresponding Hilbert space consists of twisted sectors in correspondence with the conjugacy classes of $`S_N`$.
## 2 YM2 in the unitary gauge
The same twisted sectors appear naturally in YM2 in the unitary gauge, where the field strength $`F`$ is diagonal. Consider the partition function of YM2 in the first order formalism on a general Riemann surface $`\mathrm{\Sigma }_g`$ of genus $`g`$:
$$Z(\mathrm{\Sigma }_g,t)=\int [dA][dF]\mathrm{exp}\left\{-\frac{t}{2}\mathrm{tr}\int _{\mathrm{\Sigma }_g}d\mu F^2+\mathrm{i}\mathrm{tr}\int _{\mathrm{\Sigma }_g}f(A)F\right\},$$
(3)
where $`d\mu `$ is the volume form on $`\mathrm{\Sigma }_g`$ and $`f(A)=dA-\mathrm{i}AA`$. It is always possible, at least locally, to find a gauge transformation $`g`$ that diagonalizes $`F`$:
$$g^{-1}Fg=\mathrm{diag}(\lambda ).$$
(4)
However, $`g`$ is not unique: if $`g`$ diagonalizes $`F`$, so does any gauge transformation of the form $`gP`$, with $`P\in S_N`$: in general, there are $`N!`$ Gribov copies of the gauge-fixed field strength $`F`$. As in the case of Matrix String theory, the twisted sectors appear because, as we go around a homotopically non-trivial loop on $`\mathrm{\Sigma }_g`$, the eigenvalues can be interchanged, namely we can go from one Gribov copy to another. Consider now the case where $`\mathrm{\Sigma }_g`$ is a torus parametrized by coordinates $`(\sigma ,\tau )`$, both ranging from $`0`$ to $`2\pi `$. The twisted sectors are labelled by the pair of permutations $`P`$ and $`Q`$ associated to the two homotopically non-trivial loops, more precisely by the boundary conditions
$`\lambda _i(\tau +2\pi ,x)`$ $`=`$ $`\lambda _{P(i)}(\tau ,x),`$
$`\lambda _i(\tau ,x+2\pi )`$ $`=`$ $`\lambda _{Q(i)}(\tau ,x),`$ (5)
where consistency requires $`P`$ and $`Q`$ to commute: $`PQ=QP`$.
Pairs of commuting permutations also define $`N`$-coverings of the torus without branch points, the $`N`$ sheets of the covering at each point of the target space being labelled by one eigenvalue of $`F`$. This argument is easily generalized to higher genus Riemann surfaces, and one can say in conclusion that twisted sectors are in correspondence with the inequivalent $`N`$-coverings of $`\mathrm{\Sigma }_g`$ in the absence of branch points. However, if the genus is greater than one, the quantization in the unitary gauge leads to divergences whose regularization, according to Ref. , would amount to setting to zero all twisted sectors. The argument is as follows: the BRST invariant action in the unitary gauge consists of two terms; the first, denoted by $`S_{\mathrm{Cartan}}`$, depends only on the diagonal components of the gauge fields and is just the gauge action in the first-order formalism for the residual $`U(1)^N`$ gauge invariance:
$$S_{\mathrm{Cartan}}=\int _{\mathrm{\Sigma }_g}\sum _{i=1}^{N}\left[\frac{t}{2}\lambda _i^2d\mu -\mathrm{i}\lambda _idA^{(i)}\right],$$
(6)
where $`A^{(i)}`$ is the $`i`$-th diagonal term of the matrix form $`A`$. The second term, named $`S_{\mathrm{off}\mathrm{diag}}`$, contains the ghost and anti-ghost fields and the non–diagonal components of $`A`$:
$$S_{\mathrm{off}\mathrm{diag}}=\int _{\mathrm{\Sigma }_g}d\mu \sum _{i>j}(\lambda _i-\lambda _j)\left[\widehat{A}_0^{ij}\widehat{A}_1^{ji}-\widehat{A}_1^{ij}\widehat{A}_0^{ji}+\mathrm{i}(c^{ij}\overline{c}^{ji}+\overline{c}^{ij}c^{ji})\right],$$
(7)
where $`\widehat{A}_a^{ij}=E_a^\mu A_\mu ^{ij}`$ and $`E_a^\mu `$ denotes the inverse of the two-dimensional vierbein. $`S_{\mathrm{off}\mathrm{diag}}`$ has a fermionic symmetry which exchanges gauge and ghost fields; hence one would expect the contributions to the partition function from the ghost fields and from the non-diagonal part of the gauge fields to cancel exactly.
However, this “supersymmetry” is in general broken by an anomaly in the functional measure, due to the fact that on a curved surface the numbers of degrees of freedom of a one-form (like $`A_\mu `$) and of two zero-forms (like $`c`$ and $`\overline{c}`$) do not match exactly. In fact, the corresponding functional integral has been calculated exactly in and is given by:
$$\prod _{i>j}\int [dc^{ij}][d\overline{c}^{ij}][dA_\mu ^{ij}]\mathrm{e}^{-S_{\mathrm{off}\mathrm{diag}}}=\mathrm{exp}\left[\frac{1}{8\pi }\int _{\mathrm{\Sigma }_g}R\sum _{i>j}\mathrm{log}(\lambda _i-\lambda _j)^2\right],$$
(8)
where $`R`$ is the curvature scalar: only on flat Riemann surfaces, like the torus or the cylinder, is the symmetry preserved at the quantum level.
Following , one finds, by gauge-fixing the residual U$`(1)^N`$ invariance, that in the end only configurations where the eigenvalues $`\lambda _i`$ are constant and equal to integers $`n_i`$ contribute. The r.h.s. of (8) then gives the standard dependence of the YM2 partition function on the genus, while the dependence on $`t`$ is given by the U$`(1)^N`$ action $`S_{\mathrm{Cartan}}`$, thus reproducing the well-known partition function of YM2 on a Riemann surface:
$$Z(\mathrm{\Sigma }_g,t)=\sum _{\{n_i\}}\frac{1}{\prod _{i>j}(n_i-n_j)^{2g-2}}\mathrm{e}^{-2\pi ^2t\sum _in_i^2}.$$
(9)
However, in the present derivation nothing forbids $`n_i=n_j`$ for $`i\ne j`$, and such terms, which we shall call non-regular terms following , are divergent for $`g>1`$. Notice that non-regular terms always appear in the twisted sectors, where at least two sheets of the $`N`$-covering are connected by going around some non-contractible loop, so that the corresponding eigenvalues are given by the same integer. The regularization proposed in is done by adding mass terms for the non-diagonal part of the gauge fields; in this way the non-regular terms vanish, due to the ghost contribution, which is proportional to $`(n_i-n_j)^2`$, while the contribution from the gauge fields is now finite and proportional to $`(n_i-n_j-m_{ij})^{-2g}`$. The limit $`m_{ij}\rightarrow 0`$ is performed at the end. This regularization scheme, while preserving the U$`(1)^N`$ gauge symmetry, violates the original BRST invariance as well as the fermionic symmetry between gauge fields and ghosts discussed above. A fully consistent treatment of the non-regular terms in the unitary gauge for $`g>1`$ is indeed lacking, and in our opinion this is a problem worth looking into in the future. The case $`g=1`$ is special: non-regular terms are finite, the fermionic symmetry is anomaly-free, and hence there is no need of regularization. Therefore we shall write the partition function as a sum over all sectors labelled by commuting pairs of permutations $`(P,Q)`$, thus including the non-regular terms. We will find that the result does not coincide with the standard partition function , which can be reproduced only by limiting the sum to the subset of sectors of the type $`(P,1)`$. This choice, however, is not invariant under modular transformations on the torus.
## 3 Free energy and partition function
The twisted sectors can also be labelled, as discussed earlier on, by the $`N`$-fold covers without branch points of the torus. In order to sum over all sectors it is convenient on one hand to work in the grand canonical formalism, in which $`N`$ is not fixed, and introduce the grand-canonical partition function $`Z(t,q)`$ and the corresponding free energy $`F(t,q)`$:
$$Z(t,q)=\mathrm{e}^{F(t,q)}=\underset{N}{}Z_N(t)q^N.$$
(10)
On the other hand, it is convenient to work directly with the free energy $`F(t,q)`$, which receives contributions only from the connected coverings. The computation of the free energy entails two aspects: an “entropic” one, i.e. the counting of the inequivalent connected coverings, and the determination of the Boltzmann factor that the functional integral (3) implies for each covering. The counting of $`N`$-coverings of the torus without branch points, namely of the $`N`$-fold maps of a world-sheet torus into the target torus, has already been discussed in the literature (see for instance ), and its free energy is given by
$$F_{\mathrm{cov}}=\sum _N\sum _{r|N}\frac{1}{r}q^N=-\sum _{k=1}^{\infty }\mathrm{log}(1-q^k)$$
(11)
where $`r|N`$ means that the sum is extended over the divisors $`r`$ of $`N`$. The coefficient of $`q^N`$ in Eq. (11), namely $`\sum _{r|N}\frac{1}{r}`$, enumerates the connected $`N`$-coverings of the torus. This result can be derived as follows: let the periods of the target-space torus be $`\vec{\pi }_1^{\mathrm{tar}}=2\pi `$ and $`\vec{\pi }_2^{\mathrm{tar}}=2\pi \mathrm{i}`$; the most general connected $`N`$-covering is then a torus of area $`4\pi ^2N`$ whose periods are given by
$$\vec{\pi }_1^{\mathrm{ws}}=2\pi k;\qquad \vec{\pi }_2^{\mathrm{ws}}=2\pi s+2\pi \mathrm{i}r,$$
(12)
with $`kr=N`$ and $`s=0,1,\dots ,k-1`$.
An example with $`N=12`$, $`k=4`$ and $`s=3`$ is given in Figure 1. There are $`\sum _{k|N}k=N\sum _{r|N}1/r`$ choices of the integers $`k`$, $`r`$ and $`s`$ that satisfy these conditions, but one has to divide by the symmetry factor $`N`$ to account for the fact that coverings corresponding to different labelings of the sheets (i.e. of the eigenvalues) have to be identified. Notice that the world-sheet torus has modular parameter $`\tau =s/k+\mathrm{i}r/k=s/k+\mathrm{i}N/k^2`$ and that summing over these tori with the weight $`1/r=k/N`$ is the discrete version of integrating over the modular parameter $`\tau `$ with the usual modular-invariant measure. In fact, from $`\mathrm{Im}\tau =N/k^2`$ at fixed $`N`$ we have $`\int d\mathrm{Im}\tau /(\mathrm{Im}\tau )^2\rightarrow \sum _{k|N}\frac{k}{N}`$.
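The counting is simple to verify explicitly; the short check below (an illustration using exact rational arithmetic) compares the $`q^N`$ coefficient of $`-\sum _k\mathrm{log}(1-q^k)`$ with $`\sum _{r|N}1/r`$ and with the $`(k,r,s)`$ triple count divided by the symmetry factor $`N`$.

```python
from fractions import Fraction

# q^N coefficient of -sum_k log(1 - q^k), using -log(1-x) = sum_m x^m / m
Nmax = 12
coeff = [Fraction(0)] * (Nmax + 1)
for k in range(1, Nmax + 1):
    for m in range(1, Nmax // k + 1):
        coeff[k * m] += Fraction(1, m)

for N in range(1, Nmax + 1):
    divisors = [d for d in range(1, N + 1) if N % d == 0]
    direct = sum(Fraction(1, r) for r in divisors)   # sum_{r|N} 1/r
    triples = sum(divisors)                          # choices of (k, r, s): sum_{k|N} k
    assert coeff[N] == direct == Fraction(triples, N)
print("Eq. (11) coefficients verified up to N =", Nmax)
```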
It is easy to see that, in terms of the field $`\lambda `$, the partition function (6) becomes the partition function of a U$`(1)`$ theory on the world-sheet torus. This partition function depends only on the area; it is well known to be $`\theta _3(0|2\pi \mathrm{i}Nt)=\sum _{n=-\infty }^{\infty }\mathrm{exp}(-2\pi ^2Ntn^2)`$. This is the Boltzmann weight to be associated to the connected coverings of degree $`N`$.
We are now in a position to write down the free energy, and thus automatically the grand-canonical partition function:
$`F^\pm (t,q)`$ $`=`$ $`\pm \sum _N\sum _{r|N}\frac{1}{r}\sum _{n=-\infty }^{\infty }\mathrm{e}^{-2\pi ^2tNn^2}q^N=\mp \sum _{n=-\infty }^{\infty }\sum _{k=1}^{\infty }\mathrm{log}(1-\mathrm{e}^{-2\pi ^2tkn^2}q^k),`$
$`Z^\pm (t,q)`$ $`=`$ $`\prod _{n=-\infty }^{\infty }\prod _{k=1}^{\infty }(1-\mathrm{e}^{-2\pi ^2ktn^2}q^k)^{\mp 1}.`$ (13)
We have allowed a sign ambiguity in front of the free energy $`F(t,q)`$, which corresponds to different choices of the a priori undetermined weights with which the contributions from the different twisted sectors are added. It is clear from (13) that the plus sign leads to a partition function $`Z^+`$ of “bosonic” type (a “state” with fixed $`k`$ and $`n`$ may appear any number of times), while $`Z^{-}`$ is “fermionic” (the exclusion principle holds).
If we consider only the contribution of a subset of connected coverings, namely those with $`k=1`$, $`r=N`$, that are associated to permutations of the type $`(P,1)`$, we obtain the following partition function:
$$𝒵^\pm (t,q)=\prod _{n=-\infty }^{\infty }(1-\mathrm{e}^{-2\pi ^2tn^2}q)^{\mp 1}.$$
(14)
These expressions are known in the literature. $`𝒵^{-}(t,q)`$ is the grand-canonical expression that encodes, as shown in , the standard partition function for the U$`(N)`$ theory on the torus. (The coefficient of $`q^N`$ in the power series expansion of $`𝒵^{-}(t,q)`$ coincides, up to a sign and an overall normalization factor, with the standard partition function of U$`(N)`$ only for odd values of $`N`$. For even values of $`N`$ the integers $`n_i`$ are replaced by half-integers in the standard YM partition function. This half-integer shift, however, can be re-absorbed by adding to the action a term proportional to $`\mathrm{tr}F`$, which is entirely in the U$`(1)`$ factor of U$`(N)`$; the expansion of $`𝒵^{-}(t,q)`$ for even $`N`$ gives such a modified theory. Notice that in the case of SU$`(N)`$ the problem does not arise, and the sum over the $`(P,1)`$ sectors correctly reproduces the standard result for all values of $`N`$.) $`𝒵^+`$ reproduces the partition function obtained in by quantizing on the algebra rather than on the group. By comparing Eqs. (13) and (14) we find
$$Z^\pm (t;q)=\prod _{k=1}^{\infty }𝒵^\pm (kt;q^k).$$
(15)
An expansion in powers of $`q`$ of both sides of Eq. (15) leads, for the fermionic case, to the following relation:
$$(-1)^NZ_N^{-}(t)=\sum _{\{r_k\}}\delta \left(\sum _{k=1}^{N}kr_k-N\right)\prod _{k=1}^{N}(-1)^{r_k}𝒵_{r_k}^{-}(kt),$$
(16)
where $`Z_N^{-}(t)`$ is the U$`(N)`$ partition function including all sectors and $`𝒵_{r_k}^{-}(kt)`$ is the standard U$`(r_k)`$ partition function (see, however, the remark following Eq. (14)). As an example, we give the partition function for the case $`N=3`$:
$$Z_3^{-}(t)=𝒵_3^{-}(t)-𝒵_1^{-}(t)𝒵_1^{-}(2t)+𝒵_1^{-}(3t).$$
(17)
The extra terms on the r.h.s. of (17) are related to the states with $`k>1`$ and are not present in the conventional approach.
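Relations (15)-(17) can be checked by expanding the products to a finite order in $`q`$. The sketch below does so numerically for an illustrative value of $`t`$ (the truncation in $`n`$ is a convergence assumption); with the identification of $`𝒵_N^{-}(t)`$ with $`(-1)^N`$ times the $`q^N`$ coefficient of $`𝒵^{-}(t,q)`$, the final comparison is exactly Eq. (17).

```python
import numpy as np

t, nmax, order = 0.05, 8, 3

def poly_mul(a, b):
    """Multiply two polynomials in q, truncated at q^order."""
    out = np.zeros(order + 1)
    for i in range(order + 1):
        for j in range(order + 1 - i):
            out[i + j] += a[i] * b[j]
    return out

def zcal(tt, step):
    """Coefficients of Zcal^-(tt, q^step) = prod_n (1 - e^{-2 pi^2 tt n^2} q^step)."""
    p = np.zeros(order + 1); p[0] = 1.0
    for n in range(-nmax, nmax + 1):
        f = np.zeros(order + 1); f[0] = 1.0
        if step <= order:
            f[step] = -np.exp(-2 * np.pi**2 * tt * n * n)
        p = poly_mul(p, f)
    return p

Z = np.zeros(order + 1); Z[0] = 1.0        # Z^-(t,q) = prod_k Zcal^-(kt, q^k), Eq. (15)
for k in range(1, order + 1):
    Z = poly_mul(Z, zcal(k * t, k))

c = lambda N, tt: zcal(tt, 1)[N]           # q^N coefficient of Zcal^-(tt, q)
# with Zcal_N^- = (-1)^N c(N, .), the line below is (-1)^3 Z_3^-(t), i.e. Eq. (17)
lhs, rhs = Z[3], c(3, t) + c(1, t) * c(1, 2 * t) + c(1, 3 * t)
print(lhs, rhs, np.isclose(lhs, rhs))
```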
### 3.1 Modular invariance
One of the new features of our approach is that the ensemble of twisted sectors that contribute to the partition function is invariant under modular transformations on the torus, while the subset that reproduces the conventional result is not: for instance, a Dehn twist maps the sector labelled by $`(1,P)`$ into one labelled by $`(P,P)`$. As a result the conventional formulation is not modular invariant. This does not show up in the partition function, which depends only on the area, but it should appear at the level of correlation functions. For instance, correlation functions of Wilson loops that are mapped into each other by modular transformations on the torus (like the two sets depicted in Figure 2) should coincide in our approach but not in the conventional formulation. Work is in progress to verify this point.
### 3.2 Quantization on the cylinder
The partition function on a torus is often calculated by first considering a cylinder with fixed holonomies at the edges and then sewing the two ends together. This involves identifying the two end holonomies up to a gauge transformation, namely identifying their eigenvalues up to a permutation $`P`$; hence a sum over $`P`$ appears in the final result. It is clear that in this way only the $`(P,1)`$ sectors are taken into account. In order to consider also sectors corresponding to non trivial commuting pairs $`(P,Q)`$, with $`Q`$ associated to the compact dimension of the cylinder, one has somehow to generalize the possible boundary conditions at the edges of the cylinder, which in the conventional approach are just the U$`(N)`$ holonomies. In order to do that, let us choose the unitary gauge on the cylinder as in Eq. (4), and consider a non trivial sector labelled by a permutation $`Q`$ that defines, according to the second of Eq.s (5), the boundary conditions for the eigenvalues $`\lambda _i(\tau ,x)`$ as we go round the compact dimension parametrized by the coordinate $`x`$. We refer to the original paper for the details of the calculation and give here only the main result. This can be summarized by saying that for non trivial $`Q`$ there are as many independent invariant angles in the holonomies as there are cycles in $`Q`$. More precisely, supposing that $`Q`$ has $`r_k`$ cycles of order $`k`$, the invariant angles $`\theta _i`$ of a Wilson loop winding round the compact dimension have the following structure:
$$\theta _{k,\alpha ,n}=\theta _{k,\alpha }+\frac{2\pi \mathrm{i}n}{k},$$
(18)
where we have made the replacement $`i\to (k,\alpha ,n)`$ with $`\alpha =1,\dots ,r_k`$ and $`n=0,1,\dots ,k-1`$ to denote that $`i`$ is the $`n`$-th element of the $`\alpha `$-th cycle of order $`k`$. The independent invariant angles are just the $`\theta _{k,\alpha }`$, and are associated to the cycles, the other eigenvalues within each cycle being spaced like the $`k`$-th roots of 1. When sewing the cylinder the invariant angles are identified up to a permutation $`P`$ that preserves the cycle structure, namely
$$(k,\alpha ,n)\stackrel{P}{\longrightarrow }(k,\pi _k(\alpha ),n+s(k,\alpha )),$$
(19)
where $`\pi _k\in S_{r_k}`$ is a permutation of the $`r_k`$ cycles of order $`k`$ and $`s(k,\alpha )`$ is an integer shift $`\mathrm{mod}k`$. Eq. (19) is equivalent to the statement that $`P`$ commutes with $`Q`$, and hence it reproduces the by now familiar pattern of the twisted sectors. From the previous discussion it is clear that the end states on the cylinder are not parametrized by the U$`(N)`$ holonomies but rather by the holonomies of $`\mathrm{U}(r_1)\times \mathrm{U}(r_2)\times \cdots `$. It is also clear that by considering just the U$`(N)`$ holonomies one is automatically projecting on the trivial sector $`Q=1`$. Although this projection appears to be the most natural thing to do in the framework of gauge theories, and it is so far also the only approach that we know how to implement on a lattice, it introduces an asymmetry between the two generators of the torus and ultimately breaks the modular invariance in the sense mentioned above.
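The pattern (19) can be verified by brute force for small $`N`$. A minimal sketch (Python; the choice $`Q=(0\,1)(2\,3)`$ in $`S_4`$, i.e. $`r_2=2`$ cycles of order $`k=2`$, is an illustrative example) checks that the permutations commuting with $`Q`$ map cycles of $`Q`$ onto cycles of $`Q`$, and that their number equals the centralizer order $`\prod _kr_k!\,k^{r_k}`$:

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations written as tuples on {0, ..., n-1}
    return tuple(p[q[i]] for i in range(len(p)))

# Illustrative choice: Q = (0 1)(2 3), i.e. r_2 = 2 cycles of order k = 2
Q = (1, 0, 3, 2)
commuting = [P for P in permutations(range(4)) if compose(P, Q) == compose(Q, P)]

# Centralizer order for cycle type {r_k}: prod_k r_k! * k^{r_k}; here 2! * 2^2 = 8
assert len(commuting) == factorial(2) * 2 ** 2
blocks = [{0, 1}, {2, 3}]                 # the cycles of Q
for P in commuting:
    # each commuting P permutes the cycles of equal order (and shifts within them)
    assert all({P[i] for i in b} in blocks for b in blocks)
```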
## 4 YM2 as a Matrix String Theory.
The grand canonical partition function $`Z^\pm (t,q)`$ given in Eq. (3) describes in the large $`t`$ limit the coverings of the torus. In fact in that limit the Boltzmann factor given by the partition function of the U$`(1)`$ gauge theory tends to $`1`$ and we are left with the partition function that simply counts the homotopically distinct coverings of the $`g=1`$ target space by a $`G=1`$ world-sheet: $`Z^\pm (t,q)\stackrel{t\to \infty }{\longrightarrow }Z_{\mathrm{cov}}(q)`$. So in this limit the twisted sectors define a string theory, exactly in the same way as they do in Matrix string theory. It can also be argued, and in the fermionic case it is obvious from the exclusion principle, that in this limit large values of $`k`$, namely long strings, are relevant. Quite the opposite happens in the limit $`t\to 0`$: one can see from the Poisson re-summation formula that in this limit the Boltzmann weight behaves like $`1/\sqrt{t}`$, so the leading contributions to the partition function come from coverings that maximize the number of disconnected world sheets. So only states with $`k=1`$ survive in the $`t\to 0`$ limit, and the theory reduces to the conventional one without twisted sectors. It is possible, but not yet proven, that a phase transition separates the two phases dominated respectively by long and short strings. The small $`t`$ region is also the relevant one for the large $`N`$ limit of YM2 studied by Gross and Taylor in a series of papers . In fact the limit is taken, following the original idea of ’t Hooft, by setting $`t=\stackrel{~}{t}/N`$ and keeping $`\stackrel{~}{t}`$ fixed. Gross and Taylor proved that the partition function of $`U(N)`$ Yang-Mills theory on a two dimensional Riemann surface $`M_g`$ of genus $`g`$ counts the homotopically distinct maps from a world-sheet $`W_G`$ of genus $`G`$ to $`M_g`$. In particular if the target space is a torus and one considers only the leading term in the large $`N`$ expansion, which means considering only world sheets of genus $`1`$, one finds the partition function
$$Z_{GT}=\prod _{k=1}^{\infty }(1-\mathrm{e}^{-k\stackrel{~}{t}})^{-1}$$
(20)
This is the same string partition function that is found from the twisted sectors in the large $`t`$ limit, with $`q`$ replaced by $`\mathrm{e}^{-\stackrel{~}{t}}`$. This coincidence is suggestive of some kind of underlying duality in the theory. However, for this duality to exist beyond the rather trivial case of $`W_{G=1}\to M_{g=1}`$ maps, we should be able to extend the twisted sectors introduced in to include coverings with branch points, which correspond to higher genus world sheets, namely to the possibility for strings to split and join. These would be dual to the sub-leading terms in the $`1/N`$ expansion of . However branch points correspond to points where the curvature of the world sheet becomes a delta function, leading through Eq. (8) to terms coupling different sheets of the coverings. The problem here is of the same nature as the one that is encountered if one quantizes YM2 in the unitary gauge on a surface with $`g>1`$, namely the appearance of logarithmic interactions between different eigenvalues, which eventually lead to divergences. A better understanding of this point is then crucial also for a consistent formulation of YM2 as a matrix string theory on a torus; it is not yet clear to us whether such a formulation exists at the level of string interactions, or whether this can be achieved only by embedding YM2 in the more general framework of the Matrix String Theory given by (1), where it is known that string interactions can be consistently introduced . In both cases the analysis developed in is relevant. In fact gauge degrees of freedom are crucial in Matrix String theory for the description of string interactions as well as for the relation of non-trivial fluxes with D-brane charges . Shortly after our paper appeared on the hep-th archive, Kostov and Vanhove calculated the partition function of the Matrix String Theory of Eq. (1). They took advantage of the fact that the contributions to the partition function of the “matter fields” $`X^M`$ and $`\theta `$ cancel due to supersymmetry, so that in the end the only contributions come from the different topological sectors of YM2. In fact their result coincides with ours, except that they obtain our free energy rather than the partition function, because the structure of the fermionic zero-modes in the matter sector effectively kills the contributions from disconnected world sheets. In the dimensional reduction of this free energy to zero dimensions is shown to correctly reproduce the partition function for the IKKT model , namely $`Z_{\mathrm{IKKT}}\sim \sum _{r|N}\frac{1}{r^2}`$. The latter is related to the moduli space of D-instantons or, by T-duality, to the counting of bound states of D0-branes.
# Is it always possible to discover supersymmetry broken at TeV scale at LHC?
## Abstract
We show that the search for supersymmetry at LHC will be very problematic for the particular case of nonuniversal relations among gaugino masses. Namely, if the gluino, first chargino and LSP masses are close to each other it would be very difficult to discover supersymmetry even if sparticle masses are lighter than 1 TeV.
Supersymmetric electroweak models offer the simplest solution of the gauge hierarchy problem . In real life supersymmetry has to be broken, and the masses of superparticles must be lighter than $`O(1)`$ TeV . The scientific program at the Large Hadron Collider (LHC) , which will be the largest particle-accelerator complex ever built in the world, has many goals. Among them the discovery of supersymmetry broken at the TeV scale, with sparticle masses less than $`O(1)`$ TeV, is the most important one. For the supersymmetric extension of the Weinberg-Salam model, soft supersymmetry breaking terms usually consist of gaugino mass terms , squark and slepton masses and trilinear soft scalar terms. In general, soft supersymmetry breaking terms are arbitrary. Within the minimal SUGRA-MSSM framework it would be possible to discover supersymmetry with squark and gluino masses up to (2 - 2.5) TeV . The standard signatures proposed for the search for squarks and gluino at LHC are
$$jets+E_{miss}^T,$$
(1)
$$jets+(n\ge 1)leptons+E_{miss}^T$$
(2)
In the SUGRA-MSSM framework all sparticle masses are determined mainly by two parameters: $`m_0`$ (the common squark and slepton mass at the GUT scale) and $`m_{\frac{1}{2}}`$ (the common gaugino mass at the GUT scale). However, in general, for many reasons we can expect that the real sparticle masses can differ in a drastic way from the sparticle mass pattern of the SUGRA-MSSM model . Therefore, it is more appropriate to investigate the LHC SUSY discovery potential in a model-independent way. Some preliminary results in this direction have been obtained in refs. . In particular it is very important to answer the question: is it always possible to discover supersymmetry broken at the TeV scale at LHC for the case of arbitrary sparticle masses?
In this paper we show that the search for supersymmetry at LHC will be very problematic for the particular case of nonuniversal relations among gaugino masses. Namely, for the case when the gluino, first chargino and LSP masses are close to each other it would be very difficult or even impossible to discover supersymmetry at LHC even if sparticle masses are lighter than 1 TeV. We assume that R-parity is conserved.
To be concrete, consider the case when the gluino, first chargino, second neutralino, LSP (lightest supersymmetric particle $`\stackrel{~}{\chi }_1^0`$), squark and slepton masses are $`m_{\stackrel{~}{g}}=500`$ GeV, $`m_{\stackrel{~}{\chi }_1^\pm }=m_{\stackrel{~}{\chi }_2^0}=480`$ GeV, $`m_{\stackrel{~}{\chi }_1^0}=450`$ GeV, $`m_{\stackrel{~}{q}}=m_{\stackrel{~}{l}}=600`$ GeV. For such sparticle masses the search for direct slepton pair and gaugino $`\stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_2^0`$ production is hopeless at LHC due to small cross sections. So we can expect to detect only the production of strongly interacting particles (squarks, gluino) using signatures (1,2). Consider gluino pair production $`pp\to \stackrel{~}{g}\stackrel{~}{g}+\dots `$ . The gluino decays $`\stackrel{~}{g}\to \overline{q}q\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{g}\to \overline{q}q^{^{}}\stackrel{~}{\chi }_1^\pm `$ are suppressed in comparison with the gluino decay into a quark-antiquark pair and the LSP, $`\stackrel{~}{g}\to \overline{q}q\stackrel{~}{\chi }_1^0`$. Hence the signature (2), which arises as a result of the leptonic decays $`\stackrel{~}{\chi }_2^0\to l^+l^{-}\stackrel{~}{\chi }_1^0`$ and $`\stackrel{~}{\chi }_1^\pm \to l^\pm \nu \stackrel{~}{\chi }_1^0`$, is useless for the search for supersymmetry at LHC. The gluino decay mode $`\stackrel{~}{g}\to \overline{q}q\stackrel{~}{\chi }_1^0`$ leads to the signature (1). However, for such values of the gluino and LSP masses the LSP is soft in the gluino centre-of-mass frame. In the parton model gluinos are pair-produced with a small total transverse momentum $`p_T`$; therefore in our case the average missing transverse energy $`E_{miss}^T`$ is rather small and is determined by the mass difference $`m_{\stackrel{~}{g}}-m_{\stackrel{~}{\chi }_1^0}=50`$ GeV. For such small values of $`E_{miss}^T`$ the SM background is much bigger than the signal, which prevents the use of the signature (1) for gluino detection. For squark pair production $`pp\to \stackrel{~}{q}\stackrel{~}{q}^{^{}}+\dots `$ the main squark decay mode is $`\stackrel{~}{q}\to \stackrel{~}{g}q`$ with a soft gluino. Again in this case the signature (2) is not useful. For the signature (1) the typical $`E_{miss}^T`$ is less than 100 GeV, which prevents SUSY discovery due to the huge SM background.
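The softness of the missing-energy signature can be quantified with a standard endpoint estimate: in the three-body decay $`\stackrel{~}{g}\to \overline{q}q\stackrel{~}{\chi }_1^0`$ the LSP momentum in the gluino rest frame is largest when the $`\overline{q}q`$ system recoils with zero invariant mass, giving $`p_{max}=(m_{\stackrel{~}{g}}^2-m_{\stackrel{~}{\chi }_1^0}^2)/2m_{\stackrel{~}{g}}`$. A minimal sketch (Python; the masses are those of the first example above):

```python
def lsp_p_max(m_gluino, m_lsp):
    # Endpoint of the LSP momentum spectrum in the gluino rest frame for
    # gluino -> q qbar LSP, reached when the q qbar system is massless:
    # p_max = (m_gluino^2 - m_lsp^2) / (2 m_gluino)
    return (m_gluino ** 2 - m_lsp ** 2) / (2.0 * m_gluino)

print(lsp_p_max(500.0, 450.0))  # 47.5 GeV: the LSP stays soft, hence small E_T^miss
```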
We have made simulations at the particle level with parametrised detector responses based on a detailed detector simulation. We have made our concrete calculations for the CMS detector . The CMS detector simulation program CMSJET 3.2 has been used. It incorporates the full electromagnetic (ECAL) and hadronic (HCAL) calorimeter granularity, and includes the main calorimeter system cracks in rapidity and azimuth. The energy resolutions for muons, electrons (photons), hadrons and jets are parametrised. Transverse and longitudinal shower profiles are also included through appropriate parametrisations. All SUSY processes have been generated with ISAJET 7.32, ISASUSY . In our paper we have used the results of the background simulations of refs. . The main result of our simulations is that the SM background dominates for both signatures (1) and (2) and prevents SUSY observation.
For the second example, with $`m_{\stackrel{~}{g}}=800`$ GeV, $`m_{\stackrel{~}{\chi }_2^0}=m_{\stackrel{~}{\chi }_1^\pm }=690`$ GeV, $`m_{\stackrel{~}{\chi }_1^0}=650`$ GeV, $`m_{\stackrel{~}{q}}=m_{\stackrel{~}{l}}=700`$ GeV, the main gluino and squark decay modes are $`\stackrel{~}{g}\to \overline{q}\stackrel{~}{q}`$, $`\stackrel{~}{q}\to q\stackrel{~}{\chi }_1^0`$. Again in this case the SM background dominates for signatures (1,2).
For the third example, with $`m_{\stackrel{~}{g}}=700`$ GeV, $`m_{\stackrel{~}{\chi }_2^0}=m_{\stackrel{~}{\chi }_1^\pm }=750`$ GeV, $`m_{\stackrel{~}{\chi }_1^0}=650`$ GeV, $`m_{\stackrel{~}{q}}=m_{\stackrel{~}{l}}=670`$ GeV, the decays of squarks and gluino into the first chargino and second neutralino are forbidden by kinematics and the main gluino and squark decay modes are $`\stackrel{~}{g}\to \overline{q}\stackrel{~}{q}`$, $`\stackrel{~}{q}\to q\stackrel{~}{\chi }_1^0`$. Again in this case the SM background dominates for the signature (1).
Let us state the main results of this paper: the standard signatures (1,2) used for the search for supersymmetry at LHC do not always allow one to discover supersymmetry, even if sparticle masses are lighter than 1 TeV. Namely, the search for supersymmetry will be very problematic for the particular case when the gluino, first chargino and LSP masses are close to each other. Probably an $`e^+e^{-}`$ Next Linear Collider with total energy $`E_{cm}=2`$ TeV will have better prospects of discovering supersymmetry with such sparticle masses, through the measurement of the cross section of $`e^+e^{-}`$ annihilation into hadrons.
I am indebted to the collaborators of INR Theoretical Division for useful discussions and comments.
# Hard X-ray lags in GRO J1719-24
## 1 Introduction
The soft X-ray transient GRO J1719$`-`$24 (= GRS 1716$`-`$249, Nova Oph 1993) was detected simultaneously with BATSE on board the Compton Gamma Ray Observatory, and the SIGMA telescope on GRANAT, on 1993 September 25 (Harmon et al. 1993a; Ballet et al. 1993). The source reached a maximum X-ray flux of $`\sim `$ 1.4 Crab (20–100 keV) within five days after first detection, and was remarkable for the stability of its hard X-ray emission on a time scale of days; its hard X-ray flux declined at a rate of $`\sim `$ 0.3 $`\pm `$ 0.05% per day (Harmon et al. 1993b). GRO J1719$`-`$24 was detected above the BATSE 3$`\sigma `$ one-day detection threshold of 0.1 Crab (20–100 keV) for $`\sim `$ 80 days following the start of the X-ray outburst (Harmon & Paciesas 1993). A time-series analysis of the hard X-ray variability of GRO J1719$`-`$24, observed with BATSE in the 20–100 keV energy band, was presented by van der Hooft et al. (1996). They analyzed the entire 80 day X-ray outburst of GRO J1719$`-`$24 in the frequency interval 0.002–0.488 Hz. The power density spectra (PDSs) of GRO J1719$`-`$24 show a significant peak, indicative of quasi-periodic oscillations (QPOs) in the time series, whose centroid frequency increases from $`\sim `$ 0.04 Hz at the start of the outburst, to $`\sim `$ 0.3 Hz at the end. Van der Hooft et al. (1996) discovered that the evolution in time of the PDSs of GRO J1719$`-`$24 can be described by a single characteristic profile. The evolution of the PDSs can be described as a gradual stretching by a factor $`\sim `$ 7.5 in frequency of the power spectrum, accompanied by a decrease of the power level by the same factor, such that the integrated power in a scaled frequency interval remains constant. Therefore, it is likely that the X-ray variability during the entire outburst of GRO J1719$`-`$24 can be described by a single process, the characteristic time scale of which becomes shorter, but the fractional amplitude of which is invariant. This may be related to the strong anticorrelation of the break frequency and power density at the break observed in the PDSs of several black-hole candidates (Belloni & Hasinger 1990). Méndez & van der Klis (1997) suggest a correlation with mass accretion rate may exist, i.e., the break frequency increases (and the power density decreases) with increasing mass accretion rate. Two average PDSs (20–100 keV) corresponding to days 13–15 and 51–60 of the X-ray outburst of GRO J1719$`-`$24 are displayed in Figure 1.
GRO J1719$`-`$24 remained undetectable until 1994 September, when several X-ray flares were detected with both SIGMA and BATSE (Churazov et al. 1994; Harmon et al. 1994). Subsequent to strong X-ray flares in 1995 February (Borozdin, Alexandrovich & Sunyaev 1995), a rapidly decaying radio flare was detected, followed by recurrent radio flaring activity (Hjellming et al. 1996). The relation between X-ray and radio events is similar to that observed in the superluminal radio-jet sources GRO J1655$`-`$40 and GRS 1915$`+`$105 (Hjellming et al. 1996; Foster et al. 1996): radio emission follows the peak, or onset to decay of X-ray flares observed with BATSE in the 20–100 keV energy band, by intervals ranging from a few to 20 days (Hjellming et al. 1996). GRO J1655$`-`$40 is a galactic black-hole candidate (BHC) with a dynamically determined mass of 7.0 $`\pm `$ 0.7 $`\mathrm{M}_{\odot }`$ (Orosz & Bailyn 1997; van der Hooft et al. 1998a).
A possible optical counterpart to the X-ray source was discovered by Della Valle, Mirabel & Rodriquez (1994), the photometric and spectroscopic properties of which suggest that GRO J1719$`-`$24 is a low-mass X-ray binary. The optical brightness of GRO J1719$`-`$24, measured during three weeks after first X-ray detection, is modulated at a period of 0.6127 days, thought to be the superhump period (Masetti et al. 1996). Quiescent (optical) photometry and/or spectroscopy of GRO J1719$`-`$24 has not been reported. The source is considered a black-hole candidate on the basis of its X-ray and radio similarities to dynamically proven BHCs.
We have investigated the phase (or, equivalently, time) lags in the hard X-ray variability of GRO J1719$`-`$24 during its 1993 X-ray outburst. We calculated lags between the 20–50 and 50–100 keV energy bands of the 1.024 sec time resolution BATSE data and compare our results with those obtained in recent similar studies of the black-hole candidates Cyg X-1 (Cui et al. 1997; Crary et al. 1998) and GRO J0422$`+`$32 (Grove et al. 1997; van der Hooft et al. 1999).
## 2 Analysis
A time-series analysis of the hard X-ray (20–100 keV) data of the entire 1993 outburst of GRO J1719$`-`$24 was presented by van der Hooft et al. (1996). These data were obtained in two broad energy channels (20–50 and 50–100 keV) with the large-area detectors of BATSE, collected during 80 days following first X-ray detection on 1993 September 25. Fast Fourier Transforms were created for 524.288 sec long time intervals (512 time bins of 1.024 sec each); the corresponding frequency interval covered 0.002–0.488 Hz. The average number of uninterrupted 512 bin segments available with the source unocculted by the Earth was approximately 35 per day. See van der Hooft et al. (1996) for a detailed description of the reduction and analysis of these data.
The complex Fourier cross spectra were created from the Fourier amplitudes in a way identical to that described by van der Hooft et al. (1999). These cross spectra were averaged daily. Errors on the real and imaginary parts of the daily averaged cross spectra were calculated from the respective sample variances, and formally propagated when computing the phase and time lags. The phase lags, $`\varphi _j`$, as a function of frequency were obtained from the cross spectra via $`\varphi _j=\mathrm{arctan}[\mathrm{Im}(C_j^{12})/\mathrm{Re}(C_j^{12})]`$, and the corresponding time lag is $`\tau _j=\varphi _j/2\pi \nu _j`$, with $`\nu _j`$ the frequency in Hz of the $`j`$-th frequency bin. With these definitions, lags in the hard X-ray variations (50–100 keV) with respect to the soft X-ray variations (20–50 keV) appear as positive angles.
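For one uninterrupted segment, the definitions above translate directly into code. A minimal sketch (Python with numpy; the synthetic Poisson light curves are placeholders for the actual background-subtracted BATSE rates):

```python
import numpy as np

def phase_and_time_lags(soft, hard, dt=1.024):
    """Phase and time lags of the hard band w.r.t. the soft band for one
    uninterrupted segment; positive values mean the hard X-rays lag."""
    S = np.fft.rfft(soft)[1:]              # drop the zero-frequency bin
    H = np.fft.rfft(hard)[1:]
    C = S * np.conj(H)                     # cross spectrum C^{12}, sign chosen
                                           # so a delayed hard band gives phi > 0
    nu = np.fft.rfftfreq(len(soft), d=dt)[1:]
    phi = np.arctan2(C.imag, C.real)       # phase lag phi_j
    return nu, phi, phi / (2.0 * np.pi * nu)

# Sanity check on synthetic data: hard band delayed by 2 bins (2.048 s)
rng = np.random.default_rng(0)
soft = rng.poisson(1000.0, 512).astype(float)
hard = np.roll(soft, 2)
nu, phi, tau = phase_and_time_lags(soft, hard)
# tau is ~2.048 s at low frequencies (phase wrapping sets in at higher nu);
# in practice the complex cross spectra of many segments are averaged first,
# and only then converted to phase and time lags.
```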
Cross spectra for a large number of days must be averaged and converted to lag values in order to obtain sufficiently small errors (see, e.g., Crary et al. 1998; van der Hooft et al. 1999). Therefore, we averaged the phase and time lags between the 20–50 and 50–100 keV energy bands of the entire 80 day X-ray outburst of GRO J1719$`-`$24. These are presented in Figure 2. The time lags are displayed on a logarithmic scale. Time lags at frequencies above 0.5 $`\nu _{\mathrm{Nyq}}`$ are displayed but not taken into account in our analysis, as Crary et al. (1998) have shown that data binning effects distort the shape of the cross spectra at these frequencies. These data show that at the lowest frequencies the phase lags are likely smaller than the high frequency lags (0.021 $`\pm `$ 0.028 rad, average of 0.001–0.02 Hz; 9 bins). At frequencies above 0.02 Hz, the hard X-rays lag the soft by 0.072 $`\pm `$ 0.010 rad (average of 0.02–0.20 Hz; 94 bins). The phase lags averaged over two 40 day intervals are similar to those averaged over the entire 80 day outburst, being 0.0017 $`\pm `$ 0.028 rad and 0.041 $`\pm `$ 0.043 rad, respectively, for the 0.001–0.02 Hz interval, and 0.082 $`\pm `$ 0.013 rad and 0.061 $`\pm `$ 0.016 rad, respectively, for the 0.02–0.20 Hz interval. The time lags of GRO J1719$`-`$24 decrease with frequency as a power law, with index 1.04 $`\pm `$ 0.13, for frequencies $`\gtrsim `$ 0.01 Hz. The extrapolation of this power law to frequencies smaller than 0.01 Hz is well above the measured time lags.
## 3 Discussion
The 20–100 keV energy spectrum steadily softened during the entire X-ray outburst of GRO J1719$`-`$24 in 1993; the photon index increased from 2.0 to 2.3 $`\pm `$ 0.05 during the rise to peak intensity, beyond which the spectrum softened more gradually. No marked changes in the spectral shape were observed during the sudden decrease in X-ray flux in 1993 December (van der Hooft et al. 1996). It is not possible, on the basis of 20–100 keV BATSE observations alone, to distinguish between black hole source states. However, observations at low X-ray energies during the decay of the X-ray light curve of GRO J1719$`-`$24 suggest that the source was most likely in the low (or hard) state. The 2–300 keV X-ray spectrum, obtained about 30 days after first detection of GRO J1719$`-`$24 by combining SIGMA data with quasi-contemporaneous data taken by TTM on board Mir-Kvant, was quite similar to the low state spectrum of Cyg X-1. The 2–300 keV spectrum of GRO J1719$`-`$24 then had a power-law shape without a soft component, and a cutoff at energies above 100 keV (Revnivtsev et al. 1998). Therefore, these observations indicate that 30 days after the X-ray outburst had started, GRO J1719$`-`$24 was in the low state. The lack of significant changes in the hard X-ray properties (van der Hooft et al. 1996) of GRO J1719$`-`$24 suggests that this conclusion applies to the entire 1993 outburst.
Recently, Crary et al. (1998) and van der Hooft et al. (1999) have studied lags between the X-ray flux variations in 20–50 and 50–100 keV BATSE data of the black-hole candidates Cyg X-1 and GRO J0422$`+`$32. Cui et al. (1997) measured hard X-ray time lags in 2–60 keV RXTE data of Cyg X-1, obtained during 1996. Crary et al. (1998) studied Cyg X-1 for a period of almost 2000 days, during which the source was likely in both the low, and high or intermediate state. They found that the lag spectra between the X-ray variations in the 20–50 and 50–100 keV energy bands of Cyg X-1 do not show an obvious trend with source state. They grouped the phase lag data according to the squared fractional rms amplitude of the noise, integrated in the frequency interval 0.03–0.488 Hz. They find that at the lowest frequencies the phase lag is consistent with zero. For higher frequencies the hard phase lag increases to a maximum of 0.04 rad near 0.20 Hz, and decreases again to near zero at the Nyquist frequency.
Crary et al. (1998) showed that binning effects decrease the observed hard X-ray time lags to zero at the Nyquist frequency. Therefore, time lags obtained for frequencies between 0.5 $`\nu _{\mathrm{Nyq}}`$ and $`\nu _{\mathrm{Nyq}}`$ may be affected by data binning. The Cyg X-1 X-ray variations in the 50–100 keV band lag those in the 20–50 keV band over the 0.01–0.20 Hz frequency interval by a time interval proportional to $`\nu ^{-0.8}`$.
Cui et al. (1997) studied Cyg X-1 during its 1996 spectral transitions. The observed period can be divided into a transition from the hard state to the soft state, a soft state, and a transition from the soft state back to the hard state. The lag spectra obtained by Cui et al. (1997) cover the frequency range 0.01–100 Hz. They find that during the state transitions the time lags between energy bands with average energy $`E_0`$ and $`E_1`$, scale with photon energy roughly as $`\mathrm{log}`$$`(E_1/E_0)`$. Such a scaling is consistent with the predictions of thermal Comptonization in the corona (see, e.g., Payne 1980; Hua & Titarchuk 1996; Kazanas, Hua & Titarchuk 1997). In the soft state the time lags become much smaller. This implies that in the soft state the size of the corona becomes much smaller.
Van der Hooft et al. (1999) determined lags in the hard X-ray variability of GRO J0422$`+`$32 during its 1992 outburst. Their time-series analysis covered the entire 180 day X-ray outburst. GRO J0422$`+`$32 is a dynamically proven black-hole candidate; during its 1992 outburst it was most likely in the low state (van der Hooft et al. 1999). They averaged the phase lags of GRO J0422$`+`$32 over a 30 day interval following first X-ray detection of the source, and over a flux-limited sample of the remaining data (95 days). Statistically significant lags were derived for the shorter interval only. They find that at the lowest frequencies the phase lag of GRO J0422$`+`$32 is consistent with zero (0.014 $`\pm `$ 0.006 rad, 0.001–0.02 Hz). At frequencies $`\gtrsim `$ 0.02 Hz, the variations in the 50–100 keV band lag those in the 20–50 keV band by 0.039 $`\pm `$ 0.003 rad (average of 0.02–0.20 Hz).
The time lags of GRO J0422$`+`$32, during the first 30 days of its outburst, decrease with frequency as a power law, with index $`\sim `$ 0.9 for $`\nu `$ $`>`$ 0.01 Hz (van der Hooft et al. 1999). Grove et al. (1997) studied the time lags of GRO J0422$`+`$32 between the X-ray variations in the 35–60 keV band and the 75–175 keV band with OSSE. They find that the hard X-ray emission lags the soft emission at all Fourier frequencies, decreasing roughly as $`\nu ^{-1}`$ up to about 10 Hz. At frequencies of $`\sim `$ 0.01 Hz, hard time lags as large as 0.3 sec are observed. The hard time lags of GRO J0422$`+`$32 obtained by Grove et al. (1997) are consistent with those obtained by van der Hooft et al. (1999).
The phase lags of GRO J1719$`-`$24 are very similar to those of GRO J0422$`+`$32 and Cyg X-1. At frequencies below 0.02 Hz very small lags are observed (consistent with zero), while at frequencies of $`\sim `$ 0.10 Hz the variations in the 50–100 keV band lag those in the 20–50 keV band. However, the phase lags of GRO J1719$`-`$24, averaged in the interval 0.02–0.20 Hz, are about twice as large as those detected in GRO J0422$`+`$32 and Cyg X-1.
These results show that the hard time lags observed in GRO J1719$`-`$24, GRO J0422$`+`$32 and Cyg X-1 are all very similar. The hard X-ray radiation lags the soft by as much as $`\sim `$ 0.1–1 sec at low frequencies. The time lags are strongly dependent on the Fourier frequency, and decrease roughly as $`\nu ^{-1}`$. The $`\nu ^{-1}`$ dependence of the hard time lags is very different from the lags expected from simple models of Compton upscattering of soft X-rays by a cloud of hot electrons near the black hole. In such a case, the energy of the escaping photons increases with the time they reside in the cloud. Therefore, higher energy photons lag the photons with lower energies by an amount proportional to the photon scattering time. If the hard X-rays are emitted from a compact region near the black hole, the resulting time lags should be independent of Fourier frequency and of the order of milliseconds.
Analysis of the hard time lags in the X-ray variability of black-hole candidates can provide information on the density structure of the accretion gas (Hua, Kazanas & Titarchuk 1997). Kazanas et al. (1997) argued that the Comptonization process takes place in an extended non-uniform cloud around the central source. They showed that such a model can account for the form of the observed PDS and energy spectra of compact sources. Hua et al. (1997) showed that the phase and time lags of the X-ray variability depend on the density profile of such an extended scattering atmosphere. Their Monte Carlo simulations of scattering in a cloud with a density profile proportional to $`r^{-1}`$ agree with our time lag data both in magnitude ($`\sim `$ 0.1 sec at 0.10 Hz) and frequency dependence ($`\nu ^{-1}`$). The results presented here support the idea that the Comptonizing regions around the black holes in Cyg X-1, GRO J0422$`+`$32 and GRO J1719$`-`$24 are quite similar in density distribution and size.
However, the observed lags require that the scattering medium has a size of order $`10^3`$ to $`10^4`$ Schwarzschild radii. It is unclear how a substantial fraction of the X-ray luminosity, which must originate from the conversion of gravitational potential energy into heat close to the black hole, can reside in a hot electron gas at such large distances. This is a generic problem for Comptonization models of the hard X-ray time lags. Also, such models do not specify the source of soft photons, nor do they account for the soft excesses and weak Fe lines seen in the energy spectra. Very detailed high signal-to-noise cross-spectral studies of the rapid X-ray variability of accreting BHCs, and combined spectro-temporal modeling, may solve this problem.
FvdH acknowledges support by the Netherlands Foundation for Research in Astronomy with financial aid from the Netherlands Organisation for Scientific Research (NWO) under contract number 782-376-011. FvdH also thanks the ‘Leids Kerkhoven–Bosscha Fonds’ for a travel grant. CK acknowledges support from NASA grant NAG-2560. JvP acknowledges support from NASA grants NAG5-2755 and NAG5-3674. WHGL gratefully acknowledges support from the National Aeronautics and Space Administration. MvdK gratefully acknowledges the Visiting Miller Professor Program of the Miller Institute for Basic Research in Science (UCB). This project was supported in part by NWO under grant PGS 78-277.
# Quantum dynamics and Gram’s matrix
M. De Cock, M. Fannes<sup>1</sup><sup>1</sup>1Research Leader, FWO Vlaanderen, P. Spincemaille
Instituut voor Theoretische Fysica
Katholieke Universiteit Leuven
Celestijnenlaan 200D
B-3001 Heverlee, Belgium
## Abstract
We propose to analyse the statistical properties of a sequence of vectors using the spectrum of the associated Gram matrix. Such sequences arise e.g. by the repeated action of a deterministic kicked quantum dynamics on an initial condition or by a random process. We argue that, when the number of time-steps, suitably scaled with respect to $`\hbar `$, increases, the limiting eigenvalue distribution of the Gram matrix reflects the possible quantum chaoticity of the original system as it tends to its classical limit. This idea is subsequently applied to study the long-time properties of sequences of random vectors at the time scale of the dimension of the Hilbert space of available states.
PACS numbers: 02.50.Cw, 03.65.-w, 05.45.+b
Discretising a classical dynamical system is in order if we want to simulate it on a computer. Its compact phase space may for this purpose be covered by a large number $`N`$ of small patches of Lebesgue measure $`1/N`$. The evolution, which we assume measure preserving and discrete in time, translates approximately into a bijection of the patches. Such a description always involves an approximation, as patches change their shape in the course of time. It is of course helpful for actual model systems to label the patches in a way that mimics the kinematic structure of phase space. So, after this coarse-graining procedure we obtain for each $`N`$ a one to one transformation $`\pi `$ of the set $`\{1,2,\dots ,N\}`$ that determines the evolution during one tick of the clock. The phase portrait consists in partitioning the discrete phase space into closed orbits of $`\pi `$, and the crucial information is the number of orbits together with their lengths as a function of $`N`$. An ergodic island of non-zero measure in the dynamical system will signal its presence by the occurrence of an orbit with a period proportional to $`N`$. Iterating the dynamical map on an initial point $`i_0`$ provides us with a sequence $`𝒊=(i_0,\pi (i_0),\pi ^2(i_0),\dots )`$ of points in $`\{1,2,\dots ,N\}`$, and we can distinguish between points belonging to ergodic or regular regions of phase space by examining the period of the time sequence of $`i_0`$ as a function of $`N`$.
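The orbit structure of the coarse-grained map is easy to extract numerically. A minimal sketch (Python; a random permutation stands in for a concrete discretised dynamics):

```python
import random

def orbit_lengths(pi):
    # Decompose a permutation pi of {0, ..., N-1} into its closed orbits
    # and return the orbit (cycle) lengths: the discrete phase portrait.
    seen = [False] * len(pi)
    lengths = []
    for i in range(len(pi)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = pi[j]
                length += 1
            lengths.append(length)
    return lengths

N = 1000
pi = list(range(N))
random.shuffle(pi)          # stand-in for the coarse-grained dynamics
print(sorted(orbit_lengths(pi), reverse=True))
# an orbit of length of order N would signal an ergodic component
```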
Truly quantum dynamical systems with compact phase space are finite dimensional by virtue of the uncertainty principle. As each state occupies the same volume $`\hbar `$, the dimension of their Hilbert space of states is $`1/\hbar `$. Planck’s constant has here a rather symbolic meaning: for $`d`$-dimensional systems it is the $`d`$-th power of the actual Planck constant, while $`\hbar =1/(2j+1)`$ for a spin with angular momentum $`j`$. The $`d`$-dimensional complex Hilbert space, or, more precisely, the space of complex rays in $`𝑪^d`$, is the quantum space with $`d`$ elements. The space of rays is called the projective Hilbert space of dimension $`d`$ and denoted by $`\mathrm{pr}𝑪^d`$. In dimension 2, it turns out to be the unit sphere in $`𝑹^3`$, the Bloch sphere. In contrast to a classical space, distinct points can lie arbitrarily close, the distance between the rays $`[\phi ]=𝑪\phi `$ and $`[\psi ]=𝑪\psi `$ generated by the normalised vectors $`\phi `$ and $`\psi `$ being
$$\mathrm{d}([\phi ],[\psi ]):=\underset{z\in 𝑪,|z|=1}{inf}\|\phi -z\psi \|^2=2-2|\langle \phi ,\psi \rangle |=4\mathrm{sin}^2\frac{\theta }{2},$$
(1)
with $`\theta \in [0,\pi /2]`$ the angle between the rays. The maximal separation between points is reached when they correspond to orthogonal rays. Projective Hilbert spaces carry a natural Riemannian structure given by the Study-Fubini metric, but we shall not be so much concerned here with this continuum feature and rather focus on their discrete aspects.
A quantum evolution in discrete time, also called a kicked evolution, is determined by a unitary Floquet operator $`u`$. In the Schrödinger picture, $`\phi \mapsto u\phi `$ is the evolution between two consecutive kicks. We use the same notation to denote the corresponding evolution in the space of rays: $`p\mapsto up`$. We now face the problem of studying time sequences $`𝒑=(p_0,up_0,u^2p_0,\dots )`$ generated by a Floquet operator $`u`$ as it acts repeatedly on an initial condition $`p_0`$.
In the vast literature on quantum chaos, dynamical properties of quantum systems are often investigated by considering the temporal behaviour of the Husimi or the Wigner functions corresponding to well-localised states in phase space, such as coherent states. Such a description relies, in taking the (semi)-classical limit, on a definite geometrical picture of the corresponding classical phase space i.e. on a particular choice of basic observables such as the usual position and momentum or angular momentum. Many references can be found in and . We argue in this letter that it is worthwhile to put things in a more abstract perspective. Gram’s matrix, or rather its spectrum, provides us with a powerful tool to analyse the statistical properties of time-sequences of points in a projective Hilbert space.
The Gram matrix $`\mathrm{G}(𝝋)`$ of a sequence $`𝝋=(\phi (1),\phi (2),\mathrm{},\phi (K))`$ of vectors is
$$\mathrm{G}(𝝋)=\left(\begin{array}{cccc}\phi (1),\phi (1)& \phi (1),\phi (2)& \mathrm{}& \phi (1),\phi (K)\\ \phi (2),\phi (1)& \phi (2),\phi (2)& \mathrm{}& \phi (2),\phi (K)\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ \phi (K),\phi (1)& \phi (K),\phi (2)& \mathrm{}& \phi (K),\phi (K)\end{array}\right).$$
(2)
$`\mathrm{G}(𝝋)`$ is positive semi-definite and its rank equals the dimension of the space spanned by the $`\phi (j)`$ . In particular $`𝝋`$ is linearly independent if and only if $`det(\mathrm{G}(𝝋))\ne 0`$. The spectrum of $`\mathrm{G}(𝝋)`$ is independent of the order of the $`\phi (j)`$ in $`𝝋`$ and of multiplying the $`\phi (j)`$ by complex numbers of modulus 1. This means that for a given sequence $`𝒑=(p(1),p(2),\dots ,p(K))`$ of points in a projective Hilbert space, specified by normalised vectors $`\phi (j)`$ as $`p(j)=[\phi (j)]`$, the spectrum of $`\mathrm{G}(𝝋)`$ depends only on $`𝒑`$ and is insensitive to the order of the points in $`𝒑`$. It may therefore be denoted by $`\mathrm{\Sigma }(𝒑)`$.
Let us for a moment consider a classical word $`𝒊=(i(1),i(2),\mathrm{},i(K))`$ where the letters are chosen from a given alphabet $`\{1,2,\mathrm{}\}`$. In fact, $`𝒊`$ is in one to one correspondence with a sequence $`𝒑=([e_{i(1)}],[e_{i(2)}],\mathrm{},[e_{i(K)}])`$ of vectors through the identification of $`j`$ with $`e_j`$ for an orthonormal basis $`\{e_1,e_2,\mathrm{}\}`$ of a Hilbert space. Grouping $`e_{i(\mathrm{})}`$ with equal index $`j`$, the Gram matrix is block-diagonal with block $`E(j)`$ of the type
$$\left(\begin{array}{cccc}1& 1& \mathrm{}& 1\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 1& 1& \mathrm{}& 1\end{array}\right).$$
(3)
The dimension of $`E(j)`$ is precisely the multiplicity $`m(j)`$ of $`j`$ in $`𝒊`$. As the spectrum of $`E(j)`$ consists of the non-degenerate eigenvalue $`m(j)`$ and the $`(m(j)-1)`$-fold degenerate eigenvalue 0, we find that $`\mathrm{\Sigma }(𝒑)`$ determines precisely the number of different letters appearing in $`𝒊`$ together with their multiplicities, i.e. the relative frequencies of the different letters in $`𝒊`$. The spectrum of the Gram matrix of a very regular sequence $`𝒊`$ will consist of a few large natural numbers and a highly degenerate 0, while for sequences with many different indices the spectrum will be concentrated on small natural numbers appearing with high multiplicities. A same interpretation remains valid for the general non-commutative case: a Gram matrix with spectrum concentrated around small natural numbers points at a vector wandering wildly through the Hilbert space of the system and is therefore a sign of chaotic behaviour. More regular motion, such as precession or slow diffusion, signals its presence by large eigenvalues and a high occurrence of eigenvalues close to 0. In contrast to the classical case, however, eigenvalues are no longer limited to natural numbers, so that a same point in a projective Hilbert space can now be visited a fractional number of times.
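This classical statement is easily verified directly. A minimal sketch (Python with numpy; the word is an arbitrary illustrative choice):

```python
import numpy as np

word = [0, 2, 0, 1, 0, 2]                      # a classical word, letters from {0, 1, 2}
basis = np.eye(3)                              # orthonormal basis vectors e_j
phis = np.array([basis[j] for j in word])      # the sequence of vectors
G = phis @ phis.T                              # Gram matrix <phi(a), phi(b)>
eig = np.sort(np.linalg.eigvalsh(G))[::-1]
print(eig)                                     # ~ [3, 2, 1, 0, 0, 0]:
# the nonzero eigenvalues are the letter multiplicities m(j),
# and 0 occurs with multiplicity sum_j (m(j) - 1)
```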
The existence for quantum systems of an intermediate time scale with interesting and describable behaviour, in between the very short one, of order $`\mathrm{log}\hbar `$, where the quantum system slavishly follows its classical limit, and the very long one, where quasi-periodic behaviour due to the discreteness of the spectrum of the Floquet operator is dominant, is a central theme in many papers . We are, more precisely, interested in the limiting eigenvalue distribution of the Gram matrix when its dimension, the number of time-steps, tends to infinity, appropriately scaled with respect to the quantum parameter. To obtain this, we consider the limit of the empirical measure
$$\frac{1}{K}\sum _{j=1}^{K}\delta (\lambda -\gamma _j)$$
where the $`\gamma _j`$ are the eigenvalues of the Gram matrix. The limiting distribution should reflect the quantum character of the dynamics as it tends to its classical limit. We do not claim that we can settle this question but we shall at least provide some rigorous support for this expectation.
Instead of considering a genuine unitary dynamics acting on an initial condition, we consider sequences $`𝒑=(p(1),p(2),\dots ,p(K))`$ of points in the projective Hilbert space of dimension $`N`$, independently and randomly chosen with respect to the uniform measure. Recall that $`\mathrm{pr}𝑪^N`$ is compact and that it carries a unique normalised measure, called uniform, which is invariant under the action of every $`N\times N`$ unitary. Picking independent normalised vectors randomly with respect to this measure is quite different from picking the components of the vectors with respect to a given basis in an independent and random way with respect to some suitably chosen probability measure. Next, we compute the spectrum of the Gram matrix of such a sequence. This spectrum is of course a random object but it turns out that, in the limit of large $`N`$ and for a rescaled time $`\tau =K/N`$, the spectral distribution tends to a definite limit given by the Marchenko-Pastur distribution $`\mu _\tau `$ . The actual computations are somewhat involved and will be presented in . Though the Gram matrices are given in terms of independent random vectors, there is no independence between the entries. E.g., each Gram matrix is positive semi-definite, which is totally incompatible with independence of the matrix elements. This makes the computation quite different from Wigner’s random matrix computation. $`\mu _\tau `$ is obtained either by a combinatorial argument in terms of its moments or by determining the expectation of the resolvent of the Gram matrix.
When $`0<\tau \le 1`$ the probability measure $`\mu _\tau `$ is given by a continuous density $`\rho `$. For very small $`\tau `$, we must choose relatively few vectors in a large space. This will often lead to almost orthogonal choices and therefore $`\rho `$ will be concentrated around 1. When $`\tau `$ increases to 1, there is a fair chance of many vectors overlapping and the support of $`\rho `$ will simultaneously extend towards 0, which is a lower bound of its support, and to larger positive values. When $`\tau >1`$ there will almost surely be a sizeable degree of linear dependence responsible for a high multiplicity of the eigenvalue 0. In fact, it turns out that $`\mu _\tau `$ decomposes for $`\tau >1`$ into an atom at 0 and an absolutely continuous part:
$$d\mu _\tau (x)=\frac{\tau -1}{\tau }\delta (x)dx+\rho (x)dx,$$
(4)
where $`\rho `$ is the absolutely continuous part of $`\mu _\tau `$. The weight of $`\rho `$ is $`1/\tau `$. Moreover $`\rho `$ is compactly supported in the interval $`[(\sqrt{\tau }-1)^2,(\sqrt{\tau }+1)^2]`$. A similar computation in the classical case yields a Poisson distribution. This result is reminiscent of Wigner’s semicircular distribution for the spectrum of large random matrices, where compactness of the limiting distribution is also a typical feature of the non-commutativity . A simple measure of the dynamical entropy of the system is the length of the support of $`\mu _\tau `$. For our random dynamics, this quantity grows as $`4\sqrt{\tau }`$, in contrast to an expanding chaotic dynamics where the entropy grows linearly in $`\tau `$.
Figure 1: The limiting spectral distribution of $`G(𝒑)`$
The figure shows the limiting spectral distribution of Gram matrices in the region $`0.02\le \tau \le 3`$. The $`\delta `$ contribution of weight $`(\tau -1)/\tau `$ that appears for $`\tau >1`$ is rendered by the fat line, which has height $`(\tau -1)/\tau `$. The continuous part is, for all values of $`\tau `$, only non-vanishing for $`(\sqrt{\tau }-1)^2<x<(\sqrt{\tau }+1)^2`$, but this is only visible in the figure for moderately small values of $`\tau `$. For $`\tau `$ tending to 0, a $`\delta `$ distribution at $`x=1`$ will appear, and the probability density has for $`\tau =1`$ a singularity at $`x=0`$ of order $`-1/2`$.
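The limiting law can also be probed numerically: vectors of i.i.d. complex Gaussian components, normalised to unit length, are uniformly distributed on the rays, and the eigenvalues of the resulting Gram matrix can be compared with the support $`[(\sqrt{\tau }-1)^2,(\sqrt{\tau }+1)^2]`$ and, for $`\tau >1`$, with the weight $`(\tau -1)/\tau `$ of the atom at 0. A minimal sketch (Python with numpy; $`N`$ and $`\tau `$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau = 400, 2.0
K = int(tau * N)

# uniformly random rays: normalised i.i.d. complex Gaussian vectors
V = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
V /= np.linalg.norm(V, axis=1, keepdims=True)

G = V @ V.conj().T                     # K x K Gram matrix
eig = np.linalg.eigvalsh(G)

print(np.mean(eig < 1e-8))             # ~ (tau-1)/tau = 0.5: the atom at 0
nonzero = eig[eig > 1e-8]
print(nonzero.min(), nonzero.max())    # ~ (sqrt(tau)-1)^2 and (sqrt(tau)+1)^2
```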
It is a pleasure to thank H. Wagner for his constant interest and enthusiasm in pointing out the relevance of geometrical cum statistical ideas in physics. Two of the authors (M.D.C. and P.S.) acknowledge financial support from FWO project G.0239.96.
# Universal features of the off-equilibrium fragmentation with the Gaussian dissipation
## Abstract
We investigate universal features of off-equilibrium sequential and conservative fragmentation processes with dissipative effects simulated by a Gaussian random inactivation process. The relation between the fragment multiplicity scaling law and the fragment size distribution is studied, and a dependence of the scaling exponents on the parameters of the fragmentation and inactivation rate functions is established.
Fragmentation is a universal process which can be found at all scales in nature. The most general sequential binary and conservative fragmentation processes, with scale-invariant fragmentation and inactivation rate functions, have been previously studied in much detail . In this fragmentation-inactivation binary (FIB) model , one deals with fragments characterized by some conserved scalar quantity that is called the fragment mass . The ancestor fragment of mass $`N`$ fragments via an ordered and irreversible sequence of steps. The first step is either a binary fragmentation, $`(N)\to (j)+(N-j)`$ , or an inactivation, $`(N)\to (N)^{*}`$ . Once inactive, the cluster cannot be reactivated anymore. The fragmentation leads to two fragments, with the mass partition probability $`F_{j,N-j}`$ . In the following steps, the process continues independently for each active descendant fragment until either the low mass cutoff for further indivisible particles (monomers) is reached or all fragments are inactive. For any event, the fragmentation and inactivation occur with the probabilities per unit of time $`F_{j,k-j}`$ and $`I_k`$ respectively. The fragmenting system and its evolution are completely specified by these rate functions and the initial state. It is also useful to consider the fragmentation probability $`p_F`$ without specifying the masses of the descendants: $`p_F(k)=\sum _{i=1}^{k-1}F_{i,k-i}\left(I_k+\sum _{i=1}^{k-1}F_{i,k-i}\right)^{-1}`$ . If the instability of smaller fragments is smaller than the instability of larger fragments, $`p_F(k)`$ is an increasing function of the fragment mass and the total mass is converted into finite size fragments. This is the shattered phase. The fragment mass independence of $`p_F(k)`$ at any stage of the process until the cutoff scale for monomers characterizes the critical transition region. The multiplicity anomalous dimension, $`\gamma =d(\mathrm{ln}<m>)/d(\mathrm{ln}N)`$ , is the order parameter in the FIB model. It equals 1 in the shattering phase and takes intermediate values between 0 and 1 in the critical transition region.
For most fragmenting systems, the off-equilibrium relaxation process ceases due to dissipation. The dissipation is not always scale-invariant, as considered in Ref. 1, but, on the contrary, is often characterized by a definite and usually small length scale. It is then an open question to what extent fragmentation processes which, on the one hand, are driven by a homogeneous scale-invariant fragmentation rate function and, on the other hand, are inactivated at a certain fixed scale by a random inactivation process, may develop scale-invariant and universal features in both the fragment mass distribution $`n(k)`$ and the fragment multiplicity distribution $`P(m)`$ . This question is important in view of the widespread occurrence of scale-invariant fragment mass distributions $`n(k)\sim k^{-\tau }`$ and the lack of convincing arguments for using homogeneous dissipation functions in many processes, including parton cascading in perturbative quantum chromodynamics (PQCD) or the fragmentation of highly excited atomic nuclei, atomic clusters or polymers. In this work, we address this fundamental question using the FIB process with the homogeneous fragmentation rate function $`F_{j,k-j}=[j(k-j)]^\alpha `$ , and with dissipation at small scales, which is simulated by the Gaussian inactivation rate function:
$`I_k=c\,\mathrm{exp}\left[-{\displaystyle \frac{1}{2\sigma ^2}}\left({\displaystyle \frac{k-1}{N}}\right)^2\right].`$ (1)
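An event-by-event Monte-Carlo simulation of this FIB cascade is straightforward. In the minimal sketch below (Python; all parameter values are illustrative, not those of the figures), each active fragment of mass $`k>1`$ either splits, with relative weights $`F_{j,k-j}=[j(k-j)]^\alpha `$, or becomes inactive, with relative weight $`I_k`$; monomers are indivisible:

```python
import math, random

def fib_event(N, alpha, c, sigma):
    """One FIB cascade: returns the list of final (inactive) fragment masses."""
    def inact(k):                       # Gaussian inactivation rate, Eq. (1)
        return c * math.exp(-0.5 * ((k - 1) / N) ** 2 / sigma ** 2)

    active, final = [N], []
    while active:
        k = active.pop()
        if k == 1:                      # monomers are indivisible
            final.append(1)
            continue
        w = [(j * (k - j)) ** alpha for j in range(1, k)]
        if random.random() < inact(k) / (inact(k) + sum(w)):
            final.append(k)             # fragment becomes inactive
        else:                           # binary split with weights F_{j,k-j}
            j = random.choices(range(1, k), weights=w)[0]
            active += [j, k - j]
    return final

events = [fib_event(128, -1.0, 1.0, 0.1) for _ in range(1000)]
mult = [len(e) for e in events]         # fragment multiplicities m, event by event
print(sum(mult) / len(mult))            # input to <m>, P(m) and the moments below
```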
An asymptotic ($`t\to \infty `$) fragment mass distribution in the critical transition region of the FIB model with scale-invariant dissipation phenomena is a power law with an exponent $`\tau \le 2`$ . In the shattering phase, the fragment mass distribution is also a power law, but with an exponent $`\tau >2`$ . Another characteristic observable is the fragment multiplicity distribution $`P(m)`$ of the total number of fragments $`m=\sum _km_k`$ , where $`P_k(m)`$ is the probability distribution of the number of fragments of mass $`k`$ . This quantity has been intensely studied in strong interaction physics . Of particular importance is the possibility of asymptotic scaling of multiplicity probability distributions:
$`<m>^\delta P(m)=\mathrm{\Phi }(z_{(\delta )}),\qquad z_{(\delta )}\equiv {\displaystyle \frac{m-<m>}{<m>^\delta }}`$ (2)
where the asymptotic behaviour is defined as $`<m>\to \infty `$ , $`m\to \infty `$ for a fixed
$`(m/<m>)`$-ratio. $`<m>`$ is the multiplicity of fragments averaged over an ensemble of events. The scaling law (2) means that, for example, data for differing energies (hence differing $`<m>`$) should fall on the same curve when $`<m>^\delta P(m)`$ is plotted against the scaled variable $`z_{(\delta )}\equiv (m-<m>)/<m>^\delta `$ . Some time ago Koba, Nielsen and Olesen (KNO) suggested an asymptotic scaling (2) with $`\delta =1`$ in strong interaction physics . The same scaling has been found also in the critical transition region of the scale-invariant FIB process for $`p_F>1/2`$ and $`\alpha \ge -1`$ . Recently, Botet, Płoszajczak and Latora (BPL) reported another scaling limit in (2), with $`\delta =1/2`$ , which holds in percolation and in the shattering phase of the scale-invariant FIB process . $`\delta =1/2`$ and 1 are the two limiting values since $`\delta >1`$ or $`\delta <1/2`$ are incompatible with the scaling hypothesis (2) .
The study presented in this Letter corresponds to the domain $`\alpha \ge -1`$ of fragmentation rate functions $`F_{j,k-j}`$ . Many known homogeneous fragmentation kernels correspond to this domain. These include the singular kernel $`\alpha =-1`$ in PQCD gluodynamics , $`\alpha =2/3`$ for the spinodal volume instabilities in three dimensions , $`\alpha =+1`$ in the scalar $`\lambda \varphi _6^3`$ field theory in six dimensions , and many others . For $`\alpha <-1`$ , the fragmentation process is dominated by the splitting $`(k)\to (k-1)+(1)`$ at each step in the cascade, and leads to a finite limiting value of $`<m>`$ independently of the initial size $`N`$ . In this evaporation phase, the scaling solution (2) does not hold and the multiplicity anomalous dimension equals zero when $`N\to \infty `$ . This phase is not relevant for the problem we want to address in this Letter.
Without restricting the generality of our discussion, we will present below results for fragmentation kernels with $`\alpha =-1`$ and $`\alpha =+1`$ . The upper part of Fig. 1 shows multiplicity distributions for $`\alpha =-1`$ in the scaling variables (2) for $`\delta =1`$ (the upper left part), and fragment mass distributions for the same parameters (the upper right part). The cascade equations of the Gaussian FIB model have been solved by Monte-Carlo simulations for different initial system sizes ($`N=1024,4096`$) and for the following exemplary parameters: $`c=1`$ and $`\sigma =0.1`$ , $`1`$ of the inactivation rate function $`I_k\equiv I_k(c,\sigma )`$ . We have made an exhaustive analysis of $`P(m)`$ for a broad range of $`c,\sigma `$ parameters , finding in all cases the KNO scaling ($`\delta =1`$). We have found the KNO scaling uniquely for $`\alpha =-1`$ . The shape of the KNO scaling function $`\mathrm{\Phi }(z_{(1)})`$ depends on the precise value of both $`c`$ and $`\sigma `$ .
In the lower left part of Fig. 1, we show typical multiplicity distributions for $`\alpha =+1`$ , which are plotted for different system sizes in the BPL scaling variables ($`\delta =1/2`$ ). The corresponding fragment mass distributions are shown in the lower right part of Fig. 1. Again, the precise form of the BPL scaling function $`\mathrm{\Phi }(z_{(1/2)})`$ depends on the chosen set of parameters $`c`$ and $`\sigma `$ . In contrast to these results of the Gaussian FIB model, the fragmentation process in the scale-invariant FIB model for any value of the exponent $`\alpha `$ may be found either in the critical transition region or in the shattering phase, depending on the homogeneity index $`\beta `$ of the inactivation rate function $`I_k=I_1k^\beta `$ . This means that e.g. both for $`\alpha =-1`$ and $`+1`$, one may see either the KNO scaling or the BPL scaling of multiplicity distributions, depending on the precise value of the homogeneity index of the inactivation term.
Concerning the fragment mass distributions, Fig. 1 shows the distributions for $`\alpha =-1,+1`$ and different parameters of the Gaussian inactivation rate function $`I_k(c,\sigma )`$ . For $`\sigma `$ larger than $`0.5`$ , one finds a power law distribution of fragment masses for any value of the parameter $`c`$ . In the studied case, $`\sigma =1`$, $`c=1`$ , the exponent $`\tau `$ equals 1.8 and 2.8 for $`\alpha =-1`$ and $`\alpha =+1`$ respectively. For a given $`\alpha `$ , the value of the exponent $`\tau `$ is remarkably independent of $`\sigma `$ but depends strongly on the value of the parameter $`c`$ in $`I_k(c,\sigma )`$ . For a smaller value of $`\sigma `$ ($`\sigma =0.1`$ is shown in Fig. 1), the fragment mass distribution decreases exponentially and the shape of the scaling function resembles a Gaussian distribution. The form of this exponential distribution depends on both the $`c`$ and $`\sigma `$ parameters.
As a generic case for $`\alpha =-1`$ , we have found a scale-invariant region of power law fragment mass distributions with $`\tau \le 2`$ for $`\sigma `$ above $`0.5`$ , and an exponential region of mass distributions for $`\sigma `$ less than $`0.5`$ . The power law region is completely analogous to the critical transition region of the scale-invariant FIB model for $`\alpha >-1`$ and $`p_F>1/2`$ , because the multiplicity anomalous dimension in both models is:
$`\gamma =\tau -1\qquad (0\le \gamma \le 1).`$ (3)
We have verified the validity of this relation in the Gaussian FIB model for a broad range of $`c,\sigma `$ values. In the exponential region, $`\gamma `$ is always equal to 1 independently of the value of the parameter $`c`$ , i.e. this region is in the shattering phase. One should recall that shattering in the scale-invariant FIB model is related exclusively to the BPL scaling , whereas in the Gaussian FIB model for $`\alpha =-1`$ the KNO scaling holds.
The fragment size distributions for $`\alpha =+1`$ and different values of $`\sigma `$ behave similarly to those in the $`\alpha =-1`$ case, except that now for $`\sigma `$ above $`0.5`$ the power law exponent is $`\tau >2`$ . For all $`\sigma `$, i.e. in both the exponential and power law regions of the mass distribution, the multiplicity anomalous dimension is $`\gamma =1`$ and the BPL scaling holds. This generic situation is completely analogous to the multiplicity behaviour found in the shattering phase of the scale-invariant FIB model .
Whenever the fragment size distribution is a power law, the KNO scaling of multiplicity distributions is associated with $`\tau \le 2`$ and the BPL scaling of multiplicity distributions with $`\tau >2`$ , in both the scale-invariant and scale-dependent regimes of dissipation. This clearly indicates a direct relation between the multiplicity scaling law and the fragment mass distribution scaling regimes in the FIB model. In view of the generality of the FIB process, it would be very interesting to test this relation experimentally. A novel aspect of the Gaussian FIB model is associated with the properties of multiplicity scaling in the new region of exponential fragment mass distributions. In this region, the BPL scaling holds for $`\alpha =+1`$ whereas the KNO scaling is seen for $`\alpha =-1`$ .
In Fig. 2 we plot, for different values of the parameter $`c`$ , the normalized cumulant factorial moment of order two, $`\gamma _2=(<m(m-1)>-<m>^2)/<m>^2`$ , vs the width $`\sigma `$ of the inactivation rate function $`I_k(c,\sigma )`$ . The exponent of the homogeneous fragmentation kernel is $`\alpha =-1`$ . For this choice of $`\alpha `$ , the KNO scaling holds and $`\gamma _2`$ becomes the second moment of the scaling function $`\mathrm{\Phi }(z_{(1)})`$ , which is independent of the initial mass $`N`$ . For each point ($`c`$ , $`\sigma `$) , the cascade equations of the FIB model have been solved exactly by a recurrence formula up to the initial system size $`N=2^{18}`$ . As can be seen in Fig. 2, the multiplicity fluctuations as measured by $`\gamma _2`$ are extremely small in the exponential region for $`\sigma `$ less than $`0.5`$ . The change of $`\gamma _2`$ when passing from the power law to the exponential region is continuous, but the largest variations of $`\gamma _2(\sigma )`$ appear at $`\sigma \approx 0.5`$ . For large values of $`\sigma `$ , the cumulant factorial moment approaches a limiting value which depends on the value of the parameter $`c`$ .
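The recurrence itself is straightforward to reproduce: since the two subtrees created by a split evolve independently, $`<m>_k`$ and $`<m^2>_k`$ propagate upwards in $`k`$. The sketch below uses the same illustrative kernel and inactivation forms as above and a reduced system size, the recurrence being of order $`N^2`$.

```python
import numpy as np

ALPHA, C, SIGMA = -1.0, 1.0, 1.0
N = 4096   # the Letter reaches N = 2**18; kept small here for speed

m1 = np.zeros(N + 1)      # <m>_k
m2 = np.zeros(N + 1)      # <m^2>_k
m1[1], m2[1] = 1.0, 1.0   # a monomer is a single final fragment

for k in range(2, N + 1):
    j = np.arange(1, k)
    w = (j * (k - j)) ** (ALPHA / 2.0)          # assumed illustrative kernel
    F_k = w.sum()
    w /= F_k
    I_k = np.exp(-(np.log(k) - np.log(C)) ** 2 / (2.0 * SIGMA ** 2))
    p = F_k / (F_k + I_k)                       # probability to fragment
    # independence of the two subtrees gives the cross term 2*<m>_j*<m>_{k-j}
    m1[k] = (1 - p) + p * np.sum(w * (m1[j] + m1[k - j]))
    m2[k] = (1 - p) + p * np.sum(w * (m2[j] + m2[k - j] + 2 * m1[j] * m1[k - j]))

g2 = (m2[N] - m1[N] - m1[N] ** 2) / m1[N] ** 2   # (<m(m-1)> - <m>^2) / <m>^2
print(f"<m> = {m1[N]:.1f}, gamma_2 = {g2:.3f}")
```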
The experimental information about $`\gamma _2`$ is not extensive and concerns mainly charged particle multiplicities at relativistic and ultrarelativistic energies. The DELPHI Collaboration reported data on hadron production in $`e^+e^-`$ annihilations at the center of mass (c.m.) energy $`\sqrt{s}=91`$ GeV, finding $`\gamma _2=0.04`$ . In hadron-hadron collisions $`\pi ^+p`$ , $`K^+p`$ , $`pp`$ , $`p\overline{p}`$ at c.m. energies ranging up to 1000 GeV , values of $`\gamma _2`$ increase from about 0.05 to 0.3 as energies increase to collider values. The distribution of galaxy counts in the regions of sky covered by the Zwicky catalogue yields $`\gamma _2\approx 0.3`$ . Independently of the question whether the KNO scaling holds in all those different physical systems , the measured values of $`\gamma _2`$ clearly exclude the exponential region of the Gaussian FIB process. Much more information could be extracted if, in addition to the moments of the multiplicity distribution, the mass distribution were also available. In high energy lepton and/or hadron collisions, for example, this would require measuring the hadron mass distribution.
In conclusion, we have demonstrated that off-equilibrium binary fragmentation with a scale-invariant fragmentation kernel and a scale-dependent inactivation, simulating the dissipation at small scales, yields fragment mass and fragment multiplicity distributions which are scale-invariant for a broad range of parameters. This is an important finding because most fragmentation processes in nature which have these scale-invariant features are probably not associated with dissipative processes acting at all scales. The scale-dependent fragmentation processes may also develop strong scale-invariant fluctuations (the KNO scaling), though the region of their appearance is restricted to the particular value $`\alpha =-1`$ of the exponent of the homogeneous fragmentation function. The region at $`\alpha =-1`$ and $`\sigma `$ above $`0.5`$ is the critical transition region of the Gaussian FIB process. For other values of $`\alpha `$ , the fragment multiplicity distributions obey the BPL scaling, i.e. the small amplitude limit of scaling multiplicity fluctuations. Another transition zone of the Gaussian FIB model is defined by the width $`\sigma `$ of the inactivation rate function. At $`\sigma \approx 0.5`$ , the fragment size distribution changes from exponential (for $`\sigma <0.5`$) into a power law (for $`\sigma >0.5`$) . The form of the scaling function $`\mathrm{\Phi }(z_\delta )`$ , together with the form of the fragment mass distribution $`n(k)`$ , imposes strong constraints on the choice of the basic functions of the FIB kinetic equations: the fragmentation and inactivation functions. This has been demonstrated on the example of hadron production data in $`e^+e^-`$ annihilation . The results of this Letter show that the closing of the gap between experimental observables, related to the fragment mass distribution and/or the fragment multiplicity distribution, and the basic ingredients of the kinetic theory, i.e. the rates of activation $`F_{j,k-j}`$ and inactivation $`I_k`$ , can be achieved for many physical systems in nature.
Figure captions
Fig. 1
Multiplicity probability distributions in the scaling variables (see eq. (2)), and the fragment mass distribution for two homogeneous fragmentation kernels and two Gaussian inactivation rate functions. Each set of data corresponds to $`10^6`$ independent events of Monte-Carlo simulations.
(i) Upper left part : the fragmentation kernel with $`\alpha =-1`$ and the inactivation rate function (1) for $`c=1`$ and two typical values of $`\sigma `$. Two sets of data are plotted for two different total masses : $`N=1024`$ (crosses) and $`N=4096`$ (circles). These data are plotted in the KNO form, i.e. : $`\delta =1`$ (see eq. (2)).
(ii) Upper right part : the fragment mass distributions on a double-logarithmic scale are shown for the same parameters $`\alpha ,c,\sigma `$ as in (i). The total mass is $`N=4096`$. Big stars represent results obtained for the same values of the $`\alpha ,c`$ parameters and for a much larger value of $`\sigma `$ ($`\sigma =10`$ ) , to show the independence of the scaling part of the fragment mass distribution of the value of $`\sigma `$. The line between the points is shown to guide the eye.
(iii) Lower left part : the same as in (i) but for the fragmentation kernel with $`\alpha =+1`$ . These data are plotted in the BPL form, i.e. : $`\delta =1/2`$ (see eq. (2)).
(iv) Lower right part : the fragment mass distributions for $`\alpha =+1`$. Parameters $`c,\sigma ,N`$ as in (ii).
Fig. 2
The cumulant factorial moment $`\gamma _2`$ of the fragment multiplicity distribution is plotted vs the width parameter $`\sigma `$ of the Gaussian inactivation function (1) with $`c=0.5,1,5`$ . The homogeneous fragmentation kernel is taken with $`\alpha =-1`$ . Each point corresponds to a system of size $`N=2^{18}`$ , and the values of $`\gamma _2`$ are calculated by solving exact recurrence equations. The line joining the points is shown to guide the eye.
# The temporal characteristics of the TeV Gamma emission from Mkn 501 in 1997 - Part II: Results from HEGRA CT1 and CT2
## 1 Introduction
The BL-Lac object Mkn 501 showed strong and frequent flaring in 1997. The source has been observed by many different experiments using imaging air Cherenkov telescopes (IACTs). Here we report on observations with the HEGRA stand-alone telescopes CT1 and CT2 while observations with the HEGRA CT system are reported in part I of this paper (Aharonian et al. , subsequently Part I).
From March 11 to October 20, 1997 the source was monitored every night whenever weather and background light permitted it. A fraction of the observations was carried out by up to 6 telescopes.
A detailed discussion of Mkn 501 and its history of $`\gamma `$-emission is given in Part I, as well as the details of the stereo-mode observations with 4 telescopes, the related analysis methods, the stereo-mode results and some comparisons of the stereo-mode data with RXTE observations. The data from the stand-alone telescopes CT1 and CT2 are less precise than the system data, as the energy and angular resolution are somewhat worse. Nevertheless they complement the CT system data, as significantly longer observations were carried out. Due to additional observations made under the presence of moonlight, the lightcurve of CT1 is the most complete of all observations in 1997.
This paper has the following structure: Section 2 summarizes the relevant telescope parameters of CT1 and CT2 and important performance data. The details of the observations and data analysis are presented in section 3 together with the combined lightcurve. For the comparison with lower energy data from RXTE we include HEGRA observations from 1996. The CT2 data analysis is presented in section 4. The combined lightcurve, specific details and conclusions are discussed in sections 5 and 6.
## 2 The HEGRA Cherenkov Telescopes CT1 and CT2
The HEGRA collaboration is operating six imaging atmospheric Cherenkov telescopes for Gamma Astronomy as part of its cosmic ray detector complex at the Observatory Roque de los Muchachos on the Canary island of La Palma (28.75° N, 17.89° W, 2200 m a.s.l., see e.g. Lindner et al. ). While the first two telescopes (CT1 and CT2) are operated in stand-alone mode, the other four (CT3, 4, 5 and 6) are run as a system of telescopes in order to achieve stereoscopic observations of the air-showers.
### 2.1 The telescope CT1
HEGRA CT1 was commissioned in August 1992. In its 1997 configuration, CT1 had a mirror made up of 18 spherical round glass mirrors of 5 m focal length and a total mirror area of 5 m<sup>2</sup>. The photomultiplier (PM) camera of CT1 consists of 127 3/4<sup>′′</sup> EMI 9083A tubes in a hexagonally dense package with an angular diameter of $`3^{\circ }`$ (individual pixel diameter: $`0.25^{\circ }`$). The tracking accuracy of CT1 is better than 0.1°. The telescope hardware is described in detail in Mirzoyan et al. () and Rauterberg et al. ().
#### 2.1.1 Camera settings and observations under the presence of moonlight
During the 1997 observing period, CT1 was run with a range of slightly different high voltage settings for the PM camera:
During dark nights, two settings were used: before April 29, the settings from previous years were used, which we name HV1 in this paper. After April 29, the high voltages were increased by $`6`$ % in order to compensate for PM dynode aging effects and to lower the energy threshold of the telescope to below its pre-1996 value. This second setting we denote by HV2.
Soon after the beginning of the 1997 observing period, the strong variability of Mkn 501 made it obvious that it was of great importance to dedicate as much observation time as possible to the source. Until recently, it was believed that Cherenkov telescopes can only operate during moonless nights due to the increase in PM current and noise caused by the general increase in background light. As our studies with CT1 show, this limitation can be largely overcome by fast amplifiers with AC coupling to low-gain PM cameras, for which the high voltage is reduced by several percent compared to the optimal setting for moonless nights. This voltage reduction increases the telescope's threshold by a factor of up to 2.6, but observations of strong gamma sources can still give useful results. Details of the observations in the presence of moonlight are given in Raubenheimer et al. ().
CT1 observed Mkn 501 for nearly 7 months, whenever the weather fulfilled the standard observing conditions and the source was at zenith angles below $`60^{\circ }`$. The additional observations under the presence of moonlight make the lightcurve obtained from CT1 the most complete one of all 1997 light curves of this source in the TeV energy range. The moonlight observations were taken with four different PM voltage settings: HV1 and HV2 as described above, and settings with the voltage reduced by 10% and 14% relative to HV2. The latter settings are named HV3 and HV4. Nearly all data taken under the presence of moonlight were taken with the settings HV1 and HV2. HV4 was used only for observations close to the nearly full moon.
### 2.2 The telescope CT2
The second HEGRA Cherenkov telescope, CT2, was built in 1993 and has been observing in an essentially unchanged configuration since 1994. CT2 is located at 93 m distance from CT1, i.e. some of the showers are seen simultaneously by both telescopes when operated at the same time. Nevertheless, we treat the observations as independent ones.
CT2 was the prototype for the HEGRA Cherenkov telescope system. As opposed to the equatorially mounted CT1, it has an ALT-AZ mount. The mirror elements are again round glass mirrors of 60 cm diameter and 5 m focal length, but 30 instead of 18 are used, giving CT2 a mirror area of 8.5 m<sup>2</sup> and thus a lower energy threshold compared to CT1.
In 1997 CT2 was still operated with its original 61 pixel camera with a field of view of 3.7° and an angular diameter of the individual pixel of 0.43°. Studies of the trigger rate as a function of trigger threshold showed that the performance of the telescope has not noticeably changed since 1995 and that the nominal energy threshold of 1 TeV for primary gammas is still valid. The telescope is described in Wiedner () and Petry (\[1997b\]).
### 2.3 Performance of the telescopes
Table 1 summarizes some essential parameters of the telescopes. Most of the parameters were determined experimentally while some were calculated from Monte Carlo (MC) simulations. For the MC simulations we used the computational code developed by Konopelko (). This program includes the losses of Cherenkov light due to atmospheric effects, i.e. Rayleigh and Mie scattering, as well as the telescope parameters such as spectral mirror reflectivity, PM quantum efficiency etc. The simulations took into account the imperfections of the telescope optics and the differences in the CT1 PM noise for the different night sky background (NSB) conditions, e.g. due to the presence of moonlight. The relation between photoelectrons and measured quantities, i.e., the ADC conversion factor, has been determined by a separate experiment in 1995/96 for the HV setting HV1, i.e. before the dynode aging. For the other HV settings of CT1 the related change in conversion factors has been calculated from the HV-gain characteristics of the PMs, which were found to be in excellent agreement with the change of trigger rate (after subtracting noise triggers).
The effective collection area depends on the HV setting, zenith angle and used $`\gamma `$/hadron separation cuts. Fig.1a-c shows the collection area of CT1 for the four HV settings and three different zenith angles and Fig.1d the areas for CT2, respectively. The image cut procedures are different for CT1 and CT2. For the CT1 data the so-called dynamical supercuts, depending on the zenith angle, the image size and the distance parameters were used, see Petry & Kranich () and Kranich () for details. If the HV setting was the same for moon and non-moon observations, then the difference in NSB only changes the effective collection area at the $`<5`$ % level. This change was taken into account in flux (resp. spectrum) calculations but is too small to be visible in Fig. 1a-c. Standard supercuts were used for CT2 as in Petry et al. () due to the coarse pixel structure of the camera.
The energy reconstruction as well as the energy resolution of both telescopes depend mainly on the image parameter SIZE. The SIZE is, to first order, a good approximation of the initial $`\gamma `$ energy. In second order one has to apply corrections due to the zenith angle and the impact parameter. Also intrinsic fluctuations in the height of the shower maximum, xmax, can affect the energy reconstruction. With a single telescope one cannot determine the impact parameter directly. Nevertheless, the image parameter DIST provides a sufficiently precise measure of the impact parameter, while up to now no equivalent observable for xmax is known. From MC simulations, as well as from accelerator experiments, it is known that electromagnetic showers have a much smaller fluctuation of the depth of the shower maximum compared to hadronic showers. From MC data we developed a correction function which allowed us to calculate the initial energy, as well as to predict the energy resolution, from the image parameters SIZE, DIST, WIDTH and the zenith angle. We used the Levenberg-Marquardt method (Marquardt ) on MC data to determine the parameters of a Taylor series expansion of the photon energy in the variables SIZE, WIDTH, zenith angle and Exp(DIST<sup>2</sup>). Note that the latter term empirically takes into account both shower image leakage outside the FOV and the drop in light intensity for impact parameters larger than $`100`$ m. For the energy reconstruction studies we used a slightly harder distance cut that hardly affects the collection area below 5 TeV but reduces the collection area for higher energies by about 25 %. The results obtained by this method on a complementary MC data sample are shown in Figures 2a and 2b.
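A schematic version of this fit is sketched below. The toy “MC sample” and the chosen basis functions are assumptions made for illustration only; they stand in for the real simulated image parameters and the Taylor expansion quoted above (for unbounded problems, scipy's `curve_fit` uses the Levenberg-Marquardt algorithm).

```python
import numpy as np
from scipy.optimize import curve_fit   # Levenberg-Marquardt when no bounds are set

rng = np.random.default_rng(1)

# Toy stand-in for the MC training sample: SIZE, WIDTH, DIST and zenith angle,
# together with the true primary energy E (TeV), would come from simulations.
n = 5000
E = 10 ** rng.uniform(0, 1.3, n)                       # 1-20 TeV
theta = rng.uniform(0, 0.6, n)                         # zenith angle [rad]
size = 100 * E / np.cos(theta) * rng.lognormal(0, 0.15, n)
width = 0.1 + 0.02 * np.log(E) + rng.normal(0, 0.01, n)
dist = rng.uniform(0.3, 1.1, n)                        # [deg]

def energy_model(X, *a):
    """Low-order Taylor ansatz in log SIZE, WIDTH, zenith angle and exp(DIST^2),
    the variables quoted in the text; the coefficients a are free parameters."""
    s, w, th, d = X
    basis = np.column_stack([np.ones_like(s), np.log(s), np.log(s) ** 2,
                             w, th, np.exp(d ** 2)])
    return basis @ np.asarray(a)

popt, _ = curve_fit(energy_model, (size, width, theta, dist), np.log(E),
                    p0=np.zeros(6))
E_rec = np.exp(energy_model((size, width, theta, dist), *popt))
print("relative RMS of (E_rec - E)/E:", np.std((E_rec - E) / E))
```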
Fig.2a shows the distribution of the relative difference between the initial and the reconstructed energy for a power law spectrum (differential spectral index: -2.2) above 3.0 TeV. Fig.2b shows the predicted energy resolution as a function of energy. The worse energy resolution of CT1 compared to that of CT2 has its origin, besides the smaller mirror area, in the smaller CT1 camera field of view. At higher energies a considerable amount of Cherenkov light falls outside the camera and thus information is lost. This loss also affects the angular resolution. It should be noted that the RMS values shown are 10-30% larger than the standard deviations derived from a Gaussian fit. Fig. 2b also shows the relative deviation between the mean reconstructed energy and the initial energy. The deviation is less than 9% of the initial energy for both telescopes, but has no effect on the derived spectra as the unfolding method (see section 3.1) takes these systematics, both the fluctuation and the small offset in the energy reconstruction, properly into account. A detailed description of this energy reconstruction method will be presented in a forthcoming paper.
## 3 Observations and data analysis - CT1
Between March and October 1997 we observed Mkn 501 (ON-source data) with CT1 for 380 hours at zenith angles between 11° and 60°. Background data were recorded for a total of 140 h. In order to maximize ON-source observation time, particularly at small zenith angles, the OFF-source data were not taken in ON/OFF cycles but mostly a few hours before or after the Mkn 501 observations. Thus the equivalent time for a certain zenith angle setting could not always be obtained. To compensate for this deficiency we blended the background at a specific zenith angle range from data taken at larger and smaller values. For details (also for the general cutting procedure) we refer to Petry (\[1997b\]), Kranich () and Petry & Kranich (). It should be noted that the observation time was planned well in advance and that shift operators had no feedback on nightly results, such that a bias toward prolonged observations in case of a large excess was avoided.
The data analysis proceeded in the following order. In a first step of data selection the following criteria were applied:
* The atmospheric transmission must be high. Whenever available, the atmospheric extinction measurements from the nearby Carlsberg Automatic Meridian Circle (CAMC) were used. For good data, we require the extinction coefficient in the Johnson V-band to be smaller than $`0.25`$.
* The trigger rate, based on 20 min runs, must be within $`\pm 10\%`$ of the expected one. This rate is zenith angle dependent.
* Only data up to 38° zenith angle were used for further analysis. Due to a lack of MC events at large zenith angles the data for $`\theta _z>38^{\circ }`$ will be analyzed later and presented elsewhere.
Since the weather was exceptionally good in 1997, only a few nights were lost due to dense cloud coverage, while the remaining nights always had a high atmospheric transmission; see Table 4 for the Johnson V values (whenever available). Only 27 hours of data were rejected due to large deviations from the expected trigger rate. Data from 58 hours of observations were deferred for later analysis because the zenith angle exceeded $`38^{\circ }`$.
Next, so-called filter cuts were applied, rejecting mostly noise induced triggers. After the FILTER cut the surviving data present a nearly pure sample of hadronic and $`\gamma `$ shower images. For these events the usual Hillas image parameters were calculated. Fig.3 (upper data points) shows the ALPHA distribution for the ON-source data as well as for the OFF-source data normalized to the ALPHA range between 20° and 80°. A clear ON-source excess at small ALPHA values is already seen in the raw data.
After the filter cuts, the data are further reduced by applying the above-mentioned dynamical supercuts. These cuts vary with the zenith angle, the image parameter SIZE (a coarse measure of the initial energy) and the image parameter DIST (a coarse measure of the impact parameter). The dynamical cuts significantly enhance the $`\gamma `$/hadron ($`\gamma `$/h) ratio. Hadrons are suppressed by a factor of 50-60 while about 60% of the $`\gamma `$ showers are retained.
Fig.4a shows the ALPHA distribution for the ON/OFF data after the dynamical supercuts (HV2, dark nights only). The data correspond to 153 hours ON-source time. Fig.4b shows the equivalent moonlight data for one HV setting, HV1 (after April 29th). Table 2 summarizes for the different HV settings the observation times and rates for the ON-source data collected with CT1, as well as the excess signals and significances.
### 3.1 Average spectrum
In order to derive the energy spectrum from the CT1 data we have used a technique which implicitly takes into account the effects of the finite energy resolution. This technique is well known in high energy physics by the name of “regularised unfolding” and was developed by Blobel (). In brief, this procedure avoids the oscillating behavior of the solution to unfolding problems by attenuating insignificant components of the measurements.
The software package “RUN” (Blobel ) takes three sets of data: Monte Carlo data, background data and on-source data after cuts. From this it produces - using the regularised unfolding technique - the corrected fluxes in bins of energy with a statistical error estimation. These values are converted into differential flux values by dividing by the energy bin width, or into integral flux values by summing up all contributions above a certain bin number.
Finally, parameters of the spectra are determined by fitting appropriate functions (see below) to the resulting differential or integral spectrum.
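The sketch below illustrates the principle with a simple Tikhonov-type regularisation. It is not Blobel's RUN package — the response matrix, binning and regularisation weight are illustrative assumptions — but it shows how penalising curvature attenuates the insignificant, oscillating components of a naive inversion.

```python
import numpy as np

def unfold_tikhonov(R, d, cov_d, tau):
    """Minimise (Rf-d)' V^-1 (Rf-d) + tau*|Lf|^2, with L a curvature penalty.
    R: response matrix (measured x true bins) from Monte Carlo,
    d: background-subtracted measured counts, cov_d: their covariance."""
    n = R.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)    # second-difference (curvature) matrix
    Vinv = np.linalg.inv(cov_d)
    A = R.T @ Vinv @ R + tau * (L.T @ L)
    return np.linalg.solve(A, R.T @ Vinv @ d)

# toy usage: a power law smeared with ~20% energy resolution
edges = np.geomspace(1.0, 20.0, 9)
centers = np.sqrt(edges[:-1] * edges[1:])
R = np.array([[np.exp(-0.5 * (np.log(cm / ct) / 0.2) ** 2) for ct in centers]
              for cm in centers])
R /= R.sum(axis=0)                          # column-normalised response
f_true = centers ** -2.8
d = R @ f_true                              # 'measured' spectrum
f_unf = unfold_tikhonov(R, d, np.diag(0.01 * d ** 2 + 1e-12), tau=1e-3)
print(f_unf / f_true)                       # close to 1 in all bins
```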
In the examination of the spectrum we used only the data from dark night observations (we exclude the moonshine data here because of their higher thresholds, which would make the plot less clear). For the energy estimation, a separate Monte Carlo simulation was undertaken for each HV setting and an energy reconstruction function derived (see section 2.3). The energy resolution achieved by this procedure is shown in Figure 2b. After this reconstruction, the data were combined and subdivided into two separate zenith angle bins according to the zenith angles of the available Monte Carlo data (0° and 30°). The first bin (0°-21°) corresponded to 73.0 h observation time, the second (21°-38°) to 80.2 h observation time. The lowest energy bin for the combined 0°-21° data set has its threshold at 2.25 TeV, that for 21°-38° at 3.5 TeV. The resulting two spectra were scaled such that the fluxes at the point of the lowest common energy were equal. This was done in order to compensate for the time variability of the Mkn 501 emission. The result of the unfolding can be seen in Figure 5. The comparison with the spectral shape from CT2 and the HEGRA CT system (Part I) is discussed in section 6 (see also Fig. 15).
A power law fit to the 10 data points from CT1 yields a differential spectral index of
$$\alpha =2.8\pm 0.07$$
with a reduced $`\chi ^2`$ of 1.1. In the concurrently taken data of the CT system (Part I) and CT2 a significant curvature of the spectrum was seen. These data include measurements at much lower energies and are inconsistent with an unbroken power law. On the other hand, the CT1 data are also consistent with the curved spectrum derived from the system and CT2 data; see the discussion in section 6.
The unfolding method was tested using data on the Crab Nebula. This data was taken in the years 1995-1997 and amounts to 29 h of observation time. For comparison the results of this are shown in Figure 6. A power law fit gives a differential spectral index of
$$\alpha _{\mathrm{Crab}}=2.69\pm 0.15$$
with a $`\chi ^2`$ of 0.5. This is in good agreement with other measurements of this source (Carter-Lewis et al. , Konopelko et al. , Petry et al. , Tanimori et al. ).
### 3.2 Average flux
For a rapidly varying source, such as Mkn 501 in 1997, an averaged flux is not strictly meaningful because the measurements sample the light curve to only 10-20% and the observed variability in time is often similar to the size of the gaps between the daily measurements. Integration over a long period of more than 200 days should nevertheless give a fairly reliable value of the mean flux. In the following we present the average integral flux above 1.5 TeV. Because of the various HV settings and the threshold variation with zenith angle, the threshold was sometimes above 1.5 TeV and an extrapolation down to 1.5 TeV was necessary. This was performed using a spectral index of 2.8 as determined in section 3.1. The systematic error on the integral flux arising from this extrapolation is small: using a simple power law spectrum with a differential index of 2.5 yields only a 5% difference in flux compared to the above spectral parametrisation.
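The extrapolation itself is a one-line power-law scaling, since $`F(>E)\propto E^{1-\alpha }`$ for $`\mathrm{d}F/\mathrm{d}E\propto E^{-\alpha }`$; a sketch with hypothetical numbers:

```python
def extrapolate_integral_flux(F_above_Eth, E_th, E_ref=1.5, alpha=2.8):
    """Scale an integral flux measured above E_th (TeV) down to E_ref, assuming
    an unbroken power law dF/dE ~ E**-alpha, i.e. F(>E) ~ E**(1 - alpha)."""
    return F_above_Eth * (E_ref / E_th) ** (1.0 - alpha)

# e.g. a hypothetical moonlight run whose threshold was raised by a factor 2.6:
print(extrapolate_integral_flux(4.2e-12, E_th=1.5 * 2.6))  # flux above 1.5 TeV
```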
The signal obtained from CT1 observations has a significance of $`58\sigma `$ (see Table 2 for the different contributions from the four HV settings). Therefore the statistical error of the average flux is completely negligible. Averaging over the four data sets we obtain the following integral flux above 1.5 TeV
$$F(E>1.5\mathrm{TeV})=2.33(\pm 0.04)_{\mathrm{stat}.}\times 10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}$$
between March 11th and October 20th. This value can be compared with the Crab Nebula flux above 1.5 TeV. From observations with CT1 in the 1995/96 and 1996/97 winter periods (the same dataset as used for Figure 6), a Crab flux of
$$F_{\mathrm{Crab}}(E>1.5\mathrm{TeV})=0.82(\pm 0.1)_{\mathrm{stat}.}\times 10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}$$
has been determined (Petry \[1997b\]), thus the average flux of Mkn 501 in 1997 was about 3 times larger than that of the Crab Nebula.
The error on the flux is dominated by systematics and reflects only instrument related errors and not those arising from the sparse time sampling. A major contribution to the error comes from the uncertainty of the photon-to-photoelectron conversion, which we estimated to be about 15%; this, in turn, converts to a systematic flux uncertainty of about 25%. We estimate a total systematic flux error of 30%, common to all flux values.
### 3.3 Test for time variability of the spectral shape
For the study of Mkn 501’s spectral variability above 1.5 TeV we restrict the analysis to the non-moon data taken at HV2 and zenith angles less than 38°, because the thresholds of the individual data sets were below 1.5 TeV. For observations lasting longer than 0.5 hours we calculate daily values of $`F_{1.5-3}`$, the flux between 1.5 and 3 TeV, and $`F_3`$, the flux above 3.0 TeV. The hardness ratio
$$r_h=\frac{F_3}{F_{1.5-3}}$$
which is available for over 100 nights, can then be inspected for variability.
Figure 7 shows the result of this study. Only points with a significance $`>1\sigma `$ were used for the calculation of $`r_h`$, while the points with $`\le 1\sigma `$ were converted to 90 % confidence level upper limits and are only shown in the light curves.
There is no indication of significant spectral variability with time, nor of a correlation between the hardness ratio and the emission state. The averaged hardness ratio of $`0.41\pm 0.02`$ (error purely statistical) is somewhat smaller than the ratio of $`0.51\pm 0.01`$ as expected from the spectrum measured by the CT system and CT2 (section 4), but the difference is in the range of the systematic errors.
In order to estimate the degree of variability still permitted by this measurement, we fit a linear function to the plot of $`r_h`$ versus $`F_{1.5-3}`$ and obtain
$$r_h=(0.006\pm 0.007)F_{1.5-3}[10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}]+(0.39\pm 0.028)$$
with a reduced $`\chi ^2`$ of 0.98. With the range of values of $`F_{1.5-3}`$ of roughly $`(0.5\mathrm{to}\mathrm{\hspace{0.17em}10})\times 10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, this means that $`r_h`$ may vary by up to 15% of its average value within the $`1\sigma `$ error of the fit.
### 3.4 The CT1 lightcurve above 1.5 TeV
The lightcurve from CT1 data which we present in this paper aims for a time coverage as complete as possible while at the same time minimizing systematic errors from varying zenith angle distributions. In order to achieve this compromise we limit the data set to zenith angles below 38°. The complete lightcurve of the integral fluxes above 1.5 TeV is shown in Fig.14 together with the results from the other HEGRA telescopes.
The lightcurve as shown in Figure 14 is calculated for a threshold of 1.5 TeV. The data are taken from the different HV settings using the above mentioned extrapolation procedure. Only statistical errors are shown.
The small possible variation of $`\mathrm{r}_\mathrm{h}`$ of up to 15%, see the previous section, could only influence those points where one has to extrapolate over a sizable energy range, i.e. the few data points taken at HV4. A listing of the CT1 and CT2 observation times and fluxes is given in Table 4 together with the Johnson V extinction coefficients (whenever available). Note that the MC simulation takes a mean loss of light of 16% into account.
### 3.5 Correlation with RXTE observations of Mkn 501
Since the beginning of 1996, the RXTE all sky monitor (ASM) has been observing Mkn 501 in the 2-12 keV band. From these data (subsequently called keV data), which are publicly available as so-called “quick-look results”, the hardness ratio
$$\frac{\mathrm{Rate}(5-12.1\mathrm{keV})}{\mathrm{Rate}(1.3-3\mathrm{keV})}$$
has been determined.
In Figure 8, we present the RXTE keV rate together with the data from HEGRA CT1 which observed Mkn 501 both in 1996 and 1997. The simultaneous change in flux in both energy ranges is clearly visible.
In order to further examine the correlation, we plot the daily RXTE averages versus the flux values from the complete CT1 lightcurve in 1997. This is shown in Fig. 9. We obtain a correlation coefficient (see Part I for details) of:
$$r=0.611\pm 0.057.$$
with a significance of $`8.56`$ (based on the assumption of 125 independent data pairs).
In order to verify whether this correlation is real or only an artifact of some binning effects (e.g. fewer observations during moonshine nights, etc.) or unequal data statistics, we shift the CT1 and the RXTE light curves with respect to each other in steps of 1 day by up to $`\pm 100`$ days. For each shift, we recalculate the correlation coefficient. The result is plotted in Figure 10. The fact that a clear peak is visible only at the un-shifted value underlines the significant correlation between the TeV and keV datasets. Even if we assume that the daily TeV data are highly correlated, i.e., only $`1/5`$ of the data is independent, we still obtain a significance of $`3.7`$.
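A minimal sketch of this shift test (the light curves are day-binned and matched by MJD; the array names are hypothetical):

```python
import numpy as np

def shift_correlation(mjd_tev, f_tev, mjd_kev, f_kev, max_shift=100):
    """Pearson correlation of the TeV and keV light curves as a function of a
    relative shift in whole days, as in Fig. 10."""
    kev = dict(zip(np.asarray(mjd_kev, int), f_kev))
    result = []
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ft, kev[d + s]) for d, ft in zip(np.asarray(mjd_tev, int), f_tev)
                 if d + s in kev]
        if len(pairs) > 2:
            a, b = np.array(pairs).T
            result.append((s, np.corrcoef(a, b)[0, 1]))
    return result   # a single peak at s = 0 indicates a genuine correlation
```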
Due to the nearly continuous CT1 observation only a modest modulation due to the lunar period is visible.
The ratio
$$R_{\mathrm{TeV}/\mathrm{keV}}=\frac{F_{1.5}}{\mathrm{RXTE}\mathrm{\;count\;rate}}$$
(here in units of \[$`10^{-11}`$ cm<sup>-2</sup>s<sup>-1</sup>/Hz\]) is quite different in 1996 and 1997:
$$R_{\mathrm{TeV}/\mathrm{keV}}(\mathrm{MJD}\mathrm{\;}50160-50310)=0.5\pm 0.04$$
and
$$R_{\mathrm{TeV}/\mathrm{keV}}(\mathrm{MJD}\mathrm{\;}50520-50720)=1.75\pm 0.04.$$
While the keV flux rises by about a factor of 3 from the 1996 to the 1997 period, the TeV $`\gamma `$ flux increases by a factor of about 11, i.e., roughly as the square of the keV flux increase.
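The quoted factors translate directly into a log-log slope close to 2:

```python
import numpy as np
# keV rate rises by ~3, TeV flux by ~11 between the 1996 and 1997 periods:
print(np.log(11.0) / np.log(3.0))   # ~2.2, i.e. close to a quadratic relation
```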
## 4 Observations and data analysis - CT2
CT2 observed Mkn 501 between 16 March and 28 August 1997. After thorough checks of the data quality, 85 hours (79 h in normal and 6 h in reverse tracking mode; normal and reverse mode refer to the azimuth range in which the telescope is operated when observing a source, the reverse mode corresponding to a $`180^{\circ }`$ rotation of the telescope in azimuth) of good data remained.
For the background determination the same procedure was followed as for CT1. The OFF-source data set consisted of 90 hours of data which had passed the same quality cuts as the ON-data and spanned all zenith angles up to 51°.
For the gamma/hadron separation, we employed the set of image parameter cuts already used in Petry (). The efficiencies of these cuts and the corresponding Monte Carlo studies are described in Bradbury et al. () and Petry (\[1997b\]). The effective collection areas of CT2 after gamma/hadron separation cuts for three different zenith angles are shown in Figure 1d. The characteristics of CT2 have not changed over a long time. This was checked by comparing data from Mkn 421 observations taken in 1995 with the 1997 Mkn 501 data set. Neither the background rates nor the background image parameter distributions of CT2 have changed significantly.
Table 3 summarizes the observation times and trigger rates before and after the FILTER cut for CT2 for 3 ranges of the zenith angle. Also given are the excess and background rates and the signal significances after the “image” cuts. Due to the coarser camera pixel size, an ALPHA cut at 15° is applied.
Fig.11 shows the CT2 ALPHA distributions after all cuts, for the zenith angle ranges as listed in Table 3. In all distributions a clear excess at small ALPHA is seen.
### 4.1 Average spectrum and flux
For the study of the spectrum of the CT2 signal we applied the same regularised unfolding method as for CT1 (see section 3.1). We subdivided the dataset into three separate zenith angle bins according to the zenith angles of the available Monte Carlo data (0°, 30° and 45°) (see Table 3 for statistics).
For each of these datasets the regularised unfolding was applied separately. The resulting three spectra were scaled such that the fluxes at the point of the lowest common energy were equal. This was done in order to compensate for the flux variation with time. The results are shown in Figure 12. Within the errors, the spectra from the different zenith angle observations are perfectly compatible.
When fitting the CT2 data by a pure power law, we obtain a differential spectral index of
$$\alpha =2.7\pm 0.2$$
with a reduced $`\chi ^2`$ of 5.0. In order to improve the fit, we introduce an exponential cutoff, i.e. we fit
$$\mathrm{d}F/\mathrm{d}E\propto E^{-\alpha }e^{-E/E_0}.$$
This fit gives
$$\alpha =1.27\pm 0.37,E_0=(2.85\pm 0.58)\mathrm{TeV}$$
with a reduced $`\chi ^2`$ of 1.7. This is shown in Figure 12 as a solid line.
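A sketch of such a fit on hypothetical unfolded flux points is given below; the strong $`\alpha `$-$`E_0`$ correlation discussed in section 6 shows up directly in the returned covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def cutoff_power_law(E, A, alpha, E0):
    """dF/dE = A * E**-alpha * exp(-E/E0); E in TeV, A in arbitrary flux units."""
    return A * E ** (-alpha) * np.exp(-E / E0)

# hypothetical unfolded differential-flux points with 15% errors
E = np.array([1.2, 1.8, 2.7, 4.0, 6.0, 9.0])
F = cutoff_power_law(E, 10.0, 1.3, 2.9) \
    * np.random.default_rng(2).lognormal(0.0, 0.1, E.size)
dF = 0.15 * F

popt, pcov = curve_fit(cutoff_power_law, E, F, sigma=dF, absolute_sigma=True,
                       p0=[10.0, 2.0, 3.0])
corr = pcov[1, 2] / np.sqrt(pcov[1, 1] * pcov[2, 2])
print(f"alpha = {popt[1]:.2f}, E0 = {popt[2]:.2f} TeV, corr(alpha,E0) = {corr:.2f}")
```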
Using the CT2 data and the above spectrum (when an extrapolation was necessary), we calculate an average flux above 1 TeV
$$F(E>1.0\mathrm{TeV})=5.26(\pm 0.13)_{\mathrm{stat}.}\times 10^{-11}\mathrm{cm}^{-2}\mathrm{sec}^{-1}$$
for the 85 hours of observation time.
The corresponding Crab flux value measured with CT2 is (Petry et al. ):
$`F^{\mathrm{Crab}}(E>1.0\mathrm{TeV})`$ $`=`$ $`1.57(\pm 0.24)_{\mathrm{stat}.}(+0.99/-0.39)_{\mathrm{syst}.}`$
$`\times 10^{-11}\mathrm{cm}^{-2}\mathrm{sec}^{-1}.`$
Here we use the Crab flux as determined by CT2 and not by CT1 because the ratio $`F^{\mathrm{Mkn501}}/F^{\mathrm{Crab}}`$, when measured with the same telescope, should be free of some systematic errors, such as the photon to ADC signal conversion error. The resulting flux ratio $`F^{\mathrm{Mkn501}}/F^{\mathrm{Crab}}`$ is $`3.3\pm 0.5`$, in good agreement with the ratio obtained from the CT1 data (sec. 3.2).
### 4.2 Test for time variability of the spectral shape
As for CT1 (section 3.3), we examine the possible spectral variability of Mkn 501 in the independent CT2 data set. Again, we construct from Monte Carlo data a function which estimates the energy of the primary photon from the zenith angle and the image parameters SIZE, DIST and WIDTH (see also Fig. 2b).
For the daily measurements lasting longer than 0.5 hours, we determine the flux $`F_{1-3}`$ between 1.0 and 3.0 TeV and the flux $`F_3`$ above 3.0 TeV and define a hardness ratio $`r_h`$ as
$$r_h=\frac{F_3}{F_{1-3}}.$$
Note that this is different from the hardness ratio defined for CT1, since the threshold of CT2 is lower. Only data up to a zenith angle of 30° were used.
In Figure 13 we plot $`F_{1-3}`$, $`F_3`$ and $`r_h`$ versus time and, in addition, $`r_h`$ versus $`F_{1-3}`$ to test for a dependence on the emission state of the source. A fit of a constant function to the latter plot results in
$$r_h=0.18\pm 0.012$$
(errors purely statistical), which is in agreement with the value $`0.24\pm 0.02`$ expected from the measured spectrum if we take into account the large systematic errors of the energy calibration of about 20 %. Given the good reduced $`\chi ^2`$ of 0.92, there is no indication of a correlation between the hardness ratio and the emission state.
As for the corresponding CT1 data, we give an estimate of the degree of variability still permitted by fitting a linear function to the plot of $`r_h`$ versus $`F_{1-3}`$. We obtain
$`r_h`$ $`=`$ $`(-0.0038\pm 0.0032)F_{1-3}[10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}]`$
$`+0.215\pm 0.032.`$
(Note again that $`r_h`$ is defined differently for CT1 and CT2). With the range of values of $`F_{1-3}`$ of roughly $`(1-14)\times 10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, this means that $`r_h`$ may vary by up to 25 % of its average value within the $`1\sigma `$ error bars of the fit. However, the fact that the slope in the corresponding result of CT1 has the opposite sign is an indication that indeed no spectral variability is present.
### 4.3 The CT2 lightcurve above 1.5 TeV
In order to examine the time variability of the emission of Mkn 501 and to compare the data with those of the other telescopes, we construct the lightcurve above 1.5 TeV. We again exclude the data with zenith angles larger than 38° in order to avoid possible systematic errors due to low MC statistics at large zenith angles.
Each point is calculated according to the method of the adjustment of the zenith angle distribution (see section 3). The errors are purely statistical. The results are shown in Fig.14 as open circles.
CT2 observations partly overlapped in time with CT1 or the CT system and were partly carried out alone. Therefore the CT2 measurements provide cross checks of the CT1 and CT system measurements and add some new data points to the lightcurve. The daily fluxes and observation times are listed in Table 5 , again for zenith angles below 38° and $`E>1.5`$ TeV.
## 5 The combined CT1, CT2 and CT system lightcurve above 1.5 TeV
In Fig.14 the lightcurve from all HEGRA CTs is shown for an energy threshold of 1.5 TeV. The observations with CT1 under the presence of moonlight are indicated separately. The errors are purely statistical.
In general we see good agreement between the data from the three instruments. Restricting the comparison to directly overlapping days we obtain the following ratios between the fluxes
$$\frac{F(\mathrm{CT1})}{F(\mathrm{CT\;system})}=0.73\pm 0.02,\frac{F(\mathrm{CT2})}{F(\mathrm{CT1})}=1.03\pm 0.04$$
and
$$\frac{F(\mathrm{CT2})}{F(\mathrm{CT\;system})}=0.89\pm 0.03.$$
The overlap times were 110 h, 60 h and 65 h, respectively. The seemingly small inconsistency in the ratios has its origin in the different overlap times and different zenith angles. The ratios agree well within the systematic errors, which are of the order of 30%. The systematic error is in part global and in part quite different for each instrument. There are 5 data points (MJD 50579, 50580, 50582, 50607, 50658) where simultaneous observations with different telescopes differ by more than 4 $`\sigma `$ after normalizing the fluxes to the respective mean fluxes. These large differences remain unexplained for the time being. Part of the difference might be due to source variability and different observation times in the compared nights, but some of the discrepancies remain even for exactly matching ON time slices.
Here we would like to comment on various features of the lightcurve.
1. The largest flare was observed at MJD 50626-27 with a flux above 1.5 TeV of $`10^{-10}`$ cm<sup>-2</sup> sec<sup>-1</sup>.
2. Other experiments, Whipple and CAT, observed a large and short flare on April 16th (MJD 50554). Due to complete cloud coverage, HEGRA could not observe this flare.
3. The deferred analysis of the data taken at large zenith angles will add about 15% more data points to the light curve as well as reduce some of the errors of the data points shown.
4. The data point at MJD 50526 is less reliable because observations were carried out during strong and gusty winds.
5. The visibility at MJD 50697 was at the allowed limit, therefore this flux value may have to be corrected.
6. A detailed time structure analysis including also the data from large zenith angle observations (including observations under moonlight) is in preparation and will be published elsewhere.
7. No CT1/2 entries are shown after MJD 50721 because of modest statistics below 38° zenith angle.
## 6 Discussion and summary
The long period of intense flaring of Mkn 501 provided the unusual opportunity
* to obtain a large and relatively clean sample of VHE $`\gamma `$-rays,
* to carry out a detailed analysis of the spectral distribution and the lightcurve over a duration of nearly seven months,
* to carry out multi-wavelength observations,
* to compare the highly variable $`\gamma `$ emission with that observed in previous years and also with that of other AGNs,
* to test the detector performance by comparing data taken at the same time with nearby telescopes and in other experiments.
Most of the conclusions have already been presented in Part I. Here we concentrate on a comparison of the data taken with the different HEGRA CTs and on results related mainly to the increased density of nightly sampling, and we outline some future analysis prospects which will require additional measurements.
The data of CT1 and 2 presented here overlap for about half the time with the CT system observation periods, while the other half fills many gaps, dominantly during moonlit nights; in addition, CT1 was normally observing 30%-50% longer during dark nights. Nevertheless, even during identical times, and given the fact that CT1 is basically centered in the system, not only identical events have been recorded. This is due to the collection area of the CT system being about 2 1/2 times larger than that of CT1. Also, due to the larger cameras and better precision of the impact parameter calculation, one could record with the CT system showers where one “sees” only shower halo particles, i.e., showers with an impact distance between 130 - 200 m. Due to the different readout concepts, 8 bit FADC readout of the CT system and 10 respectively 11 bit charge sensitive gated ADCs of the stand-alone telescopes, the saturation effects for multi-TeV showers are different. It should also be mentioned that we used different MC programs for the stand-alone CT1/CT2 and the system. In addition, we used quite different procedures for the $`\gamma `$ selection. In spite of these differences we see in general excellent agreement in the structure of the lightcurve of the data recorded with the different instruments. In general, we see a better agreement between the CT1 and CT2 data, although their direct event overlap is smaller than that between the CT system and CT1 observations. The flux values from CT1 are systematically lower than the CT system data by about 27%. For the time being we are unable to decide whether this is related to the very different analysis methods or due to the systematic errors in the photon to ADC signal conversion ratio.
We observe good agreement of the spectral shape from the observations with the different telescopes. Fig. 15 shows a comparative plot where we combined the two resp. three zenith angle ranges of CT1 and CT2. Obviously, an unbroken power law will not describe the data well, except for the limited energy range of the CT1 data (see section 3.1). An ansatz with an exponential cutoff
$$\mathrm{d}F/\mathrm{d}E\propto E^{-\alpha }e^{-E/E_0}$$
yielded:
for CT1: $`\alpha =2.09\pm 0.09,E_0=(7.16\pm 1.04)\mathrm{TeV}`$
with a reduced $`\chi ^2=0.6`$ (fit to the data points of Fig. 5)
for CT2: $`\alpha =1.27\pm 0.37,E_0=(2.85\pm 0.58)\mathrm{TeV}`$
with a reduced $`\chi ^2=1.7`$ (fit to the data points of Fig. 12)
The fit to the preliminary data of the HEGRA IACT system in the energy region from 1.25 TeV to 24 TeV (Krawczynski ) gives:
for CT system: $`\alpha =1.9\pm 0.05,E_0=(5.7\pm 1.1)\mathrm{TeV}`$
It should be noted that $`\alpha `$ and $`E_0`$ are highly correlated, i.e., a modestly more curved spectrum enforces both a lower $`\alpha `$ and a lower $`E_0`$ in the fit. This is particularly obvious for the CT2 data. The difference in the three sets of $`\alpha `$ and $`E_0`$ is explainable by the different ranges of the fit. If we fit the CT2 spectrum using the ’system’ $`\alpha `$ of 1.81 we obtain $`E_0=(4.7\pm 0.26)`$ TeV and a marginally worse reduced $`\chi ^2`$ of 1.71. The observed steepening of the spectrum could be either due to an inherent change in the acceleration and interaction process or due to $`\gamma `$-interaction with the cosmic IR background. A detailed study of the spectra will be presented in a forthcoming paper.
Other tests of the existence of an IR absorption have been proposed by Aharonian () and Plaga (), namely the production of pair halos and the time delay of secondary $`\gamma `$s. The “halo” $`\gamma `$s should show up predominantly at larger ALPHA values and should not show the rapid variation of the main $`\gamma `$ flux at all. Another aspect is that, depending on the onset of the IR absorption, the spectrum of the halo $`\gamma `$s should be much softer resp. have a lower energy cutoff. The effect should be quite visible in the 1-10 TeV region if strong IR absorption occurs above 25-35 TeV. We searched for such effects using data between 10° and 20° in ALPHA, but no conclusive results could be drawn due to insufficient statistics. One of the problems is that the improvement in the source position resolution with higher energy can fake a soft halo spectrum. Also the wide spread of the predictions for the extragalactic magnetic field makes such an analysis difficult.
Due to the large number of CT1 daily flux measurements a precise comparison with the nearly continuous RXTE data is possible. The correlation of 0.61 $`\pm `$ 0.06 between the CT1/2 data and the RXTE data gives rather strong evidence of a coupled effect such as electron acceleration and inverse Compton scattering on synchrotron-radiation-generated photons.
Close inspection of the data shows that around MJD 50580 the structure of the lightcurve in the TeV range differs significantly from that in the 2 - 10 keV range. While significant flaring is observed in the TeV range, the keV lightcurve remains constant within errors. If we exclude the data between MJD 50568 and 50590 the correlation rises to $`0.65\pm 0.07`$ while inside the range it drops to $`0.17\pm 0.19`$.
The assumption of electron acceleration gets further support from the approximately quadratic rise of the TeV $`\gamma `$ flux compared to the keV flux rise from 1996 to 1997, see section 3.5. Clearly, a long term observation of the keV - TeV correlation over a few years should further support or disprove the concept of electron acceleration dominance. Hadronic components and/or a significant change in electron acceleration cannot be ruled out, see our comment on the observation around MJD 50580.
Note that the 1996 Mkn 501 data, originally showing a 5.8 $`\sigma `$ excess (Bradbury et al. ), have been reanalysed using the dynamical supercuts also used in this paper. The excess increased to $`>7\sigma `$ while the flux values and the integral spectrum remained the same ($`F(E>1.5\mathrm{TeV})=2.3\times 10^{-12}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$). Comparing the 1996 and the 1997 spectra, however, we find indications that the curvature of the spectrum has increased, since the increase in flux from 1996 to 1997 is lower at higher energies. A detailed comparison between the spectra in 1996 and 1997 will be presented in a later paper.
Finally, we want to comment on a technical conclusion drawn from the analysis. For precision measurements it is important to use a larger camera diameter than that of CT1. A larger camera would have resulted in a significantly better energy and angular resolution at higher energies.
## Acknowledgements
The HEGRA collaboration thanks the Instituto de Astrofisica de Canarias and the town of Garafia for use of the site and the excellent working conditions. Also we acknowledge the rapid availability of the RXTE data and the atmospheric extinction data from the CAMC. This work was supported by the German Ministry of Education and Research, BMBF, the Deutsche Forschungsgemeinschaft, DFG, and the Spanish Research Foundation CICYT.
## 1 Introduction
The remarkable accord of standard model (SM) predictions with experiment does not remove the question of a more fundamental theory which would contain the SM as its low energy limit. Among the various ways to look for signs of a new theory, precise tests of fundamental discrete symmetries play an important role.
The null results for the electric dipole moments of the neutron, heavy atoms and diatomic molecules in general put very strong constraints on the CP-violating sector of a new theory and probe energy scales inaccessible for direct observations at colliders . Regardless of what the particular construction for the new theory is, its relevant contribution at 1 GeV can be reexpressed in terms of effective operators of different dimensions, suppressed by the corresponding power of a high scale $`M`$ where these operators were generated:
$$_{eff}=\underset{n4}{}\frac{c_{ni}}{M^{n4}}𝒪_i^{(n)},$$
(1)
Here $`𝒪_i^{(n)}`$ are operators of dimension $`n`$ and $`i`$ stands for their different field content, Lorentz structures etc. Fields, relevant for low-energy dynamics, are gluons, three light quark fields, $`u`$, $`d`$ and $`s`$, and the electromagnetic field. The specifics of a given model enters only through the value of the coefficients $`c_{ni}`$
Dim=3, 4 operators can be combined to form $`\theta `$-term. In the absence of axion relaxation mechanism this operator is normally the most important on account of possible tree-level contributions $``$ 1. If PQ mechanism is operative $`\theta `$ 1 is removed but $`\theta `$-parameter still can gain a nonzero value induced at low energy by CP-odd operators of bigger dimension. Dim=5 operators, which are usually suppressed by an additional $`m_q/M`$ ratio, are electric and chromoelectric dipole moments of quarks. Due to this additional mass ratio, these operators are suppressed by a large scale exactly as dim=6 operators built from four quark fields or purely from gluons (Weinberg operator). Most of the operators have been extensively studied in the literature and limited from experiment using PCAC and QCD sum rules techniques. For the operators with strange quarks, however, only the analysis of the chromoelectric dipole moment is available . Recently some of the four-fermion operators with $`s`$-quark induced at a high scale in $`SU(3)\times SU(2)\times U(1)`$-symmetric form were limited using the fact these operators can be mixed with electric dipole moment operators for $`u`$ and $`d`$ fields at one-loop level .
In this letter we combine PCAC approach and the experimental bounds on neutron, mercury and thallium EDMs to put the limits on four-fermionic operators containing strange quark field. This completes the study of the relevant operators dim=6 and can be used for any model where these operators are generated. Another issue addressed here is the shift of axion vacuum by effective CP-odd four-fermion operators. This shift, $`\theta _{eff}`$ is estimated within the same approach .
## 2 CP-odd operators containing strange quark
In what follows, we adopt the classification of operators proposed in Refs. . Among the flavour-conserving CP-odd operators dim=6
$`\kappa _1{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}s)(\overline{q}i\gamma _5q);\kappa _2{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}i\gamma _5s)(\overline{q}q);\kappa _3{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}i\gamma _5s)(\overline{s}s);`$
$`\kappa _4{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}t^as)(\overline{q}i\gamma _5t^aq);\kappa _5{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}i\gamma _5t^as)(\overline{q}t^aq);\kappa _6{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}i\gamma _5t^as)(\overline{s}t^as);`$
$`\kappa _7{\displaystyle \frac{G}{\sqrt{2}}}{\displaystyle \frac{1}{2}}ϵ_{\mu \nu \alpha \beta }(\overline{s}\sigma _{\mu \nu }s)(\overline{q}\sigma _{\alpha \beta }q);\kappa _8{\displaystyle \frac{G}{\sqrt{2}}}{\displaystyle \frac{1}{2}}ϵ_{\mu \nu \alpha \beta }(\overline{s}\sigma _{\mu \nu }t^as)(\overline{q}\sigma _{\alpha \beta }t^aq);`$ (2)
$`\kappa _9{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}s)(\overline{e}i\gamma _5e);\kappa _{10}{\displaystyle \frac{G}{\sqrt{2}}}(\overline{s}i\gamma _5s)(\overline{e}e);\kappa _{11}{\displaystyle \frac{G}{\sqrt{2}}}{\displaystyle \frac{1}{2}}ϵ_{\mu \nu \alpha \beta }(\overline{s}\sigma _{\mu \nu }s)(\overline{e}\sigma _{\alpha \beta }e).`$
we take those containing the $`s`$-quark field. We number these operators $`𝒪_i`$ according to the constant $`\kappa _i`$ standing in front of them. $`G`$ in these formulae is the Fermi constant.
As an experimental input we use the following limits, obtained for the neutron EDM ,
$$d_N<10^{-25}\mathrm{\;}e\mathrm{\;cm},$$
(3)
and mercury EDM experiments:
$$d_{Hg}<9\cdot 10^{-25}\mathrm{\;}e\mathrm{\;cm}.$$
(4)
The latter translates into a limit on the Schiff moment of the <sup>199</sup>Hg nucleus and eventually leads to the following bound on the effective CP-violating $`\pi ^0`$pp coupling :
$$\overline{g}_{\pi pp}<2\cdot 10^{-11}.$$
(5)
To evaluate the contribution of the operators (2) to the effective coupling $`\overline{g}_{\pi pp}`$ we use the same method as proposed earlier in Refs. . The operator $`𝒪_1`$ is the simplest in this respect. Using the PCAC reduction of the soft-pion field and calculating the subsequent commutators, we reduce the contribution of the $`(\overline{s}s)(\overline{q}i\gamma _5q)`$ operator $`𝒪_1`$ to the matrix element of the $`\overline{s}s\overline{q}q`$ operator over the proton:
$$\langle p\pi ^0|\overline{s}s\overline{d}i\gamma _5d|p\rangle =\frac{1}{f_\pi }\langle p|\overline{s}s\overline{d}d|p\rangle $$
(6)
This matrix element can be estimated using vacuum insertion approximation:
$$\langle p|\overline{s}s\overline{d}d|p\rangle \simeq \langle 0|\overline{q}q|0\rangle \langle p|\overline{s}s+\overline{q}q|p\rangle \simeq 5(1+\beta )\langle 0|\overline{q}q|0\rangle \overline{p}p,$$
(7)
where we take $`\langle p|\overline{d}d|p\rangle \simeq \langle p|\overline{u}u|p\rangle =\langle p|\overline{q}q|p\rangle \simeq 5\overline{p}p`$. The analysis of the baryon mass splittings and the experimental data on pion-nucleon scattering suggest that the coefficient $`\beta `$, $`\beta =\langle p|\overline{s}s|p\rangle /\langle p|\overline{q}q|p\rangle `$, is numerically close to 0.6 . Thus, the CP-violating coupling constant $`\overline{g}_{\pi pp}`$ is:
$$\overline{g}_{\pi pp}=5(1+\beta )\frac{\langle 0|\overline{q}q|0\rangle }{f_\pi }G\kappa _1\simeq 8\cdot 10^{-6}\kappa _1.$$
(8)
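As a rough numerical cross-check of Eq. (8) and of the $`𝒪_1`$ entry in Table 1, one can insert standard hadronic parameters. The sketch below is our own illustration; the inputs $`\langle 0|\overline{q}q|0\rangle \approx -(240\,\mathrm{MeV})^3`$, $`f_\pi =130`$ MeV and $`G=1.166\cdot 10^{-5}\,\mathrm{GeV}^{-2}`$ are assumed reference values, not numbers quoted in the text:

```python
# Rough check of Eq. (8); signs are dropped, only magnitudes matter for the bound.
import math

G_F   = 1.166e-5      # Fermi constant, GeV^-2 (assumed standard value)
f_pi  = 0.130         # pion decay constant, GeV (F_pi = 130 MeV convention)
qq    = 0.240**3      # |<0|qbar q|0>|, GeV^3 (assumed standard value)
beta  = 0.6           # <p|sbar s|p> / <p|qbar q|p>, as quoted above

# Eq. (8), including the G/sqrt(2) normalization of the operator O_1:
g_pipp_per_kappa1 = 5 * (1 + beta) * qq / f_pi * G_F / math.sqrt(2)
print(f"g_pipp / kappa_1 ~ {g_pipp_per_kappa1:.1e}")     # ~ 7e-6, i.e. ~ 8e-6

# Combined with the Hg-derived bound g_pipp < 2e-11 this gives the kappa_1
# limit, consistent with the 3e-6 entry of Table 1:
print(f"kappa_1 bound ~ {2e-11 / g_pipp_per_kappa1:.0e}")
```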
The rest of the four-quark operators give suppressed contributions to the effective T-violating nucleon-nucleon interaction. They either contribute only in the $`\eta `$-exchange channel or do not work in the vacuum factorization approach. To get the limits on these operators we use the neutron EDM bound. The neutron EDM can be induced as a result of the chiral loop, Fig. 1, where the CP-violation resides in one of the meson-nucleon vertices. In the limit of exact chiral symmetry this loop is logarithmically divergent in the infrared, which justifies its appearance in the chiral theory. For our purposes we choose the $`\mathrm{\Sigma }^{-}K^+`$ loop, to which the operators containing the $`s`$-quark will most likely contribute. In reality, chiral symmetry is broken and the mass of the kaon is rather large, so that the estimated limit on the $`n\mathrm{\Sigma }^{-}K^+`$ coupling has a rather large uncertainty:
$$g_{n\mathrm{\Sigma }^{-}K^+}<2\cdot 10^{-11}$$
(9)
Most of the quark operators from the set (2) induce this coupling; to calculate their effect on it we use the same method, PCAC and vacuum factorization. Thus, for example, the $`\overline{s}i\gamma _5s\overline{d}d`$ operator contributes to the CP-odd vertex of interest in the following way:
$`\langle \mathrm{\Sigma }^{-}K^+|\overline{s}i\gamma _5s\,\overline{d}d|n\rangle ={\displaystyle \frac{i}{f_K}}\langle \mathrm{\Sigma }^{-}|\overline{d}d\,\overline{s}u|n\rangle \simeq {\displaystyle \frac{i}{f_K}}\langle 0|\overline{q}q|0\rangle \langle \mathrm{\Sigma }^{-}|\overline{s}u|n\rangle `$
$`\simeq {\displaystyle \frac{i\langle 0|\overline{q}q|0\rangle }{f_K}}{\displaystyle \frac{m_\mathrm{\Sigma }-m_N}{m_s}}\overline{\mathrm{\Sigma }}^{-}n\simeq {\displaystyle \frac{i\langle 0|\overline{q}q|0\rangle }{f_K}}\,1.3\,\overline{\mathrm{\Sigma }}^{-}n`$ (10)
where the $`SU(3)`$-octet type of splitting in the baryon mass spectrum was used in the second line of Eq. (10).
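Numerically, the factor 1.3 follows from the octet mass splitting with standard mass values (our own check; the value of $`m_s`$ at the hadronic scale, taken here as 190 MeV, is the main source of uncertainty):

$$\frac{m_\mathrm{\Sigma }-m_N}{m_s}\approx \frac{1193\ \mathrm{MeV}-940\ \mathrm{MeV}}{190\ \mathrm{MeV}}\approx 1.3.$$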
The limits on the semileptonic operators $`𝒪_9`$, $`𝒪_{10}`$ and $`𝒪_{11}`$ can be obtained from the limits on the T-odd nucleon-electron interaction by simply taking the matrix elements of their strange-quark part over the nucleon. For $`𝒪_9`$ and $`𝒪_{10}`$ these matrix elements are easily obtained within the same PCAC approach. In the case of $`𝒪_{11}`$, however, the tensor charge of the strange quark in the nucleon, $`\langle N|\overline{s}\sigma _{\mu \nu }s|N\rangle `$, is not known. It is not reducible to the $`s`$-quark spin content of the nucleon, as was asserted in Ref. , because the latter is given by the completely independent quantity $`\langle N|\overline{s}\gamma _\mu \gamma _5s|N\rangle `$. Moreover, unlike the axial-vector operator, the tensor operator is odd under charge conjugation, and we expect the effects of strange and anti-strange quarks to cancel each other in the first approximation. Model calculations and lattice simulations of the tensor charges indeed give a very suppressed value for the strange quark contribution . The same applies, of course, to the strange quark EDM operator, as discussed in .
The resulting limits on the coefficients are summarized in Table 1. One can easily see that the best sensitivity is to the $`𝒪_1`$ and $`𝒪_9`$ operators, where the $`s`$-quark enters only as $`\overline{s}s`$ and does not take part in the spin dynamics.
## 3 Effective theta-term induced by CP-odd four-fermion operators
In all known models with a significant amount of CP-violation in the flavor-conserving channel, the operators of dim$`>`$4 are usually accompanied by a large contribution to the theta term. In other words, $`\theta _{loop}`$ is usually far more sensitive to the new CP-violating physics, because it corresponds to an operator of dim=4 and therefore need not suffer a scale suppression of order $`(\mathrm{\Lambda }_{QCD}/M)^2`$. Thus the CP-violating operators of dim$`>`$4 generated at a scale $`M\sim M_W`$ and higher are important only when the $`\theta `$-term is removed by an axion mechanism. We assume here the existence of the PQ mechanism. In the absence of CP violation that cannot be removed by a PQ transformation, the PQ symmetry sets the theta parameter to zero . The situation is different in the presence of extra CP-violating sources, communicated by the operators of dim$`\geq 5`$. These operators $`𝒪_i`$ shift the axion vacuum and generate an additional indirect contribution to all CP-odd observables through the effective $`\theta `$-term, given by the ratio of two correlators:
$`\theta _{eff}`$ $`=`$ $`{\displaystyle \frac{K_i}{|K|}},\text{where}K=i\left\{{\displaystyle \int dxe^{ikx}\langle 0|T(\frac{\alpha _s}{8\pi }G\stackrel{~}{G}(x),\frac{\alpha _s}{8\pi }G\stackrel{~}{G}(0))|0\rangle }\right\}_{k=0}`$ (11)
$`K_i`$ $`=`$ $`i\left\{{\displaystyle \int dxe^{ikx}\langle 0|T(\frac{\alpha _s}{8\pi }G\stackrel{~}{G}(x),𝒪_i(0))|0\rangle }\right\}_{k=0}.`$
Here $`G_{\mu \nu }^a\stackrel{~}{G}_{\mu \nu }^a`$ is abbreviated as $`G\stackrel{~}{G}`$. The calculation of $`K`$ is based on the use of the anomaly equation and the saturation of subsequent correlators by light hadronic states :
$$K=\frac{m_\pi ^2f_\pi ^2m_um_d}{2(m_u+m_d)^2}.$$
(12)
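Numerically, Eq. (12) evaluates to roughly (our own check, with illustrative current quark masses $`m_u\approx 5`$ MeV and $`m_d\approx 9`$ MeV at a hadronic scale):

$$K\approx \frac{(0.138\ \mathrm{GeV})^2(0.130\ \mathrm{GeV})^2}{2}\,\frac{5\cdot 9}{(5+9)^2}\approx 3.7\cdot 10^{-5}\ \mathrm{GeV}^4.$$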
(the $`F_\pi `$ we use throughout the paper is 130 MeV). The same technique can be exploited in the case of $`K_1`$. For the case of chromoelectric dipole moments the explicit derivation of $`K_i`$ can be found in Ref. . A similar calculation can be done for most of the four-fermion operators discussed here and in earlier works . Using the anomaly equation in the form
$`\partial _\mu {\displaystyle \frac{m_dm_s\overline{u}\gamma _\mu \gamma _5u+m_um_s\overline{d}\gamma _\mu \gamma _5d+m_um_d\overline{s}\gamma _\mu \gamma _5s}{m_sm_d+m_sm_u+m_dm_u}}=`$
$`{\displaystyle \frac{2m_um_dm_s}{m_sm_d+m_sm_u+m_dm_u}}(\overline{u}i\gamma _5u+\overline{d}i\gamma _5d+\overline{s}i\gamma _5s)+{\displaystyle \frac{\alpha _s}{4\pi }}G\stackrel{~}{G},`$ (13)
we apply the standard technique of current algebra. The correlators of interest, $`K_i`$, can be rewritten as an equal-time commutator, which we can calculate easily for all sets of four-fermion operators, plus a term containing the singlet combination of pseudoscalars built from the quark fields. Thus, for the $`𝒪_1`$ operator we have the following expression:
$`K_1=\kappa _1{\displaystyle \frac{G}{\sqrt{2}}}\langle 0|{\displaystyle \frac{m_dm_s(\overline{u}u)(\overline{s}s)}{m_sm_d+m_sm_u+m_dm_u}}+{\displaystyle \frac{m_um_d(\overline{u}i\gamma _5u)(\overline{s}i\gamma _5s)}{m_sm_d+m_sm_u+m_dm_u}}|0\rangle +`$ (14)
$`{\displaystyle \int d^4x\langle 0|T\{\frac{im_um_dm_s}{m_sm_d+m_sm_u+m_dm_u}(\overline{u}\gamma _5u+\overline{d}\gamma _5d+\overline{s}\gamma _5s)(x),𝒪_1(0)\}|0\rangle }`$
The second line here is suppressed by an extra power of the light quark masses in the numerator. It would give a comparable contribution, though, if there were an intermediate hadronic state with a mass vanishing in the chiral limit $`m_i\to 0`$. At the same time, the flavor structure of this term shows that the lightest intermediate state here is the $`\eta ^{\prime }`$, which is believed to remain heavy even if the quark masses vanish. Thus the contribution from the second term is negligible in the limit $`m_\pi \ll m_{\eta ^{\prime }}`$. The second term in the first line of Eq. (14) is suppressed by the ratio $`m_u/m_s`$, and effectively we get the following formula for the theta term induced by the operator $`𝒪_1`$:
$$\theta _{eff}=\kappa _1\delta _1\frac{G}{\sqrt{2}}m_u^{-1}\langle 0|\overline{q}q|0\rangle ,$$
(15)
where $`\delta _1`$ is the ratio of the four-quark condensate to the square of the $`\overline{q}q`$ condensate:
$$\delta _1=\frac{\langle 0|\overline{u}u\,\overline{s}s|0\rangle }{\langle 0|\overline{q}q|0\rangle ^2}.$$
(16)
In the case of $`𝒪_1`$ we can use vacuum factorization and estimate that $`\delta _1\simeq 1`$. For some of the four-quark operators vacuum factorization does not work, and we expect $`\delta _i`$ to be smaller than 1. The appearance of $`m_u^{-1}`$ in Eq. (15) arises because the operator $`𝒪_1`$ breaks chirality. Any answer for CP-odd observables induced by $`\theta `$ will not contain this singularity.
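Plugging in the same reference values as before (our own estimate; the result is highly sensitive to $`m_u`$ and to the condensate), Eq. (15) reproduces the order of magnitude of the Table 1 entry:

```python
# Order-of-magnitude check of the theta_eff entry for O_1 in Table 1.
# Assumed inputs: G_F = 1.166e-5 GeV^-2, |<0|qbar q|0>| ~ (240 MeV)^3,
# m_u ~ 5 MeV at the hadronic scale, delta_1 ~ 1 (vacuum factorization).
import math

G_F, qq, m_u, delta1 = 1.166e-5, 0.240**3, 0.005, 1.0

theta_per_kappa1 = delta1 * (G_F / math.sqrt(2)) * qq / m_u   # Eq. (15), magnitude
print(f"theta_eff / kappa_1 ~ {theta_per_kappa1:.1e}")        # ~ 2e-5
# Table 1 quotes 5e-5; the factor ~2 spread reflects the strong sensitivity
# to m_u and to the value of the quark condensate.
```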
The value of $`\theta _{eff}`$ induced by $`𝒪_1`$ leads to an additional contribution to the $`\overline{g}_{\pi pp}`$ constant,
$$\overline{g}_{\pi pp}(\theta _{eff})=\frac{m_um_d}{m_u+m_d}\frac{\sqrt{2}\theta _{eff}}{f_\pi }=\delta _1\kappa _1\frac{G\langle 0|\overline{q}q|0\rangle }{f_\pi }\frac{2m_d}{m_u+m_d}=1.3\cdot 10^{-6}\kappa _1,$$
(17)
which should be compared with the direct contribution (8). We see that in the case of $`𝒪_1`$ the indirect contribution related to the theta term gives a 15-20% correction to the CP-odd vertex $`\overline{g}_{\pi pp}`$. In fact, this is the largest value of $`\theta _{eff}`$ generated by the set of operators (2). The suppression is especially strong for the operators $`𝒪_3`$ and $`𝒪_6`$, composed exclusively of strange quarks, which induce $`\theta _{eff}`$ with an additional parametric suppression $`[(m_u^{-1}+m_d^{-1})m_s]^{-1}`$. We have also performed a similar calculation for the operators composed from the $`u`$ and $`d`$ fields, and the result for the $`\theta `$-driven contributions never exceeds 20% of the direct contribution. The estimates for $`\theta _{eff}`$ are included in Table 1.
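Explicitly, the quoted 15-20% is simply the ratio of Eqs. (17) and (8):

$$\frac{\overline{g}_{\pi pp}(\theta _{eff})}{\overline{g}_{\pi pp}^{direct}}\simeq \frac{1.3\cdot 10^{-6}\kappa _1}{8\cdot 10^{-6}\kappa _1}\approx 0.16.$$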
## 4 Conclusions
We have considered the limits on the four-fermion CP-odd operators containing strange quarks that follow from the neutron, thallium and mercury EDM experiments. This completes the study of the CP-odd dim=5,6 operators built from light-quark fields. We observe that the limits are very strong, especially for the operators $`\overline{s}s\overline{q}i\gamma _5q`$ and $`\overline{s}s\overline{e}i\gamma _5e`$, where the strange quark is in some sense a "spectator". The limits summarized in Table 1 are almost as strong as those for the operators composed from $`u`$ and $`d`$ quarks. This is because the strange quark condensate is the same as that of the up and down quarks in the flavor $`SU(3)`$ symmetry approximation, and the $`s`$-quark content of the nucleon in the scalar, pseudoscalar and axial-vector channels is also significant. The limits on the operators $`𝒪_1,`$ $`𝒪_9`$ and $`𝒪_{10}`$ are extracted with a much smaller error than those for the rest of the operators. The other limits are estimated to within an order of magnitude, mainly because of the chiral loop used to obtain the EDM of the neutron. The infrared logarithmic enhancement factor, $`\mathrm{log}m_K`$, is numerically important only in the limit $`m_K\to 0`$.
There is an alternative method of limiting the four-fermion operators, used in Ref. . In that work different linear combinations of the operators (2) were taken to form a different set, invariant under the standard model group. At the one-loop level some of these operators mix with the EDMs of the $`u`$ and $`d`$ quarks, with coefficients proportional to $`m_s\mathrm{log}(\mathrm{\Lambda }/1\text{GeV})`$. The comparison of the limits obtained in Ref. and in the present work shows that they are complementary. For most of the operators the limits obtained here are stronger, although $`𝒪_7`$ and especially $`𝒪_{11}`$ can be better constrained from their one-loop mixing with the quark and electron EDM operators.
The shift of the axion vacuum induced by the four-fermion operators is shown to give contributions to CP-odd observables that are normally smaller than the direct contributions not associated with $`\theta _{eff}`$. For some operators this type of correction can be as large as 15-20% of the direct contribution.
Acknowledgments
M.P. would like to thank P. Herczeg and I. B. Khriplovich for valuable discussions. This work is supported in part by N.S.E.R.C. of Canada and DOE grant DE-FG02-94ER-40823 at the University of Minnesota.
Table 1.
The limits obtained on the $`CP`$-odd four-fermion operators from neutron, thallium and mercury EDM experiments.
- $`𝒪_1`$: $`\kappa _1<3\cdot 10^{-6}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\kappa _1`$
- $`𝒪_2`$: $`\kappa _2<7\cdot 10^{-5}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\delta _2\kappa _2`$
- $`𝒪_3`$: $`\kappa _3<7\cdot 10^{-5}`$; $`|\theta _{eff}|\sim 2\cdot 10^{-6}\kappa _3`$
- $`𝒪_4`$: $`\kappa _4<3\cdot 10^{-4}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\delta _4\kappa _4`$
- $`𝒪_5`$: $`\kappa _5<3\cdot 10^{-4}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\delta _5\kappa _5`$
- $`𝒪_6`$: $`\kappa _6<3\cdot 10^{-4}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-7}\kappa _6`$
- $`𝒪_7`$: $`\kappa _7<3\cdot 10^{-5}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\delta _7\kappa _7`$
- $`𝒪_8`$: $`\kappa _8<2\cdot 10^{-5}`$; $`|\theta _{eff}|\sim 5\cdot 10^{-5}\delta _8\kappa _8`$
- $`𝒪_9`$: $`\kappa _9<2\cdot 10^{-7}`$; $`|\theta _{eff}|`$: -
- $`𝒪_{10}`$: $`\kappa _{10}<10^{-5}`$; $`|\theta _{eff}|`$: -
- $`𝒪_{11}`$: $`\kappa _{11}<10^{-5}`$; $`|\theta _{eff}|`$: -
Figure captions.
Figure 1: Chiral loop diagrams inducing the EDM of the neutron. The Dirac structure of the $`CP`$-violating vertex is proportional to 1. This diagram diverges logarithmically in the limit $`m_K\to 0`$.
# Stokes parameters for light scattering from a Faraday-active sphere
## 1 Introduction
Several reasons exist why one wishes to understand light scattering from a dielectric sphere made of magneto-active material. Single scattering is the building block of multiple scattering. Recently, many experiments, such as those reported by Erbacher *et al.* and Rikken *et al.* , have been done with diffuse light in a magnetic field. It turns out that the theory using point-like scatterers in a magnetic field, as first developed by MacKintosh and John , does not always enable a quantitative analysis, for the evident reason that experiments do not contain "small" scatterers. This paper addresses light scattering from a sphere of any size in a homogeneous magnetic field.
The model of Rayleigh scatterers was used successfully to describe specific properties of multiple light scattering in magnetic fields, such as Coherent Backscattering, the *Photonic Hall Effect (PHE)* and *Photonic Magneto-resistance (PMR)*. The first study of one gyrotropic sphere, due to Ford and Werner , was applied to the scattering of semiconducting spheres by Dixon and Furdyna . For the case of magneto-active particles, for which the change in the dielectric constant induced by the magnetic field is small, a perturbational approach is in fact sufficient. Kuz'min showed that the problem of scattering by a weakly anisotropic particle of any type of anisotropy can be solved to first order in the perturbation. Using a T-matrix formalism, Lacoste *et al.* independently developed a perturbational approach for the specific case of magneto-optical anisotropy. This was successfully applied to compute the diffusion coefficient for magneto-transverse light diffusion . Using the T-matrix we have obtained for a Mie scatterer in a magnetic field, we discuss the consequences for the Stokes parameters that describe the polarization of the scattered light.
## 2 Perturbation theory
In this paper we set $`c_0=1`$. In a magnetic field $`𝐁`$, the refractive index is a tensor of rank two. For the standard Mie problem, its value at position $`𝐫`$ depends on the distance $`|𝐫|`$ to the center of the sphere, which has radius $`a`$, via the Heaviside function $`\mathrm{\Theta }(a-|𝐫|)`$, which equals 1 inside the sphere and 0 outside,
$`\epsilon (𝐁,𝐫)-𝐈=`$ $`\left[(\epsilon _0-1)𝐈+\epsilon _F𝚽\right]\mathrm{\Theta }(a-|𝐫|).`$ (1)
In this expression, $`𝐈`$ is the identity tensor, $`\epsilon _0=m^2`$ is the normal isotropic dielectric constant of the sphere, with relative index of refraction $`m`$ (which is allowed to be complex-valued), and $`\epsilon _F=2mV_0B/\omega `$ is a dimensionless coupling parameter associated with the amplitude of the Faraday effect ($`V_0`$ being the Verdet constant, $`B`$ the amplitude of the magnetic field, and $`\omega `$ the frequency). We have introduced the antisymmetric Hermitian tensor $`\mathrm{\Phi }_{ij}=iϵ_{ijk}\widehat{B}_k`$ (the hat above vectors denotes normalized vectors). The Mie solution depends on the dimensionless size parameters $`x=\omega a`$ and $`y=mx`$. In this paper we restrict ourselves to non-absorbing media, so that $`m`$ and $`\epsilon _F`$ are real-valued. Since $`\epsilon _F\sim 10^{-4}`$ in most experiments, a perturbational approach is valid.
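To get a feeling for the orders of magnitude involved, the sketch below (our own illustration, with water-like values assumed for the Verdet constant and refractive index) evaluates $`\epsilon _F`$ together with the dimensionless parameter $`W=V_0B\lambda `$ introduced in Section 3:

```python
# Illustrative magnitudes for the perturbation parameters; the numbers are
# our assumptions (water-like medium), not values taken from the paper.
import math

V0   = 3.8      # Verdet constant, rad T^-1 m^-1 (assumed, water in visible light)
B    = 1.0      # magnetic field, T
lam0 = 633e-9   # vacuum wavelength, m (assumed)
m    = 1.33     # relative refractive index, real (no absorption)

lam   = lam0 / m                  # wavelength in the medium
omega = 2 * math.pi / lam0        # frequency in units where c0 = 1
eps_F = 2 * m * V0 * B / omega    # Faraday coupling of Eq. (1)
W     = V0 * B * lam              # dimensionless parameter W = V0*B*lambda

print(f"eps_F ~ {eps_F:.1e}")  # ~ 1e-6 here; stronger Faraday media reach ~ 1e-4
print(f"W     ~ {W:.1e}")      # ~ 2e-6, same order as the water value quoted below
```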
Upon noting that the Helmholtz equation is formally analogous to a Schrödinger equation with potential $`𝐕(𝐫,\omega )=\left[𝐈-\epsilon (𝐁,𝐫)\right]\omega ^2`$ and energy $`\omega ^2`$, the T-operator is given by the following Born series,
$$𝐓(𝐁,𝐫,\omega )=𝐕(𝐫,\omega )+𝐕(𝐫,\omega )𝐆_0(\omega ,𝐩)𝐕(𝐫,\omega )+\mathrm{\cdots }$$
(2)
Here $`𝐆_0(\omega ,𝐩)=1/(\omega ^2𝐈-p^2\mathrm{\Delta }_𝐩)`$ is the free Helmholtz Green's operator in Gaussian rationalized units for pure dielectric particles, and $`𝐩=-i\nabla `$ is the momentum operator. The rank-two tensor $`(\mathrm{\Delta }_p)_{ij}=\delta _{ij}-p_ip_j/p^2`$ projects onto the space transverse to the direction of $`𝐩`$. The T-matrix is defined as,
$$𝐓_{𝐤\sigma ,𝐤^{}\sigma ^{}}=<𝐤,\sigma |𝐓|𝐤^{},\sigma ^{}>,$$
(3)
where $`|𝐤,\sigma >`$ (respectively $`|𝐤^{\prime },\sigma ^{\prime }>`$) represents an incident (respectively emergent) plane wave with direction $`𝐤`$ and state of helicity $`\sigma `$ (respectively $`𝐤^{\prime }`$ and $`\sigma ^{\prime }`$). We will call $`𝐓^0`$ the part of $`𝐓`$ that is independent of the magnetic field and $`𝐓^1`$ the part of the T-matrix linear in $`𝐁`$. We have found the following result ,
$$𝐓_{𝐤\sigma ,𝐤^{\prime }\sigma ^{\prime }}^1=\epsilon _F\omega ^2<\mathrm{\Psi }_{𝐤,\sigma }^{-}|\mathrm{\Theta }𝚽|\mathrm{\Psi }_{𝐤^{\prime },\sigma ^{\prime }}^+>,$$
(4)
where the $`\mathrm{\Psi }_{𝐤,\sigma }^\pm (𝐫)`$ are the unperturbed eigenfunctions of the conventional Mie problem. This eigenfunction represents the electric field at the point $`𝐫`$ for an incident plane wave $`|𝐤,\sigma >`$. This eigenfunction is “outgoing” for $`\mathrm{\Psi }_{𝐤,\sigma }^+`$ and “ingoing” for $`\mathrm{\Psi }_{𝐤,\sigma }^{}`$. Eq. (4) resembles the perturbation formula for the Zeeman shift in terms of the atomic eigenfunctions, although here it provides a complex-valued amplitude in terms of *continuum* eigenfunctions, rather than a real-valued energy shift in terms of bound states.
## 3 T matrix for Mie scattering
In order to separate radial and an angular contribution in Eq. (4), we used a well-known expansion of the Mie eigenfunction $`\mathrm{\Psi }_{𝐤,\sigma }^+`$ in the basis of vector spherical harmonics . We choose the quantification axis $`z`$ along the magnetic field. With this choice, the operator $`𝐒_z`$, the $`z`$-component of a spin one operator, can be associated with the tensor $`𝚽`$. The eigenfunctions of the operator $`𝐒_z`$ form a convenient basis for the problem. The expansion of Eq. (4) in vector spherical harmonics leads to a summation over quantum numbers $`J,J^{},M`$ and $`M^{}`$. The Wigner-Eckart theorem applied to the vector operator $`𝐒`$ gives the selection rules for this case $`J=J^{}`$ and $`M=M^{}`$.
The radial integration can be done using a method developed by Bott *et al.* , which gives,
$$𝐓_{𝐤,𝐤^{\prime }}^1=\frac{16\pi }{\omega }W\underset{J,M}{\sum }(M)\left[𝒞_J𝐘_{J,M}^e(\widehat{𝐤})𝐘_{J,M}^e(\widehat{𝐤}^{\prime })+𝒟_J𝐘_{J,M}^m(\widehat{𝐤})𝐘_{J,M}^m(\widehat{𝐤}^{\prime })\right],$$
(5)
with the dimensionless parameter:
$$W=V_0B\lambda ,$$
where $`\lambda `$ is the wavelength in the medium. The meaning of the indices $`e,m`$ is explained in the Appendix. In the limiting case of a perfect dielectric sphere with no absorption ($`\mathrm{Im}(m)\to 0`$), the coefficients are given by,
$$𝒞_J=\frac{c_J^2u_J^2y}{J(J+1)}\left(\frac{A_J}{y}-\frac{J(J+1)}{y^2}+1+A_J^2\right),$$
(6)
$$𝒟_J=\frac{d_J^2u_J^2y}{J(J+1)}\left(\frac{A_J}{y}-\frac{J(J+1)}{y^2}+1+A_J^2\right),$$
(7)
with $`A_J(y)=u_J^{\prime }(y)/u_J(y)`$, $`u_J(y)`$ the Riccati-Bessel function, and $`c_J`$ and $`d_J`$ the Mie amplitude coefficients of the internal field .
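As a numerical illustration, $`𝒞_J`$ and $`𝒟_J`$ can be evaluated with standard special-function routines. The sketch below is our own; it takes the internal-field Mie coefficients $`c_J`$, $`d_J`$ in the Bohren-Huffman convention, which may differ from the convention used here by phase factors, so it should be read as an order-of-magnitude tool rather than a definitive implementation:

```python
# Evaluate C_J and D_J of Eqs. (6,7) for a non-absorbing sphere.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def riccati_psi(n, z):
    """Riccati-Bessel psi_n(z) = z j_n(z) and its derivative."""
    psi = z * spherical_jn(n, z)
    dpsi = spherical_jn(n, z) + z * spherical_jn(n, z, derivative=True)
    return psi, dpsi

def riccati_xi(n, x):
    """xi_n(x) = x h_n^(1)(x) and its derivative, for real argument."""
    h = spherical_jn(n, x) + 1j * spherical_yn(n, x)
    dh = spherical_jn(n, x, derivative=True) + 1j * spherical_yn(n, x, derivative=True)
    return x * h, h + x * dh

def CD_coefficients(m, x, J):
    """C_J, D_J of Eqs. (6,7) for real relative index m and size parameter x."""
    y = m * x
    psi_x, dpsi_x = riccati_psi(J, x)
    psi_y, dpsi_y = riccati_psi(J, y)
    xi_x, dxi_x = riccati_xi(J, x)
    # Internal-field coefficients in the Bohren & Huffman convention (assumed):
    c = (m * psi_x * dxi_x - m * xi_x * dpsi_x) / (psi_y * dxi_x - m * xi_x * dpsi_y)
    d = (m * psi_x * dxi_x - m * xi_x * dpsi_x) / (m * psi_y * dxi_x - xi_x * dpsi_y)
    A = dpsi_y / psi_y                       # A_J(y) = u_J'(y)/u_J(y)
    bracket = A / y - J * (J + 1) / y**2 + 1 + A**2
    pref = psi_y**2 * y / (J * (J + 1))
    return c**2 * pref * bracket, d**2 * pref * bracket

print(CD_coefficients(m=1.33, x=1.0, J=1))
```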
Two important symmetry relations must be obeyed by our T-matrix: the first one is parity symmetry and the second one reciprocity. These relations can be established generally when the Hamiltonian of a given system has the required symmetries (*cf.* Eq. (15.53) and Eq. (15.59a), p. 454 of Ref. ). We give in the Appendix a less general derivation of these relations for our specific problem:
$$T_{-𝐤\sigma ,-𝐤^{\prime }\sigma ^{\prime }}(𝐁)=T_{𝐤\sigma ,𝐤^{\prime }\sigma ^{\prime }}(𝐁),$$
(8)
$$T_{-𝐤^{\prime }\sigma ^{\prime },-𝐤\sigma }(-𝐁)=T_{𝐤\sigma ,𝐤^{\prime }\sigma ^{\prime }}(𝐁).$$
(9)
We emphasize that $`\sigma (-𝐤)=\sigma (𝐤)`$, i.e. $`\sigma `$ indicates in fact helicity and *not* circular polarization. The helicity is the eigenvalue of the operator $`𝐒\cdot \widehat{𝐤}`$.
### 3.1 The amplitude matrix
The amplitude matrix $`𝐀`$ relates the incident and scattered fields with respect to an arbitrary plane of reference. A common choice is the plane that contains the incident and the scattered wave vectors, which is for this reason called the scattering plane. We will call the linear base the basis made of one vector in this plane and one vector perpendicular to it. In this basis, the amplitude matrix sufficiently far away, $`\omega r\gg 1`$, is simply defined from the T-matrix by,
$$𝐀_{𝐤,𝐤^{\prime }}=\frac{1}{4\pi r}𝐓_{𝐤,𝐤^{\prime }}^{*}=\frac{e^{i\varphi }}{i\omega r}\left(\begin{array}{cc}S_2& S_3\\ S_4& S_1\end{array}\right).$$
(10)
$`\varphi `$ is a phase factor that depends on the relative phase of the scattered wave with respect to the incident wave, and is defined in Ref. . The complex conjugation in Eq. (10) is simply due to a different sign convention in Newton . When no magnetic field is applied, the T-matrix of the conventional Mie problem is given by a formula analogous to Eq. (5), with $`𝒞_J`$ and $`𝒟_J`$ replaced by the Mie coefficients $`a_J`$ and $`b_J`$, and with $`M=1`$. Because of the rotational invariance of the scatterer, the final result depends only on $`\mathrm{cos}\theta `$, the scalar product of $`\widehat{𝐤}`$ and $`\widehat{𝐤}^{\prime }`$, where $`\theta `$ is the scattering angle. Therefore we get, in the circular basis (associated with the helicities $`\sigma `$ and $`\sigma ^{\prime }`$):
$$T_{\sigma \sigma ^{\prime }}^0=\frac{2\pi }{i\omega }\underset{J\geq 1}{\sum }\frac{2J+1}{J(J+1)}(a_J^{*}+\sigma \sigma ^{\prime }b_J^{*})\left[\pi _{J,1}(\mathrm{cos}\theta )+\sigma \sigma ^{\prime }\tau _{J,1}(\mathrm{cos}\theta )\right].$$
(11)
Alternatively, the T-matrix may be expanded in the basis of the Pauli matrices:
$$𝐓^0=\frac{2\pi }{i\omega }\left[(S_1^{*}+S_2^{*})𝐈+(S_1^{*}-S_2^{*})\sigma _x\right].$$
(12)
In Eq. (11), the polynomials $`\pi _{J,M}`$ and $`\tau _{J,M}`$ are defined in terms of the Legendre polynomials $`P_J^M`$ by ,
$$\pi _{J,M}(\theta )=\frac{M}{\mathrm{sin}\theta }P_J^M(\mathrm{cos}\theta ),\tau _{J,M}(\theta )=\frac{d}{d\theta }P_J^M(\mathrm{cos}\theta ).$$
(13)
For $`M=1`$, $`\pi _{J,1}`$ and $`\tau _{J,1}`$ are polynomials in $`\mathrm{cos}\theta `$ of order $`J-1`$ and $`J`$ respectively, but not in general for any value of $`M`$. When written in the linear basis of polarization, Eq. (11) implies that a Mie scatterer has $`S_3=S_4=0`$, as imposed by rotational symmetry. For the backward direction $`\theta =\pi `$, the reciprocity symmetry implies that $`S_3+S_4=0`$ for an arbitrary particle (possibly non-spherical) . We will see that these two properties no longer hold when a magnetic field is present.
### 3.2 General case for $`𝐓^1`$ when $`\widehat{𝐤}\times \widehat{𝐤}^{\prime }\ne 0`$
It remains to express the vector spherical harmonics in Eq. (5) as functions of the natural angles of the problem. In Fig. 1, we give a schematic view of the geometry. In the presence of a magnetic field, the rotational invariance is broken because $`𝐁`$ is fixed in space. Because our theory treats $`𝐓^1`$ to linear order in $`\widehat{𝐁}`$, $`𝐓^1`$ can be constructed by considering only three special cases for the direction of $`\widehat{𝐁}`$. If $`\widehat{𝐤}`$ and $`\widehat{𝐤}^{\prime }`$ are not collinear, we can decompose the unit vector $`\widehat{𝐁}`$ in the non-orthogonal but complete basis of $`\widehat{𝐤},\widehat{𝐤}^{\prime }`$ and $`\widehat{𝐠}=\widehat{𝐤}\times \widehat{𝐤}^{\prime }/|\widehat{𝐤}\times \widehat{𝐤}^{\prime }|`$. This results in,
$`𝐓_{\mathrm{𝐤𝐤}^{\prime }}^1`$ $`=`$ $`{\displaystyle \frac{(\widehat{𝐁}\cdot \widehat{𝐤})(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })-\widehat{𝐁}\cdot \widehat{𝐤}^{\prime }}{(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })^2-1}}𝐓_{\widehat{𝐁}=\widehat{𝐤}^{\prime }}^1`$ (14)
$`+`$ $`{\displaystyle \frac{(\widehat{𝐁}\cdot \widehat{𝐤}^{\prime })(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })-\widehat{𝐁}\cdot \widehat{𝐤}}{(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })^2-1}}𝐓_{\widehat{𝐁}=\widehat{𝐤}}^1`$
$`+`$ $`(\widehat{𝐁}\cdot \widehat{𝐠})𝐓_{\widehat{𝐁}=\widehat{𝐠}}^1,`$
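The weights appearing in Eq. (14) are just the components of $`\widehat{𝐁}`$ in the non-orthogonal basis $`\{\widehat{𝐤},\widehat{𝐤}^{\prime },\widehat{𝐠}\}`$. A quick numerical check (our own) confirms the decomposition:

```python
# Verify that B_hat = alpha*k + beta*k' + gamma*g with the weights of Eq. (14).
import numpy as np

rng = np.random.default_rng(0)
k  = rng.normal(size=3); k  /= np.linalg.norm(k)
kp = rng.normal(size=3); kp /= np.linalg.norm(kp)
B  = rng.normal(size=3); B  /= np.linalg.norm(B)

g = np.cross(k, kp); g /= np.linalg.norm(g)
c = k @ kp                                      # cos(theta)

alpha = ((B @ kp) * c - (B @ k)) / (c**2 - 1)   # weight of T^1_{B = k}
beta  = ((B @ k) * c - (B @ kp)) / (c**2 - 1)   # weight of T^1_{B = k'}
gamma = B @ g                                   # weight of T^1_{B = g}

assert np.allclose(alpha * k + beta * kp + gamma * g, B)
print("decomposition weights:", alpha, beta, gamma)
```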
The cases where $`\widehat{𝐁}`$ is either along $`\widehat{𝐤}`$ or $`\widehat{𝐤}^{}`$ turn out to take the form,
$$T_{\sigma \sigma ^{}}^1(\widehat{𝐁}=\widehat{𝐤})=\frac{\pi }{\omega }[R_1(\mathrm{cos}\theta )\sigma +R_2(\mathrm{cos}\theta )\sigma ^{}],$$
(15)
$$T_{\sigma \sigma ^{}}^1(\widehat{𝐁}=\widehat{𝐤}^{})=\frac{\pi }{\omega }[R_1(\mathrm{cos}\theta )\sigma ^{}+R_2(\mathrm{cos}\theta )\sigma ],$$
(16)
with
$$R_1(\mathrm{cos}\theta )=\frac{2W}{\pi }\underset{J\geq 1}{\sum }\frac{2J+1}{J(J+1)}[𝒞_J\pi _{J,1}(\mathrm{cos}\theta )+𝒟_J\tau _{J,1}(\mathrm{cos}\theta )]$$
(17)
$$R_2(\mathrm{cos}\theta )=\frac{2W}{\pi }\underset{J\geq 1}{\sum }\frac{2J+1}{J(J+1)}[𝒟_J\pi _{J,1}(\mathrm{cos}\theta )+𝒞_J\tau _{J,1}(\mathrm{cos}\theta )]$$
(18)
In Ref. we gave an expression for $`𝐓_{\sigma \sigma ^{\prime }}^1(\widehat{𝐁}=\widehat{𝐠})`$ involving a double summation over the partial wave number $`J`$ and the magnetic quantum number $`M`$. It is actually possible to do the summation over $`M`$ explicitly, thus simplifying the numerical evaluation considerably. Indeed, if one expresses $`𝐓^0`$ with respect to a $`z`$-axis perpendicular to the scattering plane for a given partial wave $`J`$, one ends up with the following relations between the polynomials $`\pi _{J,M}`$ and $`\tau _{J,M}`$,
$$\pi _{J,1}(\mathrm{cos}\theta )=2\underset{J\geq M\geq 1}{\sum }\left[\frac{(J-M)!}{(J+M)!}\tau _{J,M}(0)^2\mathrm{cos}(M\theta )\right]+\tau _{J,0}(0)^2$$
(19)
$$\tau _{J,1}(\mathrm{cos}\theta )=2\underset{J\geq M\geq 1}{\sum }\left[\frac{(J-M)!}{(J+M)!}\pi _{J,M}(0)^2\mathrm{cos}(M\theta )\right]+\pi _{J,0}(0)^2$$
(20)
Upon performing the derivatives of these relations with respect to $`\theta `$ and comparing with the expression for $`𝐓_{\sigma \sigma ^{\prime }}^1(\widehat{𝐁}=\widehat{𝐠})`$, we find,
$$T_{\sigma \sigma ^{}}^1(\widehat{𝐁}=\widehat{𝐠})=\frac{\pi }{\omega }(Q_1(\theta )+\sigma \sigma ^{}Q_2(\theta ))$$
(21)
with
$$Q_l(\theta )=i\frac{d}{d\theta }R_l(\mathrm{cos}\theta )=-i\mathrm{sin}\theta \frac{d}{d\mathrm{cos}\theta }R_l(\mathrm{cos}\theta ),\quad l=1,2.$$
(22)
We are convinced that a rigorous group symmetry argument exists that relates the derivative of $`T_{\sigma \sigma ^{}}^1(\widehat{𝐁}=\widehat{𝐤})`$ with respect to $`\theta `$ to $`T_{\sigma \sigma ^{}}^1(\widehat{𝐁}=\widehat{𝐠}).`$
### 3.3 Particular case for $`𝐓^1`$ when $`\widehat{𝐤}=\widehat{𝐤}^{\prime }`$ and $`\widehat{𝐤}=-\widehat{𝐤}^{\prime }`$
The treatment in Section (3.2) becomes degenerate when $`\widehat{𝐤}`$ and $`\widehat{𝐤}^{\prime }`$ are collinear, i.e. in the forward or backward direction. In these cases, $`\widehat{𝐁}`$ can still be expressed in a basis made of $`\widehat{𝐤}`$ and two vectors perpendicular to $`\widehat{𝐤}`$. The contribution of these last two vectors has the same form as $`𝐓_{\sigma \sigma ^{\prime }}^1(\widehat{𝐁}=\widehat{𝐠})`$ for $`\theta =0`$ or $`\theta =\pi `$, which vanishes. An alternative derivation consists in taking the limit $`\theta \to 0`$, so that $`R_1=R_2`$, or $`\theta \to \pi `$, so that $`R_1=-R_2`$, in Eqs. (15-18). This yields,
$$R_1(1)=R_2(1)=\frac{W}{\pi }\underset{J\geq 1}{\sum }(2J+1)(𝒞_J+𝒟_J)$$
(23)
and
$$R_1(-1)=-R_2(-1)=\frac{W}{\pi }\underset{J\geq 1}{\sum }(-1)^{J+1}(2J+1)(𝒞_J-𝒟_J)$$
(24)
This means,
$$𝐓_{𝐤,𝐤}^1=𝚽\frac{2\pi }{\omega }R_1(1),$$
(25)
and
$$𝐓_{𝐤,-𝐤}^1=𝚽\frac{2\pi }{\omega }R_1(-1).$$
(26)
Both T-matrices contain the tensor $`𝚽`$ introduced in Eq. (1) for the dielectric constant of the medium of the sphere. For these two cases, an operator can be associated with these T-matrices, which is $`𝐒_𝐳`$, since we have chosen $`𝐁`$ along the $`z`$-axis. For $`𝐓_{𝐤,𝐤}^1`$, the presence of the tensor $`𝚽`$ is to be expected, since we know that the forward scattering amplitude can be interpreted as an effective refractive index in a transmission experiment . In the framework of an effective-medium theory, the real part of Eq. (25) gives the Faraday effect, whereas the imaginary part gives the magneto-dichroism (*i.e.* different absorption for different circular polarizations) of an ensemble of Faraday-active scatterers.
## 4 Magneto-transverse Scattering
From the $`𝐓^1`$ matrix, we can compute how the magnetic field affects the differential scattering cross section (summed over polarization) as a function of the scattering angle. Its form can be guessed before doing any calculation at all, since it must satisfy mirror symmetry and the reciprocity relation $`d\sigma /d\mathrm{\Omega }(𝐤\to 𝐤^{\prime },𝐁)=d\sigma /d\mathrm{\Omega }(-𝐤^{\prime }\to -𝐤,-𝐁)`$. A magneto-cross-section proportional to $`\widehat{𝐁}\cdot \widehat{𝐤}`$ or to $`\widehat{𝐁}\cdot \widehat{𝐤}^{\prime }`$ is parity-forbidden, since $`𝐁`$ is a pseudo-vector. Together with the rotational symmetry of the sphere, the only possibility is:
$$\frac{d\sigma }{d\mathrm{\Omega }}(𝐤\to 𝐤^{\prime },𝐁)=F_0(\mathrm{cos}\theta )+det(\widehat{𝐁},\widehat{𝐤},\widehat{𝐤}^{\prime })F_1(\mathrm{cos}\theta )$$
(27)
where $`det(𝐀,𝐁,𝐂)=𝐀\cdot (𝐁\times 𝐂)`$ is the scalar determinant constructed from these three vectors. The second term in Eq. (27) will be called the magneto-cross-section.
The magneto-cross-section implies that there may be more photons scattered "upwards" than "downwards", both directions being defined with respect to the magneto-transverse vector $`\widehat{𝐤}\times \widehat{𝐁}`$, perpendicular to both the incident wave vector and the magnetic field. An easy calculation yields,
$$\mathrm{\Delta }\sigma =\sigma _{up}-\sigma _{down}=\pi \int _0^\pi d\theta \mathrm{sin}^3\theta F_1(\mathrm{cos}\theta )$$
(28)
A non-zero value for $`\mathrm{\Delta }\sigma `$ will be referred to as a *Photon Hall Effect* (PHE).
For Rayleigh scatterers, the above theory simplifies dramatically, because one only needs to consider the first partial wave $`J=1`$ and the first terms in an expansion in powers of $`y`$ (since $`y\ll 1`$). From Eqs. (6) and (7) we find that $`𝒞_1=2y^3/[m^2(2+m^2)^2]`$ and $`𝒟_1=y^5/(45m^4)`$. We can keep only $`𝒞_1`$ and drop $`𝒟_1`$ as a first approximation. Adding all the contributions of Eqs. (14) and (11), we find, in the linear base
$$𝐓_{𝐤,𝐤^{\prime }}=\left(\begin{array}{cc}t_0\,\widehat{𝐤}\cdot \widehat{𝐤}^{\prime }+it_1\widehat{𝐁}\cdot (\widehat{𝐤}\times \widehat{𝐤}^{\prime })& it_1\widehat{𝐁}\cdot \widehat{𝐤}\\ -it_1\widehat{𝐁}\cdot \widehat{𝐤}^{\prime }& t_0\end{array}\right).$$
(29)
where $`t_0=6i\pi a_1^{*}/\omega `$ and $`t_1=6𝒞_1W/\omega `$. This form agrees with the Rayleigh point-like scatterer discussed in Ref. .
A magnetic field breaks the rotational symmetry of the particle. If it is contained in the scattering plane, Eq. (29) shows that we must have non-zero values for $`S_3`$ and $`S_4`$, as opposed to the case when no magnetic field is applied. This property still holds for a Mie scatterer, the difference being present only in the angular dependence of the elements of the amplitude matrix. A magnetic field also violates the standard reciprocity principle, as can be seen from Eq. (9). This implies that $`S_3+S_4`$ is non-zero in the backward direction $`\theta =\pi `$. The relation $`S_3+S_4=0`$ for the backward direction was derived by van de Hulst , but does not apply when a magnetic field is present. In fact the magnetic field imposes $`S_3=S_4`$ at backscattering. This is readily confirmed for the Rayleigh particle, for which Eq. (29) implies that $`S_3+S_4=2S_3=2it_1\widehat{𝐁}\cdot \widehat{𝐤}`$ for $`\theta =\pi `$.
Eq. (29) yields $`F_1(\mathrm{cos}\theta )\propto VB\mathrm{cos}\theta /k`$, so that Eq. (28) gives $`\mathrm{\Delta }\sigma =0`$. The magneto-scattering cross section is shown in Fig. 2 for a Rayleigh scatterer and in Fig. 3 for a Mie scatterer, for which a non-zero value of $`\mathrm{\Delta }\sigma `$ is seen to survive.
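Indeed, with $`F_1\propto \mathrm{cos}\theta `$ the integral in Eq. (28) vanishes identically:

$$\mathrm{\Delta }\sigma \propto \int _0^\pi d\theta \,\mathrm{sin}^3\theta \,\mathrm{cos}\theta =\left[\frac{\mathrm{sin}^4\theta }{4}\right]_0^\pi =0.$$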
## 5 Stokes parameters
To describe the flux and polarization, a 4 dimensional Stokes vector $`(I,Q,U,V)`$ can be introduced . The general relation between scattered Stokes vector and incoming Stokes vector is,
$$(I^{\prime },Q^{\prime },U^{\prime },V^{\prime })=𝐅(I,Q,U,V)$$
(30)
For a sphere and without a magnetic field, the F-matrix is well known and equals ,
$$F_{ij}^0=\frac{1}{k^2r^2}\left(\begin{array}{cccc}F_{11}& F_{12}& 0& 0\\ F_{12}& F_{11}& 0& 0\\ 0& 0& F_{33}& F_{34}\\ 0& 0& -F_{34}& F_{33}\end{array}\right)$$
(31)
where
$$\{\begin{array}{c}F_{11}=(|S_1|^2+|S_2|^2)/2\\ F_{12}=(-|S_1|^2+|S_2|^2)/2\\ F_{33}=(S_2^{*}S_1+S_2S_1^{*})/2\\ F_{34}=i(-S_2^{*}S_1+S_2S_1^{*})/2\end{array}$$
(32)
Among these four parameters only three are independent, since $`F_{11}^2=F_{12}^2+F_{33}^2+F_{34}^2`$. The presence of the many zeros in Eq. (31) is a consequence of the fact that the amplitude matrix in Eq. (10) is diagonal for one Mie scatterer. It is in fact much more general: the form of Eq. (31) still holds for an ensemble of randomly oriented particles with an internal plane of symmetry (such as spheroids, for instance) . In that case, the averaging is essential to get the many zeros in Eq. (31). It also holds for a single anisotropic particle in the Rayleigh-Gans approximation .
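For concreteness, the following sketch (our own; the amplitudes $`S_1`$, $`S_2`$ below are arbitrary illustrative numbers) builds $`F^0`$ from Eqs. (31,32) and verifies the quadratic relation numerically:

```python
# Build the B=0 F-matrix of Eqs. (31,32) from the Mie amplitudes S1, S2
# and check the identity F11^2 = F12^2 + F33^2 + F34^2.
import numpy as np

def f_matrix_mie(S1, S2):
    F11 = (abs(S1)**2 + abs(S2)**2) / 2
    F12 = (-abs(S1)**2 + abs(S2)**2) / 2
    F33 = ((np.conj(S2) * S1 + S2 * np.conj(S1)) / 2).real
    F34 = (1j * (-np.conj(S2) * S1 + S2 * np.conj(S1)) / 2).real
    return np.array([[F11, F12, 0, 0],
                     [F12, F11, 0, 0],
                     [0, 0, F33, F34],
                     [0, 0, -F34, F33]])

S1, S2 = 0.3 - 0.7j, 1.1 + 0.2j        # illustrative complex amplitudes
F = f_matrix_mie(S1, S2)
F11, F12, F33, F34 = F[0, 0], F[0, 1], F[2, 2], F[2, 3]
assert np.isclose(F11**2, F12**2 + F33**2 + F34**2)
print(np.round(F, 3))
```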
For the Mie case, the anisotropy has two consequences: the F-elements that were zero for an isotropic particle may take finite values, and they may depend on the azimuthal angle $`\varphi `$. When a magnetic field is applied perpendicular to the scattering plane, corrections appear in the diagonal terms of the amplitude matrix; we will use the vector H to denote them. When a magnetic field is applied in the scattering plane, the amplitude matrix becomes off-diagonal, which fills up the zeros in $`F^0`$; we will use the vector G to denote these new terms.
If we call $`F^1`$ the first-order magnetic correction to the F-matrix one finds,
$$F_{ij}^1=\frac{1}{k^2r^2}\left(\begin{array}{cccc}H_{11}& H_{12}& \mathrm{Re}\,G_3& \mathrm{Im}\,G_3\\ H_{12}& H_{11}& \mathrm{Re}\,G_4& \mathrm{Im}\,G_4\\ \mathrm{Re}\,G_1& \mathrm{Re}\,G_2& H_{33}& H_{34}\\ \mathrm{Im}\,G_1& \mathrm{Im}\,G_2& H_{34}& H_{33}\end{array}\right).$$
(33)
When $`\widehat{𝐁}`$ is directed along $`\widehat{𝐤}`$, the G-terms are given by,
$$G_{\widehat{𝐁}=\widehat{𝐤}}\{\begin{array}{c}G_1=(S_1^{*}R_1^{*}-S_2R_2)/2\\ G_2=(-S_1^{*}R_1^{*}-S_2R_2)/2\\ G_3=(S_1^{*}R_2^{*}+S_2R_1)/2\\ G_4=(-S_1^{*}R_2^{*}+S_2R_1)/2\end{array}$$
(34)
The general case (forward and backward directions excluded) is given by
$$𝐆=\frac{(\widehat{𝐁}\cdot \widehat{𝐤})(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })-\widehat{𝐁}\cdot \widehat{𝐤}^{\prime }}{(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })^2-1}𝐆_{\widehat{𝐁}=\widehat{𝐤}^{\prime }}+\frac{(\widehat{𝐁}\cdot \widehat{𝐤}^{\prime })(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })-\widehat{𝐁}\cdot \widehat{𝐤}}{(\widehat{𝐤}\cdot \widehat{𝐤}^{\prime })^2-1}𝐆_{\widehat{𝐁}=\widehat{𝐤}}$$
(35)
and $`G_{\widehat{𝐁}=\widehat{𝐤}^{\prime }}`$ is obtained from $`G_{\widehat{𝐁}=\widehat{𝐤}}`$ by exchanging $`R_1`$ and $`R_2`$ in Eq. (34), as in Eqs. (15,16). Finally, we need,
$$𝐇=(\widehat{𝐁}\cdot \widehat{𝐠})𝐇_{\widehat{𝐁}=\widehat{𝐠}},$$
(36)
with
$$𝐇_{\widehat{𝐁}=\widehat{𝐠}}\{\begin{array}{c}H_{11}=\mathrm{Im}(S_1Q_1+S_2Q_2)/2\\ H_{12}=\mathrm{Im}(-S_1Q_1+S_2Q_2)/2\\ H_{33}=\mathrm{Im}(S_1Q_2-S_2Q_1)/2\\ H_{34}=\mathrm{Re}(S_1Q_2-S_2Q_1)/2\end{array}$$
(37)
The F-matrix defined in Eq. (30) can contain at most 7 independent constants, resulting from the 8 constants in the amplitude matrix minus an irrelevant phase. Our $`F^1`$-matrix has 12 coefficients (4 for the H vector and 8 for the G vector). Therefore 5 relations must exist between these 12 coefficients. These relations have not been derived explicitly.
We can write all the expressions above in a very compact way using the basis of the Pauli matrices
$$\begin{array}{c}F_{ij}^0=\frac{1}{2}Tr(A^0\sigma _iA^{0\dagger }\sigma _j)\\ F_{ij}^1=\frac{1}{2}Tr(A^1\sigma _iA^{0\dagger }\sigma _j)+\frac{1}{2}Tr(A^0\sigma _iA^{1\dagger }\sigma _j),\end{array}$$
(38)
where $`Tr`$ denotes the trace of the matrix, the superscript $`\dagger `$ denotes Hermitian conjugation, $`\sigma _i`$ are the Pauli matrices, and $`A^0`$ and $`A^1`$ are the zeroth- and first-order terms of the amplitude matrix defined from the T-matrix in Eq. (10).
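Eq. (38) can be transcribed directly into code. In the sketch below (our own), we take the matrix set in the order $`(𝟏,\sigma _z,\sigma _x,\sigma _y)`$; this ordering is our assumption, chosen because it reproduces Eqs. (31,32) exactly for the diagonal amplitude matrix of Eq. (10):

```python
# Trace formula F_ij = (1/2) Tr(A sigma_i A^dagger sigma_j), Eq. (38).
# The ordering (1, sigma_z, sigma_x, sigma_y) of the matrix set is assumed.
import numpy as np

SIGMA = [np.eye(2, dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
         np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex)]  # sigma_y

def f_from_amplitude(A):
    return np.array([[0.5 * np.trace(A @ s_i @ A.conj().T @ s_j).real
                      for s_j in SIGMA] for s_i in SIGMA])

S1, S2 = 0.3 - 0.7j, 1.1 + 0.2j   # illustrative amplitudes
A0 = np.diag([S2, S1])            # diagonal Mie amplitude matrix, Eq. (10)
print(np.round(f_from_amplitude(A0), 3))   # reproduces the pattern of Eq. (31)
```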
If the incident light is unpolarized, the Stokes vector of the scattered light is simply equal to the first column of the F-matrix in Eq. (33). For instance, when $`\widehat{𝐁}`$ is directed along $`\widehat{𝐤}`$, the magnetic field will only affect $`U=F_{31}^1`$ and the circular polarization $`V=F_{41}^1`$, which would be zero if no magnetic field were applied. We choose to normalize the matrix elements $`F_{ij}^1`$, which quantify the deviation of the polarization from the isotropic case, by the flux $`F_{11}^0`$ without magnetic field. In Fig. 4 we plot these normalized matrix elements for the cases where $`\widehat{𝐁}`$ is directed along $`\widehat{𝐤}`$ and where $`\widehat{𝐁}`$ is directed along $`\widehat{𝐠}`$. We observe that off-diagonal F-elements such as $`F_{12}^1`$ and $`F_{41}^1`$ are generally more important in the angular region of $`140^{\circ }-170^{\circ }`$, and increase with the size parameter. In this region, these Stokes parameters seem to be very sensitive to anisotropy, as also found in studies of the Stokes parameters of quartz particles .
The F-matrix of spherical scatterers in Eq. (31) contains 8 zeros among its 16 elements. This property persists for an ensemble of randomly oriented non-spherical particles having a plane of symmetry, because of the averaging over all orientations. In a magnetic field even spherical scatterers can have non-zero values for these 8 elements. Furthermore, we have good reasons to believe that our theory, developed for spheres in a magnetic field, should also apply to an ensemble of randomly oriented non-spherical particles in a magnetic field, since the magnetic field direction is the same for all the particles.
We have chosen the size distribution and optical parameters of a reported experiment without magnetic field , in which all the matrix elements of $`F^0`$ were measured and found to be in good agreement with the theoretical evaluation from Eq. (31). For water, the parameter $`W\simeq 2.4\cdot 10^{-6}`$ for a magnetic field of 1 T. From Fig. 4, we can therefore expect a modification of the order of $`2.4\cdot 10^{-6}`$ in the region near backward scattering for $`F_{31}^1(B)/F_{11}^0(B=0)`$ when $`\widehat{𝐁}`$ is directed along $`\widehat{𝐤}`$. The magneto-optical effects on polarization are very small. Nevertheless they may become significant in multiple scattering, which usually tends to depolarize the light completely.
### 5.1 Forward and backward directions
When no magnetic field is present, the situations for $`\theta =0`$ and $`\theta =\pi `$ are similar, because the scattering plane is undefined in both cases. We also have $`H=0`$ by Eq. (36). The remaining contribution is therefore determined only by the G-vector, and the final result reads, for $`\theta =0`$,
$$F_{\theta =0}=\frac{\widehat{𝐁}\cdot \widehat{𝐤}}{k^2r^2}\left(\begin{array}{cccc}0& 0& 0& \mathrm{Im}(z)\\ 0& 0& \mathrm{Re}(z)& 0\\ 0& \mathrm{Re}(z)& 0& 0\\ \mathrm{Im}(z)& 0& 0& 0\end{array}\right),$$
(39)
with $`z=S_1(1)R_1(1).`$ For $`\theta =\pi ,`$
$$F_{\theta =\pi }=-\frac{\widehat{𝐁}\cdot \widehat{𝐤}}{k^2r^2}\left(\begin{array}{cccc}0& 0& 0& \mathrm{Im}(z^{\prime })\\ 0& 0& \mathrm{Re}(z^{\prime })& 0\\ 0& \mathrm{Re}(z^{\prime })& 0& 0\\ \mathrm{Im}(z^{\prime })& 0& 0& 0\end{array}\right),$$
(40)
with $`z^{\prime }=S_1(-1)R_1(-1).`$
The functions $`R_1(1)`$ and $`R_1(-1)`$ defined in Eqs. (17,18) are very similar to $`S_1(1)`$ and $`S_1(-1)`$. Both F-matrices contain only two real-valued independent parameters, as do the corresponding T-matrices. For unpolarized incident light only the Stokes parameter $`V=F_{41}^1(B)`$ of these matrices is non-zero. In Fig. 4, all the curves vanish at $`\theta =0`$ and $`\theta =\pi `$ except that of $`F_{41}^1(B)/F_{11}^0(B=0)`$. In other words, unpolarized incident light will produce partially circularly polarized light (the degree of circular polarization being precisely $`F_{41}^1(B)/F_{11}^0(B=0)`$) for $`\widehat{𝐁}`$ directed along $`\widehat{𝐤}`$ in the forward and backward directions. This can be understood from the fact that the effective index that one can define from Eq. (25) exhibits magneto-dichroism (*i.e.* different absorption for different circular polarizations).
The modified reciprocity relation in the presence of a magnetic field was expressed for the amplitude matrix in Eq. (9). For the F-matrix it implies exactly the sign differences between the matrix elements of Eq. (40) and those of Eq. (39).
## 6 Summary and Outlook
We have shown that the theory developed here for magneto-active Mie scatterers is consistent with former results concerning light scattering by Rayleigh scatterers in a magnetic field. Our perturbative theory provides quantitative predictions for the Photonic Hall Effect of one single Mie sphere, such as the scattering cross section and its dependence on the size parameter or on the index of refraction.
Using the magneto-correction to the T-matrix we have derived the Stokes parameters for the light scattered from a single sphere in a magnetic field. We have distinguished two main cases: either the magnetic field is perpendicular to the scattering plane, and there are corrections to the usual non-zero Stokes parameters, or the magnetic field is in the scattering plane, and the corrections fill up the F-matrix elements that were previously zero. We have also discussed the particular cases of forward and backward scattering.
We hope that these results will be useful for comparison with the situation in multiple scattering. Even after many scattering events, we suspect that the presence of a magnetic field prevents the Stokes parameters $`U,V`$ and $`Q`$ from being zero. In single scattering, their order of magnitude is controlled by the parameter $`W=V_0B\lambda `$. In multiple scattering, however, this parameter must be replaced by $`fV_0Bl^{*}`$, where $`f`$ is the volume fraction of the scatterers and $`l^{*}`$ the transport mean free path, with $`fl^{*}\gg \lambda `$. We expect to find more significant effects in this case.
We thank Geert Rikken and Joop Hovenier for useful comments. We thank the referees for their work, and in particular for mentioning the work of Kuz'min *et al.*
## Appendix A Appendix: Derivation of reciprocity and parity relations
In the polarization indices of the T-matrix, the state of helicity $`\sigma `$ is to be referred to the direction of the wave vector immediately next to it. In Eqs. (8,9), $`T_{𝐤\sigma ,𝐤^{\prime }\sigma ^{\prime }}`$, for instance, really means $`T_{𝐤\sigma (𝐤),𝐤^{\prime }\sigma ^{\prime }(𝐤^{\prime })}`$. To derive these equations, we start from Eq. (5), in which we change both the incoming and outgoing wave vectors into their opposites:
$$𝐓_{-𝐤\sigma ,-𝐤^{\prime }\sigma ^{\prime }}^1=\underset{J,M,\lambda }{\sum }(M)\left[\alpha _{J,\lambda }𝐘_{J,M}^\lambda (-\widehat{𝐤})\chi _{\sigma (-𝐤)}(-𝐤)𝐘_{J,M}^\lambda (-\widehat{𝐤}^{\prime })\chi _{\sigma ^{\prime }(-𝐤^{\prime })}(-𝐤^{\prime })\right],$$
(41)
where $`\alpha _{J,\lambda }`$ is a known coefficient and $`\chi _{\sigma (𝐤)}`$ is the eigenfunction of the operator $`𝐒\cdot \widehat{𝐤}`$ with eigenvalue $`\sigma (𝐤)`$, the helicity. $`𝐒`$ is a spin-one operator acting on three-dimensional vectors. The summation is to be performed over $`\lambda =e,m`$ only, which are associated with the two transverse components of the given vector spherical harmonics. The $`𝐘_{JM}^\lambda (𝐤)`$ are well-defined linear combinations of $`𝐘_{J,J}^M(𝐤)`$, $`𝐘_{J,J-1}^M(𝐤)`$ and $`𝐘_{J,J+1}^M(𝐤)`$ that obey
$$𝐘_{JM}^\lambda (𝐤)\cdot 𝐤=0,\lambda =e,m.$$
We now use the relations,
$$\begin{array}{c}𝐘_{J,M}^e(-\widehat{𝐤})=(-1)^{J+1}𝐘_{J,M}^e(\widehat{𝐤})\\ 𝐘_{J,M}^m(-\widehat{𝐤})=(-1)^J𝐘_{J,M}^m(\widehat{𝐤})\end{array}.$$
(42)
The eigenfunctions $`\chi _{\sigma (𝐤)}`$ also change under parity, since
$$\chi _{\sigma (-𝐤)}(-𝐤)=-\chi _{\sigma (𝐤)}(𝐤).$$
Because of this additional minus sign, the parities of the vector spherical harmonics are in fact,
$$\begin{array}{c}\mathrm{𝐏𝐘}_{J,M}^e=(-1)^J𝐘_{J,M}^e\\ \mathrm{𝐏𝐘}_{J,M}^m=(-1)^{J+1}𝐘_{J,M}^m\end{array}.$$
(43)
The parity symmetry relation of Eq. (8) follows from applying these relations to Eq. (41). The proof of the reciprocity symmetry relation of Eq. (9) is similar; now the following relations are necessary,
$$𝐘_{J,-M}^\lambda =(-1)^{J+M}𝐘_{J,M}^{\lambda *}$$
(44)
for $`\lambda =e,o,m`$ and
$$\chi _{\sigma (𝐤)}^{*}(𝐤)=\chi _{-\sigma (𝐤)}(𝐤).$$
The change of sign of $`𝐁`$ is provided by the factor $`M`$ in Eq. (41), as surmised.