# Radiospectra and Kinematics in Blazars
## 1 Introduction
Analytical models of parsec–scale jets, although increasingly challenged by numerical simulations, still provide a valuable means of describing observational data, particularly when different aspects of the jet physics (such as its spectral and kinematic properties) are combined in a single formulation, providing additional constraints and checks on the model parameters. In this contribution, we discuss how the well–established shock–in–jet model (“shock model” hereafter; see Marscher 1990, Marscher, Gear, & Travis 1991, for detailed discussions of the model) can be reinforced by the inclusion of kinematic information available from VLBI observations.
## 2 Model quantities
In its most common formulation, the shock model predicts changes of the turnover frequency, $`\nu _\mathrm{m}`$, and flux density, $`S_\mathrm{m}`$, in the spectrum of radio emission associated with a shock. The jet is usually assumed to have a constant opening angle, $`\varphi `$, so that the shock transverse dimension is proportional to the distance, $`r`$, at which the shock is located. Other model parameters are expressed as functions of $`r`$: the magnetic field $`B\propto r^a`$, Doppler factor $`\delta \propto r^b`$, and number density $`N\propto r^n`$ (for a power–law electron energy distribution, $`N(\gamma )d\gamma \propto \gamma ^{-s}d\gamma `$). The shock emission is dominated successively by Compton, synchrotron, and adiabatic losses. At each stage, the predicted quantities are described by the following proportionalities: $`S_\mathrm{m}\propto \nu _\mathrm{m}^\rho `$ and $`\nu _\mathrm{m}\propto r^\epsilon `$, with $`\rho =\rho (a,b,s)`$ and $`\epsilon =\epsilon (a,b,s)`$ (for complete evaluations of $`\rho `$ and $`\epsilon `$, see Marscher 1990, Lobanov & Zensus 1999). Below, we describe how estimates of the power index $`b`$ can be obtained from VLBI data, and give examples of applying this approach to studying compact variable radio sources.
## 3 Observable quantities
Single dish observations yield light curves $`S(t)`$ at different frequencies, which can be used to determine the evolution of the spectral turnover ($`S_\mathrm{m},\nu _\mathrm{m}`$), provided an adequate frequency coverage and time sampling. VLBI monitoring programs allow measurements of relative proper motions, $`\mu _{\mathrm{app}}(t)`$, and (in exceptional cases) also of spectral changes of enhanced emission regions detected in parsec–scale jets. From the measured $`\mu (t)`$, the jet apparent speeds, $`\beta _{\mathrm{app}}(t)`$, and Doppler factors, $`\delta (t)`$, can be reconstructed, with the necessary assumptions made about the jet kinematics. In the simplest case, the jet Lorentz factor, $`\gamma `$, can be taken constant, and $`\delta (t)`$ is then described by changes of the jet viewing angle, $`\theta (t)`$. For more complicated cases, $`\gamma (t)=\gamma _{\mathrm{min}}(t)=[1+\beta _{\mathrm{app}}^2(t)]^{0.5}`$ can be assumed, or even complete kinematic settings can be postulated (e.g. a helical trajectory, as has been done, for instance, by Roland et al. 1994).
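For reference, the standard beaming relations behind this reconstruction can be written out and evaluated directly. The following minimal sketch (Python) uses purely illustrative values for the jet speed and viewing angle; the formulas themselves are the standard superluminal-motion relations, not specific to this paper:

```python
import numpy as np

# Standard relativistic-jet relations:
#   beta_app  = beta sin(theta) / (1 - beta cos(theta))
#   delta     = 1 / (gamma (1 - beta cos(theta)))
#   gamma_min = (1 + beta_app^2)^0.5, the smallest Lorentz factor able to
#               produce a given apparent speed

def apparent_speed(beta, theta):
    """Apparent transverse speed in units of c."""
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

def doppler_factor(beta, theta):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

beta, theta = 0.998, np.deg2rad(5.0)   # illustrative speed and viewing angle
beta_app = apparent_speed(beta, theta)
gamma_min = np.sqrt(1.0 + beta_app**2)
print(beta_app, doppler_factor(beta, theta), gamma_min)
```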
## 4 Relations between the jet spectrum and kinematics
Once the form of $`\delta (t)`$ has been determined, we can evaluate $`b(t)`$. Since variations of $`\delta (t)`$ are not necessarily monotonic, we resort to determining $`b(t)`$ locally, so that
$$b(t)=\frac{\mathrm{log}[\delta (t+dt)/\delta (t)]}{\mathrm{log}[r(t+dt)/r(t)]}$$
(1)
We then select a time range, ($`t_1,t_2`$), during which the changes of $`b(t)`$ are small enough to approximate $`b(t_1)\approx b(t_2)\approx b`$. Fitting the observed spectral turnover data, we can obtain the turnover points at the respective epochs, and evaluate the absolute location of the shock at the epoch $`t_1`$:
$$r_1=\left(\frac{1+z}{\delta _1c\mathrm{\Delta }t}\int _1^{r_u}\frac{1}{\sqrt{\gamma ^2(r)-1}}\frac{\mathrm{d}r}{r^b}\right)^{1/(b-1)},$$
(2)
with $`\mathrm{\Delta }t=t_2-t_1`$, $`r_u=(\nu _{\mathrm{m}\,2}/\nu _{\mathrm{m}\,1})`$. Repeating this step as many times as necessary, we can reconstruct the entire kinematic evolution of the shock. The procedure can also be reversed: we can first fit the shock model to the spectral data, and determine values of $`b`$ for different time periods. We then use equation (2) to calculate the respective locations of the shock at different epochs, and compare these locations with the locations and speeds inferred from VLBI data.
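As an illustration, equation (2) can be evaluated numerically for an assumed constant Lorentz factor. The sketch below works in units with $`c=1`$ (time in years, distance in light years), follows the form of equation (2) as reconstructed above, and uses placeholder parameter values rather than fitted ones:

```python
import numpy as np
from scipy.integrate import quad

def shock_location(delta1, dt, nu_ratio, b, gamma=10.0, z=0.5):
    """Absolute shock location r_1 from Eq. (2), assuming constant gamma.

    delta1   -- Doppler factor at epoch t_1
    dt       -- Delta t = t_2 - t_1 (years)
    nu_ratio -- r_u = nu_m2 / nu_m1 (upper integration limit)
    b        -- local power index of delta(r) ~ r^b
    """
    integrand = lambda r: 1.0 / (np.sqrt(gamma**2 - 1.0) * r**b)
    integral, _ = quad(integrand, 1.0, nu_ratio)
    return ((1.0 + z) / (delta1 * dt) * integral) ** (1.0 / (b - 1.0))

# illustrative numbers only
print(shock_location(delta1=10.0, dt=1.5, nu_ratio=2.0, b=-0.5))
```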
If we fix the kinematic settings and assume that the shocked feature moves at a speed $`\beta _\mathrm{j}`$ along a helical path with amplitude, $`A(r)`$, frequency, $`\omega `$, and parallel wavenumber, $`k`$, we can reconstruct the time evolution of the shock location:
$$t(r)=t_0+\int _{r_0}^r\frac{C_2(r)}{k\omega A^2(r)+[C_2(r)\beta _\mathrm{j}^2-\omega ^2k^{-2}C_1(r)]^{1/2}}\,\mathrm{d}r,$$
(3)
with $`C_1(r)=[A^{\prime }(r)]^2+1`$ and $`C_2(r)=C_1(r)+k^2A^2(r)`$. The form of $`A(r)`$ may differ, depending on the choice of the jet geometry. We use $`A(r)=A(r_0)r/(a_0+r)`$, corresponding to a jet with opening half–angle approaching $`\mathrm{arctan}[A(r_0)]`$ for $`r\gg r_0`$. The obtained $`t(r)`$ can then be checked against the $`b(t)`$ inferred from the shock model, or used for predicting the light curves directly.
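A direct numerical evaluation of equation (3) is straightforward. The sketch below uses the amplitude law $`A(r)=A(r_0)r/(a_0+r)`$ quoted above and the bracket as reconstructed in equation (3); the values of $`k`$, $`\omega `$ and $`\beta _\mathrm{j}`$ are purely illustrative (with $`c=1`$, choosing $`\beta _\mathrm{j}>\omega /k`$ keeps the square root real):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative helix parameters (not the fitted 0235+164 values):
A0, r0, a0 = 0.1, 0.01, 1.0                 # A(r0) and scales, in pc
k, omega, beta_j = 2 * np.pi, np.pi / 2, 0.998   # wavenumber, frequency, speed

def A(r):  return A0 * r / (a0 + r)
def dA(r): return A0 * a0 / (a0 + r)**2     # A'(r)
def C1(r): return dA(r)**2 + 1.0
def C2(r): return C1(r) + k**2 * A(r)**2

def t_of_r(r, t0=0.0):
    """Travel time of the shocked feature out to distance r, Eq. (3)."""
    integrand = lambda s: C2(s) / (
        k * omega * A(s)**2
        + np.sqrt(C2(s) * beta_j**2 - (omega / k)**2 * C1(s)))
    val, _ = quad(integrand, r0, r)
    return t0 + val

print([round(t_of_r(r), 3) for r in (0.1, 0.5, 1.0)])
```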
## 5 Examples
We show here two examples of applying the method outlined above to radio observations of blazars.
### 5.1 0235+164
A short-timescale flare in October 1992 was monitored with the VLA at 1.4, 4.8, and 8.4 GHz (Kraus, Quirrenbach, Lobanov, et al., these proceedings). The observed light curves show that the emission peaks first at 1.4 GHz, and later nearly simultaneously at 4.8 and 8.4 GHz. A cross-correlation function analysis yields the respective time lags $`\tau _{4.8}^{1.4}=0.8\pm 0.2`$ days, $`\tau _{8.4}^{1.4}=0.7\pm 0.2`$ days, and $`\tau _{8.4}^{4.8}=0.2\pm 0.2`$ days. The flare duration becomes progressively longer at higher frequencies, making this event rather peculiar. The modified light curves obtained after subtraction of the underlying emission are shown in the left panel of Figure 1.
We discuss elsewhere (Kraus et al. 1999) several possible schemes capable of explaining the observed peculiarities. One of our proposed schemes uses a precessing electron–positron beam (see Roland et al. 1994) with a period $`P_0=200`$ days and precession angle $`\mathrm{\Omega }_0=5.7\mathrm{deg}`$. The kinematics of such a beam is then given by equation (3), with $`A(r_0)=0.1`$ pc and $`a_0=A(r_0)/\mathrm{tan}\mathrm{\Omega }_0=1`$ pc. The resulting Doppler factors vary with time, reproducing the observed lags between the peaks in the light curves. For time lags to be present during a flare, the turnover frequency in the observer’s frame should lie within the range of observing frequencies. In 0235+164, we can satisfy this condition by postulating a homogeneous synchrotron spectrum with spectral index $`\alpha =0.5`$ and rest frame turnover frequency $`\nu _\mathrm{m}^{}=0.15`$ GHz. Additional spectral evolution may be required to remove the apparent discrepancy between the model and the observed amplitudes and widths of the flares.
### 5.2 3C 345
We have studied (Lobanov & Zensus 1999) spectral changes in the core and several jet components of 3C 345, based on data from a VLBI monitoring of the source. In the example shown in Figure 2, we use the observed trajectory (left panel of Fig. 2) of the jet component C5 to confront a fit by the shock model with the variations of $`S_\mathrm{m}`$ and $`\nu _\mathrm{m}`$ (right panel of Fig. 2). In the right panel, the solid line shows a fit by the shock model without taking into account the observed kinematics of C5. When we require the shock model to reproduce the $`b(t)`$ needed to satisfy the observed path of C5, the fit becomes problematic at later stages of the shock evolution. This indicates that, at distances $`>1`$ mas, the shock may have dissipated, and other processes become the main contributors to the emission from C5. Incidentally, the kinematic and emission properties of other jet features also show evidence for a change in the emission properties at distances of 1–1.5 mas from the VLBI core of 3C 345 (Lobanov & Zensus 1999).
# Building galaxy models with Schwarzschild method and spectral dynamics
## 1 Introduction
One of the classical problems in galaxy dynamics is building equilibrium models for a galaxy with an observed light distribution. The basic process can be illustrated by the simplest form of the problem, which is to construct a spherical, isotropic model with a constant mass-to-light ratio $`M/L`$ and a stellar distribution function $`DF`$ which fits the light profile. This has the well-known solution (Eddington 1916),
$$DF(E)\propto \int _E^0\frac{d^2\rho }{d\varphi ^2}\frac{d\varphi }{\sqrt{\varphi -E}},$$
(1)
which is a function of the energy $`E`$ only, where the potential $`\varphi `$ comes from solving the Poisson equation, and the volume density $`\rho (r)`$ comes from deprojecting the light profile $`\mu (R)`$
$$\rho (r)\propto -\frac{M}{L}\int _r^{\infty }\frac{d\mu (R)}{dR}\frac{dR}{\sqrt{R^2-r^2}}.$$
(2)
In general, deprojecting the light and obtaining the potential are the relatively easy parts of the problem.
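As a concrete illustration of Eqs. (1) and (2), the Eddington inversion can be carried out numerically. The sketch below assumes the Plummer sphere in units $`G=M=a=1`$, for which $`\varphi (r)=-1/\sqrt{1+r^2}`$ and $`\rho =(3/4\pi )(-\varphi )^5`$, so the known answer is $`DF(E)\propto (-E)^{7/2}`$ (normalization constants are dropped, as in Eq. (1)):

```python
import numpy as np
from scipy.integrate import quad

# Plummer model: rho = (3/4pi)(-phi)^5, hence d^2(rho)/d(phi)^2 = (15/pi)(-phi)^3
d2rho_dphi2 = lambda phi: (15.0 / np.pi) * (-phi)**3

def DF(E):
    """Unnormalized Eddington DF(E) ~ int_E^0 rho''(phi) dphi / sqrt(phi - E)."""
    integrand = lambda phi: d2rho_dphi2(phi) / np.sqrt(phi - E)
    val, _ = quad(integrand, E, 0.0)
    return val

# the ratio DF(E)/(-E)^{7/2} should be constant for the Plummer sphere
for E in (-0.8, -0.4, -0.2, -0.1):
    print(E, DF(E) / (-E)**3.5)
```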
While a simple problem in concept, it is challenging to extend the mathematical and numerical machinery to cope with realistic systems. In particular, galaxies are almost always flattened, and sometimes triaxial. They are also anisotropic in velocity distribution due to dissipational and dissipationless processes in their formation. Furthermore, they are often dominated by dark matter at very small and very large radii (central black holes, as indicated by nuclear activity in AGNs, and outer dark halos, as indicated by flat HI rotation curves). In short, none of the three simplifying assumptions (constant $`M/L`$, isotropic, and spherical) is generally valid.
While progress has been made in the analytical direction, the applications are generally limited. The Hunter & Qian (1993) method, for example, can construct two-integral models – with a DF that is a function of the energy and the azimuthal component of the angular momentum, $`DF(E,L_z)`$ – for axisymmetric galaxies, and has been applied, e.g., in the case of the nucleus of M32 (Qian et al. 1995; see also Dejonghe 1986, Dehnen & Gerhard 1994 for alternative techniques of building two-integral models). Formalisms also exist for building anisotropic non-axisymmetric models as long as the potential remains in Stäckel form (Teuben 1987, Statler 1987, 1991, Arnold et al. 1994, Dejonghe et al. 1995), and in a few cases for tumbling models (e.g. Freeman 1966, Vandervoort 1980). As a side comment, separability is no guarantee of self-consistency; for example, the recent non-axisymmetric disc potentials of Sridhar & Touma (1997) require an unphysically negative DF (Syer & Zhao 1998).
At the other end of the spectrum of methods, straight N-body simulations can deal with all geometries (Aarseth & Binney 1976, Wilkinson & James 1982, Barnes 1996), but their power is again limited when it comes to sculpting a simulation to fit a set of observations in some $`\chi ^2`$ sense. The limitation here is the huge amount of computation needed to cover enough degrees of freedom to find the true best fitting model, e.g. Fux’s models (1997) for the Milky Way.
The most promising approach so far is the so-called Schwarzschild (1979) method, named after his pioneering efforts in this direction. Basically one tries to match the observed distribution of the light with typically a few hundred or thousand building blocks, each being one stellar orbit populated with a certain amount of stars. One adjusts the mass assigned to each orbit until a best match is obtained (see reviews by Binney 1982, de Zeeuw & Franx 1991, de Zeeuw 1996, Merritt 1996, 1999).
## 2 Schwarzschild method with bells & whistles
The Schwarzschild method has now been extensively applied to study nearby elliptical and S0 galaxies under the assumption of a static spherical, axisymmetric or triaxial potential (e.g., Richstone & Tremaine 1988, Merritt & Fridman 1996, Rix et al. 1997, van der Marel et al. 1998), and has also been applied to build 2-dimensional models of external bars (Pfenniger 1984, Wozniak & Pfenniger 1997, see also Sellwood & Wilkinson 1993) and 3-dimensional models of the tumbling bar of our own galaxy (Zhao 1996). These applications have also greatly generalized the original layout of the Schwarzschild method; in particular, it is possible to match the orbits to a variety of kinematic data of gas and stars, and to derive a smooth physical solution (cf. the schematic Fig. 1). Nevertheless there are three main limitations of the Schwarzschild approach, and these are best overcome by joining forces with the analytical and the N-body approaches.
Limitation A: Stability of a Schwarzschild model needs to be addressed by an N-body simulation. An interesting idea, due to Syer & Tremaine (1996), is to do the $`\chi ^2`$ fitting and the N-body simulation at the same time, adjusting the mass of each particle as the simulation evolves towards a best match of the data with the distribution of the particles. A simpler, better understood approach is to design a Schwarzschild model first, then populate each library orbit with $`N_𝐀`$ particles with random phases, where $`N_𝐀`$ is proportional to the weight assigned to the orbit with actions $`𝐀`$, and finally feed these particles to an N-body simulation code to test stability. This has been applied successfully to the Galactic bar, which is found to be stable (Zhao 1996).
Limitation B: Stochastic orbits in a Schwarzschild model make the model evolve on time scales of the mixing time (several hundred dynamical times, cf. Merritt & Valluri 1996). Merritt & Fridman (1996) propose to average out this effect by explicitly summing up many stochastic orbits to achieve a good phase-mix. This is challenging because it means integrating a few hundred orbits for a few thousand dynamical times to beat down the time-dependent fluctuations. An alternative approach has been used in the case of the Galactic bar (Zhao 1996). The hybrid model makes use of two types of building blocks for the Galaxy (cf. Fig. 1): a library of regular orbits obtained by direct integration for several hundred dynamical times, and a library of “super-orbits”, which are nothing but many delta-like DFs $`\sum _iN_i\delta (E_J-i\mathrm{\Delta })`$, where the weightings $`N_i`$ are to be found by the same Non-Negative Least Squares fitting code as the weightings of the regular orbits. Each delta function implicitly includes all orbits with the same Jacobi integral $`E_J\equiv E-\mathrm{\Omega }J_z`$, where $`\mathrm{\Omega }`$ is the tumbling speed of the bar and $`E_J`$ is the only analytical integral. Such a prescription naturally incorporates stochastic orbits in the model without explicitly making the division between the fraction of mass in stochastic orbits and that in regular ones. Variations of our analytical way of including stochastic orbits have now been developed to model axisymmetric systems and bars by the dynamics groups at Leiden (Cretton et al. 1998, private communication) and Oxford (Häfner et al. 1998, private communication).
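The weighting step just described is, at its core, a Non-Negative Least Squares problem. A minimal mock version of it (with random filler standing in for a real orbit library and cell grid) looks as follows:

```python
import numpy as np
from scipy.optimize import nnls

# A mock Schwarzschild fit: B[c, o] is the fraction of time orbit o spends
# in cell c, and `target` is the normalized mass the model must place in
# each cell. Both are random placeholders, not a real galaxy model.
rng = np.random.default_rng(0)
n_cells, n_orbits = 50, 200
B = rng.random((n_cells, n_orbits))
B /= B.sum(axis=0)              # each orbit's time fractions sum to 1
target = rng.random(n_cells)
target /= target.sum()          # normalized cell masses

weights, residual = nnls(B, target)   # Non-Negative Least Squares
print((weights > 0).sum(), "orbits carry weight, residual =", residual)
```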
Limitation C: A Schwarzschild model is cell-dependent. Checking the self-consistency of the model involves computing the amount of time an orbit spends inside a cell and comparing it with the amount of mass prescribed in the same cell. However, it is possible to make cell-independent modeling. For example, to keep a triaxial galaxy in equilibrium requires a healthy mix of shapes among its building blocks, with some orbits more flattened than the potential and some less flattened. It is well-known that loop orbits cannot reproduce a self-consistent triaxial potential because they move too fast and spend too little time at the major axis to match the relatively (compared to, say, the minor axis) high model density there. We find that this problem is actually more general (Zhao, Carollo & de Zeeuw 1999): it is easy to prove analytically that any regular orbit will reach a local maximum of its angular momentum $`|J(t)|`$ at the major axis, because the torque of a triaxial potential is always directed towards the major axis (cf. Fig. 2). So in this regard a loop orbit or a boxlet orbit (with the shape of a banana, fish, pretzel etc.) behaves like a pendulum with a stretchable length. Since a pendulum tends to swing too fast and spend too little time at its symmetry axis, the “pendulum effect” generally prevents loops and boxlets from putting many stars at the major axis. This can be used as a cell-independent argument against making strongly flattened and triaxial galactic nuclei with bananas, fishes etc., consistent with previous authors (e.g., Gerhard & Binney 1985, Pfenniger & de Zeeuw 1989).
## 3 Spectral dynamics method
Another very promising cell-independent method of building galaxies is the spectral dynamics method. This method, as introduced by Binney & Spergel (1982), provides a conceptually simple representation of a regular orbit, by decomposing it into a truncated Fourier series involving three fundamental frequencies. The basic idea here is that a regular orbit in a 3D potential is most simply described in action-angle space, since it satisfies periodic boundary conditions on the torus (cf. Fig. 3). Let an orbit be labeled by its three actions $`𝐀`$; then the phase space coordinates $`[𝐱_𝐀(t),𝐯_𝐀(t)]`$ are periodic with respect to the three action angles $`\theta \equiv (\omega _1,\omega _2,\omega _3)t`$, i.e., we have the following truncated Fourier series
$$𝐱_𝐀(t)=\sum _{\lambda \equiv (l,m,n)}^LX_\lambda \mathrm{cos}(\lambda \theta +\chi _\lambda ),\qquad \theta \equiv (\omega _1,\omega _2,\omega _3)t,$$
(3)
where the $`\omega `$’s are the three basic frequencies, the coefficients $`X_\lambda `$ are the amplitudes of each frequency combination, and $`L`$ is the highest order harmonic before truncation. Similarly the velocity of the orbit at any time, related to the position by a time derivative, can be written as
$$𝐯_𝐀(t)=-\sum _{\lambda \equiv (l,m,n)}^L\omega _\lambda X_\lambda \mathrm{sin}(\lambda \theta +\chi _\lambda ),\qquad \omega _\lambda =l\omega _1+m\omega _2+n\omega _3.$$
(4)
It is easy to work out the actions $`𝐀`$ by integrating along one of the three action angles,
$$𝐀=\frac{1}{2}\sum _\lambda X_\lambda ^2(l\omega _1,m\omega _2,n\omega _3).$$
(5)
This method actually goes back many years (e.g., Ratcliff, Chang & Schwarzschild 1984), but recent work by the Oxford group (e.g., Kaasalainen & Binney 1994), and by Papaphilippou & Laskar (1996) and Carpintero & Aguilar (1998), has made it possible to extract the basic frequencies numerically from the time series data of a regular orbit, namely the step from $`[𝐱_𝐀(i\mathrm{\Delta }t),𝐯_𝐀(i\mathrm{\Delta }t)]`$ to $`[\omega _\lambda ,X_\lambda ]`$.
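A toy version of this step, using a plain windowed FFT in place of the refined frequency analyses cited above (which beat the Fourier resolution), might look as follows; the “orbit” here is an artificial three-line signal, not output of a real orbit integrator:

```python
import numpy as np

# Build an artificial "regular orbit" time series with two fundamental
# frequencies plus one combination line, then locate the spectral lines.
dt, n = 0.01, 2**14
t = np.arange(n) * dt
x = (1.0 * np.cos(2.1 * t)
     + 0.4 * np.cos(3.7 * t + 0.5)
     + 0.1 * np.cos((2 * 2.1 - 3.7) * t))

spec = np.abs(np.fft.rfft(x * np.hanning(n)))   # Hann window reduces leakage
freqs = 2 * np.pi * np.fft.rfftfreq(n, dt)      # angular frequencies

peaks = [i for i in range(1, len(spec) - 1)
         if spec[i - 1] < spec[i] > spec[i + 1]
         and spec[i] > 0.05 * spec.max()]
print(freqs[peaks])   # lines near 0.5, 2.1 and 3.7 rad per unit time
```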
Most important to the Schwarzschild method is that we can compute the volume density of the orbit $`𝐀`$ at a given point $`(x,y,z)`$. Since we know that a regular orbit is uniformly distributed in its action angle space, the density in the real space is given by
$$\rho _𝐀(x,y,z)=\frac{1}{(2\pi )^3}\left|\frac{\partial (x,y,z)}{\partial (\theta _1,\theta _2,\theta _3)}\right|^{-1},$$
(6)
where the partial derivatives form the Jacobian for the transformation between the action-angle space $`(\theta _1,\theta _2,\theta _3)`$ and the coordinate space $`(x,y,z)`$. Since the Jacobian can be evaluated analytically with eq. (3), we have derived a rigorous expression for the spatial distribution of an orbit. Likewise the line-of-sight velocity ($`v_z`$) distribution of an orbit in the direction $`(x,y)`$ (cf. Fig. 3) is given by
$$LOSVD_𝐀(x,y,v_z)=\frac{1}{(2\pi )^3}\left|\frac{\partial (x,y,v_z)}{\partial (\theta _1,\theta _2,\theta _3)}\right|^{-1}.$$
(7)
For details see Copin et al. (1999).
The beauty of this method is that the description of regular orbits is conceptually simple. The description is time-independent, and involves no gridding or binning. It is also easy to store and recover an orbit, thus reducing the amount of disc space needed for storing orbit libraries. Typically the number of quantities to store is about $`10`$ times the dimension of the problem; this includes the basic frequencies and the leading amplitudes.
To conclude, we remark that the most promising method might be some kind of generalized Schwarzschild method or hybrid method, where the computationally intensive stochastic orbits are implicitly modelled by the analytical super-orbits, and the spatial and velocity distributions of the regular orbits are treated with spectral dynamics.
I thank Tim de Zeeuw for a critical reading of an earlier version and Danny Pronk for making the electronic version of Figure 1.
# MASSIVE ϕ⁴ MODEL AT FINITE TEMPERATURE – RESUMMATION PROCEDURE A LA RG IMPROVEMENT –
## 1 Introduction
Understanding the phase structure and the mechanism of the phase transitions of quantum field theories at finite temperature/density is important for understanding the evolution of the Universe and the physics to be probed by the ultrarelativistic heavy-ion experiments planned at the BNL-RHIC and CERN-LHC.
To investigate analytically the phase structure of relativistic quantum field theory, the effective potential (EP) is used as a powerful tool. Perturbative calculation of the EP at finite temperature, however, suffers from various troubles: the poor convergence or the breakdown of the loop expansion, and the strong dependence on the renormalization scheme (RS). These troubles have essentially the same origin, i.e., they come about with the emergence of large perturbative corrections depending explicitly on the RS. Thus to break out of these troubles we must carry out a systematic resummation of the dominant large correction terms, and at the same time we must also solve the problem of the RS-dependence.
Recently simple but very efficient renormalization group (RG) improvement procedures for resumming the dominant large correction terms have been proposed in vacuum and in thermal field theories. This procedure was originally proposed to solve the problem of the strong RS-dependence of the EP calculated through the loop-expansion method.
It is worth noticing here that in the massive scalar $`\lambda \varphi ^4`$ model at high temperature the large correction terms appearing in the $`L`$-loop EP have the following structures: i) terms proportional to powers of the temperature $`T`$: i-a) $`(\lambda (T/\mu )^2)^L`$, i-b) $`(\lambda (T/\mu ))^L`$; and ii) terms proportional to powers of logarithms: ii-a) $`(\lambda \mathrm{ln}(T/\mu ))^L`$, ii-b) $`(\lambda \mathrm{ln}(M/\mu ))^L`$, where $`M`$ is the large mass scale appearing in the theory.
In this paper we present the result of applying the resummation procedure a la RG to the massive scalar $`\varphi ^4`$ model renormalized at the temperature of the environment $`T`$. We have found that the proposed resummation procedure a la RG works efficiently, not only by resolving the problem of the RS-dependence, but also by properly as well as systematically resumming terms having the structures i-a) and i-b) above. As for the details of the analyses, see Ref. 9 and the paper to appear.
## 2 Improving the effective potential through resummation and the phase structure of the massive $`\varphi ^4`$ model at $`T\neq 0`$
Let us consider the massive scalar $`\varphi ^4`$ model at finite temperature, renormalized at an arbitrary mass-scale $`\mu `$ and at the temperature of the environment $`T`$ (hereafter we call this scheme the $`T`$-renormalization). The key idea for resolving the RS-ambiguity is to use correctly and efficiently the fact that the exact EP satisfies a homogeneous renormalization group equation (RGE) with respect to change of the arbitrary parameter $`\mu \to \overline{\mu }=\mu e^t`$.
In the scalar $`\varphi ^4`$ model the dominant large corrections appear as a power function of the effective variable $`\tau `$ (for more details, see Refs. 9 and 10)
$`\tau /\lambda `$ $`\equiv `$ $`\mathrm{\Delta }_1`$ (1)
$`=`$ $`{\displaystyle \frac{T^2}{2\pi ^2M^2}}\left\{L_1\left({\displaystyle \frac{T^2}{M^2}}\right)-{\displaystyle \frac{\pi ^2}{12}}\right\}-{\displaystyle \frac{1}{2\pi ^2}}L_2\left({\displaystyle \frac{T^2}{\mu ^2}}\right)+{\displaystyle \frac{b}{2}}\left(\mathrm{ln}{\displaystyle \frac{M^2}{\mu ^2}}-1\right),`$
where $`b=1/16\pi ^2`$, $`M^2=m^2+\lambda \varphi ^2/2`$ and $`M^2\mathrm{\Delta }_1`$ is nothing but (a part of) the renormalized one-loop self-energy correction,
$`M^2\mathrm{\Delta }_1`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int _k}{\displaystyle \frac{1}{k^2-M^2}}+\text{(one-loop counter term)},`$ (2)
$`L_i\left({\displaystyle \frac{1}{a^2}}\right)`$ $`\equiv `$ $`{\displaystyle \frac{\partial ^i}{\partial (a^2)^i}}L_0\left({\displaystyle \frac{1}{a^2}}\right),(i\geq 1),`$
$`L_0\left({\displaystyle \frac{1}{a^2}}\right)`$ $`\equiv `$ $`{\displaystyle \int _0^{\infty }}k^2𝑑k\mathrm{ln}[1-\mathrm{exp}\{-\sqrt{k^2+a^2}\}].`$
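The functions $`L_i`$ are straightforward to evaluate numerically. A minimal sketch, computing $`L_0`$ by quadrature and $`L_1`$ by a central finite difference in $`a^2`$, with the massless-limit check $`L_0=-\pi ^4/45`$ at $`a^2=0`$:

```python
import numpy as np
from scipy.integrate import quad

def L0(a2):
    """L_0 of Eq. (2): int_0^inf k^2 ln[1 - exp(-sqrt(k^2 + a^2))] dk."""
    integrand = lambda k: k**2 * np.log(1.0 - np.exp(-np.sqrt(k**2 + a2)))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def L1(a2, h=1e-5):
    """First derivative of L_0 with respect to a^2, by central difference."""
    return (L0(a2 + h) - L0(a2 - h)) / (2.0 * h)

print(L0(0.0), -np.pi**4 / 45.0)   # massless-limit check
print(L1(1.0))
```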
The resummation of the dominant $`O(\lambda (T/\mu )^2)`$ terms in the $`T`$-renormalization can be automatically performed through renormalization, giving the renormalized mass-squared $`m^2\equiv m_0^2+\frac{1}{24}\lambda T^2`$, appearing as a mass-term in the propagator with which the perturbative calculation is performed, where $`m_0`$ denotes the renormalized mass in the vacuum theory.
At the one-loop level the RGE’s satisfied by the renormalized coupling and mass-squared can be solved exactly, giving solutions to the running parameters $`\overline{m}^2`$ and $`\overline{\lambda }`$ as
$`\overline{M}^2=\overline{m}^2+{\displaystyle \frac{1}{2}}\overline{\lambda }\varphi ^2,\overline{m}^2=m^2f^{-1/3},\overline{\lambda }=\lambda f^{-1},`$ (3)
$`f=1-3\lambda \left[bt+{\displaystyle \frac{1}{2\pi ^2}}\left\{L_2\left({\displaystyle \frac{T^2}{\overline{\mu }^2}}\right)-L_2\left({\displaystyle \frac{T^2}{\mu ^2}}\right)\right\}\right].`$
Up to now $`\overline{\mu }`$ in the above equations (3) can be arbitrary, with $`\mu `$ being fixed at the initial value of renormalization. Our RG-improvement procedure, i.e., the resummation procedure a la RG, can be carried out by choosing the RS-fixing parameter $`\overline{\mu }`$ so as to satisfy $`\overline{\tau }(t)=0`$, namely to make the one-loop radiative correction to the mass fully vanish.
The RS-fixing equation $`\overline{\tau }(t)=0`$ gives the mass-gap equation
$$M^2=m^2+f(\overline{M}^2)\overline{M}^2-f(\overline{M}^2)^{2/3}m^2,$$
(4)
which determines, in the HT regime where $`T/\mu \gg 1`$, the RS-parameter $`\overline{\mu }`$, exact up to a $`T`$-independent constant, as
$$\overline{\mu }=\frac{\overline{M}}{2}.$$
(5)
Now we can study the consequences of the RG-improvement in the $`T`$-renormalization, with the solutions $`\overline{\lambda }`$, $`\overline{m}^2`$, and $`\overline{\mu }`$, Eqs. (3) and (5). The RG improvement can then be performed analytically, obtaining the improved EP as
$`\overline{V}_1`$ $`=`$ $`{\displaystyle \frac{1}{2}}\overline{m}^2\varphi ^2+{\displaystyle \frac{1}{4!}}\overline{\lambda }\varphi ^4+\overline{h}\overline{m}^4`$ (6)
$`+{\displaystyle \frac{\overline{M}^4}{2}}\left[{\displaystyle \frac{b}{4}}+{\displaystyle \frac{T^4}{\pi ^2\overline{M}^4}}L_0\left({\displaystyle \frac{T^2}{\overline{M}^2}}\right)-{\displaystyle \frac{T^2}{2\pi ^2\overline{M}^2}}L_1\left({\displaystyle \frac{T^2}{\overline{M}^2}}\right)-{\displaystyle \frac{T^2}{24\overline{M}^2}}\right]`$
$`=`$ $`{\displaystyle \frac{1}{2}}\overline{m}^2\varphi ^2+{\displaystyle \frac{1}{4!}}\overline{\lambda }\varphi ^4-{\displaystyle \frac{\overline{m}^4}{2\overline{\lambda }}}-{\displaystyle \frac{T\overline{M}^3}{48\pi }}+\mathrm{\cdots }.`$ (7)
With the RG-improved EP, $`\overline{V}_1`$, Eqs. (6) and (7), we can see the nature of the temperature-dependent phase transition of the model: i) At low temperature, below $`T_c\approx \sqrt{24|m^2|/\lambda }`$, the EP has a twofold structure showing the existence of two phases (Fig. 1a), the ordinary mass phase and the small mass phase. The ordinary mass phase, with its counterpart in the tree-level potential, is the symmetry-broken phase which develops its minimum at $`\varphi =\varphi _0\approx \{T_c^4|m^2|^3/\lambda \}^{1/10}`$. The small mass phase is a new “symmetric” phase, without any counterpart in the tree-level potential, with a linearly decreasing potential unbounded from below, indicating that the simple $`\varphi ^4`$ model becomes an unstable theory at low temperature. As the temperature becomes higher, the minimum of the ordinary mass phase eventually diminishes, and ii) at the critical temperature $`T_c`$ the minimum of the potential at non-zero $`\varphi `$ completely disappears. The EP shows a symmetric structure in $`\varphi `$ with the minimum at $`\varphi =0`$, $`V(\varphi )-V(0)\propto \varphi ^{\delta +1}`$, $`\delta \approx 5.0`$ (Fig. 1b), and iii) at high temperature, above $`T_c`$, the EP remains symmetric in $`\varphi `$ with its minimum at $`\varphi =0`$. The transition between the symmetry-broken phase at low temperature and the symmetric phase at high temperature proceeds through a second order transition.
## 3 Critical exponents
Here we present the critical exponents determined from the RG-improved one-loop EP in the $`T`$-renormalization, Eqs. (6) and (7). In this case we can calculate the critical exponents through analytic manipulation.
The definitions of the critical exponents are as follows: 1) On the behavior at $`\varphi =\varphi _0`$ around $`T\approx T_c`$: $`\varphi _0\propto (T_c-T)^\beta `$, $`d^2V/d\varphi ^2|_{\varphi =\varphi _0}\propto |T_c-T|^\gamma `$, $`V(\varphi _0)-V(0)\propto |T_c-T|^{2-\alpha }`$. 2) On the behavior at $`T=T_c`$: $`V(\varphi )-V(\varphi _0=0)\propto \varphi ^{\delta +1}`$ or $`dV/d\varphi \propto \varphi ^\delta `$. Here $`\varphi _0`$ denotes the position of the true minimum, and $`T_c`$ denotes the critical temperature, which are determined as $`\varphi _0\approx \{54^3T_c^4|m^2|^3/\lambda (8\pi )^4\}^{1/10}`$, $`T_c\approx \sqrt{24|m_0^2|/\lambda }-3|m_0^2|/4\pi \mu `$.
The results are summarized in Table 1, showing that our result deviates significantly from the mean-field values and agrees reasonably with the experimental data.
## 4 Summary and discussion
In this paper we proposed a new resummation method inspired by the renormalization-group improvement. Applying this resummation procedure a la RG-improvement to the one-loop effective potential in the massive scalar $`\varphi ^4`$ model renormalized at the temperature of the environment $`T`$, we made the following important observations. The $`O(\lambda (T/\mu )^2)`$-term resummation, thus the so-called hard-thermal-loop resummation in this model, can be done simply through the $`T`$-renormalization itself. Owing to the lack of freedom we can set only one condition to choose the RS-fixing parameter, which actually ensures the absorption of the large terms of $`O(\lambda T/\mu )`$; thus only a partial resummation of these terms can be carried out.
It is to be noted that all the results obtained are essentially the same as those in the $`T_0`$-renormalization case: the second order phase transition between the ordinary mass broken phase at low temperature and the symmetric phase at high temperature, and the existence of the unstable small mass phase at low temperature. In this sense our resummation method gives stable results so long as the terms of $`O(\lambda T/\mu )`$ are systematically resummed. The critical exponents are determined by analytic manipulation, showing a significant deviation from the mean-field values and reasonable agreement with the experimental data. For details, see Refs. 9 and 10.
As noticed above, the RG-improved EP in the simple massive $`\varphi ^4`$ model has, below the critical temperature $`T_c`$, an unstable small-mass phase in addition to the ordinary symmetry-broken phase. This unstable phase also appears in the same model at exactly zero temperature, indicating that its appearance is not an artifact of the crudeness of the approximation of the temperature-dependent correction terms. Though the origin of its appearance is not fully understood, it may be related to the triviality of the model, which is an interesting problem for further studies. The $`O(N)`$ symmetric model in the large-$`N`$ limit exists as a stable theory without such an unstable phase.
## Acknowledgments
This work is partly supported by the Special Research Grant-in-Aid of Nara University.
# Difference between insulating and conducting states
## 1 Preface
This article was published in 1991 (E. K. Kudinov, Fizika Tverdogo Tela 33, 2306 (1991); \[in English: Sov. Phys. Solid State 33, 1299 (1991)\]). It considered a distinction between insulating and conducting states based on first principles. Recently a whole series of articles considering the same problem has appeared (see, for example: Resta R., Sorella S., cond-mat/9808151) without mentioning the aforesaid article. For that reason the author believes that submitting this manuscript to cond-mat is reasonable.
##
The band model has been able to represent both conducting and insulating states of a crystal in terms of band occupations for a system of noninteracting electrons. As early as 1937, however, it was noted that this model does not describe several insulating crystals. A large number of such materials are now known. They are compounds of metals with partly filled $`d`$ or $`f`$ shells, and the corresponding bands are only partly occupied. Since the end of the 1950s, they have been much studied on account of their particular features: magnetic and structural transitions, insulator–conductor transitions, intermediate valence, “heavy” fermion effects, and so on. In particular, the high-temperature superconductors belong to this class.
Mott has given a qualitative explanation of the insulating nature of such materials. If the $`d`$ or $`f`$ orbital overlap is small, the electrons should be described by (atomic type) functions localized at sites; the Coulomb repulsion then causes an effective attraction of a localized electron and hole, which can form an electrically neutral bound state and carry no current. The formation of current excitations requires the ionisation of such a state (finite activation energy), which is also responsible for the insulating state (Mott insulator). The electron interaction evidently has a decisive role in this picture, and the question naturally arises of how to formulate a criterion to differentiate an insulator from a conductor in the general case, without using a model.
The model proposed in 1964 by Hubbard, which took into account only on-site Coulomb repulsion, gave rise to an enormous number of papers, since it appeared that the simplicity of the Hubbard Hamiltonian would allow a description not only of the insulating and conducting states, but also of the Mott transition between them. It was found, however, that the results obtained are not easily interpreted, mainly for lack of a general insulator–conductor criterion (for which inadequately justified assertions have often been substituted).
The criterion must evidently reflect the specifics of the ground state of the system ($`T=0`$). Those so far proposed fall into two classes: 1) the presence of a gap in the current excitation spectrum (the band gap in the band theory, the positive twin–hole pair excitation energy in a Mott insulator); 2) criteria based on the Kubo expression for the complex polarisability $`\kappa (\omega )`$, a conductor having a singularity of $`\kappa (\omega )`$ as $`\omega \to 0`$. Class 1 is based on the properties of the excitation spectrum and can be used only with a specific and fairly clear model. The disadvantage of class 2 is that the necessary properties of the ground state are determined by the reaction to an external perturbation, so that a higher-rank problem, the kinetic one, has to be contemplated. The present paper proposes a criterion based entirely on the static properties of the ground state.
## 2 Qualitative treatment
Our approach is based on the substantial difference between the linear responses of an insulator and a conductor (at sufficiently low temperatures) to a static homogeneous electric field $`𝐄`$. For an insulator with a finite volume $`V`$, the field inside the body is nonzero and induces a dipole moment $`𝐏=\kappa _0𝐄`$ per unit volume, the polarizability $`\kappa _0`$ per unit volume being finite and independent of $`V`$. In a conductor, there is a redistribution of charge, and equilibrium corresponds to a spatially inhomogeneous charge distribution, which reduces the field acting within the volume to zero (field effect). This inhomogeneity has to be taken into account from the start when formulating the problem of the response.
The response can, however, be formally calculated in either case on the assumption that the final state is a homogeneous one (homogeneous linear response). For an insulator, this is true, and a reasonable value of $`\kappa _0`$ is obtained. The corresponding calculation for a conductor is bound to show that the problem is incorrectly formulated, by giving an “anomalous” expression for $`\kappa _0`$. For an ideal charged Fermi gas, one easily finds
$$\kappa _0=\frac{e^2}{V}\sum _{\mathrm{𝐤𝐤}^{\prime }}\left|x_{\mathrm{𝐤𝐤}^{\prime }}\right|^2\frac{n_𝐤-n_{𝐤^{\prime }}}{\epsilon _{𝐤^{\prime }}-\epsilon _𝐤},$$
(1)
where $`x_{\mathrm{𝐤𝐤}^{\prime }}`$ are the matrix elements of the coordinate $`x`$ between states with the wave vectors $`𝐤`$ and $`𝐤^{\prime }`$ ($`x_{\mathrm{𝐤𝐤}}=0`$, corresponding to neutrality of the system as a whole), $`\epsilon _𝐤=\hbar ^2k^2/2m`$, and $`n_𝐤`$ is the Fermi distribution. A simple calculation gives, for $`T=0`$,
$$\kappa _0=\frac{1}{10\pi ^2}\left(\frac{3}{4\pi }\right)^{2/3}\frac{k_c}{r_B}V^{2/3}+\frac{1}{k_cr_B}O(V^0),$$
(2)
where $`r_B=\hbar ^2/me^2`$, $`k_c`$ is the Fermi momentum, and $`(r_B/k_c)^{1/2}`$ is the Thomas–Fermi screening distance. The anomaly is that $`\kappa _0`$ depends on the volume: $`\kappa _0\to \infty `$ as $`V\to \infty `$; a similar anomaly, as $`V^{2/3}`$, is found for a superconductor in the BCS model. The corresponding expression for a band insulator at $`T=0`$ gives $`\kappa _0`$ independent of $`V`$; there is no field effect (at $`T>0`$, an anomalous term occurs as in Eq. (2), proportional to $`\mathrm{exp}(-E_g/kT)`$, where $`E_g`$ is the gap width).
Since the polarizability is expressed in terms of the dipole moment correlation function $`<d(t)d>`$ ($`d\equiv d_x`$), one can expect that the anomaly will occur also in the corresponding static quantity, the mean square fluctuation $`<d^2>`$ of the dipole moment (it is postulated that there is no ferroelectric ordering, and so $`<d>=0`$); $`<d^2>\propto V`$ for an insulator, but $`<d^2>\propto V^{1+\gamma }`$ with $`\gamma >0`$ for a conductor.
In the limit $`V\to \infty `$ it is reasonable to put all materials in two classes according to the nature of the homogeneous linear response: 1) $`\kappa _0`$ is finite (insulator), 2) $`\kappa _0`$ is infinite (conductor); that is, the classification is based on the absence or presence of the field effect. This is in agreement with the presence of a pole at $`\omega =0`$ in the complex polarisability of a conductor. In accordance with the causality condition, the pole term in $`\kappa (\omega )`$ must have the form
$$\mathrm{const}\frac{i}{\omega +i\delta }=\mathrm{const}\left(\pi \delta (\omega )+i\frac{𝒫}{\omega }\right),$$
where $`𝒫`$ denotes the principal value; thus, formally, $`\kappa _0=\kappa ^{}(0)=\infty `$ (note: $`\kappa ^{}(0)`$, not $`\kappa ^{}(\omega )|_{\omega \to 0}`$). We can suppose that class 1 has a finite value of $`lim_{V\to \infty }(<d^2>V^{-1})`$ as $`T\to 0`$, but class 2 an infinite one; that is, $`d`$ has normal fluctuations in case 1 and anomalous ones in case 2. It will be proved that this hypothesis follows from the fluctuation-dissipation theorem.
## 3 Relationship between the static homogeneous response and the static fluctuations of the dipole moment
Since the difference between an insulator and a conductor depends on the specific nature of the ground states, we will everywhere consider temperatures so low that the contribution from the “gap” modes, $`e^{-E/kT}`$ with $`E>0`$, is negligible.
The fluctuation-dissipation theorem, in the form of the Callen–Welton relationship for $`\kappa (\omega )`$, is
$$\frac{\hbar }{\pi }\kappa ^{\prime \prime }\mathrm{coth}\frac{\hbar \omega }{2kT}=\left(\frac{1}{V}\sum _{mn}e^{-E_n/kT}|d_{nm}|^2[\delta (\omega -\omega _{nm})+\delta (\omega +\omega _{nm})]\right)|_{V\to \infty },$$
(3)
$$\omega _{nm}=\frac{E_n-E_m}{\hbar },$$
where $`E_n`$ is the energy of steady state number $`n`$ of the system. (Since the theorem assumes the energy spectrum of the system to be continuous, we suppose that the limit $`V\to \infty `$ has been taken.) Integration of the right-hand side of Eq. (3) over $`\omega `$ from 0 to $`\infty `$ gives
$$\int _0^{\infty }𝑑\omega \left(\frac{1}{V}\sum _{mn}e^{-E_n/kT}|d_{nm}|^2[\delta (\omega -\omega _{nm})+\delta (\omega +\omega _{nm})]\right)|_{V\to \infty }$$
$$=\left(\frac{1}{V}\sum _{mn}e^{-E_n/kT}|d_{nm}|^2\right)|_{V\to \infty }=\underset{V\to \infty }{lim}(<d^2>V^{-1}).$$
(4)
Thus, if $`d`$ has normal fluctuations, the integral $`\int _0^{\infty }`$ of the left-hand side of Eq. (3) converges; if anomalous fluctuations, then it diverges. This is valid for infinitesimal $`T`$ values. The only possible singularity of $`\kappa ^{\prime \prime }(\omega )`$ is at $`\omega =0`$, and so the convergence of the integral depends on the behavior of $`\kappa ^{\prime \prime }(\omega )`$ as $`\omega \to 0`$<sup>1</sup><sup>1</sup>1As $`\omega \to \infty `$, $`\kappa ^{\prime \prime }`$ must decrease as $`\omega ^{-2}`$, and there is convergence at the upper limit.. Let us first take the two limiting cases.
a) A normal insulator (in which we include an intrinsic semiconductor) has $`\kappa ^{\prime \prime }(\omega )`$ an analytic function of $`\omega `$. The integral
$$J(T)=\int _0^{\infty }\kappa ^{\prime \prime }\mathrm{coth}\frac{\hbar \omega }{2kT}d\omega $$
is convergent, and as $`T\to 0`$ it tends to a finite limit $`\int _0^{\infty }\kappa ^{\prime \prime }𝑑\omega `$. Hence throughout the temperature range concerned, $`d`$ has normal fluctuations. The static polarizability is then finite, since the integral converges in the Kramers–Kronig relationship
$$\kappa ^{}(0)=\frac{1}{\pi }\int _{-\infty }^{+\infty }\kappa ^{\prime \prime }(\zeta )\frac{𝒫}{\zeta }𝑑\zeta .$$
(5)
b) A normal metal. Here, $`\kappa (\omega )`$ has a pole term:
$$\kappa (\omega )=\overline{\kappa }(\omega )+\frac{i\sigma _0}{\omega +i\delta },(\delta >0,\delta \to 0),$$
(6)
$`\overline{\kappa }`$ has no singularity; $`\sigma _0`$ is the static conductivity. (It is assumed that $`\sigma _0`$ is independent of $`T`$.) The integral $`J(T)`$ diverges for all $`T`$, including $`T=0`$. Hence $`d`$ has anomalous fluctuations. It follows from Eq. (6) that $`\kappa ^{}(\omega )`$ then has a singular term $`\pi \sigma _0\delta (\omega )`$; thus, the homogeneous linear response is anomalous, $`\kappa ^{}(0)=\infty `$.
A smooth insulator–metal transition can be formally represented by writing $`\kappa ^{\prime \prime }(\omega )`$ in the form $`\overline{\kappa }^{\prime \prime }(\omega )+\kappa _c^{\prime \prime }(\omega )`$
$$\kappa _c^{\prime \prime }(\omega )=a\frac{\omega }{|\omega |}|\omega |^\alpha ,-1\leq \alpha \leq 1$$
(7)
($`a>0`$), with $`\alpha =1`$ for an insulator and $`\alpha =-1`$ for a metal. The integral of the left-hand side of Eq. (3) is written as
$$J(T)=\int _0^{\infty }\kappa ^{\prime \prime }\left(\mathrm{coth}\frac{\hbar \omega }{2kT}-1\right)𝑑\omega +\int _0^{\infty }\kappa ^{\prime \prime }𝑑\omega .$$
(8)
We assume that $`a`$ and $`\alpha `$ are independent of $`T`$. Then, as $`T\to 0`$,
$$J=a\int _0^{\infty }\omega ^\alpha \left(\mathrm{coth}\frac{\hbar \omega }{2kT}-1\right)𝑑\omega +\int _0^{\infty }\kappa ^{\prime \prime }𝑑\omega $$
$$=a\left(\frac{2kT}{\hbar }\right)^{1+\alpha }\int _0^{\infty }x^\alpha (\mathrm{coth}x-1)𝑑x+\int _0^{\infty }\kappa ^{\prime \prime }𝑑\omega .$$
(9)
1) When $`\alpha >0`$, $`J(T\to 0)`$ tends to the finite limit $`\int _0^{\infty }\kappa ^{\prime \prime }𝑑\omega `$; $`(<d^2>/V)_{\infty }`$ and, by Eq. (7), $`\kappa _0=\kappa ^{}(0)`$ are finite, even for $`T=0`$.
2) When $`0>\alpha >-1`$, $`J`$ is infinite for all nonzero $`T`$ in the range considered, the fluctuations of $`d`$ are anomalous when $`T\neq 0`$, $`\kappa _0=\infty `$ for all $`T`$ including $`T=0`$, and $`\sigma _0=0`$ in this range of $`\alpha `$ values. We thus see that when $`\alpha >0`$ the homogeneous linear response is that of an insulator ($`\kappa _0`$ finite), and $`(<d^2>/V)_{\infty }`$ also is finite. When $`\alpha <0`$, the homogeneous linear response corresponds to $`\kappa _0=\infty `$ (field effect) and $`(<d^2>/V)_{\infty }=\infty `$ for nonzero $`T`$ (and also for $`T=0`$ when $`\alpha =-1`$)<sup>2</sup><sup>2</sup>2When $`-1<\alpha <0`$, there is an expression $`T^{1+\alpha }\cdot \infty `$ on the right of Eq. (9), and it has not been possible to find a correct passage to $`T=0`$ so as to determine $`(<d^2>/V)_{T=0}`$.. This is the justification for the hypothesis advanced at the end of Sec. 2, that the static insulator reaction $`\kappa _0`$ and the fluctuations of $`d`$ are in one-to-one correspondence as regards their nature. The kinetic quantity $`\kappa ^{\prime \prime }(\omega )`$ acts only as an intermediate link between these two static characteristics.
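The dichotomy in $`\alpha `$ can be checked numerically: since $`\mathrm{coth}x-1\sim 1/x`$ as $`x\to 0`$, the integrand of $`\int _0^{\infty }x^\alpha (\mathrm{coth}x-1)𝑑x`$ in Eq. (9) behaves as $`x^{\alpha -1}`$ at the lower limit and converges only for $`\alpha >0`$. A quick sketch:

```python
import numpy as np
from scipy.integrate import quad

def I(alpha, eps):
    """int_eps^50 x^alpha (coth x - 1) dx; diverges as eps -> 0 for alpha <= 0."""
    f = lambda x: x**alpha * (1.0 / np.tanh(x) - 1.0)
    val, _ = quad(f, eps, 50.0)
    return val

for alpha in (0.5, 0.2, -0.2, -0.5):
    # convergent cases settle; divergent ones keep growing as eps shrinks
    print(alpha, I(alpha, 1e-6), I(alpha, 1e-10))
```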
This argument justifies the division of substances into two classes (Sec. 2). Classes 1 and 2 correspond to $`\alpha >0`$ and $`-1\leq \alpha <0`$ respectively; the nature of the insulator reaction $`\kappa _0`$ is in one-to-one relationship with the behavior of $`<d^2>/V`$ as $`V\to \infty `$. The physical interpretation of class 1 is that the electrons have a finite motion and so there is no field effect. In class 2, their motion is infinite, they can travel macroscopic distances, and the field effect can occur<sup>3</sup><sup>3</sup>3When $`\alpha >-1`$, $`\sigma _0=0`$, but this means only that the random walk is not Markovian.. The range of $`\alpha `$ values between 1 and -1 can apparently occur only near an insulator–conductor transition; such a transition via the occurrence of a branch point is probably the “smoothest” such transition. The basic criterion (the nature of the static fluctuations) then retains its meaning even for ideal nonergodic models such as those where the particles do not interact.
## 4 Static fluctuations of the dipole moment
The expression for $`<d^2>`$ can be put in a clear form. For simplicity, let us consider a homogeneous electron gas in a finite volume $`V`$. The dipole moment operator $`\widehat{d}=\widehat{d}_x`$ is
$$\widehat{d}=\int _Vx\widehat{n}(𝐫)𝑑𝐫,\widehat{n}(𝐫)=\sum _\sigma \psi _\sigma ^+(𝐫)\psi _\sigma (𝐫),$$
(10)
where $`\psi _\sigma ^+`$ and $`\psi _\sigma `$ are the Fermi field operators and $`\sigma `$ is the spin component. The coordinates are chosen so that $`\int _Vx𝑑𝐫=0`$ (neutrality condition; the origin is taken at the point where the dipole moment of the positive charges is zero). Then,
$$\frac{1}{V}<d^2>=\frac{1}{V}\int _Vxx^{\prime }<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{\prime })>𝑑𝐫𝑑𝐫^{\prime },$$
(11)
$$\mathrm{\Delta }\widehat{n}(𝐫)=\widehat{n}(𝐫)-n,n=<\widehat{n}(𝐫)>.$$
It is known that the density correlation function $`<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{})>`$ has a delta-function singularity.<sup>4</sup><sup>4</sup>4To make a correct allowance for this, the field operators must always be put in the normal order. This can be separated by writing
$$<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{\prime })>=n\delta (𝐫-𝐫^{\prime })+\sum _{\sigma \sigma ^{\prime }}<\psi _\sigma ^+(𝐫)\psi _{\sigma ^{\prime }}^+(𝐫^{\prime })\psi _{\sigma ^{\prime }}(𝐫^{\prime })\psi _\sigma (𝐫)>-n^2.$$
(12)
We set
$$K(𝐫-𝐫^{\prime })=<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{\prime })>-n\delta (𝐫-𝐫^{\prime })=\sum _{\sigma \sigma ^{\prime }}<\psi _\sigma ^+(𝐫)\psi _{\sigma ^{\prime }}^+(𝐫^{\prime })\psi _{\sigma ^{\prime }}(𝐫^{\prime })\psi _\sigma (𝐫)>-n^2.$$
(13)
This $`K(𝐫-𝐫^{\prime })`$ tends to zero as $`|𝐫-𝐫^{\prime }|\to \infty `$, and $`K(𝐫-𝐫^{\prime })=K(𝐫^{\prime }-𝐫)`$. Then
$$\frac{1}{V}<\widehat{d}^2>=n\frac{1}{V}\int _Vx^2𝑑𝐫+\frac{1}{V}\int _Vxx^{\prime }K(𝐫-𝐫^{\prime })𝑑𝐫𝑑𝐫^{\prime }.$$
(14)
We substitute here $`xx^{\prime }=-(1/2)(x-x^{\prime })^2+(1/2)(x^2+x^{\prime 2})`$:
$$\frac{<d^2>}{V}=\frac{n}{V}\int _Vx^2𝑑𝐫+\frac{1}{2V}\int _V(x^2+x^{\prime 2})K(𝐫-𝐫^{\prime })𝑑𝐫𝑑𝐫^{\prime }$$
$$-\frac{1}{2V}\int _V(x-x^{\prime })^2K(𝐫-𝐫^{\prime })𝑑𝐫𝑑𝐫^{\prime },$$
(15)
and $`\int _VK(𝐫-𝐫^{\prime })𝑑𝐫^{\prime }`$ is
$$\int _VK(𝐫-𝐫^{\prime })𝑑𝐫^{\prime }=\sum _\sigma <\psi _\sigma ^+(𝐫)\widehat{N}\psi _\sigma (𝐫)>-Vn^2.$$
(16)
Here, $`\widehat{N}=\sum _\sigma \int _V\psi _\sigma ^+\psi _\sigma 𝑑𝐫`$ is the total particle number operator in the volume $`V`$:
$$\int _VK(𝐫-𝐫^{\prime })𝑑𝐫^{\prime }=<\widehat{n}(𝐫)\widehat{N}>-n-Vn^2.$$
(17)
Since the system is homogeneous, $`<\widehat{n}(𝐫)\widehat{N}>`$ is independent of $`𝐫`$, and therefore
$$<\widehat{n}(𝐫)\widehat{N}>=\frac{1}{V}<\widehat{N}^2>.$$
(18)
The final result is
$$\frac{<d^2>}{V}=\frac{<\mathrm{\Delta }\widehat{N}^2>}{V}\frac{1}{V}\int _Vx^2𝑑𝐫-\frac{1}{2V}\int _V(x-x^{\prime })^2K(𝐫-𝐫^{\prime })𝑑𝐫𝑑𝐫^{\prime },$$
(19)
$$\mathrm{\Delta }\widehat{N}=\widehat{N}-N,N=<\widehat{N}>.$$
The first term on the right here is of order $`V^{2/3}`$, on the assumption that $`\widehat{N}`$ has normal fluctuations (that is, far from any first-order transition point). Its significance is straightforward: these are the dipole moment fluctuations of the charged particle system on the assumption that small macroscopic volumes are statistically independent, and it follows from elementary arguments. The second term is anomalous if the correlation function $`K(𝐫-𝐫^{\prime })`$ does not decrease sufficiently rapidly as $`|𝐫-𝐫^{\prime }|\to \infty `$.
There is reason to suppose that always $`<\mathrm{\Delta }\widehat{N}^2>=0`$ in a normal system at $`T=0`$; that is, the first term in Eq. (19) is zero for $`T=0`$. A “normal” system is described by a Gibbs distribution, i.e., by a density matrix $`\rho =\mathrm{const}\cdot \mathrm{exp}[-(\widehat{H}-\mu \widehat{N})/kT]`$. At $`T=0`$, the system is in a state $`\mathrm{\Phi }_0`$ corresponding to the lowest eigenvalue of the operator $`\mathrm{\Omega }=\widehat{H}-\mu \widehat{N}`$ for a given $`\mu `$. Since $`\widehat{N}`$ is an integral of the motion, the number of particles is a good quantum number: all eigenstates of $`\mathrm{\Omega }`$, including $`\mathrm{\Phi }_0`$, are states with a given number of particles. Hence, in each such state, including $`\mathrm{\Phi }_0`$, the dispersion of the particle number is zero, $`<\mathrm{\Delta }\widehat{N}^2>=0`$, and a nonzero $`<\mathrm{\Delta }\widehat{N}^2>`$ can occur only via a spread over different quantum numbers. When $`T=0`$, however, the system is in the ground state only, and $`<\mathrm{\Delta }\widehat{N}^2>_{T=0}=0`$.
When there is long-range order, the Gibbs distribution no longer describes the state of the system; the true distribution function is found by including infinitesimal terms in the Hamiltonian, and these break the original symmetry. The fluctuations at $`T=0`$ are determined by the specific nature of the broken-symmetry state.
The representation of $`<d^2>/V`$ in the form (19) can be derived also for a spatially periodic system (more precisely, it is the form of the terms in $`<d^2>`$ that are responsible for the presence of anomalous fluctuations), and for a disordered system under certain plausible assumptions. It reduces the problem of the $`d`$ fluctuations to that of finding $`<\mathrm{\Delta }\widehat{N}^2>`$ and determining the behavior of the correlation function $`K(𝐫-𝐫^{\prime })`$.
## 5 Examples
### a) Ideal charged Fermi gas
Here,
$$<\mathrm{\Delta }\widehat{N}^2>=-kTn^2\left(\frac{\partial V}{\partial P}\right).$$
Since the compressibility is finite, the first term in Eq. (19) is zero for $`T=0`$. The function $`K(𝐫-𝐫^{\prime })`$ is known; its leading term for $`T=0`$ is
$$K(𝐫,𝐫^{\prime })=K(𝐫-𝐫^{\prime })=-\frac{3n}{2\pi ^2k_c}\frac{\mathrm{cos}^2k_c|𝐫-𝐫^{\prime }|}{|𝐫-𝐫^{\prime }|^4}.$$
(20)
Note that $`K<0`$. It is seen that, for $`T=0`$, $`d`$ has anomalous fluctuations:
$$\frac{<d^2>}{V}\propto V^{1/3}.$$
(21)
As $`T`$ increases, the decrease of $`K`$ becomes exponential, but the $`<\mathrm{\Delta }\widehat{N}^2>`$ term begins to be effective, giving a $`V^{2/3}`$ anomaly. The reason for the power law (20) is the Fermi step singularity. A repulsive interaction maintains this singularity, and therefore does not affect the nature of the singularity at $`T=0`$. When the temperature is not zero, this singularity is blurred, causing an exponential decrease of $`K(𝐫-𝐫^{\prime })`$.
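The $`V^{1/3}`$ law (21) can be verified directly from Eqs. (19) and (20): after angular averaging, $`(x-x^{\prime })^2\to r^2/3`$, and the second term of Eq. (19) grows linearly with the radial cutoff $`R\sim V^{1/3}`$. A numerical sketch in units $`k_c=n=1`$:

```python
import numpy as np
from scipy.integrate import trapezoid

# K(r) from Eq. (20) in units k_c = n = 1
def K(r):
    return -(3.0 / (2.0 * np.pi**2)) * np.cos(r)**2 / r**4

def d2_over_V(R, npts=400_000):
    """-(1/2) int (r^2/3) K(r) 4 pi r^2 dr up to the cutoff R, cf. Eq. (19)."""
    r = np.linspace(1e-3, R, npts)
    return trapezoid(-0.5 * (r**2 / 3.0) * K(r) * 4.0 * np.pi * r**2, r)

for R in (10.0, 100.0, 1000.0):
    print(R, d2_over_V(R) / R)   # roughly constant: <d^2>/V grows as R ~ V^{1/3}
```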
### b) Band insulator
Here again, $`<\mathrm{\Delta }\widehat{N}^2>=0`$ at $`T=0`$, in accordance with the discussion at the end of Sec. 4. At a nonzero temperature, $`<\mathrm{\Delta }\widehat{N}^2>\propto \mathrm{exp}(-E_g/kT)`$, and so this term is to be neglected in the temperature range considered. The correlation function $`K(𝐫-𝐫^{\prime })`$ at $`T=0`$ decreases exponentially, as follows from the absence of any singularity within the Brillouin zone on account of the uniform band occupation. Hence, $`d`$ has normal fluctuations even at $`T=0`$.
### c) Superconductor
The ordering specific to a superconductor (ODLRO) requires fluctuations of $`\widehat{N}`$ in the ground state<sup>5</sup><sup>5</sup>5In a Bogolubov Bose gas at $`T=0`$, $`<\mathrm{\Delta }\widehat{N}^2>\neq 0`$ also.. This is in contrast to the “normal” systems a) and b), where the fluctuations of $`\widehat{N}`$ are thermodynamic, whereas in a superconductor they are quantum effects and are not zero in the ground state. For the BCS model
$$\frac{<\mathrm{\Delta }\widehat{N}^2>}{V}|_{T=0}=\frac{4}{V}\sum _𝐤u_𝐤^2v_𝐤^2=\frac{1}{V}\sum _𝐤\frac{\mathrm{\Delta }^2}{(\epsilon _𝐤-\mu )^2+\mathrm{\Delta }^2},$$
(22)
where $`\mathrm{\Delta }`$ is the gap. In accordance with Eq. (19), for $`T=0`$ we have $`<\widehat{d}^2>/V\propto V^{2/3}`$. It is easy to see that the correlation function decreases exponentially, since the denominators such as $`[(\epsilon _𝐤-\mu )^2+\mathrm{\Delta }^2]^{1/2}`$ which depend on $`𝐤`$ are analytic for real $`𝐤`$ and are nowhere zero. We emphasize that the anomalous character of the dipole moment fluctuations $`<d^2>`$, which ensures a field effect in a superconductor, is realized by virtue of the fluctuation of $`N`$ in the ground state of the superconductor, and not by a slow decrease of $`K(𝐫-𝐫^{\prime })`$, which here decays as rapidly as in a dielectric.
The straightforward models are thus in agreement with the insulator–conductor criterion proposed here.
## 6 Hubbard model
The Hubbard model provides a nontrivial example of using the criterion formulated here in order to distinguish between insulators and conductors.
The Hubbard Hamiltonian for nondegenerate orbital states is
$$H=\sum _{\mathrm{𝐦𝐦}^{\prime }\sigma }J(𝐦-𝐦^{\prime })a_{𝐦\sigma }^+a_{𝐦^{\prime }\sigma }+U\sum _𝐦\widehat{n}_{𝐦\uparrow }\widehat{n}_{𝐦\downarrow }\equiv W+H_0,$$
(23)
with $`U>0`$. Let us suppose that the number of electrons is equal to the number $`N`$ of sites. The site lattice is assumed centrosymmetric. The form (23) of the Hamiltonian presupposes a certain choice of Fermi field operators, $`\psi _\sigma (𝐫)=\sum _𝐦\phi _𝐦(𝐫)a_{𝐦\sigma }`$, where $`\{\phi _𝐦(𝐫)\}`$ is a set of $`N`$ orthonormalized orbitals, each localized at a lattice site $`𝐦`$<sup>6</sup><sup>6</sup>6We assume that, as $`|𝐫-𝐦|\to \infty `$, $`\phi _𝐦(𝐫)`$ falls exponentially.. Here, $`J`$ and $`U`$ are unambiguously defined. The density operator $`\widehat{n}(𝐫)`$ is
$$\widehat{n}(𝐫)=\sum _\sigma \psi _\sigma ^+(𝐫)\psi _\sigma (𝐫)=\sum _{\mathrm{𝐦𝐦}^{\prime }\sigma }\phi _𝐦(𝐫)\phi _{𝐦^{\prime }}(𝐫)a_{𝐦\sigma }^+a_{𝐦^{\prime }\sigma },$$
(24)
and the density correlation function is
$$<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{\prime })>=$$
$$\sum _{𝐦_1\mathrm{\ldots }𝐦_4,\sigma \sigma ^{\prime }}\phi _{𝐦_1}(𝐫)\phi _{𝐦_2}(𝐫)\phi _{𝐦_3}(𝐫^{\prime })\phi _{𝐦_4}(𝐫^{\prime })<a_{𝐦_1\sigma }^+a_{𝐦_2\sigma }a_{𝐦_3\sigma ^{\prime }}^+a_{𝐦_4\sigma ^{\prime }}>-n(𝐫)n(𝐫^{\prime }),$$
(25)
$$n(𝐫)=<\widehat{n}(𝐫)>=\sum _{\mathrm{𝐦𝐦}^{\prime }\sigma }\phi _𝐦(𝐫)\phi _{𝐦^{\prime }}(𝐫)<a_{𝐦\sigma }^+a_{𝐦^{\prime }\sigma }>.$$
We take into account only the nearest neighbors: $`J(𝐦-𝐦^{\prime })=J`$ for $`𝐦-𝐦^{\prime }=𝐠`$, where $`𝐠`$ is the vector between nearest-neighbor sites. The overlap is assumed small ($`J/U\ll 1`$), and the temperature range $`J^2/U\ll kT\ll U`$ is considered. We can then neglect the spin ordering and the formation of actual twins and holes (gap excitations). In this range, the state of the system can be represented to order $`(J/U)^2`$ by the wave function<sup>7</sup><sup>7</sup>7Strictly, one should use the density matrix $`\rho _0`$ corresponding to $`\mathrm{\Phi }_0`$, but taking into account the equal probabilities of the spin configurations.
$$\mathrm{\Phi }=e^{i\widehat{S}}\mathrm{\Phi }_0=\left(1+i\widehat{S}-\frac{1}{2}\widehat{S}^2\right)\mathrm{\Phi }_0,$$
(26)
where
$$\widehat{S}=-i\sum _{\mathrm{𝐦𝐦}^{}\sigma }\frac{J(𝐦-𝐦^{})}{U}(\widehat{n}_{𝐦,-\sigma }-\widehat{n}_{𝐦^{},-\sigma })a_{𝐦\sigma }^+a_{𝐦^{}\sigma },$$
(27)
$`\mathrm{\Phi }_0`$ is the homopolar state function ($`\sum _\sigma \widehat{n}_{𝐦\sigma }\mathrm{\Phi }_0=\mathrm{\Phi }_0`$) with an arbitrary spin configuration, and $`e^{i\widehat{S}}`$ is a unitary operator that eliminates from Eq. (23) the term of first order in $`J/U`$. We can find the correlation function (25) to order $`(J/U)^2`$. The operator $`a_{𝐦\sigma }^+a_{𝐦^{}\sigma }`$ appearing there is transformed by means of Eq. (27):
$$a_{𝐦\sigma }^+a_{𝐦^{}\sigma }\to a_{𝐦\sigma }^+a_{𝐦^{}\sigma }+i[\widehat{S},a_{𝐦\sigma }^+a_{𝐦^{}\sigma }]-\frac{1}{2}[\widehat{S},[\widehat{S},a_{𝐦\sigma }^+a_{𝐦^{}\sigma }]].$$
(28)
The quantity (25) can then be calculated with the transformed operator (28) by averaging over $`\mathrm{\Phi }_0`$ and then over all $`2^N`$ spin configurations. The result is
$$<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{})>=\sum _𝐦\phi _𝐦^2(𝐫)\phi _𝐦^2(𝐫^{})\sum _𝐠\frac{J^2(𝐠)}{U^2}-\sum _{\mathrm{𝐦𝐦}^{}}\phi _𝐦^2(𝐫)\phi _{𝐦^{}}^2(𝐫^{})\sum _𝐠\frac{J^2(𝐠)}{U^2}\delta _{𝐦-𝐦^{},𝐠}+\mathrm{}$$
(29)
the dots represent terms arising from $`n(𝐫)n(𝐫^{})`$, which are even in $`𝐫,𝐫^{}`$ and make no contribution to $`<d^2>`$. Since, in the absence of ferroelectric ordering, $`\phi _{𝐦=0}^2(𝐫)=\phi _{𝐦=0}^2(-𝐫)`$, so that $`\int x\phi _𝐦^2(𝐫)d𝐫=m_x`$,
$$<\widehat{d}^2>=e^2\int \int xx^{}<\mathrm{\Delta }\widehat{n}(𝐫)\mathrm{\Delta }\widehat{n}(𝐫^{})>d𝐫d𝐫^{}=e^2\sum _𝐦m_x^2\sum _𝐠\frac{J^2}{U^2}-e^2\sum _{\mathrm{𝐦𝐠}}m_x(𝐦+𝐠)_x\frac{J^2}{U^2}=-e^2\sum _{\mathrm{𝐦𝐠}}m_xg_x\frac{J^2}{U^2}.$$
(30)
The right-hand side is zero because $`\sum _𝐦𝐦=0`$ (neutrality condition). Similarly, $`<\mathrm{\Delta }N^2>=0`$. Allowing for the temperature would introduce hole-twin states into the density matrix $`\rho _0`$, accompanied by the gap factor $`\mathrm{exp}(-E/kT)`$, $`E\sim U`$. The state $`\mathrm{\Phi }`$ is thus, to within $`(J/U)^2`$, an insulating state according to our criterion. This becomes clearer if one considers the structure of the state $`\mathrm{\Phi }`$, Eq. (26). The state $`\mathrm{\Phi }_0`$ is homopolar; $`\widehat{S}\mathrm{\Phi }_0`$ contains one twin and one hole, $`\widehat{S}^2\mathrm{\Phi }_0`$ not more than two of each. It is physically obvious that a number of polar excitations independent of $`N`$, as in $`\mathrm{\Phi }`$ (in contrast to the ground state of a conductor), does not make the state conducting. Hence, in any order of perturbation theory of the type (26), the state remains insulating<sup>8</sup><sup>8</sup>8In the calculations we neglected in Eq. (30) the surface contribution, assuming that all the sites over which the summation is taken have the same number $`z=\sum _𝐠1`$ of nearest neighbors. In contrast to this, the fluctuations in a Fermi gas, proportional to $`V^{1/3}`$, are volume fluctuations; according to the meaning of the fluctuation-dissipation theorem, which involves the limit $`V\to \infty `$, these fluctuations are the important ones..
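The vanishing of the right-hand side of Eq. (30) is easy to verify numerically on a finite centrosymmetric cluster; a minimal sketch (the simple-cubic cluster and its size are illustrative assumptions):

```python
import itertools

# Centrosymmetric simple-cubic cluster: m runs over integer vectors in
# [-L, L]^3, so that sum_m m = 0 (the neutrality condition of the text).
L = 2
sites = list(itertools.product(range(-L, L + 1), repeat=3))

# Nearest-neighbor vectors g of the simple-cubic lattice; sum_g g = 0.
gs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# The double sum of m_x * g_x from Eq. (30), with equal coordination
# assumed for every site (footnote 8); it factorizes and vanishes.
total = sum(m[0] * g[0] for m in sites for g in gs)
print(total)  # 0
```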
The insulating state may therefore contain an admixture of polar states, expressed by a nonzero $`<\widehat{n}_{𝐦\uparrow }\widehat{n}_{𝐦\downarrow }>`$; in our approximation, $`<\widehat{n}_{𝐦\uparrow }\widehat{n}_{𝐦\downarrow }>=(z/2)(J/U)^2`$. Attempts to link the transition to the conducting state with the loss of homopolarity have been made in , for example. However, it is evident from the above that the admixture of polar states does not at all imply the presence of charges capable of infinite motion.
## 7 Discussion of results
The proposed criterion for differentiating an insulator from a conductor depends on the behavior of the mean square dipole moment fluctuation, a purely static quantity. For an insulator, the dipole moment is additive, since its mean square is proportional to $`V`$. This may be regarded as a manifestation of the localization of electrons relative to the ions at lattice sites. Anomalous behavior, $`<d^2>\propto V^{1+\gamma }`$ with $`\gamma >0`$, corresponds to delocalization of the electrons. That is, the behavior of $`(<d^2>/V)_{V\to \infty }`$ gives a precise meaning to the intuitive idea of localized or delocalized electron states (finite or infinite electron motion). The nature of the delocalization is substantially different in the two conducting cases: for a normal conductor, the anomaly of $`<d^2>_{T=0}`$ is due to the slow decrease of the correlation function at large distances; in a superconductor, to the presence of density fluctuations at $`T=0`$. This will be discussed more fully elsewhere.
The fluctuation-dissipation theorem makes possible a definite association between the localization or delocalization and another static quantity, the reaction $`\kappa _0`$ to a homogeneous static electric field (homogeneous linear response), which is fundamentally different for conductors and insulators<sup>9</sup><sup>9</sup>9An attempt to use the Callen-Welton relationship for the conductivity $`\sigma (\omega )`$ would not lead to the classification sought, since the current correlation function $`<J^2>`$, unlike $`<d^2>`$, is always ”normal”.. The kinetic quantity $`\kappa ^{\prime \prime }(\omega )`$ does not appear in the final result. It should be emphasized that the conducting state is characterized e contrario: the quantities $`\kappa _0`$ and $`<d^2>/V`$, $`V\to \infty `$, are finite for an insulator but infinite for a conductor, and so lose their meaning there.
The transition from an insulator to a normal conductor amounts to a change in the asymptotic form of $`K(\mathrm{𝐫}-\mathrm{𝐫}^{})`$ as $`|𝐫-𝐫^{}|\to \infty `$ ($`T=0`$). It is not accompanied by ”strong” fluctuations, and therefore cannot be represented by any order parameter, even conventionally as in a gas-liquid transition. There is, however, a parallel with transitions at $`T\ne 0`$ in two-dimensional systems .
In the foregoing we have extended our treatment to ideal non-ergodic systems, on the assumption that, if the ”kinetic” terms in the Hamiltonian responsible for ergodicity are small, then neglecting them does not affect the classification of the system as insulator or conductor. For a conductor, the omission of these terms simply sharpens the singularities (for example, $`\sigma _0`$ becomes infinite); for an insulator, it does not produce a field effect. However, including them is important near an insulator-conductor transition and also in low-dimensional disordered systems, where an exponential decrease of $`K(\mathrm{𝐫}-\mathrm{𝐫}^{})`$ can occur even with infinitesimal disorder.
The problem of classifying insulators and conductors is thus reduced to an investigation of static properties: the mean square fluctuation $`<\mathrm{\Delta }N^2>`$ of the number of electrons, and the correlation function $`K(\mathrm{𝐫}-\mathrm{𝐫}^{})`$ near $`T=0`$. This also allows a clear formulation of the insulator-normal conductor transition problem.
# GRB990123, The Optical Flash and The Fireball Model
## 1 Introduction
Gamma-ray burst observers were shocked once more by the explosion of GRB990123. This is a very strong burst: its fluence of $`3\times 10^{-4}\mathrm{erg}/\mathrm{cm}^2`$ (Kippen et al., GCN224) places it in the top 0.3% of BATSE’s bursts. It has a multi-wavelength afterglow ranging from X-ray via optical and IR to radio. Absorption lines in the optical have led to a lower limit on its redshift, $`z>1.6`$ (Kelson et al., IAUC 7096), which for isotropic emission leads to a $`\gamma `$-ray energy of about $`3\times 10^{54}`$ ergs. This, and a second set of absorption lines at $`z\approx 0.2`$, have led to the suggestion (Djorgovski et al., GCN216) that GRB990123 might have been lensed and amplified by a factor of ten or so.
GRB990123 would have been among the most exciting GRBs ever just on the basis of these facts. However, ROTSE discovered a new element: an 8.9 magnitude prompt optical flash (Akerlof et al., GCN205). This has added another dimension to GRB astronomy. It is the first time that prompt emission in a wavelength other than $`\gamma `$-rays has been detected from a GRB. Such a strong optical flash was predicted, just a few weeks ago (Sari & Piran 1999a,b; hereafter SP99), to arise from a reverse shock, propagating into the relativistic ejecta, that forms in the early afterglow. The original prediction gave a lower limit of 15th magnitude for a “standard” GRB with a fluence of $`10^{-5}`$ ergs/cm<sup>2</sup>. Scaling that to the $`\gamma `$-ray fluence of GRB990123 yields a limit of $`11`$, compatible with the observed 9th magnitude.
In this letter we confront the fireball theory with the ongoing observations of GRB990123. We show that the observations of the GRB light curve and spectrum, the prompt optical flash light curve, the radio emission, as well as the available afterglow light curve for the first few days, strongly support the reverse shock prompt emission model. This agreement provides additional support for the overall internal-external scenario.
## 2 Observations
GRB990123 triggered BATSE on 1999 January 23.507594 (Kippen et al., GCN 224). It consisted of a multi-peaked structure lasting more than 100 seconds. There are two clear, relatively hard peaks, with irregular softer emission that follows. The burst’s $`\gamma `$-ray peak flux is 16.42 photons/cm<sup>2</sup>/sec. The total fluence ($`>20`$ keV) is $`3\times 10^{-4}`$ erg/cm<sup>2</sup> (Band 1999). The burst also triggered the GRBM (on 23.50780) and was detected by the WFC on BeppoSAX (Feroci et al., IAUC 7095). The GRBM fluence is comparable, $`3.5\times 10^{-4}`$ erg/cm<sup>2</sup>. The WFC light curve is complex, with only one clear peak (about 40 seconds after the GRBM peak) followed by a structured high plateau. The flux of this peak is $`3.4`$ Crab in the 1.5-26 keV energy band. The total fluence in this soft X-ray band is about $`7\times 10^{-6}`$ ergs/cm<sup>2</sup>, a few percent of the $`\gamma `$-ray fluence.
BATSE’s observations triggered ROTSE via the BACODINE system (Akerlof et al., GCN205). An 11.82 magnitude optical flash was detected in the first 5-second exposure, 22.18 seconds after the onset of the burst. This was the first observation ever of a prompt optical counterpart of a GRB. Another 5-second exposure, 25 seconds later, revealed an 8.95 magnitude signal ($`\sim 1`$ Jy!). The optical signal decayed to 10.08 magnitude 25 seconds later and continued to decay down to 14.53 magnitude in three subsequent 75-second exposures that took place up to 10 minutes after the burst. The last five exposures depict a power law decay with a slope of $`-2.0`$ (see Fig. 1). This initial optical flash contains most of the optical fluence: $`2.5\times 10^{-7}\mathrm{ergs}/\mathrm{cm}^2`$, about $`7.7\times 10^{-4}`$ of the $`\gamma `$-ray fluence.
Optical afterglow follow up by larger telescopes began some 3 hours and 46 minutes later with observations at Palomar (Odewahn et al., IAUC 7094). These observations revealed an 18.2 R magnitude source. The optical observations continued at more than half a dozen observatories around the world and are summarized in Fig. 1. We have inferred a slope of $`-1.1`$ from the entire set of R band observations reported in the GCN (see Fig. 1). A similar slope, $`-1.13\pm 0.03`$, was deduced for the Gunn-r flux (Bloom et al., GCN240; Yadigaroglu et al., GCN242). Note that this is significantly different from the initial slope. The optical spectra revealed several absorption line systems, showing that the redshift of GRB990123 is at least $`1.61`$ (Kelson et al., IAUC 7096).
The early X-ray observations were followed by an NFI observation (Piro et al., GCN203; Heise et al., IAUC 7099) beginning approximately six hours after the burst, with a flux of $`1.1\times 10^{-11}`$ ergs/cm<sup>2</sup>/sec (about $`0.8\mu `$Jy) and lasting for 26 hours. The NFI observation corresponds to a power law decay with a slope of $`-1.35`$ from the prompt observation (about 60 seconds after the burst) and within the 26 hour NFI measurement itself. An ASCA observation (Murakami et al., GCN228) on 25.688 (approximately 2 days and 7 hours after the burst) reported a flux of $`10^{-12}`$ ergs/cm<sup>2</sup>/sec (about $`0.08\mu `$Jy). The decay from the NFI to the ASCA observation is slightly slower, with a slope of $`-1.1`$.
A near infrared counterpart with a $`K=18.3\pm 0.03`$ magnitude was detected on Jan. 24.6356 (Bloom et al., GCN240). It was observed again on 27.65 and 28.55. The observations agree with a decaying light curve with a slope of $`-1.14\pm 0.08`$.
Finally, a radio source at 8.46 GHz was detected on Jan. 24.65 by the VLA (Frail & Kulkarni, GCN211) with a flux density of $`260\pm 32\mu `$Jy. This radio source was not detected earlier, with an upper limit of $`64\mu `$Jy (Frail & Kulkarni, GCN200), or later, with a comparable upper limit by the VLA (Kulkarni & Frail, GCN239). An earlier attempt to detect a radio source on 24.28 at 4.88 GHz gave only an upper limit of $`130\mu `$Jy (Galama et al., GCN212).
## 3 An Optical Flash from the Reverse Shock
A brief examination of the $`\gamma `$-ray signal during the three optical exposures that were simultaneous with the burst shows that there is no correlation between the $`\gamma `$-ray intensity and the optical intensity. The $`\gamma `$-ray count ratios in these three exposures are 5:1:1 (a more careful examination of the spectrum shows that the energy ratio is about 10:1:1; Band, 1999). The optical ratios, on the other hand, are approximately 1:15:5. While in principle the cooling tail of the electrons producing the GRB itself could give rise to a strong optical signal, it would have been correlated with the $`\gamma `$-ray signal. The lack of correlation means that the same electrons could not have emitted both the $`\gamma `$-rays and the optical emission. Moreover, GRB990123 was a relatively hard burst, peaking at about $`1`$ MeV. The decreasing flux with decreasing frequency (below a few hundred keV) is incompatible with a low cooling frequency, which would be required for strong optical emission. One could have thought that the highest energy electrons emitting the $`\gamma `$-rays are fast cooling while the lower energy electrons emitting the optical are slow cooling. In this case the optical emission would have been proportional to the integrated $`\gamma `$-ray flux (Katz, 1999). But this again is in disagreement with the decay of the optical emission in the third optical exposure. We conclude that the $`\gamma `$-rays and the optical emission must have been emitted in different physical regions.
According to the fireball model (see e.g. Piran, 1999, for a review) there are two possible regions in which shocks can take place: internal shocks, which take place within the relativistic ejecta, and external shocks, which take place between the ejecta and the ISM. In the internal-external model (Sari & Piran 1997) the GRB is produced via internal shocks within the relativistic ejecta itself, while the afterglow is produced via external shocks. In internal shocks the forward and reverse shocks are more or less similar, since the ejecta shells they run into have similar properties. On the other hand, in the external shocks that follow, the reverse shock, which propagates into the dense ejecta, is very different from the forward shock, which propagates into the ISM. Therefore, overall there are three possible emitting regions in the internal-external scenario.
For external shocks the ratio between the typical emission frequencies of the forward and the reverse shock is proportional to $`\gamma ^2`$, $`\gamma `$ being the Lorentz factor of the ejecta (see SP99 for more details). Thus, if an external reverse shock produced the GRB, the external forward shock emission would be in the GeV range (Mészáros and Rees, 1994), and there is no room for a strong optical emission. If an external forward shock had produced the GRB, the external reverse shock could have emitted in the optical. However, we would expect such emission to be correlated with the $`\gamma `$-ray emission, unlike the observations. Thus, we rule out this scenario. This is in agreement with other arguments against this model (Sari & Piran 1997, Fenimore, Madras & Nayakshin 1996).
Within the internal-external scenario the GRB is produced by the internal shocks. For these shocks both forward and reverse shocks are rather similar and the emission from both shocks is in the same energy band. Had the forward external shock produced the optical emission, there would have been nothing left to produce either the prompt X-rays or the late afterglow emission. Thus, we are left with the only possibility: the reverse external shock produced the optical emission while the forward external shock (which continues later as the afterglow) produced the early X-rays as well as UV and some weak $`\gamma `$-ray signal. We then expect no correlation between the $`\gamma `$-rays and the optical emission, but we do expect some correlation between the optical emission and an early X-ray emission. Indeed, the WFC on BeppoSAX reported an X-ray peak some 60 seconds after the beginning of the burst, not far from the peak exposure of ROTSE. It is important to note that the overlap between the internal shock signal (the GRB) and the early afterglow (the optical and the X-rays) was expected. In the internal shocks scenario long bursts are produced by thick shells, which, in turn, are produced by a central engine operating for a long time. Sari (1997) has shown that in this case there should be an overlap between the internal shock emission and the external shock emission, in agreement with the observations.
## 4 The Reverse Shock Evolution
An exact calculation of the reverse shock evolution requires a detailed understanding of the magneto-hydrodynamics of relativistic collisionless fluids and their behavior behind strong shocks. However, a surprisingly good qualitative picture can be obtained by treating the matter as a fluid, using the simplest assumptions (equipartition and random orientation) for the magnetic field evolution.
After the reverse shock has passed through the ejecta, the ejecta cool adiabatically. We assume that they then follow the Blandford & McKee (1976) self-similar solution (recall that strictly speaking this solution deals only with the ISM material). In this solution a given fluid element evolves with a bulk Lorentz factor $`\gamma \propto R^{-7/2}`$. Since the observed time is given by $`T\sim R/\gamma ^2c`$ we obtain
$$\gamma \propto T^{-7/16}.$$
(1)
Similarly, the internal energy density evolves as $`e\propto R^{-26/3}\propto T^{-13/12}`$, the particle density evolves as $`n\propto R^{-13/2}\propto T^{-13/16}`$, and therefore the energy per particle, or the particle Lorentz factor, behaves like
$$\gamma _e\propto T^{-13/48}.$$
(2)
The simplest assumption regarding the magnetic field is that its energy density remains a constant fraction of the internal energy density. In this case we obtain $`B\propto \sqrt{e}\propto T^{-13/24}`$. Other evolutions of the magnetic field are possible if the field has a defined orientation (Granot, Piran & Sari 1998).
We assume that the reverse shock has accelerated the electrons to a power-law distribution. However, once the reverse shock has crossed the ejecta shell, no new electrons are accelerated. All the electrons above a certain energy cool, and if the cooling frequency, $`\nu _c`$, is above the typical frequency, we are left with a power law electron distribution over a finite range of energies and Lorentz factors. Each electron then cools only through the adiabatic expansion, with its Lorentz factor proportional to $`T^{-13/48}`$.
Once the reverse shock has crossed the ejecta shell, the emission frequency drops quickly with time according to $`\nu _e\propto \gamma \gamma _e^2B\propto T^{-73/48}`$. Given that the total number of radiating electrons $`N_e`$ is fixed, the flux at this frequency falls as $`F_{\nu _e}\propto N_eB\gamma \propto T^{-47/48}`$. Below the typical emission frequency, $`\nu _m`$, we have the usual synchrotron low energy tail, and for these low frequencies the flux decreases as $`F_\nu =F_{\nu _m}(\nu /\nu _m)^{1/3}\propto T^{-17/36}`$.
Above $`\nu _m`$ (and below $`\nu _c`$) the flux falls sharply as $`F_\nu =F_{\nu _m}(\nu /\nu _m)^{-(p-1)/2}`$. For $`p=2.5`$ this is about $`F_\nu \propto T^{-2.1}`$. Both $`\nu _m`$ and $`\nu _c`$ drop as $`T^{-73/48}`$, since all electrons cool by adiabatic expansion only. Once $`\nu _c`$ drops below the observed frequency the flux practically vanishes (drops exponentially with time).
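The exponents quoted above follow mechanically from the scalings $`\gamma \propto T^{-7/16}`$, $`e\propto T^{-13/12}`$ and $`n\propto T^{-13/16}`$; a short sketch of the bookkeeping with exact fractions (variable names are ours):

```python
from fractions import Fraction as F

g = F(-7, 16)    # bulk Lorentz factor: gamma ~ T^g
e = F(-13, 12)   # internal energy density
n = F(-13, 16)   # particle density

ge = e - n       # energy per particle: gamma_e ~ T^(-13/48)
B = e / 2        # equipartition field: B ~ sqrt(e) ~ T^(-13/24)

nu_m = g + 2 * ge + B           # emission frequency: -73/48
F_max = B + g                   # peak flux at nu_m (fixed N_e): -47/48
tail = F_max - nu_m / 3         # flux below nu_m: -17/36
p = F(5, 2)
above = F_max + nu_m * (p - 1) / 2   # flux above nu_m

print(ge, nu_m, F_max, tail, above, float(above))
# -13/48 -73/48 -47/48 -17/36 -407/192 ~ -2.12
```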
## 5 The Reverse Shock Emission and GRB990123 Observations
The initial decay of the optical flux after the second ROTSE exposure is $`T^{-2}`$, in agreement with the crude theory predicting $`-2.1`$. This means that $`\nu _m`$, the typical synchrotron frequency of the reverse shock, was below the optical band quite early on. Using the estimates of the peak value of the reverse shock $`\nu _m`$ from SP99 we obtain
$$\nu _m=1.2\times 10^{14}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}\left(\frac{\gamma _0}{300}\right)^2n_1^{1/2}\mathrm{Hz}\simeq 5\times 10^{14}\mathrm{Hz}.$$
(3)
This shows that the initial Lorentz factor of this burst was not too high. Using the equipartition values $`ϵ_e\approx 0.6`$ and $`ϵ_B\approx 0.01`$ and $`n_1=5`$ inferred for GRB970508 (Wijers & Galama, 1998; Granot, Piran & Sari, 1998b), we find that the initial Lorentz factor was rather modest:
$$\gamma _0\approx 200.$$
(4)
This is in agreement with the lower limit estimates, based on the pair creation opacity (Fenimore, Epstein, & Ho, 1993; Woods & Loeb, 1995; Piran, 1995; Baring & Harding, 1997).
The Lorentz factor at the beginning of the self-similar deceleration, i.e., at the time of the afterglow peak ($`\sim 50`$ s), is $`\gamma _A\approx 220`$, independent of the initial Lorentz factor of the flow (SP99). This is very close to our estimated initial Lorentz factor. It shows that the reverse shock was only mildly relativistic, and the initial Lorentz factor of the random motion of its accelerated electrons is:
$$\gamma _m\approx 630ϵ_e.$$
(5)
Emission from the reverse shock can also explain the radio observations: a single detection of $`260\mu `$Jy one day after the burst. If the reverse shock emission peaked in the optical at $`\sim 50`$ sec and the peak frequency decayed in time as $`T^{-73/48}`$, then the peak frequency should have reached $`8.4`$ GHz after $`\sim 19`$ hours. Scaling the observed optical flux of $`\sim 1`$ Jy as $`T^{-47/48}`$ to $`19`$ hours, the expected flux at $`\nu _m=8.4`$ GHz is $`\sim 840\mu `$Jy. From that time on the 8.4 GHz flux decays as $`T^{-2.1}`$. The emitted 8.4 GHz flux is therefore given by
$$F_\nu =\{\begin{array}{cc}840\mu \mathrm{Jy}(T/19\mathrm{h})^{-2.1}\hfill & T>19\text{ hours}\hfill \\ 840\mu \mathrm{Jy}(T/19\mathrm{h})^{-17/36}\hfill & T<19\text{ hours}\hfill \end{array},$$
(6)
so that at $`1.2`$ days, when the radio source was detected, we expect a flux of $`\sim 350\mu `$Jy, amazingly close to the observations. Equation 6 is also compatible with all later upper limits; see Fig. 2.
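For reference, a direct evaluation of Eq. (6) at the two epochs discussed here; a sketch, with the function name and units our own bookkeeping:

```python
def flux_84ghz(T_h, T0=19.0, F0=840.0):
    """Reverse-shock 8.46 GHz flux in micro-Jy from Eq. (6); T_h in hours."""
    if T_h > T0:
        return F0 * (T_h / T0) ** -2.1
    return F0 * (T_h / T0) ** (-17.0 / 36.0)

print(flux_84ghz(6.0))       # ~1.5e3 muJy at 6 hours (before self absorption)
print(flux_84ghz(1.2 * 24))  # ~350 muJy at 1.2 days, close to the VLA detection
```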
Equation 6 yields an 8.4 GHz flux of $`\sim 1.5`$ mJy after six hours, which is way above the upper limit of $`64\mu `$Jy. However, strong self absorption of the reverse shock radio emission took place at this stage and suppressed the emission. Accounting for this, the resulting emission is the minimum between the estimate of equation 6 (which ignores self absorption) and the black body upper limit. This upper limit, for black body emission from the reverse shock, can be estimated as (Katz & Piran, 1997; SP99):
$$F_{\nu ,BB}=\frac{2\nu ^2}{c^2}\pi \gamma \gamma _em_ec^2(R_{\perp }/D)^2=150\mu \mathrm{Jy}(T/1\mathrm{day})^{5/12}.$$
(7)
Note that while the emission estimates used only the scaling with time of the observed early optical flux, the black body upper limit is more model dependent and possibly less reliable. The scaling in the last expression, as well as the numerical coefficient, uses the scaling of $`\gamma `$ and $`\gamma _e`$ with time (equations 1 and 2) together with their initial values inferred from the initial afterglow time and peak (equations 4 and 5). We used $`R_{\perp }\approx 4.6\gamma cT`$, where the numerical coefficient is appropriate for a fast decelerating shell (see Sari 1997, 1998; Waxman 1997a; Panaitescu & Mészáros 1998), and the relevant distance is $`D=D_L/\sqrt{1+z}\approx 1.7\times 10^{28}`$ cm for an $`\mathrm{\Omega }=1`$, $`h=65`$ universe, assuming $`z=1.61`$. Also shown in Fig. 2 is the upper limit to the radio emission according to a black body spectrum. This black body upper limit also accounts for the lack of a 4.88 GHz detection reported by Galama et al. (GCN212).
We now turn to estimate the initial (50 sec) cooling frequency $`\nu _c`$. Note that initially this frequency is the same for the forward and the reverse shock (SP99). A simple estimate can be obtained from the temporal slope of the late afterglow (forward shock) light curve and its spectrum, which are compatible with the predicted spectrum (Sari, Piran & Narayan, 1998) of slow cooling electrons (Bloom et al., GCN240). This indicates that the forward shock cooling frequency satisfies $`\nu _{c,f}(2\mathrm{days})\gtrsim 4\times 10^{14}`$ Hz, leading to $`\nu _c(50\mathrm{s})=\nu _{c,r}=\nu _{c,f}\gtrsim 2.5\times 10^{16}`$ Hz. Detection of a break in the optical flux later on could be used to replace this inequality by a more solid number.
A similar lower limit can be set using the fact that the reverse shock emission was seen for more than 650 sec. The reverse shock cooling frequency at that time was therefore higher than $`5\times 10^{14}`$ Hz. Scaling it back to 50 sec according to $`T^{-73/48}`$ we get $`\nu _c(50\mathrm{s})\gtrsim 2\times 10^{16}`$ Hz.
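The arithmetic behind this scaling (a sketch):

```python
# Scaling the reverse-shock cooling frequency from 650 s back to 50 s,
# using nu_c ~ T^(-73/48):
nu_c_650 = 5e14                                      # Hz, still above optical at 650 s
nu_c_50 = nu_c_650 * (650.0 / 50.0) ** (73.0 / 48.0)
print(f"{nu_c_50:.1e}")                              # ~2e16 Hz
```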
A more speculative constraint on $`\nu _c`$ can be obtained from the GRB spectrum itself (SP99). The decreasing GRB spectrum below a few hundred keV implies that $`h\nu _c\lesssim 100`$ keV; otherwise the flux would have increased at low frequencies like $`\nu ^{-1/2}`$. This constrains $`\nu _c\lesssim 2\times 10^{19}`$ Hz. This holds for the site producing the GRB (internal shocks in our model). However, the observed $`\gamma `$-ray emission during the end of the GRB is probably dominated by the forward shock, as suggested by the smoother temporal profile and by the softer spectrum. This means that a similar constraint applies to the initial $`\nu _c`$ of the forward shock, which is the same as that of the reverse shock. If this rough estimate of $`10^{19}`$ Hz is correct then the break in the late optical light curve is expected only after about 40 days.
The reverse shock model can also be confronted with the observed optical to $`\gamma `$-ray energy ratio. Using the table in SP99 with the estimated synchrotron frequency at $`50`$ sec, $`\nu _m\approx 4\times 10^{14}`$ Hz, and the cooling frequency $`\nu _c\approx 10^{19}`$ Hz, we find the optical fluence to be $`\sim 4\times 10^{-3}`$ of the GRB fluence. This is about a factor of five higher than the observed fraction, a reasonable agreement considering the crudeness of the model. Note that this model assumes that the reverse shock contains the same amount of energy as the whole system; this can of course be lower by a factor of a few.
## 6 The Forward Shock
The forward shock that propagates into the ISM is by now considered the classical source of the afterglow (Katz, 1994; Mészáros & Rees 1997; Vietri, 1997; Wijers et al., 1997; Waxman, 1997b; Katz & Piran, 1997). After a possible short radiative phase it becomes adiabatic and acquires the Blandford-McKee profile. It then expands self-similarly until it becomes non-relativistic. In GRB990123 it produced some of the prompt soft $`\gamma `$-rays and X-rays observed late in the burst by BATSE and the WFC. It continued to produce the X-ray and the late optical and IR emission.
The initial decline of the X-rays suggests that already initially the typical synchrotron frequency was below the 1.5-10 keV band. The late slope of the light curve of the optical afterglow agrees well with the power law decays seen in other afterglows. This suggests that we see an adiabatic decay phase. There could have been an early radiative phase, but if there was one it was shorter than the 3.75 hour gap before the first optical observations. This is in agreement with expectations (Waxman 1997; Granot et al., 1998a). The decline from $`3.75`$ hours onwards in the R band suggests that already at this stage the typical synchrotron frequency $`\nu _m`$ was below this band. Extrapolating this back to 50 sec we get $`\nu _m\lesssim 2\times 10^{18}`$ Hz, consistent with the above discussion. Moreover, the ratio between the $`\nu _m`$ of the forward and the reverse shock should be approximately $`\gamma ^2`$. Using the two values estimated at the initial time we find $`\gamma \approx 70`$. This is a factor of 3 lower than the completely independent estimate in equation 4. Again we consider this a rather good agreement in view of the crudeness of both estimates. A short initial radiative phase could even improve this agreement.
The observed temporal decay slopes of the X-rays ($`-1.35`$ and $`-1.1`$) and the optical ($`-1.13`$) from the forward shock are comparable. An X-ray slope steeper by $`1/4`$ is predicted (Sari, Piran and Narayan 1998) if the cooling frequency lies between the X-rays and the optical, which seems to be the case in this burst. With future data and a careful analysis this prediction could be tested.
## 7 Discussion
The discovery of prompt optical emission during a GRB has opened a new window on this remarkable phenomenon. The lack of correlation between the optical and $`\gamma `$-ray emission is a clear indication that two different processes produce the emission in these two bands. The emission from these two processes reaches the observer simultaneously. These findings are in perfect agreement with the internal-external model. Fenimore, Ramirez-Ruiz and Wu (1999) reach the same conclusion on the basis of the burst’s temporal structure.
The strong prompt optical emission was predicted (SP99) to arise from the reverse external shock. We have seen here that various features of this emission, in particular the overall fluence and the decay slope, agree well with the predictions. This reverse shock also explains the origin of the transient radio source observed a day after the burst.
The light curves of the X-ray and the late optical afterglow agree well with the, by now, “classical” afterglow model. According to this model this emission is produced by the forward external shock. We expect somewhat different slopes for the X-ray and optical light curves; however, at present the data are not good enough to test this prediction. It remains to be seen whether this can be tested in the future. We also expect radio emission to show up on a time scale of weeks. The source of this emission would also be the forward external shock.
Already we have been able to determine some of the parameters of GRB990123; specifically, we estimated the initial Lorentz factor and the Lorentz factor three days after the burst. Future radio and optical observations will enable us to determine the remaining parameters of GRB990123, allowing a refinement of these calculations and further tests of the theory.
The late optical light curve is well fit by a single power law without any break. The index of this slope is approximately the one predicted by the spherical afterglow model. These facts suggest that so far there has been no transition from a spherical-like to an expanding-jet behavior. Such a transition is expected, for a relativistic jet, when the Lorentz factor reaches the value $`\theta ^{-1}`$, where $`\theta `$ is the opening angle of the jet (Rhoads, 1997). Such a transition would lead to a break in the light curve and to a decrease of its index by one. Since the theory gives a Lorentz factor of about six at seven days, these observations set a lower limit on the beaming angle of GRB990123 of $`\theta \gtrsim 0.15`$. The energy budget could still be “rescued” if a break is seen soon. Otherwise, this indicates that GRB990123 was as powerful as the isotropic estimates suggest!
The coincidence of a nearby galaxy and the strength of the burst have led to the speculation that GRB990123 was magnified by a gravitational lens (Djorgovski et al., GCN216). There have been some suggestions that this is unlikely. We stress that our analysis (apart from the black body emission in equation 7) is independent of whether the event was lensed or not and independent of its redshift.
This research was supported by the US-Israel BSF grant 95-328, by a grant from the Israeli Space Agency and by NASA grant NAG5-3516. Re’em Sari thanks the Sherman Fairchild Foundation for support, and Eric Blackman and David Band for discussions and useful remarks. Tsvi Piran thanks Columbia University and Marc Kamionkowski for hospitality while this research was done and Pawan Kumar for many helpful discussions.
# Depletion potential in hard sphere fluids
## Abstract
A versatile new approach for calculating the depletion potential in a hard sphere mixture is presented. This is valid for any number of components and for arbitrary densities. We describe two different routes to the depletion potential for the limit in which the density of one component goes to zero. Both routes can be implemented within density functional theory and simulation. We illustrate the approach by calculating the depletion potential for a big hard sphere in a fluid of small spheres near a planar hard wall. The density functional results are in excellent agreement with simulations.
When the separation between two big colloidal particles suspended in a fluid of small colloidal particles or non-adsorbing polymers is less than the diameter of the small ones, exclusion or depletion of the latter occurs, leading to anisotropy of the local pressure, which gives rise to an attractive depletion force between the big particles. Asakura and Oosawa first described this depletion mechanism, suggesting that it would drive phase separation in colloid-polymer mixtures. Using excluded volume arguments they calculated the force for two hard spheres of radius $`R_b`$ in a fluid of small hard spheres of radius $`R_s`$ and showed that it is attractive for all separations less than $`2R_s`$ and is zero for larger separations. The hard sphere model captures the essence of the depletion phenomenon and can be mimicked experimentally by suitable choices of colloidal solutions. Depletion forces are of fundamental statistical mechanical interest since they are purely entropic in origin, and much theoretical and simulation effort has been devoted to their determination. Many recent experiments, using a wide variety of techniques, have investigated depletion forces for colloidal mixtures or for colloid-polymer mixtures. Much of the impetus for these studies stems from a need to understand how depletion determines phase separation and flocculation phenomena, and this has stimulated a growing interest in ascertaining quantitative details of the depletion force. Several of these studies are concerned with a big particle near a planar wall, but Dinsmore et al. have employed video microscopy to monitor the position of a single big colloid immersed in a solution of small colloids inside a vesicle, a system resembling hard spheres confined inside a hard cavity. The experimental conditions in Ref. correspond to a small sphere packing fraction $`\eta _s=0.3`$, which is sufficiently high that one expects the depletion force to be substantially different from that given by the Asakura-Oosawa result; the latter is valid only in the limit $`\eta _s\to 0`$. Thus, in order to model this and other experimental situations one requires a theory of depletion which is reliable at high packings and which can tackle rather general “confining” geometries. The latter could represent a planar wall, a wedge or cavity or, indeed, another big particle. Here we present such a theory for the depletion potential, based on a density functional treatment (DFT) of a fluid mixture. Our approach is versatile and avoids the limitations of the virial expansion and of the Derjaguin approximation, as well as many of the limitations inherent in integral-equation theories of the depletion force. It has a distinct advantage over a direct implementation of DFT in that it does not require a minimization of the free energy in the presence of the big particle. It also suggests an alternative simulation procedure for calculating depletion potentials. We demonstrate the accuracy of our approach by comparing our results for the depletion potential between a big hard sphere and a planar hard wall with those of our new simulation and with those of Ref. .
We consider a multicomponent mixture in which species $`i`$ ($`i=1,2,\dots ,\nu `$) has a chemical potential $`\mu _i`$ and is subject to an external potential $`V_i(𝐫)`$, giving rise to the equilibrium number density profiles $`\{\rho _i(𝐫)\}`$. The quantity of interest, $`W_t(𝐫_i)\equiv \mathrm{\Omega }_{ti}(𝐫_i;\{\mu _i\},T)-\mathrm{\Omega }_{ti}(\infty ;\{\mu _i\},T)`$, is the difference in grand potential $`\mathrm{\Omega }_{ti}`$ between a test particle of species $`i`$ located at position $`𝐫_i`$ and one located at $`𝐫_i=\infty `$, i.e., deep in the bulk, far from the object exerting the external potentials. Using the potential distribution theorem in the grand ensemble one can easily show:
$$W_t(𝐫_i)=-k_BT\mathrm{ln}\left(\frac{\rho _i(𝐫_i)}{\rho _i(\infty )}\right)-V_i(𝐫_i)+V_i(\infty ),$$
(1)
where $`T`$ is the temperature. Equation (1) is a general result; it is valid for arbitrary interparticle forces. It is important for subsequent applications to note that $`\rho _i(𝐫)`$ is determined solely by the external potentials, so it has the symmetry dictated by these potentials and not the broken symmetry brought about by inserting the test particle. In order to obtain the depletion potential $`W`$, which pertains to a single (big) particle, $`i=b`$, in the presence of a fixed object exerting a potential $`V_b(𝐫)`$ of finite range, we take the limit $`\mu _b\to -\infty `$ so that the density of the big particles vanishes, with the other chemical potentials $`\{\mu _{i\ne b}\}`$ fixed. It follows that for $`𝐫`$ outside the range of $`V_b(𝐫)`$
$$\beta W(𝐫)=-\underset{\mu _b\to -\infty }{\mathrm{lim}}\mathrm{ln}\left(\frac{\rho _b(𝐫)}{\rho _b(\infty )}\right)$$
(2)
with $`\beta =(k_BT)^{-1}`$. Although both $`\rho _b(𝐫)`$ and $`\rho _b(\infty )`$ vanish in this limit, their ratio is non-zero. Moreover, the density profiles of the other species $`\{\rho _{i\ne b}(𝐫)\}`$ reduce to those in the $`\nu -1`$ component fluid. Thus, for a binary mixture of big and small ($`s`$) particles, $`\rho _s(𝐫)`$ reduces to the profile of a pure fluid of small particles. Equation (2) can be re-stated in a more familiar form as $`p(𝐫)/p(\infty )=\mathrm{exp}(-\beta W(𝐫))`$, where $`p(𝐫)`$ is the probability density of finding the big particle at a position $`𝐫`$ relative to the fixed object.
There are two distinct ways of implementing this route to the depletion potential. First, one can calculate the density profile $`\rho _b(𝐫)`$ and, hence, $`W_t(𝐫)`$ in the mixture for decreasing concentration of species $`b`$. For sufficiently negative values of $`\mu _b`$, $`W_t(𝐫)`$ should approach its limiting value $`W(𝐫)`$. Such a procedure is straightforward to implement in any approximate DFT, especially for objects with planar or radial symmetry (e.g., planar walls, spherical cavities or a fixed spherical particle). Second, one can attempt to proceed immediately to the limit of a single big particle, thereby obtaining $`W(𝐫)`$ directly without evaluating $`\rho _b(𝐫)`$. Equation (1) can be re-expressed as a difference of one-body direct correlation functions
$$\beta W_t(𝐫_i)=c_i^{(1)}(\infty ;[\{\rho _i\}])-c_i^{(1)}(𝐫_i;[\{\rho _i\}])$$
(3)
and the depletion potential can be expressed formally as
$$\beta W(𝐫)=c_b^{(1)}(\infty ;[\{\rho _{i\ne b}\},\rho _b=0])-c_b^{(1)}(𝐫;[\{\rho _{i\ne b}\},\rho _b=0]).$$
(4)
In order to implement this result an explicit prescription for $`c_b^{(1)}`$ is required. Within the context of DFT, $`c_i^{(1)}(𝐫_i;[\{\rho _i\}])=-\beta \delta \mathcal{F}_{ex}[\{\rho _i\}]/\delta \rho _i(𝐫_i)`$, where $`\mathcal{F}_{ex}[\{\rho _i\}]`$ is the excess (over the ideal gas) free energy functional of the inhomogeneous mixture. For certain classes of (approximate) functionals explicit results can be obtained for $`c_i^{(1)}`$ which permit the limit $`\mu _b\to -\infty `$ to be taken. An important example is Rosenfeld’s Fundamental Measure functional for hard sphere mixtures,
$$\mathcal{F}_{ex}[\{\rho _i\}]=\int d^3r\mathrm{\Phi }(\{n_\alpha (𝐫)\}),$$
(5)
where the Helmholtz free energy density $`\mathrm{\Phi }`$ is a function of the set of weighted densities
$$n_\alpha (𝐫)=\sum _{i=1}^{\nu }\int d^3r^{}\rho _i(𝐫^{})\omega _i^\alpha (𝐫-𝐫^{})$$
(6)
and explicit expressions are available for the weight functions $`\omega _i^\alpha `$ and for $`\mathrm{\Phi }`$. In this theory the free energy of the homogeneous mixture is identical to that from Percus-Yevick or scaled-particle theory. Within this approach Eq. (3) reduces to
$$\beta W_t(𝐫)=\int d^3r^{}\sum _\alpha \mathrm{\Psi }^\alpha (𝐫^{})\omega _b^\alpha (𝐫^{}-𝐫)$$
(7)
with $`i=b`$ and
$$\mathrm{\Psi }^\alpha (𝐫^{})\equiv \left(\frac{\partial \beta \mathrm{\Phi }(\{n_\alpha \})}{\partial n_\alpha }\right)_{𝐫^{}}-\left(\frac{\partial \beta \mathrm{\Phi }(\{n_\alpha \})}{\partial n_\alpha }\right)_{\infty },$$
(8)
i.e., the grand potential difference consists of a sum of convolutions of the functions $`\mathrm{\Psi }^\alpha `$ with the weight functions $`\omega _b^\alpha `$ of the big particle. The index $`\alpha `$ labels $`4`$ scalar plus $`2`$ vector weights. Since the derivatives in Eq. (8) are evaluated at equilibrium, the weighted densities must be obtained from Eq. (6) once the density profiles $`\rho _i(𝐫)`$ have been obtained by minimizing the free energy functional. However, for the binary mixture the limit $`\rho _b(𝐫)\to 0`$ implies that $`n_\alpha (𝐫)`$ involves only $`\rho _s(𝐫)`$, which is given by the solution of the Euler-Lagrange equation for the pure small-sphere fluid. The depletion potential $`W(𝐫)`$ is then given by Eq. (7) with $`n_\alpha (𝐫)`$ determined in this way. Both schemes have been used to calculate the depletion potential for a big hard sphere, immersed in a sea of small spheres, near a planar hard wall. Before describing our results we remark upon two limiting cases of the DFT treatment.
(i) The present functional reduces to the exact low density limit $`\beta \mathcal{F}_{ex}[\{\rho _i\}]=-\frac{1}{2}\sum _{i,j}\int d^3r\int d^3r^{}\rho _i(𝐫)\rho _j(𝐫^{})f_{ij}(𝐫-𝐫^{})`$, where $`f_{ij}`$ is the Mayer bond between species $`i`$ and $`j`$, when all the densities are small, and the depletion potential simplifies to
$$\beta W(𝐫)=-\int d^3r^{}(\rho _s(𝐫^{})-\rho _s(\infty ))f_{bs}(𝐫-𝐫^{})$$
(9)
where, in the same limit, $`\rho _s(𝐫)=\rho _s(\infty )\mathrm{exp}(-\beta V_s(𝐫))`$. For hard spheres $`f_{bs}(𝐫-𝐫^{})=-\mathrm{\Theta }((R_s+R_b)-|𝐫-𝐫^{}|)`$, where $`\mathrm{\Theta }`$ is the Heaviside function, and one can show that for a hard potential $`V_s(𝐫)`$, $`\beta W(𝐫)`$ is $`-\rho _s(\infty )`$ times the overlap volume of the exclusion sphere of the big particle with that of the hard wall, i.e., one recovers the Asakura-Oosawa result for the depletion potential.
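In this limit the potential is purely geometric. For the sphere-wall case the overlap is a spherical cap of the exclusion sphere of radius $`R_s+R_b`$ cut off by the plane bounding the wall’s depletion layer of thickness $`R_s`$; a small sketch under this parametrization (the function name and example numbers are ours):

```python
import math

def beta_W_AO(z, Rb, Rs, rho_s):
    """Asakura-Oosawa depletion potential (units of kT) for a big sphere
    whose surface is a distance z from a planar hard wall."""
    if z >= 2.0 * Rs:
        return 0.0
    h = 2.0 * Rs - z                 # height of the overlap cap
    R = Rb + Rs                      # radius of the exclusion sphere
    V_cap = math.pi * h * h * (3.0 * R - h) / 3.0
    return -rho_s * V_cap            # -rho_s(inf) times overlap volume

Rs, Rb = 1.0, 5.0                    # size ratio s = 0.2
eta_s = 0.2
rho_s = eta_s / (4.0 * math.pi * Rs**3 / 3.0)
print(beta_W_AO(0.0, Rb, Rs, rho_s)) # contact value, < 0 (attraction)
```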
(ii) In the limit where the two species have equal radii, i.e., $`s\equiv R_s/R_b=1`$, it is straightforward to show that $`c_b^{(1)}`$ reduces to the direct correlation function of a pure ($`s`$) fluid and that the depletion potential is given by $`\beta W(𝐫)=-\mathrm{ln}(\rho _s(𝐫)/\rho _s(\infty ))`$, which is the exact result.
Since the Rosenfeld functional is known to yield very accurate results for the density profile of a pure fluid near a hard wall and for the radial distribution function, obtained by fixing a particle at the origin and calculating the one-body density profile, we have good reason to expect that the depletion potential calculated from DFT for $`s\lesssim 1`$ should be rather accurate for all (fluid) densities $`\rho _s(\infty )`$. On the other hand, for small size ratios the accuracy of the mixture DFT is not known; we test this by making comparison with simulations.
In Fig. 1 we show $`W_t(z)`$ for the binary hard sphere mixture, with $`s=0.2`$, at a planar hard wall, as calculated using the Rosenfeld functional for a fixed packing fraction $`\eta _s\equiv 4\pi R_s^3\rho _s(\infty )/3=0.2`$ of the small spheres and 4 different values of $`\eta _b\equiv 4\pi R_b^3\rho _b(\infty )/3`$. $`z`$ is the distance between the surface of the big sphere and the wall. For $`\eta _b=10^{-4}`$, $`W_t(z)`$ is indistinguishable from the depletion potential $`W(z)`$ calculated using the second, direct, method. Moreover, at this concentration $`\rho _s(z)`$ is indistinguishable from the profile of the pure fluid. As both functions converge fairly rapidly to their limits (the results for $`\eta _b=10^{-2}`$ differ from those for $`10^{-4}`$ by at most 1.6%), we conclude that this is a practicable method of calculating depletion potentials via DFT. We note that for small values of $`\eta _b`$, $`W_t(z)`$ (and $`\rho _s(z)`$) exhibit decaying oscillations whose period is $`2R_s`$, whereas for $`\eta _b\gtrsim 5\times 10^{-2}`$ additional, longer period oscillations develop. The latter reflect the ordering of the big spheres, which becomes pronounced at high packings. Figure 2 shows the depletion potential $`W(z)`$ calculated for a size ratio $`s=0.2`$ and $`\eta _s=0.3`$ alongside the result from Ref. . The latter was obtained from canonical Monte Carlo simulations of the depletion force, as given by an integral of the local density of small spheres in contact with the large sphere. Also plotted in Fig. 2 are our own simulation results, based on the formula $`p(z)\propto \mathrm{exp}(-\beta W(z))`$. As direct measurement of the probability density $`p(z)`$ of finding the big particle at a distance $`z`$ from the wall yields poor statistics, we measured the probability ratio $`p(z)/p(z+\mathrm{\Delta }z)`$ using umbrella sampling in a grand canonical Monte Carlo simulation of the small spheres, i.e., we determined $`\beta (W(z+\mathrm{\Delta }z)-W(z))`$, where $`\mathrm{\Delta }z`$ is a step length, and we set $`W(0)`$ equal to the DFT value. Our DFT results are in excellent agreement with those of both simulations: the height of the depletion barrier (the maximum value of $`W`$ minus the contact value $`W(0)`$) is given almost exactly and the subsequent extrema are closely reproduced by the theory. That two independent sets of simulation results, based on totally different routes to the depletion potential, agree so closely is pleasing and attests to the accuracy with which these potentials can be calculated. We find a similarly high level of agreement between DFT and the results of Ref. for $`\eta _s=0.1,0.2`$ and the smaller size ratio $`s=0.1`$.
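The reconstruction of $`W(z)`$ from the measured step ratios is a simple cumulative sum; a minimal sketch of the bookkeeping, with placeholder data:

```python
import numpy as np

# beta * (W(z + dz) - W(z)) measured for successive windows, e.g. from
# umbrella-sampled ratios p(z)/p(z + dz); the values below are hypothetical.
dW = np.array([0.8, 0.5, 0.2, -0.1, -0.3])
dz = 0.1                                    # step length, in units of 2*R_s

W = np.concatenate(([0.0], np.cumsum(dW)))  # beta*W on the grid, relative to z = 0
z = dz * np.arange(W.size)
# The absolute offset is fixed by matching W(0) to the DFT contact value.
print(dict(zip(z.round(2), W)))
```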
The oscillatory behavior of $`W(z)`$ warrants further discussion. From the general theory of the asymptotic decay of density profiles in mixtures with short-ranged interaction potentials it is known that the profiles of all species near a wall (or a fixed particle) exerting a finite ranged external potential exhibit the same characteristic decay. Thus, for a binary mixture near a hard wall, $`\rho _i(z)-\rho _i(\infty )\sim A_{wi}\mathrm{exp}(-a_0z)\mathrm{cos}(a_1z-\mathrm{\Theta }_{wi})`$, $`z\to \infty `$, where $`i=b,s`$. The amplitude $`A_{wi}`$ and phase $`\mathrm{\Theta }_{wi}`$ depend on the particular species but $`a_0`$ and $`a_1`$ are common. In the limit $`\rho _b(\infty )\to 0`$ these are given by the solution of $`1-\rho _s(\infty )\widehat{c}_{ss}(a)=0`$, where $`\widehat{c}_{ss}(a)`$ is the Fourier transform of the two-body direct correlation function of the homogeneous pure fluid of density $`\rho _s(\infty )`$. $`a_1`$ is the real and $`a_0`$ the imaginary part of the solution $`a`$ with the smallest imaginary part. We have confirmed from our numerical results that for $`\eta _b=10^{-4}`$ and $`s=0.2`$, $`\rho _b(z)`$ and $`\rho _s(z)`$ exhibit the same values of $`a_0`$ and $`a_1`$. Moreover, for fixed $`\eta _s`$, both $`a_0`$ and $`a_1`$ are, as predicted, independent of the size ratio $`s`$. It follows from Eq. (2) that $`\beta W(z)\sim A_{wb}\mathrm{exp}(-a_0z)\mathrm{cos}(a_1z-\mathrm{\Theta }_{wb})`$, $`z\to \infty `$, and, apart from the amplitude and phase, the asymptotic decay of the depletion potential is also independent of $`s`$. From a fit to the numerical data for $`\eta _s=0.2`$ we obtained $`a_0\approx 2.1/(2R_s)`$ and $`a_1\approx 5.2/(2R_s)`$. These values are close to those obtained from solving $`1-\rho _s(\infty )\widehat{c}_{ss}(a)=0`$ for the pure fluid where, consistent with the Rosenfeld DFT, $`\widehat{c}_{ss}`$ is given by Percus-Yevick theory. Moreover, and in keeping with earlier studies of bulk pair correlation functions, we find that the leading-order asymptotic result provides a good description of the intermediate as well as the longest-ranged behavior of the depletion potential. We note that the period of the oscillations $`2\pi /a_1`$ decreases whereas the decay length $`a_0^{-1}`$ increases with increasing $`\eta _s`$.
The success of the comparisons between our DFT results and those of simulation leads us to expect that the Rosenfeld functional will provide an accurate description of depletion potentials for size ratios down to $`s=0.1`$ and for packing fractions $`\eta _s`$ up to $`0.3`$. (For smaller size ratios we expect the Percus-Yevick theory of hard sphere mixtures, upon which the functional is based, to become inaccurate, and there is no reason to expect that any approximate mixture functional would reproduce the exact $`s\to 0`$ results for the depletion potential, which are obtained by making the Derjaguin approximation at the outset.) This implies that our procedure can be profitably employed for some of the more complex geometries investigated in experiments. We emphasize that our present procedure has a crucial advantage over the brute-force application of DFT to the calculation of depletion potentials. In the latter one would calculate $`W(𝐫)`$ either from the total free energy of the inhomogeneous fluid or from the local density of the small particles in contact with the big one. In contrast to the present scheme, both methods require much numerical effort to minimize the free energy functional, since the original symmetry of the density profile $`\rho _s(𝐫)`$ is broken by the presence of the big particle. A likely limitation of our procedure is that one requires a reliable free energy functional for the (binary) mixture of big and small particles. Although accurate functionals are available for hard sphere mixtures, this is not the case for most other types of mixtures.
As a final remark we note that it is possible to take simulation data for $`\rho _s(𝐫)`$, computed in the absence of the big test particle, and insert these into the DFT expression Eq. (7). Although such a procedure is not self-consistent, in view of the high accuracy of the Rosenfeld functional it does provide an alternative means of calculating depletion potentials for complex geometries such as wedges where a direct simulation of the potential or force is extremely difficult.
We are grateful to R. Dickman for providing his simulation results for the depletion potential which we plot in Fig. 2. We benefited from conversations with R. Van Roij. This research was supported in part by the EPSRC under GR/L89013.
# Phenomenon of electrization caused by gravitation of massive body
The value of the excess charge in the core of a massive body
(and the opposite-in-sign excess charge at its surface)
caused by the influence of gravitational forces is determined.
Up to now the old problem of the connection between the rotational and magnetic characteristics of heavenly bodies has been of great interest. One aspect of this problem is the question of the disturbance of the quasi-neutrality of the substance of a body (neutrality on average over scales greater than the lattice parameter or the Debye radius) by gravitational forces. In addition to condensing the substance of the body, these forces also create an excess of positive charge in the center of a heavenly body and an excess of negative charge on its periphery (gravitational forces act more strongly on heavy nuclei than on light electrons). The electric field caused by this effect results in the compression of the electronic component of the substance of a heavenly body.
Rotating around its own axis, such a body must, owing to the redistribution of its charge, acquire a magnetic moment. Attempts have been made to explain the magnetic properties of heavenly bodies by the mechanism under discussion. The aim of this paper is to show the hopelessness of such attempts, owing to the negligible magnitude of the charge-redistribution effect, which is governed by the parameter $`\alpha `$:
$$\alpha =\frac{Gm_p^2}{e^2}\sim 10^{-36},$$
$`\left(1\right)`$
Here $`G`$ is the gravitational constant, and $`m_p`$ and $`e`$ are the mass and charge of the proton.
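A direct evaluation in CGS units (rounded constants) confirms the order of magnitude:

```python
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.673e-24   # proton mass, g
e = 4.803e-10     # proton charge, esu

alpha = G * m_p**2 / e**2
print(alpha)      # ~8e-37, i.e. of order 10^-36
```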
1. Let us start with elementary estimates directly concerning the solid (crystalline) state of a substance. In this case the redistribution of charge amounts to a displacement of each nucleus relative to the center of its Wigner-Seitz cell and leads to the appearance of a dipole moment and polarization. The electric and gravitational forces acting on a displaced nucleus must balance. Hence
$$Ze𝐄=-GM^2\nabla \int \frac{d𝐱^{}n_p(𝐱^{})}{|𝐱-𝐱^{}|},$$
$`\left(2\right)`$
Here $`Ze`$ and $`M`$ are the charge and mass of a nucleus, $`𝐄`$ is the electric field, and $`n_p`$ is the concentration of nuclei. Taking into account the equation $`\mathrm{div}𝐄=4\pi \delta \rho `$ ($`\delta \rho `$ is the excess charge density), one finds:
$$\delta \rho =\left(\frac{A}{Z}\right)^2\alpha \rho _p\sim \alpha \rho _p.$$
$`\left(3\right)`$
Here $`A`$ is the mass of a nucleus in units of the proton mass, and $`\rho _p`$ is the electric charge density of the nuclei.
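Estimate (3) follows by taking the divergence of Eq. (2) and using $`\nabla ^2|𝐱-𝐱^{}|^{-1}=-4\pi \delta (𝐱-𝐱^{})`$; a short worked step:

$$Ze\mathrm{div}𝐄=-GM^2\mathrm{\Delta }\int \frac{d𝐱^{}n_p(𝐱^{})}{|𝐱-𝐱^{}|}=4\pi GM^2n_p(𝐱),$$

so that, with $`\mathrm{div}𝐄=4\pi \delta \rho `$, $`M=Am_p`$ and $`\rho _p=Zen_p`$,

$$\delta \rho =\frac{GM^2}{(Ze)^2}\rho _p=\left(\frac{A}{Z}\right)^2\alpha \rho _p.$$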
Simple dimensional considerations support the estimate (3).
The dimensionless ratio $`\delta \rho /\rho _p`$ must be proportional to $`G`$ (more precisely, to $`\alpha `$), as follows from applying perturbation theory to the gravitational interaction. The corresponding coefficient of proportionality may depend on the dimensionless parameters of the substance: $`Z`$; the ratio of the proton and electron masses; and the ratio of the Coulomb energy of a single particle to the larger of $`E_F`$ and $`T`$ ($`E_F`$ is the Fermi energy and $`T`$ the temperature). For astronomical bodies these parameters differ from 1 by no more than a factor of 100 - 1000, so they cannot appreciably affect eq. (3) (since $`\delta \rho /\rho _p`$ depends on $`G`$ linearly). The negative charge, which compensates (3) and is localized at the surface of the body, equals
$$Q=\int d𝐱\delta \rho \sim \alpha Q_p,$$
$`\left(3^{}\right)`$
Here $`Q_p`$ is the total charge of the nuclei of the body. The polarization $`𝐏=Zen_p\delta `$ ($`\delta `$ is the shift of a nucleus) is equal to $`-𝐄/4\pi `$, because the induction $`𝐃=0`$ owing to the absence of external charges. As a result, an elementary calculation using eq. (3) gives for the ratio of the magnetic moment of the body to its mechanical moment
$$\sim \alpha \frac{e}{m_pc}.$$
$`\left(4\right)`$
Owing to the smallness of the parameter $`\alpha `$ (see $`(1)`$), the above values are exceedingly small, and the charge-redistribution effect under discussion cannot have directly observable manifestations.
Suffice it to say that for the Earth (mass $`10^{27}`$ g, radius $`10^9`$ cm) the magnitude of the surface charge $`(3^{})`$ corresponds to about one electron per 1 $`m^2`$ of the surface.
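A rough numerical check of this statement (round values; $`Z/A`$ taken as $`1/2`$ for ordinary matter):

```python
M_E = 6.0e27      # Earth mass, g
R_E = 6.4e8       # Earth radius, cm
m_p = 1.67e-24    # proton mass, g
alpha = 8.1e-37

N_protons = 0.5 * M_E / m_p           # Z/A ~ 1/2 for ordinary matter
Q_excess = alpha * N_protons          # surface charge (3'), in units of e
area_m2 = 4.0 * 3.1416 * (R_E / 100.0) ** 2
print(Q_excess / area_m2)             # a few electrons per square meter
```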
2. A more rigorous derivation of eq. (2) for the solid state of a substance is based on selecting from the total energy of the system the part that depends on the shifts of the nuclei $`\delta _k`$ ($`k`$ is the number of a nucleus), and minimizing this part over $`\delta _k`$ accompanied by the substitution $`\delta _k\to 𝐩_𝐤/(Zen_p)`$. In addition, the lattice sums may be replaced by integrals:
$$\sum _k\to \int d𝐱n_p,\text{ }𝐩_𝐤\to 𝐏\left(𝐱\right).$$
Let us start with the gravitational energy of the internuclear interaction:
$$E_{gr}=-\frac{GM^2}{2}\sum _{k\ne k^{}}|𝐱_𝐤-𝐱_{𝐤^{}}|^{-1}$$
Substituting $`𝐱_𝐤\to 𝐱_𝐤+\delta _𝐤`$ and expanding in $`\delta `$ up to first order, one finds:
$$E_{gr}=-\frac{GM^2}{Ze}\int d𝐱\left\{𝐏\left(𝐱\right)\cdot \nabla \int \frac{d𝐱^{}n_p\left(𝐱^{}\right)}{\left|𝐱-𝐱^{}\right|}\right\}.$$
$`\left(5\right)`$
The Coulomb energy of the interaction of nuclei and electrons can be written as:
$$E_c=\frac{(Ze)^2}{2}\sum _{k\ne k^{}}\left|𝐱_𝐤-𝐱_{𝐤^{}}\right|^{-1}-Ze^2\int d𝐱n\left(𝐱\right)\sum _k\left|𝐱_𝐤-𝐱\right|^{-1}+\frac{e^2}{2}\int \frac{d𝐱d𝐱^{}n\left(𝐱\right)n\left(𝐱^{}\right)}{\left|𝐱-𝐱^{}\right|},$$
here $`n=Zn_p`$ is the concentration of electrons. In repeating the considerations which led to (5) we must bear in mind two circumstances:
1. Now we must factorize $`\delta `$ (or $`𝐏`$) to the second degree inclusive, (as $`\alpha `$ is very small, see (1)).
2. Factorization with respect to $`\delta `$ is impossible in case of interaction with electrons of the same cell because of the necessity to take into account the contribution of region $`r<\delta `$ and this case must be considered exactly.
This case is contained in the second term of $E_c$, from which one must separate out the integral over the cell volume:
$$-Ze^2\int \frac{d\mathbf{x}\,n\left(\mathbf{x}\right)}{\left|\mathbf{x}-\boldsymbol{\delta }\right|},$$
multiplied by the total number of nuclei. It leads to the following expression for the part depending on $`𝐏`$:
$$E_c^{\left(1\right)}=\frac{2\pi }{3}\int d\mathbf{x}\,\mathbf{P}^2.$$
In the remaining part of $E_c$, expansion in $\delta $ yields the ordinary dipole–dipole interaction term, to which the interaction of different cells reduces:
$$E_c^{\left(2\right)}=\int d\mathbf{x}\,d\mathbf{x}^{\prime }\left(\frac{\left(\mathbf{P}\left(\mathbf{x}\right),\mathbf{P}\left(\mathbf{x}^{\prime }\right)\right)}{2\left|\mathbf{x}-\mathbf{x}^{\prime }\right|^3}-3\frac{\left(\mathbf{P}\left(\mathbf{x}\right),\mathbf{x}-\mathbf{x}^{\prime }\right)\left(\mathbf{P}\left(\mathbf{x}^{\prime }\right),\mathbf{x}-\mathbf{x}^{\prime }\right)}{2\left|\mathbf{x}-\mathbf{x}^{\prime }\right|^5}\right)$$
(The terms of $E_c$ that are linear in $\mathbf{P}$ vanish owing to the neutrality of the system.) The expression for $E_c^{(2)}$ may be rewritten as (see the Appendix):
$$E_c^{\left(2\right)}=-\frac{2\pi }{3}\int d\mathbf{x}\left[\mathbf{P}^2\left(\mathbf{x}\right)-3\left(\mathbf{P}\left(\mathbf{x}\right),\nabla \right)\frac{\left(\nabla ,\mathbf{P}\left(\mathbf{x}\right)\right)}{\mathrm{\Delta }}\right]$$
For that reason the sum of $`E_c^{(1)}`$ and $`E_c^{(2)}`$ can be written as:
$$E_c=2\pi \int d\mathbf{x}\,\mathbf{P}_l^2,$$
$`\left(6\right)`$
here $\mathbf{P}_l=\frac{\nabla \,\mathrm{div}}{\mathrm{\Delta }}\mathbf{P}$ is the longitudinal part of the vector $\mathbf{P}$ (it is in fact $\mathbf{P}_l$ that enters $(5)$). Minimization of the sum of $(5)$ and $(6)$ over $\mathbf{P}_l$ (with $\mathbf{E}=-4\pi \mathbf{P}$) brings us back to eq. (2).
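Explicitly, the variational step runs as follows (a sketch, with the signs as reconstructed above):

$$\frac{\delta }{\delta \mathbf{P}_l}\left[-\frac{GM^2}{Ze}\int d\mathbf{x}\left(\mathbf{P}\cdot \nabla \right)\int \frac{d\mathbf{x}^{\prime }\,n_p\left(\mathbf{x}^{\prime }\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime }\right|}+2\pi \int d\mathbf{x}\,\mathbf{P}_l^2\right]=0\;\Longrightarrow \;4\pi \mathbf{P}_l=\frac{GM^2}{Ze}\nabla \int \frac{d\mathbf{x}^{\prime }\,n_p\left(\mathbf{x}^{\prime }\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime }\right|},$$

and substituting $\mathbf{E}=-4\pi \mathbf{P}$ (only the longitudinal part of $\mathbf{P}$ is generated) reproduces eq. (2).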
Let us emphasize that the last equation contains no correction for the difference between the acting (local) field and the average one. Such a correction would appear if the nucleus under consideration — the one for which the force balance (2) is written — were placed in empty space. In fact we have the interaction of a nucleus with the electrons of its own cell (cavity); it is described by $E_c$ and supplies the required contribution $4\pi \mathbf{P}/3$ to the field intensity, because $\delta E/\delta \mathbf{P}=-\mathbf{E}$. To complete the proof of eq. (2) one should make sure that the parts of the system energy not considered above (more precisely, their parts depending on $\mathbf{P}$) do not influence the result. First of all, this is true for the electronic components of the energy — kinetic, exchange, and correlation.
In the case of strongly compressed matter — the case most interesting for celestial bodies containing a solid-state substance — the kinetic energy of the free electron gas is the important contribution (the other components of the energy are very small compared with the Coulomb energy taken into account above).
Expanding this energy in the nuclear shift relative to the center of a cell,
$$\delta E_{kin}\simeq \int d\mathbf{x}\,n\,\delta U,\qquad \delta U=Ze^2\left(\frac{1}{r}-\frac{1}{\left|\mathbf{r}+\boldsymbol{\delta }\right|}\right),$$
one obtains zero, owing to the spherical symmetry of $n$ and the well-known expansion of the Coulomb term in a series of Legendre polynomials (see the sketch below). As regards the energy of the nuclei in the solid state at low temperatures, one should consider only the zero-point energy of the nuclear oscillations, equal to $\frac{3}{2}\hbar \omega _0$ per nucleus ($\omega _0$ being the oscillation frequency). Hence only the part of $\omega _0$ that depends appreciably on the shift $\delta $, or on $\mathbf{P}$, could affect the proof given above. But for the exceptionally small value of $\delta $ in question ($\delta \sim \alpha R$ — see point 1, $R$ being the radius of the body), the shift of the oscillation center of a nucleus in the cell model does not influence the oscillation frequency at all. The change in energy on shifting a nucleus by $\delta _r$ equals $Z^2e^2\delta _r^2/(2R_c^3)$ (where $R_c$ is the radius of a cell), and under the substitution $\delta _r\to \delta _r+\delta $ (the shift of the oscillation center) the square of the frequency, being the second derivative of the energy with respect to $\delta _r$, does not change at all.
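The sketch behind the vanishing of $\delta E_{kin}$: for $\delta <r$ the Coulomb kernel expands as

$$\frac{1}{\left|\mathbf{r}+\boldsymbol{\delta }\right|}=\underset{l=0}{\overset{\mathrm{\infty }}{\sum }}\frac{\delta ^l}{r^{l+1}}P_l\left(\cos \theta \right),$$

$\theta $ being the angle between $\mathbf{r}$ and $-\boldsymbol{\delta }$. For a spherically symmetric $n(r)$ the angular integration annihilates every $l\ge 1$ term, the $l=0$ term cancels the $1/r$ in $\delta U$, and the excluded region $r<\delta $ contributes only at higher order in $\delta $.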
3. Eq. (3) is in fact of a universal nature: it is valid irrespective of the state of the substance. We will use the density-functional method, writing the free energy of the system (in the general case the temperature $T$ is nonzero) as a functional of the densities of electrons, $n$, and nuclei, $n_p$:
$$F\{n,n_p\}=F_0+E_{sc}+F_{xc}-\mu \int d\mathbf{x}\,n-\mu _p\int d\mathbf{x}\,n_p,$$
$`\left(7\right)`$
here the first term corresponds to the free electron and nuclear gases, the second to the Coulomb and gravitational energies of their interaction in the self-consistent-field approximation, and the third to exchange and correlation effects. The last two terms contain the Lagrange multipliers and enforce the conservation of the total numbers of electrons and nuclei. The minimum of $F$ with respect to $n$ and $n_p$ defines the equilibrium distributions of these quantities. Writing
$$E_{sc}=-\frac{GM^2}{2}\int \frac{d\mathbf{x}\,d\mathbf{x}^{\prime }}{\left|\mathbf{x}-\mathbf{x}^{\prime }\right|}n_pn_p^{\prime }+\frac{e^2}{2}\int \frac{d\mathbf{x}\,d\mathbf{x}^{\prime }}{\left|\mathbf{x}-\mathbf{x}^{\prime }\right|}\left(n-zn_p\right)\left(n^{\prime }-zn_p^{\prime }\right),$$
we obtain the conditions for the minimum of $(7)$ with respect to $n$ and $n_p$:
$$\mathrm{\Delta }\frac{\delta \left(F_0+F_{xc}\right)}{\delta n\left(\mathbf{x}\right)}=4\pi e^2\left(n-zn_p\right),$$
$`\left(7^{}\right)`$
$$\mathrm{\Delta }\frac{\delta \left(F_0+F_{xc}\right)}{\delta n_p\left(\mathbf{x}\right)}=-4\pi ze^2\left(n-zn_p\right)-4\pi GM^2n_p.$$
$`\left(7^{\prime \prime }\right)`$
If the left-hand side of eq. $(7^{\prime \prime })$ could be omitted then, taking into account that $\delta \rho =e(zn_p-n)$ and $\rho _p=zen_p$, we would obtain eq. $(3)$; and if $(7^{\prime \prime })$ is substituted into $(7^{\prime })$ and appropriate simplifications are made, then $(7^{\prime })$ reduces to the Chandrasekhar equations defining the equilibrium configuration of the electrons and nuclei. Let us emphasize that the left-hand side of that equation is determined by the lightest particles — the electrons — while after the above substitution of $(7^{\prime \prime })$ into $(7^{\prime })$ the right-hand side contains only gravitational quantities, although gravitation does not act on the electrons directly. The electrons are affected by the electric field investigated in this paper, which merely coincides quantitatively (see $(3)$) with the gravitational field.
Thus eqs. $(7^{\prime })$ and $(7^{\prime \prime })$ may be rewritten as:
$$\delta \rho =\alpha \rho _p\left(1+\sigma \right)^{-1};\qquad \sigma =\frac{\mathrm{\Delta }\,\delta F/\delta n_p}{z\,\mathrm{\Delta }\,\delta F/\delta n},$$
$`\left(8\right)`$
where here and below $`F=F_0+F_{xc}`$.
The appearance of $\sigma $ either leaves the magnitude of $\delta \rho /\rho _p$ essentially unchanged or decreases this ratio. The single case in which it could increase considerably — $\sigma $ exceptionally close to $-1$ — is practically unrealizable. We may therefore take $\sigma \ll 1$. Let us illustrate this with two examples. For both it is supposed that the electron gas is strongly compressed and degenerate, so that the corresponding contribution to $F$ is $\sim \frac{\hbar ^2}{m}\int d\mathbf{x}\,n^{5/3}$. In the first example the nuclei are localized at the sites of a lattice, and the energy of their zero-point oscillations contributes $\delta F\sim \frac{Ze\hbar }{\sqrt{M}}\int d\mathbf{x}\,n_p^{3/2}$ (see above). Then for $\sigma $ in $(8)$ we find:
$$\sigma \sim \sqrt{\frac{m}{MZ}}\left(a_0n^{1/3}\right)^{-1/2}\ll 1,$$
where $m$ is the electron mass and $a_0=\hbar ^2/me^2$ the Bohr radius. This smallness is connected with the inequalities $m/M\ll 1$ and $a_0n^{1/3}\gg 1$ in a compressed substance.
The second example is a weakly non-ideal Boltzmann system of nuclei, for which $F(n_p)\sim -e^3\int d\mathbf{x}\,n_p^{3/2}/\sqrt{T}$ is the Debye–Hückel correlation correction. For $\sigma $ one then finds $\sigma \sim \frac{e\sqrt{n^{1/3}/T}}{\sqrt{Z}\,a_0n^{1/3}}\ll 1$, because the condition of weak non-ideality is $T\gg e^2n^{1/3}$.
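A rough numerical illustration of the first estimate, using the formula in the reconstructed form above and assumed white-dwarf-like numbers (carbon composition, $n\sim 10^{30}$ cm$^{-3}$ — these values are ours, chosen only to display the smallness of $\sigma $):

```python
import math

m, mp = 9.11e-28, 1.67e-24   # electron and proton masses, g
Z, A  = 6, 12                # carbon composition (assumed)
M     = A * mp               # nuclear mass, g
a0    = 5.29e-9              # Bohr radius, cm
n     = 1e30                 # electron density, cm^-3 (assumed)

# sigma ~ sqrt(m/(M Z)) * (a0 n^(1/3))^(-1/2), degenerate-electron case
sigma = math.sqrt(m / (M * Z)) * (a0 * n**(1/3))**-0.5
print(f"a0 n^(1/3) = {a0 * n**(1/3):.0f}, sigma ~ {sigma:.0e}")  # ~4e-4 << 1
```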
4. In conclusion, let us return to the question of the minimum of $(7)$ in connection with the violation of the local electric neutrality of the system.
Note that such a violation is typical of the crystalline state of a substance even in the absence of gravitational forces, which is evident: the electrons are delocalized, in contrast to the nuclei, which are localized at lattice points.
It is important that this violation is not described by a minimum of $F$ in the sense of a point in functional space at which the functional derivative vanishes. Here we deal with a boundary minimum, reached when the parameter defining the nuclear localization length tends to its limiting value, which equals zero.
Let us consider the purely Coulomb component of $E_{sc}$, partitioning the body into neutral, on the whole spherical Wigner–Seitz cells with a nucleus at the center. In this model the radius of a cell is $R$ and the nucleus is spread over a sphere of radius $\rho $. The model gives the energy $E_{sc}=-\frac{3}{10}\frac{Z^2e^2}{R}\frac{2x^3+4x^2+6x+3}{(x^2+x+1)^2}$, where $x=\rho /R$. At $x=0$ this expression gives the well-known binding energy of the lattice, $-\frac{9Z^2e^2}{10R}$. It is evident that the maximum of $|E_{sc}|$ is indeed reached on the boundary of the admissible region (at $x=0$ and fixed $R$).
$$\text{ }Appendix$$
The initial expression for $E_c^{(2)}$ can be written as:
$$E_c^{\left(2\right)}=\int d\mathbf{x}\,P_i\left(\mathbf{x}\right)K_{ij}\left(\mathbf{k}\right)P_j\left(\mathbf{x}\right),$$
where $\mathbf{k}=-i\nabla $ acts on $P_j(\mathbf{x})$, and
$$K_{ij}\left(\mathbf{k}\right)=\frac{1}{2}\int \frac{d\boldsymbol{\lambda }}{\lambda ^3}\left(\delta _{ij}-3\frac{\lambda _i\lambda _j}{\lambda ^2}\right)e^{i\mathbf{k}\cdot \boldsymbol{\lambda }}.$$
This integral satisfies the evident condition $K_{ii}=0$, and hence it may be represented as
$$K_{ij}=-\frac{1}{2k^2}\,k_lK_{lm}k_m\left(\delta _{ij}-\frac{3k_ik_j}{k^2}\right).$$
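The coefficient here follows from the tracelessness alone (a brief sketch): since $K_{ii}=0$, the tensor must have the form $K_{ij}=A(k)\left(\delta _{ij}-3k_ik_j/k^2\right)$; contracting with $k_ik_j/k^2$ gives $k_iK_{ij}k_j/k^2=-2A$, whence $A=-k_lK_{lm}k_m/(2k^2)$.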
Substituting this form into the expression for $E_c^{(2)}$ quoted in the text involves a calculation that is not complicated, but cumbersome.
In conclusion, we would like to acknowledge critical discussions of these questions with B. V. Vasilyev, V. I. Grigoryev, and V. I. Maximov.
Lebedev Institute of Physics RAS,
Physics Department, Moscow State University.
$$\text{ }References$$
1. Tamm I. E. Fundamentals of the Theory of Electricity. Moscow: Nauka, 1989.
2. Landau L. D., Lifshitz E. M. Statistical Physics, Part 1. Moscow: Nauka, 1995.
3. Lundqvist S., March N. H. (eds.). Theory of the Inhomogeneous Electron Gas. Moscow: Mir, 1987.
# Comment on “Mechanisms of synchronization and pattern formation in a lattice of pulse-coupled oscillators”
## I Introduction
In Ref. , Diaz-Guilera et al. propose a new analytical procedure to study a lattice model of pulse-coupled oscillators on a one-dimensional ring with unidirectional coupling. This procedure, which consists in the linear stability analysis of the fixed points of certain return maps obtained from the original system by means of matrix manipulations, is based on some ad hoc assumptions and on a particular form of the phase response curve (PRC). Diaz-Guilera et al. claim that their description gives complete information about the original system, and that the results they obtain for the particular PRC they consider can also be extrapolated to a general PRC. In this comment we show that the model they consider is very specific and unable to account even for the case of a linear PRC. We also show that, in addition, their results do not give complete information about the original system, not even for the specific PRC they consider, because many of the assumptions involved are not valid.
A first step toward simplifying the system and obtaining a linear description is to consider a linear PRC. This requirement on the PRC is often not too restrictive, since one can expand the PRC in powers of the convexity of the driving, hoping to grasp the behavior of the system by considering only the first two terms in the expansion, i.e., a constant term plus a term proportional to the phase. Diaz-Guilera et al., in contrast, consider only the term proportional to the phase and claim that the constant term is unimportant, since its only effect is to shift the threshold. This claim, which is proved neither in Ref. nor in the reference they quote, is not true. This can easily be seen, for instance, in the inhibitory situation by considering the linear PRC $\mathrm{\Delta }(\varphi )=\alpha +ϵ\varphi $, with $-1\le \alpha $, $ϵ<-\alpha $, and $\alpha <0$. In this case, when an oscillator receives a pulse it may or may not be instantaneously reset to zero, depending on the value of its phase $\varphi $. In the case addressed by Diaz-Guilera et al., when the interactions are inhibitory, an oscillator can never be reset to zero after receiving a pulse if $ϵ>-1$, or it is always reset to zero if $ϵ\le -1$; i.e., whether an oscillator is reset to zero or not does not depend on the value of its phase. This simple example makes it evident not only that the approximation of Diaz-Guilera et al. is unable to account for the general linear case, but also that it is highly pathological. In this sense, one should hardly expect the results they obtain to extrapolate to any other system with some degree of generality, as we will show.
## II Negative coupling
To proceed further with their analysis, Diaz-Guilera et al. also assume that the oscillators fire in a cyclic order, i.e., that advancements between oscillators are not allowed. This additional assumption, which holds in the all-to-all case, enables them to construct linear return maps for the particular PRC they consider. These return maps are intended to describe the dynamics of the system, and since they are linear this can be done easily by looking only at their fixed points and the corresponding eigenvalues. In their analysis, however, Diaz-Guilera et al. additionally restrict the fixed points to those in which each oscillator fires just once, neglecting other possibilities. For the case in which $ϵ$ is negative they found that the fixed points are stable, whereas they are unstable for the case in which $ϵ$ is positive. After performing some simulations for a system of three oscillators they realized that advancements are in fact possible, but they claim, again without proof, that advancements only matter during the transient dynamics. They then proceed with their analysis without taking this annoying hindrance into account.
In Fig. 1 we have displayed the typical time evolution for Diaz-Guilera et al.’s PRC in the case of three oscillators, when the phase of an oscillator is always reset to zero after receiving a pulse. For this coupling, Diaz-Guilera et al.’s results predict that there exists a stable fixed point in which, just before an oscillator fires, the other two should have a phase equal to $0.5$. Fig. 1, however, clearly illustrates that such a state is not reached. Diaz-Guilera et al. incorrectly computed the moduli of the eigenvalues of the $2\times 2$ matrix for $ϵ<-3/4$. The eigenvalues are $0$ and $-1$, whose moduli are not $0$ and $(1+ϵ)^{3/2}$, as they obtained. In this case the fixed point of the return map they construct is not stable, despite $ϵ$ being negative. Actually, for arbitrary initial conditions $\{\varphi _1^{\prime },\varphi _2^{\prime },\varphi _3^{\prime }\}$, where the oscillator with the highest phase has been taken as the first, the system eventually attains a state in which, just before the first oscillator reaches the threshold, the phases of the other two oscillators alternate between $\varphi _2=\varphi _1^{\prime }-\varphi _3^{\prime }$, $\varphi _3=\varphi _1^{\prime }-\varphi _3^{\prime }$ and $\varphi _2=1-(\varphi _1^{\prime }-\varphi _3^{\prime })$, $\varphi _3=1-(\varphi _1^{\prime }-\varphi _3^{\prime })$. Notice that although Diaz-Guilera et al.’s solution is a fixed point of the real dynamics, the set of initial conditions for which the system reaches the corresponding state has zero measure, since $\varphi _1^{\prime }-\varphi _3^{\prime }$ must be equal to $0.5$. In essence, this means that, given arbitrary initial conditions, the probability that Diaz-Guilera et al.’s results describe the eventual evolution of the system is zero. In this case, the real attractor of the dynamics is not precisely a fixed point of the “one-cycle” return map, but a state of period two, which in addition depends on the initial conditions.
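For readers who wish to check this behavior, a minimal event-driven sketch is given below (not the authors’ code; it assumes unidirectional nearest-neighbour coupling on a ring, instantaneous pulses, and inhibitory interactions so that no pulse-induced firings occur):

```python
import numpy as np

def simulate(phi, eps, alpha=0.0, n_events=12):
    """Event-driven sketch of a unidirectional ring of pulse-coupled
    oscillators with PRC Delta(phi) = alpha + eps*phi, and reset to zero
    whenever phi + Delta(phi) < 0.  Oscillators fire only by free
    evolution dphi/dt = 1 (inhibitory coupling assumed)."""
    phi = np.array(phi, dtype=float)
    pre_fire = []
    for _ in range(n_events):
        k = int(np.argmax(phi))            # next to reach the threshold 1
        phi += 1.0 - phi[k]                # advance all phases to that event
        pre_fire.append((k, phi.copy()))   # state just before k fires
        phi[k] = 0.0                       # firing and reset of k
        j = (k + 1) % len(phi)             # the pulse goes to k+1 only
        phi[j] = max(0.0, phi[j] + alpha + eps * phi[j])
    return pre_fire

# eps = -1, alpha = 0: the case of Fig. 1, every pulse resets its target.
for k, state in simulate([0.9, 0.5, 0.2], eps=-1.0):
    print(f"oscillator {k} fires; phases just before: {np.round(state, 2)}")
```

With the initial condition $\{0.9,0.5,0.2\}$ the pre-firing phases settle, after a short transient, onto $0.7=\varphi _1^{\prime }-\varphi _3^{\prime }$ and $0.3=1-(\varphi _1^{\prime }-\varphi _3^{\prime })$, and the firing order is not cyclic — both points made above.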
The failure of Diaz-Guilera et al.’s scheme to account even for the specific model they consider, when phase resettings are allowed, seems to be a general feature for any number of oscillators. An example of this behavior is illustrated in Fig. 2, where we consider the case of five oscillators. This time series clearly does not agree with Diaz-Guilera et al.’s predictions. At first glance, one could wonder whether this time series is just a long transient. It is easy to see, however, that a periodic state is reached in which each oscillator fires four times, not just once as Diaz-Guilera et al.’s procedure imposes. Notice that the phases separated by $5$ time units, for instance at $6652$ and $6657$, have the same values. In addition, the figure shows that the order in which the oscillators reach the threshold is not preserved. This result indicates that the assumption that the oscillators fire in a cyclic order, except perhaps during the transient, is in general not valid, and that other kinds of behavior with different periodicities appear. From the methodological point of view, the previous examples illustrate that the existence of advancements cannot be disregarded.
It is worth noticing that although in the appendix Diaz-Guilera et al. conclude that for an arbitrary number of oscillators the moduli of the eigenvalues are always lower than $1$ when $ϵ<0$, this result is in general not valid if the moduli are computed correctly. For instance, in the case of three oscillators there is an eigenvalue with modulus $1$ when $ϵ=-1$. We performed numerical simulations for the same PRC as in the previous figures, for numbers of oscillators ranging from $3$ to $1000$, and in all cases we found that Diaz-Guilera et al.’s results were unable to account for the real dynamics. In the opposite situation, when we considered values of the coupling that never reset the phase to zero, we obtained results that are compatible with Diaz-Guilera et al.’s predictions.
## III Positive coupling
In the case in which the coupling is excitatory ($ϵ>0$), by taking into account only the results concerning the stability of the previous fixed points, and by assuming — again without proof — that a set of synchronized oscillators acts as a single unit that cannot be broken, Diaz-Guilera et al. concluded that eventually the whole population fires in synchrony. This procedure for establishing the presence of synchrony is clearly inconsistent with the assumptions they use to describe the real system through linear return maps: they assumed that advancements are only important during the transient dynamics, yet the whole time evolution until synchrony is reached is precisely a transient. In addition, the fact that the state corresponding to the cyclic ordering they propose for the eventual evolution is unstable does not imply that the system will move far away from those fixed points. For instance, the system might oscillate around an unstable fixed point, as occurs in the logistic map, as well as in many other systems, when period doubling appears.
Notice that Figs. 1 and 2 also show that, in contrast to the all-to-all case, when short-range interactions are present two synchronized oscillators can lose their mutual synchrony. This is a general property of the model, even in the excitatory case. In general, there are no “absorbing barriers surrounding the repellers”, as claimed by Diaz-Guilera et al. In the real dynamics the situation is more complex: two synchronized oscillators can lose their mutual synchrony, but eventually they are able to recover it if certain conditions on the PRC are fulfilled.
It is fair to say that sufficient conditions for synchronization to occur in the all-to-all case were rigorously found by Mirollo and Strogatz. Based on numerical simulations, these authors also conjectured that in the excitatory case the same sufficient conditions should hold if a local coupling is considered. In this regard, although the procedure followed by Diaz-Guilera et al. with a view to showing synchrony is not correct, their model does meet the criteria for synchronization to occur when $ϵ$ is positive. In essence, Diaz-Guilera et al.’s results concerning synchronization are another numerical verification of Mirollo and Strogatz’s conjecture, which has been widely analyzed through numerical simulations and recently rigorously proved in a model with short-range interactions.
## IV Linear coupling
Previously, we explained why the PRC that Diaz-Guilera et al. considered is unable to account even for a linear PRC, and why one should hardly expect the results they obtain to extrapolate to any system with a certain degree of generality. We also showed that their results do not account even for the specific PRC they consider when the phase is reset to zero after receiving a pulse. In order to address the issue of what happens when such pathologies are not present, we now consider a PRC including the constant term neglected by Diaz-Guilera et al. In this case, whether the phase is reset to zero or not depends on the value of the phase. When the phase is reset to zero, i.e., when $\alpha +ϵ\varphi \le -\varphi $, the PRC is effectively described by the $ϵ=-1$ case previously studied, since by definition of the model the resulting phase cannot be lower than zero. In contrast, when the phase is not reset to zero, the PRC is given by $\alpha +ϵ\varphi $. The existence of these two effective contributions to the PRC breaks the linearity of the coupling and makes Diaz-Guilera et al.’s description inapplicable in the general case. Three examples of what happens under these circumstances are displayed in Figs. 3(a), 3(b) and 3(c). We show the time evolution of the return map corresponding to a decreasing inhibitory PRC ($\alpha <0$ and $ϵ<0$) for three different values of this pair of parameters. Although Diaz-Guilera et al.’s procedure does not account even for this PRC, Fig. 3(a) looks like their predictions, i.e., the phase of each oscillator is always the same just before the first oscillator fires. However, Figs. 3(b) and 3(c) are clearly incompatible with their results, since there is no eventual cyclic firing and continuous overtakings between oscillators occur. Notice that these states are not transient, since they are periodic.
In Figs. 3(d) and 3(e) we display the time evolution for an excitatory decreasing PRC and an inhibitory increasing PRC, respectively. Although the PRC corresponding to Fig. 3(d) is positive, and in Fig. 3(e) $ϵ>0$, the system does not synchronize. These results again do not fit Diaz-Guilera et al.’s predictions, according to which the system should eventually synchronize for a positive PRC ($ϵ>0$). In general, the appearance of synchronization cannot be established by considering only the inhibitory or excitatory character of the coupling, nor by considering only the derivative of the PRC. In the all-to-all case, whether the coupling is excitatory or inhibitory, synchronization appears for a positive derivative of the PRC. In general, the “absorbing barriers surrounding the repellers” invoked by Diaz-Guilera et al. to assert the existence of synchronization do not exist, and the system will not synchronize merely because the interactions are excitatory or the derivative of the PRC ($ϵ$) is positive. When short-range interactions are considered, both conditions are required simultaneously. Synchronization appears in Diaz-Guilera et al.’s simulations because, for the specific PRC they consider, the two conditions coincide.
## V Conclusions
To summarize, we have shown that a general, complete, and correct description of a lattice model of pulse-coupled oscillators is not possible through the method proposed by Diaz-Guilera et al. In essence, Diaz-Guilera et al.’s results are only valid for the linear stability of the fixed points corresponding to a cycle in which each oscillator fires exactly once per period, for a specific PRC proportional to the phase, and in the inhibitory situation when phase resettings to zero are not allowed. The failure of their description in any other situation stems from the fact that many of the assumptions required for their analysis are not valid. In particular: a PRC proportional to the phase is not equivalent to a linear PRC; advancements between oscillators can be important for the final state and not only during the transient dynamics; fixed points in which an oscillator fires several times per period are present; and synchronized oscillators can lose their mutual synchrony. Diaz-Guilera et al.’s method can never be used to analyze the global dynamics of the system, since it consists only in the study of the linear stability of some fixed points. Therefore, the appearance of synchronization, which is the most relevant case from the experimental point of view, cannot be inferred from their results.
Our analysis makes it clear that a priori indiscriminate assumptions made with a view to obtaining a known result are not only unjustified but can also lead to a misunderstanding of what is really happening. Non-linear systems do not usually behave in the way one might suspect from intuition built only on simulations of a particular model. Conversely, rigorous mathematical results are very useful for understanding when a given behavior should be expected, and under which conditions this behavior can or cannot be extrapolated to other systems.
## Acknowledgments
The illuminating discussions with A. Corral and J. M. Rubí are gratefully acknowledged.
# The Effect of Intrinsic UV Absorbers on the Ionizing Continuum and Narrow Emission Line Ratios in Seyfert Galaxies
## 1 Introduction
Since the launch of IUE in 1978, it has been known that the UV spectra of a few Seyfert galaxies show absorption lines that are thought to be intrinsic to the nucleus, as evidenced by their large radial velocities relative to the host galaxy, large widths, and variability. Although IUE studies revealed few Seyfert 1 galaxies that showed intrinsic absorption (Ulrich 1988), an examination of HST spectra, with better sensitivity and/or resolution, reveals that more than half ($\sim$60%) of these galaxies show such absorption (Crenshaw et al. 1998). Also, approximately half of a sample of Seyfert 1 galaxies observed in X-rays show the presence of an X-ray (“warm”) absorber, characterized by absorption edges of O VII and O VIII (Reynolds 1997; George et al. 1998a). Although there have been suggestions that the UV and X-ray absorbers are physically linked (cf. Mathur, Elvis, & Wilkes 1995; George et al. 1998a), it is also possible that the absorbing material is comprised of different components at different radial distances from the central source. In any case, it is clear that the absorbing material is an important physical component in the inner regions of Seyfert galaxies, and is likely to have a global covering factor between 0.5 and 1.0 (Crenshaw et al. 1998).
It follows that if the covering factor of the absorbing material is large or, at least, the material is co-planar with the narrow-line region (NLR), its presence will alter the spectral energy distribution (SED) of the ionizing continuum to which the NLR gas is exposed. The effect of the absorption on the ionizing continuum depends on the ionization state of the absorber, since the conditions in the NLR gas depend principally on the number of photons between 13.6 and 100 eV, where the cross-sections of H, He I and He II are greatest. A highly ionized X-ray absorber may be transparent at energies below the O VII edge ($\sim$740 eV), unless it is Compton thick. In such a case, the NLR will be illuminated by an SED that is effectively intrinsic. On the other hand, a UV absorber with sufficient column density to produce absorption lines of Ly$\alpha $, N V $\lambda $1240, and C IV $\lambda $1550 may also produce a significant edge at the He II Lyman limit. In this case, the effect on the conditions in the NLR may be significant, and will be revealed by the ratios of emission lines, particularly He II $\lambda $4686/H$\beta $, which is especially sensitive to the SED of the ionizing continuum. If absorption from lower ionization states, in particular Mg II $\lambda $2800, is present, there may be significant absorption of continuum photons near the hydrogen Lyman limit as well. As the attenuation below 100 eV becomes greater, the average ionization state of the NLR will drop, while at the same time ionization and heating by X-rays may dominate, resulting in zones of warm (T $\sim 10^4$ K), partially ionized gas.
In order to examine the possible effects of a low ionization UV absorber on the ionizing SED and NLR, we have generated a set of photoionization models. The photoionization code used in this study was developed by Kraemer (1985) and has been described in detail in several previous papers (cf. Kraemer et al. 1994). This code has a high-energy cutoff at 5 keV, which is sufficient to model the effects in the low ionization gas, although it may be inadequate for an X-ray absorber model. The modeling methodology and results are discussed in detail in the following sections.
## 2 The Models
### 2.1 Setting up the Model
The purpose of this phase of the modeling is to determine how the modification of the SED varies with the column density of the principal UV absorbers. While the physical conditions within the absorbing gas are of interest, their examination is not the principal aim of this study. Therefore, we have taken a very simple approach, and held most parameters fixed while generating the absorber models.
The concept that absorption by intervening gas can modify the ionizing continuum to which the NLR gas is exposed is far from new. The most detailed treatment has been that of Ferland & Mushotzky (1982), in which they modeled the NLR of NGC 4151, assuming it was ionized by a continuum modified by a “leaky absorber”. In this model, broad line region (BLR) clouds, effectively opaque to the ionizing continuum at EUV energies, cover 90% of the source, while the remaining 10% of the ionizing continuum escapes unattenuated. However, their absorber models predicted ionic column densities several orders of magnitude larger than those calculated from recent observations of NGC 4151 (cf. Weymann et al. 1997), and the details of X-ray absorption were not well known at the time the paper was written. Therefore, it seems worthwhile to revisit this approach, with the benefit of better constraints on the absorbers.
In order to generate these models, we had to assume an intrinsic SED for the ionizing continuum. Unfortunately, absorption by Galactic neutral hydrogen makes it all but impossible to get a measurement of the SED in the EUV for the vast majority of AGN, and thus there is no direct way to determine the intrinsic shape of the ionizing continuum. Although there has been significant effort directed towards understanding the shape of the ionizing continuum in active galactic nuclei (AGN), no consensus has been reached. Recent work by Zheng et al. (1997) and Laor et al. (1997) suggests that, for QSOs at intermediate redshift, the ionizing continuum from the Lyman limit to soft X-ray energies may be characterized by a power law with an index $\alpha \sim -2$. While some lower luminosity AGN, specifically Seyfert galaxies, are similar, others may have somewhat flatter indices ($\alpha \sim -1.5$; Korista, Ferland, & Baldwin 1997). Korista et al. (1997) have shown that a spectral energy distribution (SED) similar to the composite QSO spectrum proposed by Zheng et al. (1995) does not possess sufficient He II ionizing photons to produce the observed equivalent width of the broad He II $\lambda $1640 emission lines in the Seyfert 1 galaxy Mrk 335 (specifically) and, perhaps, AGN in general. Mathews & Ferland (1987) have proposed that the bulk of the necessary He II ionizing photons in AGN could arise from the so-called “big blue bump” (BBB) (cf. Malkan 1983). Therefore, we have generated two sets of models. For the first, which we will call the “power-law” SED, we assumed a broken power law, $F_\nu =K\nu ^\alpha $, where:
$$\alpha =-1.5,\qquad 13.6\text{ eV}\le h\nu <1000\text{ eV}$$
(1)
$$\alpha =-0.7,\qquad h\nu \ge 1000\text{ eV}.$$
(2)
For the other SED, which we will call the “BBB” SED, we adopted the parameterization of the Mathews & Ferland continuum given in Laor et al. (1997), where
$$F_\nu =K\nu ^{\alpha _0}e^{-h\nu /kT_{cut}}$$
(3)
where $\alpha _0=-0.3$ and $T_{cut}=5.4\times 10^5$ K. At energies $\ge 200$ eV, $F_\nu =K\nu ^\alpha $, with
$$\alpha =-2,\qquad 200\text{ eV}\le h\nu <1000\text{ eV}$$
(4)
$$\alpha =-0.7,\qquad h\nu \ge 1000\text{ eV}.$$
(5)
Although it is certainly likely that there is greater variation in the intrinsic SED among Seyfert galaxies, one can obtain a qualitative understanding of how conditions would vary with other power laws and/or differing BBB contributions from these results.
There have been several attempts to determine the density of the gas in which the UV resonance line absorption arises. Density estimates based on recombination timescale arguments are typically $\sim 10^5$ cm$^{-3}$ (cf. Voit et al. 1987; Shull & Sachs 1993). The presence of weak absorption from excited states in the spectrum of NGC 4151 would require densities $\gtrsim 10^{8.5}$ cm$^{-3}$ (Bromage et al. 1985). Nevertheless, as often noted (cf. Shields & Hamann 1997), the ionic column densities predicted by photoionization models are not particularly sensitive to density. With this in mind, we have assumed a numerical density of atomic hydrogen of $1\times 10^7$ cm$^{-3}$, typical of the inner NLR in Seyfert galaxies (cf. Kraemer et al. 1998a), and located the absorbers in the intermediate zone between the inner NLR and outer BLR, as suggested by Espey et al. (1998). We have assumed solar abundances (cf. Grevesse & Anders 1989) for the UV absorber models, with numerical abundances relative to hydrogen as follows: He=0.1, C=3.4$\times 10^{-4}$, O=6.8$\times 10^{-4}$, N=1.2$\times 10^{-4}$, Ne=1.1$\times 10^{-4}$, S=1.5$\times 10^{-5}$, Si=3.1$\times 10^{-5}$, Mg=3.3$\times 10^{-5}$, Fe=4.0$\times 10^{-5}$. Cosmic dust was not included.
In order to examine a range of conditions, we varied the ionization parameter of the absorber, $U_{abs}$, where:
$$U_{abs}=\int _{\nu _0}^{\mathrm{\infty }}\frac{L_\nu }{h\nu }\,d\nu \Big/4\pi D^2n_Hc,$$
(6)
where $L_\nu $ is the frequency-dependent luminosity of the ionizing continuum, $D$ is the distance between the central source and the absorber, $n_H$ is the density of atomic hydrogen, and $h\nu _0=13.6$ eV. Models were generated over the range $10^{-3.5}\le U_{abs}\le 10^{-2}$. This is quite similar to the range of ionization parameters calculated for those kinematic components detected in GHRS spectra of NGC 4151 for which both C IV and Mg II absorption lines could be identified (Kriss 1998). This range is much lower than the typical values for an X-ray absorber ($U_{abs}=0.1$–10, cf. Reynolds & Fabian 1995). Integration was truncated at an effective column density (the sum of the column densities of ionized and neutral H) of $N_{eff}=10^{20}$ cm$^{-2}$. This is approximately equal to the sum of the effective column densities from each kinematic component detected in NGC 4151 (Kriss 1998).
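A sketch of the photon-counting integral of eq. (6) is given below; the function name and quadrature grid are ours, and the absolute value of $U_{abs}$ of course depends on the adopted luminosity normalization, distance, and density:

```python
import numpy as np

H_EV_S     = 4.136e-15   # Planck constant, eV s
C_CM_S     = 3.0e10      # speed of light, cm s^-1
ERG_PER_EV = 1.602e-12

def ionization_parameter(sed_lnu, D_cm, n_H, E_max=5000.0):
    """U = [int_{13.6 eV} L_nu/(h nu) d nu] / (4 pi D^2 n_H c).
    `sed_lnu(E)` should return L_nu in erg s^-1 Hz^-1 at energy E (eV)."""
    E  = np.logspace(np.log10(13.6), np.log10(E_max), 4000)  # eV
    nu = E / H_EV_S                                          # Hz
    rate = sed_lnu(E) / (E * ERG_PER_EV)                     # photons s^-1 Hz^-1
    Q = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(nu))   # trapezoid rule
    return Q / (4.0 * np.pi * D_cm**2 * n_H * C_CM_S)
```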
### 2.2 Absorber Model Results
The predicted column densities of several ionic species are listed in Table 1. Although there are slight differences in the values predicted for the two different SED’s, the results overall show very similar trends, which is to be expected, since the continua were scaled to produce the same number of ionizing photons. The C IV and Mg II column densities predicted by the models with $U_{abs}=10^{-3}$ are a reasonable match to those calculated for the Seyfert 1.5 galaxy NGC 4151 (Weymann et al. 1997; Kriss 1998), although they are larger than any single component measured by those authors. We should note that our single-component absorber is an idealized model, since a set of absorbers with smaller effective column densities and different radial velocities, distributed along the line-of-sight, would have a similar cumulative effect on the ionizing continuum.
In the most highly ionized cases, where $U_{abs}=10^{-2}$, the models predict C IV columns at least an order of magnitude in excess of that observed for NGC 4151 (Weymann et al. 1997). This could be compensated by truncating the models at lower effective column density, but then the predicted Mg II columns are too low. Of course, gas at such a low state of ionization could not produce the observed columns of O VII and O VIII seen in many X-ray absorbers (George et al. 1998a). It is possible, then, that some of the absorption components are due to traces of C IV and N V in an X-ray absorber with greater effective column density, rather than low ionization, optically thin gas (Mathur et al. 1995).
Although C IV and Mg II absorption can coexist in the low-ionization models, i.e. with $U_{abs}\lesssim 10^{-3}$ for both SED’s, we did not obtain large columns of N V absorption together with Mg II. NGC 4151 is the only source in which the presence of absorption by all three of these ionic species has been confirmed (cf. Crenshaw et al. 1998). Unfortunately, the spectral region around N V $\lambda $1241 has not been observed at sufficiently high resolution for the identification of individual absorption components with those of C IV $\lambda $1550 and Mg II $\lambda $2800, although the low resolution spectra indicate a large column density for N V ($\sim 10^{15}$ cm$^{-2}$), suggesting a strong high ionization component. We predict that the N V columns from those components that show Mg II absorption are likely to be below the detection limits ($<10^{13}$ cm$^{-2}$). This prediction will be tested with upcoming STIS observations of NGC 4151. Thus, it is likely that the N V absorption, and possibly some of the C IV absorption, occurs in the X-ray absorber or in more highly ionized, optically thin UV absorbing gas, both of which will be transparent to the ionizing continuum below a few hundred eV.
These models span a range of UV absorber properties, which in turn show a range of effects on the EUV continuum. The results are also listed in Table 1, and displayed graphically in Figures 1 and 2 for the power-law and BBB models, respectively. Comparing the ratio of the incident ionizing radiation to that which escapes the back end of the absorber at different energies ($f_E$ in Table 1), we see that, as expected, the absorption builds first above the He II Lyman limit, then near the hydrogen Lyman limit, and more gradually at energies above 100 eV. In Figures 1 and 2, the absorption edges of He I (24.6 eV), O II (35.1 eV), and C III (47.4 eV) are also clearly visible in the filtered spectrum from the $U_{abs}=10^{-3}$ model. There are two important effects resulting from the absorption of the ionizing continuum. First, there is a decrease in the fraction of ionizing radiation reaching the NLR. To get a quantitative measure of this effect, consider a typical narrow line cloud of density $n_H=10^5$ cm$^{-3}$, with an ionization parameter $U_{nlr}=10^{-2.5}$ (although one could use any set of conditions for this comparison). Table 1 also shows the ionization parameter for a cloud of the same density, at the same distance from the ionizing source, if it were screened from the source by the UV absorber. For the most extreme case, $U_{nlr}$ can be reduced by approximately two orders of magnitude. Therefore, for objects with the same intrinsic SED, the presence of a low ionization UV absorber with a large covering factor can cause dramatic differences in the conditions in the NLR, and in the resulting emission line spectrum. The effect will be particularly pronounced for line ratios that are good ionization parameter diagnostics, such as [O III] $\lambda $5007/[O II] $\lambda $3727 (cf. Ferland & Netzer 1983).
The second spectral property that these absorber models predict is the absorption of the soft X-ray continuum below 100 eV, primarily by He II. At low spectral resolution (i.e. $\sim$40%), such as that provided by the ROSAT Position Sensitive Proportional Counter (PSPC), this absorption would be manifested as an apparent flattening of the observed continuum. In Figures 3 and 4 we compare the incident and transmitted continua from 100 eV to 5 keV for the four UV absorber models, as indicated by the value of the ionization parameter. As the He II absorption edge builds, the low energy end of this band of the ionizing continuum becomes increasingly suppressed. Neutral hydrogen is an important source of opacity for the lowest ionization models. We can obtain a quantitative measure of the continuum flattening by calculating the spectral index from 0.1 to 2.4 keV (to match the ROSAT/PSPC spectral range), $\alpha _{softXray}$, for a linear (i.e. power law) fit to log($F_\nu $) vs. log($\nu $). For the power-law models, the unattenuated continuum can be fit with $\alpha _{softXray}\approx -1.38$. For comparison, the transmitted continuum for the UV absorber with $U_{abs}=10^{-3}$ has $\alpha _{softXray}\approx -0.86$. If the covering factor of the UV absorber is 90%, rather than unity, $\alpha _{softXray}$ becomes $-0.97$ for the $U_{abs}=10^{-3}$ model. For the BBB models, the results are similar ($\alpha _{softXray}\approx -2.07$ unattenuated, and $-1.58$ and $-1.68$ for the $U_{abs}=10^{-3}$ model with 100% and 90% covering, respectively). It is worthwhile to note that there would be no change to the index calculated from a power-law fit between the non-ionizing UV (energies $<13.6$ eV) and the X-ray (energies $>2$ keV), $\alpha _{UV-2keV}$ (see Section 3.2), if the absorber were dust-free, such as those modeled in this paper.
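The fitted indices are straightforward to reproduce; a sketch for the unattenuated power-law SED is shown below (the small offset from the quoted $-1.38$ reflects the idealized piecewise form assumed in the sketch, which is ours):

```python
import numpy as np

def soft_xray_index(sed, E_lo=100.0, E_hi=2400.0):
    """alpha_softXray: slope of a linear fit of log F_nu vs. log nu over
    the 0.1-2.4 keV (ROSAT/PSPC) band; log nu = log E + const, so the
    fit may be done directly in log E."""
    E = np.logspace(np.log10(E_lo), np.log10(E_hi), 400)
    slope, _ = np.polyfit(np.log10(E), np.log10(sed(E)), 1)
    return slope

# Unattenuated broken power law of Section 2.1, arbitrary normalization.
sed_pl = lambda E: np.where(np.asarray(E) < 1e3, np.asarray(E)**-1.5,
                            1e3**-0.8 * np.asarray(E)**-0.7)
print(f"{soft_xray_index(sed_pl):.2f}")   # ~ -1.36 for the unabsorbed case
```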
To summarize, the presence of a UV absorber with a large covering factor along the line of sight will modify the soft X-ray band of the ionizing continuum. The lower the ionization state of the absorber, the more pronounced the effect, which we have illustrated by holding $N_{eff}$ fixed. Also, if the covering factor of the absorber is large along the line of sight to the NLR, the fraction of ionizing photons reaching the NLR is inversely proportional to the ionization parameter of the absorber. This would have a profound effect on the narrow emission line ratios. However, when viewed in the context of a sample of objects, the effects of the modified SED may be diluted by variations in the physical conditions in the NLR gas among Seyfert galaxies. A third effect of the UV absorber is to change the SED below 100 eV, as indicated by Figures 1 and 2 and the values of $f_E$ in Table 1. To examine the consequences of this effect for the conditions in the NLR, we have generated a second set of photoionization models. The predictions of these models are discussed in the following section.
### 2.3 Narrow Line Models
It is well known that the conditions in the Narrow Line Region (NLR) of Seyfert galaxies are affected by processes other than photoionization by the central source, such as starbursts (cf. Heckman et al 1997), collisional effects such as shocks (cf. Allen, Dopita & Tsvetanov 1998) and heating by cosmic rays (Ferland & Mushotzky 1984). Also, line ratios can be affected by the relative contribution from matter-bounded gas (Binette, Wilson & Storchi-Bergmann 1996). Nevertheless, it is likely that photoionization is the dominant mechanism for ionization and heating in the NLR, as Ferland has argued (cf. Ferland & Netzer 1983). This is supported by detailed models of individual objects (cf. Kraemer et al. 1998a), which also show that the composite emission-line spectrum appears to be dominated by radiation-bounded gas. Furthermore, the NLR models that we present are intended to examine the effects of modification of the SED by an intervening absorber, and therefore we are foremost concerned with photoionized gas. The interpretation of observational evidence for this effect, however, requires us to address some of these concerns, which we will do in the Discussion section.
The predicted emission line ratios from the photoionization models of typical NLR clouds are listed in Tables 2 and 3 (for comparison to the NLR spectra of Seyfert 1.5s, see Cohen 1983). For each intrinsic SED, NLR models of gas of density $n_H=10^5$ cm$^{-3}$ and solar abundances (see Section 2.1) were generated for the case of an unattenuated source, and for both 100% and 90% covering by each of the four different UV absorber models. The models were truncated at $N_{eff}=10^{21}$ cm$^{-2}$, which we have found to be a reasonable average for a mix of radiation- and matter-bounded gas in the NLR of Seyferts (cf. Kraemer et al. 1998a). Since we are primarily interested in the gross effect of the SED on the physical conditions in the NLR, only a subset of the emission-line ratios predicted by the models is listed. In order to make the comparison of the NLR model predictions simpler, we held the ionization parameter for the NLR cloud fixed at $U_{nlr}=10^{-2.5}$. Holding $U$ fixed required scaling up the flux as the UV absorbers screen out more of the ionizing radiation. This was done by decreasing the distance between the NLR cloud and the ionizing source. The absorption of the continuum was so extreme for Model 4 that the scaling became unrealistic, effectively placing the NLR cloud closer to the ionizing source than the absorber. Each model predicted the same average H$\beta $ emissivity (with the exception of the power-law SED Model 4, which had a slightly lower emissivity due to the higher fractional ionization of hydrogen).
The narrow He II $\lambda $4686/H$\beta $ ratio is quite sensitive to the shape of the ionizing continuum, since neutral hydrogen is the dominant source of opacity near 13.6 eV, while ionized helium dominates above 54.4 eV. Comparing the model predictions for this ratio provides the most direct way to see the effect of the different SED’s. He II $\lambda $4686/H$\beta $ for the unattenuated case is 0.21 and 0.25 for the power-law and BBB models, respectively, which is approximately the value expected from a photon-counting calculation (cf. Kraemer & Harrington 1986). As the ionization parameter of the absorber decreases, we start to see the expected effect on this line ratio, with the value dropping to 0.07 for Model 2 in the case of full source coverage by the UV absorber. Then, as the attenuation of the continuum becomes more extreme, the He II $\lambda $4686/H$\beta $ ratio begins to increase, reaching up to three times the value predicted by the unattenuated model for the most absorbed continuum. Although this is partially an artifact of the scaling used to fix the ionization parameter of the NLR models, it also reflects the physical conditions in X-ray ionized gas. In the most extreme case, i.e. Model 4, the modified ionizing continuum includes a significant remnant of photons near 100 eV, where the He I and He II cross-sections dominate, but effectively no photons at the Lyman limit, resulting in an unusual mix of low and high ionization species. Finally, as expected, the inclusion of a fraction (10%) of unattenuated continuum mitigates these effects, although not to the point where they would be undetectable.
Due to the choice of ionization parameter, the high ionization collisional lines are weak in these models, and we have only listed ratios for C IV $\lambda $1550, [Ne IV] $\lambda $2423 and [Ne V] $\lambda $3426 in Tables 2 and 3. A similar effect is seen for these lines as for He II $\lambda $4686, and the two neon lines track the He II line most closely, as one would expect, since the ionization potentials of their parent ionic states lie above 54.4 eV. Lines from intermediate ionization states with parent ionization potentials well below 54.4 eV, including [O III] $\lambda $5007, [O III] $\lambda $4363, [O II] $\lambda $3727, [N II] $\lambda \lambda $6548,6584, and [Ne III] $\lambda $3869, show little variation in relative strength until the X-ray component of the ionizing continuum dominates, i.e. in Models 3 and 4. The low ionization lines, such as [O I] $\lambda $6300 and Mg II $\lambda $2800, also show the effects of the increased hardness of the attenuated continua, although the scaling of the Model 4 continuum causes an increase in the ionization fraction that suppresses the Mg II line somewhat. The oxygen is tightly bound to the neutral fraction of hydrogen by charge exchange, so the scaling effect is not obvious when looking at oxygen line ratios relative to H$\beta $.
The Balmer decrement is strongly affected by collisional excitation in X-ray heated gas, as noted by Netzer (1982) and Gaskell & Ferland (1984), and this is clearly seen in this set of models. Also, Ly$`\alpha `$/H$`\beta `$ increases as the ionizing continuum becomes harder, as expected. It should be noted that very large values of H$`\alpha `$/H$`\beta `$ and Ly$`\alpha `$/H$`\beta `$ are predicted for the most extreme cases (Model 4). Although this fits an extrapolation of the results presented in Gaskell and Ferland, the code has not been compared in detail with other models for this extreme case, and, as such, the predictions for these two ratios should be taken qualitatively.
The differences between the power-law models and the corresponding BBB models are subtle. In the BBB models a larger residual of photons remains below 100 eV after attenuation by the UV absorber, so the effects on the NLR gas are less pronounced. Observationally, it would be hard to distinguish between the two intrinsic SED’s on the basis of narrow line ratios, except perhaps from the relative strength of the [O I] lines; but these lines depend on the optical depth of the emission-line clouds, which is almost certain to vary among objects.
### 2.4 Summary of Modeling Results
From the set of photoionization models described in the previous sections, we have been able to show that a UV absorber that possesses observable columns for a range of ionic species, including Mg II, will attenuate the ionizing continuum in such a way that the conditions in the NLR will be affected (assuming a large covering factor). Even with the ionization parameter held fixed for the NLR models, the predicted line ratios show the gross effects of the continuum attenuation, which should be observable. If we had allowed the ionization parameter to vary, as the values of $U_{nlr}$ in Table 1 show, the effect on the NLR models would have been even more dramatic.
The following set of conditions may exist in individual objects:
1. If intrinsic UV resonance line absorption is present, the continuum along the line of sight will be attenuated: near the He II Lyman limit for a more highly ionized absorber, then near the hydrogen Lyman limit, and finally at soft X-ray energies as the absorber becomes more neutral. Although most of the EUV waveband is unobservable, ROSAT and ASCA observations in the soft X-ray (0.1 to 2 keV) should show the effects of attenuation by a UV absorber, particularly if its presence is flagged by Mg II absorption.
2. If the covering factor of the absorber is large, its effect on the conditions in the NLR gas will be observable, particularly in the line ratios most sensitive to the SED such as He II $`\lambda `$4686/H$`\beta `$.
3. Unless the absorber is dusty or Compton thick ($>10^{24}$ cm$^{-2}$), there will not be any effect on the continuum below the Lyman limit. If the attenuation of the X-ray continuum above 2 keV is relatively small, the optical to X-ray index, $\alpha _{UV-2keV}$, will not be correlated with these other observables (although the determination of the 2 keV flux depends on accurate modeling of the low energy X-ray spectrum, and can therefore be biased by an improper correction for absorption).
In the following sections we will discuss the observational evidence for these conditions.
These models also predict that the total emission from the UV absorbing gas could be comparable to that from the NLR gas, assuming that the covering factors are similar. This, of course, depends on the density and/or location of the absorber. Nevertheless, the presence of this emission in the narrow-line spectrum will dilute some of the effects of the modified ionizing continuum. The predicted emission line ratios from the absorber models are listed in Tables 4 and 5. In the higher ionization cases, the UV absorber could be a source of emission from coronal lines such as [Ne V] $\lambda $3426 and [Fe VII] $\lambda $6087. Interestingly, the conditions for our high ionization parameter models are similar to those used in modeling the NLR of NGC 5548 (Kraemer et al. 1998a), which suggests that some of the C IV absorption in NGC 5548 might arise in NLR gas. It is also possible that strong semi-forbidden line emission, such as C III] $\lambda $1909 and C II] $\lambda $2326, may arise in this gas, even in the lower ionization cases.
## 3 Comparison with Observations
In order to compare our model results to the observations most effectively, we have chosen to concentrate on Seyfert 1.5s (for the sake of simplicity, we will refer to all intermediate Seyfert galaxies as type 1.5s). The presence of X-ray and UV intrinsic absorption has been detected in both Seyfert 1s and Seyfert 1.5s, and for each subclass the fact that we observe emission from the BLR is evidence that we are seeing the AGN directly. However, in Seyfert 1.5 spectra it is possible to separate out the narrow emission-line component much more easily than for Seyfert 1s. Seyfert 2s are unsuitable since the ionizing source is generally obscured (cf. Antonucci 1994) and only a scattered component of the BLR emission and soft X-ray continuum can be observed, and the latter may be contaminated by a starburst (Turner et al. 1997).
We selected a set of Seyfert 1.5 galaxies for which narrow He II $`\lambda `$4686/H$`\beta `$ ratios had been measured. The sample includes the 13 Seyfert 1.5s studied by Cohen (1983), which is the largest set thereof, NGC 4151 (cf. Ferland and Mushotzky 1982), NGC 3516 (Ulrich & Pequignot 1980), and LB 1727 (Turner et al. 1998). For this set, the values of narrow He II $`\lambda `$4686/H$`\beta `$ range from 0.14 to 0.43, with an average close to the values assumed for our unattenuated NLR models.
### 3.1 Narrow Emission Lines
A detailed comparison of the observed narrow emission-line ratios of each of the Seyfert 1.5s in our sample to our model predictions is outside the scope of this paper (see Kraemer et al. 1998a for an example of this type of analysis). However, Kraemer et al. (1998b) examined several selected line ratios for this set of objects, from which we can make a qualitative assessment of the model predictions. First, the relative [O I] strength is weakly anti-correlated with the He II strength, as predicted (with a correlation coefficient $r_s=-0.44$ and a probability $P_r=0.12$ of exceeding $r_s$ in a random sample). Second, there is a strong correlation of [O III] $\lambda $5007/[O II] $\lambda $3727 vs. He II $\lambda $4686/H$\beta $ ($r_s=0.76$, $P_r=0.002$), similar to a trend that has been noted for [O III] $\lambda $5007 vs. He II $\lambda $4686 (Cohen 1983). Although this could be explained by the contribution from highly ionized, matter-bounded gas, as in the case examined by Binette et al. (1996), it also fits the predictions for an attenuated ionizing continuum.
### 3.2 The Soft X-ray Continuum and He II $`\lambda `$4686/H$`\beta `$
In Seyfert galaxies, the hard X-ray spectrum above 2 keV can be characterized by a fairly flat power law, with a typical index $-1.0\lesssim \alpha \lesssim -0.5$ (e.g., Nandra & Pounds 1994). In the soft X-ray band ($\lesssim 1$ keV) attenuation by neutral and/or ionized material along the line-of-sight becomes important (e.g., George et al. 1998a and references therein). The flattest X-ray (2-10 keV) spectra have been observed in Seyfert 1.5 galaxies (e.g., NGC 4151, Weaver et al. 1994; NGC 3227, George et al. 1998b), and in most known cases at least part of the nuclear continuum is heavily absorbed, although the connection between absorption and index has not been fully explored.
Many Seyfert spectra appear to steepen below $`\sim 1`$ keV (e.g. Arnaud et al. 1985, Turner & Pounds 1989, Turner et al. 1998). The luminosities and variability seen in the soft X-ray regime suggest that these “soft X-ray excesses” are often dominated by an upturn in the underlying continuum; the strength of the soft component for Seyfert 1s varies, but its slope appears to be well-correlated with the ratio of the UV flux at 1375Å to the flux at 2 keV (Walter & Fink 1993). In at least some sources, this soft X-ray component may be a power-law which extends to join the UV continuum (e.g. Nandra et al. 1995, Zheng et al. 1997, Laor et al. 1997); in this case there should be a correlation between EUV-dependent line ratios and the soft X-ray index.
The soft X-ray indices for each of the Seyfert 1.5s in our sample are listed in Table 6; they were determined by fits to an absorbed power-law model. Most indices were measured from ROSAT/PSPC data (0.1–2.4 keV). The indices for NGC 3227 and Mrk 6 were derived from fits to ASCA data (George et al. 1998b and Feldmeier et al. 1998, respectively), which gives a more accurate index, as ASCA allows a better modeling of the complex absorption. In the case of MCG 8-11-11, the lack of a pointed PSPC observation led us to use the 0.7–10 keV index from the Einstein Solid State Spectrometer/Monitor Proportional Counter (Turner et al. 1991).
From these results, there appears to be a modest anti-correlation between X-ray hardness and the He II $`\lambda `$4686/H$`\beta `$ ratio (r<sub>s</sub> $`=`$ $`-0.47`$, P<sub>r</sub> $`=`$ 0.08), in the sense that sources with steep soft X-ray spectra show relatively strong He II, as shown in Figure 5. However, given the small number of data points, care should be taken in interpreting this trend. A more conservative interpretation is that there is a zone of exclusion, specifically the intersection of flat soft X-ray slope and large He II $`\lambda `$4686/H$`\beta `$. As we have shown, the presence of absorbing material residing between the X-ray emitting region and the NLR could explain this anti-correlation. In fact, the measured soft X-ray indices for the flatter sources (specifically NGC 4151, NGC 3227, Mrk 6, and MCG 8-11-11) are similar to those predicted by our UV absorber models. If we are truly seeing the results of attenuation, the implication is that the covering factor is large enough that the NLR is exposed to the same continuum as observed in the soft X-ray band.
In Table 6, we also list the UV to X-ray indices, $`\alpha `$<sub>UV-2keV</sub>, for those objects for which data were available. As noted, they were either calculated from the values listed in Walter & Fink (1993) or from the UV fluxes at 1460Å in the IUE low dispersion archive and the 2 keV luminosity densities determined from Einstein IPC data by Wilkes et al. (1994), assuming an intrinsic X-ray index of $`-1.0`$. Although for the most part the $`\alpha `$<sub>UV-2keV</sub> and $`\alpha `$<sub>soft-Xray</sub> are similar, there are several cases, once again those sources with the flattest soft X-ray slopes, where they disagree. If, as Walter & Fink (1993) assert, the shape of the UV to soft X-ray bump does not vary among Seyfert 1 galaxies, this result may be interpreted as evidence that $`\alpha `$<sub>UV-2keV</sub> and $`\alpha `$<sub>soft-Xray</sub> match when the intrinsic continuum is observed, while mismatches are the result of attenuation, as our models predict.
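As an illustration of how such a two-point index is formed, the sketch below computes $`\alpha `$<sub>UV-2keV</sub> under the $`F_\nu \nu ^\alpha `$ power-law convention used here; the flux densities are placeholders rather than values from Table 6.

```python
# Sketch: a two-point UV-to-X-ray index alpha (F_nu ~ nu^alpha) between
# 1460 A and 2 keV. The flux densities are placeholders, not Table 6 values.
import math

h_eV = 4.135667e-15                 # Planck constant [eV s]
nu_uv = 3.0e18 / 1460.0             # c / lambda, with c in Angstrom/s -> Hz
nu_x = 2000.0 / h_eV                # 2 keV -> Hz

f_uv = 1.0e-25                      # erg/s/cm^2/Hz at 1460 A (placeholder)
f_x = 5.0e-29                       # erg/s/cm^2/Hz at 2 keV (placeholder)

alpha_uv_x = math.log10(f_uv / f_x) / math.log10(nu_uv / nu_x)
print(f"alpha_UV-2keV = {alpha_uv_x:.2f}")   # ~ -1.4 for these placeholders
```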
### 3.3 Intrinsic Absorption
As we have seen, there appears to be some correlation between line ratios and the shape of the soft X-ray continuum that qualitatively fits our model predictions. A better test of the models is to associate these properties with the intrinsic UV absorption.
Among the Seyfert 1.5 galaxies, those with the flattest soft X-ray spectra (specifically NGC 4151, NGC 3227, and MCG 8-11-11) are known to have unusually strong intrinsic absorption. IUE and HST spectra of NGC 4151 reveal high column densities for lines covering a wide range in ionization (Mg II to N V, see Weymann et al. 1997), and the presence of absorption by metastable C III $`\lambda `$1175 (note also that Balmer and He I self absorption are seen in the optical (cf. Anderson & Kraft 1969); we have not attempted to model the conditions that would produce these features). IUE spectra of NGC 3227 and MCG 8-11-11 also appear to show the presence of large absorption columns (Ulrich 1988). Kriss et al. (1997) report a possible Lyman limit detection in HUT spectra of NGC 3227. These objects all appear to have a mismatch of $`\alpha `$<sub>soft-Xray</sub> and $`\alpha `$<sub>UV-2keV</sub>, which, as noted above, is what might be expected for a continuum modified by a UV absorber. The galaxies with low ionization absorption can be contrasted with objects like NGC 5548 and NGC 7469, which show absorption by C IV and N V, but not Si IV or Mg II. The $`\alpha `$<sub>soft-Xray</sub> and $`\alpha `$<sub>UV-2keV</sub> are quite similar in these objects, which is in agreement with the results from our more highly ionized UV absorber models.
If we include emission line ratios, the overall picture becomes more complicated. Although most of those with flatter soft X-ray spectra are weak He II emitters, NGC 3227 is an exception. This can be interpreted as indicating that the unattenuated ionizing continuum is unusually hard, that the NLR gas is matter-bounded, or that the covering factor of the UV/soft X-ray absorber is low, except along our line of sight. However, our models predict that in the case of a very low ionization UV absorber the narrow He II $`\lambda `$4686/H$`\beta `$ ratio can be larger in shielded gas than in gas that is directly exposed to the ionizing source. Also, as the ionization state of the absorber decreases, the \[O I\] $`\lambda `$6300/H$`\beta `$ ratio will increase. The narrow line spectrum of NGC 3227 shows strong \[O I\] emission (0.8 $`\times `$ H$`\beta `$; Cohen 1983), which indicates that the NLR gas is radiation-bounded and suggests that the ionization state of the UV absorber may be particularly low. It will require HST/STIS observations to test this prediction.
The case of NGC 7469 also merits an explanation. This object, which shows the presence of a high ionization state UV absorber (Crenshaw et al. 1998), has both weak He II and a steep soft X-ray slope, one that matches the $`\alpha `$<sub>UV-2keV</sub> quite closely. In fact, the He II is weaker than a simple photon counting calculation would predict. It is known, however, that a starburst is present in NGC 7469 (cf. Wilson et al. 1991). We would suggest that the starburst is diluting the relative He II strength, and that He II $`\lambda `$4686/H$`\beta `$ in the AGN ionized gas is probably similar to NGC 5548.
As we have noted several times, our analysis of the effect of the modified ionizing continuum on the NLR gas is based on the assumption that the covering factor of the low ionization absorber is large (i.e. $`>`$ 0.5). Although a large fraction of Seyfert 1s show intrinsic UV absorption (Crenshaw et al. 1998), which implies a large covering factor, there is no a priori reason to assume that the low ionization absorber is associated with the high ionization UV absorber (or the X-ray absorber). Another possibility is that the low ionization absorption lines are formed in an atmosphere above the molecular torus, the existence of which is a fundamental prediction of the “unified” model for Seyfert galaxies (cf. Antonucci 1994). Kriss et al. (1997) suggest that in several cases (NGC 4151, NGC 3516, and NGC 3227), our line of sight to the BLR is through this atmosphere, along the plane of the torus. From this viewing angle, the biconical distribution of the emission line gas should be quite evident. This is certainly true for NGC 4151, but less clear for the other cases (cf. Schmitt & Kinney 1996). Also, if this model is correct, we should still see an absorbed soft X-ray continuum when low ionization absorption is present. The fact that a large fraction of Seyfert 1.5s have a flat soft X-ray slope appears to support the association of the low ionization gas with the high ionization intrinsic absorption, a point to which we will return in the Discussion section.
## 4 Discussion
Our basic model predicts that a UV absorber, situated between the BLR and NLR and with a large covering factor, will have an observable effect on the soft X-ray continuum and narrow emission-line spectrum. We have generated a set of model UV absorbers spanning a range of ionization parameter. The predicted ionic column densities, except in the highest ionization models, are a reasonable match to those observed in the most heavily obscured Seyfert 1.5 galaxies. We have also demonstrated the effects of attenuation by this gas on conditions in the NLR, both by calculating the ionization parameters for a “typical” NLR cloud shielded by these absorbers and by generating a set of NLR models at fixed ionization parameter. Also, we have shown that significant soft X-ray attenuation by this absorbing layer can result in an apparent flattening of the soft X-ray spectrum (when observed at low spectral resolution), as shown in Figures 3 and 4. Finally, there are sources where the full set of predicted characteristics is present. If this model is correct, we should be able to predict the presence of one characteristic, such as UV absorption, if the other two are observed: a low ionization, He II weak narrow-line spectrum and a flat soft X-ray spectrum.
As noted above, the spectral characteristics of NGC 4151 match the model predictions quite well, and MCG 8-11-11 and NGC 3227 also show evidence that the same process is at work. Specifically, we predict that more than 1/3 of the Seyfert 1.5s in our sample (specifically those with $`\alpha `$<sub>soft-Xray</sub> $`\gtrsim `$ $`-1.0`$; see Table 6) possess a large column of low ionization UV absorption. If, as Kriss et al. (1997) suggest, the low ionization lines form in an atmosphere associated with the molecular torus, either the atmosphere must extend well above the plane of the torus or the characteristics of Seyfert 1.5s, e.g. the detection of a narrow component in the permitted emission lines, are more easily observed when the AGN is viewed along this line-of-sight. If the former were true, the covering factor of the absorber along the line of sight to the NLR could still be large, and our predictions on the effect in the NLR gas are relevant.
## 5 Summary
We have explored the effect of the gas in which intrinsic UV absorption lines arise on the ionizing continuum in Seyfert galaxies. The main results are the following:
1. Above 100 eV, the absorber will modify the soft X-ray continuum, if the column density is sufficiently large (N<sub>eff</sub> $`\gtrsim `$ 10<sup>20</sup> cm<sup>-2</sup>) and the gas is not highly ionized (U<sub>abs</sub> $`\lesssim `$ 10<sup>-2.5</sup>), as is the case for NGC 4151. There is observational evidence for this effect, since objects that show large columns of low ionization absorbers tend to have flatter soft X-ray continuum slopes.
2. A low ionization absorber will attenuate much of the ionizing radiation between 13.6 eV and 100 eV, in particular near the He II Lyman limit. There is evidence for this in that the relative strength of the narrow He II line is anti-correlated with the hardness of the soft X-ray continuum in Seyfert 1.5s.
3. Since the presence of low ionization UV absorption and a flat soft X-ray continuum may be interrelated, one can predict that a Seyfert galaxy will exhibit one of these characteristics if the other is present. If so, a large fraction ($`\gtrsim `$1/3) of Seyfert 1.5s should show low ionization UV absorption lines. Also, if the covering factor of the low ionization gas is close to unity, we would expect that these absorbed Seyfert 1.5s will possess low ionization narrow emission-line spectra.
S.B.K. thanks Eric Smith for useful discussions. S.B.K. and D.M.C. acknowledge support from NASA grant NAG5-4103. T.J.T. acknowledges support from UMBC and NASA/LTSA grant NAG5-7835.
# Non-equilibrium electroweak baryogenesis at preheating after inflation
## I Introduction
One of the most appealing explanations for the baryon asymmetry of the universe utilizes the non-perturbative baryon-number-violating sphaleron interactions present in the electroweak model at high temperatures. In addition to B, C and CP violating processes, a departure from thermal equilibrium is necessary for baryogenesis. The usual scenario invokes a strongly first-order phase transition to drive the primordial plasma out of equilibrium and set the stage for baryogenesis. This scenario presupposes that the universe was in thermal equilibrium before and after the electroweak phase transition, and far from it during the phase transition. Although there is mounting evidence in support of the standard Big-Bang theory up to the nucleosynthesis temperatures, $`𝒪(1)`$ MeV, the assumption that the universe was in thermal equilibrium at earlier times is merely the result of a (plausible) theoretical extrapolation. In this paper we propose a picture of the early universe in which thermal equilibrium is maintained only up to temperatures of the order of 100 GeV. The earlier history of the universe is diluted by a low-scale period of inflation, after which the universe never reheated above the electroweak scale.
We will show that the absence of the usual thermal phase transition at the electroweak scale does not preclude electroweak baryogenesis. In fact, according to recent studies of reheating after inflation, the universe could have undergone a period of “preheating”, during which only certain modes are highly populated, and the universe remains very far from thermal equilibrium. Such a stage creates an ideal environment in which a substantial baryon asymmetry could be created. The sphaleron transitions, known to cause baryon number violation at high temperature, may also proceed in a system out of thermal equilibrium. In addition, the very non-equilibrium nature of preheating may facilitate the baryon number generation, as emphasized in Ref. in the context of GUT baryogenesis.
It remains a challenge to construct a natural model with a low scale of inflation. The main problem is to achieve an extreme flatness of the effective potential in the inflaton direction (i.e. the smallness of the inflaton mass) without fine-tuning. Although several models have been proposed, the lack of naturalness remains a serious problem. Perhaps recent ideas related to large internal dimensions can provide a solution. In our paper we will not address the problem of naturalness, but will simply assume that the electroweak-scale inflation took place. The main question we are going to address is whether electroweak baryogenesis could take place under these circumstances. The only qualitative feature of the low-energy inflation that is essential to us is that it produces a “cold” state in which coherent bosonic fields are displaced from their equilibrium vacuum values. Another mechanism that can produce a similar state is related to strong supercooling and a spinodal decomposition phase transition, which can occur, for example, in theories with radiative symmetry breaking.
As a toy model, we consider a hybrid model of inflation, in which the inflaton is an SU(2)$`\times `$U(1)-singlet and the ordinary Higgs doublet is the triggering field that ends inflation. Alternatively, one can view this process as one in which the inflaton coupling to the Higgs induces dynamical electroweak symmetry breaking when the inflaton slow-rolls below a certain critical value. The resonant decay of the low-energy inflaton can generate a high-density Higgs condensate characterized by a set of narrow spectral bands in momentum space with large occupation numbers. The system evolves towards equilibrium while slowly populating higher and higher momentum modes. The expansion of the universe at the electroweak scale is negligible compared to the mass scales involved, so the energy density is conserved, and the final reheating temperature $`T_{\mathrm{rh}}`$ is determined by the energy stored initially in the inflaton field. We will find model parameters such that the final thermal state has a temperature below the electroweak scale, $`T_{\mathrm{rh}}<T_c\sim 100`$ GeV.
Sphalerons are large extended objects sensitive mainly to the infrared part of the spectrum. We will conjecture that the rate of sphaleron transitions at the non-equilibrium stage of preheating after inflation can be estimated as $`\mathrm{\Gamma }_{\mathrm{sph}}\sim \alpha __\mathrm{W}^4T_{\mathrm{eff}}^4`$, where $`T_{\mathrm{eff}}`$ is some “effective” temperature associated with the long wavelength modes of the Higgs and gauge fields that have been populated during preheating.
Since $`T_{\mathrm{rh}}<T_c`$, the baryon-violating processes, relatively frequent in the non-thermal condensate, are strongly suppressed as soon as the plasma thermalizes via the interaction with fermions. Therefore, the baryon asymmetry created at the end of preheating is not washed out. This is in contrast to the equilibrium electroweak baryogenesis, where the main constraint arises from the tendency for the baryon density to equilibrate back to zero during the slow cooling following the electroweak phase transition. Since the energy density at the electroweak scale is so low, the universe expansion is essentially irrelevant and does not affect the prediction for the baryon asymmetry.
The paper is organized as follows. In section II we discuss the hybrid inflationary model suitable for non-equilibrium electroweak baryogenesis. We estimate the sphaleron rates and produced baryon asymmetry in section III. Our estimates, based on a number of assumptions about the complicated non-linear dynamics, are in agreement with numerical simulations discussed in section IV. We summarize our conclusions in section V.
## II An inflationary model for the electroweak baryogenesis during preheating
Inflation is often associated with processes occurring at very high energy scales, of order the Grand Unification scale ($`10^{16}`$ GeV); see Ref. However, this need not be the case. Low-scale inflation models have been considered. One of the side benefits of lowering the inflation scale is avoiding the gravitino over-production constraints. There are other particle-physics motivations for using the TeV scale, which is associated with supersymmetry breaking in a class of models. We will discuss a simple hybrid inflation model which satisfies the constraints from cosmic microwave background (CMB) anisotropies and large-scale structure, and at the same time contains the desired features for a successful reheating of the universe. We ignore completely the issue of radiative corrections, as discussed above.
As described in the introduction, we want to construct a model with a reheating temperature which is below that of the electroweak scale, so that sphaleron processes are suppressed after reheating. Such a model necessarily has a very low rate of expansion during inflation, $`H\sim \rho ^{1/2}/M_P\sim 10^{-5}`$ eV, which is many orders of magnitude smaller than the mass scales we will consider. This means that essentially all the energy density during inflation is converted into radiation in less than a Hubble time, i.e. before the universe has had a chance to expand significantly. This imposes a very strict constraint on the energy scale during inflation<sup>1</sup><sup>1</sup>1One can avoid such constraints by coupling the inflaton to some additional hidden-sector fields that do not contribute to the reheating of the observable universe. Then the potential energy density during inflation can be significantly larger than $`\rho \sim (200\mathrm{GeV})^4`$.. For example, if we want the universe to reheat to $`T_{\mathrm{rh}}\sim 100`$ GeV, we need a model of inflation with an energy density of order $`\rho ^{1/4}\sim 200`$ GeV. We will construct an example of such a model.
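As a quick check of these numbers, the reheating temperature implied by a given inflationary energy density follows from $`\rho =(\pi ^2/30)g_{\ast }T_{\mathrm{rh}}^4`$. The sketch below assumes $`g_{\ast }\sim 100`$ relativistic degrees of freedom at the electroweak scale and instant thermalization of all of $`\rho `$.

```python
# Sketch: reheating temperature from energy conservation, assuming instant
# thermalization of rho into g_* ~ 100 relativistic degrees of freedom:
# rho = (pi^2/30) g_* T_rh^4.
import math

g_star = 100.0
rho = 200.0**4                       # (rho^{1/4} = 200 GeV)^4, in GeV^4

T_rh = (30.0 * rho / (math.pi**2 * g_star)) ** 0.25
print(f"T_rh ~ {T_rh:.0f} GeV")      # ~ 80 GeV, i.e. of order 100 GeV
```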
Hybrid inflation is an ingenious model of inflation, in which the amplitude of CMB anisotropies is not necessarily related to GUT-scale physics. The idea is very simple: instead of ending inflation via deviations from slow-roll, it is the symmetry breaking by a scalar field coupled to the inflaton that triggers the end of inflation. The model can then satisfy the CMB constraints and allow for electroweak-scale inflation. We will assume that the symmetry breaking field is in fact the Standard Model Higgs field, and that the inflaton is an additional SU(2)$`\times `$U(1)-singlet scalar field. The model thus contains two fields, the inflaton $`\sigma `$ with mass $`\stackrel{~}{m}`$, coupled, with coupling $`g`$, to the Higgs field $`H^{\dagger }H=\varphi ^2/2`$, with false vacuum energy $`V_0=M^4/4\lambda `$ and the vacuum expectation value $`\varphi _0=M/\sqrt{\lambda }\equiv v`$,
$$V(\sigma ,\varphi )=\frac{\lambda }{4}(\varphi ^2-v^2)^2+\frac{1}{2}\stackrel{~}{m}^2\sigma ^2+\frac{1}{2}g^2\sigma ^2\varphi ^2.$$
(1)
During inflation, the inflaton is large, $`\sigma \gg \sigma _c\equiv M/g`$, and the effective mass of $`\varphi `$ is, therefore, large and positive. As a consequence, the Higgs field is fixed at $`\varphi =0`$ and does not contribute to the metric perturbations that gave rise to the observed CMB anisotropies. As the inflaton field slowly rolls in the effective potential $`V(\sigma )=V_0+\stackrel{~}{m}^2\sigma ^2/2`$, it will generate the perturbations observed by COBE on large scales. Eventually, the inflaton reaches $`\sigma =\sigma _c`$, where the Higgs has zero effective mass, and at this point the quantum fluctuations of the Higgs field trigger the electroweak symmetry breaking and inflation ends. The number of e-folds of inflation required to solve the horizon and flatness problems is given by
$$N_e\simeq 34+\mathrm{ln}\left(\frac{T_{\mathrm{rh}}}{100\mathrm{GeV}}\right).$$
(2)
The fluctuations seen by COBE on the largest scales could have arisen in this model, $`N_e\simeq 34`$ e-folds before the end of inflation. The observed amplitude and tilt of CMB temperature anisotropies, $`\delta T/T\simeq 2\times 10^{-5}`$ and $`n-1\lesssim 0.1`$, impose the following constraints on the model parameters:
$`g\left({\displaystyle \frac{v}{M_{\mathrm{Pl}}}}\right)^3{\displaystyle \frac{M^2}{\stackrel{~}{m}^2}}`$ $`\simeq `$ $`1.2\times 10^{-5},`$ (3)
$`n-1={\displaystyle \frac{1}{\pi }}\left({\displaystyle \frac{M_{\mathrm{Pl}}}{v}}\right)^2{\displaystyle \frac{\stackrel{~}{m}^2}{M^2}}`$ $`<`$ $`0.1.`$ (4)
For example, for $`v=246`$ GeV (the electroweak symmetry breaking vacuum expectation value), $`\lambda \simeq 1`$, and $`g\simeq 0.1`$, we find $`\stackrel{~}{m}\simeq 2\times 10^{-12}`$ eV, and it turns out that the spectrum is essentially scale-invariant, $`n-1\simeq 5\times 10^{-14}`$. These parameters give a negligible rate of expansion during inflation, $`H\simeq 7\times 10^{-6}`$ eV, and a reheating temperature $`T_{\mathrm{rh}}\simeq 70`$ GeV. However, the relevant masses for us here are those in the true vacuum, where the Higgs has a mass $`m__\mathrm{H}=\sqrt{2\lambda }v\simeq 350`$ GeV, and the inflaton field a mass $`m=gv\simeq 25`$ GeV. Such a field, a singlet with respect to the standard model (SM) gauge group, could be detected at future colliders because of its large coupling to the Higgs field.
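The quoted numbers can be recovered directly from the constraints (3) and (4); a minimal sketch, assuming $`M=\sqrt{\lambda }v`$ (from $`\varphi _0=M/\sqrt{\lambda }\equiv v`$) and the standard flat-space Friedmann relation for $`H`$, is the following.

```python
# Sketch: recovering the quoted parameter values from the COBE constraints
# (3)-(4). Assumes M = sqrt(lambda) v and H^2 = 8 pi rho / (3 M_Pl^2).
import math

v, lam, g = 246.0, 1.0, 0.1          # GeV and dimensionless couplings
M_Pl = 1.22e19                        # GeV
M = math.sqrt(lam) * v

m2 = g * (v / M_Pl)**3 * M**2 / 1.2e-5            # Eq. (3) -> m_tilde^2, GeV^2
tilt = (M_Pl / v)**2 * m2 / (math.pi * M**2)      # Eq. (4) -> n - 1

rho = lam * v**4 / 4.0                            # false-vacuum energy density
H = math.sqrt(8.0 * math.pi * rho / 3.0) / M_Pl   # GeV

print(f"m_tilde = {math.sqrt(m2) * 1e9:.1e} eV")  # ~ 2e-12 eV
print(f"n - 1   = {tilt:.1e}")                    # ~ 5e-14
print(f"H       = {H * 1e9:.1e} eV")              # ~ 7e-6 eV
```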
Some comments are in order. The consideration carried out below is qualitatively applicable also to a more complicated theory than the minimal SM. Let us take the minimal supersymmetric standard model (MSSM) with an additional singlet field, the inflaton $`\sigma `$, as an example. There are three SU(2) invariant couplings of the inflaton to the Higgs doublets $`H_1`$ and $`H_2`$: $`g_{11}\sigma ^2ϵ_{\alpha \beta }H_1^\alpha H_1^\beta `$, $`g_{22}\sigma ^2ϵ_{\alpha \beta }H_2^\alpha H_2^\beta `$, and $`g_{12}\sigma ^2ϵ_{\alpha \beta }H_1^\alpha H_2^\beta `$. The Higgs mass matrix of the MSSM has eigenvalues that range from the lightest, $`\sim 100`$ GeV, to the heaviest, roughly $`500`$ GeV. In general, the inflaton-Higgs interaction is not diagonal in the basis that diagonalizes the Higgs mass matrix in the broken-symmetry vacuum. In fact, the entire Higgs mass matrix is important in determining the conditions for parametric resonance. We will leave the analysis of multiple Higgs degrees of freedom for future work because it is too complicated and is not necessary to illustrate the main idea.
### A Preheating in hybrid inflation
To study the process of parametric resonance after the end of inflation in this model, let us recall some of the main features of preheating in hybrid inflation. In hybrid models, after the end of inflation, the two fields $`\sigma `$ and $`\varphi `$ start to oscillate around the absolute minimum of the potential, $`\sigma =0`$ and $`\varphi =v`$, with frequencies that are much greater than the rate of expansion. Other bosonic and fermionic fields coupled to these may be parametrically amplified until backreaction occurs and further rescattering drives the system to thermal equilibrium. Initially, rescattering of the long-wavelength modes among themselves drives them to local thermal equilibrium, while only a very small fraction of the short-wavelength modes are excited. The spectral density evolves slowly towards higher and higher momenta. Eventually, thermalization should occur through a process that breaks the coherence of the bosonic modes, e.g. through the decay of the Higgs or gauge fields into fermions. Such a process is very fast in the absence of the expansion of the universe. What prevented the universe from reheating immediately after inflation in chaotic models was the fact that the rate of expansion in those models was much larger than the decay rate of the inflaton, and particles did not interact with each other until the rate of expansion dropped below the decay rate. In our case, the opposite is true: the rate of expansion $`H\sim 10^{-5}`$ eV is much smaller than the typical gauge field decay rate into fermions, and the universe thermalizes quickly. Since the masses are much greater than the rate of expansion, many oscillations (of order $`10^{15}`$) occur in one Hubble time. It is, therefore, possible to approximate the particle production by that in flat Minkowski space-time.
The evolution equation for the Fourier component of the Higgs field that is subject to parametric resonance is approximately given by
$$\ddot{\varphi }_k+[k^2-M^2+3\lambda \langle \varphi ^2\rangle +g^2\sigma ^2(t)]\varphi _k=0.$$
(5)
Note that this equation applies only in the case $`\lambda \gg g^2`$, where we have ignored the non-linear effect of the inflaton field $`\sigma `$, and in particular the cross-terms $`g^2\varphi _k\sigma _k`$, which do not contribute significantly before backreaction. We will only use this equation for qualitative arguments, since our quantitative results will be fully non-linear and non-perturbative, based on numerical simulations, see Section IV. As the inflaton oscillates around $`\sigma =0`$ with amplitude $`\mathrm{\Sigma }=\sigma _c=M/g`$ in the effective potential of Fig. 1, its coupling to the Higgs will induce parametric resonance with a $`q`$ parameter, characterizing the strength of the resonance, given by
$$q\equiv \frac{g^2\mathrm{\Sigma }^2}{4m^2}=\frac{\lambda }{4g^2}\gg 1.$$
(6)
Since we can neglect the rate of expansion, the amplitude of oscillations $`\mathrm{\Sigma }`$ does not decrease, and the resonance is extremely long-lived. For generic values of the couplings, $`g^2\sim 10^{-2}`$–$`10^{-3}`$, it is, in fact, a broad resonance, $`q\gg 1`$. Higgs particle production occurs at the instants when $`\sigma (t)=0`$, and continues until backreaction becomes important, either for the inflaton oscillations ($`\langle \varphi ^2\rangle \sim m^2/g^2`$) or for the effective Higgs mass ($`3\lambda \langle \varphi ^2\rangle \sim M^2`$). Which of the two effects back-reacts first depends on the coupling $`g`$. Here $`k_{}=\sqrt{2}mq^{1/4}`$ is the typical momentum of the resonance band. For $`g>0.08`$ backreaction on the inflaton mass occurs before the $`\lambda `$-term in (5) is relevant. We have chosen $`g=0.1`$ for definiteness, and computed the power spectrum of the Higgs field. For a smaller coupling $`g`$, the resonance spectrum would be different, but the qualitative behavior would be similar. In fact, it does not matter how many bands the parametric resonance populates, because after rescattering all those bands smooth out and reach “thermalization” over a finite region in momentum space. In Fig. 2 we show the growth parameter $`\mu _k`$ as a function of $`k`$. The typical momentum contributing to the power spectrum, $`k^2|\varphi _k|^2`$, is
$$k\simeq k_{}/2=mq^{1/4}/\sqrt{2}\simeq 2m,$$
(7)
where the growth factor has a large value, $`\mu _{\mathrm{max}}\simeq 0.9`$. This unusually large number is due to the fact that what drives the Higgs production in this model is not the usual parametric resonance from oscillations around the minimum of the potential, but the spinodal instability responsible for the breaking of the electroweak symmetry. In the language of Mathieu equations, this corresponds to a large and negative parameter $`A=(k^2-M^2)/4m^2`$, which induces large growth factors $`\mu `$. On the other hand, the occupation number of a given mode is determined from $`\mu _k`$ as $`n_k\simeq \frac{1}{2}\mathrm{exp}(2\mu _kmt)`$. This means that, within a few oscillations, the Higgs field reaches a huge occupation number over a range of narrow bands in momentum space. Therefore, the Higgs fluctuations grow exponentially with time,
$$\langle \varphi ^2\rangle =\frac{1}{2\pi ^2}\int dk\,k^2\frac{n_k}{\omega _k}\simeq \frac{n_\varphi (t)}{g\mathrm{\Sigma }}\propto e^{2\mu mt},$$
(8)
with $`\mu =\mu _{\mathrm{eff}}\simeq 0.8`$. At backreaction, the Higgs expectation value is just of order its vacuum expectation value (VEV), $`\langle \varphi ^2\rangle \sim m^2/g^2=v^2`$, but continues to grow slightly during rescattering. With our set of parameters, this happens at times $`t\sim 𝒪(1)`$ GeV<sup>-1</sup>.
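The numbers above follow from a short calculation. The sketch below evaluates the resonance parameters of Eq. (6) for $`\lambda =1`$, $`g=0.1`$; the assumption that $`\langle \varphi ^2\rangle `$ must grow by roughly $`e^{40}`$ from its initial quantum-sized value is an illustrative choice, not a number from the text.

```python
# Sketch: resonance parameters for lambda = 1, g = 0.1, and a rough
# backreaction time. The ~e^40 growth required of <phi^2> is an
# illustrative assumption about the initial fluctuation size.
import math

v, lam, g = 246.0, 1.0, 0.1
m = g * v                                 # inflaton mass in the true vacuum
q = lam / (4.0 * g**2)                    # Eq. (6): q = lambda / 4 g^2
k_star = math.sqrt(2.0) * m * q**0.25     # typical resonant momentum
rate = 2.0 * 0.8 * m                      # 2 mu m, with mu_eff ~ 0.8

print(f"q = {q:.0f}, k_* = {k_star:.0f} GeV, 2 mu m = {rate:.0f} GeV")

t_br = 40.0 / rate                        # time for exp(2 mu m t) ~ e^40
print(f"t_backreaction ~ {t_br:.1f} GeV^-1")   # the O(1) GeV^-1 of the text
```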
In Section IV of this paper we follow a numerical approach in (1+1) dimensions and compute the initial state from parametric resonance and the subsequent stages, like rescattering and backreaction, directly through the real time evolution of the classical equations of motion for the bosonic modes with arbitrary $`k`$, with all the couplings between fields properly taken into account. This way, we automatically include rescattering and thermalization in the evolution.
### B Higgs coupling to W bosons
Soon after production, Higgs particles decay predominantly into $`W`$ bosons with a branching ratio of order one, for $`m__\mathrm{H}=350`$ GeV, and a decay rate $`\mathrm{\Gamma }\simeq 20`$ GeV. One may ask whether the Higgs oscillations may induce a resonant production of gauge bosons. It turns out that the corresponding resonance is very narrow and insufficient ($`q__\mathrm{W}m__\mathrm{H}=g__W^2\mathrm{\Phi }^2/4m__\mathrm{H}\simeq 0.3\mathrm{GeV}\ll \mathrm{\Gamma }`$, where $`g__W^2=4\pi \alpha __\mathrm{W}`$ is the SU(2) gauge coupling and $`\mathrm{\Phi }\simeq v/10`$ is the amplitude of the Higgs oscillations during the first resonance stage, see Fig. 9) for the coherent decay of the Higgs into gauge bosons. It is therefore appropriate to use perturbation theory to calculate the Higgs decay into the W bosons.
Since the rate of growth of the energy density of the Higgs field,
$$\rho _\varphi =\frac{1}{2\pi ^2}\int dk\,k^2n_k\omega _k\simeq n_\varphi (t)\,g\mathrm{\Sigma }\propto e^{2\mu mt},$$
(9)
is larger than its decay rate into W bosons, i.e. $`2\mu m\gtrsim 2\mathrm{\Gamma }\simeq 40`$ GeV, we do not expect a significant depletion of the energy density of the Higgs field during preheating, while the energy density of the gauge bosons grows exponentially at the same rate, $`\rho __\mathrm{W}\propto \mathrm{exp}(2\mu mt)`$. Therefore, soon after rescattering, most of the energy density is in the form of Higgs and gauge fields with essentially zero momentum. It is these long-wavelength gauge configurations that will play an important role in inducing the sphaleron transitions, and the subsequent baryon production.
One of the most fascinating properties of rescattering after preheating is that the long-wavelength part of the spectrum soon reaches some kind of local equilibrium, while the energy density is drained, through rescattering and excitations, into the higher frequency modes. Therefore, initially the low energy modes reach “thermalization” at a higher effective “temperature”, while the high energy modes remain unpopulated, and the system is still far from true thermal equilibrium:
$$n_k=\frac{1}{\mathrm{exp}(\omega _k/T)-1}\simeq \frac{T_{\mathrm{eff}}}{\omega _k}\gg 1.$$
(10)
It is possible to estimate the effective “temperature” $`T_{\mathrm{eff}}`$ from the conservation of energy during preheating. The energy per (long wavelength) mode is $`n_k\omega _k\simeq T_{\mathrm{eff}}`$, or effectively equipartitioned. Since only the modes in the range $`0<k\lesssim k_{\mathrm{max}}\simeq 4k_{}\sim 5mq^{1/4}`$ are populated, we can integrate the energy density in Higgs and gauge fields, $`g__\mathrm{B}=1+3\times 3=10`$, to give, in (3+1) dimensions,
$`\rho _{\mathrm{bosons}}`$ $`=`$ $`g__\mathrm{B}{\displaystyle \int \frac{d^3k}{(2\pi )^3}n_k\omega _k}\simeq {\displaystyle \frac{g__\mathrm{B}}{6\pi ^2}}T_{\mathrm{eff}}k_{\mathrm{max}}^3\simeq {\displaystyle \frac{\lambda v^4}{4}},`$ (11)
$`{\displaystyle \frac{T_{\mathrm{eff}}}{v}}`$ $`\simeq `$ $`{\displaystyle \frac{6\pi ^2}{125}}{\displaystyle \frac{q^{1/4}}{g__\mathrm{B}g}}\propto g^{1/2},`$ (12)
which gives $`T_{\mathrm{eff}}\simeq 350`$ GeV. We note that the effective temperature depends on the value of the coupling $`g`$ as $`T_{\mathrm{eff}}^4\propto g^2`$, as expected. The temperature $`T_{\mathrm{eff}}`$ is higher than the final reheating temperature $`T_{\mathrm{rh}}`$, which is easy to understand, since preheating is a very efficient mechanism for populating just a few modes, into which a large fraction of the original inflaton energy density is put. This means that a few modes carry a large amount of energy as they come into partial equilibrium among themselves, and thus the effective “temperature” is high. However, when the system reaches full thermal equilibrium, the same energy is distributed between all modes, which corresponds to a much lower temperature. In our example, thermalization of long-wavelength modes happens at a time scale $`t\sim \mathrm{\Gamma }__\mathrm{W}^{-1}\sim 𝒪(1)`$ GeV<sup>-1</sup>, where $`\mathrm{\Gamma }__\mathrm{W}\simeq 2`$ GeV is the width of the vector boson.
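A quick numerical check of Eqs. (11)–(12) is sketched below, taking $`k_{\mathrm{max}}=5mq^{1/4}`$ and putting all of the false-vacuum energy into the bosonic band; the result is a few hundred GeV, the order of magnitude quoted above, with the precise value depending on the choice of $`k_{\mathrm{max}}`$.

```python
# Sketch: T_eff from energy conservation in the populated band, Eq. (11),
# with k_max = 5 m q^{1/4} and all of the false-vacuum energy in bosons.
import math

v, lam, g, g_B = 246.0, 1.0, 0.1, 10.0
m = g * v
q = lam / (4.0 * g**2)
rho = lam * v**4 / 4.0
k_max = 5.0 * m * q**0.25

T_eff = 6.0 * math.pi**2 * rho / (g_B * k_max**3)
print(f"T_eff ~ {T_eff:.0f} GeV")   # a few hundred GeV, well above T_rh ~ 70
```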
This is the main reason why the out-of-equilibrium mechanism of preheating is so efficient in producing sphaleron transitions, since the rate of these transitions is greatly enhanced by the higher effective temperature. An alternative way of seeing this is by analogy with a diffusing plasma. The rescattering of Higgs and W bosons after preheating produces a diffusion which enhances over-the-barrier sphaleron transitions. It is fortunate that the description of this diffusion mechanism can be done with the use of an effective temperature, for which the rate of sphaleron transitions can be estimated analytically, by ignoring the higher momentum modes and the integration over hard thermal loops.
## III Baryon asymmetry of the universe
It is well known that sphaleron transitions are mainly sensitive to the long-wavelength modes in a plasma. This is because the sphaleron size, $`(\alpha __\mathrm{W}T_{\mathrm{eff}})^{-1}`$, is much larger than the typical Compton wavelengths of particles in the plasma, $`(2k_{})^{-1}\sim (5m)^{-1}`$. A simple argument then suggests that the rate of sphaleron transitions per unit time per unit volume should be of the order of the fourth power of the inverse magnetic screening length in the plasma.<sup>2</sup><sup>2</sup>2In the symmetric phase of the electroweak theory and in thermal equilibrium at a temperature $`T`$, the typical momentum scale of sphaleron processes is $`\alpha __\mathrm{W}T`$ ($`\alpha __\mathrm{W}\simeq 1/29`$ is the weak gauge coupling), which is much smaller than the average momentum of the particles in the plasma, $`k\sim T`$. It was argued in Refs. that the higher momentum modes with typical scale greater than $`g__WT`$ should slow down the sphaleron processes by an extra factor $`\alpha __\mathrm{W}\mathrm{log}(1/\alpha __\mathrm{W})`$. During the first stages of reheating those high frequency modes are not populated and therefore should not be considered in our estimate.
We, therefore, conjecture that the sphaleron transition rate during rescattering after preheating, $`\mathrm{\Gamma }_{\mathrm{sph}}`$, can be approximated by that of a system in thermal equilibrium at some temperature $`T_{\mathrm{eff}}`$ defined in the previous section:
$$\mathrm{\Gamma }_{\mathrm{sph}}\sim \alpha __\mathrm{W}^4T_{\mathrm{eff}}^4.$$
(13)
In the Standard Model, baryon and lepton numbers are not conserved because of the non-perturbative processes that involve the chiral anomaly:
$$\partial _\mu j__B^\mu =\partial _\mu j__L^\mu =\frac{3g__W^2}{32\pi ^2}F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }.$$
(14)
Furthermore, the sphaleron configurations connect vacua with different Chern-Simons numbers, $`N_{_{CS}}`$, and induce the corresponding changes in the baryon and lepton number, $`\mathrm{\Delta }B=\mathrm{\Delta }L=3\mathrm{\Delta }N_{_{\mathrm{CS}}}`$.
A baryon asymmetry can be generated by sphaleron transitions in the presence of C and CP violation. There are several possible sources of CP violation at the electroweak scale. The only one confirmed experimentally is due to Cabibbo-Kobayashi-Maskawa mixing of quarks that introduces some violation of CP, but it is probably too small to cause a sufficient baryon asymmetry. Various extensions of the Standard Model contain additional scalars (e.g. extra Higgs doublets, squarks, sleptons, etc.) with irremovable complex phases that lead to C and CP violation.
We are going to model the effects of CP violation in the effective field theory approach. Namely, we assume that, after all degrees of freedom except the gauge fields, the Higgs, and the inflaton are integrated out, the effective Lagrangian contains some non-renormalizable operators that break CP. The lowest, dimension-six operator of this sort is
$$𝒪=\frac{\delta _{_{\mathrm{CP}}}}{M_{\mathrm{new}}^2}\varphi ^{\dagger }\varphi \frac{3g__W^2}{32\pi ^2}F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }.$$
(15)
The dimensionless parameter $`\delta _{_{\mathrm{CP}}}`$ is an effective measure of CP violation, and $`M_{\mathrm{new}}`$ characterizes the scale at which the new physics, responsible for this effective operator, is important. Of course, other types of CP violating operators are possible although, qualitatively, they lead to the same picture.
Note that the operator (15) is CP-odd but does not violate C. Thus, in a pure bosonic theory non-equilibrium evolution can only produce parity-odd or CP-odd configurations, but no C asymmetry. For example, the Chern-Simons number can be produced, as it is C-even but P- and CP-odd. C violation, necessary for baryogenesis, comes from ordinary gauge-fermion electroweak interactions that violate C and parity, but conserve CP. This manifests itself in the anomaly equation that relates baryon number (a C-odd but P-even operator) to the Chern-Simons number (C$`=+1`$, P$`=-1`$). In other words, C violation in the bosonic sector of the theory is not required as long as it appears in the fermionic sector, via the electroweak interactions.
If the scalar field is time-dependent, the vacua with different Chern-Simons numbers are not degenerate. This can be described quantitatively in terms of an effective chemical potential, $`\mu _{\mathrm{eff}}`$, which introduces a bias between baryons and antibaryons,
$$\mu _{\mathrm{eff}}\simeq \frac{\delta _{_{\mathrm{CP}}}}{M_{\mathrm{new}}^2}\frac{d}{dt}\langle \varphi ^2\rangle .$$
(16)
This equation follows from Eq. (15) by integration by parts. Although the system is very far from thermal equilibrium, we will assume that the evolution of the baryon number $`n__B`$ can be described by a Boltzmann-like equation, where only the long-wavelength modes contribute,
$$\frac{dn__B}{dt}=\mathrm{\Gamma }_{\mathrm{sph}}\frac{\mu _{\mathrm{eff}}}{T_{\mathrm{eff}}}-\mathrm{\Gamma }__Bn__B,$$
(17)
where $`\mathrm{\Gamma }__B=(39/2)\mathrm{\Gamma }_{\mathrm{sph}}/T_{\mathrm{eff}}^3`$. The temperature $`T_{\mathrm{eff}}`$ decreases with time because of rescattering: the energy stored in the low-frequency modes is transferred to the high-momentum modes.
The rate $`\mathrm{\Gamma }__B`$, even at high effective temperatures, is smaller than other typical scales in the problem. Indeed, for $`T_{\mathrm{eff}}\simeq 400`$ GeV, $`\mathrm{\Gamma }__B\simeq 0.01`$ GeV, which is small compared to the rate of the resonant growth of the Higgs condensate ($`2\mu m\simeq 40`$ GeV). It is also much smaller than the decay rate of the Higgs into W’s and the rate of W decays into light fermions. Therefore, the last term in Eq. (17) never dominates during preheating and the final baryon asymmetry can be obtained by integrating
$$n__B=\int dt\,\mathrm{\Gamma }_{\mathrm{sph}}(t)\frac{\mu _{\mathrm{eff}}(t)}{T_{\mathrm{eff}}(t)}\simeq \mathrm{\Gamma }_{\mathrm{sph}}\frac{\delta _{_{\mathrm{CP}}}}{T_{\mathrm{eff}}}\frac{\langle \varphi ^2\rangle }{M_{\mathrm{new}}^2},$$
(18)
where all quantities are taken at the time of thermalization. This corresponds to a baryon asymmetry
$$\frac{n__B}{s}\simeq \frac{45\alpha __\mathrm{W}^4\delta _{_{\mathrm{CP}}}}{2\pi ^2g_{\ast }}\frac{\langle \varphi ^2\rangle }{M_{\mathrm{new}}^2}\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{rh}}}\right)^3,$$
(19)
where $`g_{\ast }=g__\mathrm{B}+(7/8)g__\mathrm{F}\sim 10^2`$ is the number of effective degrees of freedom that contribute to the entropy density $`s`$ at the electroweak scale. Taking $`\langle \varphi ^2\rangle \simeq v^2=(246\mathrm{GeV})^2`$, the scale of new physics $`M_{\mathrm{new}}\sim 1`$ TeV, the coupling $`\alpha __\mathrm{W}\simeq 1/29`$, the temperatures $`T_{\mathrm{eff}}\simeq 350`$ GeV and $`T_{\mathrm{rh}}\simeq 70`$ GeV, we find
$$\frac{n__B}{s}\simeq 3\times 10^{-8}\,\delta _{_{\mathrm{CP}}}\frac{v^2}{M_{\mathrm{new}}^2}\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{rh}}}\right)^3\simeq 2\times 10^{-7}\,\delta _{_{\mathrm{CP}}},$$
(20)
consistent with observations for $`\delta _{_{\mathrm{CP}}}\sim 10^{-3}`$, which is a reasonable value from the point of view of particle physics beyond the Standard Model. Therefore, baryogenesis at preheating can be very efficient in the presence of CP violation that comes from new physics at $`M_{\mathrm{new}}\sim 1`$ TeV.
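For completeness, the estimate (19)–(20) is evaluated below with the inputs quoted in the text; $`\delta _{_{\mathrm{CP}}}=10^{-3}`$ then yields an asymmetry of the observed order.

```python
# Sketch: evaluating Eq. (19) with the inputs quoted in the text.
import math

alpha_W = 1.0 / 29.0
g_star = 100.0
v, M_new = 246.0, 1000.0            # GeV
T_eff, T_rh = 350.0, 70.0           # GeV
delta_CP = 1.0e-3

nB_s = (45.0 * alpha_W**4 * delta_CP / (2.0 * math.pi**2 * g_star)
        * (v / M_new)**2 * (T_eff / T_rh)**3)
print(f"n_B/s ~ {nB_s:.1e}")        # ~ 2e-10, the observed order of magnitude
```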
## IV Numerical simulations in (1+1) dimensions
The theoretical analysis presented above was based on the conjecture that the sphaleron transition rate can be described in terms of the effective “temperature” $`T_{\mathrm{eff}}`$ as in equation (13). This assumption is based on the reasoning given above and seems quite plausible. We have also verified the validity of such a description in (1+1)-dimensional numerical simulations.
For simplicity, we consider an Abelian Higgs model in (1+1) dimensions, which was successfully used before for studying physics relevant to baryogenesis. The Lagrangian comprises two scalar fields and a U(1) gauge field:
$`ℒ`$ $`=`$ $`-{\displaystyle \frac{1}{4}}F_{\mu \nu }^2-\kappa |\varphi |^2ϵ_{\mu \nu }F^{\mu \nu }`$ (21)
$`+`$ $`|D_\mu \varphi |^2-\lambda (|\varphi |^2-v^2/2)^2`$ (22)
$`+`$ $`{\displaystyle \frac{1}{2}}(\partial _\mu \sigma )^2-{\displaystyle \frac{1}{2}}\stackrel{~}{m}^2\sigma ^2-g^2\sigma ^2|\varphi |^2,`$ (23)
where $`D_\mu =\partial _\mu -ieA_\mu `$, with $`e`$ the U(1) gauge coupling, and $`ϵ_{\mu \nu }`$ is the totally antisymmetric tensor in (1+1) dimensions. Here CP violation is induced via the $`\kappa \varphi ^{\ast }\varphi ϵ_{\mu \nu }F^{\mu \nu }`$ term, which violates both C and CP. Furthermore, in (1+1) dimensions, the analogue of the chiral anomaly is the anomalous non-conservation of the gauge invariant fermionic current, $`j_F^\mu =\overline{\psi }\gamma ^\mu \psi `$,
$$\partial _\mu j_F^\mu =\frac{e}{4\pi }ϵ_{\mu \nu }F^{\mu \nu },$$
(24)
which serves as a source of B violation.
The corresponding equations of motion are:
$`\partial _\nu F^{\mu \nu }+2\kappa ϵ^{\mu \nu }\partial _\nu |\varphi |^2`$ $`=`$ $`ej_\varphi ^\mu ,`$ (25)
$`D^2\varphi +2\lambda \varphi (|\varphi |^2-v^2/2)+g^2\sigma ^2\varphi `$ $`=`$ $`-\kappa \varphi ϵ_{\mu \nu }F^{\mu \nu },`$ (26)
$`\partial _\mu \partial ^\mu \sigma +\stackrel{~}{m}^2\sigma +2g^2|\varphi |^2\sigma `$ $`=`$ $`0,`$ (27)
where $`j_\varphi ^\mu =i(\varphi ^{\ast }\partial ^\mu \varphi -\varphi \partial ^\mu \varphi ^{\ast })`$ is the charged current of the Higgs field.
The numerical simulations track the real-time evolution of the classical field configurations. The initial conditions are set by preheating as a set of narrow bands in the Higgs power spectrum (Fig. 3). The real-time evolution leads to a gradual redistribution of energy between different modes, including the inflaton field $`\sigma `$ itself. Note that the Higgs field takes a very long time to reach its VEV, see Fig. 9, due to the presence of the CP violating term in Eq. (26), which leads to the production of baryon number before equilibrium. In fact, the full thermalization of the system should take a very long time, while some other processes (e.g. interaction with fermions from the decay of gauge fields and/or Higgs) will induce thermalization via decoherence long before that. Thus, although it is technically possible to reach complete thermalization of the whole system including the inflaton, our simulations are necessarily limited to the vector and Higgs decay time scale, of the order of $`50`$ inflaton oscillations, as in Fig. 4.
### A The sphaleron transition rate and the effective temperature
As expected, the resonant inflaton decay quickly leads to a population of the long-wavelength modes of the Higgs field, see Fig. 3. This happens after only a few oscillations of the inflaton. At this point the long-wavelength modes contain a very large fraction of the total energy, and that leads to a noticeable increase in $`T_{\mathrm{eff}}`$ at the beginning of the resonance, see Fig. 5.
The sphaleron transitions immediately set in. We monitor them both by calculating the Chern-Simons number, $`N_{_{\mathrm{CS}}}=\frac{e}{2\pi }\int A_1dx^1`$ in the temporal gauge $`A_0\equiv 0`$, and also by keeping track of the U(1) winding number of the Higgs field. (Actually, no statistically significant difference between these two quantities was observed.) To get a quantitative estimate of the transition rate we measure the variance of $`N_{_{\mathrm{CS}}}`$, i.e. $`\delta ^2\equiv \langle N_{_{\mathrm{CS}}}^2\rangle -\langle N_{_{\mathrm{CS}}}(t)\rangle ^2`$, over an ensemble of 100 independent runs starting from different field configurations that have the same energy spectrum as shown in Fig. 3. The preparation of the initial configuration and other peculiarities of the numerical procedure will be discussed in detail in a future publication.
The variance of $`N_{_{\mathrm{CS}}}`$ is shown in Fig. 6. Its time derivative $`d\delta ^2/dt=\mathrm{Volume}\times \mathrm{\Gamma }_{\mathrm{sph}}`$ is plotted in Fig. 7. Note that this relation comes naturally from the diffusion of the Chern-Simons number, $`\langle N_{_{\mathrm{CS}}}^2\rangle \simeq \mathrm{Volume}\times \mathrm{\Gamma }_{\mathrm{sph}}t`$, which follows a typical Brownian motion. Note that for initial parameters chosen as in Figs. 3-10, the rate actually increases during the thermalization of the Higgs field. However, for our purposes, it is important that we get a substantial amount of sphaleron transitions right after the beginning of the resonance. One could slow down the after-resonance transitions by decreasing the total energy of the system by a factor of 4 (see Figs. 11-15 below). However, this decreases the net generated asymmetry considerably, due to a decrease of the effective temperature $`T_{\mathrm{eff}}`$, see Fig. 12, and the subsequent decrease in the sphaleron rate just after the resonance, see Fig. 13. These two sets of figures help us gain intuition about the process of baryogenesis during preheating in (1+1) dimensions.
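Operationally, this diffusion analysis amounts to a linear fit of the ensemble variance of $`N_{_{\mathrm{CS}}}`$ against time; a minimal sketch is given below, with synthetic random-walk histories standing in for the actual simulation output.

```python
# Sketch: extracting Gamma_sph from the Brownian growth of the Chern-Simons
# variance, delta^2(t) ~ Volume * Gamma_sph * t. `histories` is a
# (runs x steps) array of N_CS(t); synthetic random walks stand in for it.
import numpy as np

def sphaleron_rate(histories, times, volume):
    """Slope of the ensemble variance of N_CS vs. time, divided by volume."""
    variance = histories.var(axis=0)
    slope = np.polyfit(times, variance, 1)[0]
    return slope / volume

rng = np.random.default_rng(1)
times = np.linspace(0.0, 50.0, 500)
histories = np.cumsum(rng.normal(0.0, 0.1, (100, times.size)), axis=1)
print(f"Gamma_sph ~ {sphaleron_rate(histories, times, volume=64.0):.2e}")
```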
### B The generation of the baryon asymmetry
As is clear from Eqs. (22) and (26), the chemical potential $`\mu _{\mathrm{eff}}\propto \kappa \partial _0\langle \varphi ^{\ast }\varphi \rangle `$ is non-zero only during the resonance. The energy transfer from the inflaton to the Higgs field results in a steady shift of the $`\varphi ^{\ast }\varphi `$ expectation value, see Fig. 9. This shift in the VEV acts as a chemical potential and drives the baryon asymmetry.
The baryon asymmetry generated by the non-equilibrium sphaleron transitions in the presence of a CP violating chemical potential $`\mu _{\mathrm{eff}}`$, see Eq. (16), is observed as a non-zero value of $`N_{_{\mathrm{CS}}}`$ averaged over a computer-generated ensemble. As shown in Figs. 10 and 15, $`N_{_{\mathrm{CS}}}`$ steadily increases and eventually freezes when the expectation value $`\varphi ^{\ast }\varphi `$ approaches a constant value and the chemical potential (16) vanishes. In the early universe, the drift of $`N_{_{\mathrm{CS}}}`$ is eventually interrupted by the decay of the vector and Higgs fields into fermions.<sup>3</sup><sup>3</sup>3The Higgs and vector decays into fermions are not included in our (1+1)-dimensional simulations. There has been recent progress in introducing fermions in (1+1) lattice simulations, but we will leave such developments for future work. This leads to thermalization and, as long as the reheat temperature is sufficiently low, there is no further wash-out of the baryon asymmetry. We note in passing that, in our numerical simulations, the Chern-Simons number attained at the end (i.e. the final baryon number) is approximately linearly dependent on the CP violating parameter $`\kappa `$, and, therefore, our estimate can be extrapolated to very small values of $`\kappa `$.
## V Conclusion
There is no empirical evidence that a thermal electroweak phase transition took place in the early universe. However, since the only well-established source of baryon number non-conservation is the gauge sector of the Standard Model, one could argue that electroweak baryogenesis is the only explanation for the baryon asymmetry of the universe that does not invoke any unknown B-violating new physics. This reasoning would favor the usual electroweak phase transition, followed by electroweak baryogenesis, on aesthetic grounds.
In this paper we have presented an appealing alternative. We have shown that a new kind of electroweak baryogenesis, which still uses only the known sources of baryon number violation, is possible even if the reheat temperature after inflation was too low for a thermal restoration of the SU(2)$`\times `$U(1) gauge symmetry. Moreover, the departure from thermal equilibrium, necessary for generating a non-zero baryon number density, is naturally achieved at preheating after an electroweak-scale inflation. Sphaleron transitions take place during preheating, before the thermalization of the plasma. The baryon asymmetry can be generated through sphalerons in a manner similar to the usual electroweak baryogenesis. When the universe reaches thermal equilibrium, the temperature can be small enough to suppress further baryon-violating processes, so that the baryon asymmetry is not washed out.
## Acknowledgements
J.G.B. thanks the organizers of the ITP workshop on Non-equilibrium quantum fields, Santa Barbara (January 1999), where this work was presented, for a very stimulating atmosphere, and the participants of the workshop for generous discussions. J.G.B. also thanks Belen Gavela and Andrei Linde for enlightening comments and suggestions. J.G.B. is supported by a Research Fellowship of the Royal Society of London. D.G. thanks D.V. Semikoz and M.M. Tsypin for stimulating discussions. D.G. is grateful to the CERN TH division for kind hospitality. D.G.’s work was also supported in part by RBRF grant 98-02-17493a. A.K. thanks J.M. Cornwall for helpful discussions. The work of A.K. was supported in part by the US Department of Energy grant DE-FG03-91ER40662. We thank Gia Dvali and Igor Tkachev for many valuable comments.
## Note added
After our paper was finished, we learned about a recent paper that also discussed baryogenesis after an electroweak-scale inflation.
# Density Modulations and Addition Spectra of Interacting Electrons in Disordered Quantum Dots
## Abstract
We analyse the ground state of spinless fermions on a lattice in a weakly disordered potential, interacting via a nearest neighbour interaction, by applying the self-consistent Hartree-Fock approximation. We find that charge density modulations emerge progressively when $`r_s\gtrsim 1`$, even away from half-filling, with only short-range density correlations. Classical geometry dependent magic numbers can show up in the addition spectrum, which are remarkably robust against quantum fluctuations and disorder averaging.
PACS Numbers: 73.20Dx, 73.23Hk
The interplay of disorder and interactions in two dimensional Fermi systems is currently a central problem in condensed matter physics. Mesoscopic systems provide a unique forum for analysing ground state properties, as it is possible to access the regime $`kT\ll \mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the mean single particle level spacing. Examples include the study of low temperature persistent currents and magnetic response in small quantum rings and dots, and low bias measurements of the d.c. response and capacitance of weakly coupled quantum dots. The latter experiments made it possible to access directly the energy differences $`\mu _N`$ between ground states of $`N`$ and $`N-1`$ particles: $`\mu _N=E(N)-E(N-1)`$. The addition spectrum, $`\sum _i\delta (\mu -\mu _i)`$, depends sensitively on the nature of the mesoscopic ground state.
Resonant tunnelling measurements of the addition spectrum have resulted in some interesting observations. Whilst the mean peak spacings are well described by the constant interaction model, the fluctuations are not. In Ref. the averaging was carried out over $`N`$, whereas in Ref. the results were also averaged over sample geometry. The experimental data indicates the existence of atypical addition spectrum spacings at certain values of $`N`$, suggesting that averaging over disorder may not be equivalent to averaging over $`N`$ (the ergodicity principle is violated). Theoretical and numerical studies attempted to address various aspects of the problem.
In the capacitance experiments of Ref., the measured addition spectra display bunching, an indication that the Coulomb blockade becomes negative between one or more consecutive electron addition peaks; in the experiment, these peaks then coalesce. Such bunching is in direct conflict with the naive picture combining the constant interaction model with ergodic effective single-particle wavefunctions. It has been shown that a classical charge model can reproduce many of the observed effects, but there is currently no quantum mechanical explanation, as the experiments are carried out at densities considered too high to form a Wigner solid (in the case of a bare Coulomb interaction, see Ref.; for a short-ranged potential one might expect a Wigner solid to be less stable).
Motivated by these experiments, but not attempting to reproduce specific details thereof, we analyse the nature of the ground state by applying the self-consistent Hartree-Fock (SCHF) approximation. We are thus able to go beyond the random phase approximation (RPA) with perturbation theory (valid for large dimensionless conductance $`g`$ and small $`r_s`$), whilst considering larger systems than is feasible by exact methods. Experimental values of $`r_s>1`$ have indeed been reported. Starting from a noninteracting model, we find that as the interaction strength is increased such that $`r_s\gtrsim 1`$ (but still too weak to form a Wigner solid), the electron gas crosses over to a regime where: (i) there are significant spatial density modulations; (ii) density-density correlation functions seem to saturate, defining only short range order; (iii) the addition spectrum becomes strongly $`N`$ dependent, with magic numbers for which $`\mathrm{\Delta }_2(N)\equiv \mu _{N+1}-\mu _N`$ exhibits sharp maxima that coincide with the related classical charge model.
Recent reports of exact numerical studies have emphasised that the properties of quantum dots which are not reproduced by effective single-particle random-matrix-like theories are associated with the emergence of short range correlations. Our results support this claim, but the main thrust here is related to (iii): some deviations from RPA behaviour have a direct classical electrostatic counterpart. The signature of the latter is not totally washed out by quantum fluctuations even far from the Wigner crystallisation threshold.
We consider the following tight binding Hamiltonian for spinless fermions with periodic boundary conditions:
$$H=\sum _iw_ic_i^+c_i-t\sum _{\langle ij\rangle }c_i^+c_j+\frac{U}{2}\sum _{\langle ij\rangle }c_i^+c_j^+c_jc_i$$
(1)
where $`\langle ij\rangle `$ denotes pairs of nearest neighbours, $`w_i`$ is the random on-site energy in the range $`[-W/2,W/2]`$, and $`t`$ the hopping matrix element. All lengths are measured in units of the lattice constant $`a`$, so that $`U=e^2/a`$ and $`t=\hbar ^2/2ma^2`$. For low filling (i.e. a parabolic band) we find $`r_s=U/(t\sqrt{4\pi \nu })`$, where $`\nu =N/A`$ is the filling factor for $`N`$ electrons in an area $`A`$. For the non-interacting system we find $`g=k_Fl/2=96\pi \nu (t/W)^2`$ by applying the Born approximation (valid for $`1\ll g\ll A`$); $`k_F=\sqrt{4\pi \nu }`$ and $`l`$ is the elastic mean free path. In the capacitance measurements, the $`2d`$ quantum dot was sandwiched between a metallic source (heavily doped $`n^+`$ GaAs) and drain (Cr/Au), separated, at distances comparable with the mean particle separation, by tunnel barriers. To account for external sources of screening (taken as half planes), one can insert a bare interaction between electrons in the dot that is dipolar ($`1/r^3`$) at distances greater than the dot-to-gate separation when there is only one close gate, and, in the case of two close gates, with exponentially small long range interactions. Here we model such interactions with a nearest-neighbour pair potential.
The ground state is obtained in the SCHF approximation, over a range of densities and disorder strengths at zero magnetic field. The generalised inverse participation ratio (GIPR) is then calculated according to the following definition:
$$\mathcal{I}=\frac{1}{\nu ^2A}\sum _i\left\langle \rho (\mathbf{r}_i)^2\right\rangle ,$$
(2)
where $`\rho (\mathbf{r}_i)`$ denotes the expectation value of the total density at the lattice site $`i`$. The angle brackets correspond to an average over the disorder ensemble. The GIPR provides a convenient measure of the degree of density modulation: in the limit of a perfectly flat density profile it takes the value unity, and it increases for a modulated density. The maximal value of the GIPR is obtained when all the charge is concentrated on only $`N`$ sites, in which case $`\mathcal{I}=1/\nu `$.
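As an illustration of this definition, the following minimal sketch (ours, not part of the original calculation; the test densities are invented) evaluates the disorder-averaged GIPR and checks the two limiting values quoted above:

```python
import numpy as np

def gipr(rho_samples):
    """GIPR of Eq. (2): (1/(nu^2 A)) * sum_i <rho(r_i)^2>, with <...> a
    disorder average. rho_samples has shape (n_disorder, A), each row
    normalised so that it sums to the particle number N."""
    n_dis, A = rho_samples.shape
    nu = rho_samples[0].sum() / A
    return (rho_samples ** 2).mean(axis=0).sum() / (nu ** 2 * A)

A, N = 36, 9
flat = np.full((1, A), N / A)                     # perfectly flat profile
peaked = np.zeros((1, A)); peaked[0, :N] = 1.0    # all charge on N sites
print(gipr(flat))    # -> 1.0
print(gipr(peaked))  # -> 1/nu = 4.0
```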
The GIPR is plotted for a range of disorder strengths in Fig. 1. For $`\nu =1/4`$ it increases rapidly between $`U\simeq 1`$ and $`U\simeq 5`$, depending on the disorder strength, then gives way to a weak interaction dependence for $`U\gtrsim 5`$. For comparison we also plot results for an identical system but with bare Coulomb interactions, such that the two interaction potentials are equal to $`U`$ between nearest neighbour sites: the relative rapidity of the increase of $`\mathcal{I}`$ for nearest neighbour interactions is clear. The increase in the GIPR signals an increase in the spatial modulation of the total electron density; we shall refer to the increased density modulation at finite $`U`$ as a charge density modulation (CDM).
At zero interaction we find $`\mathcal{I}-1\propto 1/g`$ for large $`g`$ (not shown). Within our numerical accuracy we were unable to find a consistent size dependence in the GIPR, suggesting that disorder is the dominant mechanism controlling the small to large $`U`$ cross-over, as seen in Fig. 1.
The GIPR yields no information on the spatial structure of the ground state, for which we evaluate the density-density correlation function defined as
$$\mathcal{C}(\mathbf{r})=\left\langle \rho (\mathbf{r})\rho (0)\right\rangle _c/\left\langle \rho (0)^2\right\rangle _c.$$
(3)
The subscript $`c`$ indicates that only connected averages are included; here, due to the homogeneity of the disorder-averaged potential, the correlation function depends only on the vector separation $`\mathbf{r}`$. We only consider $`\mathbf{r}`$ directed along a lattice vector $`(1,0)`$. A typical result for the correlation function is plotted in Fig. 2. As the interaction strength is increased, short-range correlations develop and then saturate. The underlying square lattice excludes the possibility of observing incipient Wigner crystal fluctuations, which possess the symmetry of a triangular lattice.
Comparing Figs. 1 and 2, one can see that the short-range correlations develop over the same range of interaction strength as the rapid increase in $`\mathcal{I}`$. We did find that on rare occasions a further rearrangement occurs at larger interaction values, but it is not clear whether this is a genuine effect, which for larger systems would become correspondingly less rare, or a manifestation of metastable configurations.
Let us now look at the longer range behaviour of $`\mathcal{C}(\mathbf{r})`$. At half filling it has been claimed that in clean infinite lattice systems a second order transition to a crystalline state occurs at strong interactions. In disordered systems, evidence of at least short range order has been seen in exact calculations on small systems. Within the SCHF approximation we find no decay of correlations. It is well known that at half filling, nesting of the Fermi surface leads to a $`2k_F`$ charge density wave instability, but away from half filling this nesting does not occur. In Fig. 2 it can be seen that, in the presence of disorder, there exists no long range order in the SCHF ground state for $`\nu =1/4`$. This, however, is also true of the related classical system (i.e. $`t=0`$ and $`W/U\to 0`$ in the Hamiltonian (1)), where one expects the formation of a non-crystalline solid. One way to establish whether the electrons possess solid- or liquid-like correlations is to analyse the excited states of the system. However, to show that classical results can provide information on the SCHF ground state away from half filling, at least when the particle packing is compact, we consider the appearance of geometrical frustration, where the ground state of the classical system contains line defects with respect to a pure crystal. These defects lead to the disappearance of long-ranged order, but at the same time give rise to magic filling factors where $`\mathrm{\Delta }_2(N)`$ exhibits large fluctuations.
This brings us to the central observation of this study, namely the strong geometry and filling factor dependent fluctuations that can arise in the average addition spectrum spacing $`\mathrm{\Delta }_2(N)`$ as the interaction strength is increased. Consider first a collection of classical charges, on a square lattice, with nearest neighbour interactions. If the lattice is a torus with $`2n\times 2m`$ sites, it is possible to insert up to $`2nm`$ particles without incurring an energy cost. The remaining $`2nm`$ particles cost an additional $`4U`$ each to add, so that $`\mathrm{\Delta }_2(N)`$ displays a peak, $`\mathcal{O}(U)`$, at $`N=2nm`$. If on the other hand one of the sides (or both) of the lattice is odd (e.g. $`(2n+1)\times (2m+1)`$), the maximum number of particles that can be added without nearest neighbours is reduced. It is not difficult to see that such a maximally filled configuration contains a line defect, and long range order is lost. In other words, the lattice of sites without nearest neighbours is incommensurate with the underlying lattice. In the commensurate case, the quantum system also shows a peak in $`\mathrm{\Delta }_2(N)`$ at half filling, but the nesting of the Fermi surface in the non-interacting system also makes this filling special. We show below that in the incommensurate case, within the SCHF approximation, remnants of the peaks at the magic filling factors of the related classical model are visible far from the classical limit, despite the lack of nesting.
We consider a lattice of the type $`(2n+1)\times (2n+1)`$ as an example; the predictions for other incommensurate lattices are easily obtained. In the classical limit with nearest neighbour interactions, the first $`n(2n+1)`$ particles can be added with no interaction energy cost, the next $`2n+1`$ particles cost an additional $`2U`$ each, and the rest cost an additional $`4U`$ each. As a result, in the classical limit, $`\mathrm{\Delta }_2(n(2n+1))`$ (as well as $`\mathrm{\Delta }_2((n+1)(2n+1))`$) is significantly larger than all other values of $`\mathrm{\Delta }_2(N)`$. In our calculations we include a trivial constant interaction term to make the results easier to read. In Fig. 3, we plot some typical results for $`\mathrm{\Delta }_2(N)`$ for a $`7\times 7`$ lattice, which show that for $`U\gtrsim 2`$ remnants of this classical effect can be seen clearly at the predicted filling $`N=21`$.
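To make the classical counting explicit, the sketch below (our illustration; it encodes only the energy-cost rules just stated, not the full SCHF calculation) recovers the magic fillings of the $`(2n+1)\times (2n+1)`$ torus:

```python
def mu_classical(N, n, U=1.0):
    """Interaction part of the classical chemical potential on a
    (2n+1)x(2n+1) torus with nearest-neighbour repulsion U: the first
    n(2n+1) particles are free, the next 2n+1 cost 2U each, the rest 4U."""
    if N <= n * (2 * n + 1):
        return 0.0
    if N <= (n + 1) * (2 * n + 1):
        return 2.0 * U
    return 4.0 * U

def delta2(N, n, U=1.0):
    """Addition spectrum spacing Delta_2(N) = mu_{N+1} - mu_N."""
    return mu_classical(N + 1, n, U) - mu_classical(N, n, U)

n = 3  # 7x7 lattice, 49 sites
print([N for N in range(1, 48) if delta2(N, n) > 0])  # -> [21, 28]
```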
Similar behaviour has been observed for other sample sizes and geometries. Although these results correspond to a density regime where quantum fluctuations are predominant, the structure in $`\mathrm{\Delta }_2(N)`$ agrees qualitatively with that of the classical counterpart. One might expect that extending the range of the interaction will give rise to a more intricate classical structure, but with correspondingly smaller amplitude, which is thus more easily washed out by quantum fluctuations. This question is left for a future study. In previous work the results of Ref. were reproduced by interacting classical charges in a parabolic potential. In that case the source of magic numbers included both the existence of topological defects in the ground state and the interplay with the confining potential. It was not a priori clear why such a classical model should prove useful, but our work suggests that quantum fluctuations strong enough to destabilise a Wigner solid may not completely wash out these effects.
In summary, we show that the metallic ground state develops charge density modulations, controlled by the electron-electron interaction, at densities $`r_s\simeq 1`$, depending on disorder. The development of the CDMs with increasing $`r_s`$ is more rapid for short-range interactions, presumably because of the larger gradient of the interaction potential. We also show that away from half filling, the CDMs are associated with short range order only. Elsewhere, it has been demonstrated that the existence of these CDMs results in unusual fluctuations of $`\mathrm{\Delta }_2`$ over the disorder ensemble. Finally, we demonstrate that topological defects of the equivalent classical system persist in the CDM, and that they result in strong filling factor and geometry dependent fluctuations in $`\mathrm{\Delta }_2`$, clearly visible for $`U\gtrsim 2`$. It seems clear that the ergodicity principle fails in this case, and so disorder averaging and averaging over $`N`$ are not equivalent. These results lend support to the classical analysis of Ref., which suggests that the behaviour seen in the experiments of Ref. is due to topological defects in the classical ground state configuration. We stress, however, that bunching is not generated in the geometry that we consider.
It is a pleasure to acknowledge discussions with A. Finkelstein, S. Levit, B. Shklovskii and U. Sivan. This work was supported by EU TMR fellowship ERBFMBICT961202, the German-Israeli Foundation, the DFG as part of SFB195, the Israel Academy of Sciences and Humanities Center for ‘Strongly Correlated Interacting Electrons in Restricted Geometries’ and the Minerva Foundation. Many of the calculations were made using IDRIS facilities.
## 1 Introduction
Although this Meeting is devoted to the discussion of the nature of the Ultraluminous Far-IR Galaxies at low and high redshift, and their role in galaxy evolution, I will not directly tackle this topic in my presentation. I will instead summarize and discuss the properties of a complementary sample of galaxies: the high-z (z$`>`$2), UV-bright systems (, ). The number of known z$`>`$2 galaxies is now large enough that they can be classified as a population, and they have been used to infer the past star formation history of the Universe (, ). As with all statistical studies, observational incompletenesses and selection biases are a concern. In the case of the high-z population, volume corrections, luminosity selections and, last but potentially most important, dust obscuration effects have been discussed by a number of authors (e.g., , , , , ). Here I will highlight the impact of dust obscuration on the high-z galaxies and the inferred star formation history. I hope in this way to set the ground for a comparison of the high-z UV-bright galaxy population with the high-z Ultraluminous Far-IR galaxies recently discovered with SCUBA (, , , ; see also the contributions of I. Smail, of A. Blain, and of M. Rowan-Robinson to these Proceedings).
## 2 The Lyman-Break Galaxies and their Low-Redshift Counterparts
Since it was first presented (), the Lyman-break technique has stood out as one of the most powerful tools for identifying high-z galaxy candidates. As of this writing, more than 550 candidates have been spectroscopically confirmed to be at z$`\sim `$3 over an area of $`\sim `$0.3 square degrees, and about 50 at z$`\sim `$4 over an area of $`\sim `$0.23 square degrees (). In number density, the bright ends of the z$`\sim `$3 and z$`\sim `$4 populations have values similar to those of the local L$`>`$L$`^{*}`$ galaxies, for a flat cosmology. Even if merging may have played a role in changing these values over time, the number of stars contained in Lyman-break galaxies at z$`>`$2 alone accounts for $`\sim `$20–30% of all the stars known today. The basic fact is that the Lyman-break galaxy populations are a significant fraction of the total galaxy population today. Thus, understanding the nature of the Lyman-break galaxies remains a gateway to understanding the evolution of galaxies.
The identification of high-z candidates is based on the detection of the Lyman break at 912 Å, which is the strongest discontinuity in the stellar continuum of star-forming galaxies. A galaxy at, say, z=3 will have the Lyman break redshifted to 3648 Å. If a pair of filters is chosen to straddle the break, the galaxy will appear extremely red in this color. In order to avoid low-z interlopers as much as possible, one or more filters are generally added longward of the Lyman break, to select only candidates which are blue in this (these) additional color(s). With a careful selection of the color criteria, the Lyman-break technique is extremely successful at identifying high-z candidates; spectroscopic confirmations give a $`\sim `$95% success rate for the z$`\sim `$3 sample and a $`\sim `$80% success rate for the z$`\sim `$4 sample (, , ). The lower success rate at z$`\sim `$4 is due to the incidence of low-z interlopers, namely elliptical galaxies at z$`\sim `$0.5–1 whose 4,000 Å break falls inside the selection window of the filters.
While the determination of the intrinsic nature of the Lyman-break galaxies, whether they are massive systems or galaxy fragments, and what kind of progenitors they are, is still a source of heated debate (e.g., , , ), the identification of their observational low-z counterparts appears less controversial.
By selection, Lyman-break galaxies are UV-bright, actively star-forming systems, with a preferentially blue spectral energy distribution (SED). Observed star formation rates (SFRs) range from a few to 50 M$`_{\odot }`$ yr$`^{-1}`$, for a Salpeter Initial Mass Function (IMF) in the range 0.1–100 M$`_{\odot }`$ (). This range of values is typical of what is observed in local, UV-bright starburst galaxies (e.g., ). The restframe UV and B-band half-light radii are around 0.2–0.3 arcsec, which corresponds to spatial radii of $`\sim `$1–3 h$`_{50}^{-1}`$ kpc, depending on q$`_0`$ (, ). The similarity of the half-light radii at both UV and B suggests that the UV is a reliable tracer of the full extent of the light-emitting body. Ground-based optical spectra, which correspond to the restframe 900–1800 Å range for a z$`\sim `$3 galaxy, show a wealth of absorption features, and sometimes P-Cygni profiles in the CIV(1550 Å) line (cf. the figures in ), typical of the predominance of young, massive stars in the UV spectrum. The currently limited ground-based near-IR spectroscopy (e.g. ) has revealed nebular line emission in these galaxies. Hybrid line equivalent widths constructed using the UV flux density f(UV) as denominator, namely EW’(Å)=F(line)/f(UV), show that the observed values for the high-z galaxies fall in the loci observed for local starburst galaxies (). In summary, the observational properties of the Lyman-break galaxy population fully resemble, in the restframe UV-optical range, those of low-redshift, UV-bright starburst galaxies ().
The Lyman-break galaxies share another global characteristic with the local starbursts. If we parametrize the observed UV stellar continuum with a power law, F($`\lambda `$)$`\propto \lambda ^\beta `$, Lyman-break galaxies cover a large range of $`\beta `$ values, roughly from $`-`$3 to 0.4, namely from very blue to moderately red (Figure 1, left panel). This range is not very different from that covered by the local, UV-bright starbursts (Figure 1, right panel). Population synthesis models (e.g. ) indicate that a dust-free, young starburst or constant star-formation population invariably has values of $`\beta <-`$2.0, for a vast range of metallicities. What, then, causes the UV stellar continuum of Lyman-break galaxies to be redder than expected for a young star-forming population?
## 3 Dust Reddening and Obscuration in Local Starbursts and in Lyman-break Galaxies
There are two main causes for a red UV stellar continuum: 1) ageing of the stellar population; 2) presence of dust (variations of the intrinsic IMF will not be discussed here).
An ageing stellar population loses the high-mass, hot stars first, and then, progressively, lower-mass and colder stars. In the process, the UV stellar continuum becomes redder and redder and, also, the 4,000 Å break increases in strength. This break spans a small wavelength range, and thus is unaffected by dust reddening. The strength of the 4,000 Å break is therefore a powerful constraint on the age of the stellar population. The local starbursts can have very red UV continua ($`\beta >`$0), while still showing rather small 4,000 Å breaks, telltales of the presence of a young stellar population ($`<10^7`$ yr) or of constant star formation (over timescales $`\sim 10^9`$ yr, see ). Ageing of the stellar population is not the main reason for the presence of a red UV SED in local starbursts. Broad-band J, H, and K observations provide limited information on the strength of the 4,000 Å break in high-z galaxies, yet accurate enough to exclude ageing as a general cause for the red UV spectra in this case as well (Dickinson 1997, priv. communication).
Dust reddening is then the likely cause of red UV spectra, as demonstrated by the correlation between $`\beta `$ and color excess (Figure 1, right panel). Dust reddening is generally a close-to-unsolvable problem for unresolved stellar populations (e.g., distant galaxies), because the effective obscuration is a combination of the dust distribution relative to the emitters, scattering, and the environment-dependence of the extinction (, ). The situation gets better in the case of starbursts, because the high energy environment is generally inhospitable to dust. Shocks from supernovae can destroy dust grains, while gas outflows can eject significant amounts of interstellar gas and dust from the site of star formation. If little diffuse dust is present within the star-forming region, the main source of opacity will come from the dust surrounding the region. Parametrizing the ‘net’ obscuration of the stellar continuum as $`F_{obs}(\lambda )=F_0(\lambda )10^{-0.4E_s(B-V)k(\lambda )}`$, with F$`_{obs}`$($`\lambda `$) and F$`_0`$($`\lambda `$) the observed and intrinsic fluxes, respectively, the obscuration in starbursts is expressed as:
$$k(\lambda )=\{\begin{array}{cc}2.656(-2.310+1.315/\lambda )+4.88\hfill & 0.63\mu \mathrm{m}\le \lambda \le 1.60\mu \mathrm{m}\hfill \\ 2.656(-2.156+1.509/\lambda -0.198/\lambda ^2+0.011/\lambda ^3)+4.88\hfill & 0.12\mu \mathrm{m}\le \lambda <0.63\mu \mathrm{m}\hfill \end{array}$$
(1)
The connection between the color excess E$`_s`$(B$`-`$V) and the measured spectral slope $`\beta `$ is given by the correlation in Figure 1 (right panel).
It is worth stressing that, although dust reddening corrections for starbursts are parametrized above as a foreground dust screen, Equation 1 has been derived with NO assumptions on the geometrical distribution of the dust within the galaxies. Equation 1 is a purely empirical result (, ), which includes in a single expression any effect of dust geometry, scattering, and environment-dependence of the dust composition.
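As a practical illustration, Equation 1 is straightforward to evaluate numerically; the sketch below (our own, using the mean color excess E$`_s`$(B$`-`$V)=0.15 quoted later for the z$`\sim `$3 sample) reproduces the attenuations and flux correction factors discussed in the text:

```python
import math

def k_starburst(lam_um):
    """Starburst obscuration curve k(lambda) of Equation 1 (lambda in microns)."""
    if 0.63 <= lam_um <= 1.60:
        return 2.656 * (-2.310 + 1.315 / lam_um) + 4.88
    if 0.12 <= lam_um < 0.63:
        x = 1.0 / lam_um
        return 2.656 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.88
    raise ValueError("outside the 0.12-1.60 micron validity range")

def flux_correction(lam_um, ebv):
    """Multiplicative correction F_0/F_obs = 10^(0.4 * E_s(B-V) * k(lambda))."""
    return 10.0 ** (0.4 * ebv * k_starburst(lam_um))

for lam in (0.16, 0.28):  # 1,600 A and 2,800 A
    A_lam = 0.15 * k_starburst(lam)
    print(f"{lam*1e4:.0f} A: A = {A_lam:.2f} mag, correction x{flux_correction(lam, 0.15):.1f}")
# -> about 1.6 mag (x4.4) at 1,600 A and 1.2 mag (x3.1) at 2,800 A
```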
Equation 1 provides a recipe for correcting the observed SEDs for the effects of dust reddening. Does it fully account for the dust obscuration as well? In other words, does Equation 1 completely recover the light from the region of star formation, or does it miss the flux from dust-enshrouded regions? The answer to these questions is a positive one: Equation 1 is able to recover, within a factor $`\sim `$2, the UV-optical light from the entire star-forming region of UV-bright, i.e. moderately obscured, starbursts.
We can prove the above statement by studying the FIR emission of the local starbursts. Dust emits in the Far-IR the stellar energy absorbed in the UV-optical. However, the Far-IR emission is not, by itself, an unambiguous measure of the opacity of the galaxy, as the intensity of the dust emission is also a function of the SFR of the galaxy. A good measure of the total opacity of the galaxy is instead provided by the ratio FIR/F(UV) (). The Far-IR flux, FIR, and the UV flux, F(UV), are both proportional to the SFR, but their sensitivities to dust have opposite trends: roughly, FIR increases while F(UV) decreases for increasing amounts of dust, although the details of the trends are dictated by the geometrical distribution of the dust. Under the assumption that the foreground dust screen parametrization is valid, the FIR/F(UV) ratio is related to the UV attenuation in magnitudes, A(UV), via ():
$$FIR/F(UV)=1.19\left[10^{0.4A(UV)}-1\right],$$
(2)
where the constant value 1.19 is the combination of the ratio of the bolometric stellar luminosity to the UV luminosity and of the ratio of the bolometric dust emission to the FIR emission. Since F(UV) and FIR (e.g., from IRAS) are measurable in galaxies, as is $`\beta `$, the UV attenuation can be related to the UV spectral slope via Equation 2 (). Figure 2 shows A(UV) measured at 1,600 Å as a function of $`\beta `$ for a sample of local starbursts. Overplotted on the data is the trend predicted by Equation 1, with $`\beta `$ related to E$`_s`$(B$`-`$V) using Figure 1 and the limiting case $`\beta _0=-`$2.1 for E$`_s`$(B$`-`$V)=0 (). Equation 1 and Figure 1 have no adjustable parameters. The agreement between the data and the predicted trend is therefore impressive, especially if we take into account that the latter is a recipe for reddening, and could in principle not account for the entire dust obscuration. Discrepancies at the low end of the locus of data points in Figure 2 are understandable in terms of sample incompletenesses.
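For reference, Equation 2 is trivially inverted to estimate the UV attenuation from an observed FIR/F(UV) ratio; the ratios in the sketch below are made-up examples, not the data points of Figure 2:

```python
import math

def a_uv(fir_over_fuv):
    """Invert Equation 2: A(UV) = 2.5 * log10(1 + FIR/F(UV) / 1.19)."""
    return 2.5 * math.log10(1.0 + fir_over_fuv / 1.19)

for ratio in (0.5, 2.0, 10.0, 50.0):
    print(f"FIR/F(UV) = {ratio:5.1f}  ->  A(UV) = {a_uv(ratio):.2f} mag")
```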
How does all this apply to Lyman-break galaxies? The entire purpose of obtaining dust obscuration corrections for the high-z galaxy sample is to recover the intrinsic UV emission of the galaxies, therefore deriving a more meaningful UV luminosity function, a more accurate value of the SFR for each object (which bears on the understanding of the nature of these objects), and, finally, the intrinsic cosmic SFR density (). Figure 1 (right panel) shows the observed UV spectral slopes of the z$`\sim `$3 galaxies. Those slopes can be ‘translated’ into a value of the effective color excess, which is calculated to have a mean value E$`_s`$(B$`-`$V)$`\sim `$0.15 for the z$`\sim `$3 galaxies, or an attenuation A(1600)$`\sim `$1.6 mag (, see also ). Incidentally, this mean value of E$`_s`$(B$`-`$V) is similar to that observed in the local starburst sample (); this is purely coincidental, and is born of the fact that the two samples of galaxies cover similar ranges of $`\beta `$. A similar mean value of the effective color excess has been obtained by Pettini et al. () from the analysis of the nebular emission lines in the NIR spectra of a small sample of Lyman-break galaxies. Correcting the observed UV spectra for dust attenuation increases the median SFR by a factor $`\times `$5 in the z$`>`$2 galaxies and $`\times `$3 in the z$`\sim `$1 galaxies (Figure 3). The difference between the correction factors at low and high z is entirely due to the different wavelengths at which the two redshift regimes are probed: $`\sim `$1,600 Å for the high-z galaxies and $`\sim `$2,800 Å for the lower-z galaxies. The dust correction factors have been ‘measured’ only for the high-z sample and have been assumed to hold unchanged for the z$`\sim `$1 sample, modulo the change in wavelength (see discussion in ).
## 4 The Evolution of the Stellar, Metal, and Dust Content of Galaxies
The next question in line is whether a median attenuation of $`\sim `$1.6 mag in the UV is reasonable at z$`>2`$, when galaxies were at most a few Gyr old and, presumably, metal- and dust-poor. Both the Cosmic Far-IR Background (CIB) detected by COBE (, ) and the FIR-bright galaxies detected by SCUBA at z$`\gtrsim `$1 (, , , ) demonstrate that dust was present at high redshift. The luminosity of the CIB is about 2.5 times higher than the luminosity of the UV-optical Background (), implying a proportionally higher contribution of the redshift-integrated dust emission. However, neither the CIB nor the SCUBA galaxies tell us how the dust content of galaxies has evolved with redshift. In the case of the SCUBA galaxies, the redshift and luminosity distributions and the AGN fraction of the sources will need to be tackled before they can provide such information.
The time evolution of the UV luminosity density of galaxies and of the derived SFR density (Figure 3) can be used to constrain the metal and dust enrichment of galaxies and, therefore, the intrinsic SFR density (). The stars which produce the observed UV luminosity at each redshift also produce metals and dust with negligible delay times, at most 100–200 Myr in the case of dust (). The obscuration by dust makes the observed UV flux lower than the true flux. Once the effects of the dust on the observed UV emission are evaluated and removed, a new SFR density is calculated. The procedure is repeated iteratively until convergence (). A number of observational constraints are used in the model: no more than $`\sim `$10% of the baryons are in galaxies; inflows/outflows keep the z=0 metallicity of the gas in galaxies at about solar, with a $`\sim `$15% mean residual gas content, and the z$`\sim `$2–3 metallicity at about 1/10–1/15 solar (); the intrinsic SFR density at z=0 must be comparable with that measured from H$`\alpha `$ surveys (); the dust emission must reproduce the observed CIB and not exceed the FIR emission of local galaxies.
These constraints are still not enough to yield a unique solution; one of the missing ingredients is the behavior of the SFR density at z$`>`$4, where there are no data points. Different assumptions lead to different intrinsic SFR histories. The range of solutions is bracketed by Models A and B in Figure 3. Figure 4 shows, for each of the two solutions, the evolution of the dust column density in the average galaxy and the contribution to the CIB at selected wavelengths as a function of redshift. The latter is, however, dependent on the assumptions about the intrinsic dust emission SED, which is not well constrained.
Model B resembles the SFR density derived from the obscuration corrected Lyman-break galaxies (Figure 3). This demonstrates that attenuations of about 1.6 mag in the UV are perfectly reasonable within the framework of a simple model of stellar and dust content evolution in galaxies.
## 5 The Intrinsic Star Formation History of the Universe and the SCUBA galaxies
The shape of Model B resembles the SFR history expected for the ‘monolithic collapse’ model of galaxy evolution, although the expectations of hierarchical galaxy formation models are not ruled out (, see discussion in ; see also the contribution of G. Kauffmann to these Proceedings). Thus the monolithic-versus-hierarchical dilemma is still unsolved by our current knowledge of the SFR history. Values of the SFR density at higher redshifts (z$`\sim `$5) will be able to place more definite constraints on the galaxy evolution scenario.
The final question we want to ask is what fraction of the total SFR density the Lyman-break galaxies represent at each given redshift, and how much of the SF is so deeply buried in dust that its accounting is missing. The obscuration curve discussed in Section 3 is technically valid only for UV-bright star-forming galaxies; it cannot, obviously, correct for objects which are missing from the sample because they are too dusty. On the one hand, Model B is only slightly in excess of the obscuration-corrected SFR density calculated from the z$`>`$2 galaxies (by a negligible amount within the observational uncertainties), and the SFR history of Model B is perfectly sufficient to reproduce the observed CIB. It appears that the fraction of SF missed by considering the Lyman-break galaxies only is relatively small. On the other hand, a number of considerations invite us to take this as a preliminary statement. We know that at low redshift a fraction of the star formation is deeply buried in dust, and is obscured even at IR wavelengths. The same could happen at high redshift, and the SCUBA sources seem to suggest that large dust contents are not impossible in high-z galaxies. The angular density of the SCUBA sources is about 1/2–1 of that of the z$`\sim `$3 galaxies, and they are spread over a (possibly) much larger redshift range than the Lyman-break galaxies, namely over $`\sim `$5–10$`\times `$ larger cosmological volumes. The SCUBA sources are then $`\sim `$5–20% of the Lyman-break galaxies by number density, but are forming stars with SFR$`\sim `$300–500 M$`_{\odot }`$ yr$`^{-1}`$. Thus the SCUBA galaxies could still add $`\sim `$25–100% to the SFR density of the obscuration-corrected Lyman-break galaxies, although an assessment of the AGN contribution is still missing.
Because of their characteristics, the two populations, the UV-bright Lyman-break galaxies and the FIR-bright SCUBA sources, are likely to be complementary, rather than overlapping. At the level of current knowledge, it appears that about 50–80% of the SF in the early phases of the Universe is accounted for by the obscuration-corrected Lyman-break galaxies; the remaining 20–50% of the SF may be contained in FIR-bright sources. However, more investigation of the nature, luminosity distribution and redshift placement of the SCUBA sources is needed before these figures can be taken at face value.
ACKNOWLEDGEMENTS. I am indebted to C. Steidel, M. Giavalisco, and M. Dickinson for making their most recent results on the Lyman-break galaxies available to me prior to publication, for discussions, and for a critical reading of the manuscript. I would like to thank the Organizing Committee for inviting me to this stimulating meeting and for financially supporting my stay at the Ringberg Castle.
## 1. Introduction
Not too much is known about the critical behaviour of statistical systems on non-periodic graphs. Although certain cases may still be treated by commuting transfer matrices (see Ref. 1 and references therein) using so-called $`Z`$-invariance arguments$`^{\text{3},\text{4}}`$, these may well not be representative. For instance, for the case at hand, the local magnetization turns out to be position-independent, which one certainly does not expect to happen for a general (possibly random) distribution of the coupling constants.
The example considered in this note, the Ising model on the so-called Labyrinth tiling, has recently been investigated in detail$`^\text{1}`$ by means of duality arguments and commuting transfer matrices. Here, we briefly review the relevant results and supplement these by first numerical investigations.
## 2. The Model
The silver mean chain is obtained by repeated application of the two-letter substitution rule $`(a\to b,\;b\to bab)`$ to the letter $`a`$. From this, the Labyrinth tiling$`^\text{7}`$ can be constructed by considering an orthogonal Cartesian product of two identical silver mean chains in the proper geometric representation$`^{\text{1},\text{7}}`$ and connecting points on one of the two subgrids of the resulting rectangular grid, see Fig. 1.
In this way, we obtain a tiling which consists of three different tiles. As a graph, it has the topology of the square lattice, but contains edges of three types (or eight, if one accounts for orientations). It is therefore natural to assign individual coupling constants to the different types of edges, defining in this way a nearest-neighbour coupling for the Ising spins $`\sigma _{i,j}\in \{\pm 1\}`$ which we place on the vertices of the graph. We denote the ferromagnetic couplings (in units of $`k_BT`$) by $`K_{xy}`$ and $`L_{xy}`$, where $`xy\in \{aa,ab,ba,bb\}`$ labels the abscissa and ordinate of the corresponding rectangle in the underlying grid and the letters $`K`$ and $`L`$ refer to the two different diagonals$`^\text{1}`$.
The proper periodic approximants for finite systems are constructed as above, but from a periodic grid. This is defined by identifying the first and last letter of a word obtained by applying the substitution rule a certain number of times. This ensures that no additional tiles or vertex configurations are created$`^\text{1}`$.
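For concreteness, here is a small Python sketch (our own illustration, not taken from Ref. 1) of the substitution rule and of the periodic identification just described:

```python
def silver_mean_word(n_iter, seed="a"):
    """Apply the substitution a -> b, b -> bab to the seed n_iter times."""
    rule = {"a": "b", "b": "bab"}
    word = seed
    for _ in range(n_iter):
        word = "".join(rule[c] for c in word)
    return word

w5 = silver_mean_word(5)
print(len(w5), w5[:12] + "...")   # length 41, the word used in Section 4
# Periodic approximant: the first and last letter coincide and are identified,
# leaving a 40-letter periodic word and creating no new tiles or vertices.
assert w5[0] == w5[-1]
print(len(w5[:-1]))               # -> 40
```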
## 3. Exact Results
Let us restrict to the case of non-vanishing ferromagnetic couplings that are uniformly bounded from above and below by finite constants. Under these assumptions, the Peierls argument$`^{\text{6},\text{1}}`$ guarantees the existence of at least one phase transition.
Some more information about the critical behaviour can be obtained from a duality argument. Assuming that there is only a single transition, it must occur on the self-dual surface in the space of coupling constants, which is given by$`^{\text{4},\text{1}}`$
$$S_{xy}:=\mathrm{sinh}(2K_{xy}^{*})\mathrm{sinh}(2L_{xy}^{*})=1$$
(1)
for all index pairs $`xy\in \{aa,ab,ba,bb\}`$.
Moreover, the model is exactly solvable, in the sense of commuting transfer matrices$`^\text{3}`$, in a subspace defined by the four equations $`S_{xy}=\mathrm{\Omega }`$ (for the possible index pairs $`xy\in \{aa,ab,ba,bb\}`$) plus one additional equation, see eq. (4.5) in Ref. 1. The corresponding coupling constants can be parametrized explicitly in terms of elliptic functions. For a given bond, the argument is the difference of the two rapidity parameters$`^{\text{2},\text{4}}`$ that are attached to the two lines which intersect on the bond, see Fig. 1.
In the three-dimensional solvable subspace, the model shows (in terms of the temperature-like variable $`\mathrm{\Omega }^2`$) a single second-order phase transition at $`\mathrm{\Omega }=1`$ (i.e., on the intersection of the solvable and the self-dual surface, compare Eq. 1) which belongs to the Onsager universality class. In particular, the local magnetization $`\sigma `$ turns out to be position-independent and shows, in the thermodynamic limit, the critical singularity$`^{\text{1},\text{4}}`$
$$\sigma =\{\begin{array}{cc}(1-\mathrm{\Omega }^{-2})^{1/8}\hfill & \text{if }\mathrm{\Omega }^2>1\hfill \\ 0\hfill & \text{if }\mathrm{\Omega }^2\le 1\hfill \end{array}$$
(2)
at $`\mathrm{\Omega }=1`$ governed by the magnetic exponent $`\beta =1/8`$ of the Ising model. Furthermore, one can calculate the free energy by essentially counting bond frequencies$`^\text{1}`$. This is due to the “mobility” of the rapidity lines$`^\text{2}`$ which is a consequence of the Yang-Baxter equation. Note that the periodic boundary conditions guarantee that moving rapidity lines does not create any surface contributions in our case.
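A minimal numerical sketch of these two exact statements (ours; the coupling value is arbitrary) solves the self-duality condition of Eq. 1 for the partner coupling and evaluates the magnetization of Eq. 2:

```python
import math

def dual_partner(K):
    """Solve sinh(2K) sinh(2L) = 1 (Eq. 1) for the partner coupling L."""
    return 0.5 * math.asinh(1.0 / math.sinh(2.0 * K))

def magnetization(Omega):
    """Eq. (2): sigma = (1 - Omega^(-2))^(1/8) for Omega^2 > 1, else 0."""
    return 0.0 if Omega**2 <= 1.0 else (1.0 - Omega**-2) ** 0.125

K = 0.4
L = dual_partner(K)
print(math.sinh(2 * K) * math.sinh(2 * L))   # -> 1.0, i.e. self-dual
for Om in (1.0, 1.01, 1.1, 2.0):
    print(Om, magnetization(Om))             # onset with exponent beta = 1/8
```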
Clearly, the latter result reflects the severe restrictions imposed by integrability. It is an interesting question whether the converse is also true, i.e., whether the position-independence of the local magnetization is sufficient for solvability. For a generic choice of couplings, one certainly expects the local magnetization to depend on the neighbourhood. This poses the question whether the solvable case is representative at all — a partial answer to which can be obtained by numerical investigation of non-integrable cases.
## 4. Numerical Results
As a first approach, we consider the dependence of the local magnetization on the position of the spin while digressing from the solvable surface. The simplest scenario occurs when going from the periodic case (i.e., all couplings equal) to the case of three different couplings according to the length of the bonds, i.e., $`K_{aa}=L_{aa}=J_s/k_BT`$, $`K_{ab}=K_{ba}=L_{ab}=L_{ba}=J_m/k_BT`$ and $`K_{bb}=L_{bb}=J_l/k_BT`$, where the subscripts $`s`$, $`m`$ and $`l`$ refer to short, medium and long bonds, respectively. We consider the periodic approximant which is defined by the word of length 41 obtained after five applications of the substitution rule to the initial letter $`a`$. For this patch of 1600 sites, we estimated the magnetization at three different sites (where we chose representatives of the three different vertex configurations, see Fig. 2) by means of the Swendsen-Wang Monte-Carlo algorithm$`^\text{5}`$.
The result is shown in Fig. 3, where the abscissa displays $`T/T_c`$ and the ordinate the normalized magnetization. Here, the three sets of coupling constants were chosen as follows. Fig. 3(a) corresponds to $`J_s=J_m=J_l`$, in Fig. 3(b) we used $`J_s/J_m=6/5`$, $`J_l/J_m=4/5`$ and Fig. 3(c) displays the results for $`J_s/J_m=7/5`$ and $`J_l/J_m=3/5`$. In order to keep the critical temperature approximately constant, we adjusted the coupling constants such that the average coupling (per bond) for the three cases is the same.
One can clearly see that the magnetization is site-independent in the periodic case (a), as it must be, but develops pronounced site-dependence as the difference of the three couplings increases (from (b) to (c)). On the other hand, we cannot decide whether we get more than one point of phase transition or a critical region, although our calculations seem to support the expectation that the critical point is unique.
## 5. Concluding Remarks
To get a better understanding of critical phenomena on non-periodic graphs with long-range order (which are in between the periodic and the random case), we have investigated the classical Ising model on a 2D quasiperiodic tiling. The example chosen, the so-called Labyrinth, can be considered as a quasiperiodic modulation of the square lattice. The Ising model is exactly solvable on a three-dimensional subspace of the coupling space considered, which contains the periodic case. Solvability resulted in site-independence of the local magnetization, while the generic case shows clear dependence on the local neighbourhood. A conclusive statement on the phase structure and the nature of critical behaviour requires further investigation of the model by algebraic and numerical means.
## 6. Acknowledgements
We thank R.J. Baxter, B. Nienhuis and P.A. Pearce for discussions. U.G. is grateful for financial support from the Samenwerkingsverband FOM/SMC Mathematische Fysica.
## 7. References
1. M. Baake, U. Grimm and R. J. Baxter, “A Critical Ising Model on the Labyrinth”, Int. J. Mod. Phys. B8 (1994) 3579–600; reprinted in: Perspectives on Solvable Models, eds. U. Grimm and M. Baake (World Scientific, Singapore, 1994), p. 131–52.
2. R. J. Baxter, “Solvable eight-vertex model on an arbitrary planar lattice”, Philos. Trans. R. Soc. London 289 (1978) 315–46.
3. R. J. Baxter, Exactly Solved Models in Statistical Mechanics (Academic Press, London, 1982).
4. R. J. Baxter, “Free-fermion, checkerboard and $`Z`$-invariant lattice models in statistical mechanics”, Proc. R. Soc. London A404 (1986) 1–33.
5. J. J. Binney, N. J. Dowrick, A. J. Fisher and M. E. J. Newman, The Theory of Critical Phenomena (Clarendon Press, Oxford, 1992).
6. R. Peierls, “On Ising’s model of ferromagnetism”, Proc. Cambridge Philos. Soc. 32 (1936) 477–81.
7. C. Sire, R. Mosseri and J.-F. Sadoc, “Geometric study of a 2D tiling related to the octagonal quasiperiodic tiling”, J. Phys. (France) 50 (1989) 3463–76.
# Iron line in the afterglow: a key to unveil Gamma–Ray Burst progenitors
## 1 Introduction
Piro et al. (1999) and Yoshida et al. (1999) report the detection of an iron emission line in the X–ray afterglow spectra of GRB 970508 and GRB 970828, respectively. The line detected in GRB 970508 is consistent with an iron $`K_\alpha `$ line redshifted to the rest–frame of the candidate host galaxy ($`z=0.835`$, Metzger et al. 1997), while GRB 970828 has no measured redshift, and the identification of the feature with the same line would imply a redshift $`z\simeq 0.33`$. The line fluxes (equivalent widths) are $`F_{Fe}=(2.8\pm 1.1)\times 10^{-13}`$ erg cm$`^{-2}`$ s$`^{-1}`$ ($`\mathrm{EW}\simeq 1`$ keV) and $`F_{Fe}=(1.5\pm 0.8)\times 10^{-13}`$ erg cm$`^{-2}`$ s$`^{-1}`$ ($`\mathrm{EW}\simeq 3`$ keV) for GRB 970508 and GRB 970828, respectively. Although the significance of these features is admittedly not extremely compelling ($`\simeq 99\%`$ in both cases), the implications that they bear are important enough to justify a study of the mechanism that would produce them. A strong iron emission line unambiguously points towards the presence, in the vicinity of the burster, of a few per cent of a solar mass of iron concentrated in a compact region. Thus the presence of such a line in the X–ray afterglow spectrum would represent the “Rosetta Stone” for unveiling the burst progenitor.
Three main classes of models have been proposed for the origin of gamma–ray bursts (GRB): neutron star – neutron star (NS–NS) mergers (Paczyński 1986; Eichler et al. 1989), Hypernovae or failed type Ib supernovae (Woosley 1993; Paczyński 1998), and Supranovae (Vietri & Stella 1998). In the NS–NS model the burst is produced during the collapse of a binary system composed of two neutron stars or of a neutron star and a black hole. In this case the explosion should take place in a clean environment, due to the relatively large speed (up to $`\sim `$1000 km s$`^{-1}`$) of such systems. Since the time required for the binary system to coalesce and merge is of the order of a billion years, the GRB should occur outside the original star forming region and hence in a rarefied environment. On the contrary, in the Hypernova scenario, the burst is due to the evolution of a massive ($`\sim 100M_{\odot }`$) star, which collapses forming a Kerr black hole, whose rotational energy is tapped in a few seconds, producing the burst. Hypernovae should be located in dense molecular clouds, probably iron rich, but there should be no Hypernova remnant.
The Supranova scenario (Vietri & Stella 1998) assumes that, following a supernova explosion, a fast-spinning neutron star is formed with a mass that would be supercritical in the absence of rotation. As radiative energy losses spin it down on a time–scale of months to years, it inevitably collapses to a Kerr black hole, whose rotational energy can then power the GRB. A supernova remnant (SNR) is naturally left over around the burst location.
The detection of a strong iron line redshifted to the rest frame of the GRB progenitor poses severe problems to the NS–NS model, which could produce lines only inside the fireball. These hypothetical lines should be blueshifted by the bulk Lorentz factor $`\mathrm{\Gamma }`$ of the fireball (Mészáros & Rees 1998a) and should then be detected at frequencies $`\mathrm{\Gamma }/(1+z)`$ times larger.
X–ray line emission following GRB events has recently been discussed in the Hypernova scenario by Ghisellini et al. (1999) and Boettcher et al. (1998). Neither of these works predicts, with reasonable assumptions on the burst surroundings, iron lines strong enough to be detectable during the X–ray afterglow. Moreover, line emission should last over a time–scale of years, given the width of the emitting nebula. The production of a stronger line in the Hypernova scenario has been mentioned by Mészáros & Rees (1998b), who consider recombination in a relatively dense torus, formed by the interaction of a compact companion with the pre–Hypernova envelope.
As we will show in this letter, the Supranova scenario can easily account for the large amount of iron rich material needed to explain the observed line features.
This letter is organized as follows: in section 2 we derive model-independent general constraints on the ambient material, in section 3 we discuss the line emission processes, and in section 4 we draw our conclusions.
## 2 General Constraints
We consider a line with a flux comparable to a typical afterglow X–ray flux (here and in the following we parametrise a quantity $`Q`$ as $`Q=10^xQ_x`$ and adopt cgs units): $`F_{Fe}=10^{-13}F_{Fe,13}`$ erg cm$`^{-2}`$ s$`^{-1}`$. This in itself constrains both the amount of line–emitting matter and the size of the emitting region.
Assume for simplicity that the emitting region is a homogeneous spherical shell centered on the GRB progenitor, with radius $`R`$ and width $`\mathrm{\Delta }R\le R`$. The flux of the iron line cannot exceed the absorbed ionizing fluence $`q\mathcal{F}`$ (where $`\mathcal{F}`$ is the total GRB fluence and $`q`$ is the fraction of it which is absorbed and reprocessed into the line), divided by the light crossing time of the region, $`R/c`$. This, independently of the line flux variability, gives an upper limit to the size:
$$R<3\times 10^{18}q\frac{\mathcal{F}_5}{F_{Fe,13}}\text{cm}$$
(1)
Since $`q`$ is at most $`\sim 0.1`$ (Ghisellini et al. 1999), the emitting region is very compact, ruling out emission from interstellar matter, even assuming the large densities appropriate for star forming regions.
The total number of line photons produced at 6.4–6.9 keV in $`10^5t_5`$ seconds, for a GRB located at $`z=1`$ (the cosmological parameters are set throughout this letter to $`H_0=65`$ km s$`^{-1}`$ Mpc$`^{-1}`$, $`q_0=0.5`$ and $`\mathrm{\Lambda }=0`$), is $`3\times 10^{57}F_{Fe,13}t_5`$. This means that, for a reasonable amount of iron, each atom has to produce a large number of photons. For this reason, we call $`k`$ the number of photons produced by a single iron atom, and we use this parameter to constrain the required mass:
$$M_{Fe}\simeq 150F_{Fe,13}\frac{t_5}{k}M_{\odot }$$
(2)
The parameter $`k`$ depends on the details of the assumed scenario, but general limits can be set. If we neglect thermal processes, which will be discussed in more detail below, any iron atom can emit photons only when illuminated by an ionizing flux, i.e. the burst itself or the high energy tail of the afterglow. Since burst light has enough power to photoionize all the matter in the vicinity of the progenitor (see e.g. Boettcher et al. 1998), line photons will be emitted only through the recombination process. Thus the value of the parameter $`k`$ will not be larger than the total number of photoionizations an ion can undergo during the burst and/or the afterglow. For iron $`K`$–shell electrons, with cross section $`\sigma _K=1.2\times 10^{-20}`$ cm$`^2`$, we have:
$$k\lesssim \frac{qE}{4\pi ϵ_{ion}R^2}\sigma _K=6.5\times 10^6\frac{qE_{52}}{R_{16}^2}$$
(3)
where $`E`$ is the total energy emitted by the burst and/or afterglow and $`ϵ_{ion}`$ the energy of a single ionizing photon.
Inserting equation 3 in equation 2, we obtain a lower limit on the iron mass, $`M_{Fe}\gtrsim 2.3\times 10^{-5}`$ $`F_{Fe,13}t_5R_{16}^2/(qE_{52})M_{\odot }`$, which corresponds to a total mass:
$$M\gtrsim 0.013\frac{F_{Fe,13}t_5R_{16}^2}{qA_{\odot }E_{52}}M_{\odot }$$
(4)
i.e. a tenth of a solar mass for $`q\simeq 0.1`$ and $`A_{\odot }=1`$, where $`A_{\odot }`$ is the iron abundance in solar units. These general requirements on the mass and its location exclude that the iron line can be emitted by interstellar material, even if made denser by a strong pre–Hypernova wind (assuming a wind of $`\dot{m}_{wind}=10^{-4}`$ solar masses per year and a wind velocity $`v=10^8`$ cm s$`^{-1}`$, the total mass within a radius $`R`$ is $`M=\dot{m}_{wind}R/v=3.2\times 10^{-3}\dot{m}_{wind,4}R_{17}/v_8M_{\odot }`$).
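To make the chain of constraints easy to reproduce, here is a small order-of-magnitude sketch in Python (our illustration; the fiducial values of $`q`$, fluence, $`E`$ and $`R`$ follow the scalings above, and the adopted Fe K-edge photon energy of $`\simeq `$9.3 keV is an assumption consistent with the quoted coefficient of equation 3):

```python
import math

C = 3e10                       # speed of light [cm/s]
F_LINE = 1e-13                 # iron line flux [erg/cm2/s]
q, fluence, E, R = 0.1, 1e-5, 1e52, 1e16   # fiducial cgs values

R_max = q * fluence * C / F_LINE                           # equation (1)
eps_ion = 9.28e3 * 1.6e-12                                 # Fe K-edge energy [erg] (assumed)
sigma_K = 1.2e-20                                          # K-shell cross section [cm2]
k_max = q * E * sigma_K / (4 * math.pi * eps_ion * R**2)   # equation (3)

N_phot = 3e57 * (F_LINE / 1e-13)                 # line photons in 1e5 s at z=1
M_Fe = N_phot / k_max * 56 * 1.67e-24 / 2e33     # equation (2) rearranged [M_sun]
M_tot = M_Fe / 1.8e-3                            # equation (4), solar Fe mass fraction

print(f"R < {R_max:.1e} cm, k < {k_max:.1e}")
print(f"M_Fe > {M_Fe:.1e} M_sun, M > {M_tot:.2f} M_sun")   # ~0.1 M_sun for q~0.1
```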
If such a large amount of mass were uniformly spread around the burst location, it would completely stop the fireball. In fact (see e.g. Wijers, Rees & Mészáros 1997), the fireball is slowed down to sub–relativistic speeds when the swept-up mass equals the initial rest mass of the fireball. With a typical baryonic load of $`10^{-(4÷6)}M_{\odot }`$, the mass predicted in equation 4 would stop the fireball after an observer time $`t\simeq \mathrm{\Gamma }^{-2}R/c\simeq 3\times 10^4\mathrm{\Gamma }_1^{-2}`$ s, i.e. almost one day. Any surviving long wavelength emission should then decay exponentially in the absence of an energy supply. The fireball synchrotron model has been applied to GRB 970508 by Wijers & Galama (1999) and Granot, Piran and Sari (1999). Despite the differences in their results, both find an ambient density $`n<10`$ cm$`^{-3}`$, nine orders of magnitude lower than the density required for the line production (see below). The only way to reconcile a month-long, power–law optical afterglow with iron line emission is through a particular geometry, in which the line of sight is devoid of the remnant matter. Therefore, the matter distribution must be anisotropic, with the bulk of the mass located outside the line of sight to the burst (see Fig. 1), even if the covering factor of this matter must be significant, in order to reprocess a sufficient fraction of the primary burst photons into the line.
## 3 Line emission processes
We assume that the line emitting region is located at a distance $`R`$ from the burst, has a width $`\mathrm{\Delta }R`$, density $`n=(M/m_p)/(4\pi R^2\mathrm{\Delta }R)`$ and scattering optical depth $`\tau _T=\sigma _Tn\mathrm{\Delta }R=\sigma _T(M/m_p)/(4\pi R^2)`$. These values must satisfy the constraints derived in section 2; in addition, the first two processes discussed below require that the optical depth be in the range 0.1–1, to let the material absorb enough energy without smearing the iron line too much. Consistent values are a solar mass located at $`R\simeq 10^{16}`$ cm, which gives $`\tau _T=0.6(M/M_{\odot })/R_{16}^2`$, and a particle density $`n=9.5\times 10^9(M/M_{\odot })/(R_{16}^2\mathrm{\Delta }R_{14})`$.
The third process discussed below requires instead $`\tau _T>1`$, implying $`R<8\times 10^{15}(M/M_{\odot })^{1/2}`$ cm.
Since the line emitting material may well be a young supernova remnant, we allow for a high iron abundance of the plasma.
### 3.1 Multiple photoionizations and recombinations in an optically thin shell
If the plasma can remain cold and dense enough, burst photons can be reprocessed into line photons through recombination. When the plasma is illuminated by burst photons, iron atoms are rapidly ionized, and a line photon is produced each time an electron recombines. Since burst photons rapidly re–ionize the hydrogenic iron atom, the process can be very efficient and $`k`$ can be large. Since the re–ionization time is very short ($`\sim 10^{-5}`$ s for a typical burst flux), it is the recombination time that measures the efficiency of the line emitting process. In this case $`k=t_{ill}/t_{rec}`$, where $`t_{ill}`$ is the illumination time and $`t_{rec}`$ is the mean recombination time. Solving equation 2 for the $`k`$ coefficient and substituting the iron mass $`M_{Fe}`$ with the total mass of the shell $`M`$, we obtain:
$$t_{max,rec}\lesssim 1.3\times 10^{-5}F_{Fe,13}^{-1}A_{\odot }\frac{M}{M_{\odot }}\frac{t_{ill}}{t_5}$$
(5)
The recombination time of a hydrogenic ion of atomic number $`Z`$ in a thermal plasma is $`t_{rec}=(\alpha _rn)^{-1}`$ (Verner & Ferland 1996), where $`n`$ is the electron density and the recombination coefficient $`\alpha _r`$ is given by (Seaton 1959; see also Arnaud & Rothenflug 1985; Verner & Ferland 1996):
$$\alpha _r(Z,T)=5.2\times 10^{-14}Z\lambda ^{1/2}\left[0.429+0.5\mathrm{ln}(\lambda )+\frac{0.496}{\lambda ^{1/3}}\right]$$
(6)
where $`\lambda =1.58\times 10^5Z^2T^{-1}`$. During the burst, the Compton temperature $`T_c`$ of the plasma is bound to be large, due to the high typical energies of burst photons and to the relative inefficiency of radiative cooling processes. The free–free cooling time is of the order of $`10^5`$ s. For a typical burst spectrum we have $`T_c\simeq 10^8`$ K. The recombination time turns out to be $`t_{rec}\simeq 10^2n_{10}^{-1}`$ s, while equation 5, with an illumination time of 100 s, gives $`t_{max,rec}\lesssim 10^{-2}`$ s. We hence conclude that recombination cannot be effective during the burst. During the afterglow, Inverse Compton losses cool the plasma efficiently, leading to a lower Compton temperature. For GRB 970508, the observed optical/X–ray spectrum half a day after the burst gives $`T_c\simeq 6\times 10^6`$ K. This yields a shorter recombination time, $`t_{rec}\simeq 10n_{10}^{-1}`$ s, to be compared with the value $`t_{max,rec}\lesssim 1`$ s, obtained from equation 5 for a shell of unit solar mass and solar iron abundance. We conclude that, during the afterglow, a shell with several solar masses and/or a high iron abundance could produce the observed line through the recombination process.
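A quick numerical check of these recombination time-scales, using Seaton's formula as written in equation 6 (our illustration; the density is the fiducial shell value of section 3):

```python
import math

def alpha_r(Z, T):
    """Seaton (1959) recombination coefficient, equation (6) [cm3/s]."""
    lam = 1.58e5 * Z**2 / T
    return 5.2e-14 * Z * math.sqrt(lam) * (
        0.429 + 0.5 * math.log(lam) + 0.496 / lam ** (1.0 / 3.0))

n_e = 1e10   # fiducial electron density [cm^-3]
for T in (1e8, 6e6):   # burst-heated vs afterglow Compton temperature
    t_rec = 1.0 / (alpha_r(26, T) * n_e)
    print(f"T = {T:.0e} K: t_rec = {t_rec:.0f} s")   # ~1e2 s and ~10 s
```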
### 3.2 Thermal emission from the surrounding shell
This process should become efficient after the burst has passed, leaving behind a thermal plasma with a temperature $`T\simeq 10^8T_8`$ K. This plasma is in the same conditions as the intra–cluster medium (ICM) in clusters of galaxies, systems that emit a strong $`\simeq 6.7`$ keV iron line (Raymond & Smith 1977; Sarazin 1988). A key question to solve, if we want to apply ICM computations in this case, is whether collisional ionization equilibrium holds in our plasma. At very early times, soon after the burst photons have passed, the iron will be almost completely ionized, and a recombination time $`t_{rec}`$ is needed to reach equilibrium. From section 3.1 we have that $`t_{rec}\simeq 100`$ s for standard shell parameters. Since this time is very short compared to the equilibrium cooling time of the plasma ($`t_{cool}\simeq 2.3\times 10^5n_{10}^{-1}T_8^{1/2}`$ s), we can assume collisional equilibrium to compute the iron line intensity.
The equivalent width of the line in a solar abundance plasma has been carefully computed by Bahcall & Sarazin (1978) (see in particular their Figure 1) and ranges from several tens of eV at high ($`\simeq 5\times 10^8`$ K) temperatures to $`\simeq 2`$ keV at $`2.5\times 10^7`$ K. A very weak line is expected for temperatures lower than $`5\times 10^6`$ K. For temperatures larger than $`5\times 10^7`$ K, the EW dependence on temperature can be reasonably approximated by a power law. Assuming an iron abundance 10 times solar, we have:
$$\text{EW}(T)\simeq 3.8T_8^{-1.9}\mathrm{keV}\qquad (T_8\gtrsim 0.5)$$
(7)
Taking into account the spectral energy density of the bremsstrahlung continuum at 6.7 keV, we obtain a line luminosity of:
$$L_{Fe}\simeq 8\times 10^{44}\mathrm{exp}\left(-\frac{0.8}{T_8}\right)\left(\frac{M}{M_{\odot }}\right)^2V_{47}^{-1}T_8^{-2.4}\text{erg s}^{-1}$$
(8)
for a shell of volume $`V=10^{47}V_{47}`$ cm$`^3`$. For a $`z=1`$ burst we obtain a flux:
$$F_{Fe}\simeq 2.5\times 10^{-14}\mathrm{exp}\left(-\frac{0.8}{T_8}\right)\left(\frac{M}{M_{\odot }}\right)^2V_{47}^{-1}T_8^{-2.4}\frac{\text{erg}}{\text{cm}^2\text{ s}}$$
(9)
Therefore a shell of several solar masses, typical of many type II SNe (see Raymond 1984; Weiler & Sramek 1988; Woosley 1988; McCray 1993), at a temperature slightly below $`10^8`$ K, can produce a line flux of $`10^{-13}`$ erg cm$`^{-2}`$ s$`^{-1}`$ for $`z=1`$ bursts. The EW with respect to the underlying bremsstrahlung radiation would be a few keV, but any other emission component (e.g. afterglow emission) would decrease the line EW. Note that the predicted X–ray bremsstrahlung continuum has a flux $`F_{ff}\simeq 6\times 10^{-14}(M/M_{\odot })^2T_8^{1/2}V_{47}^{-1}`$ erg cm$`^{-2}`$ s$`^{-1}`$, a value comparable with a typical burst afterglow X–ray flux, especially if $`M`$ is a few $`M_{\odot }`$. The line emission process can be stopped after about one day, if the afterglow photons enhance the plasma cooling via inverse Compton scattering, lowering the temperature to less than $`10^7`$ K. Line emission can also be quenched by the re–heating produced by the incoming fireball.
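As a consistency check of the numbers just quoted, the sketch below (our illustration; the temperature and masses are example values) evaluates equations 7 and 9:

```python
import math

def ew_keV(T8):
    """Equation (7): iron line EW for 10x solar abundance, valid for T8 >~ 0.5."""
    return 3.8 * T8 ** (-1.9)

def line_flux(T8, M_sun, V47=1.0):
    """Equation (9): observed 6.7 keV line flux for a z=1 burst [erg/cm2/s]."""
    return 2.5e-14 * math.exp(-0.8 / T8) * M_sun**2 / V47 * T8 ** (-2.4)

for M in (1.0, 3.0, 5.0):  # shell slightly below 1e8 K (T8 = 0.8)
    print(f"M = {M} Msun: EW = {ew_keV(0.8):.1f} keV, F_Fe = {line_flux(0.8, M):.1e}")
# -> a shell of a few solar masses reaches ~1e-13 erg/cm2/s
```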
### 3.3 Reflection
In Seyfert galaxies we see a fluorescent 6.4 keV iron line produced by the relatively cold ($`T<10^6`$ K) accretion disk, illuminated by a hot corona, which provides the ionizing photons (e.g. Ross & Fabian 1993). The EW, if the observer receives both the hot corona emission and the line photons, is of the order of 200 eV, if the disk intercepts $`\sim 1/2`$ of the hard X–rays. In such systems the radiation energy density is $`U_r\sim 10^8L_{45}/R_{13}^2`$ erg cm$`^{-3}`$, similar to the radiation energy density at $`R=10^{15}`$ cm from the burst. It is therefore conceivable that a similar mechanism can work also for GRBs, if dense material exists in the vicinity of the burst (Mészáros & Rees 1998b). In the case of GRBs, the equivalent width could be much larger, since the reflected component (line and Compton bump) is observed when the burst has faded and only the much weaker afterglow contributes to the continuum. In this case, besides a scattering optical depth $`\tau _T>1`$, we require a size large enough to allow the line to be emitted even $`\sim `$one day after the GRB event (i.e. $`R\gtrsim 10^{15}`$ cm).
In this model the emission line is produced only during the burst event, but in the observer frame it lasts for a time $`R/c`$. The observed luminosity of the Compton reflection component is equal to $`\sim `$10% of the absorbed energy, divided by the time $`R/c`$: $`L\simeq 3\times 10^{45}E_{abs,51}/R_{15}`$ erg s$`^{-1}`$. The luminosity in the iron line (see e.g. Matt, Perola & Piro 1991) is roughly 1% of the absorbed energy divided by the same time, times the iron abundance in solar units. Therefore the Compton reflection component can contribute to the hard X–ray afterglow emission, and the iron line can have a luminosity up to $`3\times 10^{44}A_{\odot }E_{abs,51}/R_{15}`$ erg s$`^{-1}`$, corresponding to fluxes up to $`10^{-13}`$ erg cm$`^{-2}`$ s$`^{-1}`$ for a $`z=1`$ burst.
## 4 Discussion
We have discussed three possible mechanisms for the production of a strong iron line, visible during the X–ray afterglow emission of GRBs. All mechanisms require the presence of a large amount of iron in a compact region. Both the general constraints derived in section 2 and the limits due to the particular emission processes discussed in section 3 point towards the presence of more than a solar mass of iron-rich matter in the close vicinity of the burst location. The most natural astronomical scenario in which such conditions are found is the young remnant of a supernova, exploded several months before the burst onset. In fact, with a radial velocity of the ejecta $`v_{ej}=10000`$ km s$`^{-1}`$, a month-old SNR has a radius of $`R\simeq 2.6\times 10^{15}`$ cm. A young SNR surrounding the burst is predicted by the Supranova scenario (Vietri & Stella 1998).
The other strong general requirement concerns the special geometry needed to explain the presence, in the same burst, of both a strong iron line and an optical afterglow (if the latter is interpreted as due to a decelerating fireball). Since the line–emitting plasma receives the burst radiation from a different orientation than our line of sight, the iron emission line is a powerful tool to measure how isotropic the burst emission is.
We find that the multiple ionization and recombination scenario has difficulties in reconciling the low temperature required for fast recombination with the large heating due to the burst flux. However, during the afterglow, the longer illumination time and the lower plasma Compton temperature allow a stronger emission line produced by recombination, marginally consistent with the Piro et al. (1999) and Yoshida et al. (1999) observations. The two other alternatives (i.e. thermal emission and reflection) are more promising, and not mutually exclusive. The prevalence of one mechanism over the other depends on the setup of the system: compact regions, possibly corresponding to very young supernova remnants ($`\sim 1`$ month old), would produce iron line photons by fluorescent reflection, while somewhat more extended regions, corresponding to less young remnants ($`\sim 1`$ year old), could produce thermal emission. Much weaker line fluxes, but lasting for a longer time, can be produced by more relaxed systems (i.e. supernovae exploded more than one year before the burst).
If the emission line is produced by a thermal plasma, its duration is of the order of the cooling time, since this is likely to be longer than the light crossing time $`R/c`$. On the other hand, as discussed above, the iron line emission is quite sensitive to the temperature, and can therefore be quenched if the emitting material is suddenly re–heated by the incoming fireball or cooled by the afterglow photons. In the first case the line flux can decrease rapidly, only to increase again later on, once the shell has cooled back to the appropriate temperature. This mechanism would allow relatively short–lived ($`\sim R/c`$) lines even in the thermal emission scenario.
With the available information it is hard to tell which is the case for GRB 970508 and GRB 970828. The first had a line of $`\sim 1`$ keV equivalent width whose flux apparently disappeared after half a day, coincident with the “rebursting” phase in the X–ray and optical bands. The second burst had a line of $`\sim 3`$ keV equivalent width whose flux, instead, appeared in coincidence with a small “rebursting” phase. The continuum spectrum should be the sum of the power–law afterglow emission and either a bremsstrahlung spectrum (in the case of thermal emission) or a spectrum that is harder in the 10–100 keV band (in the case of Compton reflection). The short duration of both lines, if real, corresponds to a size $`R\lesssim 10^{15}`$ cm, implying a Thomson–thick remnant and favoring the reflection model.
As is often the case, to be more conclusive we must await better spectra of other bursts: a key signature of thermal emission would be the detection of a strong iron $`K_\beta `$ blend. In fact, this line cannot be produced by fluorescence, given the low photoelectric yield, and it is very weak even in a recombination scenario. At lower energies (1–3 keV, rest frame), iron $`L`$–shell lines and Mg, Si and S lines should also be visible (see Sarazin 1988). In the reflection scenario, afterglow spectra should show the typical hardening of the spectrum above a few keV, and the line duration should be short.
The possible association of GRBs with supernovae has recently been investigated in detail by Bloom et al. (1998), Kippen et al. (1998) and Wang & Wheeler (1998), following the explosion of GRB 980425, likely associated with the type Ic SN 1998bw. Among these works, only Wang & Wheeler (1998) find evidence for a connection, while the other two limit the fraction of bursts possibly associated with supernovae to a few percent. In the Supranova scenario, however, the association between supernovae and bursts is subject to a time delay, variable between a few days and some years, which would smear out the time correlation between the two phenomena.
Should the iron lines possibly detected in GRB 970508 and GRB 970828 be real and confirmed by other cases, we would have a strong case for the connection between supernovae and gamma–ray bursts. The next generation of experiments and satellites, such as XMM, AXAF and ASTRO–E, will provide us with the information necessary to draw more accurate conclusions on the puzzling problem of the gamma–ray burst progenitor.
## Acknowledgments
We thank L. Piro and A. Yoshida for providing us with copies of their manuscripts in advance of publication. We thank M. Vietri, S. Covino and E. Ripamonti for fruitful discussions during the conception and preparation of this work. D.L. thanks the Cariplo Foundation for financial support.
# An observation of spin-valve effects in a semiconductor field effect transistor: a novel spintronic device.
## Abstract
We present the first spintronic semiconductor field effect transistor. The injector and collector contacts of this device were made from magnetic permalloy thin films with different coercive fields so that they could be magnetized either parallel or antiparallel to each other in different applied magnetic fields. The conducting medium was a two dimensional electron gas (2DEG) formed in an AlSb/InAs quantum well. Data from this device suggest that its resistance is controlled by two different types of spin-valve effect: the first occurring at the ferromagnet-2DEG interfaces, and the second in direct propagation between the contacts.
The idea of electronic devices which exploit both the charge and spin of an electron for their operation has given rise to the new field of ‘spintronics’, literally spin-electronics . The two-component nature of spintronic devices is expected to allow a simple implementation of quantum computing algorithms as well as producing spin transistors and spin based memory devices . However, this new field has yet to have any real impact on the semiconductor microelectronics industry since no implementation of a spintronic device has appeared in the form of a semiconductor field effect transistor (FET).
Spin-polarized electron transport from magnetic to non-magnetic metals has been the subject of intense investigation since the early 70’s, when Tedrow and Meservey demonstrated the injection of a spin-polarized current from ferromagnetic nickel to superconducting aluminium. This work was subsequently extended to include spin-dependent transport between other materials. The investigation of layered ferromagnetic/paramagnetic materials resulted in the important discovery of the giant magnetoresistance effect . Work on ferromagnet-semiconductor systems has so far been more limited. Alvarado and Renaud have demonstrated spin-polarized tunneling from a ferromagnet into a semiconductor by analyzing the luminescence induced by a tunneling current between a nickel tip and a GaAs surface in a scanning tunneling microscope (STM). Similar experiments were conducted by Sueoka et al and Prins et al .
In this paper, we present results from a spintronic semiconductor FET based on the theoretical ideas of Datta and Das . In their proposed FET, resistance modulation is achieved through the spin-valve effect by varying the degree of spin precession which occurs in a two dimensional electron gas (2DEG) between identical ferromagnetic contacts. In our device, resistance modulation is also achieved through the spin-valve effect but by having ferromagnetic contacts with different coercivities and varying an applied magnetic field. We show that the low field magnetoresistance of the device results from two types of spin-valve effect: a ferromagnet-semiconductor contact resistance; and a direct effect between the magnetic contacts.
The device consisted of a 2DEG formed in a 15nm wide InAs well between two AlSb barriers. The top barrier was 15nm thick and had a 5nm GaSb cap layer to prevent oxidation of AlSb. Two parallel ferromagnetic permalloy ($`Ni_{80}Fe_{20}`$) contacts (see inset of figure 1a), one 5$`\mu `$m wide (contact A) and the other 1$`\mu `$m wide (contact B), were patterned using electron beam lithography. They were placed 1$`\mu `$m apart and stretched across a 25$`\mu `$m wide Hall bar produced by optical lithography. The different aspect ratios of these contacts ensured that they had different coercivities, with an easy axis of magnetization along their long axes . This allowed them to be magnetized either parallel or antiparallel to each other in different ranges of external magnetic field. To ensure good ohmic behavior between the contacts A and B and the 2DEG, the top GaSb and AlSb layers were etched away selectively in the area of the contacts using Microposit MF319 developer . Any oxide on the InAs surface, which could act as a spin scatterer due to the paramagnetic nature of oxygen, was removed by dipping the sample in $`(NH_4)_2S`$. This is known to passivate the InAs surface with sulfur and decelerate the oxidation process . Moreover, it has been shown to improve the tunneling properties in STM studies of InAs . It is also expected to remove any Sb residue which is known to be present after etching AlSb with the MF319 developer . Within 5 minutes of passivation, a film of 50nm of permalloy was evaporated, followed by 20nm of Au in order to protect the permalloy from oxidation. A network of extended NiCr/Au contacts (shown as 1-8 in figure 1a, inset), patterned by optical lithography, was used to connect contacts A and B with external circuitry. A layer of polyamide insulated this network from the device surface. Non-magnetic NiCr/Au ohmic contacts used for four-terminal device characterization were patterned at each end of the Hall bar. For basic non-magnetic characterization an identical Hall bar without magnetic contacts was prepared on the same wafer.
A reduction in mobility of the device 2DEG from $`\mu `$=4.9 to 0.09m<sup>2</sup>s<sup>-1</sup>V<sup>-1</sup> (at 0.3K) was observed after removal of the AlSb barrier in the regions of contacts A and B and subsequent dipping of the device in $`(NH_4)_2S`$. A reduction in mobility (from 3.6 to 0.5m<sup>2</sup>s<sup>-1</sup>V<sup>-1</sup> at 0.3K) was observed in a reference sample, having no magnetic contacts, after removal of the AlSb barrier above the InAs well over the whole surface of the Hall bar. We believe that the reduction in mobility in the device is partly due to lateral etching of the AlSb barrier layer located between the magnetic contacts after dipping in $`(NH_4)_2S`$. It is known that $`(NH_4)_2S`$ attacks GaSb and AlSb chemically . It is also possible that inhomogeneous band bending at the sulfur-passivated surface produces greater charge scattering than an oxide surface.
The spin transport properties of the 2DEG are important to the operation of the device. There are two relevant aspects: a 2DEG in an InAs quantum well is diamagnetic , and it exhibits strong spin precession . This spin precession results from the Rashba term in the spin-orbit interaction. In transport, the combination of multiple elastic scattering from non-magnetic impurities and spin precession results in a randomization of the spin orientation and can give rise to weak antilocalization . This effect was observed in our characterization Hall bar (at the center of a weak localization peak), enabling us to estimate the spin dephasing length, $`\ell _{sd}`$, and the related zero-field spin-splitting energy, $`\mathrm{\Delta }E`$, of the device 2DEG. These measurements were made in a magnetic field applied perpendicular to the 2DEG.
By fitting the weak antilocalization peak as described in ref. we estimated the spin dephasing time, $`\tau _s`$, to be 9ps. For the calculations we used an electron density $`n=6\times 10^{15}`$m<sup>-2</sup>, calculated from the Shubnikov-de-Haas oscillations, a mobility of $`\mu `$=4.9m<sup>2</sup>s<sup>-1</sup>V<sup>-1</sup> and the effective mass for InAs, $`m^{}`$=0.04$`m_o`$ ($`m_o`$ = electron rest mass) . The spin dephasing length was calculated from the expression $`\ell _{sd}=(\ell \upsilon _F\tau _s)^{1/2}`$, where $`\ell `$ is the elastic mean free path and $`\upsilon _F`$ is the Fermi velocity in our system, and found to be $`\ell _{sd}`$= 1.8$`\mu `$m. $`\mathrm{\Delta }E`$ at zero magnetic field was calculated using an expression for $`(\tau _s)^{-1}`$ given in , $`(\tau _s)^{-1}=<\mathrm{\Delta }E^2>\tau _e/(4\mathrm{\hbar }^2)`$, where $`<\mathrm{\Delta }E^2>`$ is the Fermi-surface average of $`\mathrm{\Delta }E^2`$ and $`\tau _e`$ is the relaxation time for elastic scattering; we found $`<\mathrm{\Delta }E^2>`$= 0.16(meV)<sup>2</sup>. From the expressions for $`\tau _s`$ and $`\ell _{sd}`$ we can see that $`\ell _{sd}=2\mathrm{\hbar }\upsilon _F/\sqrt{<\mathrm{\Delta }E^2>}`$, implying that $`\ell _{sd}`$ is independent of the mobility. Our device should therefore be expected to have a similar $`\ell _{sd}`$ to the characterization Hall bar, since they have similar carrier concentrations and zero-field spin splittings.
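The chain of estimates above is straightforward to verify numerically. The following sketch (ours, not code from the original work) reproduces the quoted values of $`\ell _{sd}`$ and $`<\mathrm{\Delta }E^2>`$ from the measured $`\tau _s`$, $`n`$ and $`\mu `$:

```python
# Numeric cross-check (a sketch, not from the paper) of the spin transport
# numbers quoted in the text: tau_s = 9 ps, n = 6e15 m^-2,
# mu = 4.9 m^2/Vs, m* = 0.04 m0.
import math

HBAR = 1.0546e-34   # J s
M0 = 9.109e-31      # electron rest mass [kg]
E = 1.602e-19       # elementary charge [C]

tau_s = 9e-12                 # spin dephasing time [s]
n = 6e15                      # sheet density [m^-2]
mu = 4.9                      # mobility [m^2 V^-1 s^-1]
m_eff = 0.04 * M0

k_f = math.sqrt(2.0 * math.pi * n)      # 2DEG Fermi wavevector
v_f = HBAR * k_f / m_eff                # Fermi velocity
tau_e = mu * m_eff / E                  # elastic scattering time
l_e = v_f * tau_e                       # elastic mean free path

l_sd = math.sqrt(l_e * v_f * tau_s)     # spin dephasing length
dE2 = 4.0 * HBAR**2 / (tau_s * tau_e)   # <dE^2> from the (tau_s)^-1 formula

print(f"l_sd = {l_sd*1e6:.2f} um")                  # ~1.8 um
print(f"<dE^2> = {dE2/(1e-3*E)**2:.2f} (meV)^2")    # ~0.17 (meV)^2
```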
Weak antilocalization was not observed after sulfur passivation, since a reduction in mobility causes a reduction in the inelastic scattering length $`\ell _\phi `$ and can therefore break the condition for the observation of weak antilocalization ($`\ell _\phi `$ comparable to or larger than $`\ell _{sd}`$). For our characterization Hall bar we estimated $`\ell _\phi =1\mu `$m by fitting the weak localization part of the magnetoresistance . From the ratio of the mobilities with and without sulfur passivation we estimate $`\ell _\phi \sim 0.1\mu \mathrm{m}\ll \ell _{sd}`$ after sulfur passivation .
In order to determine the magnetic properties of the contacts A and B we performed four-terminal magnetoresistance measurements at 0.3K using a constant ac current of 100$`\mu `$A. For contact A the current was applied between positions 2 and 6 (see inset in figure 1a) and the voltage drop between contacts 1 and 5 was recorded by lock-in amplification techniques. Similarly, for contact B the current was applied between positions 3 and 7 and the voltage drop was recorded between positions 4 and 8 of the contact. These measurements are shown in figure 1a with the magnetic field applied along the long axis of the contacts A and B. The sharp minimum in each curve corresponds to the switching of the magnetization of the contact and therefore occurs at its coercive field . For contact A we measured a coercive field $`H_{C_A}`$=3.5mT and for contact B, $`H_{C_B}`$=8.5mT.
In order to observe the spintronic properties of the device, magnetoresistance measurements were carried out in magnetic fields applied parallel to the plane of the 2DEG and along the easy axis of the contacts A and B at temperatures ranging from 0.3K to 10K. A constant ac current of 1$`\mu `$A was applied between positions 1 and 4 (see inset in figure 1a) of the magnetic contacts and the voltage drop between positions 5 and 8 was recorded. Figure 1b shows these measurements, plotted as the change in the magnetoresistance, $`\mathrm{\Delta }R`$:
$$\mathrm{\Delta }R=R(H)R(H=0),$$
(1)
from its zero field value $`R(H=0)=588\mathrm{\Omega }`$. $`H`$ is the applied magnetic field. At 0.3 K figure 1b shows both an up sweep (solid line) and a down sweep (dashed line). The principal features in these sweeps are a peak in magnetoresistance between the two coercive fields $`H_{C_A}`$ and $`H_{C_B}`$ and a dip on either side of this peak. The dip on the low-field side is deeper than the one on the high-field side. This structure is repeated symmetrically on opposite sides of zero field for the up and down sweeps.
By comparing four-terminal resistance measurements made between the contacts A and B with those made between the non-magnetic contacts, the interface conductance, G, was found to be 10mS. Furthermore, the spin conductance of the 2DEG, $`g_s`$, defined as the conductance of a length of the bulk material equal to $`\ell _{sd}`$ , was found to be 2mS. Therefore, since G and $`g_s`$ are comparable, we expect a contribution from both the interface and the 2DEG to the device magnetoresistance. The magnetoresistance ($`\mathrm{\Delta }R`$) of our device will therefore have the following contributions:
$$\mathrm{\Delta }R=\mathrm{\Delta }R_A+\mathrm{\Delta }R_B+\mathrm{\Delta }Rc_A+\mathrm{\Delta }Rc_B+\mathrm{\Delta }R_s$$
(2)
where $`\mathrm{\Delta }R_A`$ and $`\mathrm{\Delta }R_B`$ are the magnetoresistance changes of contacts A and B respectively, $`\mathrm{\Delta }Rc_A`$ and $`\mathrm{\Delta }Rc_B`$ are those of the interface between the 2DEG and contacts A and B respectively, and $`\mathrm{\Delta }R_s`$ is the resistance change due to electrons propagating from one ferromagnetic contact to the other without spin scattering.
As can be seen by comparing figures 1a and 1b, the contributions $`\mathrm{\Delta }R_A`$ and $`\mathrm{\Delta }R_B`$ ($`\sim `$ 2m$`\mathrm{\Omega }`$) are 500 times smaller than the magnetoresistance changes in $`\mathrm{\Delta }R`$ ($`\sim `$ 1$`\mathrm{\Omega }`$). The results in figure 1b cannot therefore be attributed to changes in the magnetoresistances of the contacts themselves. The part of the interface resistance $`\mathrm{\Delta }Rc_A`$+$`\mathrm{\Delta }Rc_B`$ which results from the spin-valve effect , and which therefore depends on the applied field, will have the schematic form shown in figure 2a. Its shape derives from both the spin properties of the 2DEG and the difference in coercive fields of the contacts A and B. It is a maximum when the magnetizations of contacts A and B are parallel to each other and antiparallel to the spin orientation of the 2DEG. It has a minimum value when the magnetizations in A and B are both parallel to the spin orientation in the 2DEG, and an intermediate value when the contact magnetizations are antiparallel to each other. The part of the resistance contribution from direct propagation between contacts A and B, $`\mathrm{\Delta }R_s`$, which results from the spin-valve effect, will have the form shown schematically in figure 2b. This resistance is a minimum when the two ferromagnetic contacts are magnetized parallel to each other and a maximum between the two coercive fields, where the magnetizations of the two contacts are antiparallel. The broken lines in figure 2 represent a more realistic picture of the magnetoresistance changes resulting from the two spin-valve effects: they represent an average over the local switching of different magnetic domains in the ferromagnetic contacts. A schematic representation of the sum of the two spin-valve contributions to $`\mathrm{\Delta }R`$ is shown as a grey line in figure 1b, taking the coercive fields from the contact magnetoresistances in figure 1a. This line has the same shape as the experiment and appears in the correct place for both up and down field sweeps. The depth and width of the high-field magnetoresistance dip depend upon the extent to which the shape of the peak in figure 2b compensates the dip between the coercive fields in figure 2a. If these are identical there will be no high-field dip.
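To illustrate how the two terms combine, the following toy sketch (our construction; all resistance levels are arbitrary illustrative numbers, not fitted to the data) encodes the piecewise-constant contributions of figure 2 for an up sweep and sums them as in Eq. (2):

```python
# Illustrative sketch only: schematic spin-valve contributions of Fig. 2
# for a field swept up from negative saturation. Magnitudes are arbitrary.
import numpy as np

HC_A, HC_B = 3.5e-3, 8.5e-3   # coercive fields of contacts A and B [T]

def interface_term(h):
    """Fig. 2a: max for 0 < H < Hc_A (both contacts antiparallel to the
    2DEG spin), intermediate for Hc_A < H < Hc_B, min elsewhere."""
    if h < 0.0:
        return 0.0            # all parallel after negative saturation
    if h < HC_A:
        return 1.0
    if h < HC_B:
        return 0.5
    return 0.0

def direct_term(h):
    """Fig. 2b: high only while A and B are magnetized antiparallel."""
    return 1.0 if HC_A < h < HC_B else 0.0

fields = np.linspace(-20e-3, 20e-3, 801)
total = np.array([interface_term(h) + direct_term(h) for h in fields])
# The sum peaks between Hc_A and Hc_B; how far the flanking regions dip
# depends on the relative sizes of the two terms, as discussed in the text.
```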
The small amplitude of the device resistance modulation, $`\mathrm{\Delta }R`$/R(H=0) $`\sim `$ 0.2$`\%`$, shown in figure 1b is consistent with the above picture. Electrons contributing to the direct spin-valve effect shown in figure 2b have to take fairly direct paths between contacts A and B. Those which take paths involving multiple scattering pick up random angles of spin orientation, and therefore on average cancel with each other and do not contribute to the effect. The temperature dependence of the magnetoresistance is also consistent with our picture. The peak between the two coercive fields decreases in amplitude with increasing temperature and almost disappears at 10K (see figure 1b). At this temperature $`k_BT`$ ( = 0.8meV ) is greater than the zero-field spin splitting, and thermal activation therefore has sufficient energy to destroy both spin-valve effects shown in figure 2.
Alternative mechanisms which could produce the observed magnetoresistance oscillations would have to be capable of: producing the symmetry we see in up and down field sweeps (solid and dashed lines in figure 1b); producing features of an appropriate shape which align with the contact coercive fields; and persisting up to temperatures of 10K in a 2DEG with a zero-field resistance of 588$`\mathrm{\Omega }`$ and an inelastic scattering length one tenth the length of the device. Such fluctuations are unknown in the literature. Universal conductance fluctuations (UCFs) could occur in a device of such low resistance. However, their period in magnetic field ($`H_{C_B}-H_{C_A}`$ = 5mT) is consistent with a phase coherent area of $`\sim 1(\mu m)^2`$, which is two orders of magnitude larger than that estimated from the inelastic scattering length of the device ($`\ell _\phi \sim 0.1\mu `$m). In addition, UCFs are not seen in magnetic fields applied parallel to a 2DEG . Also, since the field was applied along the easy axis of the contacts A and B, there should be no stray fields with a significant component perpendicular to the 2DEG. The most likely origin of the small-amplitude random modulations appearing in the data, and of the differences in shape between up and down magnetic field sweeps, is the complex pattern of domain formation in the contacts A and B and their pattern of switching as a function of external field.
In conclusion, we have provided evidence that we observed experimentally two kinds of spin-valve effect in a spintronic FET. The first effect results from the ferromagnet-2DEG interface resistance and the second effect results from spins propagating from one ferromagnetic contact to the other. The combination of these effects produces a resistance maximum between the coercive fields of the two contacts and dips in resistance on either side. Both effects are suppressed with increasing temperature as the thermal smearing becomes comparable to the zero field spin splitting.
We thank J.A.C. Bland, M. Pepper and C. J. B Ford for invaluable discussions. This work was funded under EPSRC grant GR/K89344, and the Paul Instrument Fund. CHWB,EHL and DAR acknowledge support from the EPSRC the Isaac Newton Trust, and Toshiba Cambridge Research Center. The corresponding author: S. Gardelis, email: sg234@cus.cam.ac.uk
Figure Captions
Fig.1 (a) Change in resistance of contacts A and B with external magnetic field H, averaged over 4 up sweeps. $`H_{C_A}`$ and $`H_{C_B}`$ coercive fields of A and B. Inset: schematic of device: black pads - magnetic contacts A,B; dark grey - NiCr/Au ohmic contacts; light grey - Hall bar mesa. (b) Change in device resistance at 0.3K averaged over 9 sweeps (up - solid, down - dashed lines) and 4.2K and 10K averaged over 2 up sweeps. Grey line: schematic showing the expected sum of the two spin-valve effects. The arrows indicate the direction of the field sweep. All traces are offset for clarity.
Fig.2 (a) Schematic of the interface spin-valve effect: $`\mathrm{\Delta }Rc_A`$+$`\mathrm{\Delta }Rc_B`$. Arrows indicate magnetization direction in A and B and 2DEG S. H is the external field which is being swept up from negative value. (b) Schematic of direct spin-valve effect: $`\mathrm{\Delta }R_s`$. Dashed lines in (a), (b) indicate averaging over the local switching of different magnetic domains in A and B.
## I INTRODUCTION
The modeling of the mechanical properties of everyday materials is a very challenging problem. The main difficulty is the vastly different length and time scales at which the various processes occur during deformation — ranging from the Ångström and sub-picosecond scales of the atomic processes, to beyond the millimeter and second scales of the macroscopic deformation. Naturally, very different modeling techniques are required to model phenomena at so different scales. Atomic-scale simulations (typically molecular dynamics) can handle time scales of up to a few nanoseconds and system sizes of up to $`10^8`$ atoms , although one is typically limited to significantly smaller system sizes and simulation times by the available computer resources, and by the need to repeat the simulations at different conditions.
At larger length and time scales, one is forced to abandon the atomistic description of the material. One option is dislocation dynamics, where the fundamental unit of the simulation is the dislocation. The fundamental idea of dislocation dynamics is similar to that of molecular dynamics: by calculating the forces on the dislocations (from their interactions with each other as well as from any applied stress), the equations of motion can be solved numerically. The modeling becomes significantly more complicated than molecular dynamics, mainly because dislocations are lines whereas atoms are points. For that reason, dislocation dynamics is often done in two dimensions, where the dislocations become point defects. One difficulty encountered when the atomic-scale description of the material is abandoned is how to treat inherently atomic-scale processes. In the case of dislocation dynamics, these will mainly be processes such as dislocation annihilation and core effects when dislocations intersect. Parameters describing such processes must be extracted from experiments, or calculated directly using selected atomic-scale simulations.
At still larger length and time scales, continuum models must be used. This will typically be some kind of finite-element based calculation, where the plastic behavior is described by a constitutive relation.
A number of advanced simulation techniques have been proposed, where simulation techniques on several length scales are combined into a single multiscale simulation. Various ways of combining finite element calculations of the long-range elastic fields with atomic-scale simulations of some regions of the material have appeared in the literature. Unfortunately, most of these methods are limited to quasi-static simulations, i.e. to zero temperature.
Various simulation techniques have been proposed to extend the time scales beyond the nanosecond scale reachable by molecular dynamics. Some methods are based on “accelerating” the microscopic dynamics of the system by modifying the potential energy surface or by extrapolating from higher to lower temperatures. Other methods, such as the “nudged elastic band” method and constrained molecular dynamics, allow one to determine energy barriers, from which reaction rates of slow processes can be determined.
A different way of addressing the length-scale challenge is to simulate suitably chosen problems where the characteristic length scale is within the reach of atomic-scale modeling. This has the advantage that the entire problem can be addressed at the atomic scale, removing the need for a priori assumptions about which processes should be included in the simulation; sometimes unexpected atomic-scale processes are seen in such simulations, processes that could not have been included in a coarser-scale simulation because their importance was not suspected.
One such atomic-scale problem is the mechanical deformation of nanocrystalline metals, i.e. metals where the grain size is in the nanometer range. Nanocrystalline metals have recently received much interest because they may have mechanical, chemical and physical properties different from their coarse-grained counterparts. For example, the hardness and yield stress may increase 5–10 times when the grain size is reduced from the macroscopic to the nanometer range.
Recently, computer simulations of the structure of nanocrystalline metals and semiconductors, and of their elastic and plastic properties, have appeared in the literature. In previous papers we described the plastic deformation of nanocrystalline copper at zero temperature. In this paper we focus on the elastic and plastic properties of nanocrystalline metals, in particular copper, at finite temperature. We find that the materials have a very high yield stress, and that the yield stress decreases with decreasing grain size (reverse Hall-Petch effect). The main deformation mode is found to be localized sliding in the grain boundaries.
The high yield stress and hardness of nanocrystalline metals is generally attributed to the Hall-Petch effect, where the hardness increases with the inverse square root of the grain size. The Hall-Petch effect is generally assumed to be caused by the grain boundaries acting as barriers to the dislocation motion, thus hardening the material. The detailed mechanism behind this behavior is still under debate. A cessation or reversal of the Hall-Petch effect will therefore limit the hardness and strength that can be obtained in nanocrystalline metals by further refinement of the grain size. There are a number of observations of a reverse Hall-Petch effect, i.e. of a softening when the grain size is reduced. The interpretation of these results has generated some controversy. It is at present not clear whether the experimentally reported reverse Hall-Petch effect is an intrinsic effect or whether it is caused by reduced sample quality at the finest grain sizes. The computer simulations presented here show that an intrinsic effect is clearly possible.
The structure of the paper is as follows. In section II we discuss the setup of the nanocrystalline model systems. Section III discusses the simulation and analysis methods used. The simulation results are presented in section IV, and subsequently discussed in section V.
## II SIMULATION SETUP
In order to obtain realistic results in our simulations, and to be able to compare our simulations with the available experimental data, we have attempted to produce systems with realistic grain structures. Unfortunately, the microscopic structure is not very well characterized experimentally, and depends on the way the nanocrystalline metal was prepared. We have tried to create systems that mimic what is known about the grain structure of nanocrystalline metals generated by inert gas condensation. The grains seem to be essentially equiaxed, separated by narrow grain boundaries. The grains are essentially dislocation free. The grain size distribution is log-normal.
### A Construction of the initial configuration
In our simulations the grains are produced using a Voronoi construction: a set of grain centers are chosen at random, and the part of space closer to a given center than to any other center is filled with atoms in a randomly rotated face-centered cubic (fcc) lattice. Periodic boundary conditions are imposed on the computational cell. This procedure generates systems without texture and with random grain boundaries. Effects of texture could easily be included by introducing preferred orientations of the grains. In the limit of a large number of grains, the Voronoi construction will generate a grain size distribution close to a log-normal distribution.
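A minimal sketch of this construction is given below (our illustration, not the production code used for the simulations; a cubic periodic cell and the Cu lattice constant $`a=3.615`$ Å are assumed):

```python
# Sketch of the Voronoi construction: each candidate fcc site is assigned
# to the nearest grain center under periodic boundary conditions.
import numpy as np

def voronoi_grains(n_grains, n_cells, a=3.615, seed=0):
    """Return atom positions and grain indices for an fcc polycrystal."""
    rng = np.random.default_rng(seed)
    box = n_cells * a
    centers = rng.random((n_grains, 3)) * box

    # fcc basis repeated over the cell; each grain gets a random rotation.
    basis = np.array([[0, 0, 0], [.5, .5, 0], [.5, 0, .5], [0, .5, .5]])
    cells = np.stack(np.meshgrid(*[np.arange(n_cells)] * 3), -1).reshape(-1, 3)
    lattice = ((cells[:, None, :] + basis[None, :, :]) * a).reshape(-1, 3)

    positions, grain_of = [], []
    for g in range(n_grains):
        # QR decomposition of a Gaussian matrix gives a (near-)uniformly
        # distributed random orthogonal matrix.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        pts = ((lattice - box / 2) @ q.T + centers[g]) % box
        # Keep only sites closer to this grain's center than to any other
        # (minimum-image distances in the periodic cell).
        d = pts[:, None, :] - centers[None, :, :]
        d -= box * np.round(d / box)
        nearest = np.argmin((d ** 2).sum(-1), axis=1)
        keep = pts[nearest == g]
        positions.append(keep)
        grain_of.append(np.full(len(keep), g))
    return np.concatenate(positions), np.concatenate(grain_of)
```

The removal of too-close grain-boundary atoms and the annealing described below would follow this step.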
In the grain boundaries thus generated, it is possible that two atoms from two different grains get too close to each other. In such cases one of the atoms is removed to prevent unphysically large energies and forces as the simulation is started. To obtain more relaxed grain boundaries the system is annealed for 10 000 timesteps (50 ps) at 300 K, followed by an energy minimization. This procedure is important to allow unfavorable local atomic configurations to relax.
To investigate whether the parameters of the annealing procedure are critical, we have annealed the same system for 50 and 100 ps at 300 K, and for 50 ps at 600 K. We have compared the mechanical properties of these systems with those of an identical system without annealing. We find that the annealing is important (the unannealed system was softer), but the parameters of the annealing are not important within the parameter space investigated.
A similar generation procedure has been reported by Chen, by D’Agostino and Van Swygenhoven , and by Van Swygenhoven and Caro. A different approach was proposed by Phillpot, Wolf and Gleiter: a nanocrystalline metal is generated by a computer simulation where a liquid is solidified in the presence of crystal nuclei, i.e. small spheres of atoms held fixed in crystalline positions. The system was then quenched, and the liquid crystallized around the seeds, thus creating a nanocrystalline metal. In the reported simulations, the positions and orientations of the seeds were deterministically chosen to produce eight grains of equal size and with known grain boundaries, but the method can naturally be modified to allow randomly placed and oriented seeds. The main drawback of this procedure is the large number of defects (mainly stacking faults) introduced in the grains by the rapid solidification. The stacking faults are clearly seen in the resulting nanocrystalline metal (Fig. 7 of Ref. ). The appearance of a large number of stacking faults was also seen in the solidification of large clusters even if the cooling is done as slowly as possible in atomistic simulations.
### B Structures
A typical system (after annealing and cooling to zero Kelvin) with a grain size of 5.2 nm is shown in Fig. 1. The atoms have been color coded according to the local crystal structure, as determined by the Common Neighbor Analysis (see section III C). In Figure 2 the radial distribution function (RDBF) $`g(r)`$ for the same system is shown. It is defined as the average number of atoms per volume at the distance $`r`$ from a given atom. The RDBF is seen to differ from that of a perfect fcc crystal in two ways. First, the peaks are not sharp delta functions, but are broadened somewhat. This broadening is in part due to strain fields in the grains (probably originating from the grain boundaries), and in part due to atoms in or near the grain boundaries sitting close to (but not at) the lattice positions. The second difference is seen in the inset: the RDBF does not go to zero between the peaks. It is the signature of some disorder which in this case comes from the grain boundaries.
Experimentally, information about the RDBF can be obtained from X-ray absorption fine structure (XAFS) measurements. This method has been used to measure the average coordination number of Cu atoms in nanocrystalline Cu, finding coordination numbers of $`11.8\pm 0.3`$ and $`11.9\pm 0.3`$ in samples with 34 nm and 13 nm grain size, respectively. From these results the average coordination number of the atoms in the grain boundaries was estimated to $`11.4\pm 1.2`$, i.e. within the experimental uncertainty it is the same as in the bulk. Integrating the first peak of the calculated RDBF (Fig. 2) gives an average coordination number of $`11.9\pm 0.15`$. As the RDBF does not go to zero between the first two peaks, it is not clear where the upper limit of the integration should be chosen, hence the uncertainty. The value given is for an upper limit of 3.125 Å. There is thus excellent (but perhaps rather trivial) agreement between the calculated and the experimental coordination numbers.
Numerical studies have shown that the Voronoi construction results in a grain size distribution that is well described by a log-normal distribution (although for more than 5000 grains a two-parameter gamma distribution gives a better fit). In Fig. 3 we show the grain size distributions in our simulations with intended average grain sizes of 3.28 and 5.21 nanometers. The observed distributions are consistent with log-normal distributions, although due to the rather low number of grains it is not possible to distinguish between a log-normal and a normal distribution.
## III SIMULATION METHODS
We model the interactions between the atoms using a many-body Effective Medium Theory (EMT) potential. EMT gives a realistic description of the metallic bonding, in particular in fcc metals and their alloys. Computationally, it is not much more demanding than pair potentials, but due to its many-body nature it gives a far more realistic description of the materials properties.
The systems can be deformed by rescaling the coordinates along a direction in space (in the following referred to as the $`z`$ direction). During this deformation either a conventional molecular dynamics (MD) algorithm or a minimization algorithm is used to update the atomic positions in response to the deformation.
In molecular dynamics, the timestep used when integrating the equation of motion must be short compared to the typical phonon frequencies in the system. We use a timestep of 5 fs, safely below the value where the dynamics becomes unstable. A consequence of the short timesteps involved in MD simulations is that only brief periods of time can be simulated. For the size of systems discussed here 1 ns (200000 timesteps) is for all practical purposes an upper limit, although for repeated simulations 0.1 ns is a more realistic limit.
One consequence of the short time scale is that very high strain rates are required to obtain any reasonable deformation within the available time; a typical strain rate in the simulations reported here is $`5\times 10^8\mathrm{s}^{-1}`$. This is very high, but the ends of the system still separate at velocities far below the speed of sound. We have investigated the effects of varying the strain rate, see section IV B 4 .
As another consequence of the short time-scale, slower processes will not be seen in the simulations. In particular, most diffusional processes will be unobservable (this would also be the case in experiments performed at these high strain rates). However, measurements of diffusional creep (Coble creep) in nanocrystalline metals indicate that diffusional creep is not a large effect.
Some of the simulations were performed using a minimization procedure, i.e. the system is kept in a local energy minimum while it is deformed. In such a simulation time is not defined, since we are not solving an equation of motion. In a similar way, time should not be relevant in an experiment performed truly at zero temperature, since there will be no thermally activated processes, and thus no way the sample can leave a local energy minimum (neglecting quantum tunneling). The strain rate will therefore not affect the results, provided it is low enough to prevent a heating of the sample, and provided the minimization procedure is fully converged. The minimization simulations can thus be seen as a model for idealized experiments at zero temperature in the low strain rate limit, where there is time for the heat generated by the deformation to be removed.
### A The minimization procedure
To simulate deformation at zero temperature a minimization procedure was used to keep the system in or near a local minimum in energy at all times. The deformation and minimization was done simultaneously. The minimization algorithm is a modified molecular dynamics simulation. After each MD timestep the dot product between the momentum and the force is calculated for each atom. Any atom where the dot product is negative gets its momentum zeroed, as it is moving in a direction where the potential energy is increasing. In this way the kinetic energy of an atom is removed when the potential energy is close to a local minimum along the direction the atom is moving. This minimization procedure quickly brings the system close to a local minimum in energy, but a full convergence is not obtained, as it would require a number of timesteps at least as great as the number of degrees of freedom in the system. However, we find only little change in the development of the system, when we increase the number of minimization steps.
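One quench step can be sketched as follows (our sketch; `forces` stands for the force routine of the interatomic potential and is a placeholder):

```python
# Sketch of the quenching minimizer described above: after each
# velocity-Verlet step, any atom moving against its force has its
# momentum zeroed. `forces(positions)` is a hypothetical helper
# returning the (N, 3) force array from the potential (e.g. EMT).
import numpy as np

def md_min_step(pos, mom, masses, forces, dt=5e-15):
    """One velocity-Verlet step followed by the p.F quench."""
    f0 = forces(pos)
    mom = mom + 0.5 * dt * f0
    pos = pos + dt * mom / masses[:, None]
    f1 = forces(pos)
    mom = mom + 0.5 * dt * f1
    # Quench: zero the momentum of every atom whose momentum points
    # uphill in potential energy (p . F < 0).
    uphill = np.einsum('ij,ij->i', mom, f1) < 0.0
    mom[uphill] = 0.0
    return pos, mom
```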
At each timestep the system is deformed by a tiny scaling of the coordinates: the $`z`$ coordinates are multiplied by $`1+ϵ`$ and the $`x`$ and $`y`$ coordinates by $`1-\nu ϵ`$, where $`ϵ`$ is a very small number chosen to produce the desired deformation rate. The constant $`\nu `$ is an “approximate Poisson’s ratio”. A Monte Carlo algorithm is used to optimize the two lateral dimensions of the system: after every 20th timestep a change in the lateral dimensions is proposed. If the change results in a reduction of the energy it is accepted, otherwise it is discarded. In this way the exact value chosen for $`\nu `$ becomes uncritical, as the Monte Carlo algorithm governs the contraction in the lateral directions. The use of $`\nu \ne 0`$ is just for computational efficiency. We used $`\nu =0.4`$, as this lies between the optimal value in the part of the simulation where elastic deformation dominates (Poisson’s ratio $`\sim 0.3`$) and in the part where plastic deformation dominates ($`\nu \sim 0.5`$, as volume is then conserved).
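The deformation step and the Monte Carlo box move can be sketched as follows (again our illustration; `total_energy` is a placeholder for the potential-energy routine, and the zero-temperature acceptance rule is the one described above):

```python
# Sketch of the per-timestep strain increment and the lateral box
# optimization. At finite temperature (section III B) the box move would
# use the conventional Metropolis acceptance probability instead.
import numpy as np

def deform_step(pos, box, eps, nu=0.4):
    """Stretch along z and contract laterally by the trial Poisson's ratio."""
    scale = np.array([1.0 - nu * eps, 1.0 - nu * eps, 1.0 + eps])
    return pos * scale, box * scale

def box_move(pos, box, total_energy, rng, max_frac=1e-4):
    """Propose a small change of the lateral dimensions; accept only if
    the energy decreases (zero-temperature limit)."""
    s = 1.0 + rng.uniform(-max_frac, max_frac)
    trial_pos = pos * np.array([s, s, 1.0])
    trial_box = box * np.array([s, s, 1.0])
    if total_energy(trial_pos, trial_box) < total_energy(pos, box):
        return trial_pos, trial_box
    return pos, box
```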
A few simulations were repeated using a Conjugate Gradient (CG) minimization instead of the MD minimization algorithm. The two algorithms were approximately equally efficient in these simulations, provided that the CG algorithm was restarted approximately every 20 line minimizations. Otherwise the CG algorithm will not minimize twice along the same direction in the $`3N`$-dimensional configuration space.
### B Molecular dynamics at finite temperature
At finite temperatures a conventional molecular dynamics algorithm is used, where the Newtonian equations of motion for the atoms are solved numerically. During the simulation the computational box was stretched as described above. The Monte Carlo algorithm optimizing the lateral dimensions is the conventional Metropolis algorithm.
Before the deformation is applied, the system is heated to the desired temperature by a short molecular dynamics simulation using Langevin dynamics, i.e. where a friction term and a fluctuating force are added to the equations of motion of the atoms. When the desired temperature has been reached (after approximately 10 ps), the simulation is continued using the velocity Verlet algorithm. During the deformation process the internal energy increases by the work performed on the system. In practice this amounts to a small heating of the system, of the order of $`\sim `$30 K.
### C Analysis of the results
While the simulation is performed, the stress field is regularly computed. The stress tensor is the derivative of the free energy of the system with respect to the strain. The effective medium theory allows us to define an energy per atom, which allows us to define an “atomic” stress for each atom. The stress is a suitable derivative of the energy with respect to the interatomic distances:
$$\sigma _{i,\alpha \beta }=\frac{1}{v_i}\left(\frac{p_{i,\alpha }p_{i,\beta }}{m_i}+\frac{1}{2}\underset{j\ne i}{\sum }\frac{\partial E_{\text{pot}}}{\partial r_{ij}}\frac{r_{ij,\alpha }r_{ij,\beta }}{r_{ij}}\right)$$
(1)
where $`\sigma _{i,\alpha \beta }`$ is the $`\alpha ,\beta `$ component of the stress tensor for atom $`i`$, $`v_i`$ is the volume assigned to atom $`i`$ ($`\sum _iv_i=V`$, where $`V`$ is the total volume of the system), $`m_i`$ is the mass of atom $`i`$, $`p_{i,\alpha }`$ is the $`\alpha `$ component of its momentum and $`r_{ij}`$ is the distance between atoms $`i`$ and $`j`$ ($`r_{ij,\alpha }`$ is a component of the vector from atom $`i`$ to $`j`$).
The atomic stress tensor cannot be uniquely defined. Eq. 1 is based on the Virial Theorem, but other definitions are possible. When the atomic stress is averaged over a region of space the various definitions quickly converge to a macroscopic stress field.
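For a simple pair interaction, Eq. (1) can be transcribed directly into code. The sketch below (ours) illustrates the structure of the expression; for the many-body EMT potential the derivative term is correspondingly more involved. `neighbors` and `dEdr` are hypothetical helpers:

```python
# Sketch of the virial atomic stress of Eq. (1) for a pair interaction.
# neighbors(i) -> iterable of neighbor indices of atom i (assumed helper)
# dEdr(r)      -> dE_pot/dr at separation r (assumed helper)
import numpy as np

def atomic_stress(i, pos, mom, masses, volumes, neighbors, dEdr):
    """3x3 atomic stress tensor of atom i."""
    sigma = np.outer(mom[i], mom[i]) / masses[i]    # kinetic term
    for j in neighbors(i):
        rij = pos[j] - pos[i]
        r = np.linalg.norm(rij)
        # Pair term, with the factor 1/2 sharing each bond between atoms.
        sigma += 0.5 * dEdr(r) * np.outer(rij, rij) / r
    return sigma / volumes[i]
```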
During the simulation, stress-strain curves are calculated by averaging the atomic stresses over the entire system.
To facilitate the analysis of the simulations the local atomic order was examined using an algorithm known as Common Neighbor Analysis (CNA). In this algorithm the bonds between an atom and its neighbors are examined to determine the crystal structure (two atoms are considered to be bonded if they are closer together than a cutoff distance chosen between the first two peaks in the radial distribution function). Bonds are classified by three integers $`(ijk)`$. The first integer $`i`$ is the number of common neighbors, i.e. atoms bonded to both atoms in the bond under consideration. The second integer $`j`$ is the number of bonds between these common neighbors. The third integer $`k`$ is the longest chain that can be formed by these bonds.
The number and type $`(ijk)`$ of bonds that an atom has determines the local crystal structure. For example, atoms in a perfect fcc crystal have 12 bonds of type 421, whereas atoms in a perfect hcp crystal have six bonds of type 421 and six of type 422. In these simulations we have mainly used CNA to classify atoms into three classes: fcc, hcp and “other”, where the “other” class is atoms that have a different number of bonds than 12, or that have bonds that are not of type 421 or 422.
The use of CNA makes dislocations, grain boundaries and stacking faults visible in the simulation. Intrinsic stacking faults appear as two adjacent $`\{111\}`$ planes of hcp atoms, extrinsic stacking faults are two $`\{111\}`$ planes of hcp atoms separated by a single $`\{111\}`$ plane of fcc atoms, whereas twin boundaries are seen as a single $`\{111\}`$ plane of hcp atoms. Dislocation cores and grain boundaries consist of atoms in the “other” class, although grain boundaries also contain a small number of hcp atoms.
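A simplified transcription of the classification rules just described is given below (our sketch; building the `bonded` neighbor sets from the positions and the cutoff is omitted, and `bonded[a]` is assumed not to contain `a` itself):

```python
# Simplified Common Neighbor Analysis. `bonded` maps each atom index to
# the set of its neighbors within the cutoff distance.

def cna_signature(a, b, bonded):
    """Return the (i, j, k) triple classifying the bond a-b."""
    common = bonded[a] & bonded[b]                 # i: common neighbors
    pairs = [(m, n) for m in common for n in common
             if m < n and n in bonded[m]]          # j: bonds among them
    # k: longest chain formed by those bonds; the graphs are tiny, so a
    # brute-force depth-first search over simple paths is sufficient.
    adj = {m: {n for n in common if n in bonded[m]} for m in common}
    def longest(node, visited):
        return max((1 + longest(n, visited | {n})
                    for n in adj[node] if n not in visited), default=0)
    k = max((longest(m, {m}) for m in common), default=0)
    return len(common), len(pairs), k

def local_structure(a, bonded):
    """Classify atom a as 'fcc', 'hcp' or 'other' from its bonds."""
    sigs = [cna_signature(a, b, bonded) for b in bonded[a]]
    if len(sigs) == 12 and all(s == (4, 2, 1) for s in sigs):
        return 'fcc'
    if sigs.count((4, 2, 1)) == 6 and sigs.count((4, 2, 2)) == 6:
        return 'hcp'
    return 'other'
```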
When analyzing simulations made at finite temperatures, the lattice vibrations may interfere with the CNA. For simulations at temperatures up to 300 K it is sufficient to choose the cutoff distance carefully, but for higher temperatures it is necessary to precede the CNA analysis by a short minimization (short enough to remove most of the lattice vibrations, but not long enough to allow dislocations etc. to move). For consistency, such a minimization procedure was used for all finite-temperature simulations regardless of temperature.
## IV RESULTS
In this section we first discuss the results of the simulations based on the minimization algorithm (i.e. the zero-temperature results), and then we discuss the results obtained using molecular dynamics at finite temperatures.
### A Simulations at zero temperature
During the deformation, we calculate the average stress in the system as a function of the strain. Fig. 4 shows the stress-strain curves obtained from simulations at 0 K. We see a linear elastic region followed by a plastic region with almost constant stress. Similar results are found for palladium (Fig. 5). Each stress-strain curve shown in Figs. 4 and 5 is obtained by averaging over a number of simulations with different (randomly produced) grain structures with the same average grain size. A set of stress-strain curves from individual simulations is shown in Fig. 6.
One of the rationales for using a minimization procedure to study the deformation was the hope that the system would evolve through a series of local energy minima, separated by discrete events when the applied deformation causes a minimum to disappear. In this way, the simulation would have resulted in a unique deformation history for any given sample. However, the deformation turned out to happen through a very large number of very small processes that could not be individually resolved by this procedure (see below). One symptom of this is that the individual curves in Fig. 6 are not completely reproducible. Any, even minor, change in the minimization procedure, or a perturbation of the atomic coordinates, will result in a slightly different path through configuration space, and in different fluctuations in the stress-strain curves. These differences are suppressed when average stress-strain curves are calculated, as in Fig. 4, and would also disappear as the system size (and thus the number of grains) increases.
#### 1 Young’s Modulus
In the linear elastic region the Young’s modulus is found to be around 90–105 GPa, increasing with increasing grain size. The experimental value for macrocrystalline copper is 124 GPa at 300 K, and the value found for single crystals using this potential is 150 GPa at 0 K (Hill average calculated from the anisotropic elastic constants $`C_{11}=173`$ GPa, $`C_{12}=116`$ GPa, $`C_{44}=91`$ GPa). A similar reduction of Young’s modulus is seen in simulations of nanocrystalline metals grown from a molten phase. The low value is due to the large volume fraction of atoms in the grain boundaries. These atoms experience a different atomic environment, which could result in a reduction of the elastic moduli similar to what is seen in amorphous metals. This local reduction of the elastic constants in the grain boundaries is confirmed by atomistic simulations.
Experimental measurements of the Young’s modulus of high-quality (i.e. low-porosity) samples of nanocrystalline copper and palladium show a reduction in Young’s modulus of at most a few percent when correcting for the remaining porosity. These results were obtained for significantly larger grain sizes than were used in the simulations. The reduction of Young’s modulus that we observe in these simulations will be difficult to detect experimentally, due to the much lower volume fraction of atoms in the grain boundaries at the typical grain sizes of high-quality samples ($`\sim 20`$ nm).
#### 2 Yield and flow stress
The onset of plastic deformation is usually described by the yield stress $`\sigma _y`$, traditionally defined as the stress where the strain is 0.002 larger than what would be expected from extrapolation from the elastic region. In these simulations the stress continues to increase after the yield point has been reached, until it reaches a plateau and becomes constant (or slightly decreasing). We call the level of the plateau the flow stress.
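In code, the extraction of the yield stress from a simulated stress-strain curve can be sketched as follows (our illustration; `strain` and `stress` are hypothetical arrays from a simulation):

```python
# Sketch of the 0.2%-offset yield-stress extraction: fit E in the elastic
# region, then find where the strain first exceeds the elastic prediction
# by 0.002. Assumes the offset is actually reached within the data.
import numpy as np

def yield_stress(strain, stress, elastic_limit=0.001, offset=0.002):
    mask = strain < elastic_limit
    E = np.polyfit(strain[mask], stress[mask], 1)[0]   # Young's modulus
    plastic = strain - stress / E                      # strain beyond elastic
    idx = np.argmax(plastic > offset)                  # first crossing
    return E, stress[idx]
```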
Fig. 7 shows the dependence of the yield and flow stress on the grain size. A clear reverse Hall-Petch effect is observed, i.e. a softening of the material as the grain size is reduced, as discussed in a previous paper.
#### 3 Structural changes
Fig. 8 shows the same system as Fig. 1, but after 10 % deformation. Some stacking faults have appeared in the grains; they are caused by partial dislocations (Shockley partials) nucleating at the grain boundaries and moving through the grains. One such dislocation is seen in the figure.
The radial distribution function (Fig. 9) has been changed somewhat by the deformation. The peaks have been broadened; this is mainly caused by the anisotropic stress fields in the sample. The “background level” between the first two peaks has increased a little, indicating a larger amount of disorder in the system. Increased disorder is also seen in Fig. 8, where the grain boundaries appear to have become slightly thicker compared to the initial configuration. This is confirmed by Fig. 10, showing the number of atoms in different local configurations before and after the deformation. We see that more atoms are classified as neither fcc nor hcp after the deformation than before.
A strong increase in the number of atoms near stacking faults (atoms in hcp symmetry) is also seen in Figs. 8 and 10. The stacking faults appear as partial dislocations move through the system, and they are thus the signature of dislocation activity. At zero temperature, we do not observe cases where a second partial dislocation erases the stacking faults (we observe only a very few atoms changing from local hcp order to local fcc order). We can therefore use the total number of hcp-ordered atoms to estimate an upper bound on the amount of plastic deformation caused by the dislocations.
If a dislocation with Burgers vector $`\stackrel{}{b}`$ runs through the entire system, the dimensions of the system are changed by $`\stackrel{}{b}`$ and the strain $`\epsilon _{zz}`$ is thus $`b_zL_z^{-1}`$, where $`b_z`$ is the $`z`$ component of the Burgers vector and $`L_z`$ is the dimension of the system in the $`z`$ direction. If the dislocation only passes through a part of the system, the resulting deformation is reduced by $`A\mathrm{cos}\varphi (L_xL_y)^{-1}`$, where $`A`$ is the area of the slip, $`L_x`$ and $`L_y`$ are the lateral dimensions of the system and $`\varphi `$ is the angle between the slip plane and the $`xy`$ plane. The contribution from a slip plane to the $`zz`$ component of the strain is thus $`\epsilon _{zz}=(b_zL_z^{-1})A\mathrm{cos}\varphi (L_xL_y)^{-1}`$.
The maximal value of $`b_z`$ is $`b\mathrm{sin}\varphi `$, where $`b=|\stackrel{}{b}|`$, since $`\stackrel{}{b}`$ lies in the slip plane. The maximal strain from the slip thus becomes $`\epsilon _{\text{max}}=bA(2V)^{-1}=aA(2\sqrt{6}V)^{-1}`$ for $`\varphi =\pi /4`$, as the Burgers vector of a Shockley partial is $`b=a/\sqrt{6}`$, where $`a`$ is the lattice constant and $`V`$ is the volume of the simulation cell. A slip plane of area $`A`$ results in two $`\{111\}`$ planes of hcp atoms, i.e. $`4A(\sqrt{3}a^2)^{-1}`$ atoms. The total system contains $`4Va^{-3}`$ atoms, so the fraction of hcp atoms becomes $`n=Aa(\sqrt{3}V)^{-1}`$. Hence a fraction $`n`$ of hcp atoms can at most have resulted in a strain of $`\epsilon _{\text{max}}=2^{-3/2}n`$. As the simulations generate at most 9% hcp atoms during 10% deformation, we get $`\epsilon _{\text{max}}\lesssim 3\%`$, provided that all slip planes and Burgers vectors are ideally aligned. It is therefore clear that the main deformation mode is not dislocation motion.
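Making the final substitution explicit (our arithmetic, using the numbers quoted above):

$$\epsilon _{\text{max}}=2^{-3/2}n\approx 0.354\times 0.09\approx 3.2\times 10^{-2},$$

i.e. only about a third of the imposed 10% strain, even with ideally aligned slip systems.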
Fig. 11 illustrates where the main part of the deformation has taken place. The atoms are colored according to their motion relative to the global stretching of the system. We clearly see that the upper parts of the grains have moved down and the lower parts up, relative to what would be expected in a homogeneous deformation. This shows that the grains do not stretch as much as they would in a homogeneous deformation. On the other hand, significant deformation has occurred in the grain boundaries, as the atoms typically move up on one side and down on the other side of a grain boundary. An analysis of this deformation shows that it takes the form of a large number of apparently uncorrelated small slipping events, in which a few atoms (or a few tens of atoms) move relative to each other, i.e. it is not collective motion in the grain boundaries. A minor part of the plastic deformation is in the form of dislocation motion inside the grains. The slip planes are clearly seen in Fig. 11, in particular in the large grain in the upper left part of the figure, where two dislocations have moved through the grain, and a third is on its way near one of the previous slip planes.
### B Simulations at finite temperature
The simulations were repeated with the same systems (i.e. the same initial grain structure) at finite temperatures. Fig. 12 shows the stress-strain curves for the same system at different temperatures. We clearly see a softening with increasing temperature. Rather large fluctuations are seen in the curves. These are mainly thermal fluctuations and fluctuations due to single major “events” in the systems (e.g. the nucleation of a dislocation). These fluctuations are only visible due to the small system size.
The softening of the material with decreasing grain size is also observed in simulations at 300 K, see Fig. 13.
#### 1 Young’s Modulus
Young’s modulus ($`E`$) is the slope of the stress-strain curve in the linear region. When calculating Young’s modulus from the simulation data, a compromise must be made between getting enough data points for a reliable fit and staying within the clearly linear region. For the zero-temperature simulations, fitting Young’s modulus to the data points for $`\epsilon <0.1\%`$ satisfies both conditions, but for the finite-temperature simulations more data points are required. We have chosen to use data in the interval $`\epsilon <1\%`$; this ensures that we have enough data for a reliable fit, but results in a slight underestimate of the Young’s modulus, as some plastic deformation begins in this interval. The result of this procedure is shown in Fig. 14, showing Young’s modulus for a single system with grain size $`d=5.2`$ nm, simulated at different temperatures. For consistency, the larger strain interval has been used even for the $`T=0`$ simulation. Using the smaller interval ($`\epsilon <0.1\%`$) would result in $`E=119`$ GPa instead of $`E=100`$ GPa.
The observed temperature dependence of $`E`$ is approximately -72 MPa/K, which is somewhat larger than what has been observed experimentally (-40 MPa/K) in copper with a grain size of 200 nm. This may be because the Young’s modulus of the grain boundaries is more temperature sensitive than in the bulk, or it may be due to increased creep in the higher-temperature simulations, see the discussion in section IV B 4.
#### 2 Yield and flow stress
The yield stress is again determined as the stress where the strain is 0.2% above what would be expected from extrapolation from the linear regime. The difficulties leading to an underestimate of the Young’s modulus thus lead to an overestimate of the yield stress. The values for the yield stress obtained from the simulations at 300 K can therefore not be compared directly with the values obtained at 0 K, but values obtained at different grain sizes can of course be compared, as the same method was used to estimate the yield stress in all cases. The flow stress is a much better-defined quantity, and direct comparison is possible between simulations at different temperatures. The variation of the yield and flow stresses with temperature is seen in Fig. 15.
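A sketch of the 0.2% offset construction, under the same assumptions as above (hypothetical `strain`/`stress` arrays; `E` is the modulus fitted from the linear region):

```python
import numpy as np

def yield_stress(strain, stress, E, offset=0.002):
    """Stress at the first point where the measured curve meets the line
    stress = E*(strain - offset), i.e. where the strain is 0.2% above the
    elastic extrapolation. Returns None if the curve never yields."""
    crossed = np.nonzero(stress <= E * (strain - offset))[0]
    return stress[crossed[0]] if crossed.size else None
```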
Fig. 16 shows the variation of both the yield and the flow stress with grain size. As in the simulations at 0 K, a reverse Hall-Petch relationship is found.
#### 3 Structural changes
The main deformation mode appears to be the same at zero and at finite temperatures. Figure 17 shows the atomic displacements; again, the majority of the deformation has taken place in the grain boundaries, and only a few slip planes are seen.
The grain boundaries do not appear to increase as much in thickness as they do at 0 K. Fig. 18 shows the change in the fraction of atoms in different local environments during the deformation of the same system at 0 K and at 300 K. In both cases we clearly see an increase in the number of hcp atoms (stacking faults) due to the motion of dislocations through the grains, but the number of atoms in the grain boundaries increases significantly more at 0 K than at 300 K (the increase is approximately twice as large at 0 K as at 300 K). The increase appears to be caused by the deformation in the grain boundaries. Apparently, the local disorder introduced in this way is partially annealed out at 300 K.
#### 4 Strain rate
The finite-temperature simulations presented in this paper were performed at a strain rate of $`\dot{\epsilon }=5\times 10^8\mathrm{s}^{-1}`$, unless otherwise mentioned.
In order to investigate the influence of the strain rate, we simulated the same deformation of the same system using different strain rates in the range $`2.5\times 10^7\mathrm{s}^{-1}`$–$`1.0\times 10^{10}\mathrm{s}^{-1}`$, see Fig. 19. A strong dependence on the strain rate is seen for strain rates above $`1\times 10^9\mathrm{s}^{-1}`$. Below this “critical strain rate” the strain rate dependence of the stress-strain curves is far less pronounced. Fig. 20 confirms this impression; it shows the yield and flow stress as a function of the strain rate. Experiments on ultrafine-grained ($`d\sim 300`$ nm) Cu and Ni show a clear strain rate dependence of the yield and flow stresses at high strain rates.
Perhaps surprisingly, the Young’s modulus appears to depend on the strain rate as well (Fig. 19). This indicates that some kind of plastic deformation occurs in the “linear elastic” region. This is confirmed by stopping a simulation while the system still appears to be in the elastic region, and then allowing it to contract until the stresses are zero. The system does not regain the original length: plastic deformation has occurred.
To examine the time-scale over which this deformation occurs, a configuration was extracted from the simulation at $`\dot{\epsilon }=2.5\times 10^7\mathrm{s}^{-1}`$ after 0.4% deformation. The system was held at a fixed length for 300 ps while the stress was monitored, see Fig. 21. The stress is seen to decrease with a characteristic time of approximately 100 ps. By plotting the atomic motion in a plot similar to Figs. 11 and 17, it is seen that the relaxation is due to small amounts of plastic deformation in the grain boundaries. The consequence of this is that the systems do not have time to relax completely during the simulations, explaining the observed strain rate dependence. In order to allow for complete relaxation of the systems, strain rates far below what is practically possible with MD simulations would be required.
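The characteristic time can be extracted with a single-exponential fit; a sketch follows, with the fit call left as a comment because the arrays `t` (in ps) and `sigma` (in GPa) are hypothetical output of the fixed-length run.

```python
import numpy as np
from scipy.optimize import curve_fit

def relax(t, sigma_inf, d_sigma, tau):
    """Stress relaxing exponentially toward a plateau sigma_inf."""
    return sigma_inf + d_sigma * np.exp(-t / tau)

# Initial guesses: final stress, initial excess stress, and a 100 ps scale.
# popt, _ = curve_fit(relax, t, sigma,
#                     p0=(sigma[-1], sigma[0] - sigma[-1], 100.0))
# tau = popt[2]   # characteristic relaxation time in ps, ~100 ps here
```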
#### 5 Grain rotation
Grain rotation has previously been reported in simulations of nanocrystalline nickel. We have investigated the rotation of the grains during some of the simulations; the results are summarized in Fig. 22. The figure shows the rotation of five randomly selected grains as a function of strain and temperature. The rotations were identified by a three-dimensional Fourier transform of the positions of the atoms in the grains.
We see that the grain rotation increases with increasing temperature. There is a large variation in how much the individual grains rotate. The grains with the largest rotations keep the same axis of rotation during the entire deformation, whereas the grains that only rotate a little have a varying axis of rotation. Probably some grains are in a local environment where a significant rotation results in an advantageous deformation of the sample which reduces the stress. Other grains are randomly rotated as the many small deformation processes in the grain boundaries occur.
#### 6 Porosity
As the observed reverse Hall-Petch effect is often explained as an artifact of sample porosity (see section V A), we found it relevant to study how pores influence the mechanical properties. The void structure in experimentally produced samples is usually not well known, so we chose to study several different types of voids. In all cases the voids resulted in a reduction of both the Young’s modulus and the flow stress, see Fig. 23.
##### Elliptical voids.
These crack-like voids were created by removing all atoms within an oblate ellipsoid with an axis ratio of 3.16. The short axis can be oriented along the pulling direction (the $`z`$-axis) or perpendicular to it (the $`x`$-axis). The former orientation corresponds to cracks that are activated by the applied stress field, the effect of these cracks is therefore expected to be much larger than the effect of the “inactive” cracks. This is clearly seen in Fig. 23.
##### Missing grains.
There have been reports of pore sizes comparable to (and proportional to) the grain size. To emulate this, we have tried to remove whole grains from the system. As the grains are approximately equiaxed, it is not surprising that the effect of removing a grain is intermediate between the effects of removing ellipsoids in the two orientations, provided that approximately the same number of atoms are removed.
##### Missing grain boundary atoms.
In samples experimentally produced by compacting a powder, it is reasonable to assume that the porosity will mainly be in the form of (possibly gas-filled) voids between the grains. There is also some experimental evidence that this is indeed the case. To emulate this, we have removed all atoms in the grain boundaries within one or nine spherical regions in the sample, creating one large or nine small voids in the grain boundaries. This type of void has the largest effect on the material’s properties, giving a reduction of 35–40% in the Young’s modulus and flow stress for a 12.5% porosity. It seems rather natural that a large effect is obtained with the voids concentrated in the grain boundaries, since we know that the main part of the deformation is carried by these boundaries.
## V Discussion
### A The reverse Hall-Petch effect
A reverse Hall-Petch effect was first observed in nanocrystalline Cu and Pd by Chokshi *et al.* in 1989. Since then, there have been numerous observations of softening at very small grain sizes.
The reverse Hall-Petch effect seems to depend strongly on the sample preparation technique used and on the sample history, perhaps indicating that in most cases the reverse Hall-Petch effect is caused by various kinds of defects in the samples. Surface defects alone have been shown to be able to decrease the strength of nanocrystalline metals by a factor of five, and recent studies have shown that even very small amounts of porosity can have a dramatic effect on the strength. Improved inert gas condensation techniques have reduced the porosity, resulting in samples with densities above 98% of the fully-dense value. In these samples the ordinary Hall-Petch effect is seen to continue down to grain sizes around 15 nm. There are only a few data points below that grain size, but apparently no further increase in the hardness is seen. It is suggested that most of the observations of a reverse Hall-Petch effect in nanocrystalline metals are a result of poor sample quality. This impression is supported by literature studies indicating that the reverse Hall-Petch effect is mainly seen when the grain size is varied by repeated annealing of a single sample, whereas an ordinary Hall-Petch relationship is seen when as-prepared samples are used.
However, there does seem to be a deviation from the Hall-Petch effect for grain sizes below approximately 15 nm, where the Hall-Petch slope is seen to decrease or vanish in samples produced with various techniques. This is seen in Cu samples produced by inert gas condensation followed by warm compaction (sample densities above 98%) and in electroplated Ni (claimed to be porosity free).
There are theoretical arguments for expecting that the Hall-Petch relation ceases to be valid for grain sizes below $`\sim `$20 nm: as the grain size becomes too small, dislocation pileups are no longer possible, and the usual explanation for Hall-Petch behavior does not apply.
Many models have been proposed to explain why a reverse Hall-Petch effect is sometimes seen. Chokshi *et al.* proposed that enhanced Coble creep, i.e. creep by diffusion in the grain boundaries, should result in a softening at the smallest grain sizes, as the creep rate increases with decreasing grain size ($`d`$) as $`d^{-3}`$. Direct measurements of the creep rate have, however, ruled this out.
It has been suggested that the grain boundaries in nanocrystalline metals have a different structure, making them more transparent to dislocations than “ordinary” grain boundaries. If it becomes possible for the dislocations to run through several grains as the grain size is reduced, the Hall-Petch relations would break down. In our simulations, we have not observed dislocations moving through more than one grain.
If the Hall-Petch effect is explained by appealing to dislocation sources in the grain boundaries, the Hall-Petch relationship is expected to break down when the grain size becomes so small that there are no longer dislocation sources in all grain boundaries (assuming a constant density of dislocation sources in the grain boundaries).
Hahn *et al.* suggest that the reverse Hall-Petch effect is caused by deformation in the grain boundaries. If a grain boundary slides, stress concentrations build up where the grain boundary ends, limiting further sliding. Substantial sliding on a macroscopic scale occurs when sliding takes place on slide planes consisting of many aligned grain boundaries; such sliding is hindered by the roughness of the slide plane, which consists of many grain boundaries. As the grain size is reduced and becomes comparable to the grain boundary width, the roughness of such slide planes decreases and the stress required for mesoscopic sliding decreases. This would result in a reverse Hall-Petch effect. They estimate the transition from normal to reverse Hall-Petch effect to occur at grain sizes near 50 nm for Cu.
The simulations reported in the present paper indicate that the main deformation mechanism at these grain sizes is indeed sliding in the grain boundaries. However, it is not clear whether the proposed “collective” sliding events occur; it appears that sliding occurs on individual grain boundaries, and that the resulting stress buildup is relieved through dislocation motion in the grains. There is a competition between the ordinary deformation mode (dislocations) and grain boundary sliding. As the grain size is increased, dislocation motion is eventually expected to dominate, and we expect a transition to a behavior more like what is seen in coarse-grained materials, including a normal Hall-Petch effect. The transition is beyond what can currently be simulated at the atomic scale, but we do see a weak increase in the dislocation activity when the grain size is increased: the increase in the fraction of hcp atoms during a simulation grows slightly with the grain size (Fig. 10).
## VI CONCLUSIONS
Molecular dynamics and related techniques have been shown to be a useful approach to studying the behavior of nanocrystalline metals. We have investigated in detail the plastic deformation of nanocrystalline copper, and shown that the main deformation mode is sliding in the grain boundaries. The sliding happens through a large number of small, apparently uncorrelated events, in which a few grain boundary atoms (or a few tens of atoms) move past each other. It remains the main deformation mechanism at all grain sizes studied (up to 13 nm), even at zero temperature. As the grain boundaries are the main carriers of the deformation, decreasing the number of grain boundaries by increasing the grain size leads to a hardening of the material, a *reverse Hall-Petch effect*. This is observed in the simulations, both for $`T=0`$ K and for $`T=300`$ K.
The Young’s moduli of the nanocrystalline systems are found to be somewhat reduced compared to the experimental value for polycrystalline copper with macroscopic grain sizes, decreasing with decreasing grain size. This indicates that the grain boundaries are elastically softer than the grain interiors. The Young’s modulus decreases with increasing temperature at a rate somewhat above what is seen experimentally in coarser-grained copper.
Pores in the samples have a large effect on both the Young’s modulus and the flow stress. This effect is enhanced if the pores are mainly in the grain boundaries, as one could expect in samples produced experimentally by inert gas condensation. Sample porosity can explain a large number of experiments showing reverse Hall-Petch effect, but the softening due to grain boundary sliding may be important for high-quality samples with grain sizes close to the lower limit of what can be reached experimentally.
## ACKNOWLEDGMENTS
Major parts of this work were financed by The Danish Technical Research Council (STVF) through Grant No. 9601119. Parallel computer time was financed by the Danish Research Councils through Grant No. 9501775. The Center for Atomic-scale Materials Physics is sponsored by the Danish National Research Council.
# Probing Red Giant Atmospheres with Gravitational Microlensing
## 1 Introduction
Stars populating the red giant branch spend a substantial fraction ($`\sim 5\%`$) of their entire lifetime there. Red giants are thus a major component of any galactic stellar population. In old systems, like Galactic globular clusters, their contribution is even larger. The parameters of the red giant branch have been traditionally derived from multi-color photometry and interpreted using stellar atmosphere models (e.g. Vandenberg & Bell 1985; Kurucz 1992). These model atmospheres involve one-dimensional integrations of semi-infinite slabs in hydrostatic and radiative equilibrium. However, real atmospheres of red giants are usually extended and the slab approximation is not adequate. Despite significant progress in computing extended spherically-symmetric models (e.g., Scholz & Tsuji 1984; Plez, Brett, & Nordlund 1992; to name a few), it is still difficult to use these improvements in massive grids of models, which therefore continue to be computed as plane-parallel (Houdashelt et al. 2000). A number of other assumptions and simplifications usually had to be made, some of which have not been verified yet by direct observations. Examples include the treatment of convective transport and small-scale velocity fields (e.g. micro- and macro-turbulence). Similarly, stellar evolution models also depend on a number of parameters which describe poorly understood physics, such as convection near the surface of the star.
These problems have hindered significant improvement in the determination of fundamental stellar properties over the past two decades. The angular diameters of only a small number of stars have been measured up to now, with an accuracy of $`\sim `$10% at best (Armstrong et al. 1995). Stellar disk brightness distributions (limb darkening) have been inferred only for a handful of stars at the level of a proof-of-concept (directly, Mozurkewich et al. 1991; or in binaries, Andersen 1991). An exception is provided by the direct $`HST`$ observation of the red supergiant $`\alpha `$ Ori, which revealed limb darkening as well as a bright region on the surface (Uitenbroek, Dupree, & Gilliland 1998). In general, the available inferred results cannot be used to build a model atmosphere, as in the case of the Sun. Due to this lack of an adequate observational basis, our understanding of stellar light is implicitly tied to the solar atmosphere model.
This situation is particularly unfortunate, since recently there has been an increased demand for accurate stellar models in a number of applications. For example, much improved color-temperature conversions are needed for calibrating distance indicators and determining stellar ages. Stellar population syntheses (e.g. for high-redshift galaxies) also rely critically on the accuracy of current stellar models. Our Sun is not a good standard for most of these applications.
The advent of large interferometric arrays (Keck interferometer, CHARA, SIM, VLTI) will go a long way towards solving these problems, but these complex and expensive facilities are still mostly in a preliminary development phase. In the meantime, gravitational microlensing offers an easily accessible, immediate, and inexpensive means for imaging at least large stars, such as red giants. It also offers, by its nature, access to stellar populations in the Galactic bulge and the Magellanic Clouds, which are well beyond the reach of any interferometer.
By now there are more than 350 microlensing events detected towards the Galactic bulge and the Large and Small Magellanic Clouds (see e.g. Alcock et al. 1997a, 1997b, 1997c; Becker et al. 1998; Renault et al. 1997; Afonso et al. 1999; Udalski et al. 1997). During a Galactic gravitational microlensing event the flux from a distant star is temporarily amplified by the gravitational field of a massive dim object passing in the foreground. The standard light curve in the point-source limit is characterized by two observables: its peak amplitude and duration (Paczyński 1996). These observables physically depend only on the lensing impact parameter and the Einstein radius crossing time of the source. However, in events with a small impact parameter the light curve is also affected by the resolved surface of the lensed star, thus providing an excellent opportunity for stellar surface imaging (Valls-Gabaud 1995, Sasselov 1996).
The first well documented case of resolved finite-size microlensing effects is MACHO Alert 95-30 (hereafter M95-30), with the impressive follow-up campaign by the GMAN and MACHO collaborations (Alcock et al. 1997d). The first spectroscopic observations of a binary lens event in which the caustic crossed the face of the source star were reported by Lennon et al. (1996). In a similar caustic-crossing binary event, Albrow et al. (1999) recently determined limb darkening profiles of the lensed K giant star for both observed spectral bands ($`I`$ and $`V`$).
Finite-size effects were first theoretically studied as methods to partially remove the degeneracy of microlensing light curves. These methods are based on alterations of the standard light curve (e.g. Gould 1994, 1995; Nemiroff & Wickramasinghe 1994; Witt & Mao 1994; Gould & Welch 1996), resolved polarization (Simmons, Willis, & Newsam 1995; Simmons, Newsam, & Willis 1995; Bogdanov, Cherepashchuk, & Sazhin 1996), spectral shifts due to stellar rotation (Maoz & Gould 1994), and narrow-band photometry of resonance lines (Loeb & Sasselov 1995). Here we want to put emphasis on the inverse problem: using microlensing for studying stellar surface features and probing the atmosphere of the source star.
The resolved stellar surface brightness distribution, $`B(\lambda ,\stackrel{}{r})`$, can vary strongly with wavelength in selected spectral regions - in the continuum as well as within spectral lines. The time-dependent microlensing amplification then becomes wavelength-dependent through $`B(\lambda ,\stackrel{}{r})`$. Photometry in different bands and sets of spectra taken in the course of a microlensing event can therefore be used to study the brightness distribution of the source. Microlensing light curve chromaticity was first evaluated by Witt (1995), Valls-Gabaud (1995) and Bogdanov & Cherepashchuk (1995). Methods for obtaining the brightness distribution by light curve inversion were studied by Bogdanov & Cherepashchuk (1996) and Hendry et al. (1998), inversion error analysis was presented by Gaudi & Gould (1999). Few works have been published so far on spectral effects in stellar microlensing. These include studies of line profile changes in rotating giants (Maoz & Gould 1994; Gould 1997) and in circumstellar envelopes in bulk motion (Ignace & Hendry 1999). First results for selected spectral lines in a simple model stellar atmosphere were presented by Valls-Gabaud (1996, 1998).
In order to optimize observations of spectral effects in actual events, the most practical approach is to theoretically predict the lensing effect on the entire synthesized optical spectrum of the source in microlensing alerts similar to M95-30. This way one can determine the most sensitive spectral features before the event peaks. An efficient method and computer code capable of dealing with the tens of thousands of frequencies involved was developed by Heyrovský & Loeb (1997; hereafter Paper I). In this work we use the method to explore point-mass microlensing effects on both low and high resolution optical spectra, synthesized directly from up-to-date red giant model atmosphere calculations.
The outline of this paper is as follows. In §2 we describe the model atmospheres used in this paper. The effect of microlensing on the overall spectrum of the source star as well as on individual spectral features is presented in §3. Finally §4 summarizes the main conclusions and future prospects of this work.
## 2 Model Atmospheres of Red Giants
In a stellar population with some spread of metallicities red giants exhibit a range of temperatures. The cooler giants have atmospheres which, unlike in the case of the Sun, are dominated by molecular compounds. Most of the hydrogen gets locked into H<sub>2</sub> and most of the carbon into CO. As the temperature drops, TiO, VO, and eventually H<sub>2</sub>O become prominent. This requires paying special attention to the equation of state and background opacities (Hauschildt et al. 1997; Kurucz 1992). Of great importance is the treatment of convection, as transport by convection affects a large part of the photosphere. Finally, for models with high luminosity (and low surface gravity, log $`g\lesssim 1`$), spherical geometry must be adopted in the treatment of radiative transfer (Hauschildt et al. 1999).
Here we have limited our study to static (i.e., non-Mira) red giant models. Our main interest is to compute realistic models of the center-to-limb variation of the intensity on the stellar disk: both for individual spectral features in high resolution, as well as for the optical pseudo-continuum. The state-of-the-art work for such models has been done recently by Hofmann & Scholz (1998) and Jacob et al. (1999). They employ model atmospheres from the grid of Bessell et al. (1989, 1991). Here we use similar models which reproduce their results. Striving for this level of sophistication is crucial in our study of center-to-limb and depth dependence of spectral features in red giants. For example, Valls-Gabaud (1998) used the linear limb darkening law and fitted simple analytical expressions for spectral lines from Kurucz (1992) model grids to predict chromatic and spectroscopic signatures of microlensing events of stars in general. We could not use the same simple approach, because we find that for red giants these linear approximations fail by a factor which is much larger than the uncertainties of the models, in line with similar findings by Jacob et al. (1999).
The structure of the models in this paper is derived by assuming local thermodynamic equilibrium (LTE) for the background opacities in flux-constant plane-parallel atmospheres. We use the most recent opacity sampling routines of Kurucz (1999) and TiO line data of Schwenke (1998). These models serve as input for computations of hydrogen and calcium, which are treated out of LTE (NLTE) with multi-level model atoms and in spherical geometry (for more details see Loeb & Sasselov 1995). The hydrogen atom model has energy levels n=1$``$5, plus continuum; all bound-bound and bound-free transitions are calculated in detail. The calcium atom model contains 8 energy levels for the neutral species (CaI) and 5 energy levels for the first ionization state (CaII), plus continuum (CaIII); all transitions are calculated in detail. In the case of red giants, atomic hydrogen and calcium are not important for the overall atmospheric structure. However, they still provide strong spectral features with significant center-to-limb variation (e.g. in the hydrogen Balmer lines, CaII H&K, etc.). Most moderately strong and weak lines diminish towards the limb (see Figure 1). As discussed further below, this variation could be detected in a microlensing event by measuring the total equivalent widths of spectral lines on medium-resolution spectra with a high signal-to-noise ratio, or it could be seen directly on high-resolution spectra (see §3).
The variation of the intensity from the center of the stellar disk to its limb is studied theoretically by computing the contribution function of each transition. This function defines the line-forming region in both space (depth in the atmosphere) and frequency. The line-forming region is shaped by local as well as non-local influences. Following Magain (1986), we define the NLTE contribution function (CF) for the relative line depression as
$$CF(\mathrm{log}\tau _{\lambda _0})=\frac{\mathrm{ln}10}{\mu }\tau _{\lambda _0}\frac{\kappa _l}{\kappa _0}\left(1-\frac{S_l}{I_c}\right)e^{-\tau _R/\mu },$$
where $`\mu `$ is the cosine of the incidence angle; $`\tau _{\lambda _0}`$ is the optical depth at a reference wavelength $`\lambda _0`$ ; $`\tau _R`$ is the optical depth corresponding to the opacity $`\kappa _R=\kappa _l+\kappa _cS_c/I_c`$ ; $`\kappa _0`$ is the absorption coefficient at $`\lambda _0`$ ; $`\kappa _l`$ and $`\kappa _c`$ are the line and continuum absorption coefficients, respectively; $`S_l`$ and $`S_c`$ are the line and continuum source functions, respectively; and $`I_c`$ is the emergent continuum intensity (if the line were absent). The traditional dependence on frequency has been omitted from our notation for clarity. The CF for a given synthesized spectral line indicates where the line is formed, and thus it provides a theoretical tool for recovering the depth dependence of thermodynamic variables in stellar atmospheres.
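For reference, the formula translates directly into code; the sketch below evaluates the CF per depth point at a single frequency, where all arguments except $`\mu `$ are hypothetical model-atmosphere arrays on a common depth grid.

```python
import numpy as np

def contribution_function(mu, tau0, tau_R, kappa_l, kappa_0, S_l, I_c):
    """NLTE contribution function to the relative line depression:
    CF = (ln10/mu) * tau0 * (kappa_l/kappa_0) * (1 - S_l/I_c) * exp(-tau_R/mu).
    mu is a scalar (cosine of the incidence angle); the remaining
    arguments are 1D arrays over depth in the atmosphere."""
    return (np.log(10.0) / mu) * tau0 * (kappa_l / kappa_0) \
        * (1.0 - S_l / I_c) * np.exp(-tau_R / mu)
```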
The CFs of the two lines presented in Figure 1 are shown in Figure 2. Note how two spectral lines from the same element and spectral series can exhibit differences in their CFs, in the CF’s location against the local continuum, and hence in their center-to-limb variation. For example, the change in the strength of the H$`\beta `$ line undergoes a reversal close to the limb of the star. The line gradually weakens away from the disk center until it reaches a minimum, then it becomes slightly more prominent again near the limb. In the case of the H$`\alpha `$ line this effect is much weaker and occurs closer to the limb. This difference between the two lines is due to the stronger dependence of H$`\beta `$ on the TiO opacity, which is dominant at its wavelength.
## 3 Effects of Microlensing on the Source Spectrum
### 3.1 Microlensing of an Extended Source
Next we study the effect of microlensing by a point-mass on the spectrum of the source star. For simplicity we describe the lensing configuration in terms of angular distances in the plane of the sky, using the angular source radius as a length unit.
In the absence of a lens, the observed flux from an unresolved source star at wavelength $`\lambda `$ is obtained by integrating the surface brightness distribution $`B(\lambda ,\stackrel{}{r})`$ over the projected surface of the star $`\mathrm{\Sigma }`$,
$$F_0(\lambda )=\int _\mathrm{\Sigma }B(\lambda ,\stackrel{}{r})d\mathrm{\Sigma }.$$
(1)
Any information about the surface structure is therefore concealed by the integral. This information can be uncovered by microlensing, as its signature contains differential information on the brightness distribution. The flux from a simple point-source separated from a point-mass lens by a projected distance $`\sigma `$ in the plane of the sky gets amplified by a factor
$$A_0(\sigma )=\frac{\sigma ^2+2ϵ^2}{\sigma \sqrt{\sigma ^2+4ϵ^2}},$$
(2)
where $`ϵ`$ is the angular Einstein radius of the lens (Paczyński 1996). The lensed flux from an extended source can then be computed as
$$F(\lambda ,\sigma _0)=\int _\mathrm{\Sigma }B(\lambda ,\stackrel{}{r})A_0(\sigma )d\mathrm{\Sigma },$$
(3)
where $`\sigma `$ is the distance from the lens to point $`\stackrel{}{r}`$ on the source, and $`\sigma _0`$ is the distance to the source center. In our units, $`\sigma _0=1`$ corresponds to having the lens projected at the source limb. The conversion between the $`\sigma `$ distances and the source-centered position vector $`\stackrel{}{r}=(\rho \mathrm{cos}\psi ,\rho \mathrm{sin}\psi )`$ is
$$\sigma =\sqrt{\sigma _0^2-2\sigma _0\rho \mathrm{cos}\psi +\rho ^2}.$$
(4)
The coordinates are oriented so that the lens lies in the direction $`\psi =0`$. In this notation $`\rho =\sqrt{1-\mu ^2}`$, where $`\mu `$ is the cosine of the incidence angle used in §2. Generally the flux in equation (3) depends also on the angle between the lens and a fixed direction on the source (see Paper I). However, in this work we consider for simplicity a circularly symmetric source with $`B(\lambda ,\stackrel{}{r})=B(\lambda ,\rho )`$. Angular surface irregularities such as spots are discussed in detail in a separate paper (Heyrovský & Sasselov 2000a). Equation (3) indicates that as the lens moves, it gradually scans the source, giving highest weight to the region of the star closest to the projected lens position. The total resulting amplification of the source star is
$$A(\lambda ,\sigma _0)=\frac{F(\lambda ,\sigma _0)}{F_0(\lambda )}.$$
(5)
The center-to-limb surface brightness profile ($`B`$ as a function of $`\rho `$) at different wavelengths has not only a different amplitude, but also a different shape. As a consequence, the wavelength- and position-dependence of the brightness distribution cannot be separated, $`B(\lambda ,\rho )\ne f_1(\lambda )f_2(\rho )`$, and equation (5) truly contains a chromatic dependence of the amplification. This fact does not contradict the achromaticity of light deflection predicted by the general theory of relativity, it merely reflects the different appearance of the source star at different wavelengths.
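The geometric building blocks of equations (2) and (4) translate directly into code; a minimal sketch follows (all lengths in units of the source radius, as in the text).

```python
import numpy as np

def point_amplification(sigma, eps):
    """Point-source amplification A0(sigma) of equation (2);
    eps is the angular Einstein radius of the lens."""
    return (sigma**2 + 2*eps**2) / (sigma * np.sqrt(sigma**2 + 4*eps**2))

def lens_distance(sigma0, rho, psi):
    """Lens-to-point distance of equation (4) for a point (rho, psi)
    on the source disk, with the lens in the direction psi = 0."""
    return np.sqrt(sigma0**2 - 2*sigma0*rho*np.cos(psi) + rho**2)
```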
For most microlensing events the point-source limit in equation (2) gives a satisfactory description of the light curve. Extended source effects become important only in events with a sufficiently low impact parameter (defined as the projected distance of closest approach of the lens and source on the sky). Expanding the amplification (5) in powers of the inverse of the lens-source separation $`\sigma _0^{-1}`$ irrespective of the Einstein radius of the lens (keeping the ratio $`ϵ/\sigma _0`$ fixed) we obtain
$$A(\lambda ,\sigma _0)=\frac{\sigma _0^2+2ϵ^2}{\sigma _0\sqrt{\sigma _0^2+4ϵ^2}}\left[1+\frac{8ϵ^4(\sigma _0^2+ϵ^2)}{\sigma _0^2(\sigma _0^2+2ϵ^2)(\sigma _0^2+4ϵ^2)^2}\frac{\int _0^1B(\lambda ,\rho )\rho ^3d\rho }{\int _0^1B(\lambda ,\rho )\rho d\rho }+o(\sigma _0^{-2})\right].$$
(6)
Strictly speaking, this $`\sigma _0\gg 1`$ expansion is convergent for $`\sigma _0>1+\sqrt{2}`$. The first extended source correction term in the brackets is a product of two factors. The first one depends purely on the lens parameters ($`ϵ,\sigma _0`$), while the second one depends only on the source properties ($`B`$). The lens factor is a monotonically decreasing function of the lens distance, dropping from a value of $`\sigma _0^{-2}/4`$ when the source lies well within the Einstein ring ($`\sigma _0\ll ϵ`$) to a value of $`8ϵ^4\sigma _0^{-6}`$ when the source is well beyond the Einstein ring ($`\sigma _0\gg ϵ`$). The magnitude of the chromatic source factor is less than one, typically on the order of $`0.4`$. Using this value we find that any lens with $`ϵ\gtrsim 4`$ will produce a 1% effect at $`\sigma _0=3`$. Therefore, in order to achieve a larger extended source effect, even a lens with high $`ϵ`$ has to approach the source closer than 3 source radii, beyond the region where the above expansion is valid<sup>1</sup><sup>1</sup>1 Lenses with $`ϵ\lesssim 0.4`$ are too weak to trigger a microlensing alert in the current surveys. Lenses with $`0.4<ϵ<4`$ require an approach closer than 3 source radii for a 1% effect.. An observed example of such an event is M95-30 (Alcock et al. 1997d), with a fitted value of the impact parameter $`p=0.715\pm 0.003`$ and $`ϵ=13.23\pm 0.02`$.
Note that we estimate the extended source effect in comparison to a point-source light curve with the same event parameters. Studying detectability requires comparison with a best-fit point-source light curve, which further lowers the upper limit for the closest approach. Gould & Welch (1996) estimate that using color observations the finite-source effect is detectable out to 2 source radii.
We study the case of close approaches or source transits directly from equations (3) and (5). For evaluation of the involved surface integral we follow the method described in Paper I. The $`\sigma ^{-1}`$ divergence of the amplification factor $`A_0(\sigma )`$ at the point directly behind the lens can be avoided by integrating in lens-centered polar coordinates. The product of $`A_0(\sigma )`$ with the area element $`d\mathrm{\Sigma }=\sigma d\sigma d\varphi `$ then leads to a finite integrand. For $`B(\lambda ,\rho )`$ we use brightness profile data generated for each wavelength directly from model atmosphere calculations (see §2). We interpolate the computed center-to-limb data points by simple quadratic segments $`a+b\rho ^2`$. The computational advantage of this approach is that the integral can then be analytically reduced to one dimension (for details see Paper I). Additionally, as the lensed flux is linear in the interpolation coefficients, it is sufficient to compute the light curve for a single wavelength and then by substituting coefficients obtain the other light curves for the same event. With this technique it is feasible to perform the computation for a large number of wavelengths and thus study spectral changes associated with microlensing.
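To make the procedure concrete, the sketch below evaluates equations (3)–(5) by brute-force quadrature in lens-centered polar coordinates for a single quadratic profile $`B(\rho )=a+b\rho ^2`$; it is only an illustration, not the analytic one-dimensional reduction of Paper I used for the actual computations.

```python
import numpy as np

def amplification(sigma0, eps, a, b, n_s=500, n_phi=500):
    """Extended-source amplification, Eq. (5), for B(rho) = a + b*rho**2.

    Midpoint quadrature in lens-centered polar coordinates (s, phi):
    the integrand A0(s)*s = (s**2 + 2*eps**2)/sqrt(s**2 + 4*eps**2)
    stays finite at s = 0, so transits pose no numerical problem.
    All lengths are in units of the source radius."""
    phi = (np.arange(n_phi) + 0.5) * (2.0*np.pi / n_phi)
    disc = 1.0 - (sigma0*np.sin(phi))**2     # the disk is hit where disc > 0
    F = 0.0
    for p, d in zip(phi, disc):
        if d <= 0.0:
            continue
        s_lo = max(sigma0*np.cos(p) - np.sqrt(d), 0.0)  # chord endpoints
        s_hi = sigma0*np.cos(p) + np.sqrt(d)
        if s_hi <= s_lo:
            continue
        s = s_lo + (np.arange(n_s) + 0.5) * (s_hi - s_lo) / n_s
        rho2 = s**2 + sigma0**2 - 2.0*s*sigma0*np.cos(p)  # Eq. (4) squared
        A0_s = (s**2 + 2*eps**2) / np.sqrt(s**2 + 4*eps**2)  # A0(s)*s
        F += np.sum((a + b*rho2) * A0_s) * (s_hi - s_lo) / n_s
    F *= 2.0*np.pi / n_phi
    F0 = np.pi * (a + b/2.0)   # unlensed flux of the quadratic profile
    return F / F0
```

Since the flux is linear in $`(a,b)`$, even this sketch only needs two evaluations per lens position (for $`B=1`$ and $`B=\rho ^2`$), which can then be recombined for every wavelength.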
### 3.2 Light Curve Shapes and Low Resolution Spectra
The effect of the brightness profile shape on the light curve is demonstrated in Figure 3. For both selected wavelengths the profile is normalized to unit total flux, in order to match the form in which it appears in the amplification formula (5). The light curves plotted in the lower panel have a zero impact parameter, thus the time $`t`$ measured from closest approach in units of source radius crossing time is equal to the lens distance ($`|t|=\sigma _0`$). For illustration we use the M95-30 lens with $`ϵ=13.23`$. From the figure it can be seen that the light curve traces the brightness profile shape - the flatter profile produces a flatter light curve. In addition, the light curve with lower amplification at the source center typically crosses the other at $`\sigma _0\approx 0.7`$ and has a higher amplification at the limb of the source. We note that flatter profiles usually indicate formation higher in the stellar atmosphere.
Figure 4 illustrates the wavelength dependence of the amplification on a low resolution (R=500) spectrum from 401 to 775 nm. These results are obtained using the same model atmosphere as in Figure 3, with T=3750 K and $`\mathrm{log}g=0.5`$, lensed by an $`ϵ=13.23`$ lens. The model spectrum in the absence of the lens is shown in the upper panel. The lower three amplification plots correspond to lens positions $`\sigma _0=0`$ (source center), 0.72 (closest approach of M95-30) and 0.96 (close to the limb). Comparison of the plots at the center and the limb further illustrates the relative change in amplification at different wavelengths seen in Figure 3: the amplification curve at the limb is virtually a vertical flip of the curve at the center. The overall slope of the curves indicates that for $`\sigma _0=0`$ the spectrum will appear bluer and for $`\sigma _0=0.96`$ redder than in the absence of a lens (see also Gould & Welch 1996; Valls-Gabaud 1998).
The finer structure of the curves provides further interesting results. Sections of the curves with significant spectral variations of amplification indicate spectral regions particularly sensitive to microlensing. Observations of such regions during a transit event can then be used to test the depth dependence of the atmosphere model. While the resolution of this particular spectrum is too low to resolve individual spectral lines, broader molecular bands are clearly visible. An example of a sensitive feature is the TiO band system at 710 nm (A–X, $`\gamma `$-bands), as hinted by the bump in the amplification curve. Recent models by Jacob et al. (1999) also point to the extreme limb-darkening effects in this band system. It should be noted that not all bands are affected in the same manner: the C–X TiO system at 516 nm ($`\alpha `$-bands) shows an opposite variation in amplification. The nature of the change in these bands is discussed in more detail further below.
The three amplification curves are each plotted for convenience with a different vertical scale. For example, the relative amplification variation at $`\sigma _0=0.72`$ is much smaller than at either the center or the limb. A simple way to quantify the degree of this variation is to take the maximum, minimum and average amplification over the full range of wavelengths and plot the ratio $`(A_{max}-A_{min})/A_{aver}`$ as a function of lens position $`\sigma _0`$. This ratio in fact serves as a measure of chromaticity - it drops to zero in the achromatic point-source limit. Results for three different atmosphere models (wavelength range 401–775 nm) are presented in Figure 5. While there is a difference in overall amplitude, the general character of all three curves is similar. As the lens approaches the source, the degree of chromaticity increases, reaching a local peak of $`\sim `$3–5% just after crossing the limb. If the lens passes closer to the center, the chromaticity drops to $`<`$ 0.5% at $`\sigma _0\approx 0.7`$ (approximately the M95-30 impact parameter, by coincidence). This indicates that the light curves for all the studied wavelengths intersect approximately at the same point, as seen in Figure 3. A yet closer approach causes the chromaticity to increase to a peak value of $`\sim `$8–11% when the lens is positioned at the source center. To summarize, the strongest spectral effects can be expected when the lens is aligned with the source center, with weaker effects near the limb. In between these two positions there is a drop in chromaticity nearly to zero (achromaticity). Although the limb effects are weaker, in principle they should be observable in any transit event.
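With an amplification routine like the sketch at the end of §3.1, this measure is immediate to compute; the per-wavelength profile coefficients below are hypothetical inputs.

```python
def chromaticity(sigma0, eps, coeffs):
    """(A_max - A_min)/A_aver across wavelengths, for a list of quadratic
    profile coefficients coeffs = [(a_1, b_1), (a_2, b_2), ...], one pair
    per wavelength; uses amplification() from the sketch above."""
    A = [amplification(sigma0, eps, a, b) for a, b in coeffs]
    return (max(A) - min(A)) / (sum(A) / len(A))
```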
The results in Figure 5 were computed using the M95-30 lens with $`ϵ=13.23`$. The dependence of the results on the Einstein radius is explored in Figure 6 for the model atmosphere with T=3750 K and $`\mathrm{log}g=0.5`$. While the chromaticity predictably increases with $`ϵ`$, its $`\sigma _0`$-dependence rapidly reaches a limit curve. In other words, the degree of lensing chromaticity cannot increase beyond this curve for a given atmosphere model and fixed spectral resolution. This behavior can be understood by studying the form of equation (5) during a close approach (as compared to the Einstein radius). In the limit<sup>2</sup><sup>2</sup>2 Convergence requires also $`ϵ>(\sigma _0+1)/2`$. $`ϵ\gg \sigma _0`$ the amplification
$$A(\lambda ,\sigma _0)=ϵ\frac{\int _\mathrm{\Sigma }\sigma ^{-1}B(\lambda ,\rho )d\mathrm{\Sigma }}{\int _\mathrm{\Sigma }B(\lambda ,\rho )d\mathrm{\Sigma }}+O\left(\frac{\sigma _0}{ϵ}\right)$$
(7)
is directly proportional to the Einstein radius $`ϵ`$. The degree of chromaticity depends on amplification ratios and is therefore independent of $`ϵ`$ in this limit, as seen in Figure 6. We conclude that as far as spectral effects are concerned, the “only” advantage of microlensing by stronger lenses ($`ϵ>5`$) is an overall increase in flux.
Figure 4 illustrates the capability of microlensing to resolve the depth structure of a given stellar atmosphere. Can the microlensing signature also be used to distinguish between different atmospheres? Figure 7 contains the results of Figure 4 together with those computed for the two other atmosphere models used in Figure 5; a cooler one (T=3500 K, $`\mathrm{log}g`$=1) and a hotter one (T=4000 K, $`\mathrm{log}g`$=0.5). As expected, the results indeed vary substantially between the different models. In the 3500 K atmosphere, for example, the TiO $`\alpha `$-bands at 516 nm show a stronger amplification variation, and more importantly, the TiO $`\gamma `$-bands at 710 nm show the opposite variation than in the 3750 K case. The amplification variation in the 4000 K model is much weaker overall, because the TiO bands are less prominent at this higher temperature. The only prominent amplification feature at 420 nm is in a spectral region with very low flux, hence it would be difficult to detect any spectral changes there.
### 3.3 High Resolution Spectra and Line Profiles
The results presented in Figures 4, 5, 6 and 7 illustrate the overall microlensing effects on low resolution spectra. However, studying the behavior of finer features, such as individual molecular bands or spectral lines, can provide stronger constraints on model atmospheres. First we return to the TiO bands mentioned above. In this particular case, even the low resolution results illustrated in Figure 4 indicate the different behavior of the two band systems. When the lens is at $`\sigma _0=0`$, the $`\gamma `$-band system at 710 nm has a higher amplification in its region of highest absorption. As a result, the band system will appear weaker (shallower) than in the absence of the lens. With the lens at the limb, the highest absorption region has a lower amplification than the adjacent spectral region. The band system will therefore be more prominent than if the lens were absent. In contrast, the $`\alpha `$-band system at 516 nm shows exactly the opposite behavior. It will be more prominent with the lens at the source center, and less prominent when the lens is at the limb. The different behavior of the $`\gamma `$-band system is due to the extension of its region of formation higher in the atmosphere.
These effects are verified by the results of high spectral resolution (R=500,000) calculations shown in Figure 8. The first feature to notice in these plots, apart from the confirmed band system structure, is the high “fuzziness” of the amplification curves. This property indicates that the amplification varies highly within individual spectral lines. In fact the less “fuzzy” 710 nm region also has a lower density of spectral lines. Studying the amplification variation of both regions in Figure 9, it comes as no surprise that while the shape of the dependence is similar, the degree of chromaticity is higher than in the full extent of the low resolution spectrum. This higher variation is due to the fine spectral structure, which is averaged out in the low resolution results. The degree of chromaticity for the 516 nm and 710 nm regions in high resolution again peaks at the source center (20% and 12%, respectively) and has a secondary peak at the limb (9.5% and 5.5%, respectively). For comparison, the center and limb values for the low resolution spectrum are 9% and 4%, respectively. Note that the high values for the 516 nm region are partly caused by the single atomic spectral line at 526.95 nm <sup>3</sup><sup>3</sup>3 Unfortunately this line is highly saturated at its core, where the amplification undergoes the high variation - the effect will therefore be difficult to detect because of the low photon flux at the line center..
A closer look at the fine structure in both sets of high resolution results (apart from the large-scale TiO band effect) suggests a significant correlation between the unlensed spectrum and the amplification curve for $`\sigma _0=0`$. A blow-up of a 0.4 nm interval in Figure 10 confirms the correlation - most individual lines have their cores amplified less than their wings at $`\sigma _0=0`$, while at $`\sigma _0=0.96`$ the cores are amplified more than the wings. Hence most lines become more prominent with the lens at the source center, and less prominent with the lens close to the limb than in the absence of the lens ($`\sigma _0\mathrm{}`$). There are fewer examples of lines with the opposite behavior (e.g. at 739.044 nm in Figure 10), with virtually no change (flat amplification curve) or with more complex behavior.
The characteristic correlation can be explained for typical absorption lines in the following way. Such lines form at a certain depth in the atmosphere, and gradually become less prominent away from the center towards the limb, as there is relatively less absorption at the limb. The center-to-limb variation curves at the cores of these lines are therefore flatter than in the adjacent continuum. The correlation is now directly explained by the discussion of light curve shapes above - when the lens is at the source center, the line cores are amplified less than the continuum, while with the lens at the limb the cores are amplified more than the continuum.
The less common anti-correlation (or lack of correlation) of the unlensed spectrum and the amplification at source center can be expected for example in some emission features, high excitation lines, or in spectral lines affected by more complex physical effects, such as resonant scattering (Loeb & Sasselov 1995), overlapping lines, lines within molecular bands, etc.
A straightforward method for studying the change of a particular line is to compute its equivalent width, which is generally given by
$$W(\sigma _0)=\mathrm{\Delta }\left[1-\frac{\int F(\lambda ,\sigma _0)d\lambda }{\int F_c(\lambda ,\sigma _0)d\lambda }\right],$$
(8)
where $`F_c(\lambda ,\sigma _0)`$ is the continuum flux and the integrals are performed over a wavelength interval of width $`\mathrm{\Delta }`$ around the line. As an example we present here results for the hydrogen Balmer lines H$`\alpha `$ and H$`\beta `$ in the same T=3750 K, $`\mathrm{log}g`$=0.5 model. The center-to-limb variation of their unlensed profiles is shown in Figure 1. The dependence of their equivalent widths on lens position is plotted in Figure 11 for a set of Einstein radii $`ϵ`$. First we note that the change with $`ϵ`$ has the same character as in Figure 6. Any sufficiently strong lens ($`ϵ>5`$) will therefore affect the equivalent width in the same way. This result can again be traced to the fact that the equivalent width in equation (8) depends on a ratio of amplifications. This ratio is independent of $`ϵ`$ in the $`ϵ\gg \sigma _0`$ limit, as seen from equation (7). During a microlensing event both Balmer lines behave in the typical manner described above. An approach of a sufficiently strong lens first causes the width of both lines to drop (by $`\sim `$7%), as it gives higher weight to the limb where the lines are weak. Similarly, a lens positioned at the center, where both lines are strong, would increase their width ($`\sim `$19%) above the asymptotic value. The H$`\alpha `$ and H$`\beta `$ curves are similar, although those of H$`\beta `$ are more centrally peaked and have a broader minimum near the limb.
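Equation (8) is likewise direct to evaluate numerically; a sketch with hypothetical arrays `wl` (wavelength), `F` (lensed or unlensed flux) and `F_c` (continuum flux) sampled across the line window:

```python
import numpy as np

def equivalent_width(wl, F, F_c):
    """Equivalent width of Eq. (8) over the window [wl[0], wl[-1]],
    using trapezoidal integration of the flux and continuum samples."""
    delta = wl[-1] - wl[0]
    return delta * (1.0 - np.trapz(F, wl) / np.trapz(F_c, wl))
```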
We compared our results to those of Valls-Gabaud (1998) for H$`\beta `$ with the same event parameters<sup>4</sup><sup>4</sup>4 We neglect here the earlier results in Valls-Gabaud (1996), which are inconsistent with Valls-Gabaud (1998).. The general character of the change with lens position is the same - decrease of the equivalent width (EW) at the limb and a higher increase closer to the source center. However, Valls-Gabaud’s results show a two to three times lower relative change of the EW, the actual factor depending on the lens position. As a result, the predicted ratio of EW increase (close to the center) to peak EW decrease (at the limb) is also lower than our computations show. Furthermore, the achromaticity point appears to be closer to the limb (around $`\sigma _0=0.8`$) than in our analysis. As mentioned in §2, assumptions such as linear limb darkening degrade Valls-Gabaud’s models too much to give realistic predictions of equivalent widths, particularly in the case of red giants. This comparison further illustrates the sensitivity of microlensing to the atmosphere structure of the resolved source.
A detailed study of high resolution spectral changes in different model atmospheres and comparison with the M95-30 observational results will be presented elsewhere (Heyrovský & Sasselov 2000b).
## 4 Conclusions
Microlensing events in which a point-mass lens with Einstein radius $`ϵ>4`$ (in source radius units) approaches the source star within three source radii already exhibit $`\sim `$1% deviations from point-source light curves. The deviations further increase for closer approaches. These finite source effects depend on the surface brightness distribution of the source star. Observations of such effects, particularly during source transit events, can therefore be used to resolve and study the structure of the stellar disk. As the brightness distribution depends on wavelength, the usual assumption of microlensing achromaticity breaks down in these cases. Spectral observations can then be used to probe the depth structure of the atmosphere of the source star. The most promising sources for such studies are red giants in the Galactic bulge, among other reasons due to their large size, intrinsic brightness and fairly high abundance. These circumstances are in fact particularly fortuitous, as the structure of cool red giant atmospheres is currently very poorly observationally constrained.
The degree of microlensing chromaticity can be measured by the relative spectral variation of the microlensing amplification. Thus defined, chromaticity as a function of lens position is found to have two peaks: a primary peak at the source center and a secondary peak close to the source limb, with a dip to achromaticity in between. In the usually observed type of events, in which the Einstein radius $`ϵ>5`$, chromaticity is independent of $`ϵ`$. Its peak values for a low resolution optical spectrum are typically 10% at the center and 4% at the limb (both values increase with higher spectral resolution). Any such transit event should therefore exhibit at least the 4% effect, which should be readily observable under favorable conditions. In future potential source-crossing events it is therefore important not to miss the time when the lens is close to the limb, as was unfortunately the case in M95-30.
During a red giant transit event the overall spectrum appears redder when the lens is at the source limb, and bluer if the lens comes close to the source center. Individual spectral lines respond to microlensing in different ways, because of their different center-to-limb variation, which reflects their different depth of formation in the star’s atmosphere. Most absorption lines will typically turn weaker (lower equivalent width) when the lens is close to the limb and more prominent when the lens is near the center. Some emission lines, high excitation lines, overlapping lines, lines within molecular bands and lines affected by more complex physical effects can exhibit opposite or generally different behavior. As a result, broad molecular bands visible even on a low resolution spectrum of a given source may behave in different ways. The pattern of their changes depends on physical parameters, composition and structure of the atmosphere. Observations of these bands can therefore be used to constrain the global parameters, and to check theoretical assumptions used for constructing model stellar atmospheres. High resolution spectra resolving individual lines would, however, also enable studies of the depth structure of the star’s atmosphere in detail. Particularly in the case of red giants, gravitational microlensing therefore provides a unique new tool for stellar physics.
During M95-30, the only well-documented point-mass microlensing transit event up to now, eleven low resolution and three high resolution spectra were taken in addition to photometry in two colors (Alcock et al. 1997d). In a forthcoming paper (Heyrovský & Sasselov 2000b), we will use the methods and findings described here together with the observational data to study the lensed giant and construct an adequate model of its atmosphere.
We thank R. Kurucz for his kind help with opacities and molecular data. We also thank an anonymous referee for comments that helped improve the manuscript. DS is grateful to I. Shapiro for his support during part of this study.
# Quantum Key Distribution Using Three Basis States
## 1 Introduction
Since the act of measurement reduces the state function of a quantum object, one cannot determine what state characterized the system prior to the measurement. This property is at the heart of the method of quantum key distribution which, under certain assumptions, provides an absolutely secure method of sharing a secret subject to public exchange of side information. As an example of quantum communication, the problem of key distribution forms a part of the general subject of quantum information science that has relevance for our understanding of the foundations of quantum and information theories.
In cryptography, two users, by convention called Alice (A) and Bob (B), wish to exchange information which must remain inaccessible to Eve (the eavesdropper). The quantum key distribution method of Bennett and Brassard (1984) (BB84), presented first at a conference in Bangalore sixteen years ago, uses 4 different polarizations of the photons and a pair of basis states in the detector for this information exchange. The BB84 protocol is symmetric in its use of the polarizations. After the key has been obtained, this protocol requires the exchange of further information about parity of randomly chosen subsets of the key.
Can we devise a scheme, somewhat in the spirit of joint encryption and error-correction coding (Kak 1985), that will not only distribute the key but also provide additional information about the integrity of the distribution process? In particular, we would like to have a method where the additional exchange of information, that is required after the key has been distributed, is unnecessary.
Given the symmetry of the BB84 method, one would expect that it could not be improved upon. But symmetry-breaking can provide advantage for part of the range of the operation of the system, albeit at the cost of some leakage of information. In this note we consider a variant protocol that breaks the symmetry of the BB84 method. In our method, three polarization states of the photons and three basis states of the detector are used. We show that doing so provides advantage over the BB84 method for certain sizes of the key under certain, not unreasonable, assumptions about the attacks mounted on it.
## 2 The BB84 protocol
In the BB84 method Alice (A) chooses photons (or other particles) prepared in four polarization states of $`0,45,90`$ and $`135`$ degrees and sends $`n`$ of them in random order to Bob (B), who measures each photon using detectors matched in a random sequence to two pairs of orthogonal bases: $`(0,90)`$ and $`(45,135)`$ degrees. Note that if the directions and the polarizations of $`0,45,90`$ and $`135`$ degrees are represented by $`0,k,1`$ and $`l`$, respectively, it is sufficient to have the directions $`0`$ and $`k`$ as the bases in the detector. If it is assumed that the photons are sent according to a clock, when the detector outputs nothing ($`e`$), it is clear that the input was a photon in a state orthogonal to the detector setting.
Bob now tells Alice the sequence of the bases he used for his detection. Alice informs him which detector bases were correctly aligned. Alice and Bob keep only the data from these correctly measured (or inferred) photons, discarding the rest. Now Alice and Bob test their key by publicly choosing a random subset of bit positions and verifying that this subset has the same parity (defined as odd) in their respective versions of the key. If their keys had differed in one or more bit positions, this test would have discovered that fact with probability of $`\frac{1}{2}`$. One bit is now discarded, to compensate for the information leaked when its parity was revealed. This step is repeated $`m`$ times, leading to a certification with probability of $`1-2^{-m}`$ that their mutual keys are identical, at the cost of reducing the key by $`m`$ bits. Since the average number of correct alignments between the input bases and the detection pairs is about half of the total number of photons sent, the expected size of the key, certified with the probability $`1-2^{-m}`$, is $`\frac{n}{2}-m`$.
This protocol assumes that Eve, the eavesdropper, cannot make multiple copies of the transmitted photons, perform various tests on them, and estimate their original polarizations using the public channel information from Alice. Should Eve intercept the photons directly, make measurements and transmit the measured photons to Bob, it would change the polarizations and this would be detected by Alice and Bob in their exchanges of the parity of the random subsets of the exchanged key.
It is interesting that there is a certain complementarity in the workings of classical and quantum communication. The state of a classical object can be fully determined and duplicated leading to ease of storage and transmission, and difficulty of secrecy-coding. On the other hand, the state of the quantum object is not fully known, so it is hard to duplicate it and hard to store and transmit it, while secrecy-coding is easy. The information associated with classical and quantum processing is also different (Kak 1998).
## 3 Key distribution with three states
In our asymmetric method, photons are prepared by Alice in the polarization states of $`0,45`$ and $`90`$ degrees only. At the receiving end, Bob uses filters before his detector that are matched to the same polarizations. We have introduced asymmetry at two places: by cutting down on the number of photon polarizations from 4 to 3, and by increasing the number of detector states from 2 (which is equivalent to 4) to 3.
Table 1 summarizes the 9 different possibilities between the photon and the detector states for the data from Alice, Bob’s detector settings, and what Bob actually receives.
Table 1: Photon and detector states
| Alice’s data | 0 | 0 | 0 | k | k | k | 1 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bob’s filter | 0 | k | 1 | 0 | k | 1 | 0 | k | 1 |
| Bob receives | 0 | k/e | e | 0/e | k | 1/e | e | k/e | 1 |
Bob sends information about his filter settings to Alice on a public channel, and Alice uses the same public channel to tell Bob which settings were correct. The latter information makes it possible for Bob to infer which $`e`$ outputs should be replaced by either a $`0`$ or $`1`$. For example, if Bob is told by Alice that his setting “1” is correct in the 3rd column of the Table, he would know that he should replace his estimate of the transmitted polarization from “nothing” ($`e`$) to $`0`$.
As we can see, out of these 9 cases Bob is able to correctly receive or infer 5 cases. Now Eve, the eavesdropper, being unable to intercept or duplicate the photons being sent by Alice to Bob, can only work on the information being exchanged on the public channel. The only case where she would know the correct output is when Bob’s filter setting is k and it is declared correct by Alice. This means that she knows one of the 5 correct cases of Bob. So while the correct recognition rate for Bob is $`\frac{5}{9}`$, bits at a rate of only $`\frac{4}{9}`$ have guaranteed security. Just these 4 bits out of the 9 listed in the Table will actually be used for creating the key, the fifth bit (the setting of k) helps in authenticating the integrity of the process. The latter bits, at the rate of $`\frac{1}{9}`$, provide confirmation that there has been no tampering with the transmitted photons.
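As a quick consistency check, these rates can be reproduced by direct simulation. The sketch below (a Python illustration of ours, not part of the original protocol description) encodes Table 1 and counts the inferable and Eve-proof cases:

```python
import random

STATES = ["0", "k", "1"]

def bob_output(sent, filt):
    """Return what Bob's detector registers, following Table 1."""
    if sent == filt:
        return sent                      # aligned filter passes the photon
    if (sent, filt) in [("0", "1"), ("1", "0")]:
        return "e"                       # orthogonal: nothing detected
    # 45-degree mismatch: photon passes or is absorbed with probability 1/2
    return random.choice([filt, "e"])

n = 90_000
correct = secure = 0
for _ in range(n):
    sent = random.choice(STATES)
    filt = random.choice(STATES)
    out = bob_output(sent, filt)
    # Alice publicly confirms the usable settings; an "e" at a confirmed
    # orthogonal setting also pins down the input for Bob.
    inferred = (filt == sent) or (out == "e" and {sent, filt} == {"0", "1"})
    if inferred:
        correct += 1
        if not (sent == "k" and filt == "k"):
            secure += 1                  # the k/k case is readable by Eve
print(correct / n, secure / n)           # ~5/9 = 0.556 and ~4/9 = 0.444
```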
There is a $`1/3`$ probability that Eve would have used the correct filter placement at that spot if she had intercepted the photon sequence and replaced it with her own, so the transmission for every 9 bits can be certified to the probability $`1-1/3=2/3`$.
Quantum information is properly examined only in terms of the arrangement of the system and the experimenter. We assume here that Eve, like Bob, is using a receiver with settings of 3 basis states with the specified directional information. The certification probabilities quoted in this paper are based on this assumption. Unlike in the BB84 method, Eve can, by using filter settings stuck at $`0`$ or $`1`$, know the polarization of one-third of the transmitted photons. But that way, she could never hope to guess all the polarization choices made by Alice, and so this would not constitute a practical method of attack. To understand this clearly, assume that Eve’s filter settings are all at 0. She would receive a transmitted 0 as 0, a transmitted 1 as e, and a transmitted k as 0 or e. One half of her received bits are 0 and the other half are e, and she is unable to work backwards and guess any specific value from her data alone.
### A probabilistic analysis
Prior to being told the correct locations, the average mutual information, $`I(A;B)`$, between Alice and Bob is computed by
$$I(A;B)=H(A)+H(B)-H(A,B),$$
where $`H(A)`$ and $`H(B)`$ are the individual entropies of $`A`$ and $`B`$, and $`H(A,B)`$ is their joint entropy.
$`H(A)`$ equals $`\mathrm{log}3\approx 1.585`$ bits. $`H(B)`$ is computed by first finding the probabilities of the four received states, $`0,k,1,e`$, which are easily seen to be $`1/6,2/9,1/6,4/9`$, respectively. From this we find that
$$H(B)=\frac{15}{9}\mathrm{log}3-\frac{7}{9}\approx 1.864\text{ bits}.$$
Likewise, the joint entropy, $`H(A,B)`$, is easily computed from the table of probabilities.
$$H(A,B)=\frac{15}{9}\mathrm{log}3+\frac{5}{9}\approx 3.197\text{ bits}.$$
In other words, the amount of information leaking in this system is $`I(A;B)=\mathrm{log}3-4/3\approx 0.252`$ bits/photon, or about $`16\%`$. Equivalently, the actual uncertainty from the point of view of Bob or Eve is $`\mathrm{log}3-0.252=4/3`$ bits per photon.
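The entropy bookkeeping above is easy to verify numerically. The sketch below (ours, not part of the original analysis) rebuilds the joint distribution implied by Table 1 and recovers $`H(A)`$, $`H(B)`$, $`H(A,B)`$, and $`I(A;B)`$:

```python
from math import log2

def entropy(probs):
    return -sum(q * log2(q) for q in probs if q > 0)

# Joint probabilities P(Alice sends, Bob receives) implied by Table 1:
# states and filters are uniform and independent, and a 45-degree
# mismatch transmits or absorbs with probability 1/2.
p = {
    ("0", "0"): 1 / 9, ("k", "k"): 1 / 9, ("1", "1"): 1 / 9,  # aligned
    ("0", "k"): 1 / 18, ("k", "0"): 1 / 18,
    ("k", "1"): 1 / 18, ("1", "k"): 1 / 18,                   # half-passed
    ("k", "e"): 1 / 9,                                        # k absorbed
    ("0", "e"): 1 / 6, ("1", "e"): 1 / 6,                     # orthogonal + k filter
}
pB = {}
for (a, b), q in p.items():
    pB[b] = pB.get(b, 0.0) + q

HA, HB, HAB = log2(3), entropy(pB.values()), entropy(p.values())
print(HA, HB, HAB, HA + HB - HAB)   # 1.585, 1.864, 3.197, 0.252 bits
```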
Since the k-photons (which constitute 1 out of every 3 photons transmitted) do not ultimately contribute to the formation of the key, the information being sent out by Alice is at the rate of $`2/3\times 4/3=8/9`$ bits. With a $`50\%`$ uncertainty for each photon, only half of them will lead to the key sequence, so the information rate is now $`1/2\times 8/9=4/9`$ bits, which is precisely the value we argued for using a different reasoning. This analysis confirms the information exchange figures associated with our protocol.
## 4 When n photons are transmitted
When n photons are transmitted we will, on average, be able to obtain $`\frac{4n}{9}`$ bits for the key and an additional $`\frac{n}{9}`$ bits for authentication of the transmission process. In other words, the transmission would be certified to the probability $`1-3^{-n/9}`$.
In comparison, the 4-state BB84 protocol provides $`\frac{n}{2}-m`$ key bits and certification of $`1-2^{-m}`$. The key bits are the same number for the two cases when $`n/2-m=4n/9`$, i.e., for $`n=18m`$.
We obtain more key bits from the 3-state protocol compared to the BB84-protocol for $`n<18m`$. Thus for $`n=54`$ and $`m=6`$, the 3-state protocol gives 24 usable bits, while the BB84-protocol gives only 21 usable bits.
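These trade-offs can be tabulated directly; the snippet below (an illustrative check, not from the paper) evaluates both yields for a few $`(n,m)`$ pairs:

```python
def key_bits_3state(n):
    return 4 * n / 9          # usable key bits, 3-state protocol

def key_bits_bb84(n, m):
    return n / 2 - m          # usable key bits after m parity rounds

for n, m in [(54, 6), (108, 6), (180, 6)]:
    print(n, m, key_bits_3state(n), key_bits_bb84(n, m))
# 54:  24.0 vs 21.0  (3-state wins, since n < 18m)
# 108: 48.0 vs 48.0  (break-even at n = 18m)
# 180: 80.0 vs 84.0  (BB84 wins beyond)
```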
Furthermore, the certification probability is higher. The two certification probabilities will be the same if
$$1-3^{-n/9}=1-2^{-m}.$$
That is, the BB84 certification probability is higher for $`m>0.176n`$. But this is an impossible range of comparison because the effective value of $`m`$ in the 3-state protocol is only $`n/9`$.
We stress again that the certification probability comparisons are only notional, based on our assumption of a 3-state detection scheme used by Eve. Real comparison would require that the nature of the attack strategy be spelt out.
## 5 Concluding remarks
By weakening some assumptions regarding the nature of the quantum cryptographic system, we have devised a new 3-state protocol that has some attractive features and provides unexpectedly good results. It has the unique property that no further exchange of verification information is necessary after the initial steps of the protocol, unless one desires the certification probability to be greater than what is inherent in the system.
Quantum key distribution systems described here and elsewhere (Bennett 1992; Ekert 1991) are fundamentally dual in nature, inasmuch as part of the information must be sent over a classical channel, as happens in the post-photon-transmission communications between Alice and Bob. This aspect makes the communication system somewhat similar to those brain models that postulate an underlying quantum basis to cognitive processes (e.g. Hameroff 1998; Kak 1996). Is the “understanding” of the incoming sensory stimuli facilitated by the filtering information mapped into the neural structures of the brain during the process of evolution? And if the universe of the stimuli is itself structured, as is the case for the polarizations chosen by Alice, then is the subject led to an understanding of this structure by trying out various “stuck” settings of his receiver apparatus and exploiting the fact that biological signals are often repetitive?
## References
Bennett, C H and Brassard, G 1984 “Quantum cryptography: Public key distribution and coin tossing,” Proceedings of the IEEE Intl. Conf. on Computers, Systems, and Signal Processing, Bangalore, India (IEEE New York, 1984, pages 175-179).
Bennett, C H 1992 Phys. Rev. Lett. 68, 3121
Ekert, A K 1991 Phys. Rev. Lett. 67, 661
Hameroff, S 1998 Phil. Trans. R. Soc. Lond. A 356, 1869
Kak, S 1985 IEEE Trans. on Computers C-34, 803
Kak, S 1996 “The three languages of the brain: Quantum, reorganizational, and associative.” In Learning as Self-Organization, K. Pribram and J. King (eds.). (Lawrence Erlbaum Associates, Mahwah, 1996, pages 185-219).
Kak, S 1998 Foundations of Physics 28, 1005
|
no-problem/9902/astro-ph9902006.html
|
ar5iv
|
text
|
# GRANAT/SIGMA Observation of the Early Afterglow from GRB 920723 in Soft Gamma-Rays
## 1 Introduction
Fast and accurate localizations of gamma-ray bursts by BeppoSAX helped to establish the connection of GRB with the sources of decaying X-ray, optical, and radio emission (e.g. Costa et al. 1997, Van Paradijs et al. 1997, Frail et al. 1997). X-ray afterglows were found in 15 of 19 well-localized bursts; in most cases, the X-ray flux decayed as a power law of time, $`t^{-\beta }`$, with $`\beta `$ ranging from $`1.1`$ (GRB 970508, Piro et al. 1998) to $`1.57`$ (GRB 970402, Nicastro et al. 1998). The power law decay of flux also is observed in the optical (e.g. Wijers et al. 1997, Sokolov et al. 1998). This is a characteristic prediction of the relativistic fireball model of GRB (Mészáros, Rees 1993, Mészáros 1997, Waxman 1997, Sari et al. 1998). Indeed, the energy release in some GRB is enormous and sufficient to power the relativistic fireball (Kulkarni et al. 1998). The fireball observations immediately after the burst, when the temperature and density are at maximum, are of great interest. Unfortunately, it has been impossible to observe the afterglows in the radio, optical, or X-rays earlier than approximately 10 hours after the burst.
Some earlier observations indicated that afterglows could immediately follow some GRB. There were detections of X-ray emission lasting for tens of seconds after the main burst had finished in gamma-rays (Sunyaev et al. 1990, Murakami et al. 1991, Terekhov et al. 1993, Sazonov et al. 1998). The PVO observatory observed faint gamma-ray emission for $`1000`$ s after the long, $`200`$ s, burst GRB 840304 (Klebesadel 1992). The presence of slowly fading soft gamma-ray (100–500 keV) emission was found in about $`10\%`$ of bursts detected by GRANAT/PHEBUS (Tkachenko et al. 1995). Hard gamma-ray photons (0.2–10 GeV) were detected during $`1.5`$ hours after GRB 940217 by the EGRET telescope (Hurley et al. 1994).
We present here a detailed analysis of the GRB 920723 light curve, which reveals a soft gamma-ray afterglow with flux decaying as a power law $`t^{-0.7}`$ during at least 1000 s after the main burst.
## 2 Observations
SIGMA is a coded-mask telescope with a $`15^{\prime }`$ angular resolution operating in the 35–1300 keV energy band (Paul et al. 1991). Typically, SIGMA performs uninterrupted 20–30 h observations, during which the telescope pointing is maintained with a $`30^{\prime }`$ accuracy (but known to within $`15^{\prime \prime }`$). Although the telescope field of view is only $`11.4^{\circ }\times 10.5^{\circ }`$ (FWHM), some fraction of gamma-rays from sources closer than $`35^{\circ }`$ to the pointing direction reaches the detector through the gaps in the passive shield and produces arc-shaped images. This “secondary optics” (or “sidelobes”) was described in detail by Claret et al. (1994a) along with the appropriate analysis techniques.
GRB 920723 was observed by SIGMA through the secondary optics. A $`1^{\circ }`$ localization was obtained from this observation (Claret et al. 1994b). GRB 920723 is one of the brightest bursts observed by the GRANAT instruments and the brightest detected by SIGMA. The burst was triggered at $`20^\mathrm{h}03^\mathrm{m}08^\mathrm{s}.3`$ UT and lasted for about 6 s. The WATCH all-sky monitor provided a $`0.2^{\circ }`$ localization (Sazonov 1998) and observed the fading X-ray emission in the 8–20 keV band during more than $`40`$ s after the main burst (Terekhov et al. 1993). PHEBUS measured the peak burst flux $`5\times 10^{-5}`$ erg s<sup>-1</sup> cm<sup>-2</sup> and fluence $`1.4\times 10^{-4}`$ erg cm<sup>-2</sup> in the 100–500 keV energy band (Terekhov et al. 1995).
The SIGMA data allow measurement of the burst light curve with better than $`0.1`$ s time resolution (depending on flux) during the $`7.5`$ s after the trigger. In addition, the count rate in four wide energy bands (35–70, 70–150, 150–300, 300–600 keV) is recorded with 4 s time resolution over the entire observation. With these data, it is possible to study the burst emission long after the trigger. Below, we use only the first three energy channels, because the last one is plagued by low sensitivity. The peak burst count rate in the 35–300 keV band was 7900 cnt s<sup>-1</sup>, much higher than the average background rate of 310 cnt s<sup>-1</sup>.
During the observation of July 23, 1992, SIGMA was pointed at Her X-1. The pulsar was in eclipse and was not detected. The $`3\sigma `$ upper limit on its 35–70 keV flux, averaged over the observation, was $`0.25`$ cnt s<sup>-1</sup>. The Her X-1 spectrum is known to be very soft (the 20–100 keV photon index is $`4.4`$), and therefore its flux is negligible above 70 keV. The pulsar was in eclipse between 12000 s before the burst and 9000 s after the burst. Therefore, it could not cause any significant variability of the SIGMA count rate during the reported observation. No other known bright sources were visible through either the primary or secondary optics of the SIGMA telescope. GRANAT operates in a high-apogee orbit and, during the observation, was not influenced by the Earth radiation belts or other magnetospheric anomalies (such as the South Atlantic Anomaly). As a result, the SIGMA background usually does not show any significant variations on time scales shorter than $`10^3`$ s. Therefore, it can be accurately modeled by a low-degree polynomial.
## 3 Results
Usually, sources contribute only a small fraction of the total SIGMA count rate. Therefore, correct background subtraction is vital for the source variability studies. We modeled the background using Chebyshev polynomials. A complete description of the SIGMA background subtraction techniques is presented elsewhere (Burenin et al. 1999); this analysis has shown that the background variations around the subtracted value in excess of Poisson noise are smaller than 0.6 cnt s<sup>-1</sup> on the 300 s time scale (at the $`95\%`$ confidence level).
Figure 1 shows the burst light curve in the 35–300 keV band. There is a small peak in the burst light curve just after the trigger. Within the first second after the trigger, the burst flux rose rapidly. Over the next five seconds, it remained at approximately the same level, showing strong variability on all resolved time scales. At approximately 6 s after the trigger, the flux started to decline rapidly.
Figure 2 shows the burst light curve in logarithmic coordinates of both time and flux. The shape of the light curve in these coordinates strongly depends on the choice of the reference time. In Fig. 2, the reference time is chosen at the moment of the burst trigger. There appears to be a power-law decay of flux starting at 10–20 s after the trigger. This behavior is consistent with the GRB entering the stage of self-similar fireball expansion soon after the main burst. It should be emphasized that the self-similar behavior is expected only on time scales much larger than those of the main energy release. Therefore, we used the data in the $`20`$–$`1000`$ s time interval for the power-law modeling of the light curve.
The solid line in Fig. 2 shows the power law fit in the time interval $`20`$–$`1000`$ s; the dash-dotted line shows the exponential fit in the same interval. The reduced $`\chi ^2`$ is 1.5 and 6.5 (4 dof) for the power law and exponential models, respectively. The power law adequately describes the data and results in a better fit than the exponent. The best fit power law index is $`-0.69\pm 0.17`$ ($`\mathrm{\Delta }\chi ^2=2.7`$ for the index $`-1`$). This power law tail contains at least $`20\%`$ of the main burst fluence. Applying the same procedure to the three wide energy channels separately, we obtained the power law indices $`-1.29\pm 0.55`$, $`-0.64\pm 0.19`$, and $`-0.41\pm 0.6`$ in the 35–70, 70–150, and 150–300 keV energy bands, respectively. Interestingly, the extrapolation of the power law (shown by the dotted line in Fig. 3) points to the small peak near the beginning of the main burst (Fig. 1).
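A model comparison of this kind is easy to reproduce in outline. The sketch below uses synthetic data standing in for the SIGMA light curve (which is not tabulated here); the binning, flux errors, and normalization are assumptions chosen only to mirror the 20–1000 s fit with 4 degrees of freedom:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, A, idx):
    return A * t ** idx

def exponential(t, A, tau):
    return A * np.exp(-t / tau)

# Synthetic stand-in for the binned 20-1000 s fluxes (6 bins, 4 dof).
rng = np.random.default_rng(1)
t = np.geomspace(20.0, 1000.0, 6)                  # bin centers, s
sig = np.full_like(t, 0.3)                         # assumed errors, cnt/s
flux = 30.0 * t ** -0.7 + rng.normal(0.0, sig)     # decay index -0.7

for model, p0 in [(power_law, (30.0, -0.7)), (exponential, (10.0, 300.0))]:
    popt, _ = curve_fit(model, t, flux, p0=p0, sigma=sig)
    red_chi2 = np.sum(((flux - model(t, *popt)) / sig) ** 2) / (len(t) - 2)
    print(model.__name__, popt, red_chi2)   # power law should fit better
```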
The spectral evolution of the burst flux can be characterized by the ratio of the 8–20 keV flux measured by WATCH (Terekhov et al. 1993, Sazonov et al. 1998) and the SIGMA flux in the 75–200 keV band. Standard SIGMA calibration does not apply to the secondary optics flux, so we used the PHEBUS measurement of the main burst flux to find the conversion coefficient between SIGMA counts and flux. Figure 3 shows the observed ratio of the 8–20 and 75–200 keV fluxes expressed in terms of an equivalent spectral index in the 8–200 keV energy range (i.e., the spectral index $`\alpha `$ of an $`F_\nu \propto \nu ^{-\alpha }`$ spectrum with the same hardness ratio as observed). The time behavior of $`\alpha `$ indicates that the afterglow spectrum is significantly softer than that of the main burst. During the main burst (in the 0–6 s time interval), the usual hard-to-soft evolution of the GRB spectrum is observed (e.g. Ford et al. 1995). Near the start of the gradual decline of the burst flux, 6 s after the trigger (shown by the vertical dotted line in Figs. 1 and 3), $`\alpha `$ changes abruptly from $`0.3`$ to $`1`$.
It is interesting to examine the light curve with the reference time chosen at $`t=6`$ s after the trigger, because this moment can be singled out in both the flux and spectral history of the burst; the result is presented in Fig. 4. With this choice of zero time, the data in the 0.01–20 s time interval lie on the extrapolation of the power law fit from the 20–1000 s time interval. Adding these data to the fit results in the power law index $`-0.70\pm 0.03`$. Note that in this case, the power law flux decay is observed over approximately four orders of magnitude in time.
## 4 Discussion
We presented a high sensitivity observation of the GRB 920723 light curve in the soft gamma-ray band. The stable background of SIGMA allows detection of the burst emission at a level better than 1/1000 of the peak intensity. A similar analysis would be complicated with BATSE because the background is less stable and because the source is eclipsed by the Earth every several thousand seconds. We were able to detect the burst afterglow extending up to $`1000`$ s after the main burst. There is a continuous transition from the main burst to its power law afterglow (Figs. 1 and 2). The afterglow spectrum is significantly softer than that of the main burst. An abrupt change in the burst spectrum occurs at approximately the same moment when the power law decay of flux seems to start, at $`t\approx 6`$ s after the trigger.
The behavior of GRB afterglows in the lower energy bands and at $`t>3\times 10^4`$ s can be explained by the synchrotron emission of electrons accelerated in external shocks generated by the relativistically expanding fireball colliding with the interstellar medium (e.g. Wijers et al. 1997, Waxman 1997, Sari et al. 1998, Wijers and Galama 1998). In the framework of this model, the spectral flux at the observed frequency $`\nu `$ is given by $`F_\nu \propto \nu ^{-\alpha }t^{-\beta }`$, where $`\alpha `$ and $`\beta `$ are constant and depend only on the spectral index of the electrons at sufficiently late stages of the fireball evolution. For GRB 920723 we obtain $`\alpha =1\pm 0.2`$ and $`\beta =0.69\pm 0.17`$. Both the spectrum and the light curve of GRB 920723 seem to be considerably flatter than those of the X-ray afterglows observed for other gamma-ray bursts at $`t>3\times 10^4`$ s — $`\alpha =1.4`$–$`1.7`$ and $`\beta =1.1`$–$`1.6`$ (e.g., in’t Zand et al. 1998, Piro et al. 1998, Nicastro et al. 1998). Furthermore, the flux decay observed by SIGMA is flatter than $`t^{-1}`$ (at $`90`$% confidence), i.e. the total flux diverges if extrapolated to $`t\to \mathrm{\infty }`$. This suggests that the afterglow light curve should steepen at some moment during or after the SIGMA observation.
In the relativistic fireball model, the afterglow light curve and the energy spectrum should steepen simultaneously at the moment $`t_m`$ when the maximum in the electron spectrum, $`E_m`$, passes through the SIGMA bandpass (we assume below that $`t_m`$ corresponds to $`E_m=100`$ keV). At later stages of the fireball evolution, the indices $`\alpha `$ and $`\beta `$ do not change with time. Since the light curve steepening after 100–1000 s is required by our data at $`90\%`$ confidence, it can be suggested that $`t_m>100`$–$`1000`$ s. Also, if indeed $`t_m`$ is $`>100`$ s, our flat spectrum may become consistent with the parameters of X-ray afterglows (see above) because the spectrum softens after $`t_m`$. In the adiabatic fireball, $`t_m`$ can be estimated as $`t_m\approx 140\epsilon _B^{1/3}\epsilon _e^{4/3}E_{53}^{1/3}`$ s, where $`E_{53}`$ is the total energy release in units of $`10^{53}`$ ergs, and $`\epsilon _e<1`$ and $`\epsilon _B<1`$ are the fractions of the electron and magnetic field energy in the total shock energy, respectively (Sari et al. 1998). A value of $`t_m`$ not much less than $`100`$ s would not be strongly inconsistent with the SIGMA data. However, using the $`\epsilon _e`$ and $`\epsilon _B`$ estimates from the parameters of the radio, optical, and X-ray afterglows of GRB 970508 and GRB 971214 at $`t\sim 10^5`$–$`10^6`$ s (Wijers & Galama 1998), we obtain $`t_m\approx 3`$ s, which does seem to contradict our data at the $`90\%`$ confidence level. This may indicate a large diversity of the fireball parameters in different bursts or some problems of a simple model of a spherically symmetric fireball in explaining the early stages of gamma-ray burst afterglows.
SIGMA data provides the first convincing observation of the power law afterglow in the soft gamma-rays and immediately after the burst. A very important issue is whether such afterglows are common. A preliminary analysis of other SIGMA bursts revealed no other convincing afterglows, primarily because of the faintness of other bursts; on the basis of SIGMA data alone, we cannot rule out that the soft gamma-ray afterglow is a common phenomenon. A preliminary analysis of the PHEBUS data confirms the detection of the afterglow in GRB 920723 and reveals a similar afterglow in GRB 910402 (Tkachenko et al. 1998). The results of our systematic search for soft gamma-ray afterglows in the GRANAT data will be presented in the future.
## 5 Acknowledgments
This work was supported by RBRF grants 96–02–18458 and 96–15–96930.
## Appendix A Background modeling
(This section does not appear in the Journal version)
We start with fitting the background by the Chebyshev polynomial. We exclude the data between 1000 s before the burst and 4000 s after the burst. We increase the order of the polynomial until the F-test indicates that no additional powers of $`t`$ are necessary; we find that the polynomials of the second and third orders are required to describe the background in spectral channels 1&2 and 3, respectively.
Since the fit is made for the entire observation, the uncertainty of the fit value, arising from statistical uncertainties in the polynomial coefficients, is negligible in any small part of the observation — at least compared to the statistical error of flux in that part. So, we do not consider the fit uncertainties any further.
The deviations of the data from the fit should ideally be Poissonian. However, we cannot exclude a priori the existence of internal background variations on any time scale. So, we want to place an upper limit on such variations. We proceed as follows.
a) We choose the time scale 300 s for this study because this is the width of time bins near 1000 s in Fig. 2 and 3.
b) We average the observed flux in the 300 s bins and make a histogram of deviations from the fit expressed in units of Poisson error in this bin. In the absence of internal variations, this histogram should be consistent with the Gaussian with zero mean and standard deviation = 1 (i.e., with the normal distribution).
c) We do find that the histogram can be described by the normal distribution. This shows that there are no biases in the background determination and that the internal background variations on the 300 s time scale are small. To set the upper limit, we fit the width of the distribution and then convert the upper limit of the width into the corresponding count rate, assuming that internal variations (if any) and Poisson noise were added in quadrature.
This technique results in a 95% limit of 0.6 cnt/s for internal background variations on the 300 s time scale.
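For concreteness, steps (a)–(c) might be implemented along the following lines; the data arrays, burst mask, and significance threshold are assumptions, not the actual SIGMA pipeline:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb
from scipy.stats import f as fdist

def fit_background(t, rate, burst_mask, alpha=0.05):
    """Fit the out-of-burst count rate with Chebyshev polynomials,
    raising the order until the F-test stops demanding a new term."""
    tf, rf = t[~burst_mask], rate[~burst_mask]
    prev_chi2, deg = None, 0
    while True:
        coef = cheb.chebfit(tf, rf, deg)
        chi2 = np.sum((rf - cheb.chebval(tf, coef)) ** 2)
        if prev_chi2 is not None:
            dof = len(tf) - (deg + 1)
            F = (prev_chi2 - chi2) / (chi2 / dof)  # gain from the new term
            if F < fdist.ppf(1 - alpha, 1, dof):   # new power of t not needed
                return cheb.chebfit(tf, rf, deg - 1)
        prev_chi2, deg = chi2, deg + 1

def poisson_residuals(t, rate, coef, dt=4.0, bin_s=300.0):
    """Histogram input for step (b): 300 s binned deviations from the
    fit, expressed in units of the Poisson error of each bin."""
    per_bin = int(bin_s / dt)                      # 4 s samples per 300 s bin
    nbins = len(t) // per_bin
    obs = (rate * dt)[: nbins * per_bin].reshape(nbins, per_bin).sum(axis=1)
    exp = (cheb.chebval(t, coef) * dt)[: nbins * per_bin]
    exp = exp.reshape(nbins, per_bin).sum(axis=1)
    return (obs - exp) / np.sqrt(exp)              # ~N(0,1) for a clean fit
```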
|
no-problem/9902/hep-lat9902006.html
|
ar5iv
|
text
|
# On the perfect lattice actions of abelian-projected SU(2) QCD (presented by S. Kato)
## Abstract
We study the perfect lattice actions of abelian-projected SU(2) gluodynamics. Using the BKT and duality transformations on the lattice, an effective string model is derived from the direction-dependent quadratic monopole action, obtained numerically from SU(2) gluodynamics in the maximally abelian gauge. The string tension and the restoration of continuum rotational invariance are investigated analytically, using the strong coupling expansion of the lattice string model. We also find that the block spin transformation can be performed analytically for the quadratic monopole action.
1. INTRODUCTION
The infrared effective theory of QCD is important for the analytical understanding of hadron physics. Abelian monopoles, which appear after abelian projection of QCD, seem to be the relevant dynamical degrees of freedom in the infrared region. Shiba and Suzuki derived the monopole action from vacuum configurations obtained in Monte-Carlo simulations, extending the method developed by Swendsen.
We studied the renormalized monopole action by performing block spin transformations numerically up to $`n=8`$, and saw that the scaling for fixed $`b`$ looks good. If the effective action obtained here is very near to the perfect action, the physical quantities derived from it should reproduce continuum rotational symmetry, although the action is formulated on the lattice. In order to show the restoration of rotational invariance, the direction-dependence of the renormalized monopole action is very important. In practice, it is difficult to evaluate the string tension numerically using the monopole action in the infrared region, since the Wilson loop operators follow a simple exponential curve and quickly become smaller than the statistical noise. So we try to evaluate the Wilson loops using the string model corresponding to the renormalized monopole action.
2. STRING REPRESENTATION FROM MONOPOLE ACTION
We derive here the lattice string model using the BKT (Berezinskii–Kosterlitz–Thouless) transformation.
Let us start from the following direction-dependent monopole partition function;
$$\mathcal{Z}=\sum_{\substack{k_\mu (x)=-\infty \\ (\partial _\mu ^{\prime }k_\mu (x)=0)}}^{\infty }\exp \left\{-\sum_{x,y}\sum_{\mu =1}^{D}k_\mu (x)\,\mathcal{D}(x,y;\widehat{\mu })\,k_\mu (y)\right\},$$
where $`D=4`$ is the space-time dimensionality and the closed monopole currents $`k_\mu (x)`$ are defined on the dual lattice. The operator $`𝒟(x,y;\widehat{\mu })`$ is composed of a direction-independent part $`𝒟_1`$ and a direction-dependent part $`𝒟_2`$:
$$\mathcal{D}(x,y;\widehat{\mu })\equiv \mathcal{D}_1(x-y)+\mathcal{D}_2(x-y;\widehat{\mu }),$$
$$\mathcal{D}_2(x,y;\widehat{\mu })=g_1(b)\,\delta _{x,y}+g_2(b)\,\delta _{x+\widehat{\mu },y}+g_3(b)\sum_{\gamma \neq \mu }\delta _{x+\widehat{\gamma },y}+\cdots ,$$
where $`𝒟_1\gg \left|𝒟_2\right|`$ and $`g_2(b)\gg g_3(b)`$, etc.
$`b(\beta ,n)=na\left(\beta \right)=\sqrt{\stackrel{~}{\kappa }(\beta ,n)/\kappa _{phys}}`$ is the physical length in units of the physical string tension $`\kappa _{phys}`$. The dimensionless string tension $`\stackrel{~}{\kappa }`$ is determined by the lattice Monte-Carlo simulation, $`a(\beta )`$ is the lattice spacing, and $`n`$ is the number of blocking steps. The couplings of the above monopole action are determined using the extended Swendsen method (\[3-4\]). We have found that four- and six-point interactions are very small in the low-energy region of QCD; hence we consider only quadratic interactions between monopole currents, for simplicity.
Using the transformation suggested in Ref. , this type of monopole action can be transformed into the string model;
$$\mathcal{Z}=\text{const.}\sum_{\stackrel{~}{\sigma }_{\mu \nu }(x)=-\infty }^{\infty }\left(\prod_x \delta _{\partial _\mu ^{\prime }\stackrel{~}{\sigma }_{\mu \nu }(x)=0}\right)\exp \left\{-\mathcal{S}_{STR}\right\},$$
$$\mathcal{S}_{STR}=\pi ^2\sum_{x,y}\sum_{\mu <\nu }\stackrel{~}{\sigma }_{\mu \nu }(x)\left(\frac{1}{\mathrm{\Delta }\mathcal{D}_1}\right)(x-y)\,\stackrel{~}{\sigma }_{\mu \nu }(y)-\frac{\pi ^2}{4}\sum_{x,y}\sum_{\mu \nu }ϵ_{\mu \nu \xi \eta }\,ϵ_{\mu \alpha \gamma \delta }\,\stackrel{~}{\sigma }_{\xi \eta }(x-\widehat{\xi }-\widehat{\eta })\left(\frac{\mathcal{D}_2}{(\mathrm{\Delta }\mathcal{D}_1)^2}\right)(x-y;\widehat{\mu })\,\partial _\alpha ^{\prime }\partial _\nu \stackrel{~}{\sigma }_{\gamma \delta }(y-\widehat{\gamma }-\widehat{\delta })+(\text{higher-order terms}),$$
where we used
$$\mathcal{D}^{-1}=\mathcal{D}_1^{-1}-\mathcal{D}_1^{-1}\mathcal{D}_2\mathcal{D}_1^{-1}+\cdots $$
The integer-valued plaquette field $`\stackrel{~}{\sigma }_{\mu \nu }(x)`$, which is defined on the original lattice, represents the closed world surface formed by a color-electric flux tube. The leading part of this model comes from the direction-independent part of the monopole action; the next-to-leading terms come from the contribution of the direction-dependent part.
3. ROTATIONAL INVARIANCE
In order to check the restoration of continuum rotational symmetry, let us consider the $`q`$-$`\overline{q}`$ static potential at the points $`(2,0,0)`$ and $`(1,1,0)`$ of a three-dimensional time-slice. (The quark is placed at the origin $`(0,0,0)`$ and the antiquark at $`(2,0,0)`$ or $`(1,1,0)`$, respectively.)
The static potential $`V(x,y,z)`$ is calculated from the Wilson loop operators:
$$V(2,0,0)=-\lim_{T\to \infty }\frac{1}{T}\mathrm{log}\langle W(2,0,0,T)\rangle ,$$
$$V(1,1,0)=-\lim_{T\to \infty }\frac{1}{T}\mathrm{log}\langle W(1,1,0,T)\rangle .$$
If the potential is purely linear and the continuum rotational symmetry is restored, then the ratio $`V(2,0,0)/V(1,1,0)`$ should become $`\sqrt{2}`$.
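As a trivial numerical check of this criterion, a purely linear confining potential gives the ratio $`\sqrt{2}`$ exactly:

```python
import math

# K is the string tension in arbitrary units; any K > 0 gives the same ratio.
def V(x, y, z, K=1.0):
    return K * math.sqrt(x * x + y * y + z * z)   # purely linear potential

print(V(2, 0, 0) / V(1, 1, 0))   # sqrt(2) = 1.4142..., the rotationally
                                 # invariant benchmark for the ratio above
```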
The quantum average of the Wilson loop operator in the string representation is written as follows:
$$\langle W(C)\rangle =\frac{1}{\mathcal{Z}_{STR}}\sum_{\substack{\stackrel{~}{\sigma }_{\mu \nu }(x)=-\infty \\ \left(\partial _\mu ^{\prime }\stackrel{~}{\sigma }_{\mu \nu }(x)=\stackrel{~}{J}_\nu (x)\right)}}^{\infty }\exp \left\{-\mathcal{S}_{STR}[\stackrel{~}{\sigma }_{\mu \nu }(x)]\right\}$$
Note that the string fields $`\stackrel{~}{\sigma }_{\mu \nu }(x)`$ form open surfaces whose boundaries are the Wilson loop. We can evaluate this quantity using the strong coupling expansion.
At $`b=2.14`$ ($`\approx `$ 0.96 fm), as a preliminary result, the string tension from $`V(2,0,0)`$ becomes $`1.5`$ in units of $`\kappa _{phys}`$ and the ratio $`V(2,0,0)/V(1,1,0)=1.07`$. The quadratic part of the renormalized direction-dependent monopole action does not seem to reproduce the correct string tension and the continuum rotational invariance. We probably need (1) to include four- and six-point interactions, (2) more blocking steps on larger lattice volumes at larger $`\beta `$, and (3) a more complicated form of the monopole action.
4. ANALYTIC BLOCK SPIN TRANSFORMATION
We found that the block spin transformation can be performed analytically for the quadratic monopole action.
When $`b`$ is large, the London limit of the abelian Higgs model works well as an effective theory of QCD. For example, let us start from the following simple monopole partition function defined on the small-$`a(\beta )`$ lattice:
$$Z=\sum_{\substack{a^3k=-\infty \\ \left(\partial _\mu ^{\prime }k_\mu (x)=0\right)}}^{\infty }\exp \left\{-\sum_{s,s^{\prime };\mu }k_\mu (s)\,D(s-s^{\prime })\,k_\mu (s^{\prime })\right\}$$
$$D(p)=\left(\alpha +\frac{\beta }{4\sum_\rho \mathrm{sin}^2(p_\rho /2)}\right)\left(1+4ϵ\sum_\rho \mathrm{sin}^2(p_\rho /2)\right)$$
This action corresponds to the London limit of the abelian-Higgs model, when $`ϵ=0`$.
Using the Poisson summation formula, the monopole action defined on the $`b=na(\beta )`$ lattice is given by
$$e^{-S[K]}=\sum_{a^3k_\mu (as)=-\infty }^{\infty }\delta \Big(\sum_\mu \partial _\mu ^{\prime }k_\mu (as)\Big)\,\delta \Big(b^3K_\mu (bs)-\sum_{i,j,l=0}^{n-1}a^3k_\mu \big(nas+(n-1)a\widehat{\mu }+ia\widehat{\nu }+ja\widehat{\rho }+la\widehat{\sigma }\big)\Big)$$
$$\times \exp \left\{-(a^4)^2\sum_{s,s^{\prime };\mu }k_\mu (as)\,D(as-as^{\prime })\,k_\mu (as^{\prime })\right\}.$$
Performing the constrained sum over $`k_\mu `$ then gives
$$S[K]=\int_{-\pi /na}^{+\pi /na}\frac{d^4p}{(2\pi )^4}\,K_\mu (p)\left[\mathcal{D}_{\mu \nu }^{GF}(p)\right]^{-1}K_\nu (-p)$$
and
$$\mathcal{D}_{\mu \nu }\left(p\right)=\frac{1}{\left(\alpha -\beta ϵ\right)}\left\{\overline{\mathcal{D}}_{\mu \nu }(p,ϵ^{-1})-\overline{\mathcal{D}}_{\mu \nu }\Big(p,\frac{\beta }{\alpha }\Big)\right\},$$
$$\overline{\mathcal{D}}_{\mu \nu }(p,m^2)\equiv \frac{m^2}{n^6}\sum_{l=0}^{n-1}\frac{\mathrm{\Pi }_{\neg \mu }\left(p+2\pi l\right)\,\mathrm{\Pi }_{\neg \nu }\left(p+2\pi l\right)}{4\sum_\rho \mathrm{sin}^2\left(\frac{p_\rho a}{2}+\frac{\pi l_\rho }{n}\right)+m^2}\times \left(\delta _{\mu \nu }-\frac{\mathrm{sin}\left(p_\mu a+\frac{\pi l_\mu }{n}\right)\mathrm{sin}\left(p_\nu a+\frac{\pi l_\nu }{n}\right)}{\sum_\rho \mathrm{sin}^2\left(p_\rho a+\frac{\pi l_\rho }{n}\right)}\right)e^{\frac{i}{2}\left(p_\mu -p_\nu \right)na},$$
$$\mathrm{\Pi }_{\neg \mu }\left(p\right)\equiv \prod_{i\neq \mu }\frac{\mathrm{sin}\left(p_ina/2\right)}{\mathrm{sin}\left(p_ia/2\right)}.$$
Using the above analytical block spin transformation, we can find a lattice monopole action which reproduces the continuum rotational invariance with the correct string tension if we take $`n\to \infty `$ and $`a(\beta )\to 0`$ for fixed $`b=na(\beta )`$.
Let us evaluate the string tension and the ratio of the potentials from the string model expression corresponding to $`S[K]`$ above. The ratio $`V(2,0,0)/V(1,1,0)=1.24`$ at $`b=2.14`$ ($`\approx `$ 0.96 fm), where the parameters $`\alpha =0.73`$, $`\beta =0.73`$ and $`ϵ=0.01`$ are determined so that the string tension becomes unity in units of the physical one at $`b=2.14`$. This comes considerably closer to the value expected from continuum rotational invariance. The remaining discrepancy may be due to the truncation of the monopole action.
5. CONCLUSIONS
Let us compare the monopole action $`S[k]`$ of the previous section with the present numerical data fixed by the Swendsen method. We see that the self-coupling $`G_1`$ and the discrepancy between the nearest-neighbor couplings ($`G_2`$ and $`G_3`$) are larger than those of the monopole action determined numerically, as shown in Figure 1. This behavior must be important for reproducing the continuum rotational invariance.
Recently, the spectrum of glueball masses in non-supersymmetric Yang-Mills theory has been evaluated based on Maldacena’s conjectured duality between supergravity and large-$`N`$ gauge theories. Our string model also yields the glueball mass spectrum analytically, using the strong coupling expansion of the correlation functions of gauge-invariant local operators. This work is now in progress.
|
no-problem/9902/physics9902019.html
|
ar5iv
|
text
|
# The ground-state spectroscopic constants of Be2 revisited
## I Introduction
Despite the small size of the beryllium dimer, Be<sub>2</sub>, a correct computational description of its $`X^1\mathrm{\Sigma }_g^+`$ ground state has long been considered one of the most challenging problems in quantum chemistry. Intuitively one would expect a purely repulsive potential between two closed-shell singlet atoms — or perhaps a shallow van der Waals-like minimum — and in fact the Hartree-Fock potential is purely repulsive. However, the small $`(2s)`$–$`(2p)`$ gap in atomic beryllium complicates the picture, and when angular correlation is admitted, a tightly bound molecule is in fact found due to an avoided crossing between $`(2s)^2+(2s)^2`$ and $`(2s)^1(2p_z)^1+(2s)^1(2p_z)^1`$ curves. As a result, the wave function is strongly biconfigurational, and in fact an active space of at least four orbitals (the abovementioned plus $`(2s)^1(2p_z)^1+(2s)^2`$ and $`(2s)^2+(2s)^1(2p_z)^1`$) is required to obtain a qualitatively correct potential curve.
The Hartree-Fock limit potential is purely repulsive, and early coupled cluster with all double excitations (CCD) calculations found only a shallow van der Waals-like minimum. Multireference configuration interaction studies on the other hand predicted a tightly bound minimum, as did (with a highly exaggerated binding energy) a pioneering density functional study. These conclusions were corroborated in 1983 by a valence FCI (full configuration interaction) study, and in the next year, Bondybey and English reported the first experimental observation. Bondybey subsequently reported $`R_e`$=2.45 Å and the first four vibrational quanta 223.2, 169.7, 122.5, and 79 cm<sup>-1</sup>; assuming a Morse potential, he suggested a dissociation energy of 790$`\pm `$30 cm<sup>-1</sup>. Petersson and Shirley (PS), following ab initio calculations of their own, re-analyzed the experimental data in terms of a Morse+$`1/R^6`$ potential and suggested an upward revision to $`D_e`$=839$`\pm `$10 cm<sup>-1</sup>. Recent high-level calculations suggest even higher binding energies: for instance, Stärck and Meyer (SM), using MRCI (multireference configuration interaction) and a core polarization potential (CPP) found $`D_e`$=893 cm<sup>-1</sup> as well as $`r_e`$=2.448<sub>5</sub> Å, while MR-AQCC (multireference averaged quadratic coupled cluster) calculations by Füsti-Molnár and Szalay (FS) established $`D_e`$=864 cm<sup>-1</sup> as a lower bound. Røeggen and Almlöf (RA) carried out extensive calibration calculations with an extended geminal model and gave 841$`\pm `$18 cm<sup>-1</sup> as their best estimated binding energy. Evangelisti et al. (EBG) carried out valence-only FCI calculations in a $`[6s5p3d2f1g]`$ basis set, and concluded that inner-shell correlation must contribute substantially to the binding energy since their value (an exact valence-only solution within this large basis set) was still appreciably removed from experiment. This conclusion was confirmed by an all-electron FCI in a small $`[9s2p1d]`$ basis set (which still involved in excess of 10<sup>9</sup> determinants).
Part of the uncertainty in the best theoretical values resides in the fact that the basis sets used, while quite large, are still finite. Convergence of angular correlation is known to be excruciatingly slow, with an asymptotic expansion in terms of the maximum angular momentum $`l`$ that starts at $`l^{-4}`$ for contributions of individual angular momenta and at $`l^{-3}`$ for the overall $`l`$-truncation error. Recently $`l`$-extrapolations have been proposed which permitted the calculation of total atomization energies of small polyatomic molecules with mean absolute errors as low as 0.12 kcal/mol. Among other applications, this method made possible a definitive re-evaluation of the heat of vaporization of boron from a calibration-quality calculation on BF<sub>3</sub>.
In the present work, we apply this method to the dissociation energy of Be<sub>2</sub>. It will be shown that the valence-only basis set limit is in fact as large as 875$`\pm `$10 cm<sup>-1</sup>, and the overall $`D_e`$ as large as 945$`\pm `$20 cm<sup>-1</sup>.
## II Methods
The multireference and FCI calculations, as well as those using the CCSD(T) coupled cluster method, were carried out using a prerelease version of MOLPRO97 (MOLPRO 97.3 is a package of ab initio programs written by H.-J. Werner and P. J. Knowles, with contributions from J. Almlöf, R. D. Amos, A. Berning, D. L. Cooper, M. J. O. Deegan, A. J. Dobbyn, F. Eckert, S. T. Elbert, C. Hampel, R. Lindh, A. W. Lloyd, W. Meyer, A. Nicklass, K. A. Peterson, R. M. Pitzer, A. J. Stone, P. R. Taylor, M. E. Mura, P. Pulay, M. Schütz, H. Stoll, and T. Thorsteinsson) running on an SGI Origin 2000 minisupercomputer at the Weizmann Institute of Science. Calculations with other coupled cluster methods were carried out using ACES II (J. F. Stanton, J. Gauss, J. D. Watts, W. Lauderdale, and R. J. Bartlett, 1996: an ab initio program system, incorporating the MOLECULE vectorized molecular integral program by J. Almlöf and P. R. Taylor, and a modified version of the ABACUS integral derivative package by T. Helgaker, H. J. Aa. Jensen, P. Jørgensen, J. Olsen, and P. R. Taylor) running on a DEC Alpha workstation.
Most basis sets used belong to the correlation consistent polarized valence $`n`$-tuple zeta (cc-pV$`n`$Z) family of Dunning. The cc-pVDZ, cc-pVTZ, cc-pVQZ and cc-pV5Z basis sets are $`[3s2p1d]`$, $`[4s3p2d1f]`$, $`[5s4p3d2f1g]`$, and $`[6s5p4d3f2g1h]`$ contractions, respectively, of $`(9s4p1d)`$, $`(11s5p2d1f)`$, $`(12s6p3d2f1g)`$, and $`(14s8p4d3f2g1h)`$ primitive sets. For assessing inner-shell correlation effects, we used the core correlation basis set of Martin and Taylor: MTvtz and MTvqz denote completely uncontracted cc-pVTZ and cc-pVQZ basis sets, respectively, augmented with one tight $`p`$, three tight $`d`$, and two tight $`f`$ functions with exponents derived by successively multiplying the highest exponent already in the basis set with a factor of three. The MTv5z basis set is obtained similarly, but in addition has a single tight $`g`$ function as well.
## III Results and discussion
### A Valence electron contribution
For the cc-pVDZ, cc-pVTZ, and cc-pVQZ basis sets, valence-only FCI calculations could be carried out. The results at the reference geometry $`R=2.45`$ Å are given in Table 1.
By comparison with CCD, CCSD, and CCSDT results in the same basis sets (CCSDTQ being equivalent to FCI for this case), we can partition the valence binding energy into contributions from connected single, double, triple, and quadruple excitations as well as investigate their basis set convergence. As previously noted by Sosa et al. in small basis sets, no covalent binding is seen at the CCSD level; they found CCSDT-1{a,b} and CCSDT-2 to display only a shallow ripple, while CCSDT-4 slightly exaggerates the potential well and full CCSDT is slightly above the FCI result. These conclusions are confirmed here; moreover, as the basis set is increased, the CCSDT results closely track the FCI ones, which in this case implies that the contribution of connected quadruples to the binding converges very rapidly to an estimated basis set limit of 85 cm<sup>-1</sup>. By contrast, the contribution of connected triples is actually substantially larger than the atomization energy itself, and is apparently not yet converged with the cc-pVQZ basis set.
Our attempts to carry out a CCSDT/cc-pV5Z calculation with the available computer infrastructure met with failure. CCSD(T) calculations are an obvious alternative, but are seen in Table 1 to on the one hand underestimate the importance of connected triple excitations, and on the other hand to display considerable basis set dependence in the difference with full CCSDT (hence making it a poor candidate for extrapolation). The difference between CCSD(T) and CCSDT starts at fifth order in perturbation theory; in the method alternatively known as CCSD+T(CCSD)\* and, in Bartlett’s recent notation, CC5SD(T), the missing $`E_{5TT}`$ term is included quasiperturbatively at a computational expense scaling as $`n_{\mathrm{occ}}^3n_{\mathrm{virt}}^5`$. As seen in Table 1, CC5SD(T) slightly overestimates the connected triple excitations contribution but does so in a highly systematic manner, the difference being constant between 38 and 40 cm<sup>-1</sup>. Because of an error compensation with neglect of connected quadruple excitations, it is actually the one single-reference method short of full CI that we find to be closest to the exact solution. In short, it is the ideal candidate for basis set extrapolation.
The CCSD+TQ(CCSD)\* or CC5SD(TQ) method, which includes the leading contribution of connected quadruple excitations in a similar fashion, appears to seriously overestimate it, and we have not considered it further.
Basis set superposition error for the valence electrons was considered using the standard counterpoise (CP) correction. In the present case, it drops from 36 cm<sup>-1</sup> (cc-pVDZ) over 24 (cc-pVTZ) to 6 cm<sup>-1</sup> for the cc-pVQZ basis set, and a paltry 3.5 cm<sup>-1</sup> for the cc-pV5Z basis set.
From the FCI/cc-pV{D,T,Q}Z results, we may attempt extrapolation, either from the uncorrected $`D_e`$ values (assuming that the extrapolation will absorb the BSSE, which strictly vanishes at the basis set limit) or after subtracting the counterpoise correction in each case. With a variable-$`\alpha `$ 3-parameter extrapolation of the form $`A+B/(l+1/2)^\alpha `$, this leads to basis set limits of 841 and 859 cm<sup>-1</sup>, respectively. Using the simple $`A+B/l^3`$ formula on just the final two results, we obtain values of 863 (raw) and 870 (CP-corrected) cm<sup>-1</sup>.
It can rightly be argued that the cc-pVDZ basis set is really too small to be involved in this type of extrapolation, and that a cc-pV5Z result is essential for this purpose. This requires us to estimate an FCI/cc-pV5Z result from the additivity approximation Method/cc-pV5Z $`+`$ FCI/cc-pVQZ $`-`$ Method/cc-pVQZ. With Method=CC5SD(T), we obtain $`D_e`$(FCI/cc-pV5Z)$`\approx `$818.2 cm<sup>-1</sup>; 3-point extrapolation yields 881 cm<sup>-1</sup> for the raw, and 872 cm<sup>-1</sup> for the CP-corrected, results as the basis set limit. Using the simple $`A+B/l^3`$ formula, we obtain the alternative results 857 and 873 cm<sup>-1</sup>, respectively. The fact that the two extrapolations yield essentially the same result for the CP-corrected values, as well as that they are in very close agreement with the results with the smaller basis sets, is very satisfying.
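The two-point $`A+B/l^3`$ extrapolation used here is simple enough to state in code. In the sketch below, the $`l=5`$ entry is the estimated FCI/cc-pV5Z value quoted above, while the $`l=4`$ number is only an illustrative placeholder chosen to reproduce the quoted raw limit:

```python
def extrap_l3(l1, e1, l2, e2):
    """Basis set limit A from E(l) = A + B / l**3, given two (l, E) points."""
    return (l2**3 * e2 - l1**3 * e1) / (l2**3 - l1**3)

# 818.2 cm^-1 is the estimated FCI/cc-pV5Z value quoted above (l = 5);
# 781.0 cm^-1 is a hypothetical FCI/cc-pVQZ value (l = 4), used only to
# illustrate the arithmetic.
print(extrap_l3(4, 781.0, 5, 818.2))   # ~857 cm^-1, cf. the raw limit above
```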
It could likewise be argued that in fact the SCF and correlation contributions should be handled separately, with an exponential or $`(l+1/2)^5`$ formula for the SCF contribution and an $`A+B/(l+1/2)^\alpha `$ or $`A+B/l^3`$ formula for the correlation contribution alone. We then find that the SCF contribution, with the cc-pV5Z basis set, lies within 3 cm<sup>-1</sup> of the numerical HF limit; after adding in the basis set limits for the correlation contribution, we obtain, after counterpoise correction, 869 cm<sup>-1</sup> with the 3-point and 871 cm<sup>-1</sup> with the 2-point formula.
One further objection would be to the use of even a high-level single-reference method for a problem that is intrinsically multireference in character. We have therefore considered MRCI (multireference configuration interaction) augmented with the multireference Davidson correction, MRACPF (multireference averaged coupled pair functional), and MRAQCC (multireference averaged quadruples coupled cluster) methods with a variety of active spaces. A 4/4 active space appears to be unsatisfactory for our purposes; hence we have considered full-valence CAS(4/8)-ACPF (averaged coupled pair functional) and CAS(4/8)-AQCC as alternatives. Except for the cc-pVDZ basis set, both methods seem to track the FCI results quite closely, with CAS(4/8)-ACPF accidentally coinciding with the FCI results. Again applying the same additivity approximation as above, we obtain estimated FCI/cc-pV5Z results from these calculations of 821.5 and 819.6 cm<sup>-1</sup>, especially the latter quite close to the CC5SD(T) derived value.
Interestingly, the CAS(4/8)-ACPF wave function contains a fairly large number of external excitations with fairly high amplitudes, most of them involving excitation into (3p)-type Rydberg orbitals. Inspection of the atomic wave function for Be atom revealed that excitations into the fairly low-lying (3p) orbitals have amplitudes as large as 0.09 (for each of three symmetry-equivalent components); since in addition the (3s) orbital is below the (3p) orbital in energy and there appears to be no clear separation between $`(3s)`$\- and $`(3p_z)`$-derived $`\sigma `$ orbitals, this suggests a (4/16) active space which spans all molecular orbitals derived from atomic $`(2s,2p,3s,3p)`$ orbitals. External excitations now carry so little weight in the wave function that CAS(4/16)-MRCI+Dav, CAS(4/16)-ACPF and CAS(4/16)-AQCC yield essentially identical results. Arbitrarily selecting the CAS(4/16)-ACPF result for extrapolation, we obtain a best estimate of 821.5 cm<sup>-1</sup> for the FCI/cc-pV5Z $`D_e`$. After counterpoise correction, the CAS(4/16)-ACPF derived value leads to a basis set limit value of 885.6 cm<sup>-1</sup> with the 3-point and 861.4 cm<sup>-1</sup> with the 2-point formula. Taking the average of the latter two values and the CC5SD(T) derived ones, we finally propose 872$`\pm `$15 cm<sup>-1</sup> as our best estimate for the valence-only $`D_e`$.
As a final remark, let it be noted that the extrapolations in all cases bridge a gap of no more than 50–70 cm<sup>-1</sup>; by substituting $`l=6`$ in the extrapolation formulas, we can estimate that calculations with the next larger basis set, cc-pV6Z (i.e. \[7s6p5d4f3g2h1i\]), would only recover about 20–25 cm<sup>-1</sup> of that total.
### B Inner-shell contribution
By taking the difference between their computed MRCI results with and without the core polarization potential, SM found that inner-shell correlation would add 0.38 m$`E_h`$, or 83 cm<sup>-1</sup>, to the atomization energy. RA computed a contribution of $`(1s)`$ correlation (almost exclusively core-valence correlation) of 0.40654 m$`E_h`$, or 89.2 cm<sup>-1</sup>.
Our results for the effect of inner-shell correlation are collected in Table 2. Using the MTvtz, MTvqz, and MTv5z basis sets in succession at the CAS(4/16)-ACPF level, we find contributions of inner-shell correlation to the binding energy of 82.1, 80.6, and 77.8 cm<sup>-1</sup>. BSSE contributions to the core-correlation contribution (taken as the difference between all-electron and valence-only BSSEs in the same basis set) are 3.8, 2.9, and 1.5 cm<sup>-1</sup>, respectively, such that the counterpoise-corrected values of 78.3, 77.7, and 76.3 cm<sup>-1</sup> appear to be quite handsomely converged.
For comparison, the counterpoise-corrected CCSD(T) results are 75.0, 73.1, and 70.9 cm<sup>-1</sup>, while a CC5SD(T)/MTvtz calculation yielded 63.3 cm<sup>-1</sup> without counterpoise correction. CAS(4/4)-ACPF and CAS(4/8)-ACPF calculations actually yielded small negative inner-shell correlation contributions, which are clearly an artifact of the reference space.
We also note that the counterpoise-corrected all-electron CAS(4/16)-ACPF/cc-pV5Z $`D_e`$ of 882.4 cm<sup>-1</sup> is already higher than the FS number, and in fact near the SM value. Indeed, since this level of electron correlation appears to systematically underestimate the valence binding energy by 15–16 cm<sup>-1</sup> compared to FCI (see Table 1), we can establish 900 cm<sup>-1</sup> as a lower limit to $`D_e`$.
Adding the best inner-shell correlation energy contribution of 76.2 cm<sup>-1</sup> to our best valence binding energy, we obtain a best estimate for the all-electron binding energy of 948$`\pm `$20 cm<sup>-1</sup>, where the increased error bar reflects the added uncertainty in the inner-shell contribution.
The effect of scalar relativistic effects was gauged from the Darwin and mass-velocity terms obtained from CAS(4/16)-ACPF/MTvqz calculations by perturbation theory. At $`-`$4.0 cm<sup>-1</sup>, it is essentially negligible.
Combining our best estimates for valence, inner-shell, and relativistic contributions, we finally obtain a best estimate for $`D_e`$(Be<sub>2</sub>) of 944 $`\pm `$25 cm<sup>-1</sup>, which suggests that the PS value for $`D_e`$ may need to be revised upward by as much as 100 cm<sup>-1</sup>.
### C Potential curve
Computed bond distances $`r_e`$, harmonic frequencies $`\omega _e`$, and the first three anharmonicities $`\omega _ex_e`$, $`\omega _ey_e`$, and $`\omega _ez_e`$ are collected in Table 3. They were obtained by a Dunham analysis of eighth-order polynomials fitted to some 25 computed energies at bond distances spaced 0.02 Å apart around the putative minimum.
While good fits could be obtained to the CCSD(T) and CC5SD(T) results, attempts to fit CAS(4/8)-{MRCI,ACPF,AQCC} curves in the same manner met with failure. No such problem was encountered with results based on a smaller CAS(4/4) reference wave function: investigation of the CASSCF energies revealed that while the CAS(4/4) curve is bound, the CAS(4/8) curve is purely repulsive in the region sampled. Further investigation revealed that with increasing $`r`$, amplitudes for excitations into $`(3p)`$ derived Rydberg orbitals progressively take on pathological dimensions (as large as 0.35): under such circumstances, the noisy character of the CAS(4/8)-ACPF potential curves should not come as a surprise. As expected, expanding the reference space to CAS(4/16) eliminates the problem, as well as restores a bound CASSCF potential curve. Apparently the (2p) and (3p) orbitals are close enough in importance that a balanced reference space requires that they either be both included or both excluded.
From comparing CAS(4/16)-ACPF/cc-pVTZ and FCI/cc-pVTZ spectroscopic constants, it is obvious that the former treatment is indeed very close to an exact solution and the method of choice for 1-particle basis set calibration. CC5SD(T) yields surprisingly good $`r_e`$ and $`\omega _e`$ values (in fact agreeing more closely with FCI than CCSDT) but strongly overestimates the anharmonicity of the curve. Performance of CCSD(T) is fairly poor, although the quality of the results is still amazing considering the pathological character of the molecule.
Extension of the basis set to cc-pVQZ has a very significant effect on the spectroscopic constants, with $`r_e`$ being shortened by 0.026 Å and $`\omega _e`$ going up by 16 cm<sup>-1</sup>. Further extension to cc-pV5Z has a much milder effect, and suggests that convergence is being approached for the molecular properties. $`A+B/l^3`$ extrapolation suggests that further basis set extension may affect $`r_e`$ by a further $`-`$0.003 Å and increase $`\omega _e`$ by another $`+`$2 cm<sup>-1</sup>.
Ideally, we would have liked to present all-electron CAS(4/16)-ACPF/MTv5z curves in order to include inner-shell correlation. However, since a single point on such a curve took more than a day of CPU time on an SGI Origin 2000, we have not pursued this option further, and have instead contented ourselves with considering the difference between CCSD(T)/MTv5z curves with and without the $`(1s)`$-like orbitals constrained to be doubly occupied. Our results suggest that inner-shell correlation reduces $`r_e`$ by 0.03 Å and increases $`\omega _e`$ by 14 cm<sup>-1</sup>. The spectroscopic constants given as ‘best estimate’ are obtained by adding these contributions to the extrapolated CAS(4/16)-ACPF/cc-pV$`\mathrm{\infty }`$Z results, as well as the small difference between FCI/cc-pVTZ and CAS(4/16)-ACPF/cc-pVTZ.
Given the highly anharmonic nature of the potential surface, a Dunham-type perturbation theory analysis of the vibrational levels is obviously not appropriate. As in our recent calibration study on the first-row diatomic hydrides, we have therefore transformed our eighth-order Dunham expansion, together with the computed dissociation energy, into a variable-beta Morse (VBM) potential
$$V_c=D_e\left(1-\mathrm{exp}[-z(1+b_1z+b_2z^2+\cdots +b_6z^6)]\right)^2$$
(1)
in which $`z\equiv \beta (r-r_e)/r_e`$ and the parameters $`b_n`$ and $`\beta `$ are obtained by derivative matching as discussed in detail in Ref. . The one-dimensional Schrödinger equation was then integrated using the algorithm of Balint-Kurti et al., on a grid of 256 points over the interval $`[0.2r_e,3r_e]`$.
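The grid integration itself is straightforward to reproduce. The sketch below (Python) diagonalizes a simple finite-difference Hamiltonian on the same 256-point grid instead of the Fourier-grid algorithm of Balint-Kurti et al.; it uses the best-estimate $`D_e`$ and $`r_e`$ from the text, but sets all $`b_n=0`$ (plain Morse limit) and takes a placeholder $`\beta `$, since the fitted VBM coefficients are not reproduced here.

```python
# Vibrational levels of Eq. (1) by finite-difference diagonalization on a
# 256-point grid over [0.2 r_e, 3 r_e]. Assumptions: b_1..b_6 = 0 (Morse
# limit) and a HYPOTHETICAL beta; D_e and r_e are the paper's best estimates.
import numpy as np
from scipy.linalg import eigh_tridiagonal

cm1 = 1.0 / 219474.6313          # cm^-1 -> hartree
ang = 1.0 / 0.52917721           # angstrom -> bohr
amu = 1822.888486                # u -> electron masses

De = 944.0 * cm1                 # best-estimate dissociation energy
re = 2.440 * ang                 # best-estimate bond length
mu = 0.5 * 9.0121831 * amu       # reduced mass of Be2
beta = 5.0                       # placeholder; tuned to match omega_e in the text

r = np.linspace(0.2 * re, 3.0 * re, 256)
V = De * (1.0 - np.exp(-beta * (r - re) / re))**2

h = r[1] - r[0]                  # -(1/2mu) d2/dr2 by central differences
E, _ = eigh_tridiagonal(1.0 / (mu * h**2) + V,
                        -0.5 / (mu * h**2) * np.ones(r.size - 1))
print("G(n+1)-G(n) in cm^-1:", np.diff(E[:5]) / cm1)
```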
The results for the first four vibrational quanta are given in Table 4. We have considered three potentials. The first two are the uncorrected FCI/cc-pVTZ and CAS(4/16)-ACPF/cc-pV5Z potentials; the third one was obtained by substituting our best estimates for $`D_e`$ and $`r_e`$, and adjusting $`\beta `$ such that the best estimate $`\omega _e`$ is matched. (The $`b_n`$ remain unchanged from the CAS(4/16)-ACPF/cc-pV5Z values.) In effect, this latter approach assumes that the shape of the CAS(4/16)-ACPF/cc-pV5Z curve is fundamentally sound.
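Since the $`b_n`$ affect only the cubic and higher derivatives of Eq. (1), the curvature at $`r_e`$ is $`2D_e\beta ^2/r_e^2`$, and the $`\beta `$-adjustment just described amounts to the harmonic-limit relation (stated here for clarity rather than taken from the paper)

$$\omega _e=\frac{\beta }{2\pi c\,r_e}\sqrt{\frac{2D_e}{\mu }},\qquad \beta =2\pi c\,\omega _er_e\sqrt{\frac{\mu }{2D_e}},$$

with $`\omega _e`$ in cm<sup>-1</sup> when $`c`$ is in cm s<sup>-1</sup> and $`D_e`$, $`\mu `$, and $`r_e`$ in consistent units.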
As expected, the unadjusted FCI/cc-pVTZ potential seriously underestimates the first three vibrational quanta because of the strong dependence of $`D_e`$, $`\omega _e`$, and $`r_e`$ on the basis set and the inclusion of inner-shell correlation. CAS(4/16)-ACPF/cc-pV5Z does so to a much lesser extent. Our ‘best estimate’ potential, however, reproduces the fundamental (the only transition known with some precision) essentially exactly, and is in good agreement with experiment for the next two quanta. Since the VBM form of the potential does not take into account long-distance behavior and the fourth quantum lies at 80% of the dissociation energy, it is not surprising that the fourth quantum is seriously overestimated.
Finally, let us turn to the spectroscopic constants derived from our best potential (Table 5). Our best $`\omega _e`$ is in perfect agreement with SM but substantially lower than the Bondybey value. Our best $`\omega _ex_e`$ is substantially smaller than both the Bondybey and SM values: however, both of the latter were determined phenomenologically as $`-[G(2)-2G(1)+G(0)]/2`$ and therefore include contributions from higher-order anharmonicities. If we compute the same quantity, we obtain perfect agreement with the SM value. While our rotation-vibration coupling constant $`\alpha _e`$ is in very good agreement with the SM calculations, it is substantially larger than the Bondybey value. However, it should be noted that the Be<sub>2</sub> potential is so anharmonic that the series $`B_n=B_e-\alpha _e(n+1/2)+\gamma _e(n+1/2)^2+\delta _e(n+1/2)^3+\cdots `$ cannot be truncated after the linear term; from our best computed spectroscopic constants, we obtain $`B_0`$=0.6086 cm<sup>-1</sup>, in perfect agreement with Bondybey’s value of 0.609 cm<sup>-1</sup> for this observable quantity. In short, we argue that our computed $`r_e=2.440`$ Å is more reliable than the Bondybey value of 2.45<sub>0</sub> Å.
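Setting $`n=0`$ in this series makes explicit which combination of constants the comparison with Bondybey actually tests (a restatement of the argument above, not an additional result):

$$B_0=B_e-\frac{\alpha _e}{2}+\frac{\gamma _e}{4}+\frac{\delta _e}{8}+\cdots $$

A larger $`\alpha _e`$ can thus be compensated by the higher-order terms, which is why $`B_0`$ rather than $`\alpha _e`$ is the quantity that should be compared with experiment.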
As a final note, we point out that this revised reference geometry ($`r_e`$=2.440 Å) would not have affected our calculation of $`D_e`$ materially, since the energy difference between $`R=`$2.44 and $`R=2.45`$ Å with our best potential only amounts to 0.4 cm<sup>-1</sup>.
## IV Conclusions
From an exhaustive basis set convergence study on the dissociation energy of ground-state Be<sub>2</sub>, we find that the accepted experimental value needs to be revised upward to a best estimate of 944 $`\pm `$25 cm<sup>-1</sup>. Individual contributions to this value include a valence-only FCI basis set limit of 872$`\pm `$15 cm<sup>-1</sup>, an inner-shell contribution of 76$`\pm `$10 cm<sup>-1</sup>, and relativistic corrections as small as $`-`$4 cm<sup>-1</sup>. The performance of single-reference methods for this molecule depends crucially on their treatment of connected triple excitations; while CCSD(T) underestimates binding in this molecule, the CC5SD(T) method performs surprisingly well at a fraction of the cost of full CCSDT. The contribution of connected quadruple excitations is small (80 cm<sup>-1</sup>) and fairly insensitive to the basis set. Accurate multireference calculations require an active space which treats angular (2p,3p) correlation in a balanced way; a full-valence CAS(4/8) reference does not satisfy this criterion. For the utmost accuracy, a CAS(4/16) reference including the $`(3s,3p)`$ orbitals is required, while for less accurate work a CAS(4/4) reference is recommended. Our best computed spectroscopic observables (experimental values in parentheses) are $`G(1)-G(0)`$=223.7 (223.8), $`G(2)-G(1)`$=173.8 (169$`\pm `$3), $`G(3)-G(2)`$=125.4 (122$`\pm `$3), and $`B_0`$=0.6086 (0.609) cm<sup>-1</sup>. Our best computed spectroscopic constants represent substantial revisions of the experimentally derived values; in particular, the bond length is 0.01 Å shorter than the accepted experimental value.
###### Acknowledgements.
The author is a Yigal Allon Fellow, the incumbent of the Helen and Milton A. Kimmelman Career Development Chair, and an Honorary Research Associate (“Onderzoeksleider in eremandaat”) of the National Science Foundation of Belgium (NFWO/FNRS). He acknowledges support from the Minerva Foundation, Munich, Germany. This study was inspired by discussions with Dr. Russell D. Johnson III (NIST) on the poor performance of standard computational thermochemistry methods.
# A Test of Pre-Main Sequence Evolutionary Models Across the Stellar/Substellar Boundary Based on Spectra of the Young Quadruple GG Tau<sup>1</sup>
<sup>1</sup>Based partly on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract No. NAS5-26555.
## 1 Introduction
The masses and ages of T Tauri stars, a class of young ($`\lesssim 10^7`$ yrs), low mass (M $`\lesssim `$ 2 M) stars, are primarily inferred from the comparison of their observationally determined stellar temperatures and luminosities to the predictions of theoretical pre-main sequence (PMS) evolutionary models (e.g., D’Antona & Mazzitelli 1994, 1997; Swenson et al. 1994; Burrows et al. 1997; Baraffe et al. 1998). Unfortunately, these mass and age estimates are currently very uncertain because of the uncertainties in the input physics of the evolutionary models. There are at best only minimal constraints on the choice of opacities, the convection prescription, and the proper treatment of the atmosphere (e.g., grey vs. non-grey). As a consequence, evolutionary models are computed with a variety of assumptions and predict discrepant masses and ages when compared to a specific luminosity and temperature. For example, a 1 Myr, 0.08 M star according to the Baraffe et al. (1998) model has the same luminosity and temperature as a 5 Myr, 0.12 M star according to the D’Antona & Mazzitelli (1997) model. Variations by a factor as large as 10 in age and 2 in mass are common between the models.
An additional problem in determining the masses and ages of T Tauri stars is that it is unclear whether a temperature scale similar to dwarfs or giants is more appropriate for these moderately over-luminous, young stars. While the dwarf and giant temperature scales are nearly identical near spectral type M0, the giant scale is more than 400<sup>o</sup>K hotter than the dwarf scale by spectral type M7 (e.g., Leggett et al. 1996 vs. Perrin et al. 1998). Thus even with accurate spectral types it is difficult to place T Tauri stars on an HR diagram for comparison with evolutionary models. For mid- and late-M T Tauri stars, these temperature scale differences lead to additional mass and age discrepancies by factors of a few.
Much of the uncertainty in the evolutionary models and the T Tauri temperature scale is a consequence of the limited observational constraints at the youngest ages and at solar or lower masses. Open clusters at ages of $`\sim `$100 Myrs, such as the Pleiades, have been used to test the later stages of evolutionary models; cluster members spanning a wide range in mass should lie along a single isochrone for an accurate evolutionary model using the proper temperature scale (e.g., Luhman 1998; Stauffer, Hartmann & Barrado y Navascues 1995; Stauffer et al. 1997). While this test has had some success in distinguishing between evolutionary models at these older ages, it is not possible to conduct this test with the youngest clusters or T associations such as Orion or Taurus-Auriga. The spread in ages for cluster members in these star forming regions is comparable to the age of the cluster (e.g., Hillenbrand 1997; Luhman & Rieke 1998; Luhman et al. 1998b).
Young binary stars provide a variety of methods for assessing the validity of the masses and ages inferred from evolutionary models. Binaries exhibiting orbital motion can be used to estimate the sum of the stellar masses (Ghez et al. 1995; Roddier et al. 1995; Thiébaut et al. 1995; Simon, Holfeltz & Taff 1996), double-lined spectroscopic binaries provide accurate mass ratios (Lee 1992; Figueiredo 1997; Prato 1998), and the rare eclipsing binaries yield both masses and radii (Popper 1987; Claret, Giménez & Martín 1995; Martín and Rebolo 1993; Corporon et al. 1994, 1996; Casey et al. 1998). Unfortunately, these types of binaries have offered little constraint on the earliest stages of PMS evolution as the orbital solutions are too preliminary and/or the stellar components are near the zero age main-sequence (see review by Mathieu et al. 1998).
The relative ages of young multiple systems offer an additional constraint on evolutionary models and the T Tauri temperature scale. Unlike the youngest open clusters, the components of a multiple system, which can be thought of as a mini-cluster, are expected to have a negligible age difference. The correct evolutionary model and temperature scale should yield the same age for all components. Here we pursue this test using the relative ages of the PMS quadruple GG Tau in the nearby star forming region Taurus-Auriga (D = 140 pc; Kenyon, Dobrzycka & Hartmann 1994). GG Tau is a hierarchical quadruple composed of two binary stars (Figure 1). The close pair, GG Tau A, with components Aa & Ab (separation 0$`\stackrel{}{\mathrm{.}}`$25 $`\approx `$ 35 AU) is separated by 10$`\stackrel{}{\mathrm{.}}`$1 from a wider pair, GG Tau B, with components Ba & Bb (= GG Tau/c; separation 1$`\stackrel{}{\mathrm{.}}`$48 $`\approx `$ 207 AU). The GG Tau system is a particularly useful system for testing evolutionary models since there also exists a dynamical mass estimate for GG Tau A from kinematic studies of its circumbinary disk (Dutrey, Guilloteau & Simon 1994; Guilloteau, Dutrey & Simon 1998).
This test of evolutionary models requires accurately determined stellar properties for each component, which can only be extracted from spatially resolved spectra and photometry. We have therefore obtained spatially separated optical spectra of GG Tau Aa and Ab using the Hubble Space Telescope (section 2.1), and of GG Tau Ba and Bb with HIRES and LRIS on the 10-m W. M. Keck telescopes (sections 2.2 & 2.3). These spectra are used in conjunction with high spatial resolution photometry to determine the spectral type, luminosity and accretion activity of each component (section 3). Constraints on evolutionary models and the inferred stellar properties of the GG Tau components are discussed in section 4 and our conclusions are summarized in section 5.
## 2 Optical Spectroscopy and Data Analysis
### 2.1 Faint Object Spectrograph on the Hubble Space Telescope
The Faint Object Spectrograph (FOS) aboard the Hubble Space Telescope (HST) was used to obtain spatially resolved spectra of each component of the 0$`\stackrel{}{\mathrm{.}}`$25 binary GG Tau A on 1995 Nov 8. Each star was observed through the 0$`\stackrel{}{\mathrm{.}}`$09 $`\times `$ 0$`\stackrel{}{\mathrm{.}}`$09 aperture with the G570H (4600 - 6800 Å) and the G400H (3250 - 4800 Å) gratings, yielding a spectral resolution of R $`\sim `$ 1400 (cf. Keyes et al. 1995). Similar FOS observations of other close binary systems indicate that scattered light contamination from a nearby ($`\sim 0\stackrel{}{\mathrm{.}}3`$) point source is negligible ($`\lesssim 0.2`$ percent), as is expected from the sharp COSTAR PSF (Keyes et al. 1995). The HST spectra were initially calibrated by the FOS calibration pipeline (cf. Keyes et al. 1997). However, flat fielding information for small apertures was not initially available, thus the spectra were later flat fielded using the appropriate sensitivity functions provided by T. Keyes & E. Smith (private communication), which are now available in the calibration pipeline.
### 2.2 HIRES at Keck
High resolution spectra of GG Tau Ba and Bb were obtained on 1997 Dec 10 with the Keck I telescope using the High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994). With the 0$`\stackrel{}{\mathrm{.}}`$86 slit (HIRES Decker ’C1’), the instrument yielded 15 spectral orders from 6350 - 8730 Å (with gaps between the orders) at a spectral resolution of R $`\sim `$ 45,000. The wavelength scale was determined from a thorium-argon calibration lamp exposure. Several mid-M Hyades stars were also observed with this setup for spectral comparison.
Spectra of both components of GG Tau B were obtained by aligning the pair along the slit. The spectra were binned by 2 in the spatial direction, which yielded an effective plate scale of 0$`\stackrel{}{\mathrm{.}}`$38/pixel. The peaks of the spatial intensity distribution for each component were clearly separated, although the wings overlapped. In order to deconvolve the spectra accurately, a simple model consisting of two Gaussians was fit to each 1D cut in the spatial direction of the two dimensional spectra. The full-width half maximum of the best fit Gaussians, approximately equal to the seeing, was 0$`\stackrel{}{\mathrm{.}}`$8.
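A minimal version of this deconvolution step might look like the following (Python); the function and variable names are illustrative, and a single shared Gaussian width standing in for the $`\sim `$0$`\stackrel{}{\mathrm{.}}`$8 seeing is assumed.

```python
# Sketch: fit two Gaussians of shared width to each spatial cut of the
# 2-D spectrum, so the overlapping Ba/Bb profiles can be separated.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, m1, a2, m2, sig):
    g = lambda a, m: a * np.exp(-0.5 * ((x - m) / sig) ** 2)
    return g(a1, m1) + g(a2, m2)

def extract_pair(frame, p0):
    """frame: (n_wave, n_spatial) array; p0: initial (a1, m1, a2, m2, sig)."""
    flux_a, flux_b = [], []
    x = np.arange(frame.shape[1])
    for cut in frame:
        p0, _ = curve_fit(two_gauss, x, cut, p0=p0)  # warm-start each column
        flux_a.append(p0[0])
        flux_b.append(p0[2])
    return np.array(flux_a), np.array(flux_b)
```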
### 2.3 LRIS at Keck
Low resolution spectra of the 1$`\stackrel{}{\mathrm{.}}`$48 binary GG Tau B were obtained 1997 Dec 9 with the Keck II Low Resolution Imaging Spectrograph (LRIS; Oke et al. 1995). The observations were conducted with a 400 l/mm grating and a one arcsecond slit. This setup yielded a spectral resolution of R $`\sim `$ 1300, spanning the wavelength range from 6400 - 9400 Å. The wavelength scale was determined from a Neon-Argon calibration lamp exposure.
A spectrum of the fainter component, GG Tau Bb, was obtained by centering the slit on the Bb component with the slit aligned perpendicular to the separation axis. This slit orientation was chosen to minimize the contamination from GG Tau Ba. To estimate the resulting contribution from Ba, a second ’offset’ spectrum was obtained on the opposite side of Ba at the same separation and slit orientation. This offset spectrum, which is dominated by scattered light from Ba, should not be contaminated by the faint companion Bb on the opposite side of Ba. We therefore assign the offset spectrum to GG Tau Ba.
Due to uncertainties in positioning the slit, the intensity of the offset spectrum was an overestimate of Ba’s contribution to Bb’s spectrum; a direct subtraction (Bb spectrum minus offset) resulted in a negative continuum at the blue end. The general appearances of Bb’s spectrum and the offset spectrum were markedly different (with Bb’s spectrum characteristic of a cooler object), suggesting only a minimal amount of contamination by Ba in Bb’s spectrum. In order to quantify and remove Ba’s contribution, the offset spectrum was scaled such that the subtracted spectrum (Bb spectrum minus scaled offset) yielded TiO absorption strengths at 7124 Å, 7743 Å and 8432 Å identical to those measured in the HIRES spectrum. The subtraction removed only about 20 percent of Bb’s continuum at 6500 Å and 5 percent of Bb’s continuum at 9000 Å. Uncertainty in this subtraction led to an increased uncertainty in the continuum level near 6500 Å and thus in the inferred equivalent width (EW) of H$`\alpha `$ at 6563 Å for Bb (section 3.2). The inferred spectral type, however, is quite robust (section 3.1).
## 3 Results
The primary goal of our spectroscopic analysis is accurate spectral classification of each component, which includes determining their spectral types, confirming that all components are indeed T Tauri stars and establishing their T Tauri types (classical vs. weak). Classical T Tauri stars are thought to be accreting circumstellar material and, consequently, exhibit strong Balmer series emission attributable to the high temperature regions generated from the accretion flow (e.g., Basri & Bertout 1989; Gullbring 1994). Weak T Tauri stars are presumed to be experiencing no accretion and the ‘weak’ Balmer series emission common in their spectra is attributed to enhanced chromospheric activity (e.g., Walter et al. 1988). We use the strength of H$`\alpha `$ emission to distinguish classical from weak T Tauri stars, and a classical/weak dividing value set by Martín (1998); T Tauri stars are considered classical if their EW(H$`\alpha `$) $`\geq `$ 5 Å for K stars, $`\geq `$ 10 Å for M0-M2 stars and $`\geq `$ 20 Å for cooler stars. We characterize the excess optical emission or veiling for each component, which is common in classical T Tauri stars with large accretion rates; neglecting this excess emission can affect both the inferred spectral type and the stellar luminosity (e.g., Hartigan et al. 1991). The radial and rotational velocities of GG Tau Ba and Bb are also extracted from the HIRES spectra.
### 3.1 The Inferred Spectral Types
The HST FOS spectra of GG Tau Aa and Ab are shown in Figure 2. Their spectral types were established by comparison with spectral standards from Montes et al. (1997) over the temperature sensitive region 5700 - 6800 Å. This longer wavelength portion of each spectrum was used for spectral classification since it suffers the least from continuum excess emission common in classical T Tauri stars (cf. Basri & Batalha 1990; Hartigan et al. 1991; Gullbring et al. 1998). Additionally, this wavelength region is not strongly sensitive to surface gravity, which can affect spectral classification at even longer wavelengths for these moderately over-luminous stars. Based on these comparisons, we classify Aa as a K7 $`\pm `$ 1 and Ab as an M0.5 $`\pm `$ 0.5. The best fit dwarf spectra are also shown in Figure 2. As can be seen in the Figure, the CaH (6382 & 6389 Å; Kirkpatrick, Henry & McCarthy 1991) feature is weaker in both Aa and Ab than in the comparison dwarf and is indicative of a lower surface gravity.
The LRIS spectra of GG Tau Ba and Bb are shown in Figure 3. Sections of their HIRES spectra near Li I 6708 Å, H$`\alpha `$ 6563 Å and K I 7699 Å are shown in Figure 4. The spectral types for Ba and Bb were established by comparison of the LRIS spectra with spectral standards from Kirkpatrick et al. (1991) and Kirkpatrick, Henry & Irwin (1997) over the temperature sensitive region 6400 - 7600 Å. Based on these comparisons, we classify Ba as an M5 $`\pm `$ 0.5 and Bb as an M7 $`\pm `$ 0.5. The best fit dwarf and giant spectra are shown in Figure 3, and are roughly identical over this temperature sensitive region except near the gravity sensitive CaH band at $``$ 7000 Å (Kirkpatrick et al. 1991). Photospheric absorption features in the HIRES spectra are consistent with these inferred spectral types.
As with GG Tau Aa and Ab, the surface gravities of GG Tau Ba and Bb appear less than that of a dwarf. This is particularly striking in the LRIS spectrum of Bb. As Figure 3 shows, the strength of the gravity sensitive CaH band at $``$ 7000 Å for Bb is intermediate between that of giants and dwarfs, while other gravity sensitive features (K I at 7665 & 7699, Na I at 8183 & 8195; Kirkpatrick et al. 1991; Martín, Rebolo & Zapatero-Osorio 1996; Schiavon et al. 1997) are weak and are much more giant-like than dwarf-like. Figure 4 shows the K I absorption at 7699 Å in detail for both Ba and Bb. It is notably narrower in the T Tauri stars than in the Hyades M5 dwarf RHy 83 (Reid 1993), again indicative of a lower surface gravity. These results are consistent with the giant-like characteristics of other mid- to late-M T Tauri stars (Basri & Marcy 1995; Luhman, Liebert & Rieke 1997; Briceño et al. 1998; Luhman et al. 1998a; Luhman & Rieke 1998; Neuhäuser & Comerón 1998) and strengthens our conclusion that these are very young PMS objects.
### 3.2 The T Tauri Type and Continuum Excesses
All four GG Tau components exhibit prominent H$`\alpha `$ emission (Figures 2, 3, 4; Table 1). For the close pair, this emission is much stronger in Aa than Ab (EW\[H$`\alpha `$\]<sub>Aa</sub> = $`57\pm 2`$ Å and EW\[H$`\alpha `$\]<sub>Ab</sub> = $`16\pm 1`$ Å). For the wider pair, the EW\[H$`\alpha `$\] measured from both the LRIS and the HIRES spectra give similar values for GG Tau Ba ($`21\pm 1`$ Å & $`24\pm 1`$ Å). However, the value measured for Bb varied by a factor of two ($`43\pm 4`$ Å & $`20\pm 1`$ Å), which occurred over a 1 day timescale. Rapid variations such as this have been previously observed for T Tauri stars (e.g., Johns-Krull & Basri 1997), and may be quite common for late-M T Tauri stars where relatively small changes in the H$`\alpha `$ emission can result in significant changes in the observed EW because of the relatively low continuum flux.
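The EW measurement itself reduces to integrating the line against a local continuum estimate; a minimal sketch (Python) follows, with the continuum and line windows chosen purely for illustration.

```python
# Sketch: equivalent width from flanking continuum windows. With this sign
# convention emission lines give negative EWs; the values quoted above are
# emission equivalent widths, i.e. |EW|.
import numpy as np

def equivalent_width(wave, flux, line=(6553.0, 6573.0),
                     blue=(6520.0, 6540.0), red=(6590.0, 6610.0)):
    cont = ((wave > blue[0]) & (wave < blue[1])) | \
           ((wave > red[0]) & (wave < red[1]))
    slope, icept = np.polyfit(wave[cont], flux[cont], 1)   # linear continuum
    in_line = (wave > line[0]) & (wave < line[1])
    fc = slope * wave[in_line] + icept
    return np.trapz(1.0 - flux[in_line] / fc, wave[in_line])
```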
Strong Li I absorption at 6708 Å, a signature of extreme youth, was detected in the HIRES spectra of both GG Tau Ba and Bb (Figure 4, Table 1). Because of the limited resolution of the FOS spectra, no Li I was detected in the spectra of GG Tau Aa or Ab (Table 1 gives upper limits). However, high resolution ground-based spectra of the unresolved pair, which are known to be a physical pair based on orbital motion (Ghez et al. 1995; Roddier et al. 1996), show strong Li I absorption (EW\[Li I\] = 0.72 Å; Basri, Martín & Bertout 1991). The low surface gravity features and Li I absorption, in conjunction with the strong H$`\alpha `$ emission above the weak T Tauri star limit (section 3) imply that all four components of the GG Tau system are classical T Tauri stars.
The resolution of our FOS spectra is insufficient to determine any continuum excess emission or veiling which may be present in the spectra of Aa and Ab. However, using higher resolution ground-based spectra of the unresolved GG Tau A pair, Gullbring et al. (1998) find GG Tau A exhibits only a small amount of excess emission. Specifically, they measure the ratio of excess flux to photospheric flux, $`r`$, to be only 0.24 over the spectral region 4500 - 5100 Å. This excess emission, which can be attributed to the optically brighter primary ($`\mathrm{\Delta }`$V = 2.9; section 2.2), is consistent with other continuum excess measurements for GG Tau A (Basri & Batalha 1990; Hartigan et al. 1991) implying that the veiling is not highly variable and is usually low. Since $`r`$ is known to decrease towards longer wavelengths where the photosphere becomes relatively brighter (Basri & Batalha 1990; Hartigan et al. 1991), we expect little effect on the inferred spectral types; large veilings at levels greater than $`r\sim 0.4`$ at 6300 Å are needed to alter the observed spectral type for Aa (K7) by one spectral subclass (M0), which is nevertheless within our uncertainties.
Additionally, since Balmer series emission is known to be correlated with continuum veiling (Basri & Batalha 1990; Hartigan et al. 1990), the relatively weak Balmer series emission for Ab, which is only modestly above the weak T Tauri star limit (section 3), suggests that it experiences little or no optical veiling as well.
With high resolution spectra of both GG Tau Ba and Bb, continuum excess emission can be measured directly from these spectra once their radial velocities and rotational velocities are established. The radial velocities for Ba and Bb were determined via cross-correlation with Gl 876, Gl 447 (Delfosse et al. 1998) and Hyades M dwarfs (Reid 1999). We measure radial velocities of $`16.8\pm 0.7`$ km/s for Ba and $`17.1\pm 1.0`$ km/s for Bb. The rotational velocities ($`v`$sin$`i`$) for Ba and Bb were determined from the width of the peak in their cross-correlation with the slowly rotating ($`<`$ 2 km/s) mid-M dwarfs Gl 876 and Gl 447 (Delfosse et al. 1998). We find a $`v`$sin$`i`$ of $`9\pm 1`$ km/s for Ba and $`8\pm 1`$ km/s for Bb. The uncertainties in both the radial and rotational velocities are determined from the scatter in these estimates over several orders.
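The following sketch (Python) shows this cross-correlation machinery in its simplest form; the peak location gives the radial velocity, and comparing the peak width to that of a template-template correlation constrains $`v`$sin$`i`$. All names and grid choices here are illustrative.

```python
# Sketch: cross-correlate an object spectrum against a template over a
# velocity grid; the CCF peak gives the radial velocity, its width the
# rotational broadening (after comparison with a narrow-lined template).
import numpy as np

C_KMS = 2.99792458e5

def ccf(v_grid, wave, flux, t_wave, t_flux):
    f = flux - flux.mean()
    out = []
    for v in v_grid:
        t = np.interp(wave, t_wave * (1.0 + v / C_KMS), t_flux)
        out.append(np.sum(f * (t - t.mean())))
    out = np.array(out)
    v_rad = v_grid[np.argmax(out)]
    half = 0.5 * (out.max() + out.min())
    fwhm = np.ptp(v_grid[out > half])      # crude FWHM of the CCF peak
    return v_rad, fwhm
```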
The spectra of GG Tau Ba and Bb were then compared to dwarf spectra of similar spectral type and $`v`$sin$`i`$ to identify the presence of continuum veiling (Hartigan et al. 1989). The T Tauri spectra were modeled as a standard stellar spectrum plus a constant level of excess emission. Orders with no prominent telluric absorption features were divided up into 20 Å bins and the optical excess emission within each bin was determined by minimizing the $`\chi ^2`$ of the model fit. We found no detectable veiling for either GG Tau Ba or Bb over nearly the entire wavelength coverage of the spectra. The veiling upper limits at 6500 Å are $`r<0.1`$ for Ba and $`r<0.25`$ for Bb, while at 8400 Å the limits are $`r<0.05`$ for Ba and $`r<0.1`$ for Bb (where $`r=`$ F<sub>excess</sub> / F<sub>photosphere</sub>). These veiling upper limits can be used to place upper limits on the mass accretion rates. If it is assumed that veiling is caused by the gravitational energy released from accreting material (Hartigan, Edwards & Ghandour 1995; Gullbring et al. 1998), then the mass accretion rate for both Ba and Bb must be less than $`10^{-9}`$ M/yr. Thus, although both GG Tau Ba and Bb do show signatures of accretion (strong H$`\alpha `$ emission, NIR excess emission), this accretion is not likely to alter their final masses significantly.
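Bin by bin, the veiling fit described here is a two-parameter linear least-squares problem; a minimal sketch (Python, with illustrative names) follows.

```python
# Sketch: in each ~20 A bin, model the observed spectrum as
# flux = a * template + b and return r = b / (a * <template>),
# the ratio of excess to photospheric flux in that bin.
import numpy as np

def veiling_in_bin(flux, template):
    A = np.vstack([template, np.ones_like(template)]).T
    (a, b), *_ = np.linalg.lstsq(A, flux, rcond=None)
    return b / (a * template.mean())
```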
### 3.3 Extinction and Luminosity Estimates
The spatially resolved photometric measurements for the four components of GG Tau are shown in Figure 5 and listed in Table 2. The optical values are from the HST measurements of Ghez, White & Simon (1997), which have been transformed onto the more standard Johnson-Cousins system (cf. Ghez et al. 1997). The NIR measurements are the median of spatially resolved JHKL photometry for GG Tau compiled from both the literature and recent observations (White & Ghez 1999).
With spatially resolved photometry and spectral types, the line-of-sight extinction and stellar luminosity of each component can be determined from standard spectral type, color and bolometric correction relations. Much like the relative temperatures of dwarf and giant stars, the colors of late-K and early-M dwarfs and giants are similar, but by mid-M spectral types the optical colors of giant stars begin to become bluer than the colors of dwarf stars (cf. Bessell & Brett 1988). Since dwarf colors are better established than giant colors, we adopt dwarf colors in our analysis and give a simple argument for their preference based on the derived extinctions below. Specifically, we adopt the dwarf color relations of Bessell & Brett (1988) and Bessell (1991) for the K7 star and that of Kirkpatrick & McCarthy (1994) for M0 and cooler stars. We also use the V-L color relations of Kenyon & Hartmann (1995) for all spectral types.
A mean line-of-sight extinction is determined for each component from E(V-R<sub>c</sub>), E(R-I<sub>c</sub>) and E(I<sub>c</sub>-J), using the extinction law of Rieke & Lebofsky (1985) and color excess relations of Hillenbrand (1997) and Luhman & Rieke (1998). We assign the half-range of these three estimates as the uncertainty in the extinction. We note that if giant colors (Thé, Steenman & Alcaino 1984; Bessell & Brett 1988) are used to derive the extinction, the half-range uncertainty estimates are considerably larger, especially for the M5 and M7 components (A<sub>V</sub> error for Ba\[M5\] = 0.37 magnitudes for dwarf colors vs. 0.69 magnitudes using giant colors and A<sub>V</sub> error for Bb\[M7\] = 0.24 magnitudes for dwarf colors vs. 2.12 magnitudes for giant colors). This suggests that the intrinsic photospheric colors of these T Tauri stars may indeed be more similar to dwarfs than giants.
The luminosity is then derived from the reddening corrected I band measurements and a bolometric correction, assuming a distance of 140 pc, the distance to the Taurus-Auriga star forming region (Elias 1978; Kenyon, Dobrzycka & Hartmann 1994; Preibisch & Smith 1997). Typical uncertainties in the distance to Taurus-Auriga are 10 pc, although the large spatial extent of the star forming region on the sky suggests that the uncertainty in the distance to any particular star could be as large as 30 pc. The effect of errors in our assumed distance are discussed in section 4.2.
The bolometric corrections from Bessell (1991) are used for stars hotter than spectral type M3 while the values from Monet et al. (1992) are used for spectral type M3 and cooler. While bolometric corrections for early M stars are reasonably well established, there remains considerable uncertainty in the bolometric corrections for late-M stars (Monet et al. 1992; Kirkpatrick et al. 1993). We assign a generous uncertainty in our adopted bolometric corrections of 0.05 magnitudes for Aa and Ab, and 0.1 and 0.2 magnitudes for Ba and Bb, respectively. The uncertainty in the luminosity is determined from the convolved uncertainty in the bolometric correction, the extinction and the I band photometry. These values are listed in Table 1. It is worth noting that the components of the close pair, GG Tau Aa and Ab, with a projected separation of only 35 AU, differ in visual extinction by 2.5 magnitudes. These extinction values are consistent with the reddened photospheric continua observed in the FOS spectra. The extinction difference is most likely due to the differences in the local distribution of circumstellar material for each component. This demonstrates the need for obtaining spatially separated spectra of all components within a young multiple system in order to correctly determine their stellar and circumstellar properties.
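The chain from photometry to luminosity described in the last two paragraphs can be summarized in a short sketch (Python). The intrinsic colors, the bolometric correction, and the extinction-law ratios are inputs from the adopted dwarf relations and the Rieke & Lebofsky law; the specific value used below for $`A_I/A_V`$ is quoted from memory of that law and should be checked against the original tables.

```python
# Sketch: mean A_V from several color excesses, then log(L/Lsun) from the
# dereddened I magnitude, a bolometric correction, and d = 140 pc.
import numpy as np

def extinction(observed, intrinsic, av_per_excess):
    """Dicts keyed by color, e.g. 'V-Rc'; av_per_excess[c] = A_V / E(c)."""
    est = [av_per_excess[c] * (observed[c] - intrinsic[c]) for c in observed]
    return np.mean(est), 0.5 * (max(est) - min(est))   # mean, half-range error

def log_l_sun(I, A_V, BC_I, d_pc=140.0, aI_over_aV=0.482):
    M_I = I - aI_over_aV * A_V - 5.0 * np.log10(d_pc / 10.0)
    M_bol = M_I + BC_I
    return (4.74 - M_bol) / 2.5          # M_bol(Sun) = 4.74
```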
## 4 Discussion
### 4.1 Evidence for Coeval Formation
The four components of the GG Tau system appear to represent a physical quadruple. The GG Tau A and GG Tau B pairs have a projected separation of only 1400 AU. This separation is considerably less than the size of a typical cloud core ($``$ 20,000 AU) and therefore on a scale consistent with that expected from core fragmentation models which are thought to produce multiple systems (e.g., Bonnell & Bastien 1993; Boss 1996; Sigalotti & Klapp 1997). All components exhibit classical T Tauri star signatures consistent with a similar stage of early evolution. More directly, GG Tau Aa and Ab are known to be a physical pair based on orbital motion studies (Ghez et al. 1995; Roddier et al. 1996). The HIRES radial velocity measurements of both GG Tau Ba and Bb are in good agreement with the radial velocity measurements for GG Tau A of $`17.9\pm 1.8`$ by Hartmann & Stauffer (1989), and thus are consistent with a comoving system. Millimeter maps of the ring around GG Tau A show a well defined extension towards GG Tau B, which may be interpreted as a tidal distortion due to GG Tau B (Koerner, Sargent & Beckwith 1993; Dutrey, Guilloteau & Simon 1994). It is also important to consider that there are no other known T Tauri stars within 30 arcminutes of the GG Tau system. It thus seems very unlikely that four closely spaced yet relatively isolated classical T Tauri stars with similar radial velocities and stellar properties would not be physically associated.
### 4.2 A Test of PMS Evolutionary Models and the T Tauri Temperature Scale
The relative ages of the PMS quadruple GG Tau offer an observational test of evolutionary models and the T Tauri temperature scale. The correct evolutionary model and temperature scale should yield the same age for all components. Currently, the GG Tau system is uniquely suited for the relative age test. Its multiple components span a wide range in spectral type and have all been resolved photometrically and spectroscopically. Other systems with apparently wide ranges in spectral type (e.g., UX Tau; Magazzú, Martín & Rebolo 1991) typically have components that have not been resolved spectroscopically (UX Tau B is a 0$`\stackrel{}{\mathrm{.}}`$14 binary; Duchêne 1998).
In order to place the components of GG Tau onto an H-R diagram for comparison with evolutionary models, the observed spectral types need to be converted into effective temperatures. Unfortunately, the spectral type - effective temperature relation for T Tauri stars is not well known. Since their surface gravity appears to be intermediate between that of dwarfs and giants, the correct T Tauri temperature scale is likely to be constrained between that of dwarfs and giants (Martín et al. 1994; Luhman et al. 1997). We use these extremes as boundaries for the range of plausible temperatures for T Tauri stars.
For the dwarf temperature scale, we use a K7 temperature of 4000<sup>o</sup>K (Bessell 1991) and the M dwarf temperature scale of Leggett et al. (1996) as fit by Luhman & Rieke (1998), which is consistent with the available observational constraints for dwarf stars (Luhman & Rieke 1998). We use the giant temperature scale of Perrin et al. (1998), which has been accurately established via interferometric angular diameter measurements of cool giant stars.
In Figure 6, the components of the GG Tau system are plotted on several H-R diagrams with both the dwarf (solid squares) and giant (open diamonds) temperature scales. A dotted line connects the dwarf and giant temperatures, identifying the range of plausible temperatures for each component. As the Figure illustrates, the dwarf and giant temperature scales are essentially identical for spectral types near M0, but diverge significantly for cooler spectral types. The errorbars plotted for each component correspond to the uncertainty in the spectral type (section 3.1) and luminosity (section 3.3).
The locations of the GG Tau components define an empirical isochrone which can be compared with PMS evolutionary models. We conduct this comparison using six popular PMS evolutionary models, which differ primarily in their prescription for convection, the assumed opacities and the assumption of a grey or non-grey atmosphere. The models we consider include (1) Swenson et al. (1994, model F, hereafter S94), computed with a mixing-length theory for convection, the opacities of Alexander (1992) and Iglesias, Rogers & Wilson (1992) and a grey atmosphere, (2) D’Antona & Mazzitelli (1994, hereafter DM94-MLT), computed with a mixing-length theory for convection, the opacities of Alexander (1992) and Rogers & Iglesias (1992) and a grey atmosphere, (3) D’Antona & Mazzitelli (1994, hereafter DM94-CM), computed with the Full Spectrum of Turbulence model for convection (Canuto & Mazzitelli 1992) and the same opacities and grey atmosphere as DM94-MLT (D’Antona & Mazzitelli 1994 also compute evolutionary models using the Kurucz opacities; we do not consider these, since the Kurucz opacities are inadequate for temperatures $`\lesssim `$4000<sup>o</sup>K where molecules play an important role; D’Antona & Mazzitelli 1997), (4) D’Antona & Mazzitelli (1997, hereafter DM97), also computed with the Full Spectrum of Turbulence convection model and a grey atmosphere, but with updated opacities (Alexander & Ferguson 1994, Iglesias & Rogers 1996), (5) Baraffe et al. (1998, hereafter B98-I), computed with a mixing-length theory for convection with the mixing-length equal to the pressure scale height in the atmosphere, updated opacities (Alexander & Ferguson 1994, Iglesias & Rogers 1996) and a non-grey atmosphere, and finally (6) Baraffe et al. (1998, hereafter B98-II), which is identical to B98-I except that the mixing length is increased by a factor of 1.9 for masses greater than 0.6 M (a factor of two change in the mixing length has inconsequential effects on evolutionary models of mass $`\lesssim `$ 0.6 M; Chabrier & Baraffe 1997, Baraffe et al. 1998). Each of these models is computed with near-solar abundances.
As shown in Figure 6, GG Tau Aa and Ab are coeval according to all evolutionary models and either temperature scale, although as noted above, the temperature scales are nearly identical for these spectral types (K7 and M0.5). We therefore use the isochrone defined by their average age (dashed line in Figure 6) to test the evolutionary models at cooler temperatures.
The S94 model yields the most discrepant ages (Figure 6a). The GG Tau A isochrone predicts temperatures considerably colder than the range of plausible temperatures for GG Tau Ba and, with modest extrapolation, GG Tau Bb. The DM97 model exhibits a similar problem (Figure 6b). The coldest component, GG Tau Bb, is coeval at a temperature consistent with the cool dwarf temperature, but its slightly hotter companion, GG Tau Ba, is coeval at a temperature well below even the dwarf temperature. The DM94-MLT and DM94-CM models appear moderately more successful. Both yield coeval ages (Figures 6c & 6d), but require a very cool dwarf temperature at M5 and then a considerable jump of more than 200<sup>o</sup>K above the dwarf scale at M7.
The B98-I and B98-II models provide the most consistent ages with a temperature scale intermediate between that of dwarfs and giants (Figures 6e & 6f). For the B98-I model, a temperature scale hotter than the dwarf scale by $`160^\mathrm{o}`$K is needed to make both Ba and Bb coeval, resulting in effective temperatures of 3160<sup>o</sup>K and 2840<sup>o</sup>K, respectively. The B98-II model also suggests a temperature scale moderately hotter than the dwarf temperature scale, with perhaps some spectral type dependence; the M5 is coeval when 40<sup>o</sup>K hotter than the dwarf temperature while the M7 is coeval when 140<sup>o</sup>K hotter than the dwarf temperature, resulting in effective temperatures of 3050<sup>o</sup>K and 2820<sup>o</sup>K for Ba and Bb, respectively.
It is important to note that these results are not notably affected by the uncertainties in the distance to GG Tau, even if significant errors (30 pc) are assumed. This is because evolutionary models are primarily vertical in the H-R diagram, and the isochrones are roughly parallel, for these masses at a young age. Thus while changes in distance can notably affect the absolute age, there is much less of an effect on the relative age.
The success or failure of these evolutionary models in yielding consistent ages may identify inadequacies in the input physics of some evolutionary models. The earliest models (S94, DM94-MLT & DM94-CM) all assume a grey atmosphere and similar but slightly outdated sets of opacities, but differ in their convection prescription. Their failure to yield coeval ages using a consistent temperature scale suggests that a grey atmosphere and these older sets of opacities are not appropriate for young stars, independent of the convection prescription. It is also worth noting that these models poorly match the empirical Pleiades isochrone for masses below $`\sim `$0.5 M, although newer evolutionary models are moderately more successful (Stauffer et al. 1995; Luhman 1998; D’Antona & Mazzitelli 1997). However, the DM97 evolutionary model computed with updated opacities does not yield a coeval age, which suggests difficulties in the model other than the choice of opacities. Given the success of both the B98-I and B98-II models, it appears that the assumption of a non-grey atmosphere is an important condition for success. This is not a surprising result. Molecules become stable in the atmosphere at these low effective temperatures ($`\lesssim `$5000<sup>o</sup>K), and constitute a significant source of absorption. Since molecular absorption coefficients depend strongly upon wavelength, a grey atmosphere approximation is not expected to be valid (Chabrier & Baraffe 1997).
Although the relative age test cannot distinguish between the two Baraffe et al. (1998) models, a dynamical mass estimate for GG Tau A exists and offers an additional constraint. Orbital velocity measurements of the circumbinary disk surrounding GG Tau A imply a combined stellar mass (Aa plus Ab) of $`1.28\pm 0.08`$ M (Dutrey et al. 1994; Guilloteau et al. 1998). For comparison, the B98-I model implies a total mass for GG Tau A of $`2.00\pm 0.17`$ M, while the B98-II model implies a total mass of $`1.46\pm 0.10`$ M (section 4.3). The dynamical mass measurements are clearly in much closer agreement with the B98-II models. For completeness we also report that the sum of the masses inferred for Aa and Ab from S94 is $`1.08\pm 0.22`$ M, from DM94-MLT is $`1.12\pm 0.16`$ M, from DM94-CM is $`0.81\pm 0.17`$ M and from DM97 is $`0.80\pm 0.14`$ M. The successful B98-II model is also reasonably consistent with the available mass constraints for GM Aur and DM Tau based on dynamical studies of their circumstellar disks (see Appendix). This agreement suggests that convective mixing lengths nearly twice the pressure scale height are more appropriate for young stars than mixing lengths equal to the pressure scale height. Alternative prescriptions for convection such as the Full Spectrum of Turbulence method (Canuto & Mazzitelli 1992) will hopefully also be considered in future evolutionary models which incorporate a non-grey atmosphere. It is worth noting though that the two models computed with this more complicated convection prescription (DM94-CM & DM97) consistently predict masses which are systematically less than the dynamical mass estimates for GG Tau A (given above), GM Aur and DM Tau (Appendix). We suggest that future evolutionary codes consider convective scales which result in masses consistent with dynamically inferred masses.
### 4.3 The Inferred Stellar Properties: A Substellar Companion
Since the B98-II evolutionary model is the most consistent with the available observational constraints, we use this model and the implied coeval temperature scale, at an age of 1.5 Myrs, to infer the stellar masses (Table 1). The uncertainties in mass for Aa and Ab are determined from the uncertainty in the spectral type, assuming a dwarf temperature scale. The uncertainties in the mass of Ba and Bb are determined from the uncertainty in their luminosity and the assumption that they are constrained to lie on the average age isochrone of Aa and Ab. Of particular interest is the lowest mass component, GG Tau Bb, with a mass of 0.044 $`\pm `$ 0.006 M. This mass is well below the hydrogen burning minimum mass of $`\sim 0.075`$ M (Chabrier & Baraffe 1997), and can therefore be considered a young brown dwarf. As can be seen by comparing the B98-I and B98-II models, changes in the assumed isochrone have little effect on the inferred mass for Bb. This is a consequence of the roughly steady state burning of deuterium, which keeps $`\sim 0.05`$ M objects at a similar luminosity and temperature for nearly 5 Myrs (Baraffe et al. 1998).
With a spectral type of M7, GG Tau Bb is currently the coldest, lowest mass, spectroscopically confirmed companion to a T Tauri star. The only confirmed T Tauri companion of similar spectral type is UX Tau C (M5 - M6; Magazzú et al. 1991; Basri & Marcy 1995). Other low mass companions to T Tauri stars have been suggested based on proximity and photometry (Tamura et al. 1998, Lowrance et al. 1999; Webb et al. 1999), but confirmation of their companionship and mass requires spatially resolved spectroscopy. GG Tau Bb is also one of the coldest, lowest mass T Tauri objects in the Taurus-Auriga star forming region. Only a few objects of comparable mass (Briceño et al. 1998; Luhman et al. 1998a) or possibly lower mass (Reid & Hawley 1999) are known in this region. Moreover, only a handful of T Tauri objects with temperatures cooler than GG Tau Bb are known in any star forming region (Hillenbrand 1997; Luhman et al. 1997; Luhman et al. 1998b; Neuhäuser & Comerón 1998; Wilking, Greene & Meyer 1999; Reid & Hawley 1999).
All components of the GG Tau system appear to support actively accreting circumstellar disks. These disks are inferred from the strong Balmer series emission (Figures 2, 3 and 4) as well as UV and NIR excess emission (Figure 5) characteristic of circumstellar accretion. While circumstellar accretion disks are common in binary systems of a few hundred AU separation (Brandner & Zinnecker 1997; Ghez et al. 1997; Prato 1998), it is intriguing that the substellar companion also supports a circumstellar disk. It appears that young brown dwarf companions of mass $`\sim `$ 0.05 M can support a circumstellar disk. The similarities in the circumstellar properties of GG Tau Bb compared to other low mass T Tauri binaries may also suggest, albeit very speculatively, that brown dwarf companions of mass $`\sim `$ 0.05 M form in the same fashion in which young binary stars are believed to form, from the fragmentation of a collapsing cloud core (e.g., Boss 1988).
## 5 Summary
The spatially resolved optical spectra of the young quadruple GG Tau demonstrate that all components are classical T Tauri stars with surface gravities intermediate between those of dwarfs and giants. The similarities in their position (and relative isolation within the cloud), kinematics, stellar and accretion properties further establish these stars as a physical quadruple.
The components of this ’mini-cluster’, which span from K7 to M7 in spectral type, provide a test of evolutionary models and the temperature scale for very young, low mass stars under the assumption of coeval formation. Of the evolutionary models tested, the Baraffe et al. (1998) models, which are unique in their assumption of a non-grey atmosphere, yield the most consistent ages using a temperature scale intermediate between that of dwarfs and giants. The Baraffe et al. (1998) model computed with a mixing length nearly twice that of the pressure scale height (B98-II) is the most consistent with the dynamical mass estimates for GG Tau A, and is also reasonably consistent with the dynamical mass estimates for GM Aur and DM Tau. This model suggests an age for the system of 1.5 Myrs and a T Tauri temperature scale which diverges modestly from the dwarf temperature scale: $`\sim `$M0 T Tauri stars are consistent with the dwarf scale, while M5 and M7 T Tauri stars are roughly 40<sup>o</sup>K and 140<sup>o</sup>K hotter than the dwarf scale. It is clear that PMS multiple systems with a wide range in stellar masses similar to GG Tau can be used as a powerful test of evolutionary models at this very interesting, but uncertain early stage of stellar evolution.
Using the successful Baraffe et al. model (B98-II) with the implied coeval temperature scale, we find that the GG Tau Bb component is substellar with a mass of $`0.044\pm 0.006`$ M. This substellar companion is particularly intriguing as its large H$`\alpha `$ emission and NIR excess emission suggest that it supports a circumstellar accretion disk. GG Tau Bb is currently the lowest mass, spectroscopically confirmed companion to a T Tauri star, and is one of the coldest, lowest mass T Tauri objects in the Taurus-Auriga star forming region.
Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. Support for this work was provided by NASA through grant NAGW-4770 under the Origins of Solar Systems Program and grant number G0-06014.01-94A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. The authors are grateful to W. Brandner, K. Luhman, J. Patience, D. Popper, L. Prato, R. Webb & B. Zuckerman for helpful comments and discussions and to G. Basri, D. Kirkpatrick & K. Luhman for spectral standards. We thank D. Kirkpatrick and J. Stauffer for helping with the LRIS observations and we greatly appreciate the assistance provided by the FOS instrument scientists T. Keyes and E. Smith.
## Appendix A Mass Constraints on Evolutionary Models from GM Aur and DM Tau
The single T Tauri stars GM Aur and DM Tau have mass estimates from dynamical studies of their circumstellar material (Dutrey et al. 1998; Guilloteau & Dutrey 1994, 1998). Here we compare these mass estimates to the predictions of evolutionary models. In order to place these two stars on the H-R diagram, we adopt the spectral types of K7 for GM Aur (Gullbring et al. 1998, Basri & Batalha 1990, Hartigan et al. 1995) and M1 for DM Tau (Hartmann & Kenyon 1990). These spectral types are converted to effective temperatures using a dwarf temperature scale, which is similar to the giant temperature scale for these spectral types (section 4.2). The line-of-sight extinctions and luminosities are derived from the broad-band photometry reported in Kenyon & Hartmann (1995), assuming dwarf colors, a distance of 140 pc and the methodology outlined in section 3.3. For GM Aur (log T<sub>eff</sub> = 3.602, A<sub>V</sub> = 0.10 mag, log(L/L) = $`-0.25`$), the evolutionary models considered in section 4.2 predict masses of 0.70 M (S94), 0.70 M (DM94-MLT), 0.53 M (DM94-CM), 0.51 M (DM97), 1.06 M (B98-I) and 0.78 M (B98-II). The B98-II model offers the best agreement with the dynamical mass estimate of $`0.84\pm 0.05`$ M (Dutrey et al. 1998). For DM Tau (log T<sub>eff</sub> = 3.566, A<sub>V</sub> = 0.63 mag, log(L/L) = -0.60), the evolutionary models predict masses of 0.58 M (S94), 0.48 M (DM94-MLT), 0.43 M (DM94-CM), 0.44 M (DM97), 0.67 M (B98-I) and 0.64 M (B98-II). Here the B98-II model is less consistent with the dynamical mass of $`0.47\pm 0.06`$ M (Guilloteau & Dutrey 1998), while the various models by D’Antona & Mazzitelli (DM97, DM94-CM, DM94-MLT) are in agreement with the dynamical mass. However, DM Tau is known to be considerably veiled (Hartmann & Kenyon 1990), which typically causes the optical spectrum to look like that of a hotter star. If the intrinsic spectral type for DM Tau is cooler by one spectral subclass (M2), then the Baraffe et al. models (B98-I and B98-II are identical at this mass) predict a mass of 0.50 M which is consistent with the dynamical mass. We caution however that GM Aur and DM Tau currently only offer weak mass constraints due to the uncertainties in their distance and their spectral type. Nevertheless, the B98-II model, in addition to being consistent with the relative ages of the components of the GG Tau system, is also reasonably consistent with the available dynamical mass constraints.
# Toward the Evidence of the Accretion Disk Emission in the Symbiotic Star RR Tel
## 1 Introduction
Symbiotic stars exhibit the thermal components typical of a cool star and a hot star in their spectra with additional emission nebulosity. It is usually thought that they form a binary system of a giant suffering a heavy mass loss and a white dwarf surrounded by an emission nebula (e.g. Kenyon 1986). The double-peaked profiles often observed in the emission lines of many symbiotic systems convincingly imply that the emission regions may be characterized by disk-type motions. The variability and nova-like outbursts may also be attributed to the existence of an accretion disk. Since the giant provides material to the hot star in the form of a stellar wind, it is plausible to expect that an accretion disk may be formed around the hot star (Robinson et al. 1994). Recently, Mastrodemos & Morris (1998) presented their numerical computations on disk formation in a wide binary system of a white dwarf and a giant through a dusty wind.
An important and distinct aspect of the spectroscopy of symbiotic stars is provided by the Raman-scattered features around 6830 Å and 7088 Å, which were identified by Schmid (1989). According to him, they originate as the O VI 1032, 1038 doublet lines, which are absorbed by atomic hydrogen initially in the ground $`1s`$ state and re-emitted as the atom de-excites to the $`2s`$ state. Basic atomic physics of the Raman-scattering processes is discussed by many researchers (Lee & Lee 1997a, Nussbaumer et al. 1989, Saslow & Mills 1969, Sadeghpour & Dalgarno 1992).
The Raman-scattered features are characterized by strong polarization and by a Doppler enhancement of the line widths by a factor of $`\sim `$7, attributable to the incoherence of the scattering. Therefore, the broadened profiles and polarization are expected to carry much information about the physical properties of the emission regions and the scattering geometry. This point is illustrated by the numerical works of Harries & Howarth (1997) and Schmid (1992) and by the spectropolarimetric observations performed by Harries & Howarth (1996) and Schmid & Schild (1990).
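Both numbers follow directly from energy conservation in the scattering, $`1/\lambda _{\mathrm{out}}=1/\lambda _{\mathrm{in}}-1/\lambda _{1s2s}`$, where the $`1s`$–$`2s`$ energy gap is very nearly that of Ly$`\alpha `$. The few lines below (Python) reproduce the quoted wavelengths and the broadening factor; the line wavelengths used are standard values rather than numbers taken from this paper.

```python
# Consistency check: Raman conversion of the O VI doublet by H I, and the
# velocity magnification factor dv_out/dv_in = lam_out / lam_in (~7).
LAM_GAP = 1215.67                   # 1s-2s gap expressed as a wavelength (A),
                                    # essentially the Ly-alpha wavelength
for lam_in in (1031.93, 1037.62):   # O VI doublet (A)
    lam_out = 1.0 / (1.0 / lam_in - 1.0 / LAM_GAP)
    print(round(lam_out), round(lam_out / lam_in, 2))
# -> ~6827 A and ~7085 A, with magnification factors ~6.6 and ~6.8
```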
However, thus far neither the numerical works nor the observational reports appear to address successfully an important role of the Raman-scattered features: they provide a unique view of the emission region as seen from the neutral scatterers, rather than along the line of sight to the observer. One good example of this point is found in the spectropolarimetry of the broad double-peaked H$`\alpha `$ line in the radio galaxy Arp 102B by Corbett et al. (1998). They proposed that the single-peaked profile in the polarized flux is mainly associated with a different line of sight to the scatterers than the observer’s sight line, which is prone to give a double-peaked profile in the direct flux. Similar but more interesting effects are expected in the case of the Raman-scattered lines, because these are purely scattered features with no direct component, and furthermore the associated polarized fluxes carry additional and complementary information.
Recently, simultaneous observations of the Raman-scattered features and emission lines in the UV range have been performed (Espey et al. 1995, Birriel et al. 1998, Schmid 1998). In this regard, this viewpoint about the Raman-scattered lines is expected to be useful for building more elaborate models of symbiotic stars. In this Letter, we adopt this point to compute the profiles and polarization of the Raman-scattered O VI lines and explain many prominent features seen in the symbiotic star RR Tel, and discuss the evolutionary status in relation to bipolar protoplanetary nebulae.
## 2 Model
### 2.1 Accretion Disk in Symbiotic Stars
In this subsection, we give a brief description of our model adopted for the computation of the profiles and polarization of the Raman-scattered lines in the symbiotic star RR Tel. Spectropolarimetric observations of RR Tel have been performed by a number of researchers (e.g. Harries & Howarth 1996, Espey et al. 1995). The main features of the spectra of RR Tel include the clear double-peaked profiles, high degree of polarization and polarization flip in the red wing in the Raman-scattered lines. Because the scatterers responsible for the Raman-scattered lines are believed to be located near the giant, the double-peaked profiles imply that the emission regions are characterized by a disk-type motion.
In Fig. 1 is shown a schematic diagram illustrating the accretion disk emission regions and the scattering geometry adopted in this Letter. There are two main emission regions around the hot star, that is, the red emission region (marked by ‘RER’) and the blue emission region (marked by ‘BER’), which give the red and blue components in the direction of the binary axis, respectively. Here, by the binary axis we mean the line connecting the two stars. According to Mastrodemos & Morris (1998) more mass concentration is expected in the RER than in the BER. We discuss this effect in the next section.
The scatterers are assumed to be concentrated mainly in two regions: region (A) near the giant star, and region (B), which forms a slowly expanding shell. Solf (1984) investigated the bipolar mass outflow of the symbiotic star HM Sge and found that, in addition to the outflow with velocity $`200\ \mathrm{km\ s}^{-1}`$, there are slowly moving features near the equatorial plane. The amount of neutral hydrogen in region (B) is not certain at the moment, and in this paper it is assumed to be much smaller than that in region (A), but not negligible. It is also assumed that when the incident wavevector $`\widehat{𝐤}_i`$ makes an angle less than $`45^{\circ}`$ with the binary axis the photon is scattered in region (A), and otherwise in region (B).
### 2.2 Double-Peaked Emission from an Accretion Disk
We follow the approximation adopted by Horne & Marsh (1986) to generate the line profiles from the disk inclined at various angles. The emission lines are assumed to be optically thick, and the disk obeys Keplerian rotation. We also assume that the disk is geometrically thin, with a disk height ratio $`H/R=1/50`$. Since we do not know enough about the accretion disk in symbiotic stars, and even less about the emission regions in the disk, we simply take the line source function to be a power law $`S_L\propto r^{-1.2}`$, which was used by Horne & Marsh (1986) to match the line shape of H$`\alpha `$ in Z Cha. Fig. 2 shows the line profiles for inclination angles $`i=10^{\circ}`$, $`30^{\circ}`$, $`60^{\circ}`$, and $`90^{\circ}`$. The maximum velocity of the emission region is chosen to be $`50\ \mathrm{km\ s}^{-1}`$, and thermal broadening has not been applied. We note that the low-inclination profiles are narrower than their high-inclination counterparts.
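A minimal sketch of this construction follows (not the full Horne & Marsh treatment: just a thin Keplerian disk binned in line-of-sight velocity with $`S_L\propto r^{-1.2}`$; the inner and outer radii are illustrative choices):

```python
import numpy as np

def disk_profile(incl_deg, v_max=50.0, r_in=1.0, r_out=25.0,
                 nr=400, nphi=720, nbins=101):
    """Line profile of a thin Keplerian disk with S_L ~ r**-1.2.

    Flux in each velocity bin is the sum of S_L(r)*area over disk
    elements whose line-of-sight velocity falls in that bin; v_max
    (km/s) is the Keplerian speed at r_in.
    """
    sin_i = np.sin(np.radians(incl_deg))
    r = np.linspace(r_in, r_out, nr)
    phi = np.linspace(0.0, 2.0*np.pi, nphi, endpoint=False)
    rr, pp = np.meshgrid(r, phi)
    v_los = v_max * np.sqrt(r_in/rr) * np.cos(pp) * sin_i
    weight = rr**(-1.2) * rr                    # S_L(r) * r dr dphi
    prof, edges = np.histogram(v_los, bins=nbins,
                               range=(-v_max, v_max), weights=weight)
    return 0.5*(edges[1:] + edges[:-1]), prof / prof.max()

for i in (10, 30, 60, 90):
    v, f = disk_profile(i)
    print(f"i = {i:2d} deg: horns near +/- {abs(v[np.argmax(f)]):.1f} km/s")
```

The narrowing at low inclination follows simply from the $`\mathrm{sin}i`$ projection of the Keplerian velocities.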
In this paper, we do not attempt an accurate fit to the observational results; rather, we want to point out the physical origins of the main characteristic features in the data. Therefore, for the computation of the generic profiles and polarization, we approximate the profiles in Fig. 2 by two Gaussians and adopt the peak strength ratio as a new free parameter. We refer the reader to Lee & Lee (1997b) for a detailed description of the Monte Carlo computation of the scattered flux profile and polarization (see also Harries & Howarth 1997, Schmid 1992).
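For reference, a sketch of the two-Gaussian stand-in (the peak velocity and width below are illustrative, and the blue-to-red ratio of 2/3 anticipates the choice made in the next section):

```python
import numpy as np

def incident_profile(v, v_peak=30.0, width=12.0, blue_to_red=2.0/3.0):
    """Two-Gaussian approximation to the double-peaked profiles of Fig. 2."""
    g = lambda v0: np.exp(-0.5*((v - v0)/width)**2)
    return blue_to_red * g(-v_peak) + g(+v_peak)

v = np.linspace(-120.0, 120.0, 481)
flux = incident_profile(v)    # this is what the Monte Carlo run would scatter
```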
## 3 Results and Discussion
### 3.1 Profile and Polarization
In Fig. 3 is shown the main result. We set the ratio of the blue peak strength to the red peak strength to be 2/3 and discuss this choice in the next subsection. The top panel shows the flux and polarization of the component scattered near the giant. In Panel (b) the same quantities are shown for the radiation scattered in the spherical shell receding with velocity $`v_{shell}=30\ \mathrm{km\ s}^{-1}`$ and total scattering optical depth $`\tau _T=0.3`$. In the bottom panel is shown the sum of the preceding two components. Here, the polarization direction is represented by the sign: the positive sign represents polarization perpendicular to the binary axis, whereas the negative sign represents polarization parallel to it.
The double-peaked profile in the top panel is obtained because the scatterers near the giant see a double-peaked incident source. This component is strongly polarized in the direction perpendicular to the binary axis. Because the relative velocity of the scatterers near the giant is neglected, the scattered flux profile is almost the same as the incident profile along the binary-axis direction, and the breadth of the feature mainly reflects the kinematics of the source region.
In Panel (b), we obtain a broad profile with a single peak. The polarization is in the direction parallel to the binary axis and is weaker because region (B) subtends a larger solid angle than region (A) does. Due to the relative motion of the shell, the location of the peak is redshifted from the center. The single-peaked profile is obtained because the averaged profile incident to the shell is smooth with a single peak near the center.
Combining the two components brings out fine points that may be revealed in the spectropolarimetry of symbiotic stars. Firstly, the strength of a scattered component is determined mainly by the scattering optical depth and the solid angle of the scattering region. Our choice is such that the shell has a small scattering optical depth, so that the synthetic scattered flux is dominated by the component scattered in region (A). In RR Tel, most emission lines are single-peaked while the Raman lines show double-peaked profiles, which strongly implies emission regions in disk-type motion viewed at low inclination.
Secondly, the shell-scattered component is redshifted and single-peaked. Therefore, this component adds flux to the red part and slightly increases the ratio of the red peak strength to that of the blue in the synthetic flux. However, since the shell-scattered component is polarized in the parallel direction, in the region where it overlaps with the flux scattered in region (A) the total polarized flux is reduced by partial cancellation, although it remains polarized in the perpendicular direction. Because the shell-scattered component extends further redward than the component scattered in region (A), a parallel-polarized flux remains in the reddest part, where the (A)-scattered component does not contribute. This polarization flip in the red wing is an important feature seen in many spectropolarimetric observations of symbiotic stars.
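The cancellation and the red-wing flip can be demonstrated with a toy composition of the two components (all amplitudes, widths, and polarization levels below are illustrative choices, not fitted values):

```python
import numpy as np

v = np.linspace(-60.0, 120.0, 361)                    # km/s
g = lambda v0, w: np.exp(-0.5*((v - v0)/w)**2)

# (A): scattered near the giant; double-peaked, perpendicular (Q > 0)
I_A = (2.0/3.0)*g(-30.0, 12.0) + g(+30.0, 12.0)
Q_A = +0.10 * I_A
# (B): shell-scattered; single-peaked, redshifted, broad, parallel (Q < 0)
I_B = 0.35 * g(+30.0, 35.0)
Q_B = -0.05 * I_B

Q = Q_A + Q_B
crossings = v[np.where(np.diff(np.sign(Q)) != 0)[0]]
print("polarization flips from perpendicular to parallel near",
      crossings, "km/s")   # in the red wing, where only (B) contributes
```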
Finally, as Harries & Howarth (1996) pointed out, the overall profiles observed in many symbiotic systems are broader than expected from the terminal speed of the stellar wind associated with K or M type giants, and the commonest profile type is triply-peaked, with the reddest component often polarized in the opposite sense to the rest of the profile. It turns out that the overall breadth of the profile is determined by the kinematics of the source region and the relative motion of the scatterers. Furthermore, by increasing the speed of the receding shell, we can also generate triply-peaked profiles with the polarization flipped in the reddest component.
### 3.2 O VI Doublet Strength Ratio
In this subsection we discuss a possible prediction of the symbiotic star model consisting of an accretion disk with a bipolar wind. According to Mastrodemos & Morris (1998), the RER is expected to have a higher optical depth than the BER. For the O VI 1034 doublet, the 1032 Å photons have twice the optical depth of the 1038 Å photons. In an optically thin medium, the emergent line strength ratio equals the oscillator strength ratio of 2:1. In an optically thick medium, however, the emergent line strength of the 1032 Å component becomes similar to that of the 1038 Å component. Complications may occur in a non-stationary medium, where the sensitive dependence of the resonance scattering cross section on the velocity field may easily alter the escape probability, which eventually becomes proportional to the velocity gradient (e.g. Sobolev 1963, Lee & Blandford 1997, Michalitsianos et al. 1988). Further complications can be expected if the medium is dusty, in which case resonantly scattered O VI photons are subject to destruction by dust particles (e.g. Neufeld 1991).
It is an interesting possibility that, if the BER is optically thin while the RER is thick, the Raman-scattered line around 6830 Å will show a larger ratio of blue to red peak strength than the 7088 Å feature, as depicted in Fig. 1. It appears that in RR Tel the 7088 Å feature has a weaker red part than the 6830 Å feature (Harries & Howarth 1997). However, since the 7088 Å feature is much weaker, higher resolution spectroscopy with a good signal-to-noise ratio is required to exclude other possibilities such as selective interstellar absorption.
Hayes & Nussbaumer (1986) proposed an electron density $`n_e\sim 3\times 10^6\ \mathrm{cm}^{-3}`$ and an emission-line region size $`R\sim 10^{15}\ \mathrm{cm}`$ for RR Tel. A simple computation of the line-center optical depth of the O VI 1034 doublet in the emission region gives
$$\tau _c\simeq 7.2\times 10^4\,T_4^{-1/2}\,[f_iA_{OVI}/10^{-4}]\,[n_e/(10^6\ \mathrm{cm}^{-3})]\,[R/(100\ \mathrm{AU})],$$
$`(3.1)`$
where $`T_4`$ is the temperature in units of $`10^4\ \mathrm{K}`$, $`A_{OVI}`$ is the O VI number fraction, and $`f_i`$ is the oscillator strength of the O VI resonance transition (Rybicki & Lightman 1979). Evidently the emission region in RR Tel is very optically thick, and the O VI doublet strength ratio would be nearly 1:1 over the entire region, even if the BER were an order of magnitude lower in mass concentration than the RER. Therefore, both high resolution spectroscopy and further theoretical work on the radiative transfer, based on a more refined model, will shed light on this point.
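A one-line evaluation of Eq. (3.1) for the Hayes & Nussbaumer numbers (taking, for illustration, $`T_4=1`$ and $`f_iA_{OVI}=10^{-4}`$; note that $`10^{15}\ \mathrm{cm}\simeq 67`$ AU):

```python
AU = 1.496e13   # cm

def tau_c(T4=1.0, fi_A=1e-4, n_e=1e6, R_cm=1e15):
    """Line-center optical depth of Eq. (3.1)."""
    return 7.2e4 * T4**-0.5 * (fi_A/1e-4) * (n_e/1e6) * (R_cm/(100.0*AU))

print(f"tau_c ~ {tau_c(n_e=3e6):.1e}")   # ~1.4e5: very optically thick
```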
### 3.3 Evolutionary Status of Symbiotic Stars
Asymmetric morphologies of the emission nebulae are known in many symbiotic systems, including V 1016 Cyg (Schild & Schmid 1996) and HM Sge (Solf 1983, Eyres et al 1995). Solf (1984) emphasized that a large fraction of symbiotic stars exhibit bipolarity and that there is a remarkable morphological resemblance to postnova shells and protoplanetary nebulae. The spectroscopic similarity between symbiotic stars and bipolar protoplanetary nebulae has led many researchers to propose evolutionary links between them (e.g. Iben & Tutukov 1996, Corradi 1995, Corradi & Schwarz 1995). A supporting argument is that binarity may play an important role in forming an aspherical nebular morphology, typically characterized by bipolarity (Morris 1987, Soker 1998). However, the binarity of symbiotic stars is well established, whereas that of the bipolar protoplanetary nebulae still remains controversial.
The formation of an accretion disk in a young planetary nebula has been discussed by a number of researchers (e.g. Morris 1987, Soker & Livio 1994). Recently, Mastrodemos & Morris (1998) presented an interesting numerical computation of dusty wind accretion in a detached binary consisting of a mass-losing AGB star and a hot star, which may be responsible for the bipolar morphology of protoplanetary nebulae. They found that a permanent and stable accretion disk is formed around the hot companion, with efficient cooling associated with dust in the wind (see also Theuns & Jorissen 1993). In particular, they concluded that the limiting binary separation for disk formation should be greater than 20 AU for their M4 model. It is therefore interesting that the evolutionary links between symbiotic stars and bipolar protoplanetary nebulae also strengthen the case for the hypothesis of accretion disk emission in RR Tel. Furthermore, as shown in Fig. 1, the bipolar wind along the disk axis may provide natural scattering sites for the Raman-scattered flux that constitutes the red wing with its polarization flip.
The Raman-scattered lines are characterized by high polarization and broadened profiles, which enable one to put strong constraints on the scattering geometry. The main spectroscopic and polarimetric features therefore make it very plausible that the symbiotic star RR Tel possesses an accretion disk with a bipolar outflow. So-called ‘type 2’ symbiotic stars seem to show similar behavior in the Raman-scattered lines and exhibit bipolar morphologies. Furthermore, about half of all symbiotic stars exhibit the Raman-scattered lines. Fine-tuned conditions, such as a restricted range of the initial mass ratios of the two constituent stars and of the orbital parameters affecting the mass transfer rate, may be needed for the co-existence of the large amount of neutral hydrogen and the highly ionized nebulae that characterize the symbiotic phenomenon.
An interesting example is provided by Péquignot (1997), who performed high resolution spectroscopy of the young planetary nebula NGC 7027 and found a Raman-scattered He II line blueward of H$`\beta `$. Gurzadyan (1996) discusses the relation of NGC 7027 to symbiotic stars, noting that NGC 7027 shows high ionization lines along with a strong IR component. So far Raman scattering by H I is found to operate only in symbiotic stars and NGC 7027, although theoretical possibilities have been discussed for other astrophysical objects such as active galactic nuclei (Nussbaumer et al. 1989, Lee & Yun 1998). It therefore remains an interesting possibility that similar processes operate in bipolar protoplanetary nebulae, in which case Raman scattering can be regarded as an important tool for revealing the evolutionary links between symbiotic systems and bipolar protoplanetary nebulae.
###### Acknowledgements.
We are grateful to Dr. Hwankyung Sung and Sang-Hyeon Ahn for helpful discussions. We also thank the referee, who gave many suggestions that improved the presentation of this paper. HWL is supported by the Post-Doc. program (1998) at Kyungpook National University. MGP gratefully acknowledges support from KOSEF grant 971-0203-013-2.
## 1 INTRODUCTION
Twenty years ago Active Galactic Nuclei (AGN) seemed spectacular but rare objects, a kind of sideshow compared to the main astrophysical concerns of the geometry of the Universe, the formation of stars, and the origin of galaxies. Since then, quasars have become a standard cosmological tool; we have realised the close connections between the theoretical problems of star formation and the formation and fuelling of AGN; we have gathered strong evidence that every galaxy contains a weak AGN or a quiescent black hole; and there is a growing realisation that the nuclear activity and star formation histories of the Universe are closely linked. This COSPAR workshop has brought together a fascinating mixture of scientists to address these issues. The progress we have made and the distance still to go can be summarized by looking at nine key questions. In this review I will skim briefly over the surface of these nine questions.
## 2 DO ALL GALAXIES CONTAIN MASSIVE DARK OBJECTS?
Although some of the most impressive cases for Massive Dark Objects (MDOs) have come from gas dynamics and masers (e.g. Marconi et al 1997; Miyoshi et al 1995), the most systematic searches have been the stellar kinematics studies. The review by Kormendy and Richstone (1995) found MDOs in 20% of E-Sb galaxies searched, and claimed a correlation between the mass of the dark object and the stellar mass of the bulge. However, to pass the rigorous filter of these workers (in particular excluding velocity anisotropy) conditions had to be favourable - MDOs are easier to confirm in edge-on strongly rotating bulges. The recent study by Magorrian et al (1998) of a large number of galaxies uses simplified modelling, justified by the results of the more rigorous studies. These authors find that almost all the galaxies they examine show clear evidence for central dark objects, with a correlation between the mass of the object and the mass of the galactic bulge. Their average ratio ($`M_{MDO}/M_{bulge}=0.006`$) is moreover twice that suggested by Kormendy and Richstone (1995).
These are very exciting results but there are some caveats and worries. First, the evidence for ubiquitousness comes almost entirely from the most massive early type galaxies, and so does not yet really tell us that all galaxies have an MDO, or that the mass correlation is really with bulge mass rather than total galaxy mass. Second, although the new survey has improved the MDO–bulge correlation, there is still a very large scatter in MDO size (two orders of magnitude) at any one galaxy size. Third, these are very large black holes compared to the sizes usually invoked in models of quasars and Seyfert galaxies.
There seems to be a genuine “relic problem”. Several authors have noted that the implied mass density in black holes is an order of magnitude larger than that expected from the integrated quasar light and an assumed accretion efficiency of 10% (Phinney 1997; Haehnelt, Natarajan and Rees 1998; Faber, these proceedings). There are various possible explanations. Accretion efficiency may be much lower than we have assumed; the black holes may grow most of their mass in some early pre-quasar phase (Haehnelt, Natarajan and Rees 1998); or of course the MDO mass estimates may be wrong. There is however another very attractive possibility - that there exists a population of obscured quasars which outnumber normal quasars by a factor of several. Direct estimates of the number of obscured AGN are uncertain, depending sensitively on selection method (Lawrence 1991), but there is strong indirect evidence - current models of the X-ray background require obscured AGN to outnumber naked AGN by roughly 3 to 1 (e.g. Comastri et al 1995), and these numbers still do not include objects with columns in excess of a few times $`10^{24}\ \mathrm{cm}^{-2}`$ which will not contribute significantly to the X-ray background.
## 3 ARE THE MASSIVE DARK OBJECTS ACTUALLY SUPERMASSIVE BLACK HOLES?
The central massive objects are certainly dark - in most cases we can say that the mass-to-light ratio is at least several tens and sometimes hundreds, thus clearly ruling out normal stellar populations, but leaving other exotic possibilities such as a cluster of dark stellar remnants. The two most impressive cases are NGC 4258 and the centre of our own Galaxy, which are constrained on impressively small size scales and have large minimum central densities. In the case of NGC 4258, the motions of water masers are detected at radio wavelengths on milli-arcsec scales, implying a mass density of at least $`4\times 10^9\,M_\odot \,\mathrm{pc}^{-3}`$ on a scale $`r<0.13\,\mathrm{pc}`$. Maoz (1995) argues further that the central object cannot be significantly extended without measurably distorting the perfect Keplerian rotation curve, implying a mass density of at least $`5\times 10^{12}\,M_\odot \,\mathrm{pc}^{-3}`$ on a scale $`r<0.012\,\mathrm{pc}`$. In our own Galaxy, the latest central mass estimates come from stunning observations of the proper motion of stars in the nuclear star cluster, which are consistent with movement around the radio source Sgr A\* at speeds up to 1700 km s$`^{-1}`$. This has been made possible by shift-and-add image sharpening in the near-IR, first on the NTT (Eckart and Genzel 1996, 1997; Genzel et al 1997) and more recently on Keck (Ghez et al 1998; Morris, these proceedings). The new Keck data have 0.05′′ resolution and 0.002′′ positional accuracy; if monitoring is continued we can even expect to detect accelerations of the stars (Morris, these proceedings). The central dark object in the Galactic Centre has mass $`2\times 10^6\,M_\odot `$ within a radius of 0.01 pc, implying a minimum mass density of $`10^{12}\,M_\odot \,\mathrm{pc}^{-3}`$ (Ghez et al 1998). Genzel et al (1997) have pointed out that the fact that Sgr A\* has no measurable proper motion in the (Galactic) radio frame strongly supports its identification as the location of the black hole, and indeed on statistical virial grounds this argues that its mass is at least $`10^5\,M_\odot `$, so that Sgr A\* probably contains most of the mass causing the stellar motions.
At the very high minimum densities deduced in NGC 4258 and the Galactic Centre, any cluster of stellar-size dark objects will have a two-body relaxation time less than $`10^8`$ years, so that, following arguments along the lines of Begelman and Rees (1978), collapse to a black hole seems inevitable (e.g. Genzel et al 1997, Ghez et al 1998). We have reached a stage where from an astronomer’s point of view, the circumlocution “massive dark object” seems unnecessarily cautious. However from a physicist’s point of view this hardly seems proof of the existence of supermassive black holes. How close are we getting to the relativistic regime ? The directly measured size scales in NGC 4258 and the Galactic Centre are at roughly $`10^4`$ times the Schwarzschild radius in those systems. If we accept the argument of Maoz (1995) that the dark object in NGC 4258 can’t be distributed and is no larger than 0.01pc, we are still a factor of a thousand from the event horizon. If we make the assumption that the radio source Sgr A\* must be larger than the mass causing the stellar motions in the Galactic Centre, then we have reached 15 Schwarzschild radii - but of course this is not a safe assumption at all. If we accept the virial argument that Sgr A\* itself is at least $`10^5\,M_\odot `$, then the radio source covers 300 Schwarzschild radii (Genzel et al 1997). But of course the virial argument is statistical (we might have been unlucky), and in a distributed model there is no generic reason why the radio source can’t be smaller than the whole object. Probably the best evidence that we are actually dealing with black holes comes from the broad X-ray iron lines in active objects (e.g. Tanaka et al 1995), where we seem to be seeing the signatures we would expect from rotation within a few Schwarzschild radii - very large velocities, a double peak with the blue peak stronger, and the whole profile shifted to the red by gravitational redshift. Given the quality of the data, we should say that the evidence is extremely tempting rather than completely convincing, but hopefully AXAF and XMM will settle this question.
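For orientation, these factors follow directly from the Schwarzschild radius $`r_s=2GM/c^2`$. The short sketch below reproduces the quoted orders of magnitude; the NGC 4258 mass of $`3.9\times 10^7\,M_\odot `$ is an assumed input from the maser work, since it is not quoted above:

```python
G, c = 6.674e-8, 2.998e10        # cgs units
MSUN, PC = 1.989e33, 3.086e18    # g, cm

def r_s(m_solar):
    """Schwarzschild radius in cm."""
    return 2.0 * G * m_solar * MSUN / c**2

# Galactic Centre: ~2e6 Msun inside 0.01 pc -> ~5e4 r_s ("roughly 10^4")
print(0.01 * PC / r_s(2e6))
# NGC 4258 (assumed M = 3.9e7 Msun) inside the Maoz limit of ~0.01 pc
print(0.012 * PC / r_s(3.9e7))   # ~3e3, "a factor of a thousand"
```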
## 4 WHY ARE THE DARK OBJECTS SO DARK?
Fabian and Canizares (1988) first raised the worry that large black holes in elliptical galaxies ought to be extremely luminous from accretion of the hot gas that pervades such objects - but they are not. Likewise, the central sources in M31 and the Galactic Centre are extremely feeble; but gas is clearly present, so one might expect luminosities many orders of magnitude larger than those seen (Goldwurm et al 1994; Melia 1994). Meanwhile a parallel problem has arisen with the quiescent states of low mass X-ray binaries, where the deduced accretion rate from the companion star onto the disc should produce an X-ray luminosity orders of magnitude larger (e.g. McClintock et al 1995; Lasota 1997). The solution proposed by several authors (Narayan and Yi 1995; Abramowicz et al 1995; Fabian and Rees 1995) is the idea of the Advection Dominated Accretion Flow (ADAF), which may well be the natural state of affairs at very low accretion rates. Such flows are predicted to have very low efficiencies (thus solving the black hole darkness problem) and poor cooling, leading to electron temperatures of the order of $`10^9`$ K. The expected spectral energy distribution (SED) has two peaks, one from free-free in hard X-rays, and another from thermal synchrotron, together with secondary peaks due to Compton scattering. The magnetic field is deduced from pressure equipartition. Quite convincing ADAF models have been published for the Galactic Centre, for NGC 4258, and for soft X-ray transients (Narayan, Yi, and Mahadevan 1995; Lasota et al 1996; Esin, McClintock and Narayan 1997; see also the review by Narayan 1997). Until recently the available data have not tested the existence of the predicted GHz peak; however recent high frequency radio and sub-mm observations (Hernstein et al 1998; Fabian, these proceedings) show that the ADAF models overpredict the observations by several orders of magnitude. It seems very hard for ADAF models to escape this blow.
What other possibilities can explain the darkness problem ? Firstly, perhaps a significant fraction of the expected energy output could emerge as mechanical outflow rather than as radiation. This seems after all to be the case in SS433, where the mechanical luminosity is 1000 times larger than the X-ray luminosity (Watson et al 1986). Secondly, the accretion flow need not be steady. The possible mass supply to Sgr A\* from mass loss in the nuclear star cluster is on a scale of one parsec, $`10^5`$ times the Schwarzschild radius. The dynamical timescale is of the order of a hundred years, but the flow time is likely to be much longer. These considerations may give us a reasonable idea of the time-averaged accretion flow onto the outer accretion disc, but may not tell us the current accretion rate onto the black hole. There is a well known thermal instability which can cause the effective viscosity, and hence the accretion rate, to change by many orders of magnitude between high and low states. This is a popular explanation for dwarf nova and soft X-ray transient outbursts (e.g. Mineshige, Kim, and Wheeler 1990; Lasota 1997) and has been invoked to explain the quasar luminosity function (Siemiginowska and Elvis 1997).
## 5 DO ALL GALAXIES CONTAIN AGN?
We have known since the early 1980s that nearly all galaxies show nuclear emission lines, and that a third of all galaxies, and most early Hubble types, show LINER spectra, hinting at but not proving that some kind of weak quasar-like activity is extremely common (Heckman 1980). The heroic high S/N spectral survey of 486 galaxies by Ho, Filippenko and Sargent (1997 and references therein) has strengthened this suspicion, showing that $`\sim 10`$% of galaxies show weak broad H$`\alpha `$ lines, and almost half show AGN-like narrow lines. Meanwhile a very large fraction of elliptical galaxies show weak compact radio cores (Sadler, Jenkins and Kotanyi 1989; Wrobel and Heeschen 1991; Sadler, these proceedings). It is now also becoming clear that a large fraction of very nearby galaxies contain weak nuclear X-ray sources (Colbert, Lira et al these proceedings). It seems that the large galaxies are more likely to contain AGN candidates - see later section. Given that star formation activity in late type galaxies could actually mask very weak quasar-like activity, it is increasingly tempting to believe that ALL galaxies contain some kind of AGN or AGN remnant. Of course the worry throughout about such objects (LINERs, weak radio sources, weak X-ray sources) is - are they really AGN ?
## 6 ARE THE UBIQUITOUS LOW LUMINOSITY AGN CANDIDATES REALLY AGN?
Some LINERs have broad permitted lines and so are proper quasar analogues - but what about the very common objects that have only narrow LINER spectra ? Ho (these proceedings) stressed that if the ratio of “LINER 2s” to “LINER 1s” is similar to the ratio of Type 2 to Type 1 Seyfert galaxies, then a large fraction of all LINERs would be explained. The expectation that LINER 2s can be obscured versions of LINER 1s has been spectacularly confirmed by the discovery of polarised broad H$`\alpha `$ in NGC 1052 (Barth 1998, PhD thesis - diagram shown by Ho in these proceedings), showing the existence of an obscured BLR revealed in reflection, just as in NGC 1068. On the other hand, UV spectroscopy by Maoz et al (1998) of the compact UV sources seen in some LINERs shows very clear signatures of winds from hot young stars, showing that such objects contain young stellar clusters. Maoz et al show that those objects with clear stellar signatures are at least an order of magnitude less luminous in X-rays. It may be that LINERS are a genuinely heterogeneous class. On the other hand, the emission from such a young cluster could actually mask the presence of a very weak or obscured AGN.
The X-ray emission from broad-lined LINERs seems quite consistent with their other properties (Koratkar et al 1995; Fabbiano 1996; Serlemitsos, Ptak, and Yaqoob 1996; Terashima et al 1997; Terashima, these proceedings), and indeed one cannot distinguish “dwarf AGN” whose narrow-line components are LINER-like from those whose narrow-line components are Seyfert-like. However worries have been raised that seem to distinguish dwarf AGN from more luminous objects like Seyfert galaxies and quasars. (i) It has been suggested that they do not vary, or vary less than Seyfert galaxies (Shields and Filippenko 1992; Ho, Filippenko, and Sargent 1996; Ptak et al 1998; Awaki, these proceedings). However the least luminous known AGN, NGC 4395, has been shown to vary rapidly, with colour changes just like those seen in Seyfert galaxies (Lira et al 1998). It may well be that NGC 4395, a dwarf galaxy, has a very small black hole, whereas many other LINERs have large black holes with low accretion rates. Further careful quantification is needed on the variability question. (ii) A second worry is that dwarf AGN tend to have no Big Blue Bump, but instead have steep optical-UV spectra, with $`\alpha \sim 2`$, and possibly also a mid-IR excess compared to quasars (Ho, Filippenko and Sargent 1996; Barth et al 1996; Ho, these proceedings).
A possible explanation is that dwarf AGN have very cool “bumps” rather than absent ones. Empirically, their steep spectra are consistent with the trends claimed by Kriss (1988), Wandel and Mushotzky (1989) and Zheng and Malkan (1993). Quasar SEDs systematically steepen from the optical to the UV, reaching $`\alpha \sim 2`$ in the far UV (Zheng et al 1997), and in any one spectral range steepen systematically as luminosity is lowered, from $`\alpha =0`$ for the most luminous quasars to $`\alpha =2`$ for dwarf AGN. Previous attempted explanations have concentrated on changing bump strength, but an attractive possibility is that the characteristic temperature changes with luminosity. Lawrence (1998a) describes how multi-temperature models scale in a characteristic fashion and produce an excellent fit to the trends of SED shape with luminosity.
## 7 DOES AGN ACTIVITY CORRELATE WITH HOST GALAXY PROPERTIES?
One of the persistent facts about local AGN is that more or less without exception radio-loud AGN are in elliptical galaxies, whereas radio-quiet AGN are in spirals. There has been a debate about whether such a distinction continues to hold for the hosts of low-redshift quasars (see e.g. Taylor et al 1996 and references therein). The most careful study of quasar hosts so far is being undertaken with HST by Dunlop and collaborators. At this workshop, Kukula showed evidence that all the most luminous quasars live in giant ellipticals, regardless of radio loudness. However, residuals from the smooth $`r^{1/4}`$ fits often show much disturbed structure suggesting mergers, complicating the interpretation. It has been suggested that mergers are central to the process of formation of both elliptical galaxies and quasars (Sanders et al 1988; Kormendy and Sanders 1989).
For many years there has been tantalising but not completely convincing evidence that quasar luminosity correlates with host galaxy luminosity (e.g. Lawrence 1993 and references therein). McLeod and Rieke (1995) have argued that rather than being a simple correlation between those quantities, there is an upper envelope to quasar luminosity which is proportional to galaxy size, and that quasars and Seyferts are consistent with the same relationship. In other words, big galaxies can have big or small AGN, but small galaxies can only have small AGN. A possible simple explanation is that black hole mass is on average proportional to galaxy mass (as local dynamical studies seem to indicate), but that a given black hole can have any accretion rate below a maximum given by the Eddington rate. The observed upper envelope is at least roughly consistent with that expected from the Magorrian et al $`M_H/M_{bulge}`$ relationship (McLeod 1997).
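A minimal sketch of that simple explanation - the Eddington envelope implied by $`M_H=0.006\,M_{bulge}`$ - with illustrative bulge masses (the constant $`1.26\times 10^{38}`$ erg s$`^{-1}`$ per solar mass is the standard Eddington value):

```python
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (Eddington)

def envelope(m_bulge_solar, ratio=0.006):
    """Maximum AGN luminosity if M_H = ratio*M_bulge and L <= L_Edd."""
    return L_EDD_PER_MSUN * ratio * m_bulge_solar

for m_bulge in (1e9, 1e10, 1e11):
    print(f"M_bulge = {m_bulge:.0e} Msun -> L_max ~ {envelope(m_bulge):.1e} erg/s")
# a 1e11 Msun bulge allows ~7.6e46 erg/s, i.e. luminous-quasar territory
```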
In their study of weak radio cores, Sadler, Jenkins and Kotanyi (1989) made essentially the same point concerning the wedge-like statistical relation between AGN power and galaxy luminosity, but went somewhat further, constructing the bivariate luminosity function and trying various ways of quantifying the relationship, such as correlating galaxy luminosity $`L_B`$ with the 30th percentile radio power $`P_{30}`$. (Radio astronomers were always better at statistics...) They found that $`P_{30}\propto L_B^{2.2}`$. On the other hand, it seems that optical emission line strength is proportional to $`L_B`$ (Sadler, these proceedings; Sadler, Jenkins and Kotanyi 1989). Lira et al (these proceedings) have searched for weak nuclear X-ray sources in a volume limited sample of nearby galaxies. Nearly all the non-detections were in the smaller galaxies, and once again there appeared the all too familiar wedge-like pattern, consistent with the idea of an upper envelope to nuclear X-ray luminosity being proportional to galaxy luminosity. Of course we can’t be sure yet whether these weak X-ray sources are really AGN.
A significant puzzle, noticed by Sadler, Jenkins and Kotanyi but now significantly strengthened, is that optical, X-ray, and emission line activity all seem to correlate roughly linearly with galaxy luminosity, whereas radio core power is much more sensitive, going as something like $`L_B^{2-3}`$. Whatever the explanation, this might help to make sense of the radio loudness dichotomy if the connection is specifically with the spheroid component, as often assumed.
## 8 HOW ARE AGN FUELLED?
A clear analysis of the fuelling problem is given in Shlosman, Begelman, and Frank (1989), and a useful collection of reviews, results, and theories can be found in the proceedings of the 1993 Kentucky conference Mass Transfer Induced Activity in Galaxies (Shlosman 1994), and also in the proceedings of the 1996 Saas-Fee meeting, Galaxies : Interactions and Induced Star Formation (Kennicut, Schweizer, and Barnes 1998). It is an exceedingly difficult problem. Material has to change radial scale by perhaps nine orders of magnitude and lose something like five orders of magnitude of specific angular momentum. (Phinney 1994 makes this point particularly clearly, not only in words but with a wonderful cartoon which I shamelessly stole for my talk at this conference.) There is no shortage of ideas, most of which involve some kind of gravitational instability or non-axisymmetric potential. The problem becomes not so much “can it be done ?” but rather “which of these ACTUALLY happens ?”. However there will never be a simple theory of AGN fuelling. Each candidate process manages to shrink material by typically a factor of a few - so clearly a whole sequence of processes is needed. We can crudely divide the problem into four stages: (1) galaxy scale to the central regions; (2) central regions to the ten parsec scale; (3) ten parsec scale to the accretion disc; (4) accretion disc to the event horizon.
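The five-orders-of-magnitude figure is easy to reproduce with a Newtonian sketch (the 200 km s$`^{-1}`$ rotation speed at 10 kpc and the $`10^8\,M_\odot `$ hole below are illustrative choices, not numbers from the text):

```python
import math

G, c = 6.674e-8, 2.998e10            # cgs
MSUN, KPC = 1.989e33, 3.086e21       # g, cm

j_galaxy = 2.0e7 * 10*KPC            # v*r: 200 km/s at 10 kpc  [cm^2/s]
M = 1e8 * MSUN
r_isco = 6.0*G*M/c**2                # innermost stable orbit (Schwarzschild)
j_isco = math.sqrt(G*M*r_isco)       # Keplerian specific angular momentum

print(f"j must drop by ~10^{math.log10(j_galaxy/j_isco):.1f}")  # ~10^5.8
```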
Most of the observation and argument in recent years has concerned stage (1), and the role of interactions, mergers, and large-scale bars. It seems a particularly good bet that such processes are involved in triggering central starbursts. Most of this work concerns current day activity in existing galaxies, but some authors argue that the chaotic dynamics and clump interactions in the process of galaxy formation itself naturally leads to collapse of some central fraction of the gas, which may be closely related to the peak of quasar activity (Lake, Katz, and Moore 1998; Lake, Noguchi, these proceedings). Stage (2), from hundreds of parsecs to a few parsecs, has received less detailed attention. Theoretical possibilities include gravitational instabilities in the self-gravitating gas disk formed in stage (1), possibly as a cascade of bar instabilities (Shlosman, Frank and Begelman 1989); disruption of the disc by star formation; or magnetic braking (Krolik and Meiksin 1990). Some clues may be coming from gas morphology in the central regions. CO mapping of galaxies has shown that the central cold gas can have a variety of morphologies - rings, bars, central peak, twin peaks. These seem unlikely to be equilibrium dynamical structures and instead may tell us about the evolution of the central regions. Some important distinctions seem to be emerging - galaxies with AGN usually have CO rings and small ratios of gas mass to dynamical mass, whereas galaxies with HII spectra have CO bars and large ratios of gas mass to dynamical mass (Sakamoto et al 1997; Ishizuki, these proceedings). This is a complicated subject but we may be close to putting together a feasible history of episodic collapse and star formation.
If Roberto Terlevich is right (e.g. Terlevich et al 1992), stages (3) and (4) are not needed, as the AGN phenomenon is actually an exotic form of starburst on parsec scales in a high density environment. In the black hole model there is a long way to go to the event horizon, but somewhere on the parsec scale we will reach a point where the gravitational field of the hole dominates over the stellar field, so that the final fate of the material seems inevitable. It is tempting to believe that once a ten parsec scale gas disk has been produced in stage (2), it can fuel the black hole slowly and steadily by some local viscosity mechanism. However, as explained by Begelman (1994) and Shlosman, Begelman and Frank (1989), such a giant accretion disc picture has serious problems - the inflow time is of the order of $`10^9`$ years or more, and the accretion rate will be too small to power quasar luminosities, unless the density is large, in which case the disc becomes gravitationally unstable to local clumping (in which case it will probably form stars, cease to dissipate, and so stop flowing in). It seems likely that even in this inner region, large scale (rather than local) gravitational or magnetic effects are needed to re-distribute angular momentum, which will probably happen in lurches rather than in a nice steady fashion. An interesting alternative is some kind of hot accretion flow, which can potentially support a faster flow and more mass without becoming unstable (Shlosman, Begelman and Frank 1989).
So far I have assumed that material needs first to be assembled from large distances. An alternative is that material is supplied from a nuclear star cluster. Early versions of such models explored disruption of stars (e.g. Hills 1975) but more recent work concentrates on stellar mass loss and supernovae, the fuelling from which will evolve with time following the formation of the cluster (Norman and Scoville 1988; Murphy, Cohn, and Durisen 1991; Williams and Perry 1994). Shlosman, Begelman and Frank (1989) argue that the supply rate from mass loss is unlikely to be enough to power luminous quasars. However, if quasars are short-lived this may not be relevant, as the mass loss serves to accumulate a reservoir of material which can later be accreted. Strong support for developing models of this kind comes from the fact that there is now good evidence that such compact nuclear star clusters really exist - for example in the Galactic Centre (Krabbe et al 1995), in NGC 1068 (Thatte et al 1997), and in a variety of other nearby galaxies (Ho 1997). It may be that this is a late-stage phenomenon connected with recurring low-level activity in recent epochs; but it has also been suggested that the large abundances seen in high redshift quasars require a starburst closely associated with quasars in both time and space (Hamman and Ferland 1992).
Finally we arrive at the accretion disc. This is a much more mature problem but cannot be considered completely solved. Reconsideration of the role of advection has recently shaken the subject up and the origin of viscosity is still unclear. The most promising choice for viscosity is thought to be the Balbus and Hawley magnetic instability (Balbus and Hawley 1991) but in this case the disc will not behave at all like a standard “$`\alpha `$ disc” . Energy release will be above the disc rather than inside it (Begelman and De Kool 1991; Begelman 1994). Accretion is not necessarily steady. Indeed Siemiginowska and Elvis (1997) use predicted variations in accretion rate due to thermal instability to explain the quasar luminosity function.
Finally, an important generic point. At every stage of fuelling there is good reason to expect that activity will be episodic. As Krolik and Meiksin (1990) point out in their discussion of hundred–parsec scale magnetic braking, conservation of angular momentum demands that whenever some of the material goes in, other stuff goes out, so that reservoirs tend to evacuate as they “dump down” to the next stage. This naturally leads to episodes of accretion. The same basic point applies at all stages. Other feedback loops seem likely to operate. For example, various authors have suggested that in nuclear star clusters feedback between accretion, central radiation, winds, mass loss, and gravitational instability will lead to episodes of star formation, mass accumulation, and nuclear activity in turn (e.g. Bailey and Clube 1978; Williams and Perry 1994; Morris, these proceedings). Accretion discs may be subject to a thermal limit cycle (see earlier discussion), and accretion near the Eddington limit may be self-limiting and erratic. Finally, fuelling may be a stochastic event that occurs when one particular molecular cloud with low angular momentum intersects the black hole (Sanders 1981). An understanding of these time-dependent processes may be important for understanding quasar evolution.
## 9 IS AGN ACTIVITY RELATED TO STAR BURST ACTIVITY?
There is good circumstantial evidence that vigorous star formation is nearly always associated with quasar-like activity, and that much of the long-wavelength continuum energy distribution is actually from the starburst (Lawrence 1998b and references therein). But is there an actual causal connection ? It may be that starbursts always precede AGN activity, in a grand sequence of galaxy interaction – infall – starburst – further infall – quasar (e.g. Sanders et al 1988). On a smaller scale, parsec scale starbursts may always be closely connected with quasar-like activity with the causal connection going in both directions (see previous section) and indeed it has been argued that parsec scale starbursts could be the whole explanation of quasar-like activity (Terlevich et al 1992). Alternatively the large and small scale processes could be separate phenomena. Perhaps galaxy mergers cause starbursts which collapse no further, whereas AGN activity and fuelling is entirely connected with small scale structures formed at early times. This is one reason why we would still like to answer that fashionable question of the late 80s - are the ultraluminous IRAS galaxies (which are very frequently mergers) really starbursts or are they obscured AGN ? Mid-IR spectroscopy from ISO seems to show clear cases of each, but with most objects being starbursts and a large minority being AGN (Genzel et al 1998; Lutz et al 1998). X-ray studies also suggest a mixture (Rigopoulou et al 1995; Nakagawa, these proceedings). Re-assuringly, mid-IR and X-ray classifications seem to agree reasonably well (Lutz, these proceedings). As one might have guessed, ULGs are a heterogeneous class, so one must be careful drawing conclusions from them.
## 10 HOW DO QUASARS RELATE TO GALAXY FORMATION?
The peak of quasar activity at z=2-3 is suspiciously similar to the predicted epoch of spheroid formation in cosmological theories, suggesting a close connection between quasars and galaxy formation (e.g. Efstathiou and Rees 1988, Haehnelt and Rees 1993). Of course one (probably) needs a galaxy before one can get a quasar; the subsequent decline may be connected with a decline in fuelling (e.g. Small and Blandford 1992) but this is not yet clear. Alternatively some authors have suggested that quasar activity actually has a causal role in triggering or inhibiting galaxy formation (Chokshi 1997; Silk and Rees 1998). Our perspective on such questions is changing rapidly. For three decades quasar evolution was a hard observed fact, whereas galaxy formation was a theoretical blur. This situation is now changing dramatically as galaxies are being detected at high redshift, and we can begin to construct the cosmic history of the star formation rate (Madau et al 1996). It has been noted that the evolution of quasar luminosity density tracks the cosmic star formation rate very closely (Dunlop 1997; Boyle and Terlevich 1998). Silk and Rees (1998) argue that the star formation rate peaks later (z=1-2) than quasar activity, and suggest a feedback loop between winds from early quasars and spheroid formation. However, it now seems clear that the high-redshift star formation rate is higher than had been thought. The optically selected high-z sources have significant reddening (Pettini et al 1997) and the star formation rate deduced from the new population of faint sub-mm sources is a factor of several higher, improving the close agreement with the shape of quasar luminosity density evolution (Hughes et al 1998, Blain et al 1998).
The faint sub-mm sources are generally assumed to be starbursts, the most striking conclusion being that most star formation at early times is occurring at any one time in a small number of luminous bursts. But could the faint sub-mm sources be AGN ? Almaini, Lawrence and Boyle (1998) calculate predicted obscured AGN counts in the sub-mm (by requiring that the number of obscured AGN matches that required in X-ray background models) and find that around 5 - 20% of the detected sources are probably AGN. This number is sensitive to assumed cosmology and the form of high-z evolution, such that this fraction is very uncertain, and likely to increase to even fainter fluxes. Whether the faint sub-mm sources are starbursts or quasars, they are certainly things going BANG, and such objects dominate the energetics of the young universe. This is in strong contrast to today, when the luminosity density in AGN and starbursts combined is a tiny fraction of the total galaxian luminosity density. History belongs to the heroes, but the meek shall inherit the Earth.
## 11 REFERENCES
Abramowicz, M.A., Chen, X., Taam, R.E., Astrophys.J. 452, 379 (1995).
Almaini, O., Lawrence, A., and Boyle, B.J., Mon.Not.R.astr.Soc., in press (1998).
Bailey, M.E., and Clube, S.V.M., Nature, 275, 278 (1978).
Balbus, S.A., and Hawley, J.F., Astrophys.J. 376, 214 (1991).
Barth, A.J., Reichert, G.A., Filippenko, A.V., Ho, L.C., Shields, J.C., Mushotzky, R.F., Puchnariewicz, E.M., Astron.J. 112, 1829 (1996).
Begelman, M.C., in Mass Transfer Induced Activity in Galaxies, ed. I.Shlosman (Cambridge University Press) (1994)
Begelman, M.C., and Rees, M., Mon.Not.R.astr.Soc., 185, 847 (1978).
Blain, A.W., Smail, I., Ivison, R.J., and Kneib, J.-P., Mon.Not.R.astr.Soc., , in press (1998).
Boyle, B.J., and Terlevich, R.J., Mon.Not.R.astr.Soc., 293, L49 (1998).
Chokshi, A., Astrophys.J. 491, 78 (1997).
Comastri, A., Setti, G., Zamorani, G., and Hasinger, G., Astron.Astrophys., 296, 1 (1995).
Dunlop, J.S., in Observational Cosmology with the New Radio Surveys, eds. Bremer M.N. et al (Kluwer : Dordrecht) (1997).
Eckart, A., and Genzel, R., Nature, 383, 415 (1996).
Eckart, A., and Genzel, R., Mon.Not.R.astr.Soc., 284, 576 (1997).
Efstathiou, G.P., and Rees, M.J., Mon.Not.R.astr.Soc., 230, 5p (1988).
Esin, A.A., McClintock, J.E., and Narayan, R., Astrophys.J. 489, 865 (1997).
Fabbiano, G., in The Physics of LINERs in View of Recent Observations, eds. Eracleous, M., Koratkar, A., Leitherer, C., and Ho, L. (ASP Conference Series Vol 103) (1996).
Fabian, A.C., and Canizares, C., Nature, 333, 829 (1988).
Fabian, A., and Rees, M.J., Mon.Not.R.astr.Soc., 277, 55 (1995).
Genzel, R., Eckart, A., Ott, T., and Eisenhauer, F., Mon.Not.R.astr.Soc., 291, 219 (1997).
Genzel, R., Lutz, D., Sturm E., Egami, E., Kunze, D., Moorwood, A.F.M., Rigopoulou, D., Spoon, H.W.W., Sternberg, A., Tacconi-Garman, L.E., Tacconi, L., and Thatte, N., Astrophys.J. 498, 579 (1998).
Ghez, A.M., Klein, B.L., Morris, M., and Becklin, E.E., in Observational Evidence for Black Holes in the Universe, ed. S.K.Chakrabarti (1998).
Goldwurm, A., Cordier, B., Paul, J., Ballet, J., Bouchet, L., Roques, J.P., Vedrenne, G., Mandrou, P., Sunyaev, R., Churazov, E., Gilfanov, M., Finogenov, A., Vikhlinin, A., Dyachkov, A., Khavenson, N., Kovtunenko, V., Nature, 371, 589 (1994).
Grindlay, J., Nature, 371, 561 (1994).
Haehnelt, M.G., and Rees, M.J., Mon.Not.R.astr.Soc., 263, 168 (1993).
Haehnelt, M.G., Natarajan, P., and Rees, M.J., submitted to Mon.Not.R.astr.Soc., (1998)
Hamman, F., and Ferland, G., Astrophys.J.Lett 391, L53 (1992).
Heckman, T.M., Astron.Astrophys., 87, 152 (1980).
Hernstein, J.R., Greenhill, L.J., Moran, J.M., Diamond, P.J., Inoue, M., Nakai, N., and Miyoshi, M., Astrophys.J.Lett 497, L69 (1998).
Hills, J.G., Nature, 254, 295 (1975).
Ho, L.C., Filippenko, A.V., and Sargent, W.L.W., Astrophys.J. 462, 163 (1996).
Ho, L.C., Filippenko, A.V., and Sargent, W.L.W., Astrophys.J. 487, 568 (1997).
Ho, L.C., in Starburst Activity in Galaxies, eds J.Franco, R.Terlevich, and G.Tenorio-Tagle (Pub) (1998).
Hughes, D., Serjeant, S., Dunlop, J., Rowan-Robinson, A., Blain, A., Mann, R.G., Ivison, R., Peacock, J., Efstathiou, A., Gear, W., Oliver, S., Lawrence, A., Longair, M., Goldschmidt, P., Jenness, T. Nature, 394, 241 (1998).
Kennicut, R.C., Schweizer, F., and Barnes, J.E., eds., Galaxies : interactions and induced star formation, (Springer) (1998).
Koratkar,A., Deustua, S.E., Heckman, T., Filippenko, A., Ho, L.C., and Rao, M., Astrophys.J. 440, 132 (1995).
Kormendy, J., and Richstone, D., Ann.Rev.Astron.Astrophys., 33, 581 (1995).
Kormendy, J., and Sanders, D.B., Astrophys.J. 390, L53 (1989).
Krabbe, A., et al., Astrophys.J. 447, 95 (1995).
Kriss, G., Astrophys.J. 324, 809 (1988).
Krolik, J.H., and Meiksin, A., Astrophys.J. 352, 33 (1990).
Lake, G., Katz, N., and Moore, B., Astrophys.J. 495, 152 (1998).
Lasota, J.-P., Abramowicz, M.A., Chen, X., Krolik, J., Narayan, R., and Yi, I., Astrophys.J. 462, 142 (1996).
Lasota, J.-P. in Accretion Phenomena and Related Outflows, IAU Colloquium 163, eds. Wickramasinghe, D.T., Bicknell, G.V., and Ferrario, L. (ASP Conf series Vol 121) (1997).
Lawrence, A., Publ.astron.Soc.Pacif., 99, 309 (1987).
Lawrence, A., Mon.Not.R.astr.Soc., 252, 586 (1991).
Lawrence, A., in The Nearest Active Galaxies, eds Beckman, J., Colina, L., and Netzer, H. (Madrid : CSIC) (1993).
Lawrence, A., paper in preparation (1998a).
Lawrence, A., in The Far Infrared and sub-mm Universe, ed. A.Wilson (ESA SP-401) (1998b).
Lira, P., Lawrence, A., O’Brien, P.O., Johnson, R., Terlevich, R., and Bannister, N., Mon.Not.R.astr.Soc., in press (1998).
Lutz, D., Spoon, H.W.W., Rigopoulou, D., Moorwood, A.F.M., and Genzel, R., Astrophys.J. 505, L103 (1998).
Madau, P., Ferguson, H.C., Dickinson, M.E., Giavalisco, M., Steidel, C.C., and Fruchter, A., Mon.Not.R.astr.Soc., 283, 1388 (1996).
McLeod, K.K., and Rieke, G.H., Astrophys.J. 441, 96 (1995).
McLeod, K.K., in Quasar Hosts, ed Clements D.L. (Berlin: Springer-Verlag) (1997).
McClintock, J.E., Horne, K., Remillard, R.A., Astrophys.J. 442, 358 (1995).
Magorrian, et al Astron.J. 115, 2285 (1998).
Marconi, A., Axon, D.J., Macchetto, F.D., Capetti, A., Sparks, W.B., and Crane, P., Mon.Not.R.astr.Soc., 289, L21 (1997).
Maoz, D., Koratkar, A., Shields, J.C., Ho, L.C., Filippenko, A.V., and Sternberg, A., Astron.J. 116, 55 (1998)
Maoz, E., Astrophys.J.Lett 447, L91, (1995).
Melia, F., Astrophys.J. 426, 577 (1994).
Mineshige, S., Kim, S., and Wheeler, J.C., Astrophys.J. 358, L5 (1990).
Murphy, B.W., Cohn, H.N., and Durisen, R.H., Astrophys.J. 370, 60 (1991).
Mushotzky, R.F., and Wandel, A., Astrophys.J. 339, 674 (1989).
Miyoshi, M., Moran, J., Hernstein, J., Greenhill, L., Nakai, N., Diamond, P., and Inoue, M., Nature, 373, 127 (1995).
Narayan, R., in Accretion Phenomena and Related Outflows, IAU Colloquium 163, eds. Wickramasinghe, D.T., Bicknell, G.V., and Ferrario, L. (ASP Conf series Vol 121) (1997).
Narayan, R., and Yi, I., Astrophys.J. 452, 710 (1995).
Narayan, R., Yi, I., and Mahadevan, R., Nature, 374, 623 (1995).
Norman, C., and Scoville, N.Z., Astrophys.J. 332, 124 (1988).
Pettini, M., Steidel, C.C., Adelberger, K.I., Kellog, M., Dickinson, M., Giavalisco, M., in Origins eds. Woodward, C.E., Thronson, H., And Shull, J.M., (ASP Conf. Series) (1997).
Phinney, E.S., in Mass Transfer Induced Activity in Galaxies, ed. I.Shlosman (Cambridge University Press) (1994)
Phinney, E.S., in IAU Colloquium 186, Kyoto (1997).
Ptak, A., Yaqoob, T., Mushotzky, R., Serlemitsos, P., and Griffiths, R., Astrophys.J. 501, L37 (1998).
Sadler, E.M., Jenkins, C.R., Kotanyi, C.G., Mon.Not.R.astr.Soc., 240, 591 (1989).
Sakamoto, K., Okumura, S.K., Ishizuki, S., and Scoville, N.Z., in The Central Region of the Galaxy and galaxies, (Kluwer : Dordrecht) (1997).
Sanders, D.B., Soifer, B.T., Elias, J.H., Madore, B.F., Mathews, K., Neugebauer, G., and Scoville, N.Z., Astrophys.J. 325, 74 (1988).
Sanders, R.H., Nature, 294, 427 (1981).
Serlemitsos, P., Ptak, A., and Yaqoob, T., in The Physics of LINERs in View of Recent Observations, eds. Eracleous, M., Koratkar, A., Leitherer, C., and Ho, L. (ASP Conference Series Vol 103) (1996).
Shields, J.C, and Filippenko, A.V., in Relationships between Active Galactic Nuclei and Starburst Galaxies, ed Filippenko, A.V., (ASP Conference series Vol 31) (1992).
Shlosman, I., ed., Mass Transfer Induced Activity in Galaxies (Cambridge University Press) (1994).
Shlosman, I., Begelman, M.C., and Frank, J., Nature, 345, 679 (1989)
Shlosman, I., Frank, J., and Begelman, M.C., Nature, 338, 45 (1989)
Siemiginowska, A., and Elvis, M., Astrophys.J.Lett 482, L9 (1997).
Silk, J., and Rees, M.J., Astron.Astrophys., in press (1998).
Small, T.A., and Blandford, R.D., Mon.Not.R.astr.Soc., 259, 725 (1992).
Tanaka, Y., Nandra, K., Fabian, A.C., Inoue, H., Otani, C., Dotani, T., Hayashida, K., Iwasawa, K., Kii, T., Kunieda, H., Makino, F., And Matsuoka, M., Nature, 375, 659 (1995).
Taylor G.L., Dunlop, J.S., Hughes, D.H., and Robson, E.I., Mon.Not.R.astr.Soc., 283, 930 (1996).
Terashima, Y., Kunieda, K., Iyomoto, N., Makishima, K., and Serlemitsos, P., in Emission Lines in Active Galaxies : New Methods and Techniques, IAU Colloquium 159, eds. Peterson, B.M., Cheng, F.-Z., Wilson, A.S., (ASP Conference series Vol 113) (1997).
Terlevich, R., Tenorio-Tagle, G., Franco, J., and Melnick, J., Mon.Not.R.astr.Soc., 255, 713 (1992).
Thatte, N., Genzel, R., Tacconi, L., Krabbe, A., Kroker, H., in Starburst activity in galaxies (1998).
Watson, M.G., Stewart, G.C., King, A.R., and Brinkmann, W., Mon.Not.R.astr.Soc., 222, 261 (1986).
Williams, R.J.R., and Perry, J.J., Mon.Not.R.astr.Soc., 269, 538 (1994).
Wrobel, J.M., and Heeschen, D.S., Astron.J. 101, 148 (1991).
Zheng, W., and Malkan, M.A., Astrophys.J. 415, 517 (1993).
Zheng, W., Kriss, G.A., Telfer, R.C., Grimes, J.P., Davidsen, A.F., Astrophys.J. 475, 469 (1997).
# Chiral corrections in hadron spectroscopy
## Abstract
We show that the implementation of chiral symmetry in recent studies of the hadron spectrum in the context of the constituent quark model is inconsistent with chiral perturbation theory. In particular, we show that the leading nonanalytic (LNA) contributions to the hadron masses are incorrect in such approaches. The failure to implement the correct chiral behaviour of QCD results in incorrect systematics for the corrections to the masses.
preprint: IFT-P.014/99, ADP-99-10/T355
PACS NUMBERS: 24.85.+p, 11.30.Rd, 12.39.Jh, 12.39.Fe, 12.40.Yx
KEYWORDS: Chiral symmetry, quark model, potential models, hadron spectrum
There is an extremely interesting recent series of papers by Glozman, Riska and collaborators who have investigated hadron spectroscopy on the basis of a residual $`qq`$ interaction governed by chiral symmetry. Their residual interaction, which is meant to correspond to Goldstone boson (GB) exchange, has the attractive feature, in comparison with one-gluon-exchange (OGE), that it does not produce large spin-orbit effects which are certainly not present in the spectrum. While our remarks apply to all GB exchanges, for simplicity we concentrate on the SU(2) sector – i.e. pion exchange. In this sector GB exchange leads to an effective interaction of the form
$$H_{int}=\frac{g^2}{4\pi }\frac{1}{3}\frac{1}{4m_im_j}\sum _{i<j}\bm{\sigma }_i\cdot \bm{\sigma }_j\,\bm{\tau }_i\cdot \bm{\tau }_j\left[m_\pi ^2\frac{e^{-m_\pi r_{ij}}}{r_{ij}}-4\pi \delta (r_{ij})\right],$$
(1)
where $`m_i`$ and $`m_j`$ denote the masses of the constituent quarks and $`m_\pi `$ is the pion mass. In principle there is also a tensor component, which will not be written explicitly since it is not relevant in the context of the present paper. This interaction has also been employed in studies of the hadron properties and hadron-hadron interactions . In practice, the short-distance behaviour of this interaction is not expected to be reliable \- unlike the long range Yukawa piece - and in the spectroscopic studies by Glozman and Riska the radial strength is replaced by single fitting parameter in each shell. On the other hand, the spin-isospin structure of Eq. (1) is maintained and the corrections from Eq. (1) to the energy of the nucleon (N) and the $`\mathrm{\Delta }(1232)`$ are given as
$`M_N`$ $`=`$ $`M_0-15P_{00}^\pi `$ (2)
$`M_\mathrm{\Delta }`$ $`=`$ $`M_0-3P_{00}^\pi ,`$ (3)
where $`M_0`$ is the corresponding unperturbed energy and $`P_{00}^\pi `$ is the fitting parameter corresponding to the radial matrix element of Eq. (1), in the lowest-energy unperturbed shell of the 3-quark system.
Because the basis for this approach to hadron spectroscopy is chiral symmetry, we were interested to check that the formalism is consistent with chiral perturbation theory ($`\chi `$PT) - i.e., that at least the leading nonanalytic (LNA) contribution to hadron masses is correct. It turns out to be very easy to check this and the result is that Eq. (1) is inconsistent with the LNA behaviour of QCD.
The LNA contribution to the mass of the nucleon is proportional to $`m_\pi ^3\propto m_q^{3/2}`$ . In the quark model of Glozman and Riska, such a contribution can only arise from the linear term in the expansion of the Yukawa potential in Eq. (1)
$`H_{int}^{LNA}`$ $`=`$ $`\frac{g^2}{4\pi }\frac{1}{3}\underset{i<j}{\sum }𝝈_i\cdot 𝝈_j\,𝝉_i\cdot 𝝉_j\,m_\pi ^2\frac{1-m_\pi r_{ij}+𝒪(m_\pi ^2)}{r_{ij}}`$ (4)
$`\to `$ $`-m_\pi ^3\frac{g^2}{4\pi }\frac{1}{3}\underset{i<j}{\sum }𝝈_i\cdot 𝝈_j\,𝝉_i\cdot 𝝉_j.`$ (5)
The radial matrix element is therefore a normalization integral and hence model independent, as it must be. The overall strength (in the hadron state $`|H\rangle `$) is given by the spin-isospin matrix element
$$SI_H=\langle H|\underset{i<j}{\sum }𝝈_i\cdot 𝝈_j\,𝝉_i\cdot 𝝉_j|H\rangle .$$
(6)
For the N and the $`\mathrm{\Delta }`$ this gives
$`SI_N^{\text{Eq.}(1)}`$ $`=`$ $`30`$ (7)
$`SI_\mathrm{\Delta }^{\text{Eq.}(1)}`$ $`=`$ $`6.`$ (8)
On the other hand, the corresponding matrix elements from the LNA contribution required by $`\chi `$PT are given by
$`SI_N^\chi `$ $`=`$ $`25`$ (9)
$`SI_\mathrm{\Delta }^\chi `$ $`=`$ $`25.`$ (10)
The formulation of $`\chi `$PT including the $`\mathrm{\Delta }(1232)`$ as an explicit degree of freedom was originally proposed in Ref. . These contributions arise from the processes shown in Figs. (1a) and (b), respectively. We stress that this requires, as usually assumed in $`\chi `$PT, that the $`N`$ and $`\mathrm{\Delta }`$ are not degenerate in the chiral limit. For a critical discussion on this subject, we refer the reader to Ref. . The LNA chiral contributions to the octet and decuplet baryons have also been calculated within the framework which combines the $`1/N_c`$ expansion with $`\chi `$PT, where $`N_c`$ is the number of colors. Large $`N_c`$ $`\chi `$PT was originally proposed by Dashen and Manohar , and has been further developed by many authors (for a list of references, see Ref. ).
FIGURE 1. One-loop pion self-energy of (a) the nucleon (N) and (b) the delta ($`\mathrm{\Delta }`$).
A comparison of these results shows that Eq. (1) yields the wrong LNA contribution for both the $`N`$ and the $`\mathrm{\Delta }`$. For the N, the error is not large ($`30`$ compared to $`25`$). However, because the error is much larger for the $`\mathrm{\Delta }`$ the crucial point is that the systematics are wrong. For example, with the correct coefficients this mechanism provides no $`\mathrm{\Delta }`$-N mass difference at all! Of course, our arguments concern the systematics of the LNA behaviour implied by Eq. (1). Even though the Yukawa term is not actually used in the spectral studies, the coefficient of the short-range piece, which is used, is the same and hence our arguments are directly relevant to the actual calculations.
In $`\chi `$PT the LNA contribution to the nucleon mass is given by
$$M_N^{LNA}=-\frac{3}{32\pi f_\pi ^2}g_A^2m_\pi ^3,$$
(11)
where $`f_\pi \simeq 93`$ MeV is the pion decay constant and $`g_A=`$ 1.26 is the weak decay constant. In a quark model, the crucial step in ensuring this LNA behaviour is to project the quark states onto bare baryon states . Specifically, in a constituent quark model of the Glozman-Riska type, the bare states would correspond to the three-quark states confined by a phenomenological potential. The effective hadronic Hamiltonian is obtained by projecting the quark-model Hamiltonian, which now includes the quark-pion vertices, on the basis of the bare three-quark states. Chiral corrections to hadronic properties, such as masses and magnetic moments, are then calculated in time-ordered perturbation theory with the effective hadronic Hamiltonian. For a constituent quark model of the Glozman-Riska type, such a procedure leads to corrections to the N and the $`\mathrm{\Delta }`$ masses of the form
$`M_N`$ $`=`$ $`M_N^{(0)}-\frac{3}{16\pi ^2f_\pi ^2}g_A^2\int _0^{\infty }dk\,\frac{k^4u_{NN}^2(k)}{w^2(k)}-\frac{3}{16\pi ^2f_\pi ^2}\frac{32}{25}g_A^2\int _0^{\infty }dk\,\frac{k^4u_{N\mathrm{\Delta }}^2(k)}{w(k)\left(\mathrm{\Delta }M+w(k)\right)}`$ (12)
$`M_\mathrm{\Delta }`$ $`=`$ $`M_\mathrm{\Delta }^{(0)}+\frac{3}{16\pi ^2f_\pi ^2}\frac{8}{25}g_A^2\int _0^{\infty }dk\,\frac{k^4u_{N\mathrm{\Delta }}^2(k)}{w(k)\left(\mathrm{\Delta }M-w(k)\right)}-\frac{3}{16\pi ^2f_\pi ^2}g_A^2\int _0^{\infty }dk\,\frac{k^4u_{\mathrm{\Delta }\mathrm{\Delta }}^2(k)}{w^2(k)}.`$ (13)
Here, the $`M^{(0)}`$’s are the masses in the chiral limit, $`\mathrm{\Delta }M=M_\mathrm{\Delta }-M_N`$, $`g_A=`$5/3 is the bare axial coupling given by the constituent quark model, $`w(k)=\sqrt{k^2+m_\pi ^2}`$ is the pion energy and $`u_{NN}(k)`$, $`u_{N\mathrm{\Delta }}(k)`$, $`\dots `$ are the $`NN\pi `$, $`N\mathrm{\Delta }\pi `$, $`\dots `$ form factors. The LNA contribution to $`M_N`$ is easily seen to arise from the first integral in Eq. (12) (cf. Fig. 1(a)), while the LNA contribution to $`M_\mathrm{\Delta }`$ comes from the second integral in Eq. (13) – cf. Fig. 1(b).
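For orientation, inserting the physical pion mass into Eq. (11), with $`g_A`$ and $`f_\pi `$ as quoted above (the numerical value $`m_\pi \simeq 139.6`$ MeV is our assumed input), gives the familiar size of this term:

$$M_N^{LNA}=-\frac{3\,(1.26)^2\,(139.6\ \mathrm{MeV})^3}{32\pi \,(93\ \mathrm{MeV})^2}\approx -15\ \mathrm{MeV}.$$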
In order to understand why the use of Eq. (1) is wrong, we consider the limit, generally considered physically unlikely , that $`\mathrm{\Delta }M=0`$. Then all integrals in Eqs. (12) and (13) have the same LNA behaviour and the contributions are in the ratio
$`25\,(N\to N\pi \to N)`$ $`:`$ $`32\,(N\to \mathrm{\Delta }\pi \to N)`$ (14)
$`8\,(\mathrm{\Delta }\to N\pi \to \mathrm{\Delta })`$ $`:`$ $`25\,(\mathrm{\Delta }\to \mathrm{\Delta }\pi \to \mathrm{\Delta }).`$ (15)
In this case the ratio of the total $`N`$ and $`\mathrm{\Delta }`$ self-energies is $`57:33`$ and the difference is identical to that given by Eqs. (7) and (8). This recalls the well known result from the early work on chiral bag models that the calculation of the self-energy integrals through projection on all baryon states in which the orbital quantum numbers are unchanged, in the limit where these are degenerate, is equivalent to calculating pion emission and absorption between all quarks. In particular one must include those diagrams where the pion is emitted and absorbed by the same quark. In this case the spin-isospin structure of the pion interaction is
$$SI_H=\frac{1}{2}\underset{ij}{\sum }\langle H|𝝈_i\cdot 𝝈_j\,𝝉_i\cdot 𝝉_j|H\rangle ,$$
(16)
so that $`SI_N=57`$ and $`SI_\mathrm{\Delta }=33`$ – which agree with the results based on Eqs. (12) and (13), quoted in Eqs. (14) and (15). Precisely this form of the pion self-energy was suggested in the early spectroscopic study of Mulders and Thomas – see also Refs. , and Ref. for more recent work.
For completeness, we remark that the ratios given in Eqs. (14) and (15) are precisely the leading order corrections given by large $`N_c`$ $`\chi `$PT, as can be easily checked making use of Eqs. (5.6), (C1) and (C5) of Ref. .
In practice, the $`\mathrm{\Delta }`$-N mass difference is quite large and the contribution from the process $`N\to \mathrm{\Delta }\pi \to N`$ is consequently suppressed. One would still expect to obtain a sizeable fraction of the $`\mathrm{\Delta }`$-N splitting from pion exchange. Indeed, at the price of increasing the size of the pion-quark effective coupling, one could refit the whole mass difference in terms of pion exchange. This would have the consequence that the total nucleon self-energy associated with pion exchange would need to be of the order of $`700`$ MeV. Whether one is able to live with such large self-energies remains to be seen. The alternative is to add some additional hyperfine interaction, such as gluon exchange or residual instanton effects.
In conclusion, we repeat that the use of Goldstone boson exchange interactions of the type given in Eq.(1) is inconsistent with the chiral structure of QCD. In order to reproduce the correct chiral behaviour one must include Goldstone boson exchange between all quarks, including self-interactions, but the intermediate quark states must be projected onto (bare) baryon states – as carried out, for example, within the Cloudy Bag Model . While our analysis of the spectroscopic studies of Glozman and collaborators shows that these are incomplete, the findings are not entirely negative. One can still hope that the major qualitative features of this work will survive in a complete re-analysis. Such a re-analysis must now be an urgent priority.
This work was supported by the Australian Research Council and CNPq (Brazil). One of us (AWT) would like to acknowledge the warm hospitality of the Institute for Theoretical Physics at UNESP, where much of the work was carried out.
## 1 Introduction
Cosmic ray physics at energy $`E\gtrsim `$ 10–100 TeV, due to the steepening of the spectrum, can be performed only by using indirect measurements. An impressive amount of data has been collected by extensive air shower arrays, Cherenkov detectors and underground muon experiments. The challenge is the interpretation of these results. The bulk of the analyses are performed by assuming a given Cosmic Ray spectrum and chemical composition (trial model), simulating the particle interactions and the shower development in the atmosphere, and finally comparing the simulated results with the real data. The reliability of the Monte Carlo simulation used is therefore a primary concern for the correct interpretation of these data: this difficulty has stimulated a lot of experimental work to validate the existing models and many theoretical ideas to improve the simulation tools. Modelling a Monte Carlo code to describe high energy cosmic ray interactions in the atmosphere is a hard task, since Cosmic Rays studied with indirect measurements extend to energy and kinematical regions not yet covered by accelerator experiments. Moreover, nucleus-nucleus collisions have been investigated only up to a few hundred GeV/nucleon. This scarcity of experimental data is compounded by the lack of a completely computable theory for the bulk of hadronic interactions, since QCD can be used only for high $`p_t`$ phenomena.
Many models have been developed in recent years, with different emphasis on the various components of the C.R. induced shower. Basically they can be split into two categories: the models using parametrizations of collider results (NIM85, HEMAS) and the phenomenological models inspired, for instance, by the Dual Parton Model or similar approaches (DPMJET, SYBILL, QGSJET). I will concentrate in this talk on the HEMAS code, stressing the results of the comparison with the experimental data.
## 2 HEMAS: description of the code
The HEMAS code was developed in the early ’90s, when a new generation of experiments (LVD, MACRO, EAS-TOP) was starting data taking at Gran Sasso. This code is suited to simulate high energy muons ($`E_\mu \ge 500`$ GeV) and the electromagnetic size of the shower. It is a phenomenological model, based on the parametrization of collider data. The code describes multiple hadron production by means of the multicluster model, suggested by the UA5 experiment.
The total p-Air cross section is one of the most important ingredients of the codes. Since the cross section of hadrons on nuclei is not measured directly at energies greater than several hundred GeV, an extrapolation to higher energies is required and is performed in the context of “log(s)” physics.
Figure 1 shows the HEMAS cross section p-Air as a function of the centre of mass energy $`\sqrt{s}`$ compared with the cross section used in other Monte Carlo codes.
Given the $`\sqrt{s}`$ of the interaction, the average number of charged hadrons $`\langle n_{ch}\rangle `$ is chosen according to the equation:
$$\langle n_{ch}\rangle =-7.0+7.2s^{0.127}.$$
(1)
The actual number of charged hadrons $`n_{ch}`$ is sampled from a negative binomial distribution with
$$k^{-1}=-0.104+0.058\mathrm{ln}(\sqrt{s})$$
(2)
With respect to previous codes, where $`n_{ch}`$ was sampled according to a Poissonian, this choice results in larger fluctuations of the underground muon multiplicity. Particles are then grouped in clusters, eventually decaying into mesons.
A relevant feature of HEMAS is the parametrization of the muon parent meson $`p_t`$ distribution. While for single pion clusters $`p_t`$ is always sampled from an exponential, for kaon clusters, for the leading particle and for pion clusters with at least two particles, $`p_t`$ has a given probability to be extracted from a power law:
$$\frac{dN}{dp_t^2}=\frac{const}{(p_t^0+p_t)^\alpha }$$
(3)
where $`p_t^0`$=3 GeV/c while $`\alpha `$ decreases logarithmically with energy
$$\alpha =3+\frac{1}{(0.01+0.011ln(s))}$$
(4)
Nuclear target effects are included too. The transverse momentum distribution is enhanced in p-N collisions with respect to the p-p case, according to the so-called ’Cronin effect’. The ratio R($`p_t`$) of the inclusive cross section on a target of mass A to that on a proton target depends in principle on the particle produced. In HEMAS, R($`p_t`$) has been approximated with a single function:
$$R(p_t)=(0.0363p_t+0.0570)K\qquad \mathrm{for}\ p_t\le 4.52\ \mathrm{GeV}/c$$
(5)
$$R(p_t)=0.2211K\qquad \mathrm{for}\ p_t>4.52\ \mathrm{GeV}/c$$
(6)
where K is a normalization constant.
The average $`\langle n_{ch}\rangle `$ in p-Air collisions is obtained using the relation between the rapidity density with a nuclear target and that with a target nucleon:
$$\frac{dn/dy(pA)}{dn/dy(pp)}=A^{\beta (z)},$$
(7)
where y is the laboratory rapidity and z=y/ln(s).
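For concreteness, the parametrizations of Eqs. (1)–(6) can be collected in a few lines of code. The following is a minimal sketch, not the HEMAS source: the function names, the use of NumPy's negative-binomial sampler and the demo values are ours.

```python
# Sketch of the HEMAS multiplicity and p_t parametrizations, Eqs. (1)-(6).
# Function names and the sampling helper are illustrative, not HEMAS code.
import numpy as np

def mean_n_ch(s):
    """Average charged multiplicity, Eq. (1); s in GeV^2."""
    return -7.0 + 7.2 * s**0.127

def nb_k(sqrt_s):
    """Negative binomial parameter k from Eq. (2); sqrt_s in GeV."""
    return 1.0 / (-0.104 + 0.058 * np.log(sqrt_s))

def sample_n_ch(sqrt_s, rng, size=1):
    """Sample n_ch from a negative binomial with the above mean and k."""
    mean, k = mean_n_ch(sqrt_s**2), nb_k(sqrt_s)
    return rng.negative_binomial(k, k / (k + mean), size=size)

def alpha_pt(s):
    """Energy-dependent exponent of the p_t power law, Eq. (4)."""
    return 3.0 + 1.0 / (0.01 + 0.011 * np.log(s))

def cronin_R(pt, K=1.0):
    """Nuclear-target enhancement R(p_t), Eqs. (5)-(6)."""
    return K * np.where(pt <= 4.52, 0.0363 * pt + 0.0570, 0.2211)

rng = np.random.default_rng(7)
print(sample_n_ch(546.0, rng, size=5))   # UA5 energy: mean n_ch ~ 29
```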
The HEMAS p-Air interaction model assumes a scaling violation in the central region and a small violation in the forward region ($`x_f>0.5`$). The original HEMAS code included a naive muon transport code. This code was later replaced with the more sophisticated PROPMU code. Moreover, in 1995 HEMAS was interfaced with DPMJET, a Dual Parton Model inspired code. The user therefore has the possibility of replacing the original HEMAS hadronic interaction model with DPMJET. As far as CPU time is concerned, HEMAS is a fast code. Table 1 shows the CPU time required for protons of different energies, while Table 2 shows the comparison with other codes for a 200 TeV proton.
An explanation of the faster performance of HEMAS with respect to other codes lies in the treatment of the electromagnetic part of the shower. Electromagnetic particles ($`e^+`$, $`e^{-}`$, $`\gamma `$) coming from $`\pi _0`$ decay are computed using the standard NKG formula. Hadrons falling below a given threshold are not transported in the atmosphere, and their contribution to the electromagnetic size $`N_e`$ is computed according to a parametrization of pre-computed Monte Carlo runs. Of course the threshold is high enough ($`E_{th}\sim 500`$ GeV) to follow the hadrons until they can decay into a high energy muon with some probability to survive deep underground. Anyway, as far as the validity of this approximation is concerned, it must be stressed that for primary cosmic rays with energy greater than $`\sim `$10 TeV, the total contribution of low energy hadrons to the electromagnetic size is $`\sim `$10$`\%`$.
## 3 Comparison with experimental data
The HEMAS code has been widely used to simulate the underground muons detected at Gran Sasso. When dealing with underground muons, many experimental observables depend both on the cosmic ray chemical composition and on the features of the hadronic interaction model. To test the reliability of the Monte Carlo codes it is therefore important to study observables that allow the two to be disentangled. The shape of the decoherence function, i.e. the distribution of the distance between muon pairs, is weakly dependent on the C.R. composition. This distribution is therefore a good test of the reliability of a Monte Carlo code. The decoherence receives contributions from various sources in the shower development:
- the primary cosmic ray cross section;
- the $`p_t`$ distribution of the muon parent hadrons;
- the multiple scattering of muons through the rock.
Fig. 2 shows the average $`p_t`$ of the muon parent mesons as a function of the average muon separation deep underground. The correlation between $`p_t`$ and $`\langle D\rangle `$ is evident.
The MACRO detector is a powerful experiment for studying such a distribution, taking advantage of an acceptance A $`\sim `$ 10,000 $`m^2`$sr. Recent results have been presented in : the decoherence function has been studied with a statistical sample of $`\sim `$ 350,000 real and 690,000 simulated muon pairs. Fig. 3 shows the comparison between the HEMAS expectation (MACRO composition model) and the MACRO data, properly corrected for detector effects: the agreement is impressive. The selection of high muon multiplicity events allows the study of very high energy primary cosmic rays. Muons with multiplicity $`N_\mu \ge 8`$ come from primary cosmic rays with energy $`E\ge 1000`$ TeV. The HEMAS expectation reproduces well the experimental data of this subsample of events too (Fig. 4). The two extreme composition models used are taken from . The comparison between data and Monte Carlo has been performed also in different windows of rock depth and cos$`\theta `$. Fig. 5 shows the average distance between muon pairs in these windows: again HEMAS reproduces the experimental data quite well.
Summarizing, the MACRO data showed that, as far as the lateral distribution of underground muons is concerned, the capability of HEMAS to reproduce the real data is impressive. Doubts raised by the HEMAS authors about a possible $`p_t`$ excess in the code are not supported by the MACRO data .
Nevertheless, since the indirect measurements aim to study the primary cosmic ray spectrum and composition, a delicate sector of Monte Carlo simulation tools is the “absolute” muon flux. It is of course a hard task to test the performance of the Monte Carlo codes experimentally, since the muon flux deep underground is the convolution of the cosmic ray spectrum and composition with the hadronic interaction and the shower development features. Since the Cosmic Ray spectrum is unknown, we cannot use the muon flux deep underground to test the Monte Carlo.
A step forward in this direction has been carried out by the MACRO and EAS-TOP Collaborations, with the so-called “anti-coincidence” analysis. By selecting muon events in MACRO pointing to a fiducial area well inside the EAS-TOP edges, it is possible to select two event samples:
a) if the number of fired detectors $`N_f`$ in EAS-TOP is $`<`$4, EAS-TOP does not provide any trigger and the event is flagged as an ’anti-coincidence’. The corresponding C.R. energy ranges between 2 and a few tens of TeV;
b) if 4$`<`$$`N_f`$$`<`$7, EAS-TOP provides a trigger and the event is flagged as a ’low energy coincidence’.
In the energy range covered by ’anti-coincidences’ and ’low energy coincidences’, direct measurements of the cosmic ray spectrum and composition are available. It is therefore possible to use these data as input to the Monte Carlo simulation to test the hadronic interaction model, by comparing the experimental data with the expectation. Single power law fits to the fluxes of H and He, as reported by JACEE, were used:
$$p:5.574\times 10^4(E/GeV)^{-2.86}(m^{-2}s^{-1}sr^{-1}GeV^{-1})$$
(8)
$$He:9.15\times 10^3(E/GeV)^{-2.86}(m^{-2}s^{-1}sr^{-1}GeV^{-1})$$
(9)
I stress that this analysis cannot be performed with MACRO alone, since low muon multiplicity events also receive contributions from higher energy cosmic rays ($`E\gtrsim 100`$ TeV), where the spectrum and the chemical composition have not been measured with direct techniques.
Table 3 shows the results of the analysis and the comparison between the real data and the Monte Carlo codes HEMAS and HEMAS-DPMJET.
Taking into account a 15-20$`\%`$ uncertainty in the JACEE data fits, the low energy coincidences are reproduced by both Monte Carlo codes. On the contrary, HEMAS underestimates the number of anti-coincidences with respect to the real data, while, within a 20$`\%`$ accuracy, HEMAS-DPMJET reproduces the experimental data.
It may seem puzzling that the HEMAS hadronic interaction model reproduces the experimental data at high energy ($`E\gtrsim 100`$ TeV) better than at lower energies (1 TeV $`\lesssim E\lesssim `$ 100 TeV), the latter being closer to the energy range already explored by accelerator experiments. It must be stressed that muons produced by the interaction of cosmic rays with energy $`E\sim `$ a few TeV come from the decay of pions with $`x_f=E_\pi /E_o\simeq 1`$. This is the so-called ’forward region’, poorly studied in accelerator experiments and therefore requiring an extrapolation in the Monte Carlo. As has been stressed in , the higher muon flux of DPMJET in this kinematical region reflects an intrinsic feature of this code, originating from the LUND treatment of the fast valence “diquark” fragmentation in the projectile. Fig. 6 shows the average number of muons surviving deep underground (h = 3400 $`\mathrm{hg}/\mathrm{cm}^2`$) as a function of the proton energy for different Monte Carlo codes. The main differences between these codes are in fact found at low energy, where each code has to extrapolate the collider results with some algorithm. From this point of view, models based on the Dual Parton Model can in principle take advantage of the limited number of free parameters, avoiding, at least in part, delicate extrapolations.
## 4 Conclusions
HEMAS is a fast Monte Carlo code for the simulation of the high energy muon and electromagnetic components of the air shower. The MACRO data confirm the HEMAS capability of reproducing the lateral distribution of muons detected deep underground.
The ’low energy coincidence’ analysis performed by the EAS-TOP and MACRO Collaborations pointed out a satisfactory agreement with the HEMAS and HEMAS-DPMJET codes, within the primary C.R. spectrum uncertainty; the ’anti-coincidence’ analysis suggested a possible HEMAS muon deficit at threshold energies ($`E_o\sim `$ a few TeV). An improvement of the agreement is found when using HEMAS interfaced with DPMJET.
## 5 Acknowledgments
I would like to thank my colleagues of the MACRO Collaboration, and especially the members of the muon working group, for fruitful discussions. Special thanks to G. Battistoni, C. Forti, J. Ranft and M. Sioli for their cooperation and suggestions.
Figure 1: Comparison of the cross section p-Air used by different Monte Carlo codes.
Figure 2: Relation between the muon parent mesons average $`P_t`$ and the average muon pair distance deep underground.
Figure 3: The decoherence function: comparison of the MACRO data with the HEMAS expectation.
Figure 4: Comparison of the MACRO data and HEMAS expectation for events with muon multiplicity $`N_\mu `$$``$8.
Figure 5: Comparison of the average separation between muon pairs in different rock depth and cos$`\theta `$ windows.
Figure 6: Average number of muons survived underground (3400 hg/c$`m^2`$) for different Monte Carlo codes as a function of proton energy. The same muon transport (PROPMU) has been applied in all runs.
## 1 Introduction
### 1.1 Direct Simulation Monte Carlo Method and Sequential Algorithm in Unsteady Molecular Gasdynamics
The Direct Simulation Monte Carlo (DSMC) method is the simulation of real gas flows with various physical processes by means of a huge number of modeling particles , each of which is a typical representative of a great number of real gas particles (molecules, atoms, etc.). The DSMC method conditionally divides the continuous process of particle movement and collisions into two consecutive stages (motion and collision) at each time step $`\mathrm{\Delta }t`$. The particle parameters (coordinates, velocity) are stored in the computer’s memory. To get information about the flow field, the computational domain has to be divided into cells. The results of the simulation are the averaged particle parameters in the cells.
The finite memory size and computer performance restrict the total number of modeling particles and cells. Macroscopic gas parameters, determined by the particle parameters in the cells at the current time step, are the result of the simulation. Fluctuations of the averaged gas parameters at a single time step can be rather high owing to the relatively small number of particles in the cells. So, when solving steady gasdynamic problems, we have to increase the time interval of averaging (the sample size) after the steady state is achieved in order to reduce the statistical error down to the required level. The averaging time step $`\mathrm{\Delta }t_{av}`$ has to be much greater than the time step $`\mathrm{\Delta }t`$ ($`\mathrm{\Delta }t_{av}\gg \mathrm{\Delta }t`$).
For DSMC of unsteady flows the value of the averaging time step $`\mathrm{\Delta }t_{av}`$ for a given problem at the current time $`t`$ has to meet the following requirement: $`\mathrm{\Delta }t_{av}\ll \mathrm{min}t_H(x,y,z,t)`$, where $`t_H`$ is the characteristic time of flow parameter variation. The choice of the value of $`t_H`$ is determined by the particular problem . In order to meet the condition for the averaging interval we have to carry out a sufficient number $`n`$ of statistically independent calculations (runs) to get the required sample size. This leads to an increase of the total calculation time, which is proportional to $`n`$ in the case of the sequential DSMC algorithm.
The general flowchart of the classic sequential algorithm is depicted in Fig. 1. The algorithm of DSMC of unsteady flows consists of two basic loops. In the first (inner) loop a single run of the unsteady process is executed. First, we generate particles at the input boundaries of the domain (subroutine `Generation`). Then we carry out the simulation of particle movement and surface interaction (subroutine `Motion`) and of the collision process (subroutine `Interaction`) for a given number of time steps $`\mathrm{\Delta }t`$. The sampling (subroutine `Sampling`) of flow macroparameters in cells is carried out at a given moment of the unsteady process. The inner loop itself is divided into two successive steps. At the first step we sequentially carry out the simulation for each of the $`N_p`$ particles independently. After the first step a special readdressing array, which determines the mutual correspondence of particles and cells, is formed (subroutines `Enumeration`, `Indexing`). We have to know the location of all particles in order to fill that array. At the second step we carry out the simulation for each of the $`N_c`$ cells independently. For $`t>\mathrm{\Delta }t_s`$ we accumulate statistical data on the flow parameters in cells.
The second (outer) loop repeats the unsteady runs $`n`$ times to get the desired sample size. Each run is executed independently of the previous ones. To make separate unsteady runs independent we have to shift the random number generator (`RNG`).
For each unsteady run three basic arrays (`P`, `LCR`, `C`) are required. The array `P` is used for storing information about particles. The array `LCR` is the readdressing array. The dimensions of these arrays are proportional to the total number of particles. The array `C` stores information about cells and macroparameters. The dimension of this array is proportional to the total number of cells of the computational grid. The DSMC method requires several additional arrays which reserve a much smaller memory size. The particles which abandon the domain are removed from the array `P`, whereas newly generated particles are inserted into it. Since the particles move from one cell to another, we have to rearrange the array `LCR` and update the array `C`. These procedures are performed at each time step $`\mathrm{\Delta }t`$.
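To make this data layout concrete, a schematic sketch of a single time step is given below; the roles of `P`, `LCR` and `C` follow the text, while the one-dimensional domain, the physics and all numerical values are illustrative placeholders rather than the actual subroutines.

```python
# Schematic single DSMC time step with the arrays described above;
# the motion and collision physics are placeholders, not the real code.
import numpy as np

rng = np.random.default_rng(0)
M = 8                                   # cells covering the 1-D domain [0, 1)
P = np.column_stack((rng.uniform(0.0, 1.0, 1000),      # particle positions
                     rng.uniform(-1.0, 1.0, 1000)))    # particle velocities
C = np.zeros(M)                         # cell array: sampled macroparameters

def time_step(P, C, dt=1e-3):
    P[:, 0] += P[:, 1] * dt             # stage 1: move each particle
    P = P[(P[:, 0] >= 0.0) & (P[:, 0] < 1.0)]   # particles leaving are removed
    cells = (P[:, 0] * M).astype(int)
    LCR = [np.flatnonzero(cells == m) for m in range(M)]  # readdressing array
    for m, idx in enumerate(LCR):       # stage 2: independent per-cell work
        C[m] += len(idx)                # placeholder for collisions/sampling
    return P, C

P, C = time_step(P, C)
```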
### 1.2 Parallelization methods for DSMC of gas flows
The feasibility of parallelization and the efficiency of parallel algorithms are determined both by the structure of the modeling process and by the architecture and characteristics of the computer (number of processors, memory size, etc.).
The development of any parallel algorithm starts with the decomposition of the general problem. The whole task is divided into a series of independent or slightly dependent sub-tasks which are solved in parallel. For direct simulation of gas flows there are different decomposition strategies, depending on the goals of the modeling and on the nature of the flow. The development of parallel algorithms for DSMC started not long ago (about 10 years ago). At the present time a common classification of the principal types of parallel algorithms has not been formed yet. However, one can point out several approaches to parallelizing DSMC, the efficiency of which has been proved in practice. Let us conditionally single out four types of parallel DSMC algorithms.
The first type is the parallelization by coarse-grained independent sub-tasks. This method has been realized in for parallelization of DSMC of unsteady problems. The algorithm consists in the reiteration of statistically independent modeling procedures (runs) of a given flow on several processors.
The second type is the spatial decomposition of the computational domain. The calculations in each of the regions are single sub-tasks which are solved in parallel. Each processor performs calculations for particles and cells in its own region. The transfer of particles is accompanied by data exchange between processors. Therefore, these sub-tasks are not independent.
This method of parallelization is at present the most widespread for parallel DSMC of both steady and unsteady flows . The main advantage of this approach is the reduction of the memory size required by each processor. This method can be used on computers with both local and shared memory. The method has drawbacks as the number of processors increases: the number of connections between regions grows, as does the relative amount of data to be exchanged between regions. The essential condition for high efficiency of this method is uniform load balancing and the minimization of data exchange. One can use static or dynamic load balancing to achieve good load balancing. Modern parallel algorithms of this type usually employ dynamic load balancing.
The third type is the algorithmic decomposition. This type of parallel algorithm consists in the execution of different parts of the same procedures on different processors. For the realization of these algorithms it is necessary to use a computer with an architecture adequate to the given algorithm. An example of this type of algorithm is data parallelization .
The fourth type is the combined decomposition, which includes all the types considered above. The decomposition of the computational domain with data parallelization is carried out in . In this paper we shall consider two-level algorithms which include methods of the first and third types.
### 1.3 Algorithm of Parallel Statistically Independent Runs (PSIR)
The statistical independence of single runs makes it possible to execute them in parallel. The general flowchart of the PSIR algorithm is depicted in Fig. 2. The implementation of this approach on a multiprocessor computer leads to a decrease of the number of iterations of the outer loop for every single processor ($`n/p`$ iterations for a $`p`$-processor computer). The data exchange between processors takes place after all calculations are finished. Only one processor sequentially analyzes the results after the data exchange. The range of efficient application of this algorithm is $`p\le n`$. The value of $`n`$ has to be a multiple of $`p`$ to get optimal speedup and efficiency.
All arrays (`P, LCR, C`, etc.) are stored locally for each run. This algorithm can be realized on computers with any type of memory (shared or local). Message passing is used to perform data exchange on computers with local memory. The scheme of memory usage is presented in Fig. 3. The required memory size for this algorithm is proportional to $`p`$.
The speedup $`S_p`$ and the efficiency $`E_p`$ of a parallel algorithm with a parallel fraction of computational work $`\alpha `$ on a computer with $`p`$ processors are as follows :
$$S_p(p,\alpha )=\frac{T_1}{T_p},$$
(1)
$$E_p(p,\alpha )=\frac{S_p}{p},$$
(2)
where $`T_1`$ is the execution time of the sequential algorithm and $`T_p`$ is the execution time of a given parallel algorithm on a computer with $`p`$ processors ($`p`$ is the number of reserved processors). In this paper we use a model of the computational process which assumes that there is some parallel fraction $`\alpha `$ of the total calculations and a sequential fraction $`(1-\alpha )`$. The parallel and sequential calculations do not overlap.
The value of $`T_p`$ is given by
$$T_p=[(1-\alpha )+\alpha /p]T_1.$$
(3)
To get the value of $`\alpha `$ one may use a profiler. The final formulas for $`S_p`$ and $`E_p`$ are as follows:
$$S_p(p,\alpha )=\frac{p}{p-\alpha (p-1)},$$
(4)
$$E_p(p,\alpha )=\frac{1}{(1-\alpha )p+\alpha }.$$
(5)
The formula (4) presents a simple and general relation, called Amdahl’s law. According to this law, the speedup upper limit as $`p\to \infty `$ for an algorithm which has two non-overlapping parallel and sequential parts is as follows:
$$S_p(p,\alpha )\le \frac{1}{1-\alpha }.$$
(6)
To speed up the calculations we have to speed up the parallel computations; however, the remaining sequential part slows down the overall computing process to an ever greater extent. Even a small sequential fraction may greatly reduce the overall performance.
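A quick numerical illustration of Eqs. (4)–(6) is given by the following minimal sketch (the function names and sample values are ours):

```python
# Amdahl's law, Eqs. (4)-(6): speedup and efficiency for p processors
# and parallel fraction alpha. Names and values are illustrative.
def speedup(p, alpha):
    return p / (p - alpha * (p - 1))        # Eq. (4)

def efficiency(p, alpha):
    return speedup(p, alpha) / p            # Eq. (5)

for alpha in (0.9, 0.99, 0.999):
    # the upper limit for p -> infinity is 1/(1 - alpha), Eq. (6)
    print(alpha, round(speedup(1000, alpha), 1),
          round(efficiency(1000, alpha), 3), round(1.0 / (1.0 - alpha), 1))
```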
Figure 4 shows the speedup $`S_p`$ as a function of the number of processors $`p`$ and the parallel fraction $`\alpha `$. The efficiency $`E_p`$ as a function of $`\alpha `$ is shown in Fig. 5. Sequential computations affect the speedup and efficiency particularly in the region $`\alpha >0.9`$. Therefore, even a small decrease of the sequential computations in algorithms with a high parallel fraction makes the speedup and efficiency increase abruptly (at relatively high $`p`$).
The PSIR algorithm is coarse-grained and has high efficiency and a great degree of parallelism compared to any other parallel algorithm for DSMC of unsteady flows for a number of processors $`p\le n`$. The maximum value of the speedup for this algorithm can be obtained at $`p=n`$. The speedup potential offered by the computer is surplus for $`p>n`$. Thus, the PSIR algorithm for DSMC of unsteady flows has the following range of efficient usage: $`n\gg 1`$ and $`n\ge p`$. The value of the parallel fraction $`\alpha `$ can be very high (up to $`0.99÷0.999`$) for typical problems of molecular gasdynamics . The corresponding speedup is $`100÷1000`$. To get an efficiency $`E_p\ge 0.5`$ at $`n=100÷1000`$ it is necessary to have $`p=100÷1000`$, respectively.
### 1.4 Data Parallelization (DP) of DSMC
The computing time of each DSMC problem is determined by the time of the inner loop (1). The duration of this loop depends on the number of particles in the domain and on the number of cells. It was stated above that the inner loop consists of two consecutive stages. The data inside each stage are independent. The elements `P[k]` are processed at the first stage, whereas the elements `C[k]` are processed at the second one (the elements of the arrays `P` and `C` are mutually independent). Since the operations on each of these elements are independent, it is possible to process them in parallel. Each processor takes elements from the particle array `P` and the cell array `C` according to its unique ID-number, i.e. the $`m`$-th processor takes the $`m`$-th, ($`m+p`$)-th, ($`m+2p`$)-th, etc. elements, where “$`m`$” is the processor ID-number. This rule of particle selection provides good load balancing because various particles require different times to process and they are located randomly in the array `P` .
The synchronization of the processors is performed before the next loop iteration starts. Before the second stage begins it is necessary to fill the readdressing array `LCR`. The complete information about the array `P` is required for the readdressing procedure. This task cannot be parallelized, so it is performed by one processor. There are two synchronization points, one before the readdressing and one after it. The reduction of the computational time is due to the decrease of the amount of data which has to be processed by each processor ($`N_p/p`$ and $`N_c/p`$ instead of $`N_p`$ and $`N_c`$). After the inner loop is passed, the processors also need to be synchronized. Figure 6 shows the general flowchart of the DP algorithm.
The data from the array `P` are required to perform the operations on the elements of the array `C`. These data are located in the array `P` randomly. The arrays are stored in the shared memory in order to reduce the large data exchange between processors. Memory conflicts (several processors reading the same array element) are excluded by the algorithm. The semaphore technique is used for processor synchronization. The scheme of memory usage is depicted in Fig. 7.
## 2 Algorithm of Two-Level Parallelization with Static Load Balancing
It was stated above that the potential of the multiprocessor system is surplus for the realization of the PSIR algorithm when the required number of statistically independent runs $`n`$ is significantly less than the number of processors $`p`$ ($`n\ll p`$). In this case the efficient usage of the computer resources of a $`p`$-processor system can be provided by the implementation of an algorithm of two-level parallelization (TLP algorithm). The general flowchart of the TLP algorithm is shown in Fig. 8. The first level of parallelization corresponds to the PSIR algorithm; data parallelization is employed for the second level inside each independent run. The TLP algorithm is a parallel algorithm with static load balancing.
The scheme of memory usage for the TLP algorithm is depicted in Fig. 9. This algorithm requires a memory size proportional to the number of first-level processors which compute single runs (just the same as for the PSIR algorithm). It also requires the arrays for each run to be stored in the shared memory, as for the data parallelization algorithm, in order to reduce the data exchange time between processors.
The speedup and the efficiency of the TLP algorithm are governed by the following equations:
$`S_p`$ $`=`$ $`S_{p_1}S_{p_2}=\frac{p_1}{p_1-\alpha _1(p_1-1)}\frac{p_2}{p_2-\alpha _2(p_2-1)},`$ (7)
$`E_p`$ $`=`$ $`E_{p_1}E_{p_2}={\displaystyle \frac{S_p}{p_1p_2}},`$ (8)
where the indices ‘1’ and ‘2’ correspond to the parameters on the first and the second level, respectively.
Figure 10 shows the detailed flowchart of the TLP algorithm for unsteady flow simulation. There are five synchronization points in the algorithm. Four of them correspond to the DP algorithm. The last synchronization has to be done after the termination of all runs. The synchronization is implemented with the aid of the semaphore technique. In this version the iterations of the outer loop (2) are fully distributed between the first-level processors. This algorithm requires $`n`$ to be a multiple of $`p`$ for a uniform distribution of the computer resources between single runs. In order to make the runs statistically independent we have to shift the random number generator in each run.
An HP/Convex Exemplar SPP-1600 system with 8 processors, 2 Gb of memory and a peak performance of 1600 Mflops was used for the algorithm tests.
To simulate the conditions of a single user in the system we measured the execution time of the parent process, which performs the start-up initialization before forking the child processes and the data processing after the parallel code has been passed (this process has the maximum execution time).
The amounts of parallel and sequential code were obtained from the program profiling data using the standard `cxpa` utility.
The simulation of an unsteady 3-D water vapor flow in the inner atmosphere of a comet was carried out in order to study the speedup and the efficiency which this algorithm yields. The number of first-level processors $`p_1`$ was fixed and equal to 6. The number of second-level processors $`p_2`$ was varied from 1 to 6. The values of the parallel fractions $`\alpha _1`$ and $`\alpha _2`$ were 0.998 and 0.97, respectively. Figure 11 depicts the experimental results (circles) and the theoretical curves for the speedup and efficiency as functions of the total number of processors $`p=p_1p_2`$. The same figure shows the values (marked by a cross) of the speedup and efficiency of the PSIR algorithm (the TLP algorithm turns into the PSIR algorithm at $`p_2=1`$).
Thus, the TLP algorithm makes it possible to significantly reduce the computational time required for DSMC of unsteady flows using multiprocessor computers with shared memory. The range of efficient usage of this algorithm is given by the condition $`n\ll p`$. Moreover, the number of processors $`p`$ has to be a multiple of $`n`$ in order to provide good load balancing.
## 3 Algorithm of Two-Level Parallelization with Dynamic Load Balancing
The TLP algorithm with static load balancing described in Section 2 has several drawbacks. It does not provide good load balancing (hence, one may get low efficiency) in the following cases:
1. the ratio $`p/p_1`$ is not an integer (some of the processors are not used);
2. each run has non-parallelized code with a total sequential fraction equal to $`\beta _{*}`$, which depends on the starting sequential fraction $`\beta =1-\alpha `$ and the number of processors $`p_2`$:
$$\beta _{*}=\frac{\beta }{\beta +\frac{1-\beta }{p_2}}.$$
(9)
At small values of $`\alpha `$ or large values of $`p_2`$ some processors may be idle in each run. This leads to inefficient usage of the computer resources for high values of $`p_1`$.
An increase in efficiency can be obtained by the usage of dynamic load balancing with the aid of dynamic processor reallocation (DPR). The idea of the algorithm is as follows. Let us conditionally divide all available processors into two parts: $`p_1`$ leading processors and supporting processors which form the so-called “heap” (the number of heap-processors is $`p-p_1`$). Each leading processor is responsible for its own run. This algorithm is similar to that of TLP, but here there is no hard link of heap-processors to a specific run. Each leading processor reserves the required number of heap-processors before starting parallel computations (according to a special allocation algorithm). After exiting from the parallel procedure the leading processor releases the allocated heap-processors. This algorithm makes it possible to use idle processors more efficiently; in fact, this leads to the execution of the parallel code with the aid of more processors than in the case of the TLP algorithm with static load balancing. The flowchart of the TLPDPR algorithm is presented in Fig. 12.
The speedup which this algorithm yields is determined by the following basic parameters: the total available number of processors in the system $`p`$, the required number of independent runs $`p_1=n`$ ($`p_1\le p`$), the sequential fraction of computational work in each run $`\beta `$, and the heap-processor allocation algorithm. In this paper we use the following allocation algorithm:
$$p_2^{*}=(1+\mathrm{PRI})p_2,\qquad \mathrm{PRI}=0\mathrm{\dots }\mathrm{PRI}^{*},$$
(10)
where $`p_2^{*}`$ is the actual number of second-level processors, $`\mathrm{PRI}`$ is a parameter which is estimated from experimental results for similar problems, and $`\mathrm{PRI}^{*}`$ is the estimated upper limit of the efficient range of the parameter $`\mathrm{PRI}`$.
If $`p`$ is a multiple of $`p_1`$ and the value of $`\mathrm{PRI}`$ is equal to 0, this algorithm turns into the TLP algorithm. The speedup on the second level $`S_{p_2}`$ is governed by the following equation:
$$S_{p_2}=\frac{1}{\beta +\frac{1-\beta }{p_2(1+\mathrm{PRI})}}.$$
(11)
When the parameter $`\mathrm{PRI}`$ exceeds a threshold, the speedup $`S_{p_2}`$ decreases. This decrease is not governed by (11), owing to the overstated demands made by the allocation algorithm on the system resources. As a result, this leads to worse load balancing. The upper limit of the efficient range of the parameter $`\mathrm{PRI}`$ can be estimated from the following condition:
$$1+\frac{p-p_1}{(1-\beta _{*})p_1}=(1+\mathrm{PRI}^{*})p_2.$$
(12)
It means that we have to find such a value of the parameter $`\mathrm{PRI}`$ for which all the processors idle at a given moment are distributed uniformly among the runs which perform parallel computations. The condition for $`\mathrm{PRI}^{*}`$ as a function of $`\beta `$ and $`p_2`$ can be derived from (9) and (12):
$$\mathrm{PRI}^{*}=\frac{\beta }{1-\beta }(p_2-1).$$
(13)
The expressions discussed above are undoubtedly correct for $`p_1\gg 1`$. The value of $`S_{p_2}`$ at $`\mathrm{PRI}=\mathrm{PRI}^{*}`$ gives the upper limit of the speedup for a given problem.
To study the characteristics of the TLPDPR algorithm we solved the problem of an unsteady flow past a body. The value of the sequential fraction was $`\beta =0.437`$, with $`p_1=6`$. The speedup as a function of $`p_2`$ ($`p_2=1\mathrm{\dots }6`$) for $`\mathrm{PRI}=0`$ and $`\mathrm{PRI}=\mathrm{PRI}^{*}`$ is depicted in Fig. 13. The same figure shows the results of the calculation for $`\mathrm{PRI}=0`$; the dot (marked by an asterisk) corresponds to the optimal value of the parameter $`\mathrm{PRI}`$ for $`p_2=6`$ ($`p=36`$). The maximum speedup $`S_{p_2}`$ with an unlimited degree of parallelism ($`p_1\to \infty `$), which can be estimated by formula (11), comes to 2.3. The TLP algorithm gives a speedup ($`\beta =0.437`$, $`p_1=6`$, $`p=36`$) which is 80% of the maximum value. At the optimum value of the parameter $`\mathrm{PRI}`$, the TLPDPR algorithm gives 93% for the same case. This is equivalent to the usage of the TLP algorithm on a 120-processor computer ($`p=120`$, $`p_1=6`$, $`p_2=20`$). Figure 14 shows the speedups of the TLP and TLPDPR algorithms as functions of $`p_2`$ for various $`\beta `$.
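These quoted numbers can be checked directly against Eqs. (11) and (13); below is a small sketch (function names are ours). Its ideal estimate at $`\mathrm{PRI}^{*}`$ lies slightly above the measured 93%, as expected, since Eq. (11) neglects the allocation overhead discussed below.

```python
# Checking the flow-past-a-body example (beta = 0.437, p2 = 6)
# against Eqs. (11) and (13); function names are illustrative.
def s_p2(p2, beta, pri=0.0):
    return 1.0 / (beta + (1.0 - beta) / (p2 * (1.0 + pri)))   # Eq. (11)

def pri_star(p2, beta):
    return beta / (1.0 - beta) * (p2 - 1)                     # Eq. (13)

beta, p2 = 0.437, 6
s_max = 1.0 / beta                           # limiting speedup, ~2.3
s_tlp = s_p2(p2, beta)                       # static TLP: ~82% of s_max
s_dpr = s_p2(p2, beta, pri_star(p2, beta))   # ideal TLPDPR at PRI*
print(round(s_max, 2), round(s_tlp / s_max, 2), round(s_dpr / s_max, 2))
```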
The essential question one can raise about the usage of the TLPDPR algorithm is how to determine the optimal value of the parameter $`\mathrm{PRI}`$ a priori. The value given by (13) determines the upper limit of the efficient range of the parameter $`\mathrm{PRI}=1\mathrm{\dots }\mathrm{PRI}^{*}`$. The study of the influence of the parameter $`\mathrm{PRI}`$ on the speedup is presented in Fig. 15 for $`p_1=6`$, $`p_2=6`$ ($`p=36`$). The formula (11) gives a good approximation of the experimental results for the initial range of the parameter $`\mathrm{PRI}`$. Beyond it, we see the decrease of the speedup predicted above, owing to the mismatch between the available and required system resources. The latter can be explained in the following manner. In (11) it is supposed that released heap-processors are instantly allocated to the other runs. Actually, these processes do not coincide in time; therefore the condition (11) requires a probability coefficient which is a function of the parameters of the problem and of the computer. This coefficient has to determine the probability of meeting the requirements for system resources while allocating heap-processors.
The great flexibility of this algorithm allows its efficient usage for the calculation of both steady and unsteady problems. In the case of steady-state modeling it is possible to perform an additional ensemble averaging with a smaller number of modeling particles. This can lead to a shorter computation time compared to the DP algorithm. The implemented TLPDPR algorithm has the following advantages compared to the TLP algorithm with static load balancing:
* the TLPDPR algorithm makes it possible to minimize the latency time of the processors, which provides better load balancing;
* better load balancing makes it possible to get higher speedups under the same conditions.
## Acknowledgement
This work has been supported by the Natural Sciences and Engineering Research Council of Canada.
# Stochastic multiplicative processes with reset events
## Acknowledgements
Financial support from Fundación Antorchas, Argentina, and from Alexander von Humboldt Foundation, Germany (SCM) is gratefully acknowledged.
# Intermittency and Correlations in Hadronic Z0 Decays
## 1 Introduction
Particle density fluctuations of hadronic final states produced in high-energy collisions have been extensively investigated in the last decade. For recent reviews, see e.g. Refs. . Dynamical (i.e., non-statistical) fluctuations were observed, establishing the phenomenon of intermittency, the increase of factorial moments with decreasing bin size . The intermittency approach of studying the distributions of particles in restricted regions of phase space allows a detailed analysis of the dynamics of hadroproduction. Furthermore, the behaviour of the factorial moments shows the self-similar nature of density fluctuations, i.e., the particle distributions show similar fluctuations on all resolution scales, a characteristic of fractals .
Despite numerous experimental and theoretical studies, the origin of intermittency remains unclear, although important features of this effect have been observed . For example, experimental investigations have shown an enhancement of the phenomenon in e<sup>+</sup>e<sup>-</sup> annihilation as compared to hadronic and nuclear collisions. Furthermore, larger intermittency effects have been observed when several dynamical variables are considered together, as compared to the effect seen in one-dimensional analyses. Existing Monte Carlo (MC) models, which use parton shower simulations and differing fragmentation and hadronization models, simulate most details of e<sup>+</sup>e<sup>-</sup> collisions and the general properties of hadronic interactions well, but fall short in predicting the intermittent structure found in the data, both in e<sup>+</sup>e<sup>-</sup> collisions and in hadronic interactions. Theoretical approaches have not clarified the origin of the observed dynamical fluctuations. The intermittent behaviour of particle distributions may prove to be a strong test of QCD, which already provides guidelines for explaining the “soft” character of intermittency .
The goal of this study is to investigate the dynamical correlations of many-particle systems produced in e<sup>+</sup>e<sup>-</sup> annihilation. One must be careful to separate out the effects of lower-order correlations when searching for higher-order ones. For example, a correlation in the production of pairs of particles in neighboring regions of phase space will necessarily induce correlations when particles are considered three at a time. It has been suggested that intermittency should therefore be analysed in terms of the factorial cumulant moments to reveal “genuine” multiparticle correlations by not being sensitive to the contributions of lower-order correlations . The investigations carried out for heavy-ion reactions have not shown any correlations higher than two-particle ones, while in studies of hadron-hadron collisions significant higher-order correlations have been observed, although the latter have been seen to weaken with increasing multiplicity at a fixed centre-of-mass energy . These effects have been explained as a consequence of the events consisting of superpositions of multiple independent particle sources . These findings suggest that interactions with a low number of very hard scattering processes, such as high-energy e<sup>+</sup>e<sup>-</sup> annihilation, might be more sensitive to genuine multiparticle correlations.
This paper describes the study of the intermittency phenomena and the genuine multiparticle correlations of charged particles in the three-dimensional phase space of rapidity, transverse momentum, and azimuthal angle, as defined in Section 3. This analysis uses more than four million multihadronic events recorded by the OPAL detector at the LEP collider with $`\sqrt{s}\approx m_{\mathrm{Z}^0}`$. This data sample is much larger than that used in OPAL’s previous publication , and in other e<sup>+</sup>e<sup>-</sup> investigations carried out at the Z<sup>0</sup> peak and at lower energies . The statistical precision of this data sample allows us to extend the former intermittency analysis to a multidimensional one, with the possibility of reaching high-order fluctuations in very small bins. With this high statistics we are able for the first time to search for genuine multiparticle correlations by means of normalized factorial cumulants in e<sup>+</sup>e<sup>-</sup> collisions.
The paper is organised as follows. In Section 2 the normalized factorial moments and factorial cumulant moments are introduced and the technique of extracting dynamical fluctuations and genuine multiparticle correlations is given. The detector, data sample and correction procedure are described in Section 3. The results and their comparison with MC predictions are presented and discussed in Section 4. In Section 5 both a summary and conclusions are presented.
## 2 The method
### 2.1 Scaled factorial moments and intermittency
To search for local dynamical fluctuations we use the normalized scaled factorial moments introduced in Ref. . The moments are defined as
$$F_q=𝒩^q\langle \overline{n_m^{[q]}}\rangle /\overline{N}_m^{[q]}.$$
(1)
Here $`n_m^{[q]}`$ is the $`q`$th order factorial multinomial, $`n_m(n_m-1)\mathrm{\cdots }(n_m-q+1)`$, with $`n_m`$ particles in the $`m`$th bin of the phase space (e.g. rapidity interval) divided into $`M`$ equal bins. $`N_m`$ is the number of particles in the $`m`$th bin summed over all the $`𝒩`$ events in the sample, $`N_m=\sum _{j=1}^𝒩(n_m)_j`$. The bar indicates averaging over the bins in each event, $`(1/M)\sum _{m=1}^M`$ (“horizontal” averaging), while the angle brackets denote averaging over the events (“vertical” averaging).
The moments in Eq. (1) are given in the modified form, in contrast to those used in the earlier e<sup>+</sup>e<sup>-</sup> studies . This form has been suggested in Ref. to take into account the bias, arising from the assumption of infinite statistics in the normalization calculation, which affects the moments, particularly those computed in small bins.
These moments, defined in Eq. (1), are the so-called “horizontal” moments , where the fluctuations are first averaged over all $`M`$ bins in each event and then the average over all events is taken (for a survey of the types of factorial moments see Ref. ). These moments are determined for a flat-shape single-particle distribution. In order to account for non-flatness, we apply the correction procedure proposed in Refs. , so that the corrected modified factorial moments are given in the reduced form,
$$F_q^C=F_q/R_q,R_q=\overline{N_m^{[q]}}/\overline{N}_m^{[q]},$$
(2)
where $`R_q`$ is the correction factor, and it is equal to unity for a flat single-particle distribution.
The non-statistical fluctuations have been shown to lead to increasing factorial moments with increasing number (decreasing size) of the phase-space regions, or bins. Such an increase, expressed as a scaling law,
$$F_q(M)\propto M^{\phi _q}\quad (M\to \infty ),\qquad 0<\phi _q\le q-1,$$
(3)
indicates the presence of self-similar dynamics. This increase is called intermittency and the powers $`\phi _q`$, or intermittency slopes, show the strength of the effect. The size of the smallest bin under investigation is limited by the characteristic correlation length (saturation effect) and/or by the apparatus resolution . In practice, saturation happens much earlier because of statistical limitations (the “empty bin effect” ), which we here attempt to reduce by using the modified moments. Therefore, in the following we use the term “intermittency” to refer only to the rise of the factorial moments with decreasing bin size.
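As an illustration of Eqs. (1)–(2), the following minimal sketch (the array layout and names are ours, and the placement of the event average follows our reading of the definitions) computes the corrected moments for a toy sample; uncorrelated Poisson events give $`F_q\approx 1`$ on all scales, as expected in the absence of dynamical fluctuations.

```python
# Modified horizontal factorial moments, Eqs. (1)-(2); a sketch with
# illustrative names, not the OPAL analysis code.
import numpy as np

def falling(n, q):
    """Factorial product n(n-1)...(n-q+1), elementwise."""
    n = np.asarray(n, dtype=float)
    out = np.ones_like(n)
    for i in range(q):
        out = out * (n - i)
    return out

def F_q_corrected(counts, q):
    """counts: (n_events, M) integer array of bin contents n_m."""
    n_events = counts.shape[0]
    N_m = counts.sum(axis=0)                        # N_m, summed over events
    num = falling(counts, q).mean(axis=1).mean()    # event avg of bin avg
    F = n_events**q * num / falling(N_m.mean(), q)  # Eq. (1)
    R = falling(N_m, q).mean() / falling(N_m.mean(), q)   # Eq. (2)
    return float(F / R)

rng = np.random.default_rng(0)
toy = rng.poisson(2.0, size=(20000, 32))            # uncorrelated toy events
print([round(F_q_corrected(toy, q), 3) for q in (2, 3, 4)])
```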
### 2.2 Factorial cumulant moments and genuine multiparticle correlations
To extract the genuine multiparticle correlations, the technique of normalized factorial cumulant moments, or cumulants, $`K_q`$, proposed in Ref. , is used. The cumulants are constructed from the $`q`$-particle cumulant correlation functions which vanish whenever one of their arguments becomes independent of the others, so that they measure the genuine correlations. The factorial cumulants remove the influence of the statistical component of the correlations in the same way as the factorial moments.
Similarly to the factorial moments, we use the corrected modified cumulants, defined as
$$K_q^C=𝒩^q\overline{k}_q^{(m)}/\overline{N_m^{[q]}}.$$
(4)
The normalization factor, $`\overline{N_m^{[q]}}`$ (instead of $`\overline{N}_m^{[q]}`$), comes from the correction procedure expressed in Eq. (2) and takes into account the non-flat shape of the single-particle distribution. The $`k_q^{(m)}`$ factors are the unnormalized factorial cumulant moments or the Mueller moments , and represent genuine $`q`$-particle correlations where the lower-order contributions are eliminated by subtracting the appropriate combinations of the unnormalized factorial moments, $`n_m^{[p]}`$, of order $`p<q`$ from the $`q`$th order one, e.g.
$$k_3^{(m)}=\langle n_m^{[3]}\rangle -3\langle n_m^{[2]}\rangle \langle n_m\rangle +2\langle n_m\rangle ^3.$$
(5)
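For the lowest orders, the Mueller moments can be assembled directly from the event-averaged falling factorials. The sketch below again assumes the (events × bins) count array used above; it is an illustration, not the production code of the analysis.

```python
import numpy as np

def mueller_cumulants(counts):
    """Per-bin unnormalized factorial cumulants k_2^(m) and k_3^(m),
    cf. Eq. (5), from event-averaged falling factorials f_q = <n_m^[q]>."""
    n = counts.astype(float)
    f1 = n.mean(axis=0)
    f2 = (n * (n - 1)).mean(axis=0)
    f3 = (n * (n - 1) * (n - 2)).mean(axis=0)
    k2 = f2 - f1**2
    k3 = f3 - 3.0 * f2 * f1 + 2.0 * f1**3
    return k2, k3
```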
In order to find contributions from genuine multiparticle correlations to the factorial moments we use the relations between the moments and the cumulants ,
$`F_2`$ $`=`$ $`K_2+1,`$
$`F_3`$ $`=`$ $`K_3+3K_2+1,`$ (6)
$`F_4`$ $`=`$ $`K_4+4K_3+3\overline{(K_2^{(m)})^2}+6K_2+1,`$
$`F_5`$ $`=`$ $`K_5+5K_4+10\overline{K_3^{(m)}K_2^{(m)}}+10K_3+15\overline{(K_2^{(m)})^2}+10K_2+1,\mathrm{etc}.`$
with $`K_q^{(m)}=k_q^{(m)}/\langle n_m\rangle ^q`$. Here and below in this section, for brevity we omit the superscript $`C`$. These relations are exact for the “vertical” moments and cumulants, and the errors introduced by using them with corrected horizontal moments and cumulants are negligibly small, except when very high orders are considered for variables whose single-particle distributions are markedly non-uniform .
The composition of the $`q`$-particle dynamical fluctuations from the genuine lower-order $`p`$-particle correlations is tested by the comparison of the $`q`$th order factorial moments with the $`p`$-particle contribution $`F_q^{(p)}`$ calculated by the above Eqs. (6), which are truncated up to the $`K_p`$-terms. The excess of $`F_q`$ over $`F_q^{(p)}`$ demonstrates the importance of correlations of order higher than $`p`$ in the measured $`q`$-particle fluctuations.
For example, the two-particle contribution $`F_4^{(2)}`$ to the fourth-order factorial moment can be expressed as
$$F_4^{(2)}=3\overline{(K_2^{(m)})^2}+6K_2+1,$$
(7)
while the contribution from two and three-particle correlations, $`F_4^{(2+3)}`$, is
$$F_4^{(2+3)}=4K_3+3\overline{(K_2^{(m)})^2}+6K_2+1.$$
(8)
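In code, the truncated contributions of Eqs. (7) and (8) amount to summing the corresponding cumulant terms. A sketch, where `K2_m` holds the bin-wise normalized second-order cumulants and `K3` the bin-averaged third-order cumulant (both assumed to have been computed as above):

```python
import numpy as np

def truncated_F4(K2_m, K3):
    """Two- and (two+three)-particle contributions to F_4, Eqs. (7)-(8)."""
    K2 = K2_m.mean()
    F4_2 = 3.0 * np.mean(K2_m**2) + 6.0 * K2 + 1.0
    F4_23 = F4_2 + 4.0 * K3
    return F4_2, F4_23
```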
## 3 Experimental details
The data used in this study were recorded with the OPAL detector at the LEP e<sup>+</sup>e<sup>-</sup> collider at CERN. The analysis is restricted to charged particles measured in the central tracking chambers. The inner vertex detector has a high precision in impact parameter reconstruction. The large diameter jet chamber and outer layer of longitudinal drift chambers allow accurate measurements in the planes perpendicular and parallel to the beam axis. The jet chamber provides up to 159 space points per track, and allows particle identification by measuring the ionisation energy loss, $`d`$E/$`d`$x, of charged particles . All tracking chambers are surrounded by a solenoidal coil providing a magnetic field of 0.435 T along the beam axis. The resolution of the component of momentum perpendicular to the beam axis is $`\sigma (p_t)/p_t=\sqrt{(0.0023p_t)^2+(0.018)^2}`$ for $`p_t`$ in GeV/$`c`$, and the resolution in the angle $`\theta `$ between the charged particle’s direction and the electron beam is $`\sigma (\theta )=5`$ mrad within the acceptance of the analysis presented here. In multihadronic events, the ionisation energy loss measurement has been obtained with a resolution of $`3.5`$% for tracks with 159 measured points.
The present study was performed with a sample of approximately $`4.1`$$`\times `$$`10^6`$ hadronic Z<sup>0</sup> decays collected from 1991 through 1995. About 96% of this sample was collected at the Z<sup>0</sup> peak energy and the remaining part was collected within $`\pm 3`$ GeV of the peak. Over 98% of charged hadrons were detected.
The event selection criteria are based on the multihadronic event selection algorithms described in Refs. , and are similar to those used in other LEP intermittency studies .
For each event, “good” charged tracks were accepted if they
* had at least 20 measured points in the jet chamber;
* had a first measured point closer than 70 cm from the beam axis;
* passed within 5 cm of the e<sup>+</sup>e<sup>-</sup> collision point in the projection perpendicular to the beam axis, with the corresponding distance along the beam axis not exceeding 40 cm;
* had a momentum component transverse to the beam direction greater than 0.15 GeV/$`c`$;
* had a momentum smaller than 10 GeV/$`c`$;
* had a measured polar angle of the track with respect to the beam direction satisfying $`|\mathrm{cos}\theta |<0.93`$;
* had a mean energy loss, dE/dx, in the jet chamber smaller than 9 keV/cm to reject electrons and positrons.
Selected multihadron events were required to have
* at least 5 good tracks;
* a momentum imbalance, defined as the magnitude of the vector sum of the momenta of all charged particles, smaller than $`0.4\sqrt{s}`$;
* the sum of the energies of all tracks (assumed to be pions) greater than 0.2$`\sqrt{s}`$;
* $`|\mathrm{cos}\theta _S|<0.7`$, where $`\theta _S`$ is the polar angle of the event sphericity axis with respect to the beam direction. The sphericity axis is calculated using all good tracks and electromagnetic and hadronic calorimeter clusters.
The first three requirements provide rejection of background from non-hadronic Z<sup>0</sup> decays, two-photon events, beam-wall interactions, and beam-gas scattering. The last requirement ensures that the event is well contained in the most sensitive volume of the detector. A total of about $`2.3`$$`\times `$$`10^6`$ events were selected for further analysis.
We also used two samples of about 2$`\times `$$`10^6`$ simulated events each, generated at Z<sup>0</sup> peak energy using the following MC generators:
* Jetset version 7.4 with the parton shower followed by string formation and fragmentation,
* Herwig version 5.9 with the parton shower followed by cluster fragmentation.
The parameters of both MC codes were tuned with OPAL data to provide a good description of the distributions of the measured event-shape variables and single-particle kinematic variables.
In this study we chose rapidity, azimuthal angle and transverse momentum as individual track kinematic variables. These are frequently used in multihadronic studies and, in particular, for intermittency and correlation analyses . To make our study compatible with other investigations carried out in e<sup>+</sup>e<sup>-</sup> annihilations, these variables are calculated with respect to the sphericity axis as follows:
* Rapidity, $`y=0.5\mathrm{ln}[(E+p_{\parallel })/(E-p_{\parallel })]`$ with $`E`$ and $`p_{\parallel }`$ being the energy (assuming the pion mass) and longitudinal momentum of the particle, in the interval $`-2.0\le y\le 2.0`$.
* Transverse momentum in the interval $`-2.4\le \mathrm{ln}(p_T)\le 0.7`$. The log scale is introduced, since the exponential shape of the $`p_T`$-distribution causes instability in the average multiplicity calculations, even for $`p_T`$ bins of intermediate size.
* Azimuthal angle, $`\mathrm{\Phi }`$, calculated with respect to the eigenvector of the momentum tensor having the smallest eigenvalue, in the plane perpendicular to the sphericity axis. The angle $`\mathrm{\Phi }`$ is defined in the interval $`0\le \mathrm{\Phi }\le 2\pi `$.
The single-particle distributions of the data sample and of the MC (corrected to the hadron level, see below) are shown in Fig. 1. In the following study the maximum number of bins is taken to be $`M_{\mathrm{max}}`$=400, so that the one-dimensional minimal bin sizes of the above kinematic variables are: $`\delta y_{\mathrm{min}}=0.01`$, $`\delta \mathrm{\Phi }_{\mathrm{min}}=0.9^{\circ }`$ and $`\delta (\mathrm{ln}p_T)_{\mathrm{min}}\approx 0.008`$. $`M_{\mathrm{max}}`$ is the same as was chosen in our previous intermittency publication and is the largest value of $`M`$ used so far. The experimental resolution of each variable was estimated by a MC simulation. It was found that the OPAL detector allows the study of an intermittency signal down to distances of the above mentioned minimal bin sizes, although detector effects become important for bin sizes less than 0.04 in rapidity, smaller than $`3^{\circ }`$ in azimuthal angle and less than 0.02 in $`\mathrm{ln}p_T`$. The distributions in several dimensions have the advantage that the event phase space may be subdivided into many more bins than $`M_{\mathrm{max}}`$ before the limits of detector resolution are reached. As in our former analysis, the smallest bin sizes used are found not to affect the observations.
To correct the measured moments for the effects of geometrical acceptance, kinematic cuts, initial-state radiation, resolution and particle decays, we apply the correction procedure adopted in our earlier factorial moment study and in analogous investigations done with e<sup>+</sup>e<sup>-</sup> annihilation. Two samples of multihadronic events were generated using the Jetset 7.4 MC program. The first sample does not include the effects of initial-state radiation, and all particles with lifetimes longer than 3$`\times `$$`10^{-10}`$ seconds were regarded as stable. The generator-level factorial moments and cumulants are calculated directly from the charged particle distributions of this sample without any selection criteria. The second sample was generated including the effects of finite lifetimes and initial-state radiation and was passed through a full simulation of the OPAL detector . The corresponding detector-level moments were calculated from this set using the same reconstruction and selection algorithms as used for the measured data. The corrected moments were then determined by multiplying the measured ones by the factor
$$U_q(M)=X_q(M)_{gen}/X_q(M)_{det},$$
(9)
with $`X_q=F_q^C`$ or $`K_q^C`$ defined by Eqs. (2) and (4).
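The bin-by-bin correction itself is a simple elementwise operation once the generator- and detector-level moments have been evaluated on the same grid of $`M`$ values; a sketch:

```python
import numpy as np

def correct_moments(X_data, X_gen, X_det):
    """Apply the MC correction of Eq. (9): X_corr(M) = U_q(M) X_data(M)."""
    U_q = np.asarray(X_gen) / np.asarray(X_det)
    return np.asarray(X_data) * U_q
```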
The correction factors $`U_q`$ have also been computed using the Herwig event sample. For both Jetset and Herwig MC generators, the correction factor tends to be less than unity and rises with order $`q`$ as has been observed earlier . The difference between the $`X_q`$ quantities calculated with the Jetset and Herwig generated samples has been used in the estimation of the systematic uncertainties. The statistical uncertainties on the Jetset $`U_q`$ factor have also been incorporated into the systematic uncertainties in this analysis.
Another contribution to the systematic uncertainties has been evaluated by changing the above track and event selection criteria. The moments have been calculated from the data sample of about two million events with the following variations in the selection criteria: the first measured point was required to be closer than 40 cm to the beam axis, the requirement of the transverse momentum with respect to the beam axis was removed, the total momentum was required to be less than 40 GeV/$`c`$, the charged track polar angle acceptance was changed to $`|\mathrm{cos}\theta |<0.7`$, and the requirement on the mean energy loss was removed. The deviations of the moments with these changes modify the results by no more than a few percent and do not influence their behaviour.
The total errors have been calculated by adding the systematic and statistical uncertainties in quadrature. The systematic uncertainties are shown separately in the figures (except in those given in Section 4.3) and are dominant at large $`M`$. Statistical uncertainties based on the MC samples are similar to those obtained from the data.
It was verified that the results do not appreciably change if one removes from the analysis those events which were taken at energies off the Z<sup>0</sup> peak energy.
## 4 Results and discussion
### 4.1 The measurements
In this section we present the factorial moments $`F_q^C`$ and the cumulants $`K_q^C`$, defined in Eqs. (2) and (4), respectively, and compute them in the $`y`$$`\times `$$`\mathrm{\Phi }`$$`\times `$$`p_T`$ phase space and its projections. The moments are shown in Figs. 2 to 4 and the cumulants are given in Figs. 5 to 7 as a function of $`M`$, the number of bins in one, two and three dimensions. Both the factorial moments and the cumulants are measured up to the fifth order. The second-order cumulants are not shown since their behaviour is determined directly by the second-order factorial moments, as can be seen from Eqs. (6). The higher-order cumulants behave differently from the same-order moments because the cumulants contain combinations of lower-order fluctuations, which are taken into account in Eq. (4) by means of the Mueller moments as in Eq. (5).
Overall, the factorial moments and the cumulants depend very similarly on the bin width. The factorial moments obey the scaling-law of Eq. (3) in almost all the phase-space projections (except in the $`p_T`$-subspace), and the cumulants show analogous intermittent behaviour up to high orders. This behaviour becomes more pronounced when the analysis is extended to several dimensions where, in contrast to the one-dimensional case, no saturation with decreasing bin size is observed. The largest moments and cumulants and the largest intermittency slopes are found in the $`y`$$`\times `$$`\mathrm{\Phi }`$ subspace.
The saturation at small bin sizes observed in the one-dimensional analysis in rapidity and azimuthal angle (Figs. 2 and 5) agrees with that predicted by QCD and is a consequence of the transition to the regime where the running of the QCD coupling $`\alpha _s`$ comes into play. The dynamics governing particle density fluctuations in small bins occurs at low energy scales with larger values of $`\alpha _s`$. One can see that the moments and the cumulants actually have steep linear rises at $`M\lesssim 20`$ ($`\delta y\gtrsim 0.2`$, $`\delta \mathrm{\Phi }\gtrsim 18^{\circ }`$) and level off at large values of $`M`$. Figs. 2 and 5 show that the transition point shifts to larger $`M`$ (smaller bin sizes) as the order $`q`$ of the moments increases, also in accordance with QCD calculations.
The enhancement of the intermittency effect in higher dimensions, as shown in Figs. 3 and 4 for the factorial moments and in Figs. 6 and 7 for the cumulants, has been attributed to “shadowing”, i.e., studies in lower dimensions lose information due to projection . A model of emission of strongly collimated particles, clustered in both rapidity and azimuth, has been suggested in Ref. . In the framework of this so-called “pencil-jet” model, a strong increase of the factorial moments is expected in the $`y`$$`\times `$$`\mathrm{\Phi }`$ subspace compared to $`y`$ or $`\mathrm{\Phi }`$ separately. Although formation of such jet structures has been confirmed experimentally, the increase has been found to be much less than that predicted .
Jet structure also explains the behaviour observed in the $`y`$$`\times `$$`p_T`$ and $`\mathrm{\Phi }`$$`\times `$$`p_T`$ plots of Figs. 3 and 6. Indeed, the moments and the cumulants in $`y`$$`\times `$$`p_T`$ and $`\mathrm{\Phi }`$$`\times `$$`p_T`$ for the same total $`M`$ are not found to increase as compared to those in $`y`$ and $`\mathrm{\Phi }`$, respectively, since there is no intermittency in the transverse momentum subspace. Similarly, the moments in $`y`$$`\times `$$`\mathrm{\Phi }`$ are approximately equal to those in $`y`$$`\times `$$`\mathrm{\Phi }`$$`\times `$$`p_T`$ at the same total $`M`$. At the same time, the intermittency is seen for larger $`M`$ in higher dimensions, indicating the presence of the dynamical fluctuations and correlations in additional dimensions.
In Ref. it was claimed that the increase of the factorial moments with the addition of new dimensions is a trivial consequence of a phase-space factor and has nothing to do with the jet formation mechanism. Our observations show that this is true if one compares the moments at the same abscissa $`M^{1/d}`$, where $`d`$ is the dimension of the subspace. However, comparing multidimensional moments (and the cumulants) at the same total number of bins, one obtains contributions of self-similar structure from different projections, as is the case with the jet-structure contributions to the $`y`$$`\times `$$`\mathrm{\Phi }`$ moments.
The values of the cumulants are positive in most of the cases, indicating that multiparticle dynamical correlations are indeed significantly present in the particle-production process. Large values of the cumulants of order $`q\ge 4`$ are seen in $`y`$$`\times `$$`\mathrm{\Phi }`$ and $`y`$$`\times `$$`\mathrm{\Phi }`$$`\times `$$`p_T`$. Non-zero high-order cumulants are also found in rapidity and $`y`$$`\times `$$`p_T`$. This shows that the factorial moments at $`q`$=5, the highest order considered here, have important contributions from lower-order correlations, a point discussed in Section 4.3.
### 4.2 Comparison with the Monte Carlo models
In Figs. 2 through 7 the data are compared with the predictions of the Jetset and Herwig MC models. Both MC models describe the general behaviour of the factorial moments and cumulants and show significant positive multiparticle correlations.
The one-dimensional factorial moments (Fig. 2) and cumulants (Fig. 5) in rapidity and in azimuthal angle show that while the MC describe the data rather well at small $`M`$ (large bin sizes), the models tend to fall below the data, starting at intermediate $`M`$. The discrepancy rises with $`M`$ and with the order of the moments $`q`$. The saturation effect sets in earlier in the MC models than it does in the data. In the transverse momentum projection, the models show quite different behaviour. Jetset underestimates the moments and the cumulants, whereas Herwig strongly overestimates them. Figs. 3, 4, 6 and 7 show that there is a better agreement between the data and the MC in high dimensions.
From these comparisons one can conclude that both MC models reproduce the data well while neither of them is particularly preferred. The perturbative parton shower, on which both MC models are based, seems to play an important role in the origin of the dynamical fluctuations and correlations in e<sup>+</sup>e<sup>-</sup> annihilation. The observed differences between the two MC descriptions indicate that the last steps of the hadronization process are not described correctly . Contributions from additional mechanisms to the observed fluctuations and correlations are not excluded.
### 4.3 Contributions from multiparticle correlations
This section describes the contributions of genuine multiparticle correlations to dynamical fluctuations. To this end we compare in Figs. 8–12 the measured corrected factorial moments to the lower-order contributions, $`F_q^{C(p)}`$, calculated using Eqs. (6).
Fig. 8 shows the one-dimensional case. The fluctuations in transverse momentum are not shown since they do not exhibit intermittency behaviour (see Fig. 2). A significant contribution of high-order genuine correlations appears. Two-particle correlations in rapidity and in azimuthal angle are insufficient to explain the measured three-particle fluctuations. At $`q`$$`=`$$`4`$, four-particle correlations are also necessary. The importance of four-particle genuine correlations is also demonstrated by the five-particle fluctuations in the rapidity subspace, where the addition of the fourth-order cumulants becomes essential. The fifth-order moment study cannot be performed for the azimuthal angle variable because the non-uniformity of the $`\mathrm{\Phi }`$ spectrum leads to large values of the correction factor $`R_5`$ which makes the relations (6) inapplicable. The difference between the moments and the correlation contributions increases with decreasing bin size.
The genuine multiparticle contributions, also seen to be important in the two-dimensional $`y`$$`\times `$$`\mathrm{\Phi }`$ analysis, are shown in Fig. 9. The failure of the genuine two- and three-particle correlations to describe the four-particle dynamical fluctuations indicates a significant four-particle contribution in the observed high-order fluctuations. The need to account for higher-order correlations is visible also in $`F_5`$ for small bin-sizes. The comparison of $`F_3^C`$ and $`F_3^{C(p)}`$ for other two-dimensional subspaces, $`y`$$`\times `$$`p_T`$ and $`\mathrm{\Phi }`$$`\times `$$`p_T`$, is shown in Figs. 10 and 11 and also indicates a considerable contribution from three-particle genuine correlations. As in the one-dimensional case in $`\mathrm{\Phi }`$, the large $`R_q`$ factor for $`q`$$`>`$$`3`$ does not allow use of Eqs. (6) in these cases. The same is seen in the three-dimensional study (Fig. 12) for $`q`$$`=`$$`5`$, whereas for $`q<5`$, the contribution of multiparticle correlations up to the fourth order is well illustrated.
### 4.4 Comparison with other experiments
#### 4.4.1 Factorial moments in e<sup>+</sup>e<sup>-</sup> annihilation
Studies of intermittency in e<sup>+</sup>e<sup>-</sup> interactions have been carried out mainly in one dimension using the rapidity variable . The first three-dimensional analysis of factorial moments was performed by CELLO in Lorentz-invariant phase space. DELPHI has presented a three-dimensional analysis of intermittency in $`y`$$`\times `$$`\mathrm{\Phi }`$$`\times `$$`p_T`$ phase space and its projections, the same choice as in this paper. The values of the moments and their $`M`$-dependence, found in all these studies, are similar to those obtained here.
In one-dimensional rapidity and azimuthal angle the saturation of the factorial moments has been observed at the same $`M`$ in all investigations . The moments in two and three dimensions have been found to be larger and to have steeper intermittency slopes than those in one dimension. Jet evolution has been suggested as the main source of multiparticle fluctuations, similarly to our finding. However, the saturation shown by DELPHI for the two-dimensional factorial moments is not present in our analysis, due to the high statistics and the modification used to take into account the contents of small bins.
In agreement with recent e<sup>+</sup>e<sup>-</sup> studies , the MC models used here are found to describe the behaviour of the measured moments, although they underestimate their magnitudes.
#### 4.4.2 Factorial moments and cumulants in hadronic collisions
The factorial moments have also been investigated in $`y`$$`\times `$$`\mathrm{\Phi }`$$`\times `$$`p_T`$ phase space and in its projections in hadron-hadron collisions by NA22 .
In the NA22 $`p_T`$-subspace analysis, hadronic interactions have shown a visible intermittency effect, in contrast to its absence in e<sup>+</sup>e<sup>-</sup> annihilation. In $`\mathrm{\Phi }`$, on the other hand, one finds significant dynamical fluctuations in e<sup>+</sup>e<sup>-</sup> collisions, while in hadronic collisions the fluctuations are strongly suppressed by a statistical component. No saturation has been observed in $`y`$ and $`p_T`$ subspaces in hadronic collisions.
In several dimensions the factorial moments computed in hadronic interactions also differ from those in e<sup>+</sup>e<sup>-</sup> annihilation. In two dimensions, the largest moments are found to be in $`y`$$`\times `$$`p_T`$, and the intermittency effect is observed to be the strongest in $`y`$$`\times `$$`\mathrm{\Phi }`$. These moments show a faster increase (larger $`\phi _q`$) than those in one dimension, although their values are lower and saturate already at $`q`$$`=`$$`3`$. The three-dimensional moments show a strong increase with decreasing bin size, although their values are found to be closer to the two-dimensional moments. In contrast to the linear log-log plots of the scaling behaviour (3) observed in e<sup>+</sup>e<sup>-</sup> annihilation, three-dimensional factorial moments in hadronic collisions scale only approximately; they rise more quickly than the power law of Eq. (3). This difference is attributed to the difference in dynamics between soft and hard processes that leads to isotropic dynamical fluctuations in e<sup>+</sup>e<sup>-</sup> annihilation and to anisotropic ones in hadron-hadron collisions .
Large cumulant values are expected in e<sup>+</sup>e<sup>-</sup> annihilation due to a small number of very energetic particle sources, as was mentioned in the introduction. Indeed, the cumulants measured here are much higher than those found in hadronic interactions. Furthermore, they have non-zero values for higher orders, while in hadronic collisions they are consistent with zero at $`q`$$`>`$3. Also, the power-law increase in $`M`$ is seen to be stronger in the present data.
In hadronic interactions , both in one and several dimensions , the factorial moments were found to be basically composed of two-particle correlations. Our study shows significant contributions of correlations of orders $`q`$=3 and even 4 to the dynamical fluctuations in e<sup>+</sup>e<sup>-</sup> annihilation. The observed contributions increase with decreasing bin size, in agreement with trends seen in hadronic interactions.
## 5 Summary and Conclusions
A multidimensional study of local fluctuations and genuine multiparticle correlations in hadronic decays of the Z<sup>0</sup> is carried out with the four-million event sample of the OPAL data collected at the LEP collider. The sphericity axis is chosen as a reference axis, and the phase space is defined by the rapidity, azimuthal angle and transverse momentum variables. The analysis is based on the intermittency approach and represents an investigation of the normalized factorial moments and, for the first time in e<sup>+</sup>e<sup>-</sup> annihilation, the normalized cumulants. The quantities studied have been corrected to reduce the bias due to the non-uniform shapes of the single-particle distributions and have been modified to eliminate effects of finite statistics. The factorial moments and cumulants are computed up to the fifth order and down to very small bin sizes.
The factorial moments show an intermittency behaviour which is strongly enhanced as the dimension of the subspace increases from one to three. In the one-dimensional analysis, the intermittency signal is found to be larger in rapidity than in azimuthal angle and to vanish in transverse momentum. The moments in rapidity and in azimuthal angle saturate at intermediate bin sizes, in agreement with the QCD expectation for the transition to the running $`\alpha _s`$ coupling regime. No saturation is observed in two and three dimensions, a consequence of jet formation.
Our study of the cumulants shows that they have large positive values, indicating the existence of genuine multiparticle correlations. The cumulants are found to be much larger than those in hadronic interactions, suggesting an increase of the correlations with the decrease of the number of independent subprocesses present in the reaction. The cumulants are analysed in subspaces of different dimensions, and the genuine multiparticle correlations are found to be larger in higher dimensions. This study reveals genuine correlations up to four particles in one dimension and significant five-particle correlations in higher dimensions. Large higher-order correlations measured in the $`y`$$`\times `$$`\mathrm{\Phi }`$ two-dimensional projection confirm the jet structure of dense groups of particles. The cumulants show intermittency rises which are stronger than for the corresponding factorial moments.
The large statistics of the present analysis allows the observation of contributions of many-particle correlations to the measured dynamical fluctuations. Considerable contributions from correlations of up to four particles are observed in the expansion of the factorial moments in terms of cumulants. The contributions increase with decreasing bin size, reflecting the underlying self-similar dynamics.
The measurements are compared to Herwig 5.9 and Jetset 7.4 predictions. In general, these Monte Carlo models are found to reproduce the behaviour of the moments and the cumulants, while underestimating the measured values starting at intermediate bin sizes. High-order multiparticle correlations are found to be present in both models used. The observations confirm jet structure formation as an important contribution to the correlations, but other mechanisms of the hadronization process are possibly also relevant.
Acknowledgements
We particularly wish to thank the SL Division for the efficient operation of the LEP accelerator at all energies and for their continuing close cooperation with our experimental group. We thank our colleagues from CEA, DAPNIA/SPP, CE-Saclay for their efforts over the years on the time-of-flight and trigger systems which we continue to use. In addition to the support staff at our own institutions we are pleased to acknowledge the
Department of Energy, USA,
National Science Foundation, USA,
Particle Physics and Astronomy Research Council, UK,
Natural Sciences and Engineering Research Council, Canada,
Israel Science Foundation, administered by the Israel Academy of Science and Humanities,
Minerva Gesellschaft,
Benoziyo Center for High Energy Physics,
Japanese Ministry of Education, Science and Culture (the Monbusho) and a grant under the Monbusho International Science Research Program,
Japanese Society for the Promotion of Science (JSPS),
German Israeli Bi-national Science Foundation (GIF),
Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, Germany,
National Research Council of Canada,
Research Corporation, USA,
Hungarian Foundation for Scientific Research, OTKA T-016660, T023793 and OTKA F-023259.
# Broken Symmetries in Scanning Tunneling Images of Carbon Nanotubes
## Abstract
Scanning tunneling images of carbon nanotubes frequently show electron distributions which break the local sixfold symmetry of the graphene sheet. We present a theory of these images which relates these anisotropies to the off diagonal correlations in the single particle density matrix, and allows one to extract these correlations from the observed images. The theory is applied to images of the low energy states reflected at the end of a tube or by point defects, and to states propagating on defect free semiconducting tubes. The latter exhibit a novel switching of the anisotropy in the tunneling image with the sign of the tunneling bias.
Scanning tunneling microscopy and spectroscopy are powerful tools for studying the structural and electronic properties of carbon nanotubes at the atomic scale. Several experimental groups have reported tunneling images of isolated single wall carbon nanotubes and of tubes packed into bundles or “ropes”. In some cases these measurements have allowed a direct determination of the diameters and wrapping vectors for the tubes, and these observations, combined with scanning tunneling spectroscopy, have confirmed the idea that the semiconducting or conducting behavior of a tube is controlled by its wrapping vector.
However, STM images of these systems obtained at low bias voltages contain a number of surprising features. The images rarely display the full sixfold lattice symmetry even when the underlying graphene lattice is undistorted. Instead, these images frequently contain a broken symmetry in the form of “striped” patterns in which maxima in the electron density are observed in bond chains which spiral around the tube. In some images superlattice structures are present with a period commensurate with but larger than that of the underlying graphene sheet. Moreover, energy resolved images of short tubes show standing waves characteristic of individual eigenstates which also have a period longer than that of the graphene lattice.
In special cases broken translational or rotational symmetries obtained in an STM image can be attributed to asymmetries in the tunneling tip. In this Letter we point out that asymmetric images are expected even in an ideal tunneling experiment and contain important information about the low lying electronic states in these systems. The asymmetries are interference patterns which are sensitive to the coherence between the “forward” and “backward” moving electronic states propagating on the tube walls. This can arise from backscattering from tube ends, from various defects on the tube walls, and even from propagation in a translationally invariant potential on a semiconducting tube. We show that these interference patterns in the tunneling images are fingerprints which directly probe the off diagonal correlations in the single particle density matrix. We provide a theory for extracting these correlations from the observed images. This effect is illustrated with several examples of the tunneling densities of states calculated for defect free tubes and for tubes with point defects.
The low lying electronic states on a carbon nanotube are derived from the propagating states near the “Fermi points” of an isolated graphene sheet located at the Brillouin zone corners shown in the lower panel of Fig. 1. There are two inequivalent points which we will refer to as K = $`K_0`$ and K’=-$`K_0`$. Wrapping the graphene sheet into a cylinder requires that the electronic wavefunctions satisfy periodic boundary conditions $`\mathrm{\Psi }(\stackrel{}{r}+\stackrel{}{T}_{m,n})=\mathrm{\Psi }(\stackrel{}{r})`$ where $`\stackrel{}{T}_{m,n}=m\stackrel{}{\tau }_1+n\stackrel{}{\tau }_2`$ gives the wrapping vector around the tube circumference expressed in terms of the two primitive graphite translation vectors $`\stackrel{}{\tau }_1=(1/2,\sqrt{3}/2)`$ and $`\stackrel{}{\tau }_2=(1/2,-\sqrt{3}/2)`$. When $`\mathrm{mod}(m-n,3)=0`$ the Bloch waves exactly at the K and K’ points are allowed quantized waves on the tube. This leads to the metallic band structure in Fig. 1(a). In contrast, when $`\mathrm{mod}(m-n,3)=1,2`$ the allowed quantized waves do not intersect K and K’, which leads to a semiconducting gap in the electronic spectrum as shown in Fig. 1(b).
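The dependence of the electronic character on the wrapping indices can be summarized in a few lines of Python; this is a sketch for illustration only, and it also anticipates the chiral index $`s=(-1)^{\mathrm{mod}(m-n,3)}`$ introduced later in the text.

```python
def tube_character(m, n):
    """Classify an [m, n] nanotube from its wrapping indices."""
    r = (m - n) % 3
    if r == 0:
        return "metallic", 0            # K and K' waves are allowed on the tube
    return "semiconducting", (-1) ** r  # chiral index s = (-1)^mod(m-n, 3)
```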
The spectrum of Fig. 1(a) describes the propagating modes of a defect free conducting tube. However, these waves can be reflected from tube ends or from defects along the tube. The interference between the forward and backward moving waves produces a spatial modulation of the charge density. Since the carbon nanotubes have two forward and backward moving channels (associated with the K and K’ points shown in Fig. 1), the resulting interference patterns have a particularly rich structure. The coherent superposition of the forward and backward moving components of the scattering states produces off diagonal correlations in the density matrix at energy $`E`$,
$$\rho _{\alpha \beta }(E)=\psi _\alpha ^{\dagger }\delta (E-\mathcal{H})\psi _\beta .$$
(1)
where $`\mathcal{H}`$ is the Hamiltonian and $`\alpha `$ is a four component index specifying the left and right moving bands at the K and K’ points.
To derive the local tunneling density of states from the density matrix in equation (1) we represent the Bloch waves $`\psi _\alpha (\stackrel{}{r})`$ as a sum of atomic orbitals centered on sites $`\stackrel{}{\tau }_m`$ in cells $`\stackrel{}{T}_n`$
$$\psi _\alpha (\stackrel{}{r})=\underset{m,n}{\sum }\gamma _{m\alpha }e^{i\stackrel{}{k}_\alpha \cdot \stackrel{}{T}_n}f(\stackrel{}{r}-\stackrel{}{\tau }_m-\stackrel{}{T}_n).$$
(2)
where $`\gamma _{m\alpha }`$ are the amplitudes for the Bloch state on sites $`m`$. These Bloch waves can be represented as an expansion in reciprocal lattice vectors
$$\psi _\alpha =\underset{m,n}{\sum }\gamma _{m\alpha }e^{-i(\stackrel{}{k}_\alpha +\stackrel{}{G}_n)\cdot \stackrel{}{\tau }_m}F(|\stackrel{}{k}_\alpha +\stackrel{}{G}_n|)e^{i(\stackrel{}{k}_\alpha +\stackrel{}{G}_n)\cdot \stackrel{}{r}}.$$
(3)
For tunneling from a tip which is smooth on a scale of the atomic spacing, $`F(q)`$ decreases rapidly for large $`q`$. In the following we will truncate this expansion, keeping only $`\stackrel{}{G}`$’s in the lowest “star” of $`\stackrel{}{k}_\alpha +\stackrel{}{G}`$. This becomes exact when the STM tip is sufficiently high above the surface. Including higher Fourier components does not significantly change our conclusions. We also assume the tip is isotropic, so within the first star $`F(|\stackrel{}{k}+\stackrel{}{G}|)`$ is independent of $`\stackrel{}{G}`$.
The local density of states at energy $`E`$ can be expressed
$$\rho (\stackrel{}{r},\mathrm{E})=\psi _\alpha ^{*}(\stackrel{}{r})\rho _{\alpha \beta }(E)\psi _\beta (\stackrel{}{r}).$$
(4)
It is useful to characterize the tunneling image in terms of its longest wavelength Fourier components. Coupling between bands at the same K point leads to images with the periodicity of the lattice. These are described by Fourier components in the first star of reciprocal lattice vectors $`\stackrel{}{q}_{1i}=\stackrel{}{G}_i`$, indicated in Fig. 1. On the other hand coupling between the two K points leads to modulations with a $`\sqrt{3}\times \sqrt{3}`$ superlattice, which are described by the “$`\sqrt{3}`$” star, $`\stackrel{}{q}_{\sqrt{3}i}=\stackrel{}{K}_i`$. The discussion is further simplified by projecting onto the “triangular harmonics” defined in each star to be
$$\rho _{pm}=\frac{1}{3}\underset{n=0}{\overset{2}{\sum }}e^{2\pi imn/3}\int d^2r\,e^{i\stackrel{}{q}_{pn}\cdot \stackrel{}{r}}\rho (\stackrel{}{r})$$
(5)
with $`p=1`$ or $`\sqrt{3}`$ and $`m=-1,0,1`$. Combining (1)-(5) yields a simple expression for the Fourier components $`\rho _{pm}(E)`$ in terms of the density matrix $`\rho _{\alpha \beta }(E)`$.
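As an illustration of how Eq. (5) would be applied to a sampled image, the projection can be evaluated as a discrete sum. The sketch below is not part of the original derivation; it assumes the density has been sampled at a set of points and that the three wavevectors of the chosen star are supplied explicitly, and it returns the harmonic up to an overall area normalization.

```python
import numpy as np

def triangular_harmonic(rho, xy, q_star, m):
    """Discrete estimate of rho_pm, Eq. (5).

    rho    -- density values at N sample points, shape (N,)
    xy     -- sample positions, shape (N, 2)
    q_star -- the three wavevectors q_pn of the star, shape (3, 2)
    m      -- triangular harmonic index: -1, 0 or +1
    """
    comps = np.array([np.mean(rho * np.exp(1j * xy @ q)) for q in q_star])
    phases = np.exp(2j * np.pi * m * np.arange(3) / 3)
    return np.mean(phases * comps)
```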
We first consider the effects of reflection either from the end of the tube or from an impurity. The reflection of waves associated with the K and K’ points is characterized by a $`2\times 2`$ matrix of complex reflection amplitudes. This matrix contains three independent amplitudes labeled $`r_a`$, $`r_b`$ and $`r_m`$ in Fig. 1(a). These describe respectively large momentum scattering $`K\rightarrow K^{\prime }`$, $`K^{\prime }\rightarrow K`$ and small momentum scattering at the $`K`$ and $`K^{\prime }`$ points. The equality between the two amplitudes described by $`r_m`$ follows from time reversal symmetry. These reflection amplitudes depend on the detailed structure of the scatterer, although for special high symmetry scatterers their form can be constrained by symmetry. In the following, we consider an infinitely long tube, so that the density matrix is diagonal in the basis of scattering states, which are a superposition of incoming and reflected waves.
The large momentum scattering amplitudes $`r_a`$ and $`r_b`$ illustrated in Fig. 1 produce a modulation of the TDOS in the $`\sqrt{3}`$ star. For a conducting tube we find that the TDOS measured at energy E and at a distance x from a point scatterer has the Fourier coefficients
$$\begin{array}{ccc}\hfill \rho _{\sqrt{3}0}(E)& =& \frac{1}{2}N(E)(r_be^{2iQx}-r_a^{*}e^{-2iQx})\hfill \\ & & \\ \hfill \rho _{\sqrt{3}\pm 1}(E)& =& \frac{1}{2}N(E)e^{\pm i\theta }(r_be^{2iQx}+r_a^{*}e^{-2iQx})\hfill \end{array}$$
(6)
where $`N(E)`$ is the density of states, $`Q=E/\mathrm{\hbar }v_F`$, $`v_F`$ is the Fermi velocity and $`\theta `$ is the chiral angle which orients the zigzag bond direction with respect to the tube axis (i.e. $`\theta =0`$ defines a tube with an armchair wrapping). The dependence of the TDOS on $`r_a`$ and $`r_b`$ is most clearly seen for tunneling at low bias ($`E\rightarrow 0`$). The $`Q`$ dependence in equation (6) arises from the fact that the scattered states at nonzero energy are not located precisely at the Fermi points.
As an example, in Fig. 2 we display the calculated TDOS at E=0 for the values $`r_a=r_b=1`$ on an armchair tube. The scattering amplitudes produce a $`\sqrt{3}\times \sqrt{3}`$ modulation of the tunneling image in which the bond charges are enhanced in a superlattice of bonds oriented along the circumferential direction of the tube. Similar $`\sqrt{3}\times \sqrt{3}`$ modulations occur in the presence of impurities on the surface of graphite. Those patterns follow from the two dimensional scattering between the $`K`$ and $`K^{\prime }`$ points in the graphite plane.
The small momentum backscattering amplitudes $`r_m`$ produce cell periodic modulations of the TDOS which can nonetheless break the rotational symmetry of the image. These effects are produced by a modulation of the Fourier components of the TDOS in the first star of reciprocal lattice vectors $`\stackrel{}{G}_n`$ shown in Fig. 1. Using the expansion in triangular harmonics in equation (5) we find
$$\begin{array}{ccc}\hfill \rho _{10}(E)& =& N(E)(1+i\sqrt{3}\mathrm{Re}[r_me^{2iQx}])\hfill \\ & & \\ \hfill \rho _{1\pm 1}(E)& =& iN(E)e^{\pm i\theta }\mathrm{Im}[r_me^{2iQx}].\hfill \end{array}$$
(7)
Interesting structure in the TDOS is produced by the imaginary part of $`\rho _{1\pm 1}`$. This generally occurs for any chiral tube with $`\theta 0`$, but it can also occur for a nonchiral armchair tube, $`\theta =0`$, when the $`q0`$ backscattering amplitude $`r_m`$ develops a nonvanishing imaginary part. This leads to the interference pattern shown in Fig. 2(b), in which a bond density wave is deflected into a spiral pattern around the axis of the armchair tube. We find that a reflection amplitude with this symmetry can be produced by any point defect or end cap which breaks the two sublattice symmetry of the underlying graphene sheet.
Semiconducting tubes with $`\mathrm{mod}(m-n,3)\ne 0`$ have a gap in the low energy spectrum as shown in the middle panel of Fig. 1. This gap arises from a coherent superposition of forward and backward moving components which is required for the wavefunction to satisfy periodic boundary conditions around the tube waist. Exactly at the band edges one obtains a perfect standing wave which contains an equal admixture of forward and backward propagating waves. Note however that the backscattering responsible for these states does not result from reflection from an isolated point defect, but instead arises from the presence of a “mass” operator in the low energy Hamiltonian which preserves the lattice translational symmetry, and breaks its rotational symmetry. Remarkably, the symmetry of this “mass” term depends sensitively on the wrapping vector and allows one to distinguish $`\mathrm{mod}(m-n,3)=1`$ from $`\mathrm{mod}(m-n,3)=2`$ tubes. To do this we define the chiral index $`s=(-1)^{\mathrm{mod}(m-n,3)}`$. Then we find that the tunneling density of states for a semiconducting tube with chiral index $`s`$, chiral angle $`\theta `$ and gap 2$`\mathrm{\Delta }`$ is given by
$$\begin{array}{ccc}\hfill \rho _{10}(E)& =& N(E)\hfill \\ & & \\ \hfill \rho _{1\pm 1}(E)& =& \pm isN(E)e^{\pm i\theta }\mathrm{\Delta }/E.\hfill \end{array}$$
(8)
In Fig. 3 we display tunneling densities of states calculated for the band edge states $`(E=\pm \mathrm{\Delta })`$ on tubes with wrapping indices \[m,n\] = , and (zigzag). These three tubes have a common chiral index $`s`$=1 and chiral angles which vary between $`8.6^{\circ }`$ and $`30^{\circ }`$. The band edge states imaged at positive and negative bias have complementary structures; that is, the superposition of the two images gives an image with perfect sixfold symmetry although each image separately breaks the sixfold symmetry. For chiral angles near $`\theta \approx 0`$ the tunneling images consist of a series of complementary spiral stripes. However the TDOS changes smoothly as a function of chiral angle, so that as one approaches the zigzag structure the negative energy states are enhanced in a pattern of isolated bonds, while the density for the positive energy states is confined to a connected zigzag bond chain. Note that the symmetry breaking terms in $`\rho _{1\pm 1}`$ depend on the product of the chiral index and the energy. Thus for a tube which has a chiral index -1 the symmetries of the positive and negative energy solutions are reversed.
Fig. 3 shows that for semiconducting tubes the tunneling images obtained near the band edges break the local point symmetry of the graphene sheet, and that for a given tube the sign of the symmetry breaking is switched by reversing the bias of the tunneling tip. Observation of such a reversal in the tunneling image would provide a striking identification of the chiral index $`s`$ of a semiconducting tube, even when the wrapping indices \[n,m\] can not be resolved. Moreover this reversal would clearly distinguish this effect from a tip artifact. We should note that several experiments have already suggested that a tube in contact with a conducting substrate can be doped “p” type and so one needs to correct such a measurement for the offset in the chemical potential for any such unintentionally “doped” tube. One also needs to ensure that the tunneling is carried out in an energy window where only one azimuthal subband is accessible, since the symmetries of the tube eigenstates alternate in successive conduction or valence subbands, tending to suppress the anisotropy in the tunneling images.
The data presented in Figures 2 and 3 demonstrate that scanning tunneling images of carbon nanotubes are very sensitive to the nodal structure of the underlying electronic states. Indeed, at low energy these images probe the internal structure of individual electronic eigenstates (or at best a small number of them) on the tube surface. The appearance of these broken symmetry patterns in a tunneling image does not imply large structural perturbations to the covalent graphene network. Indeed the data presented here are calculated for structures which are all unstrained and perfectly locally sixfold symmetric. These density patterns do require coherence between forward and backward propagating waves. In fact a quantitative analysis of these images can be used to extract the off diagonal correlations in the density matrix in equation (1) responsible for these patterns. These data can then be used to extract the scattering matrix, which characterizes the internal structure and symmetry of various scattering centers, and to identify the wrapping vector for semiconducting tubes. Finally, we remark that the off diagonal correlations responsible for these patterns could also arise due to interactions with a substrate, interactions between tubes in a rope or due to electron-electron interactions.
It is a pleasure to thank W. Clauss and A.T. Johnson for helpful discussions of their STM images. This work has been supported by the NSF under grants DMR 95-05425, DMR 96-32598 and DMR 98-02560 and by the DOE under grant DE-FG02-84ER45118.
# On the utility and implication of convex effective potential for the Higgs particles
## Acknowledgement
The author is grateful to Dr. G.-h. Yang, Prof. G-j Ni and Prof. J-j Xu for helpful discussions.
# HST/NICMOS Imaging of Disks and Envelopes Around Very Young Stars<sup>1</sup><sup>1</sup>1Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
## 1 Introduction
Theoretical and observational investigations of star formation suggest that the formation process for low mass stars results naturally in the formation of a circumstellar disk which may then evolve into a planetary system. The current paradigm for the birth of single stars (Adams, Lada, & Shu 1987; Strom 1994, etc.) includes the following phases:
* Within a dusty molecular infall envelope, a star + nebular disk form. Most of the material destined for the star falls first onto the disk, due to the non-zero angular momentum of the infalling gas. As gas accretes onto the star, an accretion-driven stellar or disk wind develops which begins to clear envelope gas away from the rotational poles of the system. This is the “embedded young stellar object” (YSO) phase with a “Class I” spectral energy distribution (SED) characterized by a spectrum which rises beyond 2 $`\mathrm{\mu m}`$ because almost all the light from the star is absorbed and re-radiated by circumstellar dust at long wavelengths. In the most embedded objects, most of the scattered light may come from walls of the outflow cavity in the envelope, giving a “cometary” appearance to the circumstellar reflection nebulosity. Some extreme “Class 0” objects are completely invisible at wavelengths shorter than 25 microns and probably represent the early stages of the embedded YSO phase (André, Ward-Thompson, & Barsony 1993). This phase is thought to last for a few times 10<sup>5</sup> yr.
* The infalling envelope disperses, leaving an optically visible “classical” T Tauri star (CTTS) with a circumstellar disk. Accretion through the disk slows, but continues as the star gains the final few percent of its mass. This “Class II” phase lasts from 10<sup>6</sup> to 10<sup>7</sup> yr and is characterized by a spectrum which has near-to-far IR emission in excess of photospheric values, but is flat or falling longward of 2 $`\mathrm{\mu m}`$. Long wavelength emission is diminished since the solid angle subtended by the disk is smaller than that of the envelope, so the disk intercepts and reprocesses less of the radiation from accretion processes and the stellar photosphere. As the envelope disperses, light reflected from the top and/or bottom surfaces of an optically thick disk will dominate the nebulosity, which will appear more “flattened” than for Class I sources.
* Accretion through the disk subsides due to lack of replenishment or disk gaps created by planet formation. The disk becomes optically thin, and the system evolves into a “weak-line” T Tauri star (WTTS). Its “Class III” spectrum is basically that of a stellar photosphere with a small amount of infrared excess at mid and far IR wavelengths if an optically thin protoplanetary or planetary debris disk remains. These sources would look like the optically thin disk around Beta Pictoris (Backman $`\&`$ Paresce 1993) if edge-on, with starlight dominating a thin, bright, elongated nebula. Unfortunately, since the IR emission from these disks is below the IRAS detection limits for nearby star-formation regions, there are currently only loose constraints on the timescale for dispersal of optically thin or “debris” disks.
This picture provides a logical context for understanding the ensemble of infrared SEDs observed for young stars (Adams, Lada, $`\&`$ Shu 1987), as well as the sequence of millimeter continuum brightnesses among YSOs (Saraceno et al. 1996; André, Ward-Thompson, & Barsony 1993). However, there is as yet little direct observational confirmation at spatial scales which distinguish the morphology of the circumstellar material. In particular, empirical information is lacking regarding: (i) the large-scale geometry and distribution of material in infalling envelopes; (ii) the geometry and extent of wind-created cavities; (iii) the large-scale distribution of material within the disks. Hubble Space Telescope (HST) optical (WFPC2) and near-infrared (NICMOS) observations potentially enable us for the first time to examine the morphology of circumstellar environments during all evolutionary stages with spatial resolution of 10 - 30 AU. At this resolution, we can readily resolve and quantify structure in envelopes (expected size $`\sim `$ 1000 AU) and infer information on the large scale structure of disks (which are expected to have sizes spanning the range 10 to several hundred AU).
Ground-based observations have revealed that many Class I young stellar objects are nebulous in the optical and near infrared. Tamura et al. (1991) found that Class I objects which excite outflows are almost always associated with compact and/or extended near-infrared nebulae. The observations of Kenyon et al. (1993) revealed that Class I YSOs in the Taurus star-formation region (distance 140 pc; Kenyon, Dobrzycka, & Hartmann 1994) often display a NIR cometary morphology which can be fit by models of a flattened envelope with a polar outflow cavity. Many of the Taurus Class I sources are also highly polarized, indicating the dominance of scattered light at short wavelengths in these systems (Whitney, Kenyon, $`\&`$ Gomez 1997). Lucas & Roche (1997) were able to characterize the structure of several Taurus Class I sources on subarcsecond scales and found that these sources were completely nebulous at 1 $`\mathrm{\mu m}`$. A recent study by Kenyon et al. (1998) detected about half of the Taurus Class I sources in an optical spectroscopic survey using the Palomar 5-meter and the MMT, during which they determined spectral types for several of the central sources.
Previous HST observations of young stellar objects have demonstrated that high angular resolution imaging of circumstellar morphology can make significant contributions to the study of young stellar object disks and envelopes. HH 30 has been revealed to be a textbook validation of the circumstellar disk/wind paradigm (Burrows et al. 1996), with bipolar reflection nebulae separated by an optically thick flared absorption disk 450 AU in diameter and an extremely narrow jet emanating from within 30 AU of the central star. HST/WFPC2 imaging has also shown that HL Tauri, previously thought to be a directly visible classical T Tauri star, is actually an embedded system seen entirely by scattered light at optical wavelengths (Stapelfeldt et al. 1995a).
Our target sample consists of six established and probable Class I young stellar objects. The objects include three low luminosity IRAS point sources (IRAS 04016+2610, IRAS 04248+2612, IRAS 04302+2247) and three Herbig-Haro jet exciting stars (DG Tau B, Haro 6-5B, CoKu Tau/1). The three IRAS stars are Class I infrared sources. The SED classification of the remaining sources is uncertain because they are confused with bright nearby YSOs in the large IRAS beams. These objects span a considerable range in millimeter continuum flux, which is listed in Table 1. Millimeter continuum flux is interpreted as optically thin blackbody emission from the dust around young stellar objects, and the amount of flux is related to the amount of circumstellar material (Beckwith et al. 1990; Osterloh $`\&`$ Beckwith 1995). The span of millimeter continuum fluxes for our sample objects corresponds to more than an order of magnitude in circumstellar masses, from about 10<sup>-2</sup> M for F<sub>1mm</sub> $`\sim `$ 200 mJy to $`\sim `$ 10<sup>-3</sup> M for 10 mJy (Beckwith et al. 1990). Of the six objects in our sample, four have unresolved continuum peaks at 1.3 mm as mapped by millimeter interferometry, indicating the presence of an unresolved dust disk (IRAS 04302+2247, Padgett et al. 1999; DG Tau B, Stapelfeldt et al. 1999; Haro 6-5B, Dutrey et al. 1996; IRAS 04016+2610, Hogerheijde et al. 1997). The two remaining YSOs (CoKu Tau/1, IRAS 04248+2612) have only been mapped at millimeter wavelengths with relatively large (12″) beams; however, they have the lowest circumstellar masses in any case. In order to facilitate modelling at multiple wavelengths, we chose objects for which HST/WFPC2 observations were either planned or already taken. The pre-existing WFPC2 data enabled us to select several objects with known close-to-edge-on “disk” orientations. As predicted by Bastien & Menard (1990) and Whitney & Hartmann (1992), and observationally demonstrated by Burrows et al. (1996), the visibility of optically thick circumstellar disks is maximized when the disk material itself occults the star, improving the contrast of the nebulosity relative to the stellar point source. The low luminosity jet source CoKu Tau/1 was chosen on the basis of its faintness at 2 microns (Stapelfeldt et al. 1997b) as well as the low radial velocity of its jet, indicating that its optical outflow is nearly in the plane of the sky (Eislöffel & Mundt 1998). Detailed comparison and multiwavelength modeling of HST/WFPC2 and HST/NICMOS images of some of these YSOs will be presented in future papers. The current paper attempts to characterize the morphology of the observed Taurus YSOs at a scale of tens of AU at wavelengths from 1.1 to 2 $`\mathrm{\mu m}`$.
## 2 Observations and Data Analysis
### 2.1 HST/NICMOS observations
The observations have been obtained with HST/NICMOS (see Thompson et al. 1998 for a description of the NICMOS instrument) between August and December 1997. We used NIC2 and the filters F110W, F160W, F187W, and F205W. Exposure times in individual bands are shown in Table 2. Each target was imaged twice in each band on different portions of the NICMOS array, using the predefined spiral dither pattern. The two-step dither pattern allowed us to compensate for bad pixels/columns and the area blocked by the coronographic mask.
### 2.2 Re-reduction of NICMOS data
The raw data were re-reduced using the IRAF/STSDAS package CALNICA V3.1 and the latest set of reference files. Pixel masks based on the template bad pixel masks made available through STScI were created for each individual image and edited to include the actual position of the coronagraphic spot, transient bad pixels, and cosmic ray events which had not been detected and cleared out by CALNICA. The “photometrically challenged column #128” (Bergeron & Skinner 1997) was masked as well.
The offsets between the two subsets of each association were computed by cross-correlating the individual subsets. Using the IRAF/STSDAS package DRIZZLE (Fruchter & Hook 1997), the subsets were then re-mapped into one frame with the same pixel scale, but taking into account subpixel shifts between the individual frames, and the information in the bad pixel masks.
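The offset determination itself is a generic cross-correlation step. The sketch below is our schematic of that step, not the STSDAS task: it locates the shift between two dithered frames by FFT cross-correlation and refines it to subpixel precision with a parabolic fit around the correlation peak. The resulting offsets would then be passed to the drizzle resampling together with the bad pixel masks.

```python
import numpy as np

def xcorr_offset(ref, img):
    """(dy, dx) of `img` relative to `ref` from the peak of their FFT
    cross-correlation, refined to subpixel accuracy by fitting a
    parabola through the peak and its two neighbors (assumes the
    peak does not fall on the array edge)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    corr = np.fft.fftshift(corr)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c, i):                    # 1-D parabolic interpolation
        y0, y1, y2 = c[i - 1], c[i], c[i + 1]
        return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

    dy = refine(corr[:, ix], iy) - ref.shape[0] // 2
    dx = refine(corr[iy, :], ix) - ref.shape[1] // 2
    return dy, dx
```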
### 2.3 Deconvolution of images
The NICMOS-PSF, although well defined, has very extended wings. In order to improve the contrast of our images, we decided to deconvolve them with the aim of removing (or at least attenuating) the extended PSF features. After remapping, the images were rebinned by a factor of 2 in X and Y, and deconvolved with a synthetic point-spread function (PSF). The PSFs were computed with TINYTIM V4.4 (Krist 1997) and oversampled by a factor of 2. Because of finite sampling, deconvolution of point sources superposed on a diffuse bright background in general leads to “ringing” artifacts, unless one uses a narrower PSF (Lucy 1990, Snyder 1990). We therefore first deconvolved the TINYTIM PSF itself with a Gaussian. The FWHM of the Gaussian was chosen to match the actual sampling of our data.
The deconvolution of the data was done within IDL, using an iterative Poisson maximum likelihood algorithm based on the Richardson-Lucy scheme (Richardson 1972, Lucy 1974) and described in detail by Blecha & Richard (1989). The iteration was stopped after 5 iterations in F110W, 8-10 iterations in F160W, 10-15 iterations in F187W, and 15-20 iterations in F205W. The deconvolution reduced the flux in the extended PSF features by about a factor of 3, and gave the deconvolved images in the various filters approximately the same spatial resolution.
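For readers unfamiliar with the algorithm, the core Richardson-Lucy update is compact enough to show explicitly. This is a minimal Python sketch, not the IDL implementation of Blecha & Richard (1989), and it omits the bad-pixel weighting and the Gaussian-narrowed PSF discussed above.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10):
    """Iterative Poisson maximum-likelihood deconvolution:
    f <- f * [ (g / (f conv psf)) correlated with psf ]."""
    psf = psf / psf.sum()                    # PSF must be normalized
    psf_flip = psf[::-1, ::-1]               # correlation = convolution w/ flip
    f = np.full(image.shape, image.mean())   # flat first guess
    for _ in range(n_iter):                  # 5-20 iterations, filter-dependent
        model = np.maximum(fftconvolve(f, psf, mode="same"), 1e-12)
        f *= fftconvolve(image / model, psf_flip, mode="same")
    return f
```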
### 2.4 SYNPHOT flux calibration
To estimate the brightness of our sources, we calculated the count rates within an aperture of 3$`\stackrel{}{\mathrm{.}}`$8 radius (50 pixel). Sky/background values were measured on areas of the NICMOS frames which were free of diffuse emission.
Flux calibration was computed in an iterative approach, using the IRAF/STSDAS package SYNPHOT. This step was necessary, as the photometric header keywords provided by the NICMOS calibration pipeline assume a spectral energy distribution with a constant flux per unit wavelength. The spectral energy distribution of our sources, in particular those of the three class I sources (IRAS 04016+2610, IRAS 04248+2612, and IRAS 04302+2247), deviates significantly from this assumption. Using IDL and IRAF, we first computed a model spectral energy distribution, based on the photometric header keywords. We then ran SYNPHOT for the model SED and computed a new set of photometric conversion factors. This allowed us to create a revised model SED and to derive improved photometric conversion factors. The iterative procedure was repeated until the flux values in the individual filters changed by less than 2% in consecutive iterations.
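The structure of this iteration is simple enough to summarize in code. Only the convergence loop below reflects the procedure described above; `synphot_conversion_factors` is a hypothetical stand-in for the actual SYNPHOT run on the current model SED, whose interface we do not reproduce here.

```python
def calibrate_iteratively(count_rates, pipeline_factors,
                          synphot_conversion_factors, tol=0.02):
    """Iterate model SED -> conversion factors -> fluxes until every
    filter changes by less than `tol` (2%) between iterations.

    `count_rates` and the factor dicts are keyed by filter name; the
    starting factors are the pipeline values for a flat F_lambda SED."""
    fluxes = {f: count_rates[f] * pipeline_factors[f] for f in count_rates}
    while True:
        factors = synphot_conversion_factors(fluxes)   # new model SED
        new = {f: count_rates[f] * factors[f] for f in count_rates}
        if all(abs(new[f] / fluxes[f] - 1) < tol for f in fluxes):
            return new
        fluxes = new
```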
Depending on the actual shape of the spectral energy distribution, the resulting flux values in individual filters are between 25% and 180% of the initial estimates as derived from the pipeline header keywords. Aperture photometry of the sources is reported in Table 3. The quoted uncertainties are statistical errors. Systematic errors in the flux calibration (due to the unknown shape of the spectral energy distribution below 1$`\mu `$m and above 2$`\mu `$m) amount to an additional uncertainty of 10%–15%.
### 2.5 Photometry of Point Sources and Binaries
Magnitudes of point sources were determined using a 0$`\stackrel{}{\mathrm{.}}`$5 aperture, which excludes most of the extended, diffuse emission. The separation, position angle, and brightness ratio of the close binary sources were determined by least squares fitting based on Gaussian PSFs (see Brandner et al. 1996 for details). Point source photometry is shown in Table 4. The quoted uncertainties are statistical errors. For the binary companions, deviations of the actual PSF from a Gaussian result in an additional uncertainty of ≈10%.
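A least-squares decomposition of this kind can be sketched in a few lines. The two-component circular-Gaussian model below is our illustrative choice and is not necessarily identical to the Brandner et al. (1996) implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def two_gaussians(p, x, y):
    """Two circular Gaussians: (amplitude, x0, y0) per star, shared width."""
    a1, x1, y1, a2, x2, y2, sig = p
    g = lambda a, xc, yc: a * np.exp(-((x - xc)**2 + (y - yc)**2)
                                     / (2 * sig**2))
    return g(a1, x1, y1) + g(a2, x2, y2)

def fit_binary(image, p0):
    """Fit the blended pair; return separation (pixels), position angle
    (degrees, in the image frame, not calibrated to sky north), and the
    secondary/primary flux ratio."""
    y, x = np.indices(image.shape)
    res = least_squares(lambda p: (two_gaussians(p, x, y) - image).ravel(), p0)
    a1, x1, y1, a2, x2, y2, _ = res.x
    sep = np.hypot(x2 - x1, y2 - y1)
    pa = np.degrees(np.arctan2(x2 - x1, y2 - y1))
    return sep, pa, a2 / a1
```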
## 3 Results for Individual YSOs
Figures 1(a)-(f) show a three-color composite image for each of the young stellar objects in our sample. In this figure, we present “pseudo-true color” (sensitivity normalized) mappings of F110W to blue, F160W to green, and F205W to red. The scale and orientation of each image are depicted in the figures. Figures 2(a)-(f) are the flux calibrated contour maps of each source for the F160W images. Although the images have been deconvolved and reconvolved to a common resolution, residual PSF features remain, especially in the F205W images. These artifacts appear as multicolored spots and rings surrounding the stellar PSF. We are therefore wary of interpreting structure near the Airy rings of the stars. The deconvolution has also added speckle noise to the background, which has been de-emphasized in the stretches presented here.
The results are presented in order of decreasing millimeter continuum flux. To the extent that the objects have comparable luminosities, this is equivalent to sorting from largest to smallest circumstellar masses.
### 3.1 DG Tau B
DG Tau B is a low-luminosity jet source located 1<sup>′</sup> southwest of DG Tau, and its IR SED is confused with this brighter optical source. DG Tau B was first identified as a Herbig-Haro jet exciting star by Mundt & Fried (1983), and the kinematics of its optical outflow (HH 159) has been the subject of several papers (Eislöffel & Mundt 1998; Mundt, Brugel, & Bührke 1987). Radial velocities indicate that the jet is directed to within 15° of the plane of the sky. DG Tau B also drives a large molecular jet which is spatially coincident with its redshifted optical jet (Mitchell et al. 1997, 1994). The DG Tau B source has been detected in 6 cm radio continuum emission (Rodriguez et al. 1995), which is further evidence of this system’s youth. Stapelfeldt et al. (1997) found that DG Tau B is resolved in HST/WFPC2 images as a compact bipolar nebula with no optically visible star.
In the HST/NICMOS composite image presented in Figure 1a, DG Tau B appears as a bipolar reflection nebula. The eastern lobe appears V-shaped, with an axis of symmetry coinciding with the direction of the blueshifted jet (Eislöffel & Mundt 1998). Therefore, we conclude that the “V” traces the walls of the blueshifted outflow cavity. At 2 $`\mathrm{\mu m}`$, the stellar PSF becomes visible at the apex of the “V”. Immediately to the southwest of the central source is a bright spot in the nebula along the inner edge of the “V”. Although this feature might be due to a companion source, the spot itself is extended relative to the PSF and, therefore, cannot be a companion seen directly. There appear to be no significant color gradients across the cavity walls. The western lobe of the nebula is several times fainter and has a much narrower opening angle. It encompasses the redshifted optical and CO jets. Several knots in the redshifted jet are visible in the 1 $`\mathrm{\mu m}`$ (F110W) image (Figure 3), probably due to 1.26 $`\mathrm{\mu m}`$ \[Fe II\] emission. Knots in the blueshifted jet are not obvious against the eastern lobe’s bright reflection nebulosity. Separating the two lobes of bright nebulosity is a thick dust lane perpendicular to the jet which can be traced to a length of 600 AU. A dense bar of <sup>13</sup>CO(1-0) emission has recently been detected at this location by Stapelfeldt et al. (1999) using the Owens Valley Radio Observatory (OVRO) Millimeter Array.
### 3.2 IRAS 04016+2610
IRAS 04016+2610 is a Class I infrared source with L<sub>bol</sub> = 3.7 L<sub>⊙</sub> located in the L1489 dark cloud of the Taurus star-forming region (Kenyon & Hartmann 1995). Millimeter-wave observations of this source have detected an extended molecular gas core (Hogerheijde et al. 1998, Ohashi et al. 1991). This IRAS source appears as a scattered light nebula in the optical and near infrared (Lucas & Roche 1997, Whitney et al. 1997), but is centrally condensed at 2 microns. IRAS 04016+2610 powers a low velocity molecular outflow adjacent to the source (Hogerheijde et al. 1998) and has been suggested as the powering source of a more extended molecular outflow to the northwest (Moriarty-Schieven et al. 1992, Terebey et al. 1989). Several Herbig-Haro objects coincide with the lobes of these outflows (Gomez et al. 1997).
In Figure 1b, IRAS 04016+2610 appears as a unipolar reflection nebula with a point source at the apex at 1.6 $`\mathrm{\mu m}`$ and 2 $`\mathrm{\mu m}`$. The multicolored specks around the point source are PSF artifacts which were not entirely removed by the deconvolution process. Among the sources in our NICMOS imaging sample, IRAS 04016+2610 is the brightest point source at 2 $`\mathrm{\mu m}`$ (Table 4), with about 75$`\%`$ of the flux concentrated in the PSF (Table 3). In the 1 $`\mathrm{\mu m}`$ image, the object is dominated by scattered light, and the highest surface brightness region is adjacent to the point source position seen at longer wavelengths. This bright patch is separated from the main part of the reflection nebula by an area of extinction extended roughly east-west. This feature is visible as a swath of reddening which cuts diagonally across the reflection nebula in Figure 1b. Between 1 $`\mathrm{\mu m}`$ and 1.6 $`\mathrm{\mu m}`$, the shape of the reflection nebula changes significantly, from a V-shaped morphology at 1 $`\mathrm{\mu m}`$ to a broader bowl-shaped nebula at longer wavelengths. The symmetry axis of this nebula is well aligned with adjacent optical and blueshifted millimeter outflows (Hogerheijde et al. 1998, Gomez et al. 1997). About 3<sup>′′</sup> NNE of the point source is a faint triangular patch of reflection nebula detached from the main nebula. A dark lane oriented at PA $`=`$ 80° separates this counternebula from the main nebula but does not extend to the other side of the main nebula’s symmetry axis. The dust lane can be traced for at least 600 AU before it loses definition in the extremely dark region to the northwest of the star. The position angle of this dark lane matches the orientation of the elongated <sup>13</sup>CO(1-0) and HCO<sup>+</sup> structures mapped by Hogerheijde et al. (1998).
### 3.3 IRAS 04302+2247
IRAS 04302+2247 is a Class I source with estimated L<sub>bol</sub> = 0.34 L<sub>⊙</sub> (Kenyon & Hartmann 1995) located in the vicinity of the Lynds 1536b dark cloud. The source is undetected at 12 microns by IRAS, and its infrared SED peaks around 100 microns with a flux of about 10 Jy (Beichman et al. 1992). Bontemps et al. (1996) mapped a small, low-velocity CO outflow nearby which they associate with this source, and Gomez et al. (1997) have found two Herbig-Haro objects several arcminutes northwest overlying the blueshifted lobe of the outflow. IRAS 04302+2247 was studied in the near-IR at UKIRT by Lucas & Roche (1997, 1998), who published the first description of the remarkable dust lane and bipolar nebula of this system and presented near-IR polarimetry.
IRAS 04302+2247 is surely among the more spectacular young stellar objects observed by the Hubble Space Telescope. The HST/NICMOS near-IR appearance of this object in Figure 1c is dominated by the totally opaque band extending 900 AU north/south which bisects the scattered light nebulosity. No point source is detected in this source at any of the observed wavelengths. The apparent thickness of the extinction band decreases by about 30$`\%`$ from 1 $`\mathrm{\mu m}`$ to 2 $`\mathrm{\mu m}`$, which accounts for the reddening seen along its edges. Although the dark feature has relatively straight edges at 1 $`\mathrm{\mu m}`$, at 2 $`\mathrm{\mu m}`$ the lane is thicker at its ends than at its center by a factor of two. Owens Valley Millimeter Array mapping of this source in <sup>13</sup>CO(1-0), to be presented in Padgett et al. (1999), indicates that the dark lane coincides with a dense rotating disk of molecular gas. The dust lane of IRAS 04302+2247 may therefore be a large optically thick circumstellar disk seen precisely edge-on.
The scattered light nebula of IRAS 04302+2247 is dramatically bipolar in morphology, with approximately equal brightness between the eastern and western lobes. The eastern lobe is only a few percent brighter than the western lobe at 1 $`\mathrm{\mu m}`$ and 1.6 $`\mathrm{\mu m}`$, but is 15 $`\%`$ brighter at 2 $`\mathrm{\mu m}`$. The shape of the nebular lobes is roughly similar to the wings of a butterfly, giving the object its alias “Butterfly Star in Taurus” (Lucas & Roche 1997). At 1 $`\mathrm{\mu m}`$, the brightest parts of the nebula are confined to the central region along the presumed outflow axis perpendicular to the dust lane. However, the nebular morphology changes with increasing wavelength, from roughly “V”-shaped at 1 $`\mathrm{\mu m}`$ to more flattened along the dust lane at 2 $`\mathrm{\mu m}`$. Within each lobe are a variety of bright filamentary structures subarcsecond in length and unresolved in width, some of which are curved near the dust lane. There are also areas of non-uniform extinction within the lobes which suggest a rather clumpy distribution of material. A prominent swath of extinction extends asymmetrically across the northern part of the western reflection lobe, curving smoothly into the dust lane. The central region of each lobe is fainter than the outer edges. These extinction features divide the bright lobes into quadrants, accounting for the “quadrupolar” morphology seen by Lucas & Roche (1997) and modeled as an opaque jet. One possibility is that intervening clumps of absorbing material are superposed on the symmetry axis of the reflection lobes. This interpretation would be similar to one advocated by Stapelfeldt et al. (1995a) for HL Tau. However, the darker zone does not appear to be edge-reddened as would be expected for a dark clump. Another possibility is that the darker region is an evacuated zone along the presumed outflow axis which is cleared of the reflective streamers of material seen along the cavity walls.
### 3.4 Haro 6-5B
Haro 6-5B, located about 20<sup>′′</sup> west of the FS Tauri T-Tauri star binary, is the source of the HH 157 optical jet. Because of its proximity to this brighter pre-main sequence system, the IRAS SED of Haro 6-5B is confused with FS Tau. This YSO was first noted as a region of compact reflection nebulosity along the emission line jet (Mundt & Fried 1983). The kinematics of the jet suggest that the outflow for this source is nearly in the plane of the sky (Eislöffel & Mundt 1998). Millimeter interferometry indicates that Haro 6-5B has compact <sup>13</sup>CO(1-0) emission (Dutrey et al. 1996). HST/WFPC2 imaging revealed that Haro 6-5B is a compact bipolar nebula bisected by a dust lane which is similar to models of a nearly edge-on optically thick disk (Krist et al. 1998). Therefore, Haro 6-5B appears very similar to the edge-on young stellar disk system HH 30 (Burrows et al. 1996).
HST/NICMOS images of Haro 6-5B generally resemble the HST/WFPC2 images of Krist et al. (1998) with the exception that the point spread function of the star is directly detected in the near-IR images. The NICMOS color composite of Haro 6-5B is presented in Figure 1d. The source itself appears as two parallel curved reflection nebulae separated by a dark lane about 600 AU in length. The bipolar nebula is most extended along the dark lane, perpendicular to the optical outflow axis. The northeastern reflection lobe, from which the blueshifted optical jet emerges, is brighter than its southwestern counterpart at all observed wavelengths. The PSF is visible in the 1.1 $`\mathrm{\mu m}`$ image at the base of the northeastern lobe and contributes 60% of the light at 2 $`\mathrm{\mu m}`$. Using the upper limit for stellar V magnitude from Krist et al. (1998) and assuming a late type photosphere, we determine that the lower limit of extinction toward the Haro 6-5B star is A<sub>V</sub> ≥ 8. If an appreciable percentage of the emission identified as photospheric is actually produced by hot dust close to the star, the extinction could be larger. About 10<sup>′′</sup> north of Haro 6-5B is a large diffuse nebula detached from the circumstellar nebulosity. This nebulosity was also detected by WFPC2 and given the designation R1 in Mundt, Ray, & Raga (1991). There is also a faint suggestion of at least one knot in the redshifted jet in the F110W and F160W images.
### 3.5 IRAS 04248+2612
IRAS 04248+2612 is a Taurus Class I source with a luminosity of 0.36 L<sub>⊙</sub> (Kenyon & Hartmann 1995). The object was weakly detected in HCO<sup>+</sup> (Hogerheijde et al. 1997) by the James Clerk Maxwell Telescope. IRAS 04248+2612 apparently drives a small molecular outflow (Moriarty-Schieven et al. 1992) which has not been mapped. Also known as HH31 IRS, IRAS 04248+2612 is presumed to be the exciting source for HH 31 and several other small Herbig-Haro objects (Gomez et al. 1997). In the near infrared, this source has a complex bipolar reflection nebulosity which was studied with shift-and-add UKIRT infrared imaging and polarimetry by Lucas & Roche (1997). Imaging polarimetry performed by Whitney et al. (1997) led to their suggestion that this source is seen close to edge-on.
In the NICMOS images, IRAS 04248+2612 appears as a long, curving bipolar reflection nebula. The major axis of this nebula extends for at least 10<sup>′′</sup> (1400 AU) north-south, bending significantly east at the southern end and west at the northern end. The nearby string of HH objects lies along the long axis of the southern lobe close to the source, but curves eastward into an S-shaped jet at greater distances, suggesting a time-varying outflow axis (Gomez et al. 1997). The elongated reflection nebula of IRAS 04248+2612 therefore appears to define outflow channels. Although many YSO outflow cavities appear to be limb-brightened, the reflection nebula in this system is centrally brightened. Along the outflow axis within the southern lobe is a bright elongated structure which appears helical. This “corkscrew” nebulosity extends about 420 AU southwards from the central source. The northern lobe also has a bright structure which seems to be the mirror image of the southern helix. However, it appears to end or be disrupted within about 150 AU north of the central source. Although the morphology of these structures suggests an outflow origin, their similar brightness in all the wide NICMOS bands seems to indicate that they are reflection rather than emission nebulae. An additional faint patch of reflection nebulosity is located about 750 AU to the northeast of the binary, detached from the northwest lobe of reflection nebulosity. Polarization maps presented in Lucas $`\&`$ Roche (1997) indicate that the position angle of the polarization vector is consistent with illumination by the distant binary.
NICMOS imaging of the IRAS 04248+2612 central source reveals that it is actually a close binary with projected separation of ≈ 25 AU (Table 4). Although the components of the binary are comparable in brightness, the eastern star (A) appears slightly redder than its neighbor. The presence of a close binary in this system is in accordance with its lower millimeter continuum flux relative to other Class I sources in the study. In addition, orbital motion of a binary jet source offers a plausible explanation for both the helical dusty “trail” in the southern nebular lobe and the large scale sinusoidal curving of the southern jet seen by Gomez et al. (1997). The central binary appears to peek over the north edge of a dark lane which pinches the bright nebulosity into bipolar components. This apparent dust lane is at least 450 AU in diameter and appears to extend along a position angle perpendicular to the outflow axis. The PA of the absorption lane seems to be slightly offset from the separation vector between the two stellar components.
### 3.6 CoKu Tau/1
CoKu Tau/1 is another faint Herbig-Haro jet exciting star located in the L1495 cloud near the embedded Ae star Elias 3-1, which confuses its 60 $`\mathrm{\mu m}`$ and 100 $`\mathrm{\mu m}`$ IRAS SED. CoKu Tau/1 is detected at the shorter wavelength IRAS bands with F(12$`\mathrm{\mu m}`$) = 1.18 $`\pm `$ 0.26 Jy and F(25$`\mathrm{\mu m}`$) = 2.74 $`\pm `$ 0.63 Jy (Weaver & Jones 1992). Its total luminosity is estimated at only 0.065 L<sub>⊙</sub> by Strom & Strom (1994), who derived a spectral type of M2e. CoKu Tau/1 has been detected in the radio continuum at 6 cm (Skinner, Brown, & Stewart 1993), but is undetected at 1.3 mm (see Table 1). This object is the source of the small HH 156 bipolar jet (Strom et al. 1986); its kinematics suggest that the outflow is near the plane of the sky (Eislöffel & Mundt 1998).
In the HST/NICMOS near-IR images (Figure 1f), CoKu Tau/1 appears as a faint binary with four filamentary reflection nebulae curving parabolically away from the central sources. Since the optical outflow is known to emerge from between the southwestern “horns”, we interpret these structures as the limb-brightened walls of outflow cavities. Within the northern cavity is a filamentary arc of material which forms a closed loop about 200 AU in size. Although the morphology of this feature is suggestive of a dark clump backlit by bright nebulosity, no enhanced reddening is seen within the loop.
Like IRAS 04248+2612, CoKu Tau/1 is a previously unrecognized binary (see Table 4). Both of these binary systems are too faint at 2 microns to have been detected by published ground-based speckle surveys (Ghez et al. 1992, Leinert et al. 1993, etc.). The CoKu Tau/1 secondary is about a magnitude fainter than the primary and is somewhat redder. The filamentary loop in the northern outflow cavity is located in the vicinity of the secondary, which suggests that it traces a secondary outflow cavity.
HST/NICMOS images reveal a local minimum in surface brightness between the two cavities and along the plane of the binary stars. This feature is suggestive of a dust lane which appears much thinner than the other dust lanes seen in our survey. This might be a circumbinary ring or disk structure. The mass of the structure would have to be small, since the millimeter continuum limits the mass to less than 10<sup>-3</sup> M<sub>⊙</sub>. However, very little dust mass is required to produce a disk which is optically thick at near-infrared wavelengths, and, therefore, visible as a dust lane when at near edge-on inclinations.
## 4 Discussion
### 4.1 Dust Lane Properties
All of the young stellar objects imaged in the current NICMOS survey have a morphology which includes a dark lane crossing the scattered light nebula. Table 5 lists morphological parameters for the dust lanes seen in the NICMOS images of young stellar objects. The lengths and thicknesses of dust lanes were determined by making cuts across the feature midplane perpendicular to the major axis. The minor axis was measured by averaging 5 cuts through the center of the dark lane. In cases where the center of the dust lane is adjacent to a PSF, we determined the dust lane thickness by taking the mean of cuts on both sides of the PSF, beyond the Airy ring and other PSF artifacts. The dip in the brightness profile caused by the dust lane was fit by a Gaussian using the IRAF tool IMPLOT, and the derived full width at half maximum of this feature is the “apparent thickness” or minor axis given in the results section and Table 5. The major axes of the dark lanes were determined by noting the radial distance from the photocenter at which the dip in the surface brightness profile was less than 10% of its maximum depth or where the feature widened to twice the FWHM at or near the photocenter. Although some of these features can be traced to greater distances, our intent was to place a lower limit on the disk extent without confusing it with the separation of cavity walls. Better quantification of disk parameters awaits the application of multiple scattering models in future papers. “PSF visibility” gives the filter at which a stellar PSF was detected, and the “+” indicates that the PSF was detected at all longer wavelengths.
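The thickness measurement thus reduces to fitting an inverted Gaussian to the brightness dip. A minimal sketch of that step, using scipy in place of IMPLOT (so the implementation details are ours rather than those of the original reduction):

```python
import numpy as np
from scipy.optimize import curve_fit

def dip(x, base, depth, x0, sigma):
    """Brightness cut across the lane: a baseline minus a Gaussian dip."""
    return base - depth * np.exp(-(x - x0)**2 / (2 * sigma**2))

def lane_thickness(cuts, pixel_scale_au):
    """Average several perpendicular cuts through the lane center, fit
    the dip, and return its FWHM (the 'apparent thickness') in AU."""
    profile = np.mean(cuts, axis=0)
    x = np.arange(profile.size)
    p0 = [profile.max(), np.ptp(profile), float(profile.argmin()), 3.0]
    popt, _ = curve_fit(dip, x, profile, p0=p0)
    return 2.3548 * abs(popt[3]) * pixel_scale_au   # FWHM = 2.3548 sigma
```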
The lengths of these dust lanes vary within our sample from 500 AU to 1000 AU. The apparent thicknesses of these dark features range from 50 AU to 340 AU. These widths do not represent the actual scale height of dense material, but rather define surfaces where optical depth ≈ 1 at NIR wavelengths for dusty circumstellar structures. The apparent thicknesses of the dust lanes seem to be related to the amount of millimeter continuum, in that objects with more 1 mm emission have thicker lanes. In addition, comparison of the dust lane position angles (Table 5) with the position angles of known outflows (Table 1) reveals that the lanes are perpendicular to outflows in almost every case. Finally, the dust lanes of IRAS 04016+2610, Haro 6-5B, and IRAS 04302+2247 are spatially coincident with dense, possibly rotating, molecular bars mapped by millimeter interferometry (Hogerheijde et al. 1998, Dutrey et al. 1997, Padgett et al. 1999, Stapelfeldt et al. 1999).
Based on these lines of evidence, we conclude that the absorption bands seen in the HST/NICMOS images are probably optically thick circumstellar disks seen in silhouette against reflection nebulosity. The same interpretation has been offered for dark elongated features seen in HST imaging of several other YSOs including HH 30 (Burrows et al. 1996), Orion 114-426 (McCaughrean et al. 1998), Haro 6-5B (Krist et al. 1998), and HK Tau/c (Stapelfeldt et al. 1998). As explained in considerable detail by these authors, optically thick disks are completely opaque at optical and NIR wavelengths, with hundreds to thousands of magnitudes of extinction on a line of sight through the midplane. Therefore, the disks themselves are dark, while their upper and lower surfaces are illuminated by the central source. Models for disks of this sort are presented in Bastien & Menard (1990), Whitney & Hartmann (1992, 1993), Fischer, Henning, & Yorke (1996), Burrows et al. (1996), and Wood et al. (1998). Flattened envelopes may also produce dust lanes, as the models of Whitney & Hartmann (1993) indicate; kinematics from millimeter interferometry are required to differentiate these components of circumstellar material.
Scattered light models of optically thick circumstellar disks and flattened envelopes predict that the appearance of a YSO will vary according to disk mass and inclination relative to the line of sight. Optically thick circumstellar disks viewed within a few degrees of edge-on will entirely occult the star due to high extinction in the midplane. Conversely, if they are viewed pole-on, the dynamic range required to detect light reflected from the disk is predicted by theory to be beyond the capability of early 1990’s technology (Whitney et al. 1992). Edge-on disk systems are detected only in scattered light even at mid-IR wavelengths (Sonnhalter et al. 1995). There are currently three known edge-on disk sources: HH 30, Orion 114-426, and HK Tau/c, to which the current study can add a fourth - IRAS 04302+2247. Edge-on disks appear as concave nebulae of similar brightness which are separated by a flared dark lane. The apparent thickness of the lane increases with disk mass, ranging from 0$`\stackrel{}{\mathrm{.}}`$1 (at the distance of Taurus) for 10<sup>-4</sup> M<sub>⊙</sub> to ten times that for 10<sup>-2</sup> M<sub>⊙</sub> (Burrows et al. 1996). In addition, the apparent thickness of the disk will also vary with wavelength, becoming thinner at longer wavelengths. In systems which are slightly more than 10° from edge-on, the flared outer parts of the disk or the optically thick base of the envelope may occult the star at short wavelengths, but a PSF may be visible at longer wavelengths. In addition, one nebula will be considerably brighter than the other. In the current sample, DG Tau B fulfills these criteria, since the PSF is effectively undetected at 1 $`\mathrm{\mu m}`$. We also note that no PSF was detected for Haro 6-5B in WFPC2 8000Å images (Krist et al. 1998); however, the stellar PSF was detected for this source in all of our NIR bands. Unlike envelope sources, Class II T Tauri stars have no circumstellar material except in the disk plane; therefore, they rarely have the gradation of extinction seen in embedded YSOs (invisible at V, bright at K). Depending on their disk inclination, T Tauri stars will either be bright with little circumstellar extinction or nebulosity, or they will be almost invisible at optical and NIR wavelengths (Stapelfeldt et al. 1997).
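Taken at face value, the Burrows et al. (1996) endpoints quoted above imply that the apparent lane thickness grows roughly as the square root of the disk mass, θ<sub>lane</sub> ≈ 0$`\stackrel{}{\mathrm{.}}`$1 (M<sub>disk</sub>/10<sup>-4</sup> M<sub>⊙</sub>)<sup>1/2</sup>. This power-law interpolation is ours rather than Burrows et al.’s, and should be treated only as a rule of thumb between the two tabulated masses.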
Although the distribution of circumstellar material in YSOs is unquestionably flattened, the kinematics of this material is still controversial. Models which postulate a flattened, infalling envelope surrounding a small, geometrically flat rotating disk are quite successful in explaining the scattered light distribution around YSOs (Whitney & Hartmann 1992, 1993; Hartmann et al. 1994). Lucas & Roche (1997) simulated the scattered light distribution in their ground-based images of IRAS 04302+2247 with a model incorporating an Ulrich envelope. However, such models are less successful in reproducing the scattered light distribution from HH 30. In this case, the circumstellar nebulosity is better modeled as a large optically-thick edge-on circumstellar disk (Wood et al. 1998). Recent high-resolution molecular line observations are beginning to clarify the kinematic nature of the material in these large “disks”. Rotating CO structures around many YSOs with sizes from 100 AU - 1600 AU have been mapped using millimeter interferometers (Koerner $`\&`$ Sargent 1995, Dutrey et al. 1996, Hogerheijde et al. 1998). The disk around the T Tauri star GM Aurigae, which has been detected in reflected light by HST (Stapelfeldt et al. 1995b), shows a Keplerian rotation curve out to a radius of 500 AU (Dutrey et al. 1998). Those YSOs in the current survey which have been imaged via millimeter interferometry (IRAS 04016+2610, Hogerheijde et al. 1998; Haro 6-5B, Dutrey et al. 1996; DG Tau B, Stapelfeldt et al. 1999; IRAS 04302+2247, Padgett et al. 1999) all show concentrations of dense molecular gas close to the star. Unfortunately, the ≈ 3$`\stackrel{}{\mathrm{.}}`$0 resolution at which these YSOs have been mapped in the millimeter allows only a rough correspondence with features seen in HST images. OVRO Millimeter Array observations of IRAS 04302+2247 by Padgett et al. (1999) have confirmed that the dense material along the dust lane in this source also appears to be rotating. These observations suggest that the dense material within the YSO dark lanes seen by HST is rotating, and, possibly, centrifugally bound.
### 4.2 Properties of Bipolar Nebulosity
In most of the YSOs in the current sample, the symmetry axis of the bright reflection nebulae (perpendicular to the dust lanes) corresponds well to the outflow position angles determined by previous studies. Table 1 lists outflow position angles, and Table 6 contains the position angle of the nebula symmetry axis ($`\theta _{sa}`$). The “major axis” in Table 6 is the extent of the nebula along the symmetry axis, and the “minor axis” is the extent perpendicular to the symmetry axis. Only IRAS 04302+2247 is linked to a jet which has a very different position angle from the reflection nebula symmetry axis. HH 394 lies several arcminutes to the northwest of IRAS 04302+2247, forming a loose chain of HH objects which point back in the direction of the IRAS source (Gomez et al. 1997). The HH objects coincide well with the position of a blue-shifted clump of <sup>12</sup>CO(1-0) which was presumed to be part of the IRAS 04302+2247 molecular outflow by Bontemps et al. (1996). However, this optical and molecular outflow has a position angle which is only 20° offset from the dust lane (“disk” plane)! One possible explanation is that the disk is not perpendicular to the outflow for this source. However, we prefer to conclude that these optical and molecular outflows are probably unrelated to IRAS 04302+2247 and are instead part of an outflow from another, more distant source, as seen frequently in Orion (Reipurth et al. 1997) and the Perseus Molecular Cloud (Bally et al. 1997). A similar situation may exist for the extensive molecular outflow located to the northwest of IRAS 04016+2610 (Terebey et al. 1989).
The reflection nebulosity associated with the current sample of young stellar objects is most often bipolar, although the detailed structure of each nebula is unique. Since the axis of known outflows tends to coincide with the symmetry axis of the reflection nebulae and the lobes are often extended along the outflow directions, it appears that the bright nebulosity most often traces the $`\tau `$ = 1 scattering surface of dusty material in outflow cavities. Outflow cavities are presumed to represent the polar regions of circumstellar disk/envelope systems which have been cleared of dense gas by stellar jets (e.g. Raga 1995) or wide-angle outflows (e.g. Li $`\&`$ Shu 1996). Larger scale versions of these structures have long been known from ground-based NIR imaging (Terebey et al. 1991, Kenyon et al. 1993, Lucas & Roche 1997) and molecular line observations with millimeter interferometry (Bontemps et al. 1996, Hogerheijde et al. 1998). Hogerheijde (1998) describes the walls of outflow cavities as regions where outflowing gas interacts with infalling material in the envelope. Outflow cavities are commonly limb-brightened, especially at millimeter wavelengths (e.g. Bontemps et al. 1996), as seen for DG Tau B and CoKu Tau/1 in the current sample, and may be either conical (V-shaped) or paraboloidal in shape. The object with the smallest clearly defined opening angle for its V-shaped cavity is DG Tau B, which also has the largest mass of circumstellar material. The single star with the lowest circumstellar mass (Haro 6-5B) also has the largest opening angle. The two single sources with intermediate circumstellar masses (IRAS 04016+2610, IRAS 04302+2247) also have morphologies in between these extremes, with conical nebulae at short wavelengths and more flattened morphologies at 2 $`\mathrm{\mu m}`$. This suggests an evolutionary effect by which the cavity walls are widened as the circumstellar mass decreases.
The bipolar nebula of Haro 6-5B is unlike the other objects in that the scattered light is extended parallel, rather than perpendicular, to the dust lane bisecting the nebula. In this case, the scattered light appears qualitatively similar to models of edge-on disks used to fit the optical scattered light distribution of HH 30 (Burrows et al. 1996) and HK Tau/c (Stapelfeldt et al. 1998); however, the inclination is far enough from edge-on (≈ 10°; Krist et al. 1998) to permit the star to be directly detected in the NIR. Thus, it appears that the local circumstellar nebulosity of Haro 6-5B probably traces the upper and lower surfaces of a flared, optically thick circumstellar disk. Like HH 30, Haro 6-5B seems to have reached a stage where the scattered light distribution is dominated by the disk rather than the envelope, but sufficient material remains in the accretion disk to drive an energetic stellar jet.
Many YSO outflow “cavities” are not completely evacuated of dusty material. Every object (except possibly Haro 6-5B) that we observe in our HST/NICMOS survey has structures within the presumed outflow zone which are physically thin but are tens to hundreds of AU in length. In the case of IRAS 04302+2247, the limb brightened cavity walls encompass a plethora of these filaments. The most spectacular case of a filled outflow cavity is IRAS 04248+2612, where the polar regions are centrally brightened by an apparent dusty helical outflow channel. Many of these bright filamentary features are arcuate, and some form loops elongated in the outflow direction, as in the case of CoKu Tau/1. Similar features have been identified in the HST/NICMOS YSO observations of Terebey et al. (1999), as well as the ground-based imaging polarimetric studies of Lucas & Roche (1997) and Whitney et al. (1997). The kinematics of these high-surface brightness arcs are unknown; however, it is plausible that they are related to infalling or outflowing material in the envelope. Repeated observation of such features with HST/NICMOS might reveal the dynamics of small scale structures within the cavities of YSOs.
### 4.3 Effect of Binarity on Circumstellar Morphology
In the course of our survey, we found two previously unknown binaries with projected separations of ≈ 30 AU. These two sources have the lowest millimeter continuum fluxes of the YSOs in our sample (cf. Table 1). This is consistent with the results of Osterloh & Beckwith (1995), who found that millimeter continuum flux was diminished among known binaries with a separation of less than 100 AU. In both cases, we have evidence to suggest a circumbinary disk in the form of apparent dust lanes relatively well-aligned with the separation vector of the stellar components. For CoKu Tau/1, the possible dust lane is very thin and is only distinguished with difficulty between the curving arcs of the outflow cavity walls.
In young close binary systems, theory suggests that the orbital motion of the stellar components should clear a central hole in any circumbinary disk. Depending on the eccentricity of the binary orbits, the inner edge of the circumbinary disk is predicted to be at a distance of 2 - 3 times the semi-major axis (Artymowicz & Lubow 1994). Evidence for central holes in the CoKu Tau/1 and IRAS 04248+2612 systems is lacking in the NICMOS data, since the objects were selected to be nearly edge-on, and the stellar PSFs make direct detection of the gap impossible. However, the existence of central holes in these systems is plausible, given the evidence from other young binaries. Among young stars, the GG Tau circumbinary ring is a spectacular face-on example of a disk with a cleared central region (Roddier et al. 1996; Koerner, Sargent, & Beckwith 1993). Despite the central hole, the GG Tau binary components show ample spectroscopic evidence of accretion, indicating that material is bridging the gap and falling onto the stars. The active Herbig-Haro jets emanating from both IRAS 04248+2612 and CoKu Tau/1 indicate that accretion is continuing in these systems as well.
Evidence is mounting that disks around individual stars clear from the inside out, possibly as a result of planet formation processes. It has long been known that the central 100 AU of the Beta Pictoris debris disk is depleted of dusty material, and this appears to be true for other debris disks as well (Backman & Paresce 1993). Recently, mid-infrared imaging of HR 4796A (Koerner et al. 1998, Jayawardhana et al. 1998) and submillimeter maps of Epsilon Eridani (Greaves et al. 1998) have shown that inner disk holes in the absence of close binary companions may be common in younger disks and disks around solar-type stars. These stars range in age from about 20 Myr (HR 4796A) to several hundred Myr (Epsilon Eridani, Beta Pictoris, etc.). However, this disk clearing may occur at a far younger age for close binaries, dispersing the material required to make planets in the inner ∼ 100 AU region while retaining a ring of dusty gas in the outer parts of the circumbinary disk.
Despite the apparently diminished disk masses in the two binary systems, their large scale morphology suggests that both objects are very young. In CoKu Tau/1, nebulosity stretches for many hundreds of AU from the central sources, indicating the presence of extended circumstellar material in this system. The bipolar nebula of IRAS 04248+2612 is very elongated in the outflow direction, appearing as a relatively narrow channel in the surrounding cloud medium. Both YSOs are in extremely opaque parts of their respective clouds, without obvious neighbors or background stars. If these two objects are indeed very young (≲ 1 Myr) sources, they provide further evidence that disk evolution may be accelerated in close binaries (Osterloh & Beckwith 1995, Jensen & Mathieu 1997, Meyer et al. 1997).
It is interesting that the dust lane in IRAS 04248+2612 is not aligned with the separation vector between the stellar components. There are at least two possible explanations. The most tantalizing is that the disk is inclined relative to the orbit plane of the binary. Papaloizou & Terquem (1995) have theoretically modeled the tidal perturbations of YSO disks by companions on inclined circular orbits and have suggested that observable warps in the disk are a likely outcome. This interpretation has been applied to the warp in the Beta Pictoris disk, where an unseen planetary mass companion is presumed to excite the disk plane asymmetries (Mouillet et al. 1997). A more prosaic and likely explanation notes that because the disk is not precisely edge-on, the projected binary component separation vector need not be in the plane of the disk. From our vantage point, the orbit of the secondary would describe a long and narrow ellipse with minor axis angular size equivalent to the tilt of the orbital plane from an edge-on configuration. Another consequence of this configuration is that the projected separation of the binary components may be much smaller than their physical separation.
### 4.4 A Morphological Sequence of Pre-Main Sequence Circumstellar Material?
Combining the current NICMOS sample of YSOs with high resolution observations of young circumstellar disks in the literature, we can begin to relate high resolution scattered-light morphologies to the evolutionary status of circumstellar material around solar-type young stars. Our goal is to estimate the relative placement of the objects in our small sample in the context of pre-existing evolutionary scenarios. Presuming that millimeter continuum observations are a reasonable measure of circumstellar masses, we would order the present objects in the following sequence from most massive to least massive: DG Tau B (≈ 320 mJy), IRAS 04016+2610 (180 mJy), IRAS 04302+2247 (149 mJy), Haro 6-5B (134 mJy), IRAS 04248+2612 (99 mJy), and CoKu Tau/1 (< 12 mJy). In the objects with greater millimeter continuum flux such as DG Tau B and IRAS 04016+2610, the NICMOS images indicate a large amount of material above the midplane in the form of outflow cavity walls which are optically thick at visual wavelengths, but become largely transparent at 2 $`\mu `$m. It is these circumstellar structures, which can be understood as the interaction zone between the infalling circumstellar envelope and the polar outflows, that cause many Class I sources to be very faint in the optical, yet bright at 2 $`\mu `$m. The extinction in the cavity walls is extremely non-uniform, as evidenced by the curving swaths of reddening seen above the “disk” plane in IRAS 04016+2610 and IRAS 04302+2247. These streamlined structures could be related to infall as discussed in Terebey et al. (1999). According to the current paradigm of star formation, sources with an extensive envelope should be the youngest objects, and we identify DG Tau B and IRAS 04016+2610 as good examples of envelope dominated systems. The importance of the envelope component in the IRAS 04016+2610 system has been highlighted by Hogerheijde et al. (1997), who found that half or more of the circumstellar mass indicated by millimeter continuum may originate in a resolved envelope. Envelope systems have often been called “optically invisible protostars” since the extinction in the material above the midplane guarantees that the source appears very red even at moderate inclinations.
Among our sample, IRAS 04302+2247 may be at an intermediate stage between envelope dominated and disk dominated systems. Although the NICMOS images show evidence of material above the disk plane, the dense matter appears to be confined to near the disk plane. The decreasing apparent thickness of its dust lane with increasing wavelength clearly indicates a vertical density gradient. The faintness of this object at all wavelengths is a function of inclination rather than youth; it is seen almost precisely edge-on. YSOs such as IRAS 04302+2247 are plausibly in a “big disk/dispersing envelope” stage. Probably most of the more active T Tauri stars with healthy millimeter continuum emission are in a similar stage. A good example of a similar object seen at a less favorable inclination is GM Aurigae (Stapelfeldt et al. 1995b). The disk extends far enough from the central star in this system to be visible as compact circumstellar nebulosity even at optical wavelengths. Further analysis of recent millimeter interferometry for IRAS 04302+2247 should indicate the relative percentage of mass in the disk versus the envelope, clarifying the evolutionary state of this object (Padgett et al. 1999).
The next stage of evolution appears to be represented by objects such as HH 30 and possibly Haro 6-5B, where the scattered light nebulosity adjacent to the star appears to originate entirely from the surface of the optically thick disk. The difference between Haro 6-5B and its analogue HH 30 appears to be entirely due to variations in disk mass. It seems plausible that once the envelope has dispersed, the disk begins to diminish in mass due to accretion onto the star and planetary system, as well as associated outflow events. By the time that jet emission has ceased, edge-on disks will become difficult to see due to the thinness of the dust lane (Stapelfeldt et al. 1998). We have yet to see an optically thin disk at optical or NIR wavelengths around a certifiable solar-type pre-main sequence star, since the IRAS survey was not sensitive enough to detect optically thin disks at the distance of nearby star-forming regions. We await WIRE and SIRTF to identify these potentially very interesting targets for high resolution imaging.
The very lowest circumstellar masses seen for pre-main sequence stars are in binary systems such as IRAS 04248+2612, CoKu Tau/1 and HK Tau/c (Stapelfeldt et al. 1998), where the effects of the companion have likely been to clear a hole in a circumbinary disk or tidally truncate the circumsecondary disk. If close binary systems indeed experience substantially accelerated disk evolution as suggested by the current study, then this sizable percentage of YSOs will defy the morphological sequence for single stars, since their disks may dissipate prior to envelope clearing. Again, high resolution millimeter observations may clarify the evolutionary status of these objects.
### 4.5 Detailed Morphology Versus IR SED
In the past decade, our progress in understanding star and planet formation has largely depended on interpretation of young star SEDs from the near-IR to millimeter wavelengths. The mid-to-far IR excesses measured by IRAS allow us to infer the existence of circumstellar disks and envelopes around young stellar objects and estimate their frequency and timescale for dissolution (Strom et al. 1989). The availability of 15 AU resolution observations of circumstellar structures provides a testing ground for models based on SEDs. For example, in the current study IRAS 04248+2612 and IRAS 04302+2247 have very similar spectral energy distributions (Figure 4), especially in the wavelength range covered by the IRAS satellite. However, comparison of the HST/NICMOS images for these objects demonstrates that the detailed morphology of these sources is quite distinct, as discussed in Sections 3.3 and 3.5. In particular, the IRAS fluxes do not provide any indication of binarity for IRAS 04248+2612. Since the IRAS bands are sensitive to circumstellar material at ≈ 1 AU - 100 AU, it would seem that small circumstellar disks are probably still present in the close binary system. The only hint of binarity found in the SED is the lower 1 mm continuum flux for IRAS 04248+2612 (Osterloh & Beckwith 1995). The inclination of IRAS 04302+2247 is suggested by its IRAS non-detection at 12 microns. Radiation transfer models of edge-on disks suggest that they are viewed primarily in scattered light, and are thus exceptionally faint, out to wavelengths exceeding 25 $`\mu `$m (Sonnhalter et al. 1995). ISO observations of the edge-on disk system HH 30 appear to confirm these predictions for that object (Stapelfeldt & Moneti 1998). The significance of factors such as source multiplicity, disk extent, orientation, and gaps, dynamical structures in the polar outflow zones, and interstellar environment can only be determined by imaging at scales smaller than tens of AU. Modelling of high-resolution optical and NIR images complements information derived from SED modelling in providing an accurate depiction of the circumstellar environment of YSOs. The extremely high sensitivity and improved resolution of upcoming mid-to-far IR space telescopes such as WIRE and SIRTF will extend the IRAS results to more evolved systems and identify potential new targets for high-resolution imaging.
## 5 Conclusions
We observed six young stellar objects at 1.1 $`\mathrm{\mu m}`$ - 2 $`\mathrm{\mu m}`$ with HST/NICMOS. Images with 15 AU resolution were successfully obtained for young stellar object systems spanning a range of circumstellar masses as derived from millimeter continuum emission. The near-infrared morphologies reveal conical to parabolic bipolar reflection nebulae crossed by dark lanes. We identify the dark lanes as circumstellar disks of dimension 500 AU - 1000 AU seen at near edge-on inclinations. Millimeter interferometry available for some of the sources provides evidence of rotational motion for material in these disks. We identify the bipolar reflection nebulae as infalling envelopes illuminated by the central stars, and, in some cases, the top and bottom surfaces of optically thick circumstellar disks. The limb-brightened cavities noted in several sources most likely represent the boundary between the infalling envelope and accretion-driven outflow. If so, the opening angles of the cavities are in contrast to the narrowly collimated jets observed within the cavities for some sources. Evolutionary effects are suggested by the increase in cavity opening angle as disk mass (traced by millimeter continuum emission) decreases. In addition, although the two sources found in our study to be close binaries have the smallest disk masses, their extended morphology and IRAS SEDs closely resemble other envelope-dominated YSOs. This suggests accelerated disk evolution for close binary stars.
The authors acknowledge contributions by S. Kenyon, as well as helpful comments by an anonymous referee. Deborah Padgett and Wolfgang Brandner gratefully acknowledge support from the NASA/WIRE project and STScI, as well as the tireless efforts of the NICMOS instrument team in making these observations possible. Support for this work was provided by NASA through grant number STScI GO-7418.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
# Direct spectroscopic evidence for ionized PAHs in the interstellar medium
## 1 Introduction
Gillett, Forrest, & Merrill (1973), in a spectroscopic study of planetary nebulae, discovered a series of unidentified infrared (UIR) bands, now known to include features at 3.3, 6.2, 7.7, 8.6, and 12.7 $`\mu `$m. Since then the UIR bands have been observed in a rich variety of astronomical sources (see Allamandola 1996 for a recent review). Polycyclic aromatic hydrocarbons (PAHs) were first introduced as a possible carrier of the UIR bands by Leger & Puget (1984) and Allamandola, Tielens, & Barker (1985). While other carriers have been proposed for the UIR bands, none have proven as successful as the PAH model in explaining the observed spectral details. Nonetheless, many differences exist between earlier laboratory and astronomical spectra, including the relative strength of the features and the wavelengths at which they appear, and these differences have prevented the PAH model from gaining wider adherence.
A major discrepancy exists between the relative intensities of the features in the 10–13 $`\mu `$m region (C–H out-of-plane bending modes) and the features in the 6–9 $`\mu `$m region (C–C modes and the C–H in-plane bend at 8.6 $`\mu `$m). In laboratory spectra of neutral PAHs, the C–C modes are much weaker than the features at longer wavelengths, but in astronomical spectra, the C–C modes are stronger. Allamandola et al. (1985) originally suggested that most or all PAHs in the interstellar medium would be ionized, due to their low ionization potential (∼6 eV), and laboratory investigations of PAH cations show better agreement with the observed UIR spectra, in terms of both the relative feature strengths and positions (Hudgins, Sandford, & Allamandola 1994; Hudgins & Allamandola 1995a, 1995b, 1997, 1998; Szczepanski & Vala 1993a, 1993b; Szczepanski, Chapo, & Vala 1993; Szczepanski et al. 1995a, 1995b).
Here we make a detailed comparison between recently available laboratory data and spatial and spectral variations in the infrared spectrum from the SVS 3 region in the reflection nebula NGC 1333. SVS 3 is an early B star (Strom, Vrba, & Strom 1976; Harvey, Wilking, & Joy 1984), producing a much milder UV spectrum than found in other PAH emission regions (e.g. the Orion Bar, NGC 7027). Joblin et al. (1996) obtained discrete mid-infrared spectra at three locations in the reflection nebula using a 5<sup>′′</sup> beam and found that the strengths of the 8.6 and 11.2 $`\mu `$m PAH features varied inversely with each other, with the 8.6 $`\mu `$m feature emitting more strongly near SVS 3 and the 11.2 $`\mu `$m feature emitting more strongly to the south, away from SVS 3. Since the 8.6 $`\mu `$m feature shows enhanced strength in the spectra of most PAH cations (along with the other bands between 6 and 10 $`\mu `$m), Joblin et al. concluded that SVS 3 was ionizing a larger fraction of the PAHs closer in than further away. This dependence of ionization fraction as a function of distance from the ionizing source could easily result from geometric dilution.
This investigation concentrates on emission features in the 10–13 $`\mu `$m spectral region, which arise from out-of-plane bends of C–H bonds on the periphery of PAH molecules (e.g. Bellamy 1958; Allamandola, Tielens, & Barker 1989). For neutral PAHs the strongest feature at 11.2 $`\mu `$m originates in PAH rings which contain only one C–H bond (the solo mode). The duo mode (two adjacent C–H bonds) produces a feature in the vicinity of 11.9 $`\mu `$m, but this feature is much weaker in astronomical sources. Since in the laboratory spectra of PAHs the wavelength of these bands can shift a substantial fraction of a micron, depending on the size and structure of the molecule, the positions of the solo and duo modes have long been used by chemists as a diagnostic of PAH structure. The trio mode produces a feature at ∼13 $`\mu `$m, but the wavelength range of this mode overlaps with the quartet and quintet modes at longer wavelengths, making unambiguous identification difficult.
## 2 Observations and analysis
In order to investigate the spectral variations in the PAH emission at higher spatial resolution, we obtained long-slit 8–13 $`\mu `$m spectra of NGC 1333 SVS 3 at the 5-m Hale Telescope at Palomar on the nights of 1996 September 29–30 (UT) using SpectroCam-10 (Hayward et al. 1993).<sup>1</sup><sup>1</sup>1Observations at Palomar Observatory were made as part of a continuing collaborative agreement between the California Institute of Technology, Cornell University, and the Jet Propulsion Laboratory. The data have a spectral resolution of 0.19 $`\mu `$m and a diffraction-limited angular resolution of ∼0$`\stackrel{}{\mathrm{.}}`$5. The slit was oriented N/S and covered a 2<sup>′′</sup>$`\times `$16<sup>′′</sup> region of the sky including SVS 3 and the nebulosity to the south. We used standard chop-and-nod sequences with 40<sup>′′</sup> and 60<sup>′′</sup> amplitudes E/W to correct for background emission from the telescope and sky. The data were flux-calibrated using spectra of $`\beta `$ Peg taken immediately before or after the NGC 1333 observations, together with archival SpectroCam-10 ratio spectra of $`\beta `$ Peg vs. $`\alpha `$ Lyr and the absolute $`\alpha `$ Lyr model from Cohen et al. (1992). The data from the two nights were combined into a single 2-D spectral image from which individual 1-D spectra were extracted for plotting.
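The calibration chain amounts to a product of ratio spectra. Schematically (the function and array names below are ours, and all spectra are assumed to be resampled onto a common wavelength grid):

```python
import numpy as np

def calibrate(target_counts, beta_peg_counts, beta_peg_over_vega, vega_model):
    """Flux-calibrate a target spectrum against beta Peg:
    S_target = (target / beta Peg)_observed
               x (beta Peg / alpha Lyr)_archival x S_alphaLyr(model)."""
    return (np.asarray(target_counts) / np.asarray(beta_peg_counts)
            * np.asarray(beta_peg_over_vega) * np.asarray(vega_model))
```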
Figure 1 illustrates the spectrum summed from 2″ south of SVS 3 to the end of the slit, showing the 8.6 $`\mu `$m feature on the shoulder of the stronger 7.7 $`\mu `$m feature, the 11.2 $`\mu `$m feature and the emission plateau extending to the weaker 12.7 $`\mu `$m feature. Figure 2 shows how the strengths of the PAH features vary with position along the slit. While the 11.2 and 12.7 $`\mu `$m features grow progressively stronger toward the PAH emission ridge 10″ south of SVS 3, the 8.6 $`\mu `$m feature is stronger closer in. This behavior confirms the results of Joblin et al. (1996).
Fig. 3 shows how the shape of the 11.2 $`\mu `$m feature (solo mode) changes as a function of distance from SVS 3. Close to the source, the feature has an excess on the short-wavelength wing, which appears to consist of multiple components. As the distance from SVS 3 increases, the shorter wavelength portion of the wing (centered at $``$10.8 $`\mu `$m) disappears first, followed by the longer wavelength portion (at $``$11.0 $`\mu `$m).
We have extracted the flux from these two components (Fig. 4), which we describe as the “blue outliers” to the 11.2 $`\mu `$m feature, by averaging the profile of the 11.2 $`\mu `$m feature 8″ south of SVS 3 (and beyond), normalizing this mean profile to each row, and subtracting it. The 10.8 $`\mu `$m outlier is summed from 10.6 to 10.9 $`\mu `$m and the 11.0 $`\mu `$m outlier is summed from 10.9 to 11.2 $`\mu `$m. While the 10.8 $`\mu `$m outlier goes to zero 8″ from SVS 3, the 11.0 $`\mu `$m outlier still makes a contribution, which we crudely estimate to be 0.91$`\times `$10⁻¹⁶ W m⁻² arcsec⁻² at this position by fitting a gaussian to the blue edge of the main band at 11.2 $`\mu `$m. Both outliers increase in strength by a factor of $``$2 close to SVS 3 despite the fact that the stronger features at 8.6, 11.2, and 12.7 $`\mu `$m are at a minimum in this region (Fig. 2).
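To make the template-subtraction step concrete, a minimal sketch follows (Python; the array layout, the normalization over the main band, and all names are illustrative assumptions of ours, not the actual reduction pipeline):

```python
import numpy as np

def blue_outlier_fluxes(spectra, wave, template_rows):
    """Subtract a scaled mean 11.2 um profile and sum the residual outliers.

    spectra: 2-D array, rows = slit positions, columns = wavelength bins
    wave:    1-D wavelength grid (microns) matching the columns
    template_rows: indices of rows 8 arcsec south of SVS 3 and beyond
    """
    template = spectra[template_rows].mean(axis=0)   # mean 11.2 um profile
    core = (wave >= 11.1) & (wave <= 11.4)           # band used to normalize (our choice)
    b108 = (wave >= 10.6) & (wave < 10.9)            # 10.8 um outlier window
    b110 = (wave >= 10.9) & (wave < 11.2)            # 11.0 um outlier window
    fluxes = []
    for row in spectra:
        scale = row[core].sum() / template[core].sum()
        resid = row - scale * template               # template-subtracted row
        fluxes.append((resid[b108].sum(), resid[b110].sum()))
    return np.array(fluxes)
```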
Our observations of NGC 1333 suggest that the 12.7 $`\mu `$m feature also develops a blue wing close to the central source. Because of the poor signal-to-noise in this spectral region, deep atmospheric absorption features (due to water vapor at 12.38, 12.44, 12.52, and 12.56 $`\mu `$m and CO₂ at 12.63 $`\mu `$m), and the complications introduced by possible \[Ne II\] emission at 12.78 $`\mu `$m, we cannot be more conclusive or quantitative with the present data.
An emission feature in the vicinity of 9.8 $`\mu `$m also appears within 1″ of SVS 3 (Fig. 5). This feature occurs on the wing of a very strong telluric absorption feature from O₃, making its apparent wavelength dependent on the quality of the atmospheric correction and difficult to determine accurately. However, the feature in our spectrum cannot arise entirely from a poor telluric correction, or it would appear along the entire length of the slit and not just near SVS 3. The 10 $`\mu `$m feature also appears in the on-source spectrum of SVS 3 by Joblin et al. (1996), but without comment and only at the 1-$`\sigma `$ level. The feature has also appeared (faintly, and without comment) in spectra obtained from the Infrared Space Observatory (Beintema et al. 1996) and the Infrared Telescope in Space (Yamamura et al. 1996). Both of these telescopes are above all atmospheric ozone. This feature probably does not arise from silicate dust, because silicates would produce a much broader emission band.
## 3 Discussion
A component in the vicinity of 11.0 $`\mu `$m has appeared before in the spectra of several PAH sources, most strongly in TY CrA (at $``$11.05 $`\mu `$m; Roche, Aitken, & Smith 1991), but also in Elias 1 (at $``$11.06 $`\mu `$m; Hanner, Brooke, & Tokunaga 1994), and more weakly in several other sources, including Elias 14 (at $``$10.8 $`\mu `$m; Hanner, Brooke, & Tokunaga 1995) and WL 16 (DeVito & Hayward 1998). The WL 16 data (also obtained with SpectroCam-10 at Palomar) show spatial behavior similar to our NGC 1333 data, with the blue wing on the 11.2 $`\mu `$m feature becoming more pronounced closer to the central source and fading further away (Fig. 3 in DeVito & Hayward 1998).
To determine the nature of the blue outliers and the 10 $`\mu `$m feature, we compare our astronomical data to the database of spectra from the Astrochemistry Laboratory Group at NASA Ames Research Center (Hudgins et al. 1994; Hudgins and Allamandola 1995a, 1995b, 1997, 1998) and theoretical models by Langhoff (1996).
For neutral PAHs, Langhoff (1996) finds that the position of the 11.2 $`\mu `$m feature depends on the geometry of the molecule. Moving from a small molecule like anthracene (three adjacent rings) to tetracene (four rings) to pentacene (five rings), the feature shifts from 11.3 to 10.9 $`\mu `$m. This shift raises the possibility that the blue outliers might result from a change in the composition of the PAH mixture. However, the laboratory data of Hudgins et al. show a much smaller shift in wavelength as a function of molecular size. From anthracene to pentacene, the feature only shifts from 11.3 to 11.1 $`\mu `$m.
The relative strengths of the 11.2 and 12.7 $`\mu `$m bands provide a crude means of probing the size of the PAHs. In larger PAHs, the solo mode (11.2 $`\mu `$m) would dominate, since any ring along a straight edge of the molecule would have only one C–H bond, while the trio mode (12.7 $`\mu `$m) would appear more frequently in smaller PAHs, where a larger fraction of the rings occupy corners and not straight edges. As Fig. 2 shows, the spatial behaviors of both the 11.2 and 12.7 $`\mu `$m features agree with each other within the uncertainties, pointing to closely related sets of carriers and not variations in the size of the PAHs. Consequently, we do not consider it likely that the blue outliers to the 11.2 $`\mu `$m feature result from changes in the size distribution or molecular geometry of the PAHs sampled by the slit.
Figures 6 and 7 compare the laboratory and theoretical spectra of neutral PAHs and PAH cations. In the cations, most of the C–H out-of-plane bending modes have shifted $``$0.4 $`\mu `$m to shorter wavelengths (Fig. 6), providing a straightforward interpretation of the blue outliers seen close to SVS 3. Figure 7 shows that the 10 $`\mu `$m feature seen near SVS 3 may also arise from PAH cations, since PAH cations consistently produce features in this wavelength region while neutral PAHs do not.
Joblin et al. (1996) argued that the fraction of PAH cations decreases further from SVS 3 due to decreasing fluxes of ionizing photons. In our data, the blue outliers to the 11.2 $`\mu `$m feature and the 10 $`\mu `$m feature show just this spatial dependence. The combination of this spatial behavior and the appearance of similar spectral features in laboratory spectra of ionized PAHs leads us to identify these features with PAH cations.
This identification substantially strengthens the case for PAHs as carriers of the UIR bands. The identification of the 3.29 $`\mu `$m band with the aromatic C–H stretch and the bands in the 11$``$13 $`\mu `$m region with out-of-plane C–H bends requires that the carrier consist of aromatic hydrocarbons, but the nature of these aromatic hydrocarbon molecules has remained in doubt (e.g. Sellgren 1994, Tokunaga 1997, Uchida et al. 1998). Energy requirements discussed in the proposal of PAHs as possible carriers of the UIR bands (Leger & Puget 1984; Allamandola et al. 1985) lead to estimates that these molecules must contain $``$40$``$80 carbon atoms. Therefore, they must be polycyclic. But these molecules might exist within a larger matrix of hydrocarbons and other molecules, commonly described as hydrogenated amorphous carbon (HAC; see the recent review by Duley 1993). In order for the molecules to be ionized, they must be free molecules, i.e. they must be separate from any HAC-like matrix. In the face of these combined arguments, individual gas-phase PAHs must be the dominant emitters of the narrow UIR bands.
The authors thank Walt Duley and Craig Smith for helpful discussions. We would also like to express our gratitude to an anonymous referee who returned comments to us in only two weeks. During the preparation of this manuscript, GCS was supported by NSF grant INT-9703665 and graciously hosted by the School of Physics, Australian Defence Force Academy, and the Division of Physics and Electronics Engineering, University of New England.
# Evolving Einstein’s Field Equations with Matter: The “Hydro without Hydro” Test
## Abstract
We include matter sources in Einstein’s field equations and show that our recently proposed 3+1 evolution scheme can stably evolve strong-field solutions. We insert in our code known matter solutions, namely the Oppenheimer-Volkoff solution for a static star and the Oppenheimer-Snyder solution for homogeneous dust sphere collapse to a black hole, and evolve the gravitational field equations. We find that we can evolve stably static, strong-field stars for arbitrarily long times and can follow dust sphere collapse accurately well past black hole formation. These tests are useful diagnostics for fully self-consistent, stable hydrodynamical simulations in 3+1 general relativity. Moreover, they suggest a successive approximation scheme for determining gravitational waveforms from strong-field sources dominated by longitudinal fields, like binary neutron stars: approximate quasi-equilibrium models can serve as sources for the transverse field equations, which can be evolved without having to re-solve the hydrodynamical equations (“hydro without hydro”).
With the advent of gravitational-wave interferometry, the physics of compact objects is entering a particularly exciting phase. The new generation of gravitational-wave detectors, including LIGO, VIRGO, GEO and TAMA, may soon detect gravitational radiation directly for the first time, opening a gravitational-wave window to the Universe and making gravitational-wave astronomy a reality (see, e.g.).
To learn from such observations and to dramatically increase the likelihood of detection, one needs to predict the observed signal theoretically. Among the most promising sources are gravitational waves from the coalescences of black hole and neutron star binaries. Simulating such mergers requires self-consistent numerical solutions to Einstein’s field equations in three spatial dimensions plus time, which is extremely challenging. While several groups, including two “Grand Challenge Alliances” , have launched efforts to simulate compact binary coalescence (see also ), the problem is far from solved.
Many of the numerical codes, including those based on the ADM formulation of Einstein’s equations (after Arnowitt, Deser and Misner) develop instabilities and inevitably crash, even for small amplitude gravitational waves on a flat background (see, e.g.). To avoid this problem, several hyperbolic formulations have been developed, some of which have also been implemented numerically. In a recent paper (hereafter paper I), we have modified a formulation of Shibata and Nakamura, and have shown that our new formulation allows for stable, long-term evolution of gravitational waves. We pointed out its two advantages over many hyperbolic formulations: it requires far fewer equations and it does not require taking derivatives of the original 3+1 equations. The latter may be particularly important for the evolution of matter sources, where matter derivatives could augment numerical error.
In this paper, we put matter sources into the field equations and test the evolution behavior for two known strong-field solutions: the Oppenheimer-Volkoff solution for a static star, and the Oppenheimer-Snyder solution for collapse of a homogeneous dust sphere to a black hole. We do not evolve the matter, but instead insert the known matter solutions into the numerically evolved field equations. This allows us to study hydrodynamical scenarios without re-solving hydrodynamical equations: “hydro without hydro”.
The purpose of this Brief Report and the hydro-without-hydro approach is twofold. First, we demonstrate that our evolution scheme can stably evolve strong-field solutions with matter sources. These calculations for strong longitudinal fields complement the tests for wave solutions (transverse fields) presented in paper I. This may be an important diagnostic for overcoming stability problems in relativistic hydrodynamical calculations (cf. and discussion therein). Second, we demonstrate that we can integrate the field equations with given matter sources reliably. Specifically, we show that furnishing prescribed matter sources that already obey $`\nabla 𝐓=0`$ (where $`𝐓`$ is the matter’s stress-energy tensor) rather than self-consistently evolving the matter together with the fields does not introduce instabilities. This decoupling (“hydro without hydro”) suggests new possibilities for determining gravitational waveforms emitted by, for example, inspiraling neutron star binaries prior to reaching the innermost stable circular orbit by a successive approximation scheme.
In our formulation, we evolve the conformal metric $`\stackrel{~}{\gamma }_{ij}`$, the conformal exponent $`\varphi `$, the extrinsic curvature’s trace $`K`$ and conformal trace-free part $`\stackrel{~}{A}_{ij}`$, and conformal connection functions $`\stackrel{~}{\mathrm{\Gamma }}^i`$. For the sake of brevity, we refer the reader to paper I for all field equations and their numerical implementation.
We first study the evolution of the gravitational fields for the Oppenheimer-Volkoff solution of a relativistic, static star. We examine polytropic stellar models with equation of state $`P=\kappa \rho _0^{1+1/n}`$, focusing on polytropic index $`n=1`$. Here $`P`$ is the pressure and $`\rho _0`$ the rest-mass density; henceforth we set $`G=1=c`$, and we choose non-dimensional units in which $`\kappa =1`$. We present results for a model with central density $`\rho _0^\mathrm{c}=0.2`$, for which the star has mass $`M=0.157`$ and Schwarzschild radius $`R=0.866`$. The small ratio $`R/M=5.5`$ indicates that the star is highly relativistic. The isotropic radius of this star is $`\overline{R}=0.7`$. For comparison, the maximum mass configuration has a central density $`\rho _0^\mathrm{c}=0.319`$ and a mass $`M=0.164`$.
We only evolve the gravitational fields, holding the matter sources to their OV values. We choose zero shift ($`\beta ^i=0`$) and set initial data for the lapse from the “Schwarzschild” lapse, $`\alpha _{\mathrm{OV}}`$. We evolve the lapse using harmonic slicing, which, for zero shift, reduces to $`\partial _t\alpha =\partial _te^{6\varphi }`$ (see, e.g., paper I). We found that fixing the lapse to the exact solution $`\alpha =\alpha _{\mathrm{OV}}`$ introduced instabilities, while integrating with harmonic slicing allowed stable evolutions while achieving the same lapse numerically. Note that the only non-vanishing matter sources appear in the evolution equation for $`K`$ \[cf. Eq. (15) of paper I\]: the conformal splitting explicitly decouples transverse fields from static matter sources.
In Figure 1, we show $`K`$ and $`\varphi `$ for a long-term evolution. We used a $`(32)^3`$ grid and imposed the outer boundaries at $`x,y,z=2`$. We terminated the calculation at $`t=512`$ (corresponding to $`t/M\simeq 3255`$), and found no evidence of an instability. Numerical noise develops during the early part of the evolution, but this noise propagates off the grid, and the evolution settles down into a numerical equilibrium solution.
The numerical noise originates from three sources: finite-difference error, noise from the surface of the star, and error due to imposing outer boundaries at finite distance. We now discuss these three sources in detail.
To check the local finite difference error, we perform a convergence test and evolve the same initial data on grids of $`(16)^3`$, $`(32)^3`$ and $`(64)^3`$ gridpoints, all with the outer boundary at $`x,y,z=3`$. In Figure 2 we show results for $`K`$ at the center from these runs. Note that the analytic solution is $`K=0`$. We use second order accurate finite difference equations, and so expect error to decrease by a factor of four when doubling grid resolution. This behavior is seen at early times ($`t\lesssim 1`$) in Figure 2. There are deviations due to higher order error terms, but these decrease, and the scaled values of $`K`$ converge to the second order error term.
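The expected order can be checked directly from three such runs: for an error term $`Ch^p`$, successive differences of the solutions at spacings $`h`$, $`h/2`$ and $`h/4`$ satisfy $`(f_h-f_{h/2})/(f_{h/2}-f_{h/4})=2^p`$. A minimal sketch (names are ours):

```python
import numpy as np

def convergence_order(f_h, f_h2, f_h4):
    """Estimate the order p from solutions at spacings h, h/2, h/4,
    assuming error ~ C h^p, via (f_h - f_{h/2})/(f_{h/2} - f_{h/4}) = 2^p."""
    return np.log2(np.abs((f_h - f_h2) / (f_h2 - f_h4)))

# e.g. K at the star's center from the 16^3, 32^3 and 64^3 runs at a fixed
# time; a value of p close to 2 confirms second-order convergence.
```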
At later times ($`t\gtrsim 1`$), second order convergence is spoiled by effects from the surface of the star. Note that the local speed of light at the center is $`dr/dt=\alpha /e^{2\varphi }\approx 0.36<1`$. This delays signals from the surface, which otherwise would arrive at $`t=\overline{r}=0.7`$. The code still converges, but no longer to second order. This effect is well known and appears in other simulations (compare, e.g.). Second-order spatial derivatives are not smooth at the star’s surface, so the finite-difference convergence breaks down. We have run other cases with different stellar models (and radii) to show that this breakdown of second order convergence is due to errors originating at the star’s surface.
Next we analyze the effect of the outer boundary (OB). We place the OB at $`x,y,z=1`$, $`2`$ and $`4`$, and run the code on grids of $`(16)^3`$, $`(32)^3`$ and $`(64)^3`$ gridpoints, so that the resolution of the star is constant. The results for $`K`$ at the center are shown in Figure 3. As expected, all graphs agree until the center reaches the domain of dependence of the different OB locations. The run with the OB at 1 starts to deviate from the other two runs at $`t1`$. At $`t2`$ the run with OB at 2 starts to deviate from the run with OB at 4. The slight delay is again caused by the smaller value of the local speed of light toward the center of the star. The maximum error due to the OB (marked by dots in Figure 3) quickly decreases with larger OB location. Since our boundary conditions take into account the first order ($`1/r`$) fall-off of the fields (see paper I), one would expect the error to scale with the square of the OB ratios, and hence with factors of at least four in our simulations. We find that the errors decrease by even slightly larger factors (5.4 and 7.2). For our resolution, the error is dominated by the local finite-differencing error even when the outer boundary is imposed at only a few stellar radii.
Going back to Figure 1, we can now identify the different sources of error in the early part of the evolution. The first peak in $`K`$ around $`t\approx 0.4`$ ($`t/M\approx 2.5`$) (see the panel in Figure 1) is caused by the local finite difference error. The next feature at $`t\approx 2`$ ($`t/M\approx 12`$), with oscillations at a higher frequency, originates from the surface of the star. The largest peak, at $`t\approx 4`$ ($`t/M\approx 24`$), is caused by the outer boundary. Reflections of these errors off the OB reappear at later times, but ultimately propagate off the grid and leave behind a stable numerical equilibrium solution.
We turn now to an analysis of the gravitational fields associated with the collapse of a sphere of dust, Oppenheimer-Snyder collapse. This configuration is highly dynamical — the matter very rapidly collapses to form a black hole. This case tests our ability to evolve into a very strong-field regime.
Again, we do not evolve the matter sources, but instead insert the exact solution for the matter ($`\rho `$, $`S_i`$, and $`S_{ij}`$) into the evolution code at each time step. The analytic solution for Oppenheimer-Snyder collapse is transformed into maximal slicing and isotropic coordinates following . This transformation involves only ordinary differential equations, which can be solved numerically. The lapse and shift corresponding to maximal slicing and isotropic coordinates are also obtained from this transformation and are inserted into the evolution code at each time step. Given the matter sources and the coordinate conditions, we independently evolve $`\varphi `$, $`\stackrel{~}{\gamma }_{ij}`$, $`\stackrel{~}{A}_{ij}`$, and $`\stackrel{~}{\mathrm{\Gamma }}^i`$ with our 3+1 code. Having chosen maximal slicing, we can either set $`K=0`$ or else evolve $`K`$ dynamically and check that it indeed converges to zero.
We present results for a star that collapses from an initial Schwarzschild radius $`R_{\mathrm{star}}=4M`$ (or isotropic radius $`\overline{R}_{\mathrm{star}}=2.94M`$). The star collapses to a black hole and all of the matter has passed inside the event horizon by $`t=12.31M`$. We terminate the evolution at $`t=17.39M`$, the time up to which we have constructed exact data. The matter is so compact at the end ($`\overline{R}_{\mathrm{matter}}0.14M`$) that it is very poorly resolved on our 3-D grid. We impose the outer boundary conditions at $`x=y=z=4M`$ (in isotropic coordinates), so that initially it is quite close to the star’s surface.
In Figure 4, we show the evolution of the conformal exponent $`\varphi `$ at the origin with $`(16)^3`$, $`(32)^3`$, and $`(64)^3`$ gridpoints and compare it with the exact solution. For this run, we evolved $`K`$. Figure 4 shows that the numerical evolution can follow the collapse well past black hole formation. The numerical solution converges to the exact solution (see especially the inner panel of Figure 4). At very late times, the grid resolution becomes increasingly poor, and ultimately convergence is spoiled by higher order finite difference errors. Up to these late times, the ADM and total rest mass are reliably conserved.
As with Oppenheimer-Volkoff stars, we find that non-smoothness of the gravitational fields at the surface spoils the second-order convergence of quantities in the domain of dependence of the surface. Here the effect is much stronger, since the matter density is non-zero all the way to the surface. However, we can lessen this effect and improve the behavior of the evolution by imposing the maximal slicing condition $`K=0`$. This decouples the evolution equation for $`\varphi `$ \[cf. Eq. (14) of paper I\] from the transverse fields and from the Ricci tensor, which contains second derivatives of the fields. This reduces errors in the longitudinal fields that arise from the discontinuous surface, highlighting the advantages of a conformal splitting.
In summary, we find that the system of equations described in paper I accurately evolves gravitational fields associated with matter sources. We can evolve the fields of an Oppenheimer-Volkoff star to extremely late times with harmonic slicing, and we can follow Oppenheimer-Snyder collapse well beyond black hole formation into the very strong-field regime. We used predetermined matter sources, and have decoupled the field evolution from hydrodynamic evolution — hydro without hydro.
Our findings have two important consequences. First, the ability to stably evolve the gravitational fields in the presence of strong-field matter sources is an important step towards constructing fully self-consistent, relativistic hydrodynamical codes. Second, our tests demonstrate that there are no fundamental difficulties evolving the fields with prescribed matter sources obeying $`\nabla 𝐓=0`$, rather than self-consistently evolving the matter and fields together. This hydro-without-hydro approach suggests a possible successive approximation scheme for calculating the gravitational-wave signal emitted by, for example, binary neutron stars. Outside the innermost stable circular orbit, such binaries are dominated by longitudinal fields and change their radial separation on a radiative timescale which is much longer than the orbital timescale. They therefore may be considered in quasi-equilibrium (see, e.g.). Instead of evolving the matter hydrodynamically, we can insert the known quasi-equilibrium binary configuration into the field evolution code to get the transverse wave components approximately. Decreasing the orbital separation (and increasing the binding energy) at the rate found for the outflow of gravitational-wave energy would generate an approximate strong-field wave inspiral pattern. Such a hydro-without-hydro calculation may yield an approximate gravitational waveform from inspiraling neutron stars without having to couple the matter and field integrations.
We thank M. Shibata and S. Teukolsky for useful discussions. Calculations were performed on SGI CRAY Origin2000 computer systems at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. This work was supported by NSF Grant AST 96-18524 and NASA Grant NAG 5-3420 at Illinois.
# STATIC MASS SCALES IN HOT GAUGE THEORIES
## 1 Introduction
Gauge theories in 3d play an important role for high temperature particle physics, since they constitute the Matsubara zero mode sector of finite temperature quantum field theories in the imaginary time formalism. In the framework of dimensional reduction they emerge as effective theories describing all static properties and the equilibrium thermodynamics of the original 4d finite T theory. Hence, any static physical quantity of a finite T theory must have a corresponding quantity in the effective 3d theory, with which it agrees up to a (perturbative) part due to the non-zero Matsubara modes. For example, a static screening length in finite T field theory is defined as the exponential decay of some spatial correlation function. In the 3d theory, the direction of the correlation may be taken to be (euclidean) time, and hence the same quantity appears in the spectrum or some other physical property of the (2+1)d theory. While dimensional reduction is a perturbative procedure, the resulting effective theories for the symmetric phase of the electroweak Standard Model and hot QCD are 3d SU(N) gauge + fundamental and adjoint Higgs models, respectively, in their confining phases and hence entirely non-perturbative.
Here I discuss the physical properties of the 3d SU(2) Higgs models with fundamental and adjoint scalar fields. The scale is set by the dimensionful gauge coupling $`g_3^2=g^2T`$. Further, the physics of both models is fixed by two parameters $`x`$ and $`y`$, which are dimensionless ratios of the scalar coupling and bare mass with the gauge coupling, respectively. For effective high T theories, $`x,y`$ are fixed by dimensional reduction as functions of T and the Higgs mass (fundamental) or of T alone (adjoint). Both models have confinement and Higgs regions in their phase diagram, which are separated by a first order phase transition for small $`x`$, but analytically connected for large $`x`$.
## 2 Mass spectrum
The physical properties of gauge theories are encoded in gauge-invariant $`n`$-point functions. In particular, the mass spectrum is computed from the exponential fall-off of two-point correlation functions
$$\lim _{|x-y|\to \infty }\left\langle \varphi ^{\dagger }(x)\varphi (y)\right\rangle \sim \mathrm{e}^{-M|x-y|},$$
(1)
where $`\varphi `$ generically denotes some gauge-invariant operator with quantum numbers $`J^{PC}`$. The results of lattice calculations of the lowest states of the spectra at various points in the parameter space of our models are displayed in Fig. 1.
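In practice $`M`$ is read off from an effective-mass plateau of the measured correlator. A minimal sketch (ignoring the cosh-like form that arises on a periodic lattice; names are ours):

```python
import numpy as np

def effective_mass(corr):
    """m_eff(t) = ln[C(t)/C(t+1)] for a correlator C(t) ~ exp(-M t);
    a plateau at large t estimates the mass M appearing in eq. (1)."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])
```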
First, consider the electroweak model for small Higgs mass, when confinement and Higgs region are separated by a first order phase transition. In the Higgs region we see the familiar and perturbatively calculable Higgs and W-boson in the $`0^{++}`$ and $`1^{}`$ channels, respectively. There is a large gap to the higher excitations which are scattering states. In the confinement region, in contrast, there is a dense spectrum of bound states in all channels, very much resembling the situation in QCD. Open symbols denote bound states of scalar fields, whereas full symbols represent glueballs. At a more realistic large Higgs mass, the situation in the confinement region (above the phase transition) is repeated, but the picture in the Higgs region has changed to a similarly dense spectrum. No phase transition separates the two regimes and the mass spectrum can be continuously connected. Nevertheless, the two regimes have different dynamics, e.g. no confining gauge string is formed between static sources in the Higgs regime right of the dashed line, whereas the properties of the glueball spectrum are entirely insensitive to variations of the scalar parameters to the left of it. The line marks the center of a rapid but smooth transition between these regimes.
Next, consider the SU(2) adjoint Higgs model, corresponding to dimensionally reduced SU(2) QCD with $`N_f=0`$. Fig. 1 right shows preliminary results for some low lying states at a point corresponding to $`T\approx 5T_c`$. In contrast to the fundamental Higgs model one can also define gauge-invariant operators odd under charge conjugation. Otherwise the situation is completely analogous, with again a repetition of the pure gauge glueball spectrum, denoted by the full symbols, and bound states of adjoint scalars, and similarly little dependence of the gauge part on the scalar parameters.
The properties of the pure gauge sector of the two types of Higgs model in their confinement phases are compared with those of pure gauge theory in Table 1. The masses of the glueballs as well as the string tension in the linear part of the potential agree remarkably well between the three models, demonstrating that the dynamics of the gauge degrees of freedom is almost entirely insensitive to the presence of matter fields.
## 3 Static potential and screening
Another quantity determining the physical properties of confining theories is the potential energy of static colour sources, which is calculated from the exponential decay of large Wilson loops,
$$V(r)=-\lim _{t\to \infty }\frac{1}{t}\mathrm{ln}\left\langle W(r,t)\right\rangle .$$
(2)
As in four dimensions, a string of colour flux connects the sources, both in fundamental and adjoint representation, leading to a potential rising linearly with their separation. For the fundamental potential in pure gauge theory, this linear rise continues to infinity. If fundamental matter fields are present as in the Higgs model, the string breaks at some scale $`r_b`$, when its energy is large enough to produce a pair of scalars. This results in a saturation of the potential at a constant value corresponding to the energy of two static-light mesons which are formed after string breaking, $`V(r\to \infty )=\mathrm{const}.`$ Considering adjoint sources instead, string breaking occurs already in pure gauge theory, because the adjoint string can couple to pair-produced gluons.
The string breaking scale $`r_b`$ is a physical quantity characterizing the range of the confining force. Its size depends on the string tension and the mass of the dynamical particles that have to be produced to break the string. For the fundamental potential in the SU(2) Higgs model it thus depends on the bare scalar mass in the Lagrangian. In a recent lattice simulation $`r_b`$ has been computed for the same parameter values as the light Higgs confining spectrum (cf. Fig. 1) by extracting it from the turnover of the potential as shown in Fig. 2. The continuum extrapolation of those results gives $`r_bg_3^2\approx 8.5`$, i.e. $`r_b^{-1}\approx 0.12g_3^2`$. For comparison, the lightest scalar bound state from Fig. 1 is $`m_S=0.839(15)g_3^2`$, and the lightest glueball $`m_G=1.60(4)g_3^2`$. On the other hand, considering the adjoint potential in pure gauge theory, there is no bare mass in the Lagrangian allowing to tune the mass of the constituents, and hence $`r_b`$ is a purely dynamical quantity of the theory. In this case the continuum result is $`r_bg_3^2=6.50(94)`$, or $`r_bm_G=10.3\pm 1.5`$. In other words, the 3d pure gauge theory contains a mass scale $`r_b^{-1}`$ which is by an order of magnitude smaller than the mass of the lightest physical state.
## 4 The Debye mass
An important concept in the phenomenology of high temperature QCD is the static electric screening mass, or the Debye mass $`m_D`$. Although its leading order contribution is perturbative, it couples to the 3d magnetic sector in next-to-leading order, and hence requires a non-perturbative treatment as well. The Debye mass can be expanded as
$$m_D=m_D^{\text{LO}}+\frac{Ng^2T}{4\pi }\mathrm{ln}\frac{m_D^{\text{LO}}}{g^2T}+c_Ng^2T+𝒪(g^3T),$$
(3)
where $`m_D^{\text{LO}}=(N/3+N_f/6)^{1/2}gT`$ and $`N_f`$ is the number of flavours. The logarithmic part of the $`𝒪(g^2)`$ correction can be extracted perturbatively, but $`c_N`$ and the higher terms are non-perturbative. To allow for a lattice determination, a non-perturbative definition was formulated employing the SU(N) adjoint Higgs model as the dimensionally reduced effective theory. By integrating out the heavy adjoint Higgs this is further reduced to the pure SU(N) theory in 3d. The coefficient $`c_N`$ can then be determined from the exponential fall-off of an adjoint Wilson line $`U_{ab}^{\mathrm{adj}}(x,y)`$ with appropriately chosen adjoint charge operators at the ends, for example
$$G_F(x,y)=\left\langle F_{ij}^a(x)U_{ab}^{\mathrm{adj}}(x,y)F_{kl}^b(y)\right\rangle .$$
(4)
From its measurement in a lattice simulation of 3d pure SU(2) one finds the complete non-perturbative $`O(g^2T)`$ corrections to the Debye mass with high precision to be $`c_2=1.06(4)`$. Comparing this correction with the ($`N_f=0`$) leading term, one finds $`c_2g^2T/m_D^{\text{LO}}=1.3g`$ which is close to one even for couplings smaller than one.
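As a quick numerical illustration, eq. (3) can be evaluated through $`𝒪(g^2)`$ (a sketch; the function name and defaults are ours, with $`c_2=1.06`$ taken from the fit just quoted):

```python
import numpy as np

def debye_mass_over_T(g, N=2, Nf=0, cN=1.06):
    """m_D/T from eq. (3) through O(g^2) for SU(N) with Nf flavours."""
    m_lo = np.sqrt(N / 3.0 + Nf / 6.0) * g        # leading order, in units of T
    return m_lo + N * g**2 / (4 * np.pi) * np.log(m_lo / g**2) + cN * g**2

# The O(g^2) piece alone is c_2 g^2 T ~ 1.3 g m_D^LO, so the correction is
# comparable to the leading term already for couplings near one.
```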
## 5 Summary
Three-dimensional gauge theories exhibit a rich structure of physical mass scales. Although all of them are necessarily $`𝒪(g_3^2=g^2T)`$, the coefficients vary by more than an order of magnitude. For finite temperature field theory this means that non-perturbatively the hierarchy of scales in the purely magnetic sector is larger than that between the electric and the magnetic sector, which are not generally well separated for realistic couplings.
Figure 1: $`q^{(L)}(ϵ)`$ in MKA at $`T\simeq 0.38T_c`$ (bottom) and $`0.14T_c`$ (top) as a function of $`\mathrm{sign}(ϵ)|ϵ|^{0.6}`$ for various system sizes.
Reply to “Comment on Evidence for the droplet picture of spin glasses”
Using Monte Carlo simulations (MCS) and the Migdal-Kadanoff approximation (MKA), Marinari et al. study, in their comment on our paper, the link overlap between two replicas of a three–dimensional Ising spin glass in the presence of a coupling between the replicas. They claim that the results of the MCS indicate replica symmetry breaking (RSB), while those of the MKA are trivial, and that moderate-size lattices display the true low temperature behavior. Here we show that these claims are incorrect, and that the results of both MCS and MKA can be explained within the droplet picture.
The link overlap is defined as $`q^{(L)}(ϵ)=(1/3V)\sum _{ij}\langle \sigma _i\sigma _j\tau _i\tau _j\rangle `$ where the sum is over all nearest-neighbor pairs $`\{ij\}`$, and the brackets denote the thermal and disorder average. $`\sigma `$ and $`\tau `$ denote the spins in the two replicas. The Hamiltonian used for the evaluation of the thermodynamic average is $`H[\sigma ,\tau ]=H_0[\sigma ]+H_0[\tau ]-ϵ\sum _{ij}\sigma _i\sigma _j\tau _i\tau _j`$, where $`H_0`$ is the ordinary spin glass Hamiltonian. For the subsequent discussion, it is useful to write $`q^{(\infty )}(ϵ)`$ in the form $`q^{(\infty )}(ϵ)=q_++A_+|ϵ|^{\lambda _+}`$ for $`ϵ>0`$ and $`q^{(\infty )}(ϵ)=q_-+A_-|ϵ|^{\lambda _-}`$ for $`ϵ<0`$.
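For a single disorder sample and one pair of spin configurations, the instantaneous link overlap is easily evaluated (a sketch; thermal and disorder averaging are performed outside, and the array convention is ours):

```python
import numpy as np

def link_overlap(sigma, tau):
    """Instantaneous q^(L) = (1/3V) sum over nearest-neighbor bonds of
    sigma_i sigma_j tau_i tau_j, using the identity
    sigma_i sigma_j tau_i tau_j = (sigma_i tau_i)(sigma_j tau_j).
    sigma, tau: +-1 arrays on an L^3 lattice with periodic boundaries."""
    prod = sigma * tau
    bonds = sum((prod * np.roll(prod, 1, axis=a)).sum() for a in range(3))
    return bonds / (3.0 * prod.size)
```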
In the mean-field RSB picture, $`q_+>q_-`$, and $`\lambda _+=\lambda _-=1/2`$, and Marinari et al. claim to see a trend towards this discontinuous behavior in their MCS data (Fig. 1 of ). Alternatively, if they assume continuous behavior, they find a value $`\lambda _\pm \approx 0.25`$. These conclusions are based on the assumptions that there are no corrections to the pure power-law behavior, and that $`\lambda _+=\lambda _-`$. However, neither assumption is justified, and the most natural interpretation of Fig. 1 of is $`q_+=q_-`$, and $`\lambda _\pm \simeq 1/2`$.
This result, as well as the results of the MKA, is in fact fully compatible with the droplet picture. Using scaling arguments similar to those in , the value of $`\lambda _-`$ and $`\lambda _+`$ at low temperatures can be derived in the following way: The energy cost of the formation of a spin-flipped “droplet” of radius $`l`$ in one of the replicas is of the order $`l^\theta +ϵl^{d_s}`$, where $`d_s`$ denotes the fractal dimension of the droplet surface, and $`\theta `$ is the scaling dimension for the domain wall energy. For negative $`ϵ`$, droplets of a characteristic size $`l^{*}\sim (1/|ϵ|)^{1/(d_s-\theta )}`$ are formed, since they lower the energy of the system. Since flipping a cluster only affects links on the surface of the cluster, this gives $`q^{(\infty )}(ϵ)\approx q^{(\infty )}(0)-C|ϵ|^{(d-d_s)/(d_s-\theta )}`$. Within the MKA $`d_s=d-1`$ and $`\theta \approx 0.24`$, leading to $`\lambda _-\approx 0.57`$. For a cubic lattice, one has $`\theta \approx 0.2`$, and $`d_s\approx 2.2`$, leading to $`\lambda _-\approx 0.4`$. For positive $`ϵ`$, the leading correction to the link overlap comes from the suppression of the thermal excitation of large droplets and has for low temperatures the form $`q^{(\infty )}(ϵ)-q^{(\infty )}(0)\sim k_BT(ϵ/k_BT)^{(d+\theta -d_s)/d_s}`$, leading to $`\lambda _+=(d+\theta -d_s)/d_s`$. Its value in MKA is $`\lambda _+\approx 0.62`$, very close to $`\lambda _-`$.
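The quoted values follow from simple arithmetic on these exponent formulas (a sketch):

```python
def droplet_exponents(d, d_s, theta):
    """lambda_minus = (d - d_s)/(d_s - theta);
    lambda_plus  = (d + theta - d_s)/d_s."""
    return (d - d_s) / (d_s - theta), (d + theta - d_s) / d_s

print(droplet_exponents(3, 2.0, 0.24))   # MKA:   (0.568..., 0.62)
print(droplet_exponents(3, 2.2, 0.20))   # cubic: (0.4, 0.4545...)
```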
For finite temperatures and small systems, there are corrections to this asymptotic behavior due to finite-size effects which replace the nonanalyticity at $`ϵ=0`$ with a linear behavior for small $`|ϵ|`$, and due to the influence of the critical fixed point, where the leading behavior is linear in $`ϵ`$. As we have argued in , the influence of the critical fixed point changes the apparent value of the low-temperature exponents for the system sizes studied in the MCS and the MKA. The MCS data shown in with an apparent value of 0.5 for $`\lambda _\pm `$ are fully compatible with these predictions of the droplet picture. For the MKA, the apparent exponent at $`0.7T_c`$ is close to 1 for $`L\le 16`$, leading to the “trivial” behavior found in . However, at lower temperatures, for the same small system sizes the above-mentioned nontrivial features predicted by the droplet picture become clearly visible, as shown in Fig. 1.
The apparent system size dependence of the exponents $`\lambda _\pm `$ allows us even to estimate numerically the system sizes needed to see the true low temperature scaling behavior. By iterating the recursion relations for the coupling constants within MKA, we find that these system sizes are of the order $`L\approx 1000`$ at $`T\approx 0.7T_c`$. We expect that similar system sizes would be needed for MCS to see the scaling behavior predicted by the droplet picture.
This work was supported by EPSRC Grants GR/K79307 and GR/L38578.
H. Bokil, A.J. Bray, B. Drossel, M.A. Moore
Department of Physics and Astronomy
University of Manchester
Manchester M13 9PL, U.K.
# Monte Carlo Simulation of Magnetic Systems in the Tsallis Statistics
## 1 Introduction
Many systems seem to be well described by a non-extensive thermostatistics rather than the usual Boltzmann-Gibbs statistics (BGS). When the effective microscopic interactions and microscopic memory are short-ranged and the boundary conditions are non-(multi)fractal, the BGS provides a complete and consistent description of the system. Otherwise, it fails. An alternative approach is the use of the so-called Tsallis statistics (TS). The entropy in the TS is defined as
$$S_q=k\frac{1-\sum _{i=1}^\mathrm{\Omega }p_i^q}{q-1},$$
(1)
with $`\sum _ip_i=1`$, where $`i`$ is a given state with energy $`ϵ_i`$ from $`\mathrm{\Omega }`$ possible states and $`k`$ is a positive constant. The index $`q`$ characterizes the degree of nonextensivity. The limiting case $`q=1`$ recovers the usual BGS entropy definition. The other constraints needed to obtain the thermodynamical averages have been recently discussed by Tsallis, Mendes and Plastino.
Magnetic systems are to be counted among the large variety of systems to which the TS has already been applied. The reason for such an interest is obvious: statistical physicists have spent a lot of time in the past developing tools to understand the critical phenomena presented by this kind of system, and have succeeded in this task. As a natural consequence, one could expect some attempts at generalizing well established approaches developed for extensive systems. The real space renormalization group and the mean field approximation are examples of tools that have been considered in these generalizations. However, it is not surprising that results from these approximations seem to be controversial: even fundamental questions such as how to define and to obtain the correct expression for the thermodynamical averages have been revisited very recently.
The Monte Carlo method is a powerful tool frequently used in the BGS framework in order to solve ambiguities of this sort. However, within the TS framework, Monte Carlo simulations of magnetic systems have not been exploited so far because of technical difficulties that we are going to address in this work. One issue is how to define the acceptance probabilities which lead to the correct distribution of probabilities. A first alternative was presented in the application of the generalized simulated annealing for the Traveling Salesman Problem, where the acceptance probability is a simple generalization of the Metropolis algorithm for $`q\ne 1`$. This alternative was motivated by the definition of the thermodynamical averages in use at that time, which imposed an undesirable dependence on the definition of the zero of the energy scale. The use of the above mentioned alternative for the acceptance probabilities conveniently removes this dependence. A second alternative was proposed by Andricioaei and Straub (AS) a couple of years after the first one. A new expression for the acceptance probabilities was obtained from the detailed balance condition (a sufficient but not necessary condition for thermodynamical equilibrium). This new alternative cleverly circumvents the ambiguity on the definition of the lowest level of energy.
The second reason why, in our opinion, Monte Carlo simulations are not frequently used within the TS is the fact that all the computational effort involved in a BGS simulation has to be spent again for each value of the parameter $`q`$. Since $`q`$ is a continuous variable, computer simulations are much more time-consuming in the TS than in the BGS, if one wants to investigate the $`q`$ dependence of the results. That is why it has been difficult, for instance, to answer the question about which acceptance probabilities to use in a Monte Carlo simulation of magnetic systems. One of the goals of the present work is to show that the recently proposed Broad Histogram Method (BHM) is the ideal tool for Monte Carlo simulations within the TS framework. Because the $`q`$-independent density of states $`g(E)`$ is, in the BHM, directly obtained from some measured microscopic quantities, we are able to obtain any thermodynamic observable for all values of $`q`$ and $`T`$ from only one computer run. This fact makes BHM simulations in the TS, for all values of $`q`$, as fast as in the BGS.
Briefly, in this paper we are suggesting the BHM as the ideal tool for computer studies on the TS. We show it through a simulation of the two-dimensional Ising model with short-range interactions. We are aware of the fact that the model we are going to study is an extensive one, therefore well described by the BGS. Among the reasons why we decided to study this system are:
* this model can be very easily simulated with great efficiency;
* the exact solution for the density of states of finite systems is known in the limit $`q=1`$; moreover, the BHM is able to reproduce this exact solution with great accuracy;
* previous results for this model using other approximation methods are controversial and/or inconclusive; our simulations could shed some light on this ongoing discussion;
* we could easily and reliably show which choice for the acceptance probabilities reproduces the Tsallis distribution of statistical weights;
* this simple system is here used as a testing ground for the methods we propose; building on this first step allows us to use the same approach to study other, more complex non-extensive systems such as the Ising model with long-range interactions.
In summary, our choice of a well known system is most convenient since we are dealing with two brand-new recipes: the TS with normalized q-expectation values and the BHM.
This paper is organized as follows: in the next section, we review the TS with normalized q-expectation values. We also include a discussion about the stability of the solutions for the free energy in this new formalism. In section 3, we review the Broad Histogram Monte Carlo Method and present its implementation for the TS. This is followed by a presentation of our results and conclusions.
## 2 The Tsallis Statistics with “normalized q-expectation values”
Tsallis, Mendes and Plastino have recently discussed the role of constraints within TS. In that work, they study three different alternatives for the internal energy constraint. The first two choices correspond to the ones which have been applied to many different systems in the last years. They are: (ia) $`\sum _ip_iϵ_i=U`$ and (ib) $`\sum _ip_i^qϵ_i=U_q`$. However, both constraints present difficulties. A third choice for the internal energy constraint is defined as
$$U_q=\frac{\sum _{i=1}^\mathrm{\Omega }p_i^qϵ_i}{\sum _{i=1}^\mathrm{\Omega }p_i^q},$$
(2)
where $`q`$ is the degree of non-extensivity, also present in the definition of entropy (eq. (1)). Each constraint (ia),(ib) and eq.(2) determines a different set of probabilities $`p_i`$ for each state with energy $`ϵ_i`$. The extremization of the generalized entropy (1), under constraint (2) gives us an implicit equation for the probabilities $`p_i`$:
$$p_i=\left[1-\frac{(1-q)\beta (ϵ_i-U_q)}{\sum _{j=1}^\mathrm{\Omega }p_j^q}\right]^{\frac{1}{1-q}}/Z_q$$
(3)
with
$$Z_q(\beta )\equiv \sum _{i=1}^\mathrm{\Omega }\left[1-\frac{(1-q)\beta (ϵ_i-U_q)}{\sum _{j=1}^\mathrm{\Omega }(p_j)^q}\right]^{\frac{1}{1-q}}$$
(4)
The normalized q-expectation value of an observable is therefore defined as
$$O_q\equiv \langle O_i\rangle _q\equiv \frac{\sum _{i=1}^\mathrm{\Omega }p_i^qO_i}{\sum _{i=1}^\mathrm{\Omega }p_i^q}$$
(5)
where $`O`$ is any observable which commutes with the Hamiltonian - otherwise we should make use of the density operator $`\rho `$. We will refer to this reformulation of the TS as “with normalized q-expectation values”. A very important consequence of this new definition of constraints is that the probabilities do not depend on the choice of the zero of energy.
In order to solve eq. (3), Tsallis et al. suggest two different approaches, namely the Iterative Procedure and the $`\beta \to \beta ^{\prime }`$ transformation. In the iterative procedure, we start with an initial set of probabilities and iterate them self-consistently until the desired precision is reached. In the $`\beta \to \beta ^{\prime }`$ transformation the set of equations above is transformed to:
$$p_i=\left[1-(1-q)\beta ^{\prime }ϵ_i\right]^{\frac{1}{1-q}}/Z_q^{\prime }$$
(6)
$$Z_q^{\prime }\equiv \sum _{j=1}^\mathrm{\Omega }\left[1-(1-q)\beta ^{\prime }ϵ_j\right]^{\frac{1}{1-q}}$$
(7)
with
$$\beta ^{\prime }(\beta )\equiv \frac{\beta }{(1-q)\beta U_q+\sum _{j=1}^\mathrm{\Omega }p_j^q}.$$
(8)
This set of equations is similar to the one that is obtained using constraint (ib), except for its dependence on the renormalized temperature, given by eq. (8).
In order to obtain $`p_i`$, we go through the following steps:
1. Compute the quantities $`y_i=1-(1-q)\beta ^{\prime }ϵ_i`$ for $`i=1`$ to $`\mathrm{\Omega }`$;
2. If $`y_i<0`$ then set $`y_i=0`$;
3. Compute $`Z_q^{\prime }=\sum _{i=1}^\mathrm{\Omega }y_i^{1/(1-q)}`$;
4. Compute $`p_i(\beta ^{\prime })=y_i^{1/(1-q)}/Z_q^{\prime }`$;
5. Obtain $`U_q(\beta ^{\prime })`$ and any other thermodynamical quantity using eq. (5);
6. Obtain $`\beta (\beta ^{\prime })`$ from equation (8).
This recipe allows the determination of $`p_i(\beta )`$ for all $`\beta (\beta ^{\prime })`$ and consequently $`U_q(\beta )`$ (and any other observable). The second step in the above procedure is the well-known cutoff associated with “vanishing probabilities”. This cutoff is required only for $`q<1`$. Because the cutoff is applied before the actual computation of the probabilities, the norm constraint is still respected.
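In code, the recipe amounts to a few array operations. The following is a minimal sketch (Python; the function name and the level/degeneracy layout are our assumptions; setting all degeneracies to one reproduces a plain sum over states):

```python
import numpy as np

def tsallis_beta_prime_step(energies, degeneracies, q, beta_prime):
    """One pass of the beta -> beta' recipe for a discrete spectrum.

    Returns the physical beta, U_q and the per-state probabilities p(E).
    """
    one_minus_q = 1.0 - q
    # steps 1-2: y_i = 1 - (1-q) beta' eps_i, with the q < 1 cutoff y_i >= 0
    y = 1.0 - one_minus_q * beta_prime * energies
    if q < 1.0:
        y = np.maximum(y, 0.0)
    # step 3: Z'_q = sum over states of y^{1/(1-q)}
    boltz = y ** (1.0 / one_minus_q)
    Zq = np.sum(degeneracies * boltz)
    # step 4: per-state probabilities p_i(beta')
    p = boltz / Zq
    # step 5: normalized q-expectation of the energy, eq. (2)
    pq = degeneracies * p ** q              # sum_i p_i^q, level by level
    Uq = np.sum(pq * energies) / np.sum(pq)
    # step 6: invert eq. (8) to map beta' back to the physical beta
    beta = beta_prime * np.sum(pq) / (1.0 - one_minus_q * beta_prime * Uq)
    return beta, Uq, p
```

Sweeping `beta_prime` over a grid and recording the returned pairs $`(\beta ,U_q)`$ traces out $`U_q(\beta )`$; the multivaluedness of this mapping is the subject of the next paragraph.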
It has been shown that both recipes, the iterative method and the $`\beta \to \beta ^{\prime }`$ transformation, demand a careful analysis before their application. The free energy obtained from the iterative procedure presents a non-physical discontinuity whereas the free energy from the $`\beta \to \beta ^{\prime }`$ transformation has loops. To get rid of these pathologies, one has to make explicit use of the minimization condition on the free energy, whenever an ambiguity appears. This procedure generates the correct internal energy and temperature dependency and restores the proper behaviour of the thermodynamic observables. Following the suggestion of ref., we use in this paper the $`\beta \to \beta ^{\prime }`$ transformation with the proper corrections.
## 3 The Broad Histogram Monte Carlo Method
The approach that we are going to discuss in this section, the Broad Histogram Method (BHM), is one of the many attempts at doing very efficient simulations. In traditional simulations, we need a new run for each value of the temperature. However, some different approaches have been proposed in which we compute some quantities at a given temperature and reweight them for a different temperature (see, for instance, ref. and references therein). One of these approaches is the histogram method, first introduced by Salzburg and popularized by Ferrenberg and Swendsen. However, it has been shown that the histogram method presents some limitations, the most important concerning the range of temperatures for which just one run is sufficient. The BHM enables us to directly calculate the energy spectrum $`g(E)`$, without any need for a particular choice of the thermostatistics to be used.
In the BHM the energy degeneracy is calculated through the steps:
Step 1: Choice of a micro-reversible protocol of allowed movements in the state space of the system such that changing from an $`X_{\mathrm{old}}`$ to an $`X_{\mathrm{new}}`$ configuration is allowed if, and only if, the reverse change is also allowed (the protocol must be micro-reversible):
$$\underset{\mathrm{allowed}}{\underbrace{X_{\mathrm{old}}\rightarrow X_{\mathrm{new}}}}\iff \underset{\mathrm{allowed}}{\underbrace{X_{\mathrm{new}}\rightarrow X_{\mathrm{old}}}};$$
(9)
it is important to note that these movements are virtual, since they are not actually performed.
Step 2: Choice of a fixed amount of energy change $`\mathrm{\Delta }E_{\mathrm{fix}}`$ and computation of $`N_{\mathrm{up}}(X)`$ ($`N_{\mathrm{dn}}(X)`$) for the configuration X, defined as the number of allowed movements that would increase (decrease) the energy of the configuration by $`\mathrm{\Delta }E_{\mathrm{fix}}`$. Then $`N_{\mathrm{up}}(E)`$ ($`N_{\mathrm{dn}}(E)`$) is the microcanonical average of $`N_{\mathrm{up}}(X)`$ ($`N_{\mathrm{dn}}(X)`$) at energy $`E`$;
Step 3: Since the total number of possible movements from level $`E+\mathrm{\Delta }E_{\mathrm{fix}}`$ to level $`E`$ is equal to the total number of possible movements from level $`E`$ to level $`E+\mathrm{\Delta }E_{\mathrm{fix}}`$, we can write down the equation
$$g(E)N_{\mathrm{up}}(E)=g(E+\mathrm{\Delta }E_{\mathrm{fix}})N_{\mathrm{dn}}(E+\mathrm{\Delta }E_{\mathrm{fix}}).$$
(10)
This relation is exact for any statistical model or any energy spectrum . It can be rewritten as
$$\mathrm{ln}g(E+\mathrm{\Delta }E_{\mathrm{fix}})-\mathrm{ln}g(E)=\mathrm{ln}\frac{N_{\mathrm{up}}(E)}{N_{\mathrm{dn}}(E+\mathrm{\Delta }E_{\mathrm{fix}})}$$
(11)
If we choose $`\mathrm{\Delta }E_{\mathrm{fix}}\ll E`$, the above equation can be approximated by
$$\frac{\partial \mathrm{ln}g(E)}{\partial E}=\frac{1}{\mathrm{\Delta }E_{\mathrm{fix}}}\mathrm{ln}\frac{N_{\mathrm{up}}(E)}{N_{\mathrm{dn}}(E)}$$
(12)
This equation can be easily solved for $`g(E)`$. Once this quantity is known, the expected value of some observable $`O`$ can be calculated by
$$\langle O\rangle _{q,T}=\frac{\sum _E\langle O\rangle _Eg(E)[p(E)]^q}{\sum _Eg(E)[p(E)]^q}$$
(13)
This method (in the q=1 BGS) was first applied to systems with discrete degrees of freedom. Recently it has been extended to continuous systems, such as the XY Model . To our knowledge, this is the first time the method is being applied to a different statistics. Besides the more accurate and faster results in comparison to traditional methods, the BHM is even more efficient in this particular case because eq. (13) is the only quantity to be recalculated for each new value of $`q`$.
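As an illustration of how eqs. (11) and (13) combine, the following sketch accumulates $`\mathrm{ln}g(E)`$ and evaluates a normalized q-expectation value (Python; it reuses the `tsallis_beta_prime_step` sketch from section 2, and the array layout is our assumption):

```python
import numpy as np

def bhm_log_g(n_up, n_dn):
    """Accumulate ln g(E) along the energy grid using eq. (11).

    n_up[k], n_dn[k] are the microcanonical averages at energy E_k on a
    grid of spacing Delta E_fix. The additive constant must be fixed
    afterwards (e.g. sum_E g(E) = 2^N for N Ising spins), because for
    q != 1 the overall normalization does not cancel out of the statistics.
    """
    log_g = np.zeros(len(n_up))
    for k in range(len(n_up) - 1):
        log_g[k + 1] = log_g[k] + np.log(n_up[k] / n_dn[k + 1])
    return log_g

def q_average(obs, energies, log_g, q, beta_prime):
    """Eq. (13) for one (q, beta') pair; obs holds the microcanonical
    averages <O>_E."""
    g = np.exp(log_g)                        # absolute degeneracies assumed
    beta, u_q, p = tsallis_beta_prime_step(energies, g, q, beta_prime)
    w = g * p ** q
    return beta, np.sum(w * obs) / np.sum(w)
```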
## 4 Implementation of the BHM on the 2D Ising Model with first neighbour interactions
To clarify the application of our ideas to magnetic systems, we will use the square lattice ferromagnetic Ising Model with first neighbour interaction. The Hamiltonian for this system is given by
$$H/J=-\sum _{ij}\sigma _i\sigma _j$$
(14)
where $`\sigma _i=\pm 1`$, $`J`$ is a positive constant. The sum is performed over all pairs of first neighbours in a square lattice of size $`N=L\times L`$. For an efficient implementation, we rewrite the Hamiltonian as
$$H_m/J=\sum _{ij}\zeta _i\oplus \zeta _j=\frac{-\sum _{ij}\sigma _i\sigma _j+2N}{2}$$
(15)
where $`\zeta _i=0`$ or $`1`$ and $`\oplus `$ represents the exclusive OR operation, where $`0\oplus 0=1\oplus 1=0`$ and $`1\oplus 0=0\oplus 1=1`$. The critical temperature for $`q=1`$ in the thermodynamical limit for this renormalized Hamiltonian is $`T_c=[2\mathrm{arctanh}(\sqrt{2}-1)]^{-1}=1.13459\dots `$.
We choose a single spin flip protocol of movements to obtain $`N_{\mathrm{up}}(E)`$ and $`N_{\mathrm{dn}}(E)`$. As proposed in and , an unbiased random walk is then performed on the energy axis in order to visit all values on the energy spectrum.
The number $`N_{\mathrm{up}}(E)`$ ($`N_{\mathrm{dn}}(E)`$) of movements that increase (decrease) the current value of energy $`E`$ by $`\mathrm{\Delta }E_{\mathrm{fix}}`$ is used to calculate $`g(E)`$. The magnetization $`M(E)`$ is also stored for each lattice size. Here we choose $`\mathrm{\Delta }E_{\mathrm{fix}}=4`$ out of the possible values $`|\mathrm{\Delta }E|=0,2,4`$. The histogram is obtained from 6,400,000 samples (distributed in the energy axis) for L=30,50,70,100. Obtaining the histograms for $`L=30`$ takes 20 minutes of CPU time on a DEC Alpha 400. For $`L=100`$, the time increases to 220 minutes.
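For this protocol the counting step is easily vectorized; a minimal sketch (the array layout and function name are ours):

```python
import numpy as np

def count_moves(zeta, de_fix=4):
    """N_up(X) and N_dn(X) for single-spin-flip moves under eq. (15).

    zeta: L x L array of 0/1 variables with periodic boundaries. Flipping
    a site toggles its four bonds, so the energy change is 4 - 2*n_b,
    where n_b is the number of broken bonds (neighbours differing from
    the site); hence Delta E is in {-4, -2, 0, 2, 4}.
    """
    n_b = sum(np.abs(zeta - np.roll(zeta, s, axis=a))
              for a in (0, 1) for s in (1, -1))
    dE = 4 - 2 * n_b
    return np.count_nonzero(dE == de_fix), np.count_nonzero(dE == -de_fix)
```

During the random walk these per-configuration counts are accumulated, energy bin by energy bin, into the microcanonical averages $`N_{\mathrm{up}}(E)`$ and $`N_{\mathrm{dn}}(E)`$.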
The next step is to obtain the set of probabilities using the $`\beta \to \beta ^{\prime }`$ procedure. After this step, we can use eq. (13) to obtain the internal energy $`U_q(T)`$ and the magnetization $`M_q(T)`$. The free energy $`F_q(T)`$ is also calculated using
$$F_q\equiv U_q-TS_q=U_q-\frac{1}{\beta }\frac{Z_q^{1-q}-1}{1-q}$$
(16)
Instead of using eq. (4), it is more efficient to use the relation
$$Z_q^{1-q}=\sum _{i=1}^\mathrm{\Omega }p_i^q=\sum _Eg(E)[p(E)]^q$$
The CPU time for the determination of the values of any observable in the whole range of temperatures is independent of the lattice size. Typically, it takes 30s on a DEC Alpha 400 for 20000 values of temperature, for each value of q.
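As an illustration, once $`g(E)`$ and $`p(E)`$ are known the free energy of eq. (16) reduces to a few array operations (a sketch in $`k=1`$ units; note that, unlike eq. (13), the entropy does not cancel the overall normalization of $`g(E)`$, which for the Ising model can be fixed, e.g., by requiring a ground-state degeneracy of 2):

```python
import numpy as np

def free_energy(energies, g, p, q, beta):
    """Eq. (16), using Z_q^{1-q} = sum_E g(E) [p(E)]^q as in the text;
    g must be absolutely normalized."""
    pq = g * p ** q                        # sum_i p_i^q, level by level
    z_pow = np.sum(pq)                     # Z_q^{1-q}
    u_q = np.sum(pq * energies) / z_pow    # eq. (2)
    s_q = (z_pow - 1.0) / (1.0 - q)        # generalized entropy, eq. (1)
    return u_q - s_q / beta
```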
For the sake of comparison, we have implemented multispin versions of the Ising Model with the AS acceptance probability
$$p=\frac{1}{2}[1-\mathrm{tanh}(\beta ^{\prime }\mathrm{\Delta }\overline{E}/2)]$$
(17)
where $`\overline{E}=[1/\beta ^{\prime }(q-1)]\mathrm{ln}[1-(1-q)\beta ^{\prime }E]`$, and with the TSP acceptance probability
$$p=(1-(1-q)\beta ^{\prime }\mathrm{\Delta }E)^{\frac{1}{1-q}}.$$
(18)
Notice that the equations above are written as functions of $`\beta ^{\prime }`$ instead of $`\beta `$. The reason is that they were proposed before the publication of the solution for the constraints problem in the TS discussed in section 2.
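For reference, both acceptance rules are one-liners (a sketch; capping at one and rejecting moves with a non-positive argument are our reading of the Metropolis-style prescription, not spelled out in eqs. (17) and (18); the argument of the logarithm must stay positive for eq. (17) to apply):

```python
import numpy as np

def p_accept_as(e_old, e_new, q, beta_p):
    """Eq. (17), using Ebar(E) as defined below it."""
    def ebar(e):
        return np.log(1.0 - (1.0 - q) * beta_p * e) / (beta_p * (q - 1.0))
    return 0.5 * (1.0 - np.tanh(0.5 * beta_p * (ebar(e_new) - ebar(e_old))))

def p_accept_tsp(dE, q, beta_p):
    """Eq. (18), capped at 1 so downhill moves are always accepted."""
    y = 1.0 - (1.0 - q) * beta_p * dE
    return min(1.0, y ** (1.0 / (1.0 - q))) if y > 0.0 else 0.0
```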
For an $`L=30`$ lattice, a simulation run, using the traditional method with the same number of Monte Carlo steps as used in the BHM simulation, took 60s of CPU time for a single pair of values $`(q,T)`$. This should be compared to the 1200s of BHM for all values of q and the whole range of temperatures.
## 5 Results
We will now discuss the results obtained from the implementation described in the previous section. We begin with a comparison between the results obtained through the use of the two already mentioned techniques: the BHM and the traditional multispin simulation with both the AS and the TSP probabilities. The comparison must be made in terms of $`T^{\prime }`$, because the AS and TSP probabilities were derived before the publication of Ref. , as pointed out above. In Fig. 1 we show the results for the magnetization and the internal energy as functions of $`T^{\prime }`$.
The remarkable agreement between the results coming from both the BHM and the AS techniques contrasts with those coming from the TSP probabilities. Therefore, the latter should be discarded as an approach to the simulation of physical systems in the TS framework. In Fig. 1 the values for $`q`$ were chosen to be close to one.
We are going to discuss separately the $`q<1`$ and $`q>1`$ regimes, since the system behaves differently in these regions.
### 5.1 The $`q<1`$ regime
Fig. 2 shows the internal energy as a function of $`T`$, for some values of $`q`$ very close to one. The first surprising result is that a reentrant region develops as soon as $`q`$ gets smaller than $`1`$. This reentrant behaviour is even more noticeable as the lattice size increases, for the same value of $`q`$. This behaviour has already been reported for a two-level system . We are going to use the same recipe proposed therein to deal with this pathology.
The reentrant behaviour is also present in the free energy curve, as we can see in Fig. 3. To reconstruct the correct curve for the free energy and, as a consequence, for any thermodynamical macroscopic quantity, we have to choose a criterion to discard two of the three possible states in the loop (see the inset of Fig. 3). We choose to consider only the state that corresponds to the lowest value of the free energy. The correct curve for the internal energy is also displayed in Fig. 3, and is another illustration supporting the statement that TS with renormalized q-expectation values is thermodynamically stable. A deeper discussion of the reentrant behaviour and of the technique developed to restore uniqueness is being published elsewhere .
When uniqueness is restored by removing the loop, the resulting free energy has a discontinuous first derivative with respect to temperature at a point which can be identified as the transition temperature. This means that the entropy has a discontinuity at this temperature. In addition, the magnetization, as shown in Fig. 4, also presents a discontinuity at this point, after the reentrancies have been removed. These discontinuities become more noticeable as we consider ever smaller values of $`q`$ (see Fig. 4) and/or bigger lattices (see Fig. 5). All these results were obtained for finite lattices. To determine the transition temperature in the thermodynamic limit $`L\to \mathrm{\infty }`$, we plot the transition temperature for finite lattices as a function of the inverse size, $`1/L`$, and take the limit $`1/L\to 0`$.
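The extrapolation step itself is elementary; a minimal sketch (function and variable names are ours) is:

```python
import numpy as np

def extrapolate_Tc(L_values, Tc_values):
    """Linear fit of the finite-lattice transition temperatures T_c(L)
    against 1/L; the intercept is the estimate for the limit 1/L -> 0."""
    inv_L = 1.0 / np.asarray(L_values, dtype=float)
    slope, intercept = np.polyfit(inv_L, np.asarray(Tc_values, dtype=float), 1)
    return intercept
```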
Fig. 6 shows an attempt at finite-size scaling for different values of $`q`$. For $`q<1`$ and $`L\to \mathrm{\infty }`$, the critical temperature vanishes extremely fast. Our results indicate that there is no phase transition for $`q<1`$ in the thermodynamic limit.
### 5.2 The $`q>1`$ regime
Basically, the same approach we used for $`q<1`$ is also used here. However, we have found that the 2D Ising model behaves quite differently in the two regimes (and also differently from the $`q=1`$ case). Fig. 7 shows the internal energy and magnetization as functions of $`T`$ for some values of $`q`$ on an $`L=70`$ lattice. From the internal energy curve, we promptly find that the discontinuity in its derivative (the specific heat) at $`T=T_c`$ is no longer present; therefore there is no evidence of a phase transition. This is also clear from the behaviour of the magnetization as a function of temperature. Fig. 8 supports this conclusion; the magnetization gets smoother with increasing values of $`q`$ or of the lattice size.
A careful examination of Fig. 7 and Fig. 8 reveals discontinuous changes in the derivative of the curves at two points (these changes are more easily seen for the largest values of $`q`$ and $`L`$). These discontinuous derivatives are related to the transformation $`T\to T^{\prime }`$. Fig. 9 presents the relation between $`T`$ and $`T^{\prime }`$ for the same values of $`L`$ used in Fig. 8 and for a range of values of $`q`$ slightly larger than that used in Fig. 7. The curves display a region of abrupt change (but not a discontinuity) which gets larger as $`q`$ is increased, for fixed $`L`$. The values of $`T`$ that limit this region are also those that lead to a discontinuous specific heat, and they appear to approach zero and $`\mathrm{\infty }`$ as $`q`$ or $`L`$ increases. In the thermodynamic limit, the magnetization is always positive (therefore, the system is always in the ordered state, except in the limit of infinite temperature) and the derivative of the specific heat is always positive. Therefore, we argue that there is no phase transition for the 2D Ising model for $`q>1`$.
## 6 Conclusion
We studied the simulation of magnetic systems in the Tsallis Statistics using the Broad Histogram Method. It was shown that this method is very efficient, since all thermodynamic observables of interest can be calculated for a new value of the parameter $`q`$ without the need for a new computer run. The square lattice Ising model with nearest-neighbour interactions was chosen as an example to test the method. All previous work on the nearest-neighbour Ising model suggests that there is a phase transition for each value of $`q\ne 1`$. However, our results show that there is no transition in this model for any value of $`q\ne 1`$. Only the $`q=1`$ system presents the usual second-order phase transition.
A further step along these lines would be the simulation of spin models with long-range interactions. We believe that these intrinsically non-extensive systems can display a richer behaviour, within the Tsallis Statistics framework, than the model studied in this work. Indeed, previous studies of magnetic systems in the Tsallis Statistics have already suggested that this is the case . However, a powerful simulational tool such as the Broad Histogram Monte Carlo Method used in this work was lacking and has only recently become available. The demonstration of its usefulness that we present here should bring considerable advance in the understanding of the behaviour of magnetic systems in a non-extensive regime.
## Acknowledgments
The authors are indebted to Professor C. Tsallis for extensive ($`q=1`$!) and enlightening discussions. This work was partially supported by CNPq and CAPES (Brazilian Agencies).
# Studying Short-Range Dynamics in Few-Body Systems
## 1 The Case for Few-Body Systems
Intermediate-energy electron scattering is the tool most suited to mapping the properties of individual nucleons in a nuclear medium . For heavier systems, theoretical calculations must use techniques which build the anticorrelation between nucleon locations, due to the short-range repulsion, into the strength of the interaction. The advantage of using a few-body system for the target is that the $`NN`$ interaction is directly used for the computation of the wave functions. For $`A=3`$ systems, Faddeev techniques allow a direct computation of the spectral function , and for heavier light nuclei, the technique of integral transforms can be used to construct the spectral function. The spectral function $`S(E_m,p_m)`$ is closely related to the electron-scattering cross section and provides a probability distribution of nuclear protons versus their momentum $`p_m`$ and binding energy $`E_m`$.
## 2 Results from Inclusive Electron Scattering
At intermediate energies and quasifree kinematics, many inclusive $`(e,e^{\prime })`$ experiments have been performed. Plane-wave reasoning suggests that at large $`Q^2=-q^2`$, the $`(e,e^{\prime })`$ cross section should become a function of only two factors. The first is the incoherent cross section to scatter electrons from all the nucleons in the nucleus, and the second is a partial integral $`F(y)`$ over the proton spectral function . $`y`$ is essentially the component of the struck proton’s momentum along the $`(e,e^{\prime })`$ momentum transfer $`\stackrel{\to }{q}`$; it is also closely related to the deviation of $`\omega =E_e-E_e^{\prime }`$ from the quasielastic value $`\omega \approx |\stackrel{\to }{q}|^2/2m_N`$.
Figure 1 shows the most recent $`(e,e^{\prime })`$ data from Jefferson Lab . $`F(y)`$ is constructed as
$$F(y)=\frac{d^2\sigma }{d\mathrm{\Omega }d\omega }[Z\sigma _{ep}+N\sigma _{en}]^{-1}\frac{q}{(M^2+(y+q)^2)^{\frac{1}{2}}}$$
(1)
For $`y<0`$ (low $`\omega `$ relative to the quasielastic peak), data for different kinematics are in excellent agreement, indicating that the effects beyond PWIA are small.
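Numerically, eq. (1) is a pointwise rescaling of the measured cross section; a minimal sketch (argument names are ours, and consistent units are assumed) reads:

```python
import numpy as np

def F_of_y(d2sigma, Z, N, sigma_ep, sigma_en, q, y, M):
    """Scaling function of eq. (1): divide out the single-nucleon cross
    sections and the kinematic factor q / sqrt(M^2 + (y+q)^2).
    All inputs may be NumPy arrays evaluated at the same kinematics."""
    return d2sigma / (Z * sigma_ep + N * sigma_en) * q / np.sqrt(M**2 + (y + q)**2)
```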
In studying short-range phenomena, access to specific regions in $`S(E_m,p_m)`$ is desirable so data on $`F(y)`$ are not sufficient. Coincidence data are required to access these regions. However there is one further inclusive measurement of interest, namely that of the Coulomb Sum Rule. This sum rule relates the energy-integrated longitudinal response from $`(e,e^{\prime })`$ to the proton-proton correlation function . However, analyses have so far been inconclusive due to large theoretical corrections for reaction effects (*e.g.* meson-exchange currents (MEC)) and for incomplete $`\omega `$ coverage in the experiments .
## 3 Coincidence $`(e,e^{\prime }p)`$ experiments
Coincidence $`(e,e^{\prime }p)`$ experiments can in principle more directly probe the spectral function $`S(E_m,p_m)`$. In plane wave, the cross section is
$$\frac{d^6\sigma }{d\mathrm{\Omega }_e^{\prime }dE_e^{\prime }d\mathrm{\Omega }_pdE_p}=|\stackrel{\to }{p}_p|E_p\sigma _{ep}S(E_m,p_m)$$
(2)
and an extraction of the spectral function is unambiguous. The variables
$`E_m`$ and $`p_m`$ are computed by using the measured four-momenta of the incident electrons, scattered electrons and knocked-out protons to reconstruct the four-momentum of the residual $`(A-1)`$ system $`R=(E_R,\stackrel{\to }{p}_R)`$. $`p_m=|\stackrel{\to }{p}_R|`$ and $`E_m=\sqrt{R^2}+m_p-M_A`$. However, additional reaction-mechanism effects can break the direct link between the cross section and spectral function.
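As a concrete illustration, the reconstruction described above amounts to a four-vector subtraction; the sketch below (metric convention and interface are our choices) is not tied to any particular experiment:

```python
import numpy as np

def missing_kinematics(k_e, k_ep, p_p, M_A, m_p):
    """Reconstruct (E_m, p_m) from the four-momenta (E, px, py, pz) of the
    incident electron k_e, scattered electron k_ep and knocked-out proton p_p,
    for a target of mass M_A at rest:  R = k_e - k_ep + P_A - p_p."""
    P_A = np.array([M_A, 0.0, 0.0, 0.0])
    R = k_e - k_ep + P_A - p_p              # residual (A-1) four-momentum
    p_m = np.linalg.norm(R[1:])             # p_m = |p_R|
    R2 = R[0]**2 - np.dot(R[1:], R[1:])     # invariant mass squared of R
    E_m = np.sqrt(R2) + m_p - M_A           # E_m = sqrt(R^2) + m_p - M_A
    return E_m, p_m
```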
Fig. 2 shows data measured at NIKHEF for the reaction $`{}_{}{}^{4}\text{He}(e,e^{\prime }p)^3\text{H}`$. The dotted curve is the plane-wave prediction, and the sharp minimum is a feature of the spectral function which has been directly linked to the short-range part of the $`NN`$ interaction . The data do not exhibit this minimum, and the calculation attributes this discrepancy to $`p`$-$`t`$ final-state interactions (FSI) and to MEC.
Another example of reaction effects thwarting access to interesting information comes from the large-$`E_m`$ data from the same experiment. Simple arguments lead to the prediction of a “ridge” in the spectral function, due to short-range $`NN`$ interactions, along the locus $`E_m\approx 2S_N+p_m^2/2m_N`$, where $`S_N`$ is the single-nucleon separation energy. Computations of the spectral function have supported this prediction.
Fig. 3 shows data for $`{}_{}{}^{4}\text{He}(e,e^{\prime }p)`$ at large $`E_m`$ along with theoretical predictions. The peak in the cross section (for both the data and the curves) follows the ridge relation noted above. However, the theory indicates that only about half of the observed cross section is due to direct knockout (dashed line). The rest is due to MEC. Also, for the lowest-$`p_m`$ data (the top pane), the calculation severely underpredicts the data at large $`E_m`$.
## 4 The Few-Body Program at Jefferson Lab
The preceding discussion makes clear that accessing the spectral function in regions of $`(E_m,p_m)`$ relevant to short-range nuclear dynamics is difficult. The problem is that the spectral function is relatively much smaller in these regions than at lower momenta and energies. This leads to the possibility that other reaction processes, even if weak, can substantially contaminate the data.
Many ideas have been formulated about how to suppress these contaminant processes in experiments. These ideas were difficult to implement in experiments at labs such as NIKHEF and Mainz, mainly because their beam energies were too low to provide the necessary kinematic flexibility. I now discuss some of these ideas and how they are being implemented at Jefferson Lab.
### 4.1 Parallel Kinematics
Fig. 4 depicts how measurements (including those of ) of cross sections at large $`p_m`$ were previously made. The simultaneous constraints on $`\omega `$, $`q`$, and $`p_m`$ made it impossible to reach large $`p_m`$ values unless the knocked-out protons were detected at large angles with respect to $`\stackrel{\to }{q}`$. Elastic FSI can seriously distort this type of measurement, since at the same electron kinematics, reactions such as that at left in Fig. 4 are also possible. The associated spectral function is several orders of magnitude larger due to the lower $`p_m`$ involved. Such low-$`p_m`$ protons can rescatter through large angles and contribute to (and perhaps even dominate) the large-$`p_m`$ cross section. This qualitative argument is supported by calculations which show that FSI contributions are at a minimum in parallel kinematics.
The large beam energies available at Jefferson Lab make it possible to perform large-$`p_m`$ experiments at parallel kinematics, and several proposals utilizing this principle are already on the books.
### 4.2 Variation of $`Q^2`$
Experiments at lower-energy labs were not able to make substantial variations in $`Q^2`$ for a given $`(E_m,p_m)`$ region. $`Q^2`$ variations are useful in two respects: to help discriminate between one- and two-body currents contributing to the cross section, and to suppress the contaminant (two-body) currents. The one-body direct-knockout process of interest depends on $`Q^2`$ only through the electron-proton cross section, while MEC and IC contributions are expected to have a very different $`Q^2`$ behaviour. There is disagreement about whether larger or smaller $`Q^2`$ experiments are better for suppressing the two-body currents; $`(e,e^{\prime })`$ analyses appear to favor smaller $`Q^2`$, but the difference is only significant for $`y\ge 0`$ (see Fig. 1). All of the experiments studying short-range dynamics at Jefferson Lab plan to make measurements at multiple values of $`Q^2`$.
### 4.3 Large Negative $`y`$ Values
In Fig. 1 the data clearly violate the scaling hypothesis for $`y>0`$. This is generally accepted to result from contributions outside the one-body impulse approximation framework. At negative values of $`y`$, the data scale well. In addition, theoretical studies have indicated that FSI are best suppressed when the ejected proton’s longitudinal (along $`\stackrel{\to }{q}`$) momentum component is large and negative; this condition also yields a large, negative $`y`$ value. I should mention that these studies indicate that FSI are also suppressed when the longitudinal momentum is large and positive, but it is unclear how this condition constrains two-body currents. Two experiments in Hall A at Jefferson Lab plan to make measurements at large negative $`y`$ kinematics.
### 4.4 Suppression of Multistep FSI
Ingo Sick has pointed out an additional mechanism which contaminates $`(e,e^{\prime }p)`$ measurements at large $`E_m`$. Multistep FSI, or $`p`$-$`N`$ scattering within the nucleus, change both the energy and direction of knocked-out protons. This causes the proton to be detected with $`(E_m,p_m)`$ values much different than those at the $`(e,e^{\prime }p)`$ reaction vertex. If these FSI “move” events from a region where the spectral function is large to a region where it is low, these “moved” events can generate cross sections larger than the “native” protons at this $`(E_m,p_m)`$ which did not undergo multistep FSI.
Fig. 5 shows the kinematics in the $`(E_m,p_m)`$ plane for the upper pane of Fig. 3. The dark line shows the “ridge” in the spectral function where the greatest strength is expected. The dashed lines show how multistep FSI move events in the $`(E_m,p_m)`$ plane; reactions with vertex $`(E_m,p_m)`$ values all along the dashed lines can contribute, by undergoing a $`(p,p^{\prime }N)`$ reaction, to the experimental measurement (the box is the experimental acceptance, and the thin solid line gives the central kinematics for which these calculations were performed). It is clear that for missing energies greater than about 65 MeV, one may expect increasing contributions from multistep FSI to the data. This is a plausible explanation for the calculation’s underprediction of the data for $`E_m>90`$ MeV.
An approved experiment in Hall C will make measurements on both sides of the “ridge” and in several different types of kinematics, to test whether this effect is indeed important.
## 5 Representative Expected Results at Jefferson Lab
Fig. 6 shows an example of what we hope to achieve at Jefferson Lab. This figure is a calculation for $`{}_{}{}^{4}\text{He}(e,e^{\prime }p)^3\text{H}`$ at a beam energy of 4 GeV. Experiment in Hall A proposes to measure this reaction in an attempt to observe the spectral-function minimum discussed in relation to Fig. 2. The dashed lines in Fig. 6 are plane-wave calculations; the solid curves include FSI in the framework of the Generalized Eikonal Approximation . The upper curve corresponds to $`y\approx -100`$ MeV/*c*, and the bottom curve corresponds to $`y\approx -400`$ MeV/*c*. The bottom curve is also computed for parallel kinematics. The calculations display the expected reduction in FSI due to parallel kinematics and large negative $`y`$ values. Unfortunately, these calculations do not yet include two-body current contributions.
## 6 Outlook
A broad program exists to study $`(e,e^{\prime }p)`$ reactions, with an emphasis on few-body nuclei, at Jefferson Lab. The experiments comprising this program have a new set of tools, courtesy of the large JLab beam energy, with which to (attempt to) force nature to give us clean information about the nuclear spectral function in regions relevant to short-range nuclear dynamics. Parallel kinematics will be an important feature of almost all these experiments. Furthermore, data will be taken at a variety of $`Q^2`$ and $`y`$ settings in an attempt to suppress two-body current contributions to a manageable level.
I have not mentioned two other powerful techniques which will be exploited at Jefferson Lab and elsewhere: response-function separations and multi-nucleon knockout experiments. Both techniques are in principle more selective for accessing the large-momentum one-body current of interest. However, both are experimentally more demanding, thus the program outlined above provides a better starting point for testing our understanding of the $`(e,e^{}p)`$ reaction mechanism at high energies. The results can be used to design more effective response-function separation or multinucleon-knockout experiments.
On the theoretical side, there are many nice frameworks, models and techniques in circulation for computing spectral functions exactly, treating two-body currents, computing FSI at large proton momenta, and so on. However, no one group seems to have all “nice” ingredients. Figure 6 provides a good example; it uses a state-of-the-art spectral function from the Argonne group, and a modern FSI computation, but no two-body currents are included. It is highly unlikely that the program outlined above will suppress reaction effects to the point that PWIA is valid; interpretation of these results will require close collaboration with our theoretical colleagues.
## Acknowledgements
The author wishes to thank Dr. E. Jans (NIKHEF), spokesperson for the NIKHEF experiment discussed here, for a critical reading of this manuscript and useful suggestions.
## 1 Introduction
The spin tune, the spin precession frequency divided by the orbit revolution frequency, is an important parameter in the description of spin motion in circular accelerators. When a particle is on the closed orbit, the definition of the spin tune is obvious; it is the spin precession angle over one turn divided by $`2\pi `$. However, when orbit oscillations are involved, the definition of the spin tune becomes more complicated. One needs the concept of the $`\mathbf{n}`$-axis which was first introduced by Derbenev and Kondratenko for radiative polarization phenomena in electron storage rings.
We assume that we have complete knowledge about the orbit motion, i.e. that we know the action and angle variables, $`\mathbf{J}=(J_x,J_y,J_z)`$ and $`\boldsymbol{\varphi }=(\varphi _x,\varphi _y,\varphi _z)`$, corresponding to the three degrees of freedom of the orbit motion, which can be in general nonlinear.
A particle with initial coordinates $`(\mathbf{J},\boldsymbol{\varphi })`$ at a machine azimuth $`\theta `$ executes orbit oscillations and comes to $`(\mathbf{J},\boldsymbol{\varphi }+\boldsymbol{\mu })`$ after one turn ($`\theta \to \theta +2\pi `$), where $`\boldsymbol{\mu }=(\mu _x,\mu _y,\mu _z)`$ is the orbit tune, $`\boldsymbol{\nu }`$, times $`2\pi `$. The spin motion over one turn can in general be expressed by a 3$`\times `$3 rotation matrix $`R(\mathbf{J},\boldsymbol{\varphi },\theta )`$. Obviously, it is a periodic function of $`\theta `$ and $`\boldsymbol{\varphi }`$ with period $`2\pi `$.
On the next turn the rotation is expressed by $`R(\mathbf{J},\boldsymbol{\varphi }+\boldsymbol{\mu },\theta +2\pi )`$ = $`R(\mathbf{J},\boldsymbol{\varphi }+\boldsymbol{\mu },\theta )`$, which differs from $`R(\mathbf{J},\boldsymbol{\varphi },\theta )`$ unless the orbit tunes are integers.
A particle on the closed orbit sees the same rotation $`R_0(\theta )`$ for every turn. $`R_0(\theta )`$ has eigenvalues $`1`$ and $`e^{\pm i\mu _{s0}}`$ and the spin tune $`\nu _{s0}`$ is $`\mu _{s0}/2\pi `$. One can show that $`\mu _{s0}`$ is independent of $`\theta `$. The eigenvector belonging to the eigenvalue $`1`$ is denoted by $`\mathbf{n}_0`$, i.e., $`R_0\mathbf{n}_0=\mathbf{n}_0`$. It depends only on $`\theta `$. A spin parallel to $`\mathbf{n}_0(\theta )`$ remains unchanged after one turn, and all other spins attached to closed orbit trajectories precess by the angle $`\mu _{s0}`$ around $`\mathbf{n}_0`$ during one turn.
The vector $`\mathbf{n}`$ is a generalization of $`\mathbf{n}_0`$ for particles off the closed orbit. It is a function of $`(\mathbf{J},\boldsymbol{\varphi },\theta )`$ periodic in $`\boldsymbol{\varphi }`$ and $`\theta `$ and satisfies
$`R(\mathbf{J},\boldsymbol{\varphi },\theta )\mathbf{n}(\mathbf{J},\boldsymbol{\varphi },\theta )=\mathbf{n}(\mathbf{J},\boldsymbol{\varphi }+\boldsymbol{\mu },\theta ).`$ (1)
When $`\mathbf{J}=0`$, $`\mathbf{n}`$ reduces to $`\mathbf{n}_0`$. To define the spin tune for nonzero $`\mathbf{J}`$, we need two more vectors $`\mathbf{u}_1`$ and $`\mathbf{u}_2`$ which form an orthonormal basis together with $`\mathbf{n}`$. They are functions of $`(\mathbf{J},\boldsymbol{\varphi },\theta )`$ and periodic in $`\boldsymbol{\varphi }`$ and $`\theta `$ like $`\mathbf{n}`$. The spin tune is defined as the precession angle in the frame $`(\mathbf{u}_1,\mathbf{u}_2,\mathbf{n})`$ divided by $`2\pi `$.
The concept of the vector $`\mathbf{n}`$ has been playing an important role in the description and calculation of radiative polarization in electron/positron storage rings ever since. Recently, it has also turned out to be useful for proton rings.
To calculate the vector $`\mathbf{n}`$ several algorithms have been invented. S. Mane developed a computer code SMILE using a perturbation expansion with respect to the orbit action variable. The present author suggested a perturbation algorithm using Lie algebra and Eidelmann and Yakimenko coded a program SPINLIE with (low order) orbit nonlinearity. Balandin, Golubeva and Barber also wrote a Lie Algebra code.
The present author considered another method which does not employ a perturbation expansion and wrote a program SODOM. Heinemann and Hoffstaetter use tracking and ‘stroboscopic averaging’ in the code SPRINT. The programs SODOM, SPRINT and additionally compute the spin tune.
The new method which we are going to describe is based on SODOM.
We shall briefly summarize the SODOM algorithm in the next section and describe the new method in Sec.3.
## 2 The SODOM Algorithm
Let us first briefly summarize the algorithm employed in SODOM. (See Sec. 3 of .) Denote the one-turn SU2 spin transport map starting at a fixed prechosen azimuth $`\theta _0`$ for particles with initial orbital phase $`\boldsymbol{\varphi }`$ by $`M(\boldsymbol{\varphi })`$ and the spinor representing the $`\mathbf{n}`$-axis at $`\theta _0`$ by $`\psi (\boldsymbol{\varphi })`$. (Here, we simply write $`\psi (\boldsymbol{\varphi })`$ instead of $`\psi _+(\boldsymbol{\varphi })`$ . We also omit the arguments $`\mathbf{J}`$ and $`\theta _0`$ since we shall deal with one set of $`\mathbf{J}`$ and consider the one-turn map from the origin $`\theta _0`$ only.) The fact that $`\mathbf{n}`$ is ‘invariant’ means
$`M(\boldsymbol{\varphi })\psi (\boldsymbol{\varphi })=e^{iv(\boldsymbol{\varphi })/2}\psi (\boldsymbol{\varphi }+\boldsymbol{\mu }),`$ (2)
where $`v(\boldsymbol{\varphi })`$ is a real periodic function. Once a solution ($`\psi (\boldsymbol{\varphi })`$,$`v(\boldsymbol{\varphi })`$) is obtained, we solve the equation
$`v(\boldsymbol{\varphi })+u(\boldsymbol{\varphi }+\boldsymbol{\mu })-u(\boldsymbol{\varphi })=\mu _s`$ (3)
and define
$`\mathrm{\Psi }(\boldsymbol{\varphi })\equiv e^{-iu(\boldsymbol{\varphi })/2}\psi (\boldsymbol{\varphi })`$ (4)
Then, $`\mathrm{\Psi }(\boldsymbol{\varphi })`$ satisfies
$`M(\boldsymbol{\varphi })\mathrm{\Psi }(\boldsymbol{\varphi })=e^{i\mu _s/2}\mathrm{\Psi }(\boldsymbol{\varphi }+\boldsymbol{\mu }),`$ (5)
where $`\mu _s`$ is the spin tune times $`2\pi `$. The $`\mathbf{u}_{1,2}`$ axes are represented by a spinor
$`\mathrm{\Psi }_\phi \equiv {\displaystyle \frac{1}{\sqrt{2}}}\left[e^{i\phi /2}\mathrm{\Psi }+e^{-i\phi /2}\widehat{\mathrm{\Psi }}\right]`$ (6)
where we define the operation $`\widehat{}`$ as
$`\widehat{\mathrm{\Psi }}\equiv i\sigma _2\mathrm{\Psi }^{*},`$ (7)
which was denoted by $`\mathrm{\Psi }_{-}`$ in . Note that $`\widehat{\widehat{\mathrm{\Psi }}}=-\mathrm{\Psi }`$ and $`\widehat{\mathrm{\Psi }}^{\dagger }\mathrm{\Psi }=0`$. The three spinors, $`\mathrm{\Psi }_0`$, $`\mathrm{\Psi }_{\pi /2}`$, $`\mathrm{\Psi }`$, represent the three vectors $`\mathbf{u}_1`$, $`\mathbf{u}_2`$, $`\mathbf{n}`$. The phase of $`\mathrm{\Psi }`$ is irrelevant for defining $`\mathbf{n}`$ but it is important for $`\mathbf{u}_1`$ and $`\mathbf{u}_2`$.
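As an aside, once ($`\psi `$,$`v`$) is known, eq. (3) is linear and solvable mode by mode in Fourier space: the zero mode of $`v`$ gives $`\mu _s`$, and $`u_{\mathbf{m}}=-v_{\mathbf{m}}/(e^{i\mathbf{m}\cdot \boldsymbol{\mu }}-1)`$ for $`\mathbf{m}\ne 0`$. A minimal sketch for a single orbital degree of freedom (our own illustration, not the SODOM code) is:

```python
import numpy as np

def solve_phase_equation(v_samples, mu):
    """Solve v(phi) + u(phi+mu) - u(phi) = mu_s for one orbital degree of
    freedom, given v sampled at phi_k = 2*pi*k/N.  Returns (mu_s, u_samples),
    where mu_s is the spin tune times 2*pi.  Near a spin-orbit resonance
    (m*mu close to a multiple of 2*pi) the denominators become small and
    the Fourier series for u diverges."""
    N = len(v_samples)
    v_m = np.fft.fft(v_samples) / N           # Fourier coefficients v_m
    m = np.fft.fftfreq(N, d=1.0 / N)          # integer mode numbers (fft ordering)
    mu_s = v_m[0].real                        # m = 0 component gives mu_s
    u_m = np.zeros(N, dtype=complex)
    nz = m != 0
    u_m[nz] = -v_m[nz] / (np.exp(1j * m[nz] * mu) - 1.0)
    u_samples = np.real(np.fft.ifft(u_m) * N)
    return mu_s, u_samples
```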
The original SODOM algorithm parametrizes $`\psi `$ as
$`\psi ={\displaystyle \frac{1}{\sqrt{1+\left|\zeta (\boldsymbol{\varphi })\right|^2}}}\left(\begin{array}{c}1\\ \zeta (\boldsymbol{\varphi })\end{array}\right).`$ (10)
The SU2 matrix $`M(\boldsymbol{\varphi })`$ can be parametrized by two complex functions $`f(\boldsymbol{\varphi })`$ and $`g(\boldsymbol{\varphi })`$ as
$`M(\boldsymbol{\varphi })=\left(\begin{array}{cc}ig(\boldsymbol{\varphi })& if^{*}(\boldsymbol{\varphi })\\ if(\boldsymbol{\varphi })& -ig^{*}(\boldsymbol{\varphi })\end{array}\right)`$ (11)
Then, one gets an equation for $`\zeta `$:
$`g^{*}(\boldsymbol{\varphi })\zeta (\boldsymbol{\varphi })+g(\boldsymbol{\varphi })\zeta (\boldsymbol{\varphi }+\boldsymbol{\mu })=f(\boldsymbol{\varphi })-f^{*}(\boldsymbol{\varphi })\zeta (\boldsymbol{\varphi })\zeta (\boldsymbol{\varphi }+\boldsymbol{\mu }).`$ (12)
By expanding $`f(\boldsymbol{\varphi })`$, $`g(\boldsymbol{\varphi })`$, and $`\zeta (\boldsymbol{\varphi })`$ into Fourier series like $`\sum _{\mathbf{m}}f_{\mathbf{m}}e^{i\mathbf{m}\cdot \boldsymbol{\varphi }}`$, we get a nonlinear equation for $`\zeta _{\mathbf{m}}`$.
A key component of SODOM is the calculation of the Fourier coefficients $`f_{\mathbf{m}}`$ and $`g_{\mathbf{m}}`$ from the tracking data over one turn for several particles having the same $`\mathbf{J}`$ but equally-spaced $`\boldsymbol{\varphi }`$ ($`0\le \varphi <2\pi `$).
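In a single-degree-of-freedom setting this step is just a discrete Fourier transform of the tracked one-turn matrices; a minimal sketch (the tracking interface is left abstract and is our assumption) is:

```python
import numpy as np

def fourier_coefficients(M_of_phi, N=64):
    """Compute f_m and g_m from the one-turn SU2 matrices evaluated at
    equally spaced phases phi_k = 2*pi*k/N (one orbital degree of freedom).
    M_of_phi(phi) must return the 2x2 complex matrix of eq. (11):
    M11 = i*g and M21 = i*f.  Modes are ordered as in np.fft.fftfreq."""
    phis = 2.0 * np.pi * np.arange(N) / N
    g_samples = np.array([-1j * M_of_phi(phi)[0, 0] for phi in phis])
    f_samples = np.array([-1j * M_of_phi(phi)[1, 0] for phi in phis])
    g_m = np.fft.fft(g_samples) / N
    f_m = np.fft.fft(f_samples) / N
    return f_m, g_m
```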
The parametrization (10) is good only when $`\zeta (\boldsymbol{\varphi })`$ is small. Because of its up-down asymmetric form, many more Fourier terms are needed than required by the physics when $`\zeta (\boldsymbol{\varphi })`$ is large. (For example, $`\mathrm{\Psi }=(\mathrm{cos}\varphi ,\mathrm{sin}\varphi )`$ is a mild function but leads to $`\zeta =\mathrm{tan}\varphi `$, which is hard to Fourier-expand.) Also, the iterative method of solving the nonlinear equation easily fails when $`\zeta `$ is large.
## 3 The Matrix Eigenvalue Method
The new algorithm is much simpler and involves solving eq. (5) directly rather than eq. (12). By expanding $`M(\boldsymbol{\varphi })`$ (actually the functions $`f(\boldsymbol{\varphi })`$ and $`g(\boldsymbol{\varphi })`$) and $`\mathrm{\Psi }(\boldsymbol{\varphi })`$ into Fourier series as
$`M(\boldsymbol{\varphi })=\sum _{\mathbf{m}}M_{\mathbf{m}}e^{i\mathbf{m}\cdot \boldsymbol{\varphi }},\mathrm{\Psi }(\boldsymbol{\varphi })=\sum _{\mathbf{m}}\mathrm{\Psi }_{\mathbf{m}}e^{i\mathbf{m}\cdot \boldsymbol{\varphi }}`$ (13)
eq. (5) can be written as
$`e^{-i\mathbf{m}\cdot \boldsymbol{\mu }}\sum _{\mathbf{m}^{\prime }}M_{\mathbf{m}-\mathbf{m}^{\prime }}\mathrm{\Psi }_{\mathbf{m}^{\prime }}=e^{i\mu _s/2}\mathrm{\Psi }_{\mathbf{m}}.`$ (14)
This is simply a matrix eigenvalue equation. Thus, the spin tune comes out as an eigenvalue.
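For one orbital degree of freedom and a truncated mode range $`|m|\le K`$, eq. (14) becomes an ordinary complex eigenvalue problem; the sketch below (our own illustration; selecting the physical eigenvalue among the many returned is the question addressed next) assembles and diagonalizes it:

```python
import numpy as np

def spin_tune_eigenproblem(M_m, mu, K):
    """Assemble the truncated matrix of eq. (14) for one orbital degree of
    freedom.  M_m is a dict mapping the integer Fourier index m to the 2x2
    matrix M_m; modes |m| <= K are kept.  Returns eigenvalues and
    eigenvectors; each eigenvalue is a candidate exp(i*mu_s/2)."""
    n = 2 * K + 1
    A = np.zeros((2 * n, 2 * n), dtype=complex)
    for a, m in enumerate(range(-K, K + 1)):
        for b, mp in enumerate(range(-K, K + 1)):
            block = M_m.get(m - mp)
            if block is not None:
                A[2 * a:2 * a + 2, 2 * b:2 * b + 2] = np.exp(-1j * m * mu) * block
    return np.linalg.eig(A)
```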
However, obviously, eq. (14) has many eigenvalues. Which one gives the spin tune? What do the other eigenvalues and eigenvectors mean? In order to answer these questions let us return to eq. (5) and examine it as an eigenvalue system
$`M(\varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi )\mathrm{\Psi }(\varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi )=\lambda \mathrm{\Psi }(\varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi +\mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu )`$ (15)
Note that this is not a simple 2$`\times `$2 algebraic equation because of the $`\varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi \varphi +\mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu \mu `$.
Before going further we have to think about subtle problems associated with the ‘2-to-1’ correspondence between SU2 and SO3. Note that we use 2-component spinors and SU2 matrices instead of 3-vectors and SO3 matrices to achieve computational speed and to minimize storage but not because the particles have spin $`\mathrm{}/2`$. The classical spin motion can be completely described by 3-vectors and SO3 matrices.
What does the periodicity of a spinor with respect to $`\boldsymbol{\varphi }`$ mean? The physical object is the 3-vector $`\mathrm{\Psi }^{\dagger }\boldsymbol{\sigma }\mathrm{\Psi }=\boldsymbol{n}`$ rather than the spinor $`\mathrm{\Psi }`$. In this sense a complex phase factor in $`\mathrm{\Psi }`$ is irrelevant. However, a complex phase factor is still relevant when one constructs the $`\boldsymbol{u}_1`$ and $`\boldsymbol{u}_2`$ axes from $`\mathrm{\Psi }`$ via $`\mathrm{\Psi }_\phi `$.
On the other hand, a sign change of $`\mathrm{\Psi }`$ causes neither a change of $`\boldsymbol{n}=\mathrm{\Psi }^{\dagger }\boldsymbol{\sigma }\mathrm{\Psi }`$ nor a change of $`\boldsymbol{u}_1`$ and $`\boldsymbol{u}_2`$ defined by $`\mathrm{\Psi }_\phi ^{\dagger }\boldsymbol{\sigma }\mathrm{\Psi }_\phi `$.
Thus, as the periodicity condition for $`\mathrm{\Psi }`$ with respect to $`\varphi _j`$ (one of the orbit angle variables), we have to allow both $`\mathrm{\Psi }(\varphi _j+2\pi )=\mathrm{\Psi }(\varphi _j)`$ and $`\mathrm{\Psi }(\varphi _j+2\pi )=-\mathrm{\Psi }(\varphi _j)`$. Then with 3 degrees of freedom for orbit motion, we have 8 types of solutions $`\mathrm{\Psi }(\boldsymbol{\varphi })`$ differing by their sign change behaviour under the transformation $`\varphi _j\to \varphi _j+2\pi `$. In Fourier expansion language, this means that $`\mathrm{\Psi }`$ can be expanded as
$`\mathrm{\Psi }(\boldsymbol{\varphi })=e^{i\boldsymbol{m}^0\cdot \boldsymbol{\varphi }/2}{\displaystyle \sum _{\boldsymbol{m}}}\mathrm{\Psi }_{\boldsymbol{m}}e^{i\boldsymbol{m}\cdot \boldsymbol{\varphi }}`$ (16)
where $`\boldsymbol{m}^0`$ is a set of three integers each of which is either 0 or 1.
We now define the scalar product of two arbitrary spinors $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$ by
$`(\mathrm{\Psi }_1,\mathrm{\Psi }_2)\equiv {\displaystyle \frac{1}{(4\pi )^3}}{\displaystyle \int _0^{4\pi }}\mathrm{\Psi }_1^{\dagger }(\boldsymbol{\varphi })\mathrm{\Psi }_2(\boldsymbol{\varphi })d\boldsymbol{\varphi }=\delta _{\boldsymbol{m}_1^0,\boldsymbol{m}_2^0}{\displaystyle \sum _{\boldsymbol{m}}}\mathrm{\Psi }_{1,\boldsymbol{m}}^{\dagger }\mathrm{\Psi }_{2,\boldsymbol{m}}`$ (17)
Obviously, solutions of different types in eq.(16) are always orthogonal. In the following we consider the solutions of eq.(15) which are ‘periodic’ and smooth in $`\boldsymbol{\varphi }`$.
Now, let us list a few lemmas.
\[a\] $`\left|\lambda \right|=1`$
\[b\] $`(\mathrm{\Psi }_1,\mathrm{\Psi }_2)=0`$ if $`\lambda _1\ne \lambda _2`$. From the unitarity of $`M(\boldsymbol{\varphi })`$ we get
$`\mathrm{\Psi }_i^{\dagger }(\boldsymbol{\varphi })\mathrm{\Psi }_j(\boldsymbol{\varphi })=\mathrm{\Psi }_i^{\dagger }(\boldsymbol{\varphi })M^{\dagger }(\boldsymbol{\varphi })M(\boldsymbol{\varphi })\mathrm{\Psi }_j(\boldsymbol{\varphi })=\left[M(\boldsymbol{\varphi })\mathrm{\Psi }_i(\boldsymbol{\varphi })\right]^{\dagger }M(\boldsymbol{\varphi })\mathrm{\Psi }_j(\boldsymbol{\varphi })=\lambda _i^{\ast }\lambda _j\mathrm{\Psi }_i^{\dagger }(\boldsymbol{\varphi }+\boldsymbol{\mu })\mathrm{\Psi }_j(\boldsymbol{\varphi }+\boldsymbol{\mu }).`$
Integrating over $`\boldsymbol{\varphi }`$ and using the definition (17), we get \[a\] for $`i=j`$ and \[b\] for $`\lambda _i\ne \lambda _j`$. Note that \[b\] does not imply $`\mathrm{\Psi }_1^{\dagger }(\boldsymbol{\varphi })\mathrm{\Psi }_2(\boldsymbol{\varphi })=0`$ for $`\lambda _1\ne \lambda _2`$.
\[c\] $`\left|\mathrm{\Psi }(\boldsymbol{\varphi })\right|`$ is independent of $`\boldsymbol{\varphi }`$ (and can be normalized to unity). The unitarity condition $`\left|\mathrm{\Psi }(\boldsymbol{\varphi })\right|=\left|\mathrm{\Psi }(\boldsymbol{\varphi }+\boldsymbol{\mu })\right|`$, together with the smoothness of $`\mathrm{\Psi }(\boldsymbol{\varphi })`$ and the non-commensurability of $`\boldsymbol{\mu }`$, is enough to guarantee \[c\].
\[d\] If $`(\lambda ,\mathrm{\Psi }(\boldsymbol{\varphi }))`$ is a solution, so is $`(\lambda ^{\ast },\widehat{\mathrm{\Psi }}(\boldsymbol{\varphi }))`$. Take the complex conjugate of eq.(15) and use $`\sigma _2M^{\ast }\sigma _2=M`$. If $`\mathrm{\Psi }`$ corresponds to $`\boldsymbol{n}`$, then $`\widehat{\mathrm{\Psi }}`$ corresponds to $`-\boldsymbol{n}`$ and the spin tune changes sign. (Since $`\sigma _2\boldsymbol{\sigma }\sigma _2=-\boldsymbol{\sigma }^{\ast }`$, $`\widehat{\mathrm{\Psi }}^{\dagger }\boldsymbol{\sigma }\widehat{\mathrm{\Psi }}=-\mathrm{\Psi }^{\dagger }\boldsymbol{\sigma }\mathrm{\Psi }`$.) Note that not only $`(\widehat{\mathrm{\Psi }},\mathrm{\Psi })=0`$ but also $`\widehat{\mathrm{\Psi }}^{\dagger }\mathrm{\Psi }=0`$ at every $`\boldsymbol{\varphi }`$.
\[e\] If $`\lambda `$ is an eigenvalue, then so is $`\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$, where $`\boldsymbol{m}`$ is any set of integers. Multiply eq.(15) by $`e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}`$ and define $`\stackrel{~}{\mathrm{\Psi }}(\boldsymbol{\varphi })\equiv e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}\mathrm{\Psi }(\boldsymbol{\varphi })`$. Then
$`M(\boldsymbol{\varphi })\stackrel{~}{\mathrm{\Psi }}(\boldsymbol{\varphi })=\lambda e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}\mathrm{\Psi }(\boldsymbol{\varphi }+\boldsymbol{\mu })=\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}\stackrel{~}{\mathrm{\Psi }}(\boldsymbol{\varphi }+\boldsymbol{\mu })`$
Thus, $`\stackrel{~}{\mathrm{\Psi }}`$ is an eigenvector belonging to the eigenvalue $`\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$.
This gives an ambiguity in the spin tune: $`\mu _s\to \mu _s+\boldsymbol{m}\cdot \boldsymbol{\mu }`$. However, all the eigenvalues of the form $`\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$ give the same vector $`\boldsymbol{n}=\stackrel{~}{\mathrm{\Psi }}^{\dagger }\boldsymbol{\sigma }\stackrel{~}{\mathrm{\Psi }}=\mathrm{\Psi }^{\dagger }\boldsymbol{\sigma }\mathrm{\Psi }`$. The $`\boldsymbol{u}_{1,2}`$ axes corresponding to $`\stackrel{~}{\mathrm{\Psi }}`$ are
$`\stackrel{~}{\mathrm{\Psi }}_\phi ={\displaystyle \frac{1}{\sqrt{2}}}\left[e^{-i\phi /2}e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}\mathrm{\Psi }+e^{i\phi /2}e^{i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}i\sigma _2\mathrm{\Psi }^{\ast }\right]=\mathrm{\Psi }_{\phi +\boldsymbol{m}\cdot \boldsymbol{\varphi }}`$
Thus, the new $`\boldsymbol{u}_{1,2}`$ axes rotate by $`\boldsymbol{m}\cdot \boldsymbol{\varphi }`$ with respect to the original ones.
From the lemmas above, we know that once a solution $`(\lambda ,\mathrm{\Psi })`$ is found, we can construct infinitely many solutions of the form $`(\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2},e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}\mathrm{\Psi })`$ and $`(\lambda ^{\ast }e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2},e^{-i\boldsymbol{m}\cdot \boldsymbol{\varphi }/2}\widehat{\mathrm{\Psi }})`$, and that they all correspond to the same vector $`\boldsymbol{n}`$ or $`-\boldsymbol{n}`$.
A natural question is then ‘are there any other eigenvalues?’. The answer is ‘No’: \[f\] If $`\lambda `$ is an eigenvalue, all other eigenvalues are either $`\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$ or $`\lambda ^{\ast }e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$. If $`(\lambda _1,\mathrm{\Psi }_1)`$ and $`(\lambda _2,\mathrm{\Psi }_2)`$ are solutions, $`a(\boldsymbol{\varphi })\equiv \mathrm{\Psi }_2^{\dagger }(\boldsymbol{\varphi })\mathrm{\Psi }_1(\boldsymbol{\varphi })=(M\mathrm{\Psi }_2)^{\dagger }M\mathrm{\Psi }_1=\lambda _2^{\ast }\lambda _1a(\boldsymbol{\varphi }+\boldsymbol{\mu })`$.
From the periodicity and smoothness of $`a(\boldsymbol{\varphi })`$ and the non-commensurability of $`\boldsymbol{\mu }`$ one finds either (i) that $`a(\boldsymbol{\varphi })=e^{i\boldsymbol{\alpha }\cdot \boldsymbol{\varphi }}`$ and $`\lambda _2^{\ast }\lambda _1=e^{-i\boldsymbol{\alpha }\cdot \boldsymbol{\mu }}`$, $`\boldsymbol{\alpha }`$ being a constant 3-vector, or (ii) that $`a=0`$ ($`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$ are locally orthogonal). In the case (i), $`\boldsymbol{\alpha }`$ must be of the form $`\boldsymbol{m}/2`$ from the periodicity requirement, where $`\boldsymbol{m}`$ is a set of three integers. Therefore, $`\lambda _2=\lambda _1e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$. In the case (ii), examine $`\widehat{\mathrm{\Psi }}_2`$ in place of $`\mathrm{\Psi }_2`$.
Then we get either $`\lambda _2=\lambda _1^{\ast }e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$ or $`\widehat{\mathrm{\Psi }}_2^{\dagger }\mathrm{\Psi }_1=0`$. However, if both $`\mathrm{\Psi }_2^{\dagger }\mathrm{\Psi }_1`$ and $`\widehat{\mathrm{\Psi }}_2^{\dagger }\mathrm{\Psi }_1`$ vanish, then $`\mathrm{\Psi }_1=0`$ because $`\mathrm{\Psi }_2`$ and $`\widehat{\mathrm{\Psi }}_2`$ are orthogonal. Therefore $`\widehat{\mathrm{\Psi }}_2^{\dagger }\mathrm{\Psi }_1=0`$ cannot be the case. Thus, the cases (i) and (ii) correspond to $`\lambda _2=\lambda _1e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$ and $`\lambda _2=\lambda _1^{\ast }e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$, respectively.
Let us consider the spin tune $`\nu _s=\mu _s/2\pi `$. It is obtained from the definition $`\lambda =e^{i\mu _s/2}=e^{i\pi \nu _s}`$. From the above arguments we find that if the set $`[\nu _s,\boldsymbol{n},\boldsymbol{u}_1+i\boldsymbol{u}_2]`$ is a solution, then $`[\nu _s-\boldsymbol{m}\cdot \boldsymbol{\nu },\boldsymbol{n},e^{i\boldsymbol{m}\cdot \boldsymbol{\varphi }}(\boldsymbol{u}_1+i\boldsymbol{u}_2)]`$ and $`[-\nu _s-\boldsymbol{m}\cdot \boldsymbol{\nu },-\boldsymbol{n},e^{i\boldsymbol{m}\cdot \boldsymbol{\varphi }}(\boldsymbol{u}_1-i\boldsymbol{u}_2)]`$ are also solutions. Thus, the spin tune has ambiguities up to a multiple of the orbit tunes and up to a sign. The latter is related to the choice of sign of $`\boldsymbol{n}`$.
When obtaining $`\nu _s`$ from $`\lambda `$, one finds an ambiguity only up to an even integer rather than up to an integer. At first sight this is puzzling but it is also due to the ‘2-to-1’ correspondence between SU2 and SO3. Obviously,
\[g\] If $`\mathrm{\Psi }`$ is an eigenvector of $`M`$ with eigenvalue $`\lambda `$, it is also an eigenvector of $`-M`$ with eigenvalue $`-\lambda `$. Since $`M`$ and $`-M`$ represent the same SO3 rotation, we have also to include the solutions to $`-M`$. However, $`-M`$ has exactly the same eigenvectors as $`+M`$ (therefore the same $`\boldsymbol{u}_1,\boldsymbol{u}_2,\boldsymbol{n}`$) with spin tunes $`\nu _s`$ shifted by one. This solves the above puzzle. Thus, we can define the spin tune in the interval \[0,1) or (-0.5,0.5\] and, if the sign of $`\boldsymbol{n}`$ is irrelevant, we can reduce the interval to \[0,0.5\].
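As a small illustration of this convention, an eigenvalue can be reduced to a spin tune in the chosen interval as follows (a sketch; the function name is our own):

```python
import numpy as np

def spin_tune(lam, fold_sign=False):
    """Map an eigenvalue lam = exp(i*pi*nu_s) to a spin tune in [0, 1).
    Reducing modulo 1 uses lemma [g] (M and -M give tunes differing by
    one); fold_sign=True folds further into [0, 0.5] when the sign of
    the n axis is irrelevant."""
    nu = (np.angle(lam) / np.pi) % 1.0
    return min(nu, 1.0 - nu) if fold_sign else nu
```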
Thus, we have found that we only need one of the sets of eigenvector and eigenvalue of eq.(5). All others can be constructed from this. Therefore one can Fourier expand $`\mathrm{\Psi }`$ as in eq.(13) rather than in the general form (16). (Note, however, that one will then find tunes of the form $`\pm \nu _s+2\boldsymbol{m}\cdot \boldsymbol{\nu }`$, although odd-multiple solutions can easily be reconstructed.)
Let us briefly discuss the degeneracy. Within the eigenvalue group $`\lambda e^{i\boldsymbol{m}\cdot \boldsymbol{\mu }/2}`$ of the same sign of $`\boldsymbol{n}`$, a degeneracy is possible when $`\boldsymbol{m}\cdot \boldsymbol{\nu }`$ is an even integer, i.e., when the orbit motion is in resonance, which we are not interested in. We may assume this is not the case.
On the other hand, a degeneracy between solutions of different signs of $`\boldsymbol{n}`$ corresponds to a spin-orbit resonance. When the two solutions $`(\lambda ,\mathrm{\Psi })`$ and $`(\lambda ^{\ast },\widehat{\mathrm{\Psi }})`$ degenerate ($`\lambda =\lambda ^{\ast }`$), the spin tune becomes an integer. Taking into account the ambiguity of the spin tune, this is equivalent to the relation $`\nu _s=\boldsymbol{m}\cdot \boldsymbol{\nu }+\text{integer}`$.
## 4 Choice of the Spin Tune
We have shown in the previous section that there are many eigenvalues (spin tunes) representing the same vector $`\boldsymbol{n}`$ and different $`(\boldsymbol{u}_1,\boldsymbol{u}_2)`$ axes. Now, we must finally decide which eigenvalue to choose for the spin tune. Theoretically speaking there is no reason to choose one particular value. As pointed out above, the spin tune is intrinsically ambiguous up to a multiple of the orbit tunes. The choice of the spin tune, which is equivalent to a choice of $`(\boldsymbol{u}_1,\boldsymbol{u}_2)`$, is a matter of convention.
In practice, however, a solution is not desirable if $`(\boldsymbol{u}_1,\boldsymbol{u}_2)`$ is a strong function of $`\boldsymbol{\varphi }`$. When one solves the equation by Fourier expansion, the most natural choice is to take the solution having the largest zero-Fourier harmonic $`\left|\mathrm{\Psi }_{\boldsymbol{m}=(0,0,0)}\right|`$.
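In the sketch given after eq.(14), this criterion amounts to comparing the $`m=0`$ blocks of the eigenvectors (again with illustrative names; the eigenvector is assumed to stack the 2-spinor harmonics for $`m=-M,\mathrm{},M`$):

```python
import numpy as np

def pick_by_zero_harmonic(lam, vec, M_cut):
    """Select the solution with the largest zero-Fourier harmonic
    |Psi_{m=0}| (sketch, one degree of freedom)."""
    i0 = 2 * M_cut                       # row index of the m = 0 block
    w = np.abs(vec[i0, :])**2 + np.abs(vec[i0 + 1, :])**2
    k = int(np.argmax(w))
    return lam[k], vec[:, k]
```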
If one plots all the eigenvalues as a function of any parameter (beam energy, betatron amplitude, etc), one will find continuous curves. If one plots the spin tune selected as just described as a function of these parameters one may occasionally find a jump of spin tune although the whole spectrum content is continuous.
Let us give an example from a test calculation. The test ring consists of 100 FODO cells, each of which has two thin-lens quadrupole magnets and two bending magnets filling the entire space between the quadrupoles. The focusing effect of the bending magnets is ignored. In order to avoid too high a symmetry of the orbit motion, an artificial phase advance of 90 degrees in both horizontal and vertical planes is introduced at one point in the ring. The tunes are $`\nu _x=15.3827`$ and $`\nu _y=25.6482`$. Only the vertical betatron oscillation is excited. The beam energy is chosen such that $`\nu _{s0}=\gamma a=1520.72`$.
Fig.1 shows the eigenvalue (spin tune) spectrum as a function of the betatron action $`J_y`$. Only those with small $`\boldsymbol{m}`$ are plotted. The points linked by a solid line correspond to the spin tune selected by the criterion mentioned above. As one can see, each eigenvalue is a continuous function of $`J_y`$ (a few curves appear broken because not all the eigenvalues are plotted), but the selected tune shows a jump at $`J_y\approx 0.7\times 10^{-8}`$ m$`\cdot `$rad. The spin tunes before and after the jump, $`\nu _{s1}`$ and $`\nu _{s2}`$, satisfy $`\nu _{s1}+\nu _{s2}=2\nu _y+\text{integer}`$. The dashed line (with the same scale) is the upper limit of polarization, i.e.,
$`P_{lim}=\left|\overline{\boldsymbol{n}}\right|,\overline{\boldsymbol{n}}={\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{2\pi }}\boldsymbol{n}(\varphi _y)d\varphi _y`$ (18)
The minimum of $`P_{lim}`$ coincides with the point of the spin tune jump.
We have compared the results of our program with SPRINT for the amplitude dependence of the spin tune in the HERA ring. The agreement of the $`\boldsymbol{n}`$ axis and the spin tune was excellent. Not only the occurrence of spin tune jumps but also their location agree, which means that taking the largest zero-harmonic and the stroboscopic averaging are almost equivalent.
## 5 Truncation of Fourier Series
In numerical calculations one has to truncate the Fourier expansion. There are a few problems associated with the truncation.
When $`N`$ values of $`\varphi `$ are used (we deal with one degree of freedom for illustration; the extension to 3 degrees of freedom is obvious), the range of the harmonics should be $`-M\le m\le M`$ ($`N=2M+1`$). (If $`N`$ is even, we have to change the upper or lower limit by one.) For a discrete Fourier transform the range can also be $`0\le m\le N-1`$ (as in standard FFT routines), but this choice is not good when other values of $`\varphi `$ are needed (for example when calculating $`\boldsymbol{n}`$ for arbitrary values of $`\varphi `$ after the problem is solved). $`N`$ must be large enough to ensure that the Fourier components $`M_m`$ (actually $`f_m`$ and $`g_m`$) are small enough outside the region $`[-M,M]`$. (This is not a sufficient condition for accuracy: even if $`M_m`$ is small outside $`[-M,M]`$, the solution $`\mathrm{\Psi }_m`$ can still be large in some cases.)
The matrix $`e^{im\mu }M_{m-m^{\prime }}`$ in eq.(14) is then a $`(2M+1)\times (2M+1)`$ matrix (each element is a $`2\times 2`$ matrix, since we are dealing with one degree of freedom). One finds that the diagonal elements ($`m=m^{\prime }`$) are normally large and that the elements with large $`\left|m-m^{\prime }\right|`$ are small. The elements in the upper-right and lower-left triangles ($`\left|m-m^{\prime }\right|>M`$) are exactly zero because they would require harmonics outside $`[-M,M]`$. Owing to this truncation, the matrix does not exactly satisfy the lemmas in the previous section even if $`N`$ is very large. (For example, in the first row ($`m=-M`$) even the first harmonic $`m-m^{\prime }=1`$ is lost because $`m^{\prime }`$ would be $`-M-1`$.) Although the solution with the largest zero-harmonic is not affected much by this truncation, it is not easy to confirm the accuracy of the solutions.
On the other hand, one can fill the upper-right and lower-left triangles by treating the harmonics in a cyclic manner as in a discrete Fourier transformation (i.e., one identifies the ($`M+1`$)-th harmonic with the ($`-M`$)-th). With this prescription the truncated matrix becomes exactly unitary even if $`N`$ is not large enough. The solution with the largest zero-harmonic does not change much. The appearance of eigenvalues with modulus far from unity then means that the eigenvalue solver is not accurate.
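In the earlier sketch, this prescription corresponds to wrapping the harmonic index instead of zeroing the missing blocks (a sketch of this one change, with the same illustrative names):

```python
import numpy as np

def cyclic_block_matrix(harm, mu, M_cut):
    """Variant of the block matrix of eq.(14) with cyclic harmonics:
    m - m' is wrapped into [-M_cut, M_cut], identifying the (M_cut+1)-th
    harmonic with the (-M_cut)-th.  The truncated matrix is then
    exactly unitary."""
    N = 2 * M_cut + 1
    A = np.zeros((2 * N, 2 * N), dtype=complex)
    ms = list(range(-M_cut, M_cut + 1))
    for i, m in enumerate(ms):
        for j, mp in enumerate(ms):
            k = (m - mp + M_cut) % N - M_cut   # cyclic index
            A[2*i:2*i+2, 2*j:2*j+2] = np.exp(1j * m * mu) * harm[k]
    return A
```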
When one adopts the cyclic use of the harmonics, the lemmas \[a\], \[b\], \[d\] and \[g\] hold exactly apart from round off errors, but \[c\] and \[e\] (and accordingly \[f\]) become inaccurate.
## 6 Conclusion
We have shown that the spin tune can be obtained as an eigenvalue of a matrix which is created from the one-turn maps calculated by particle tracking. The method is applicable to any system with linear or nonlinear orbit motion as long as the orbit action variables exist. The convergence is much better than with perturbation methods and the previous SODOM algorithm. The computation is very fast because it makes full use of the fact that the spin motion is linear and that we know the orbit tunes.
Acknowledgements The author thanks Drs. D. Barber, K. Heinemann, G. Hoffstaetter, and M. Vogt for stimulating discussions.
# The classical communication cost of entanglement manipulation: Is entanglement an inter-convertible resource?
## Abstract
Entanglement bits or “ebits” have been proposed as a quantitative measure of a fundamental resource in quantum information processing. For such an interpretation to be valid, it is important to show that the same number of ebits in different forms or concentrations are inter-convertible in the asymptotic limit. Here we draw attention to a very important but hitherto unnoticed aspect of entanglement manipulation — the classical communication cost. We construct an explicit procedure which demonstrates that for bi-partite pure states, in the asymptotic limit, entanglement can be concentrated or diluted with vanishing classical communication cost. Entanglement of bi-partite pure states is thus established as a truly inter-convertible resource.
During the last couple of years the study of quantum non-locality (entanglement) has undergone a substantial transformation. It has become clear that entanglement is a most important aspect of quantum mechanics, which plays a fundamental role in quantum information processing (including teleportation, dense coding, and communication complexity). It is now customary to regard entanglement as a fungible resource, i.e., a resource which can be transformed from one form to another, and can be created, stored or consumed for accomplishing useful tasks. It is however the aim of this paper to draw attention to an important and hitherto ignored aspect of entanglement manipulation which has to be clarified before one can regard entanglement as a completely fungible property. The problem is the classical information cost of entanglement manipulation.
Consider the most famous use of entanglement, namely teleportation. As Bennett et al. have shown, entanglement can be used to communicate unknown quantum states from one place to another; this task can be achieved even though neither the transmitter nor the receiver is able to find out the state to be transmitted.
The basic equation of teleportation is
$$1\ \mathrm{singlet}\quad \mathit{teleports}\quad 1\ \mathrm{qubit}.$$
(1)
Equation (1) already contains a large degree of abstraction. In the original description of teleportation it was shown how a singlet can teleport “an unknown state of a quantum system which lives in a 2-dimensional Hilbert space” (for concreteness, states of a spin 1/2 particle). In Eq. (1) however, instead of states of a spin 1/2 particle we wrote “qubit”, where by qubit we understand the quantum information which can be encoded in one spin 1/2 particle. This information need not be originally encoded in one spin 1/2 particle. It could, for example, be distributed among many spins. Indeed, as Schumacher and others showed, quantum information can be efficiently manipulated — compressed or diluted essentially without losses, similarly to classical information. Thus it makes sense to talk about “the quantum information which could be compressed into a spin 1/2 particle”.
The question is whether we could replace the left-hand side of Eq. (1) by a similar abstract quantity. That is, we would like to be able to say something like
$$1\ \mathrm{ebit}\quad \mathit{teleports}\quad 1\ \mathrm{qubit},$$
(2)
where 1 ebit describes any quantum system which contains entanglement equivalent to that of a singlet.
As a matter of fact, Eq. (2) is in common use. The point is that, at least for pure states, there are efficient ways in which entanglement can be manipulated, and arbitrary states can be transformed — essentially without losses — into singlets. Indeed, suppose that two distant observers, Alice and Bob, initially share a large number $`n`$ of pairs of particles, each pair in the same arbitrary state $`\mathrm{\Psi }`$. Then, by performing suitable local operations and by communicating classically to each other, Alice and Bob can obtain from these $`n`$ copies of the state $`\mathrm{\Psi }`$ some number $`k`$ of pairs, each pair in a singlet state. The action is “essentially without losses” since Alice and Bob can transform the $`k`$ singlets back into $`n`$ $`\mathrm{\Psi }`$s. (The actions are reversible in the asymptotic limit of large $`n`$; the requirement of the asymptotic limit for reversibility is similar to that in compressing classical and quantum information.) The quantity of entanglement of an arbitrary state, measured in ebits, is simply $`k/n`$, the number of singlets which can be obtained reversibly from each pair of particles in the original state $`\mathrm{\Psi }`$.
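For a bi-partite pure state given as a vector of amplitudes, this quantity (in the asymptotic limit) is the von Neumann entropy of either reduced density matrix, which is easy to evaluate numerically; here is a short sketch (Python/NumPy; the interface is our own choice):

```python
import numpy as np

def ebits(state, dA, dB):
    """Entanglement of a bi-partite pure state in ebits: the entropy
    -sum p*log2(p) of the squared Schmidt coefficients p."""
    s = np.linalg.svd(np.asarray(state, dtype=complex).reshape(dA, dB),
                      compute_uv=False)
    p = s**2
    p = p[p > 1e-15]                  # drop numerically zero terms
    return float(-(p * np.log2(p)).sum())

# A maximally entangled state of two qubits carries exactly 1 ebit:
print(ebits([2**-0.5, 0, 0, 2**-0.5], 2, 2))   # -> 1.0
```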
However, an important element is missing. While entanglement is not lost during concentration and dilution by the efficient methods described above, Alice and Bob might have to communicate classically with each other. They thus have to pay the price of exchanging some bits of classical communication.
The classical communication cost of entanglement manipulation is a largely ignored problem. Indeed, the general attitude is that entanglement is “expensive” while classical communication is “cheap”, and all the effort is generally directed only to preserving entanglement by all means. However, to claim that entanglement is truly a fungible resource, one must also consider the classical communication cost of entanglement manipulation.
The classical communication cost of entanglement manipulation has in fact implications for teleportation. Indeed, Eq. (1) which describes the original teleportation is rather incomplete. The complete statement is that
$$1\ \mathrm{singlet}+\mathrm{communicating}\ 2\ \mathrm{classical}\ \mathrm{bits}\quad \mathit{teleports}\quad 1\ \mathrm{qubit}.$$
(3)
Obviously, the more abstract equivalent of this equation, namely
$$1\ \mathrm{ebit}+\mathrm{communicating}\ 2\ \mathrm{classical}\ \mathrm{bits}\quad \mathit{teleports}\quad 1\ \mathrm{qubit},$$
(4)
or, following Bennett’s notation,
$$1\ \mathrm{ebit}+2\ \mathrm{bits}\ge 1\ \mathrm{qubit},$$
(5)
would not be valid if, in transforming the original supply of entanglement (in some arbitrary form) into the singlets required for teleportation, Alice and Bob had to exchange supplementary bits of classical information.
For teleportation the matter seems to be rather academic. It is the entanglement which has the fundamental role, while the classical bits are, to a large extent, secondary — in the absence of entanglement, no matter how many classical bits Alice and Bob exchange, teleportation would be impossible. However, for other quantum communication tasks, the classical communication cost is highly relevant. Consider for example the “dense coding” communication method. As Bennett and Wiesner showed, when Alice and Bob share a singlet, Alice can communicate to Bob two classical bits by sending a single qubit. The basic equation is thus
$$1\ \mathrm{singlet}+\mathrm{communicating}\ 1\ \mathrm{qubit}\quad \mathit{communicates}\quad 2\ \mathrm{classical}\ \mathrm{bits},$$
(6)
whose mathematical abstraction is
$$1\ \mathrm{ebit}+1\ \mathrm{qubit}\ge 2\ \mathrm{bits}.$$
(7)
In dense coding the main goal is to enhance the ability of performing classical communication by using entanglement. However, if in the process of transforming the original supply of arbitrary entanglement into singlet form we had to use a lot of classical communication, this would defeat the objective of the entire exercise.
In the present paper we show that for bi-partite pure states, in the asymptotic limit, entanglement can be transformed — concentrated and diluted — in a reversible manner with zero classical communication cost. Hence, the notion of “ebit” is completely justified. In other words, it does not matter in which form entanglement is supplied; all that matters is the total quantity of entanglement. Provided that they have the same von Neumann entropy, both singlets and partially entangled states have the same power to achieve any task in quantum information processing (in the asymptotic limit).
In order to establish entanglement as a fungible resource, we have to show that both entanglement concentration (transforming arbitrary states into singlets) and entanglement dilution (transforming singlets into arbitrary states) can be done without any classical communication cost. The first task is easy — the original entanglement concentration method proceeds without any classical communication between the parties. In other words, the classical communication cost of the procedure is identically equal to zero. The rest of this paper is devoted to studying entanglement dilution. We will show that, although diluting entanglement may require classical communication, the amount of communication can be made to vanish in the asymptotic limit.
The standard entanglement dilution scheme requires a significant amount of classical communication (two classical bits per ebit). Therefore, it fails to demonstrate the complete inter-convertibility of entanglement. To establish entanglement as a truly fungible resource, we present a new entanglement dilution scheme which conserves entanglement and requires an asymptotically vanishing amount of classical communication. To construct our scheme, we first prove the following.
Lemma: Suppose Alice and Bob share $`n`$ singlets. Let $`\mathrm{\Pi }`$ be the state of a bi-partite system AB where each subsystem has a $`2^n`$-dimensional Hilbert space, and let the Schmidt coefficients of $`\mathrm{\Pi }`$ be $`2^r`$-fold degenerate. Then, there is a procedure by which Alice and Bob can prepare $`\mathrm{\Pi }`$ shared between them such that only $`2(n-r)`$ bits of classical communication and local operations are needed.
Proof: With the $`2^r`$-fold degeneracy in Schmidt coefficients, $`\mathrm{\Pi }`$ can be factorized into a direct product of $`r`$ singlets and a residual state, $`\mathrm{\Gamma }`$, whose Schmidt decomposition contains only $`2^{n-r}`$ terms, i.e., up to bi-local unitary transformations,
$`\mathrm{\Pi }=\mathrm{\Phi }^{\otimes r}\otimes \mathrm{\Gamma },`$
(8)
where $`\mathrm{\Phi }`$ denotes a singlet state. Since Alice and Bob initially share singlets $`\mathrm{\Phi }`$, there is no need to teleport the $`\mathrm{\Phi }`$s. To share $`\mathrm{\Pi }`$ non-locally, Alice only needs to teleport the subsystem $`\mathrm{\Gamma }`$ to Bob. Alice and Bob can then apply bi-local unitary transformations to their state to recover $`\mathrm{\Pi }`$. (We do not know if such local computations can be done efficiently, but this is unimportant here.) Since the dimension of $`\mathrm{\Gamma }`$ is only $`2^{n-r}`$, only $`2(n-r)`$ bits are needed for its teleportation.
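The bookkeeping in this proof is easily mechanized; the following sketch (classical bookkeeping only, with names of our own choosing) extracts $`r`$, the largest power of two dividing all degeneracy counts, and the resulting classical cost $`2(n-r)`$:

```python
import numpy as np

def lemma_cost(schmidt_coeffs):
    """Given the 2^n (squared) Schmidt coefficients of Pi with 2^r-fold
    degeneracies, return r and the classical cost 2*(n - r) in bits."""
    coeffs = np.asarray(schmidt_coeffs, dtype=float)
    n = int(round(np.log2(coeffs.size)))
    _, counts = np.unique(np.round(coeffs, 12), return_counts=True)
    # r = largest power of two dividing every degeneracy count
    r = min(((int(c) & -int(c)).bit_length() - 1) for c in counts)
    return r, 2 * (n - r)

# Example: Pi = (singlet) x (singlet) x (a|00> + b|11>): n = 3, r = 2,
# so sharing Pi costs 2*(3 - 2) = 2 classical bits.
a2, b2 = 0.8, 0.2   # squared Schmidt coefficients of the residual state
coeffs = np.kron(np.kron([0.5, 0.5], [0.5, 0.5]), [a2, b2])
print(lemma_cost(coeffs))   # -> (2, 2)
```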
Remark. Compared with a direct teleportation of the whole state $`\mathrm{\Pi }`$, the above procedure provides a saving of $`2r`$ classical bits of communication because of the $`2^r`$-fold degeneracy of Schmidt coefficients.
The crux of this Letter is the following theorem.
Theorem: In the large $`N`$ limit, $`N`$ copies of any pure bi-partite state $`\psi `$ can be approximated with a fidelity arbitrarily close to $`1`$ by a state that has $`D=2^d=2^{[NS-O(\sqrt{N})]}`$ degeneracies in its Schmidt decomposition, where $`S`$ is the von Neumann entropy of a subsystem of $`\psi `$. In other words, given any $`ϵ>0`$, for a sufficiently large $`N`$, we have
$`\psi ^{\otimes N}=\mathrm{\Phi }^{\otimes d}\otimes \mathrm{\Delta }+u_2`$
(9)
where $`d=[NS-O(\sqrt{N})]`$, $`\mathrm{\Delta }`$ is an un-normalized residual state whose Schmidt decomposition contains $`2^{O(\sqrt{N})}`$ terms, and $`\left\|u_2\right\|<ϵ`$.
Remark: When combined with the Lemma, the Theorem implies that Alice and Bob can perform entanglement dilution from $`NS`$ singlets to $`N`$ copies of $`\psi `$ using an asymptotically vanishing number, namely $`O(\sqrt{N}/N)=O(1/\sqrt{N})`$, of classical bits of communication per ebit. This establishes the main result of this Letter.
Proof of the Theorem: The idea of the proof is simple. We would like to decompose the state $`\psi ^{\otimes N}`$ into two pieces, $`\psi ^{\otimes N}=u_1+u_2`$, such that the dominant piece $`u_1`$ has a large degree of degeneracy in its Schmidt coefficients as required in the Theorem, while $`u_2`$ is small.
While the idea of our proof is general, it is best understood by considering the special case when $`\psi =a|00\rangle +b|11\rangle `$. Consider the Schmidt coefficients of $`\psi ^{\otimes N}`$. They have the form $`a^kb^{N-k}`$ and are, in general, highly degenerate — the coefficient $`a^kb^{N-k}`$ appears $`\left(\genfrac{}{}{0pt}{}{N}{k}\right)`$ times.
The first step of our proof is to note that we can divide the different values of $`k`$ into two classes — “typical” and “atypical”. For a “typical” value of $`k`$, $`\mathrm{log}\left(\genfrac{}{}{0pt}{}{N}{k}\right)`$ lies between $`NS(\psi )-O(\sqrt{N})`$ and $`NS(\psi )+O(\sqrt{N})`$, say between $`NS(\psi )-10\sqrt{N}`$ and $`NS(\psi )+10\sqrt{N}`$. (The actual coefficient of the $`\sqrt{N}`$ term will depend on the value of $`ϵ`$ used in the Theorem. Here, we simply take it to be $`10`$ to illustrate the basic idea of the proof.) All other values of $`k`$ are “atypical”. It is well-known that, compared to the measure of the typical set, the overall measure of the atypical set is very small (i.e., the norm of the projection of $`\psi ^{\otimes N}`$ on the Hilbert subspace spanned by the atypical terms in the Schmidt decomposition is small). We shall include all the atypical terms in $`u_2`$.
Let us now concentrate on the typical terms. According to the requirement of the theorem, all terms in $`u_1=\mathrm{\Phi }^{\otimes d}\otimes \mathrm{\Delta }`$ are degenerate and their degeneracies have a common factor of the order of $`2^d=2^{[NS-O(\sqrt{N})]}`$. If the degrees of degeneracy of the typical terms all had a common factor of the order of $`2^{[NS-O(\sqrt{N})]}`$, we could include all these terms in $`u_1`$, and the proof would be complete. Unfortunately, although indeed each term in the typical set has a degeneracy of the order $`2^{[NS-O(\sqrt{N})]}`$, when one varies $`k`$ over the typical set, the various values of $`\left(\genfrac{}{}{0pt}{}{N}{k}\right)`$ do not have a large common factor. To deal with this problem we “coarse-grain” the number of terms of the Schmidt decomposition, grouping them in bins of, say, $`2^{NS(\psi )-20\sqrt{N}}`$ terms. More concretely, for each $`k`$ in the typical set, let the number of full bins $`n_k`$ be such that
$$n_k2^{NS(\psi )-20\sqrt{N}}\le \left(\genfrac{}{}{0pt}{}{N}{k}\right)<(n_k+1)2^{NS(\psi )-20\sqrt{N}}.$$
(10)
We simply keep only $`n_k2^{NS(\psi )-20\sqrt{N}}`$ out of the original $`\left(\genfrac{}{}{0pt}{}{N}{k}\right)`$ terms in $`u_1`$ and put the remaining $`\left(\genfrac{}{}{0pt}{}{N}{k}\right)-n_k2^{NS(\psi )-20\sqrt{N}}<2^{NS(\psi )-20\sqrt{N}}`$ terms in $`u_2`$. Now $`n_k`$ is at least of the order $`2^{10\sqrt{N}}`$ and is, therefore, very large. Consider $`u_1`$. The degeneracies of its Schmidt coefficients are multiples of $`2^{NS(\psi )-20\sqrt{N}}`$, hence we can write $`u_1=\mathrm{\Phi }^{\otimes d}\otimes \mathrm{\Delta }`$ where $`d=NS(\psi )-20\sqrt{N}`$.
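The bookkeeping above is easy to check numerically. The sketch below (a minimal illustration, assuming the sample distribution $`a^2=0.64`$, $`b^2=0.36`$ and the coefficients $`10`$ and $`20`$ used in the text) carries out the binning for $`N=1000`$ and confirms that the total squared norm relegated to $`u_2`$ is negligible.

```python
import math

a2, b2 = 0.64, 0.36                 # |a|^2, |b|^2 for psi = a|00> + b|11>
N = 1000
S = -(a2*math.log2(a2) + b2*math.log2(b2))   # entropy of a subsystem of psi
d = int(N*S - 20*math.sqrt(N))               # bin-size exponent from the text
B = 1 << d                                   # bins of 2^{NS - 20 sqrt(N)} terms

def log2_comb(n, k):
    return (math.lgamma(n+1) - math.lgamma(k+1) - math.lgamma(n-k+1)) / math.log(2)

discarded = 0.0                              # squared norm assigned to u_2
for k in range(N + 1):
    logC = log2_comb(N, k)
    w = 2.0**(logC + k*math.log2(a2) + (N-k)*math.log2(b2))  # weight of level k
    if abs(logC - N*S) > 10*math.sqrt(N):    # atypical k: all of it goes to u_2
        discarded += w
    else:                                    # typical k: only the partial bin
        C = math.comb(N, k)
        discarded += w * (C % B) / C
print(discarded)                             # << 1, so ||u_2|| is indeed small
```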
Let us now summarize. By construction, the state $`u_1`$ is of the form $`\mathrm{\Phi }^{\otimes d}\otimes \mathrm{\Delta }`$. The norm $`\|u_2\|`$ is very small for two reasons: 1) the contribution to $`u_2`$ from the atypical set is small and 2) for each $`k`$ in the typical set, its contribution to $`u_1`$ is at least $`n_k`$ times its contribution to $`u_2`$, where $`n_k`$ is very large. Consequently, $`\psi ^N=u_1+u_2=\mathrm{\Phi }^{\otimes d}\otimes \mathrm{\Delta }+u_2`$ where $`d=[NS-O(\sqrt{N})]`$, $`\mathrm{\Delta }`$ is an un-normalized residual state of $`2^{O(\sqrt{N})}`$ dimensions, and $`\|u_2\|`$ is very small. Q.E.D.
In conclusion, we have shown that entanglement dilution from $`N[S(\psi )+\delta ]`$ singlets to $`N`$ pairs of a bi-partite pure state $`\psi `$ can be done with only $`O(\sqrt{N})`$ bits of classical communication. So the number of classical bits per ebit needed is $`O(\frac{1}{\sqrt{N}})`$, which vanishes asymptotically. In other words, states with the same amount of bi-partite entanglement are inter-convertible in the asymptotic limit (with a vanishing amount of classical bits of communication per ebit). Therefore, entanglement bits or “ebits” can be regarded as a universal quantum resource, as originally proposed by Bennett and others.
The above discussion has been done for the case of pairs of two spin 1/2 particles in pure states. The generalization to pure states of pairs of higher spin particles is immediate. However, generalization towards multi-particle entanglement and/or density matrices is problematic.
In the case of pure-state multi-party entanglement, not only do we not know the classical communication cost of transforming entangled states from one form into another, but it is also not yet clear whether there exists a reversible procedure which can transform (in the asymptotic limit) $`n`$ copies of an arbitrary multi-party pure state $`\mathrm{\Psi }`$ into some standard entangled state (or set of states ). In fact, it is not even clear what the standard entangled states should be. The existence of such a procedure is, however, quite probable.
The case of density matrices is even more complicated. Here, even in the simplest case of pairs of spin 1/2 particles, it is probable that reversible transformations do not exist at all. That is, although arbitrary entangled density matrices can be prepared from singlets, and then singlets can be reconstructed from the density matrices, the number $`k_{in}`$ of spins necessary to create $`n`$ copies of an arbitrary density matrix is probably always larger than the number $`k_{out}`$ of spins which can be obtained from the $`n`$ density matrices. (Following the terminology of , the entanglement of formation is larger than the entanglement of distillation.) If this is indeed the case, it is then probable that these transformations require non-negligible classical communication. Actually, a reasonable conjecture is that there exists a very close connection (possibly a sort of conservation relation) between the amount of irreversibility in the transformation singlets $`\to `$ density matrices $`\to `$ singlets and the amount of classical communication needed for this process.
Finally, we would like to add some more general remarks. If we restrict the actions one is allowed to perform on the entangled states, entanglement might no longer be inter-convertible. For example, if we do not allow collective processing but insist that each pair of entangled particles should be processed separately, then entanglement is no longer inter-convertible. Indeed, while one could still produce singlets from partially entangled states such as $`\alpha |1\rangle |1\rangle +\beta |2\rangle |2\rangle `$ by using the Procrustean method , this action is not reversible (that is, the overall probability of success for the chain of actions initial state $`\to `$ singlet $`\to `$ initial state is less than 1).
Thus entanglement is a fungible resource only when no restrictions are placed on the allowed entanglement manipulation procedures. This raises the question of what exactly we mean by the “unrestricted” set of actions. The usual paradigm of manipulating entanglement is that of “collective local actions + classical communication”, and the basic statement is that:
“Entanglement cannot increase by collective local actions and classical communications.”
However, in the light of the new effects discovered by R., P., and M. Horodecki, that is, the existence of bound entanglement and especially the possibility of activating bound entanglement , this paradigm might turn out to be insufficient. And, indeed, it is very restrictive. After all, why not also allow quantum communication? It is true that quantum communication does not conserve entanglement and permits creation of entanglement out of nothing. However, there is no reason why such non-conservation could not be easily kept under control. We would thus suggest the paradigm of “collective local actions + classical communication + quantum communication”, and the basic statement that
“By local actions, classical communications and $`N`$ qubits of quantum communication, entanglement cannot increase by more than $`N`$ e-bits.”
Acknowledgements We would like to thank C. H. Bennett, D. Gottesman, and R. F. Werner for helpful discussions. Part of this work was completed during the 1997 Elsag-Bailey – I.S.I. Foundation research meeting on quantum computation.
# Thomas-Ehrman shifts in nuclei around 16O and role of residual nuclear interaction
## 1 Introduction
Structures of proton-rich nuclei are important for the rapid-proton ($`rp`$) process of nucleosynthesis, which takes place in the hydrogen-burning stage at stellar sites. Since the strong interaction preserves charge symmetry very well and the Coulomb energies are almost state-independent within a nuclide, energy spectra are quite analogous between mirror nuclei. Hence we usually estimate the level structures of $`Z>N`$ exotic nuclei from their mirror partners. However, for example, the excitation energies of the $`1/2^+`$ first excited states in <sup>13</sup>C and <sup>13</sup>N show a large discrepancy, which is called the Thomas-Ehrman shift (TES). (There has been some confusion over the term ‘Thomas-Ehrman shift’; in some references it is used in the restrictive meaning of a specific effect of the Coulomb force. In this paper we use it in the more general sense of the level shift between mirror nuclei.) The TES may have a significant influence on the scenario of the $`rp`$ process, and it is highly desirable to predict the TES correctly.

Recent experiments provide us with valuable information on energy levels of $`Z>N`$ nuclei around <sup>16</sup>O. The TES has conventionally been regarded as an effect of the Coulomb force on a loosely bound or unbound proton occupying an $`s`$-orbit. With the aid of the recent data, it is becoming possible to argue whether this mechanism is sufficient to account for the TES in various mirror nuclei. The difference in the single-particle (s.p.) energies leads, in general, to a certain difference in the s.p. wave functions between protons and neutrons. This affects the matrix elements of the residual nuclear interaction (RNI), even though the original $`NN`$ force is charge symmetric. The question is whether this effect on the TES is sizable. Since the nuclear interaction has short-range character, the RNI is expected to become weaker as the s.p. wave functions spread over a wider region. The RNI reduction due to the broad radial distribution of the s.p. wave functions typically amounts only to a few percent, which does not cause a notable TES for low-lying states. However, because a loosely bound $`s`$-orbit can have a very long tail owing to the lack of the centrifugal barrier, levels involving such an $`s`$-orbit may receive a substantial RNI contribution to the TES.

In this paper we investigate the TES in $`A\simeq 16`$ nuclei from a phenomenological viewpoint, primarily focusing on effects of the RNI on the TES, via the data of the <sup>16</sup>N–<sup>16</sup>F, <sup>15</sup>C–<sup>15</sup>F and <sup>16</sup>C–<sup>16</sup>Ne mirror pairs.
## 2 Phenomenological study of Thomas-Ehrman shifts around <sup>16</sup>O
### 2.1 <sup>16</sup>N–<sup>16</sup>F
We shall take the $`(0p_{1/2})^{-n_1}(0d_{5/2}1s_{1/2})^{n_2}`$ model space on top of the <sup>16</sup>O inert core. For neutron-rich nuclei with $`Z\le 8\le N`$, $`n_1=8-Z`$ and $`n_2=N-8`$, and vice versa for their mirror partners. The single-particle (hole) energies are obtained from the data of <sup>17</sup>O and <sup>17</sup>F (<sup>15</sup>N and <sup>15</sup>O). Taking into account their mass differences from <sup>16</sup>O, we have (in MeV)
$`ϵ_\mathrm{n}(0d_{5/2})=-4.144,ϵ_\mathrm{n}(1s_{1/2})=-3.273,`$
$`ϵ_\mathrm{p}(0d_{5/2})=-0.600,ϵ_\mathrm{p}(1s_{1/2})=-0.105,`$ (1)
and
$$ϵ_\mathrm{p}(0p_{1/2}^{-1})=12.128,ϵ_\mathrm{n}(0p_{1/2}^{-1})=15.664.$$
(2)
The shift in $`E_x(1/2^+)`$ of <sup>17</sup>F (i.e. $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}\equiv ϵ_\mathrm{p}(1s_{1/2})-ϵ_\mathrm{p}(0d_{5/2})=0.495`$ MeV) from that of <sup>17</sup>O (i.e. $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}\equiv ϵ_\mathrm{n}(1s_{1/2})-ϵ_\mathrm{n}(0d_{5/2})=0.871`$ MeV) is a typical TES. Because the proton in the $`1s_{1/2}`$ orbit is loosely bound and free from the influence of the centrifugal barrier, its wave function spreads in the radial direction (like a halo or skin structure), leading to weaker Coulomb repulsion than for $`0d_{5/2}`$. This difference in the Coulomb energy gives rise to the TES for such a core plus one-particle system. The mechanism by which the energy shift of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ from $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}`$ arises has recently been investigated in some detail in Ref..

The observed energy spectra of <sup>16</sup>N and <sup>16</sup>F, the $`T_z=\pm 1`$ mirror nuclei, show a remarkable difference. Even the ground-state spins do not match, being $`2^-`$ in <sup>16</sup>N and $`0^-`$ in <sup>16</sup>F. On top of the <sup>16</sup>O core, the lowest four states in these nuclei ($`2^-,0^-,3^-,1^-`$ in <sup>16</sup>N and $`0^-,1^-,2^-,3^-`$ in <sup>16</sup>F) are classified into the $`|0p_{1/2}^{-1}0d_{5/2};J=2^-,3^-\rangle `$ and $`|0p_{1/2}^{-1}1s_{1/2};J=0^-,1^-\rangle `$ multiplets. The difference in the low-lying energy spectra is linked to the relatively low energy of the proton $`1s_{1/2}`$ orbit: the smaller $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$, compared with $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}`$ in Eq. (1), shifts down the $`0^-`$ and $`1^-`$ states of <sup>16</sup>F. However, the observed TES in the $`0^-`$ and $`1^-`$ states amounts to about $`0.70`$ MeV on average, notably larger than $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}-\mathrm{\Delta }ϵ_\mathrm{p}^{sd}=0.376`$ MeV. It is likely that the two-body RNI contributes substantially to this TES.

Since the last proton is unbound in <sup>16</sup>F while bound in <sup>17</sup>F, the relative energy of $`(1s_{1/2})_\mathrm{p}`$ may be lowered further in <sup>16</sup>F, mainly by the effect of the Coulomb force. However, it is not simple to evaluate separately the RNI and the nucleus-dependence of $`ϵ_\mathrm{p}(1s_{1/2})`$ (or $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$). We here assume the $`ϵ`$’s of Eqs. (1,2), ignoring the nucleus-dependence of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$; a justification will be given later. The matrix elements of the residual proton-neutron interaction between a $`0p_{1/2}`$ hole and an $`s,d`$ particle can then be derived from the experimental levels of <sup>16</sup>N and <sup>16</sup>F. The results are presented in Table 1. We here denote the diagonal matrix element $`\langle (j_1)_\rho (j_2)_\tau ;J|V|(j_1)_\rho (j_2)_\tau ;J\rangle `$ by $`V_{\rho \tau }(j_1j_2;J)`$, where $`(\rho ,\tau )=(\mathrm{p},\mathrm{n})`$. For example, $`V_{\mathrm{pn}}(0p_{1/2}^{-1}0d_{5/2};J)=\langle (0p_{1/2}^{-1})_\mathrm{p}(0d_{5/2})_\mathrm{n};J|V|(0p_{1/2}^{-1})_\mathrm{p}(0d_{5/2})_\mathrm{n};J\rangle `$ and $`V_{\mathrm{np}}(0p_{1/2}^{-1}0d_{5/2};J)=\langle (0p_{1/2}^{-1})_\mathrm{n}(0d_{5/2})_\mathrm{p};J|V|(0p_{1/2}^{-1})_\mathrm{n}(0d_{5/2})_\mathrm{p};J\rangle `$. While the matrix elements $`V_{\mathrm{np}}(0p_{1/2}^{-1}0d_{5/2};J)`$ deduced from <sup>16</sup>F are similar to $`V_{\mathrm{pn}}(0p_{1/2}^{-1}0d_{5/2};J)`$ from <sup>16</sup>N, the elements regarding $`1s_{1/2}`$, $`V_{\mathrm{pn}}(0p_{1/2}^{-1}1s_{1/2};J)`$ and $`V_{\mathrm{np}}(0p_{1/2}^{-1}1s_{1/2};J)`$, show an obvious discrepancy.
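A trivial arithmetic check of the single-particle part of this shift (a minimal Python sketch, using the sign-restored energies of Eq. (1)):

```python
# Single-particle energies (MeV) taken from Eq. (1)
eps_n = {"0d5/2": -4.144, "1s1/2": -3.273}
eps_p = {"0d5/2": -0.600, "1s1/2": -0.105}

d_eps_n = eps_n["1s1/2"] - eps_n["0d5/2"]   # 0.871 MeV = E_x(1/2+) of 17O
d_eps_p = eps_p["1s1/2"] - eps_p["0d5/2"]   # 0.495 MeV = E_x(1/2+) of 17F
print(d_eps_n - d_eps_p)                    # 0.376 MeV: well below the ~0.70 MeV
                                            # TES observed for the 0-,1- doublet
```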
The $`V_{\mathrm{np}}(0p_{1/2}^{-1}1s_{1/2};J)`$ element is smaller by a factor of about 0.7 than $`V_{\mathrm{pn}}(0p_{1/2}^{-1}1s_{1/2};J)`$, both for $`J=0^-`$ and $`1^-`$. This reduction of the proton-neutron RNI matrix elements indicates that the TES in the <sup>16</sup>N–<sup>16</sup>F pair owes a part to the nuclear force, not only to the Coulomb force which relatively shifts down $`ϵ_\mathrm{p}(1s_{1/2})`$. The reduction of $`V_{\mathrm{np}}`$ compared with $`V_{\mathrm{pn}}`$ should originate in the difference of the s.p. radial wave functions between a proton and a neutron. The $`V_{\mathrm{pp}}`$–$`V_{\mathrm{nn}}`$ difference in the RNI has been investigated over a wide mass region, yielding a few percent reduction of $`V_{\mathrm{pp}}`$ relative to $`V_{\mathrm{nn}}`$ as a result of the proton–neutron difference in the s.p. wave functions. This coincides with the $`V_{\mathrm{np}}/V_{\mathrm{pn}}`$ ratio for $`0d_{5/2}`$ shown in Table 1. On the other hand, the present RNI reduction with respect to $`1s_{1/2}`$ is remarkably stronger than the global systematics.

The strong reduction of the RNI is possibly an effect of the loosely bound $`s`$-orbit. Because of the lack of the centrifugal barrier, the radial function of the $`s`$-orbit depends appreciably on the separation energy. Since the proton $`1s_{1/2}`$ orbit is loosely bound in the proton-rich nuclei of this mass region, the $`1s_{1/2}`$ proton wave function $`R_{1s_{1/2}}(r_\mathrm{p})`$ distributes with a long tail and its spatial overlap with another nucleon is suppressed, in comparison with $`R_{1s_{1/2}}(r_\mathrm{n})`$. Therefore the nuclear force is expected to give appreciably smaller matrix elements when loosely bound or unbound protons are involved. We shall examine this mechanism in Section 3. It is noted that the residual interaction relevant to the low-lying levels is repulsive in <sup>16</sup>N–<sup>16</sup>F, because of the particle-hole conjugation. In addition to the small $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ relative to $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}`$, the reduction of the repulsive RNI lowers the levels involving $`(1s_{1/2})_\mathrm{p}`$ relative to those having $`(0d_{5/2})_\mathrm{p}`$. Thus the TES is enhanced in this pair of mirror nuclei.

We shall investigate other mirror pairs based on the above phenomenological Hamiltonian. The validity of the current approach is tested via a systematic study of this sort, whereas for the time being we put aside the possibility of a nucleus-dependence of the $`ϵ`$’s. As will be shown, the systematic study supports the RNI-reduction picture of the TES.
### 2.2 <sup>15</sup>C–<sup>15</sup>F
By using the empirical $`ϵ`$’s of Eqs. (1,2) and the $`V`$’s of Table 1, we calculate energies of the low-lying $`5/2^+`$ and $`1/2^+`$ levels of <sup>15</sup>C and <sup>15</sup>F ($`T_z=\pm 3/2`$) in the $`(0p_{1/2})^{-2}(0d_{5/2}1s_{1/2})^1`$ model space. The calculated energy levels are shown in Fig. 1 together with the experimental data. A level inversion occurs: the $`1/2^+`$ states, instead of $`5/2^+`$, become the lowest for both nuclei. This inversion is reproduced by the shell-model Hamiltonian, owing to the repulsion shown in Table 1, which is stronger between $`0p_{1/2}^{-1}`$ and $`0d_{5/2}`$ than between $`0p_{1/2}^{-1}`$ and $`1s_{1/2}`$. The $`5/2^+`$ level is observed at $`E_x=0.740`$ MeV in <sup>15</sup>C, while at 1.300 MeV in <sup>15</sup>F. The shell model yields $`E_x(5/2^+)=0.563`$ MeV for <sup>15</sup>C and 1.400 MeV for <sup>15</sup>F. The TES in the <sup>15</sup>C–<sup>15</sup>F pair is thus described with reasonable accuracy, though slightly overestimated, within the framework of the phenomenological shell model. The weaker repulsion in $`V_{\mathrm{np}}(0p_{1/2}^{-1}1s_{1/2};J)`$ than in $`V_{\mathrm{pn}}(0p_{1/2}^{-1}1s_{1/2};J)`$ plays an appreciable role in the TES. The $`V_{\mathrm{pp}}(0p_{1/2}^{-2};J=0^+)`$ and $`V_{\mathrm{nn}}(0p_{1/2}^{-2};J=0^+)`$ matrix elements, which do not affect the excitation energies, can be evaluated from the ground-state energies of <sup>14</sup>C and <sup>14</sup>O. We can then calculate the absolute values of the energies of the <sup>15</sup>C and <sup>15</sup>F levels, not only the excitation energies. The biggest discrepancy is found in $`E(1/2^+)`$ of <sup>15</sup>C, which is overestimated by 0.166 MeV, whereas the other energies are reproduced within 0.1 MeV accuracy. This may suggest that an additional effect is missing for the $`1s_{1/2}`$ neutron, whose separation energy is small (1.218 MeV) in <sup>15</sup>C.
### 2.3 <sup>16</sup>C–<sup>16</sup>Ne
The <sup>16</sup>C–<sup>16</sup>Ne pair ($`T_z=\pm 2`$) is significant as well in investigating the effect of the RNI on the TES. The low-lying states of these nuclei have the $`0p_{1/2}^{-2}(0d_{5/2}1s_{1/2})^2`$ configuration. Since the $`0p_{1/2}^{-2}`$ part does not contribute to the excitation energy, the TES can disclose the difference between the proton-proton ($`V_{\mathrm{pp}}`$) and neutron-neutron ($`V_{\mathrm{nn}}`$) interactions in the $`sd`$-shell. As an effective interaction in the $`sd`$-shell, the so-called USD interaction has been used successfully. Although the USD interaction is derived for the full $`sd`$-shell calculation, we neglect the $`0d_{3/2}`$ components, since the $`0d_{3/2}`$ orbit is hardly occupied in low-lying states of the nuclei around <sup>16</sup>O. Indeed, we can well reproduce the low-lying levels of <sup>18</sup>O with the USD interaction in the $`(0d_{5/2}1s_{1/2})^2`$ space.

We carry out the shell-model calculation in the $`0p_{1/2}^{-2}(0d_{5/2}1s_{1/2})^2`$ space, with the Hamiltonian comprised of the empirical $`ϵ`$’s and $`V`$’s (see Eqs. (1,2) and Table 1) as well as of the USD matrix elements. The binding energy of <sup>16</sup>C is reproduced with an accuracy of about 0.1 MeV. In computing the binding energy of <sup>16</sup>Ne, the residual two-body Coulomb interaction has to be estimated. With the s.p. wave functions in the Woods-Saxon potential, which will be mentioned in Section 3, the two-body Coulomb force yields an approximately constant energy shift of about 0.4 MeV for low-lying levels, within an accuracy of 0.1 MeV. If we use the charge-symmetric (i.e. $`V_{\mathrm{pp}}=V_{\mathrm{nn}}`$) USD interaction with this Coulomb correction, the binding energy of <sup>16</sup>Ne is seriously overestimated, by as much as 0.8 MeV.

Because of the level inversion in <sup>15</sup>C and <sup>15</sup>F, $`1s_{1/2}`$ lies lower than $`0d_{5/2}`$ in an effective sense. Thereby the ground state consists mainly of the $`0p_{1/2}^{-2}1s_{1/2}^2`$ configuration, with a small admixture of $`0p_{1/2}^{-2}0d_{5/2}^2`$. It is likely that the RNI matrix elements involving $`(1s_{1/2})_\mathrm{p}`$ suffer some amount of reduction, because the $`1s_{1/2}`$ protons are loosely bound (or unbound). For this reason we reduce the USD matrix elements concerning the $`(1s_{1/2})_\mathrm{p}`$ orbit by an overall factor $`\xi _s`$, while not changing the other matrix elements. The binding energy of <sup>16</sup>Ne is found to be reproduced if we set $`\xi _s\simeq 0.7`$ (i.e. $`V_{\mathrm{pp}}(0d_{5/2}1s_{1/2};J)\simeq 0.7\times V_{\mathrm{nn}}(0d_{5/2}1s_{1/2};J)`$, and so forth). It is notable that this factor is close to the proton-neutron ratio $`V_{\mathrm{np}}/V_{\mathrm{pn}}`$ concerning $`1s_{1/2}`$ in Table 1.

Recently $`E_x(0_2^+)`$ of <sup>16</sup>Ne has been reported, indicating a large TES: $`E_x(0_2^+)`$ of <sup>16</sup>Ne is lower by about 1 MeV than that of <sup>16</sup>C. In Fig. 2 the results of the $`\xi _s=1`$ (i.e. no reduction of the RNI) and $`\xi _s=0.7`$ cases are compared with the experimental data. As has been noted, the $`1s_{1/2}`$ energy relative to $`ϵ(0d_{5/2})`$ is deeper for protons than for neutrons. This tends to lower the ground state of <sup>16</sup>Ne, whose main configuration is $`0p_{1/2}^{-2}1s_{1/2}^2`$. Thus, if we use the charge-symmetric USD interaction (i.e. $`\xi _s=1`$), $`E_x(0_2^+)`$ of <sup>16</sup>Ne necessarily becomes higher than that of <sup>16</sup>C.
The recent data on $`E_x(0_2^+)`$ clearly favor the reduction of the RNI regarding the $`(1s_{1/2})_\mathrm{p}`$ orbit. The reduction with $`\xi _s=0.7`$ almost reproduces $`E_x(0_2^+)`$ of <sup>16</sup>Ne, as well as the binding energy. Note that the residual Coulomb force does not contribute seriously to the TES, as far as the number of valence protons is not large. With the same reduction $`\xi _s=0.7`$, the known energy spectra of <sup>17</sup>Ne are also reproduced.
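To make the role of $`\xi _s`$ concrete, the following schematic two-level sketch diagonalizes a $`2\times 2`$ Hamiltonian for the $`0^+`$ states of <sup>16</sup>Ne; the pairing matrix elements in it are hypothetical placeholders rather than the actual USD values, so only the direction of the shift of $`E_x(0_2^+)`$ is meaningful.

```python
import numpy as np

def ex_0plus(xi_s):
    """Schematic 2x2 model for the 0+ states of 16Ne in the proton
    {(1s1/2)^2, (0d5/2)^2} basis; the common 0p1/2^-2 part is dropped.
    All V's below are hypothetical placeholders, NOT the actual USD values."""
    eps_1s, eps_0d = -0.105, -0.600          # proton s.p. energies (MeV), Eq. (1)
    V_ss, V_dd, V_sd = -6.0, -3.0, -1.5      # placeholder pairing matrix elements
    # only the elements touching the proton 1s1/2 orbit are scaled by xi_s:
    H = np.array([[2*eps_1s + xi_s*V_ss, xi_s*V_sd],
                  [xi_s*V_sd,            2*eps_0d + V_dd]])
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

for xi in (1.0, 0.7):
    print(f"xi_s = {xi}: E_x(0+_2) = {ex_0plus(xi):.2f} MeV")
# xi_s = 0.7 lowers E_x(0+_2) relative to xi_s = 1, as the 16Ne data require
```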
### 2.4 Discussion — $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ decrease vs. RNI reduction
We now consider the possibility of a nucleus-dependence of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}=ϵ_\mathrm{p}(1s_{1/2})-ϵ_\mathrm{p}(0d_{5/2})`$. In extracting the RNI matrix elements from <sup>16</sup>N–<sup>16</sup>F, we have assumed the s.p. energies taken from the <sup>17</sup>O–<sup>17</sup>F data. One may argue that in the <sup>16</sup>F nucleus $`(1s_{1/2})_\mathrm{p}`$ receives less Coulomb repulsion than in <sup>17</sup>F, because the last proton is unbound, and that the TES in <sup>16</sup>N–<sup>16</sup>F should be ascribed to the corresponding lowering of $`ϵ_\mathrm{p}(1s_{1/2})`$ relative to $`ϵ_\mathrm{p}(0d_{5/2})`$, instead of to the reduction of the RNI. As long as we view only the <sup>16</sup>N–<sup>16</sup>F data, the RNI reduction does not seem to be an exclusive solution. In this regard, the TES in the <sup>16</sup>C–<sup>16</sup>Ne pair is of particular importance. Since <sup>16</sup>Ne has negative $`S_{2p}`$ (two-proton separation energy), $`ϵ_\mathrm{p}(1s_{1/2})`$ (relative to $`ϵ_\mathrm{p}(0d_{5/2})`$) goes down in <sup>16</sup>Ne from the value obtained in <sup>17</sup>F, if we follow the argument of the nucleus-dependence of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ as in <sup>16</sup>F. However, this makes $`E_x(0_2^+)`$ in <sup>16</sup>Ne even higher than in the $`\xi _s=1`$ case of Fig. 2, where we already have the wrong sign of the TES. It is noted that the RNI derived from <sup>16</sup>N–<sup>16</sup>F (shown in Table 1) is repulsive because of the particle-hole nature, while the $`sd`$-shell interaction crucial to the TES in the <sup>16</sup>C–<sup>16</sup>Ne pair is attractive. Thereby, the $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ decrease and the RNI reduction give opposite effects for the loosely bound (or unbound) $`s`$-orbit in <sup>16</sup>C–<sup>16</sup>Ne. If only the $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ decrease is considered, an obvious contradiction with the data results. Thus, the lower $`E_x(0_2^+)`$ in <sup>16</sup>Ne than in <sup>16</sup>C cannot be described without the reduction of the RNI for $`(1s_{1/2})_\mathrm{p}`$. The data imply that, although a nucleus-dependence of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$ may exist, its effect on the TES seems much less significant than that of the RNI reduction. On the contrary, the reduction of the RNI naturally accounts for the TES’s around <sup>16</sup>O, in particular those of <sup>16</sup>N–<sup>16</sup>F and <sup>16</sup>C–<sup>16</sup>Ne, simultaneously.
### 2.5 $`E_x(3^+)`$ of <sup>18</sup>Ne
The TES is not apparent for the lowest-lying states of the <sup>18</sup>O–<sup>18</sup>Ne mirror pair, since their main configuration is $`0d_{5/2}^2`$. On the other hand, the lowest $`3^+`$ state, which is observed at $`E_x=5.378`$ MeV in <sup>18</sup>O, is expected to have the $`0d_{5/2}1s_{1/2}`$ configuration. Because the $`1s_{1/2}`$ orbit is involved, the TES for this state may be sizable. The energy of the $`3^+`$ state of <sup>18</sup>Ne is important in the scenario of the $`rp`$ process. Whereas an earlier experiment gives $`E_x=4.56`$ MeV, this $`3^+`$ state has not been established by recent experiments. By using the USD interaction with the reduction factor $`\xi _s=0.7`$ for $`(1s_{1/2})_\mathrm{p}`$ (together with the $`ϵ_\mathrm{p}`$’s deduced from <sup>17</sup>F), we evaluate a TES of 0.86 MeV for this $`3^+`$ state. This leads to $`E_x(3_1^+)\simeq 4.5`$ MeV, in good agreement with Ref..
## 3 Mechanism of RNI reduction
We next study the mechanism of the RNI reduction concerning the proton $`1s_{1/2}`$ orbit, from a qualitative (or semi-quantitative) standpoint. As has been pointed out, it is likely that the RNI reduction is connected to the broad distribution of $`R_{1s_{1/2}}(r_\mathrm{p})`$. The amount of the reduction, however, is notably large compared with the global trend, which has been estimated to be a few percent. It is a question whether $`R_{1s_{1/2}}(r_\mathrm{p})`$ distributes so broadly as to give an RNI reduction of about 30%, despite the presence of the Coulomb barrier.

It is not an easy task to estimate the RNI matrix elements microscopically to good precision. Instead, we view the proton-neutron ratios of the RNI matrix elements ($`V_{\mathrm{np}}/V_{\mathrm{pn}}`$ and $`V_{\mathrm{pp}}/V_{\mathrm{nn}}`$) of the M3Y interaction. The M3Y force basically represents the $`G`$-matrix and is therefore somewhat realistic, and it enables us to avoid tedious computation. To take into account effects of the loose binding with the centrifugal barrier and the Coulomb barrier, the single-particle wave functions are obtained under the Woods-Saxon (WS) plus Coulomb potential. We adopt the WS parameters of Ref. at <sup>16</sup>O, varying the WS potential depth $`V_0`$ around the normal value $`51\mathrm{MeV}`$. Even if the absolute values of the RNI matrix elements are not reliable, the proton-neutron ratios carry certain information, because they depend mainly on the proton-neutron difference of the s.p. wave functions. Note that core polarization effects, which should be included in the shell-model interaction, are not taken into consideration in this WS+M3Y calculation. The present proton-neutron ratios thus convey only the qualitative (or semi-quantitative) nature of the RNI, and they do not need to match precisely the shell-model ones (for instance, Table 1). The WS potential with $`V_0=53`$ gives (in MeV)
$`ϵ_\mathrm{n}(0d_{5/2})=-7.52,ϵ_\mathrm{n}(1s_{1/2})=-4.80,`$
$`ϵ_\mathrm{p}(0d_{5/2})=-3.90,ϵ_\mathrm{p}(1s_{1/2})=-1.46.`$ (3)
As the potential becomes shallower, the s.p. orbits are bound more loosely. Indeed, at $`V_0=51`$ we have
$`ϵ_\mathrm{n}(0d_{5/2})=-6.36,ϵ_\mathrm{n}(1s_{1/2})=-3.98,`$
$`ϵ_\mathrm{p}(0d_{5/2})=-2.80,ϵ_\mathrm{p}(1s_{1/2})=-0.76,`$ (4)
and at $`V_0=49`$
$`ϵ_\mathrm{n}(0d_{5/2})=-5.23,ϵ_\mathrm{n}(1s_{1/2})=-3.22,`$
$`ϵ_\mathrm{p}(0d_{5/2})=-1.75,ϵ_\mathrm{p}(1s_{1/2})=-0.14.`$ (5)
Whereas the wave function is insensitive to $`ϵ`$ for the deeply bound orbits, this is not the case for the loosely bound orbit $`(1s_{1/2})_\mathrm{p}`$. The variation of the wave function is typically measured by the mean radius of the orbit, $`r_\rho (j)\equiv \sqrt{\langle (j)_\rho |r^2|(j)_\rho \rangle }`$ ($`\rho =\mathrm{p},\mathrm{n}`$). By varying the WS parameter $`V_0`$, we see how the RNI as well as $`r_\rho (j)`$ behave as $`ϵ`$ changes.

For the M3Y matrix elements, the proton-neutron ratios $`V_{\mathrm{np}}/V_{\mathrm{pn}}`$ with respect to the $`(0p_{1/2})^{-1}(0d_{5/2}1s_{1/2})^1`$ two-body states and $`V_{\mathrm{pp}}/V_{\mathrm{nn}}`$ with respect to the $`(0d_{5/2}1s_{1/2})^2`$ states are depicted in Fig. 3. Though the ratios of the off-diagonal elements $`\langle (1s_{1/2}^2)_\rho ;0^+|V|(0d_{5/2}^2)_\rho ;0^+\rangle `$ and $`\langle (0d_{5/2}1s_{1/2})_\rho ;2^+|V|(0d_{5/2}^2)_\rho ;2^+\rangle `$ are not shown, they behave in a manner similar to those of the diagonal elements involving $`1s_{1/2}`$. The proton-neutron ratios of the rms radii of the s.p. orbits are also presented in Fig. 3 for $`j=0d_{5/2}`$ and $`1s_{1/2}`$. As expected, $`R_{1s_{1/2}}(r_\mathrm{p})`$ distributes over a broader region than $`R_{1s_{1/2}}(r_\mathrm{n})`$, to a certain extent: in Fig. 3, the rms radius of $`(1s_{1/2})_\mathrm{p}`$ is larger by about 10–20% than that of $`(1s_{1/2})_\mathrm{n}`$, somewhat depending on $`V_0`$. In contrast, the rms radius of $`(0d_{5/2})_\mathrm{p}`$ differs only by a few percent from that of $`(0d_{5/2})_\mathrm{n}`$, and is insensitive to $`V_0`$.

From Fig. 3 we confirm the following two points: (a) the RNI reduction correlates well with the increase of the rms radii of the relevant orbits, and (b) the matrix elements involving $`(1s_{1/2})_\mathrm{p}`$ can be reduced from those of $`(1s_{1/2})_\mathrm{n}`$ by as much as a few tens of percent around <sup>16</sup>O. The former point is consistent with Ref., though we use more realistic s.p. wave functions (but a less realistic $`G`$-matrix) than in Ref.. The latter implies that the broad distribution of $`R_{1s_{1/2}}(r_\mathrm{p})`$ seems capable of accounting for the RNI reduction. Although there remain additional questions concerning the RNI reduction (e.g. an accurate estimate of the reduction factor, nucleus- and/or state-dependence of the reduction factor), they will require a reliable treatment of the core polarization effects, which is beyond the scope of the current study. We just point out at this moment that, due to the broad distribution of the s.p. wave function, the core polarization effects tend to diminish and therefore the shell-model interaction may be reduced further. The residual Coulomb force hardly contributes to the excitation energies of low-lying states for the nuclei around <sup>16</sup>O, as far as the number of valence protons remains small; its state-dependence is less than 0.1 MeV for the nuclei under discussion, if estimated with the above WS wave functions.
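The statements above can be reproduced qualitatively with a small shooting code. The sketch below solves the radial equation with a Numerov scheme in a Woods-Saxon plus uniform-sphere Coulomb potential. The geometry parameters ($`r_0=1.27`$ fm, $`a=0.67`$ fm) are our own assumption, since the parameter set of Ref. is not quoted here, and no spin-orbit term is included; the absolute energies therefore differ from Eqs. (3–5), and only the trends with $`V_0`$ and the proton-neutron rms ratio of the $`1s_{1/2}`$ orbit are meant to be illustrative.

```python
import numpy as np

HB2M = 20.736                              # hbar^2 / 2m_N  (MeV fm^2)
A, Z = 16, 8                               # 16O core
R, a0 = 1.27 * A**(1.0/3.0), 0.67          # assumed WS geometry (fm)

def f_func(r, E, l, V0, proton):
    """(V_eff - E)/HB2M entering u'' = f u."""
    V = -V0 / (1.0 + np.exp((r - R) / a0))
    if proton:                             # uniform-sphere Coulomb of the core
        e2Z = 1.44 * Z                     # e^2 Z in MeV fm
        V = V + np.where(r < R, e2Z * (3.0 - (r/R)**2) / (2.0*R), e2Z / r)
    return (V + HB2M * l*(l+1) / r**2 - E) / HB2M

def integrate(E, l, V0, proton, rmax=40.0, n=8000):
    r = np.linspace(1e-3, rmax, n); h = r[1] - r[0]
    f = f_func(r, E, l, V0, proton)
    u = np.zeros(n); u[0], u[1] = r[0]**(l+1), r[1]**(l+1)  # regular at origin
    c = h*h / 12.0
    for i in range(1, n-1):                # Numerov recursion for u'' = f u
        u[i+1] = (2.0*(1.0 + 5.0*c*f[i])*u[i]
                  - (1.0 - c*f[i-1])*u[i-1]) / (1.0 - c*f[i+1])
    return r, u

def eigenvalue(nodes, l, V0, proton, Elo=-45.0, Ehi=-0.05):
    for _ in range(70):                    # bisect on the radial node count
        E = 0.5*(Elo + Ehi)
        _, u = integrate(E, l, V0, proton)
        if np.count_nonzero(np.diff(np.sign(u)) != 0) > nodes:
            Ehi = E
        else:
            Elo = E
    return E

def rms(r, u):
    w = u*u
    return np.sqrt(np.sum(w*r*r) / np.sum(w))

for V0 in (53.0, 51.0, 49.0):
    en = eigenvalue(1, 0, V0, proton=False)   # neutron 1s1/2 (one radial node)
    ep = eigenvalue(1, 0, V0, proton=True)    # proton 1s1/2
    ratio = rms(*integrate(ep, 0, V0, True)) / rms(*integrate(en, 0, V0, False))
    print(f"V0={V0}: eps_n(1s)={en:6.2f}  eps_p(1s)={ep:6.2f} MeV  r_p/r_n={ratio:.2f}")
```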
## 4 Summary
Thomas-Ehrman shifts generally occur in the $`A\simeq 16`$ region, where the $`1s_{1/2}`$ proton is unbound or loosely bound. As well as the difference between $`\mathrm{\Delta }ϵ_\mathrm{n}^{sd}`$ and $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$, the reduction of the residual-nuclear-interaction matrix elements involving the $`1s_{1/2}`$ proton plays an important role in the TES. As deduced from the nuclei <sup>16</sup>N and <sup>16</sup>F, the matrix element $`V_{\mathrm{np}}(0p_{1/2}^{-1}1s_{1/2})`$ is notably smaller than $`V_{\mathrm{pn}}(0p_{1/2}^{-1}1s_{1/2})`$, by a factor of about 0.7. This factor is remarkably smaller than the general trend of the proton-neutron asymmetry in the RNI. A similar reduction of $`V_{\mathrm{pp}}`$ in the $`sd`$-shell (relative to $`V_{\mathrm{nn}}`$) accounts for the TES in $`E_x(0_2^+)`$ of the <sup>16</sup>C–<sup>16</sup>Ne pair as well as for the mass of <sup>16</sup>Ne. It is remarked that the RNI reduction is far more significant than the nucleus-dependence of $`\mathrm{\Delta }ϵ_\mathrm{p}^{sd}`$, as argued in connection with $`E_x(0_2^+)`$ of <sup>16</sup>Ne. Taking the RNI reduction into account, the TES’s observed in <sup>15</sup>C–<sup>15</sup>F and other pairs are understood within the phenomenological shell model. On the same ground, the astrophysically important $`E_x(3_1^+)`$ of <sup>18</sup>Ne is predicted to be $`\simeq 4.5`$ MeV. The reduction of the residual interaction seems to originate in the broad radial distribution of the wave function of the $`1s_{1/2}`$ proton, which is loosely bound (or unbound) and is not affected by the centrifugal barrier. This picture is supported by viewing the proton-neutron ratios of the M3Y interaction matrix elements, evaluated with single-particle wave functions under the Woods-Saxon plus Coulomb potential.
Discussions with S. Kubono, K. Katō and S. Aoyama are gratefully acknowledged.
# One-Dimensional Motion in Potential Hole of Sommerfeld Sphere in Classical Electrodynamics: Inside the Hole
Alexander A. Vlasov
High Energy and Quantum Theory
Department of Physics
Moscow State University
Moscow, 119899
Russia
The equation of motion of a Sommerfeld sphere in a one-dimensional potential hole, produced by two equal charges some distance apart, is numerically investigated. Two types of solutions are found: (i) damping oscillations, (ii) oscillations without damping (radiationless motion). Solutions with growing amplitude (”climbing-up-the-wall solutions”) were not found for the chosen initial conditions.
PACS: 03.50.De

Here we continue our numerical investigation of the one-dimensional motion of a Sommerfeld sphere with total charge $`Q`$, mechanical mass $`m`$ and radius $`a`$ .
We consider the one-dimensional motion in the symmetrical potential hole produced by the Coulomb fields of two equal point charges $`q`$ placed a distance $`2D`$ apart. (The Coulomb field has one important property: it exerts on the uniformly charged sphere a force of the same value as if the charge of the sphere were concentrated at its center.)
For dimensionless variables $`y=R/2a,x=ct/2a,d=D/2a`$ the equation of motion of the sphere is
$$\frac{d^2y}{dx^2}=\left(1-\left(\frac{dy}{dx}\right)^2\right)^{3/2}k\left[\int _{x^{-}}^{x^{+}}dz\frac{z-1}{L^2}+\mathrm{ln}\frac{L^{+}}{L^{-}}+\left(\frac{1}{\beta ^2}-1\right)\mathrm{ln}\frac{1+\beta }{1-\beta }-\frac{2}{\beta }+\frac{M}{(y+d)^2}-\frac{M}{(d-y)^2}\right]$$
$`(1)`$
here $`M=q/Q`$ ,
$$x^\pm =1\pm L^\pm ,\quad L^\pm =y(x)-y(x-x^\pm ),\quad L=y(x)-y(x-z),$$
$$\beta =dy/dx,k=\frac{Q^2}{2mc^2a}.$$
Later on we take $`k=1`$.
It is useful to compare solutions of (1) with point charge motion in the same field, governed by the following relativistic equation without radiation force:
$$\frac{d^2y}{dx^2}=\left(1-\left(\frac{dy}{dx}\right)^2\right)^{3/2}k\left[\frac{M}{(y+d)^2}-\frac{M}{(d-y)^2}\right]$$
$`(2)`$
A.
For chosen initial conditions we have found two types of solutions of eq. (1):
(i) damping oscillations,
(ii) oscillations without damping (radiationless motion).
Existence of radiationless solutions for the Sommerfeld model was discovered a long time ago by Schott . In our case this type of solution can easily be obtained from the weak-velocity approximation of eq. (1), when $`dy/dx\ll 1`$, $`y\ll d`$, and eq. (1) takes the form
$$\frac{d^2y}{dx^2}=-w^2y+\frac{4k}{3}\left[\frac{d}{dx}y(x-1)-\frac{d}{dx}y(x)\right]$$
$`(3)`$
here $`w^2=4kM/d^3,`$
with solution
$$y(x)=A\mathrm{cos}(wx)$$
for $`w=2\pi n,n=\pm 1,\pm 2,\mathrm{\ldots }`$
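The cancellation behind this radiationless solution is easily verified numerically (a minimal check, using the sign conventions restored in eq. (3); for the full problem the hole parameters must in addition satisfy $`w^2=4kM/d^3`$ with $`w=2\pi n`$):

```python
import numpy as np

k, A = 1.0, 1.0
x = np.linspace(0.0, 3.0, 301)

def residual(w):
    # residual of eq. (3) for the trial solution y(x) = A cos(w x)
    lhs = -A*w*w*np.cos(w*x)
    delay = (4.0*k/3.0)*(-A*w*np.sin(w*(x - 1.0)) + A*w*np.sin(w*x))
    return np.max(np.abs(lhs - (-w*w*A*np.cos(w*x) + delay)))

print(residual(2.0*np.pi))   # ~1e-15: the delay bracket cancels for w = 2*pi*n
print(residual(5.0))         # nonzero: a generic w does not satisfy eq. (3)
```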
Solutions with growing amplitude (”climbing-up-the-wall solutions”) for the chosen initial conditions
$$\frac{dy}{dx}=0\quad \mathrm{for}\quad x\le 0$$
were not found for a wide range of values of $`y(x=0),M,d`$.
Later on we will try to search for exotic solutions of the Sommerfeld sphere motion, taking other types of initial conditions.
I am glad to thank my colleagues:
P.A.Polyakov - for theoretical discussions;
V.A.Iljina and P.K.Silaev - for assistance rendered during numerical calculations.
REFERENCES
1. Alexander A. Vlasov, physics/9901051.
2. G. A. Schott, Phil. Mag. 15, 752 (1933).
# Magnetized Protostellar Bipolar Outflows
## 1 Introduction
General outflows and collimated jets are thought to be intimately related to infall and protostellar accretion. Recent observations (Donati et al. 1997, Guenther & Emerson 1996) suggest the relevance of magnetic fields to star formation, first suggested by Mestel & Spitzer (1956). Tomisaka (1998) has studied numerically the dynamical collapse of magnetized molecular cloud cores from the runaway cloud collapse phase to the central point mass accretion phase. He finds that the evolution of the cloud contracting under its self-gravity is well expressed by a self-similar solution. Moreover inflow-outflow circulation appears as a natural consequence of the initial configuration. His results support the magnetized self-similar models as recently presented by Fiege and Henriksen (1996a,b, hereafter FH1, FH2) following Henriksen (1993, 1994), and Henriksen and Valls-Gabaud (1994, hereafter HV). In those models, the self-similar gravitationally driven convective circulation in a heated quadrupolar protostar envelope is solved rigorously. Moreover recent observations (Greaves & Holland 1998) of the polarized dust emission from six star-forming cloud cores have revealed, for the first time, some very large twists in the magnetic field. This is consistent with FH1 and HV where circumstellar magnetic fields follow large-scale streamlines. This remains very nearly the case in the present model.
The self-similar models regard the molecular outflow as a natural consequence of the circulation established by the collapse of the pre-stellar cloud. The models describe an outflow velocity that increases toward the axis of rotation, a convective pattern of infall/outflow, self-consistent axial collimating magnetic fields and rotation, and “cored apple” type distributions of circum-protostellar gas (Andre et al. 1993).
Such models may imply a natural connection between the fast ionized jets seen near the polar axis of the wind, and the slower and less-collimated molecular outflows that surround them. However, the velocities obtained by FH1 and HV are smaller than some observed velocities (e.g. the EHV outflow, $`V\sim 100\mathrm{km\,s^{-1}}`$; Bachiller et al. 1990), unless the solution is pushed very close to the central mass, where the limiting speed is about the escape speed (and the jet speed) of a few hundred $`\mathrm{km\,s^{-1}}`$. But this is precisely the region where the inner boundary layer may be dominant and the assumptions of this type of model are liable to break down. Moreover the luminosity needed for radiative heating in FH1 is too large to drive high-velocity outflows if only dust opacity is used. In the present work we show that the Poynting flux increases both the velocity and the collimation of the modeled outflows by helping to transport mass and energy from the equatorial regions. This is much as has been argued for some time by other authors (e.g. Pudritz and Norman 1986, Ouyed and Pudritz 1997), but our models are globally consistent in space. Self-similar models can not of course be globally consistent in time even if they are not stationary (we have found such models and will report them elsewhere), since they are ignorant of initial conditions. And in fact they must also be ‘intermediate’ in space, since there are inevitably small regions near the equator, the axis and at the centre which are excluded from the domain of self-similarity. Thus, we derive globally self-consistent models of bipolar outflows and infall accretion, but specifically exclude the regions dominated by the disc, jet, and protostar.
Qualitatively we may understand the quadrupolar global circulation in the following fashion. As material falls towards the central object, it is gradually slowed down by increasing radial pressure gradients due to the rising temperature, density, and magnetic field encountered by material near the central object. This pressure barrier, with the help of centrifugal forces, deflects and accelerates much of the infalling matter into an outflow along the axis of symmetry. The outflow velocity is naturally the highest near the axis of rotation since the pressure gradients are strongest there. Mathematically the quadrupolar model can be seen as a form of instability of the spherical Bondi accretion flow, wherein rotation, magnetic fields and anisotropic heating transform a central nodal singularity into a saddle point singularity.
The pressure gradients also act to collimate the flow, forming consistently the now traditional de Laval nozzle (e.g. Blandford and Rees 1974, Königl 1982). We find however that the most convincing flows are everywhere super-Alfvénic, in agreement with FH1, but the nozzle may still be critical in terms of the fast magnetosonic speed as discussed in FH1 and below. We note that one can expect conical shocks near the rotational axis, which would seem to be necessary to explain the shocked molecular hydrogen emission often observed.
The paper is organized as follows: in §2 we introduce the model and its approximations, and discuss their consequences; numerical solutions are presented in §3 for virial-isothermal and radiative models describing outflows from low and high mass protostars. In §4 we study the ensuing synthetic radio maps; this section is followed by discussions of the results in §5, and conclusions are presented in §6.
## 2 The Analytical Model
The main difference between hydrodynamic and MHD outflows is the low asymptotic Mach number in the MHD case. For protostellar outflows, the magnetosonic speed is about $`100\mathrm{km\,s^{-1}}`$ at $`10^5`$ AU and therefore far exceeds the sonic speed. It is not a priori obvious that the internal Alfvénic Mach number of the jet or outflow is small. However most of our models are super-Alfvénic everywhere.
The plasma is supposed to be perfectly conducting and therefore the flow is governed by the basic equations of ideal MHD without taking into account resistivity or ambipolar diffusion. In order to make the system tractable we assume axisymmetric flow so that $`\partial /\partial \varphi =0`$ and all flow variables are functions only of $`r`$ and $`\theta `$.
Kudoh and Shibata (1997b) have performed time dependent one-dimensional (1.5-dimensional) magnetohydrodynamic numerical simulations of astrophysical jets that are magnetically driven from Keplerian discs. They have found that the jets, which are ejected from the disc, have the same properties as the steady magnetically driven jets they had found before (Kudoh & Shibata 1997a). Their numerical results suggest that a steady model, as we are assuming in the present work (i.e. $`\partial /\partial t=0`$), is a good first approximation to the time-dependent problem on time-scales short compared to that required to dissipate the circumstellar material. Time independence will be relaxed in future work.
Accretion discs near protostars are probably heavily convective and therefore prone to dynamo action (Brandenburg et al. 1995, FH1). Since the ensuing magnetic fields are loaded with disc plasma, currents are allowed to circulate between the disc surface and the wind region above the disc. For rapidly rotating discs, this dynamical system may evolve naturally into a quadrupolar structure (Camenzind 1990, Khanna and Camenzind 1996). The strong differential rotation of accretion discs is responsible for the excitation of quadrupolar modes in discs. Consequently, in the present model, magnetic field and streamlines are required to be quadrupolar in the poloidal plane. A moderate amount of heating supplied near the turning point allows the outflow to possess finite velocities at infinity.
These considerations have led us to use the set of equations of steady, axisymmetric, ideal MHD and to seek solutions with a quadrupolar geometry. Even though the poloidal components of the magnetic and velocity fields have to be parallel, the toroidal components need not share the same constant of proportionality (Henriksen 1996). This permits a poloidal conservative electric field to exist in the inertial frame, and so admits steady Poynting flux driving. One needs then to introduce an electric potential function and an azimuthal magnetic field independent of the azimuthal velocity. This point is the major improvement of the model with respect to FH1. In the radiative case, the equations describing diffusive radiative transfer (to zero order in $`v/c`$) and a Kramer’s-type law for the Rosseland opacity are added as in FH1. We also neglect the self-gravity of the protostellar material by assuming that the gravitational field is dominated by the central mass that is assumed to be an external fixed parameter in our model. Henceforth the model only applies to a state where the protostellar core is already formed.
It is unlikely that this model can be extended to include the optical jets, since these must originate very near to the central object where the model may not apply in its steady form. Nevertheless, the inclusion of a jet model in the axial region, such as given by Lery et al. (1998, 1999b), could remedy this lack in order to make a more global model for protostellar objects.
### 2.1 Scales and Self-similar Laws
Our model is developed within the context of power law self-similar models as developed by Henriksen (1993), and in HV and FH1. A model with such symmetry was already used by Bardeen & Berger (1978) for galactic winds, although the present development has been independent. Parallel developments have been largely occupied with the study of winds from an established accretion disc (Blandford & Payne 1982, Konigl 1989, Ferreira & Pelletier 1993, Rosso & Pelletier 1996, Ferreira 1997, Contopoulos & Lovelace 1994, Contopoulos 1995, Sauty & Tsinganos 1994, Tsinganos et al. 1996, Vlahakis & Tsinganos 1998). Our use of this model in order to study inflow and outflow as part of the global circulation dynamics around protostars seems unique. The form used corresponds to an example of ‘incomplete self-similarity’ in the classification of Barenblatt (1996), but fits into the general scheme of self-similarity advocated by Carter and Henriksen (1991) with $`r`$ as the direction of the Lie self-similar motion. We work in spherical polar coordinates $`r,\theta ,\varphi `$ centered on the point mass $`M`$ and having the polar axis directed along the mean angular momentum of the surrounding material.
The self-similar symmetry is identical to that used in FH1 except that two quantities $`y_\varphi `$ and $`y_p`$, defined such that $`\sqrt{4\pi \mu }y_\beta \equiv v_\beta /(B_\beta /\sqrt{4\pi \rho })`$ (where $`\beta `$ indicates the appropriate component), replace $`y`$ in FH1. The power laws of the self-similar symmetry are determined, up to a single parameter $`\alpha `$, if we assume that the local gravitational field is dominated by a fixed central mass. In terms of a fiducial radial distance, $`r_o`$, the self-similar symmetry is sought as a function of two scale invariants, $`r/r_o`$ and $`\theta `$, in a separated power-law form. The equations of radiative MHD require the following radial scaling relations for the variables:
$$𝐯=\sqrt{\frac{GM}{r_o}}\left(\frac{r}{r_o}\right)^{-1/2}𝐮(\theta ),$$
(1)
$$B_\varphi =\sqrt{\frac{GM^2}{r_o^4}}\left(\frac{r}{r_o}\right)^{\alpha -3/4}\frac{u_\varphi (\theta )}{y_\varphi (\theta )},$$
(2)
$$B_p=\sqrt{\frac{GM^2}{r_o^4}}\left(\frac{r}{r_o}\right)^{\alpha -3/4}\frac{u_p(\theta )}{y_p(\theta )},$$
(3)
$$\rho =\frac{M}{r_o^3}\left(\frac{r}{r_o}\right)^{2\alpha -1/2}\mu (\theta ),$$
(4)
$$p=\frac{GM^2}{r_o^4}\left(\frac{r}{r_o}\right)^{2\alpha -3/2}P(\theta ),$$
(5)
$$\frac{kT}{\overline{\mu }m_H}=\frac{GM}{r_o}\left(\frac{r}{r_o}\right)^{-1}\mathrm{\Theta }(\theta ),$$
(6)
$$F_{rad}=\left(\frac{GM}{r_o}\right)^{3/2}\frac{M}{r_o^3}\left(\frac{r}{r_o}\right)^{\alpha _f-2}f(\theta ).$$
(7)
In these equations the microscopic constants are represented by $`k`$ for Boltzmann’s constant, $`\overline{\mu }`$ for the mean atomic weight, $`m_H`$ for the mass of the hydrogen atom. The self-similar index $`\alpha `$ is imposed as a parameter of the solution, but as in FH1 it is constrained to lie in the range $`1/4\ge \alpha >-1/2`$. In the last equation, the index $`\alpha _f`$ is a measure of the loss (if negative) or gain in radiation energy as a function of radial distance.
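For later convenience, the scalings (1), (4) and (5) can be bundled into a small helper that restores physical units (a sketch; the sample values of $`\alpha `$, the angular functions, the central mass and $`r_o`$ are arbitrary):

```python
import numpy as np

G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13        # cgs units

def physical(x, alpha, u, mu, P, M=Msun, r_o=1400*AU):
    """Map self-similar values at x = r/r_o to cgs via Eqs. (1), (4), (5)."""
    v   = np.sqrt(G*M/r_o) * x**(-0.5) * u        # velocity
    rho = (M/r_o**3) * x**(2*alpha - 0.5) * mu    # density
    p   = (G*M**2/r_o**4) * x**(2*alpha - 1.5) * P
    return v, rho, p

print(physical(0.1, 0.0, 1.0, 1.0e-2, 1.0e-3))   # sample point with alpha = 0
```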
Whenever we take account of radiative diffusion explicitly, as well as for the definition of the luminosity, we will use the same formulation as in FH1. The simplest approach however is to assume that the temperature is some fraction of the ‘virialized’ value. That is, according to relation (6), that $`\mathrm{\Theta }`$ takes some constant value $`\sim 1`$. By using the first law and assuming an ideal gas (so that with the above relations for $`p`$ and $`\rho `$ we have $`P(\theta )=\mu \mathrm{\Theta }`$) this assumption implies that on a sphere the specific entropy $`s`$ varies according to
$$Tds=-\frac{kT}{\overline{\mu }m_H}d\mathrm{ln}\mu .$$
(8)
Consequently we are explicitly adding or subtracting heat from the system on a spherical surface according to the sign of $`d\mathrm{ln}\mu `$. In our models this normally increases towards the rotational axis. Thus, at least implicitly, we always employ some form of heat transfer, presumably radiative, to the gas over the poles relative to the equatorial material.
The same procedure for variations in the radial direction yields
$$Tds=-\frac{GM}{r_o}\mathrm{\Theta }\left[\frac{1+2(\gamma -1)(\alpha -1/4)}{\gamma -1}\right]\left(\frac{r_o}{r}\right)^2\frac{dr}{r_o}.$$
(9)
Energy is therefore lost to the material with increasing radius provided that the quantity in the square bracket is positive. This requires $`\alpha -1/4>-\frac{1}{2(\gamma -1)}`$, which is true for reasonable ratios of specific heats throughout our permitted range of $`\alpha `$.
In order for radiative heating to be adequate we must have $`\tau F_{rad}/c`$ sufficiently large, where $`F_{rad}`$ is the net radiative flux through a mass element. We note that because of the dependence on $`F_{rad}`$, this is not the same as requiring an optical depth of order unity. Nevertheless in order for the radiative transfer by diffusion to be plausible, this latter condition must also be satisfied.
In either case the issue depends on the nature of the opacity $`\kappa `$. We follow FH1 in supposing that this is predominantly a dust opacity (at least at some distance from the ‘star’) which is taken in Kramer’s form
$$\kappa =\kappa _o\rho ^aT^b.$$
(10)
A fit to a dust model in FH1 yields $`a=0`$, $`b\simeq 2`$ and $`\kappa _o=6.9\times 10^{-6}`$ cgs. We can calculate the mean optical depth between an inner radius $`r_{*}`$ and the fiducial radius $`r_o`$, namely $`<\tau >\equiv \frac{1}{4\pi }\int _{r_{*}}^{r_o}\kappa \rho \,dr\,d\mathrm{\Omega }`$, using this opacity and the self-similar scaling relations. We find
$$<\tau >=\frac{<\mu \mathrm{\Theta }^2>}{|2\alpha -3/2|}\left[\left(r_{*}/r_o\right)^{(2\alpha -3/2)}-1\right],$$
(11)
where we have defined the fiducial radius as
$$r_o\equiv \left[\kappa _o\left(\frac{\overline{\mu }m_H}{k}\right)^2G^2\right]^{1/4}M^{3/4},$$
(12)
which is numerically
$$r_o\simeq 1400(M/M_{\odot })^{3/4}\mathrm{AU}.$$
(13)
Thus we obtain a fiducial radius that is characteristic of the bipolar outflow sources and is a convenient unit for our discussions. A similar scale (to within a factor 5) was previously deduced in Henriksen (1993) based on the source luminosity and fundamental constants (including the radiation constant). We note that the external opacity from $`\mathrm{\infty }`$ to $`r_o`$ is given simply by $`<\tau >_{ext}=<\mu \mathrm{\Theta }^2>/|2\alpha -3/2|`$ and the ‘photosphere’ of the cloud is therefore found where $`\tau _{ext}\simeq 2/3`$.
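The estimate (13) is easy to reproduce from Eq. (12) (a quick check; the mean molecular weight $`\overline{\mu }=2.3`$ appropriate to molecular gas is our assumption):

```python
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs
kappa0, mu_bar = 6.9e-6, 2.3                   # dust opacity constant; assumed mu-bar
Msun, AU = 1.989e33, 1.496e13

r_o = (kappa0 * (mu_bar*m_H/k_B)**2 * G**2)**0.25 * Msun**0.75
print(r_o / AU)                                # ~1.4e3 AU, as in Eq. (13)
```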
The corresponding temperature at the fiducial radius is given by
$$T\simeq 164\mathrm{\Theta }(M/M_{\odot })^{1/4}\mathrm{K}.$$
(14)
We observe that if we identify our radiative fiducial scale $`r_o`$ with an empirical scale $`r_{em}`$ from $`<\mu >/r_{em}^2=2\times 10^{-9\pm 1}(AU)^{-2}`$ (Richer et al. 1998), we can infer a relation between the mean density parameter $`<\mu >`$ and the central mass in the form
$$<\mu >=3.38\times 10^{-3\pm 1}\left(M/M_{\odot }\right)^{3/2}.$$
(15)
This is a useful way to relate the key model parameter $`\mu `$ to the physical mass and luminosity. We note however that the optical depth is not likely to approach unity for reasonable temperatures, except for exceptionally massive objects. But this does not prevent radiative heating from playing an important dynamical role near the protostar.
In fact using $`r_o`$, as above, and the definition of radiation field $`F_{rad}`$ (Eq. 7), the local energy flow can be expressed by
$$r^2F_{rad}=0.015\left(r/r_o\right)^{\alpha _f}\left(M/M_{\odot }\right)^{5/8}fL_{\odot }.$$
(16)
The radiative force per unit mass is given by
$$\kappa F_{rad}/c\simeq 10^{-13}\left(M/M_{\odot }\right)^{1/8}\left(r/r_o\right)^{(\alpha _f-4)}f\mathrm{cm}\mathrm{s}^{-2},$$
(17)
which must be less than the Eddington limit of $`GM/r^2=3.5\times 10^{-7}(M/M_{\odot })\left(r_o/r\right)^2\mathrm{cm}\mathrm{s}^{-2}`$, for consistency. These are helpful considerations for our radiative models below, which always satisfy this condition a posteriori.
### 2.2 The Equations
We use the self-similar forms in the usual set of ideal MHD equations together with the radiative diffusion equation when applicable. The corresponding system of equations is reported in Appendix A, and the ensuing first integrals in Appendix B. As noted above, in the present model, there is no constraint requiring parallelism between velocity and magnetic field as used in FH1. This allows us to discern clearly the effects of a non-zero Poynting flux. The self-similar variable directly related to magnetic field $`y`$ has two components that are not equal in the present model. These quantities are either projected in the poloidal plane or correspond to the toroidal components and are respectively defined by
$$y_{p,\varphi }(\theta )=\frac{M_{ap,a\varphi }}{\sqrt{4\pi \mu (\theta )}}.$$
(18)
Consequently the system deals with two different components of the Alfvénic Mach number $`M_{ap}`$ and $`M_{a\varphi }`$ defined by:
$$M_{ap,a\varphi }^2(\theta )=\frac{v_{p,\varphi }^2}{B_{p,\varphi }^2/4\pi \rho }.$$
(19)
In this model, since $`r`$ and $`\varphi `$ respectively correspond to the directions of self-similarity and axisymmetry, only waves that propagate along $`\theta `$ in the poloidal plane can preserve these two symmetries. Therefore the relevant Alfvén mode propagates in the direction $`\theta `$ in the poloidal plane with a phase velocity:
$$V_{a\theta }=\frac{B_\theta }{\sqrt{4\pi \rho }}.$$
(20)
Moreover we define an Alfvénic point where the total poloidal flow velocity is equal to this value in magnitude and direction (i.e. $`u_r=0`$), which by (18) is equivalent to:
$$M_{ap}=1=4\pi \mu y_p^2.$$
(21)
One can then define a critical scaled density corresponding to the Alfvénic scaled density
$$\mu _a=\frac{1}{4\pi y_p^2}.$$
(22)
The flow is super-Alfvénic if the density $`\mu `$ is larger than the critical Alfvén density $`\mu _a`$. Moreover, the compressible slow and fast MHD waves propagate in a poloidal direction $`\theta `$ with phase velocities that satisfy the quartic (Sauty and Tsinganos 1994)
$$\left(V_{s/f}\right)_\theta ^4-\left(V_{s/f}\right)_\theta ^2\left(V_a^2+C_s^2\right)+C_s^2\left(V_a\right)_\theta ^2=0,$$
(23)
where $`C_s^2`$ is the isothermal sound speed. The condition that $`u_\theta `$ be equal to one of these wave speeds (where we expect a singular point in the family of solutions) can be expressed in terms of self-similar variables as:
$$\left(u_\theta ^2\right)_{crit}=\mathrm{\Theta }+\frac{u_r^2+u_\varphi ^2y_p^2y_\varphi ^{-2}}{4\pi \mu y_p^2-1},$$
(24)
and in physical units (cf FH1):
$$\left(v_\theta ^2\right)_{crit}=\frac{P}{\rho }\left(1-M_{ap}^{-2}\right)+\frac{B^2}{4\pi \rho }.$$
(25)
Equation (23) shows that whenever $`V_a>>C_s`$ the slow and fast speeds become $`V_{a\theta }C_s/V_a`$ and $`\sqrt{V_a^2-V_{a\theta }^2C_s^2/V_a^2}`$ respectively. Thus a super-Alfvénic flow as defined above can only encounter the fast speed, and then only where $`V_a\simeq V_{a\theta }`$. In a region where the sound speed dominates, the fast and slow speeds become $`\sqrt{C_s^2-V_{a\theta }^2}`$ and $`V_{a\theta }`$ respectively. Either the sound speed or the Alfvén speed is dominant according as $`\mathrm{\Theta }`$ is larger or smaller than $`u_p^2/M_{ap}^2+u_\varphi ^2/M_{a\varphi }^2`$.
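These limiting forms follow from the quartic (23) and can be checked with a few lines (sample speeds only):

```python
import numpy as np

def slow_fast(Va, Vat, Cs):
    """Theta-direction slow/fast phase speeds from the quartic of Eq. (23)."""
    b, c = Va*Va + Cs*Cs, Cs*Cs*Vat*Vat
    disc = np.sqrt(b*b - 4.0*c)
    return np.sqrt(0.5*(b - disc)), np.sqrt(0.5*(b + disc))

Va, Vat, Cs = 10.0, 6.0, 0.5        # magnetically dominated regime, Va >> Cs
s, f = slow_fast(Va, Vat, Cs)
print(s, Vat*Cs/Va)                 # slow speed -> V_at * Cs / Va = 0.3
print(f, Va)                        # fast speed -> ~Va
```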
As in Henriksen (henrik3 (1996)) we can consider the energy balance in these models. A convenient form for the energy equation is
$$\nabla \cdot \left(\rho v\left(\frac{v^2}{2}+\mathrm{\Phi }\right)+S\right)=-v\cdot \nabla p,$$
(26)
where the Poynting flux vector is
$$S=\frac{B^2}{4\pi }v-\left(\frac{B\cdot v}{4\pi }\right)B,$$
(27)
and $`\mathrm{\Phi }=-GM/r`$. In self-similar variables the Poynting flux becomes
$$S\propto \left(\frac{GM}{r_o}\right)^{\frac{3}{2}}\left(\frac{M}{r_o^3}\right)\left(\frac{r}{r_o}\right)^{2\alpha -2}\sigma (\theta )$$
(28)
where the self-similar Poynting flux $`\sigma (\theta )`$ can be defined in terms of the other variables as:
$$\sigma (\theta )=\left(\frac{1}{y_p}-\frac{1}{y_\varphi }\right)\left(u_\varphi \frac{u_p^2}{y_p}-u_p\frac{u_\varphi ^2}{y_\varphi }\right).$$
(29)
The energy equation becomes
$`2\alpha \left[\mu u_r({\displaystyle \frac{u^2}{2}}-1)+\sigma _r\right]+(2\alpha -{\displaystyle \frac{3}{2}})u_rP`$
$`=-{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{d}{d\theta }}\left[\mathrm{sin}\theta (\mu u_\theta ({\displaystyle \frac{u^2}{2}}-1)+\sigma _\theta )\right]-{\displaystyle \frac{u_\theta }{\mathrm{sin}\theta }}{\displaystyle \frac{dP}{d\theta }}.`$ (30)
This equation is generally true for our models but it does not provide an independent integral for non-zero pressure. This we treat in the next sub-section.
### 2.3 The Zero Pressure Limit
In the zero pressure limit one can use the self-similar energy equation together with $`\sigma _r=(u_r/u_\theta )\sigma _\theta `$ and $`u_r/u_\theta =d\mathrm{ln}r/d\theta `$ on a stream line to write a Bernoulli integral
$$r^{2\alpha }\mathrm{sin}\theta \left[\mu u_\theta (u^2/2-1)+\sigma _\theta \right]=B,$$
(31)
where $`B`$ is the Bernoulli constant on each stream line. In view of Eq. 40, which gives $`\mu u_\theta \mathrm{sin}\theta =\eta _1r^{-(1+2\alpha )}`$ on a stream line, we can write this as
$$\frac{B}{\eta _1}=\frac{1}{r(\theta )}\left[u^2/2-1+\frac{u_\varphi ^2}{4\pi \mu y_\varphi }\left(\frac{1}{y_\varphi }-\frac{1}{y_p}\right)\right],$$
(32)
where the second term evidently describes the Poynting driving (magnetic energy term) and the first term gives the kinetic and gravitational energies. Hence if $`B\ne 0`$, we require the expression contained in brackets on the RHS of this last equation to tend towards infinity both at the equator and on the polar axis, since $`r(\theta )\to \infty `$ in these limits. This is an improbable outer boundary condition, although physically realizable at some large but finite radius (which might be a function of angle and so trace out an excluded region for the self-similar solution). It does permit the exchange between, say, a large driving Poynting term near the equator and a large kinetic energy term near the axis, but the source of the ‘free’ magnetic energy is left undetermined.
We turn to the special case $`B=0`$, which requires this same factor to be zero everywhere on a streamline. In order that there be an exchange between the kinetic term and the Poynting term in the sum we must have either $`u^2<2`$ and $`y_\varphi <y_p`$ or the reverse of both inequalities. In the first case we do not produce velocities greater than the escape velocity anywhere, while in the second case we require $`M_{ap}<M_{a\varphi }`$ everywhere on the streamline which also seems improbable. We are left then with the first possibility, which includes the case where each term vanishes separately on the boundaries. This suggests that starting from a physically likely free-fall boundary condition, $`v`$ is less than or equal to the escape velocity on every stream line in the absence of pressure, becoming equal again to the escape velocity at large distances.
Thus without pressure effects one is unable to produce net outflows at infinity without appealing to a Poynting flux from an excluded equatorial region. However one does thus deflect and collimate the material (magnetically and centrifugally) in a lossless way, and so all energy added near the star may appear in the outflow at infinity rather than be lost to work against gravity. The fact that there is no net gain of kinetic energy from the magnetic field energy on a stream line is peculiar to the case with $`B=0`$. It is due to the fact that energy first gained by the magnetic field at the expense of gravity and rotation during infall is subsequently returned as gravitational potential energy and kinetic energy during the outflow. With $`B\ne 0`$, there may be a source of Poynting flux in the excluded region (see e.g. also Henriksen (henrik3 (1996))).
## 3 Numerical Analysis
### 3.1 The Numerical Method
The numerical solutions in this work are obtained by initiating the integrations of the system of six equations from a chosen angle between the axis ($`\theta =0`$) and the equator ($`\theta =\frac{\pi }{2}`$). Starting values are varied until the boundary conditions are met. Solutions cannot strictly reach the axis and are therefore limited by a minimum angle $`\theta _{min}`$. Moreover $`\frac{\pi }{2}`$ bounds the solution only if we demand exact reflection symmetry about the equator. Solutions are thus generally defined on a wedge given by $`0<\theta _{min}\le \theta \le \theta _{max}`$. In order to be able to start at will from either a sub- or super-Alfvénic point, a systematic search for singular points of the system has been conducted. This narrows the first guess for the typical values to use as input to the system.
Several general criteria have been employed to further constrain the solutions. First the radial velocity $`u_r`$ must vanish once, to have inflow-outflow, and only once, to avoid a higher order convection pattern. We also require $`u_\theta `$ and $`u_\varphi `$ to vanish at both boundaries, since there must be no mass flux across them, and since only zero angular momentum material can reach the axis (which all material lying strictly in the equator does). In practice, the first of these two conditions is sufficient to impose the second one. The same constraints apply to the magnetic field components. We take $`u_\theta `$ to be negative in order to get a circulation pattern rising over the poles, and $`u_\varphi `$ to be positive as a convention. We require also that $`u_r`$ is either decreasing or constant with increasing angle and that the density and pressure attain their maxima close to $`\theta _{max}`$.
To find solutions satisfying all of the previous requirements a Monte Carlo shooting method has been used starting from values corresponding to either sub- or super-Alfvénic starting points. Solutions have been found using as starting points successively $`\theta _{min}`$, $`\theta _{max}`$ and the stream-line turning point where the radial velocity vanishes. It has been found that the most convenient starting point is $`\theta _{min}`$. Results of these investigations and representative cases are shown below.
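Schematically, the shooting loop looks like the sketch below (Python; the function `rhs` stands for the six-equation system of Appendix A, which we do not spell out here, and all names, tolerances and trial counts are our own illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def acceptable(sol, n=400, tol=1e-3):
    """Selection criteria of this section: u_r vanishes exactly once
    (inflow-outflow, no higher-order convection pattern), while u_theta
    stays negative and vanishes at both angular boundaries (no mass flux
    across them)."""
    theta = np.linspace(sol.t[0], sol.t[-1], n)
    u_r, u_theta = sol.sol(theta)[0], sol.sol(theta)[1]
    one_turning_point = np.count_nonzero(np.diff(np.sign(u_r)) != 0) == 1
    boundaries_closed = abs(u_theta[0]) < tol and abs(u_theta[-1]) < tol
    return one_turning_point and boundaries_closed and np.all(u_theta <= tol)

def monte_carlo_shoot(rhs, theta_min, theta_max, draw_start, n_trials=10000):
    """Draw random starting vectors at theta_min, integrate toward the
    equator, and keep the runs satisfying all the boundary conditions."""
    rng, good = np.random.default_rng(0), []
    for _ in range(n_trials):
        y0 = draw_start(rng)   # random (u_r, u_theta, u_phi, mu, y_p, y_phi)
        sol = solve_ivp(rhs, (theta_min, theta_max), y0,
                        dense_output=True, rtol=1e-8, atol=1e-10)
        if sol.success and acceptable(sol):
            good.append((y0, sol))
    return good
```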
### 3.2 Low Mass Protostars
Most of the present models for star formation deal only with low mass stars. It is then natural to begin with this type of object. The solution shown in Fig.1 is representative of this class in the virial-isothermal approximation. It shows the variations with angle of the six self-similar variables (related to velocity ($`u_r`$, $`u_\theta `$, $`u_\varphi `$), density ($`\mu `$ and $`\mu _a`$), and magnetic field ($`y_p`$, $`y_\varphi `$)) together with the poloidal and toroidal Mach numbers ($`M_{ap}`$, $`M_{a\varphi }`$) and the poloidal component of the Poynting flux ($`S_p`$). In this solution we have taken $`\alpha =-0.3`$ and $`\mathrm{\Theta }=0.41`$ in order to compare with solutions given in FH1. The various requirements discussed above have been met; $`u_\varphi `$ and $`u_\theta `$ both vanish at the polar and equatorial boundaries, as do the magnetic field components ($`y`$ is diverging). The maxima of the rotation velocity and Poynting flux, corresponding to the minimum in $`u_\theta `$, occur where the radial velocity changes sign. This point defines the total opening angle of the outflow, which in this model is of the order of $`60^{\circ }`$. The radial component of the Poynting flux $`S_r`$ changes sign as $`u_r`$ does, but its poloidal component, defined by $`S_p=\sqrt{S_r^2+S_\theta ^2}`$, remains positive by definition and shows only a slight dip near maximum.
The high velocity outflow is of course much more narrowly collimated than the general outflow ($`u_r>0`$). The density $`\mu `$ is always larger than the Alfvénic critical density, and the two densities are almost equal where $`u_r`$ changes sign. Thus the flow is always super-Alfvénic, as confirmed by the Mach number components in Fig.1. Since differences between $`y_p`$ and $`y_\varphi `$ are not evident to the eye in this figure, we have plotted the poloidal component of the Poynting flux as given by Eq. 29. It shows that the peak energy carried by the magnetic field is at the turning point of the flow.
Thus this calculation produces wide angle relatively slow outflows surrounding a fast component that is identifiable with the EHV outflows or ‘molecular jets’. We shall see below that inclusion of the Poynting flux allows more collimated and faster ‘jets’ surrounded by a slower wide angle outflow than is the case without it. Therefore the magnetic field plays a crucial role in the YSO outflow despite its conservative nature.
For embedded sources (class 0/I) most of the circumstellar matter is distributed in an envelope with a typical size of about $`10^4`$ AU (Adams et al. adams (1993), Terebey et al. tere (1998), André & Montmerle andreM (1994)). The size that we will use to present our solutions will be of this order. For example, two poloidal projections of the streamlines are represented in Fig.2 out to a radius of 4000 AU. On the left panel (case A), the previous solution is shown from (nearly) $`\theta =0`$ to (nearly) $`\frac{\pi }{2}`$. In the panel on the right (case B) we show the result of varying the magnetic field and density from the previous values. The solution shown stops at $`\theta =1.33`$ and the empty region is outside of the domain of the solution. If the pressure and tangential velocity matching were done properly at the boundaries, a rotating Bondi accretion flow (Henriksen and Heaton HenriksenHeaton (1975)) might describe naturally a true accretion disc.
The self-similar structure of the streamlines does not in fact strictly match an arbitrary boundary condition at infinity, but the calculated symmetry seems to be a natural small scale limit of a more general self-gravitating circulation flow. Moreover the maximal meridional velocity close to the axial region (see Fig.1) shows the tendency for the flow to collimate cylindrically along the axis of rotation. This is in agreement with Lery et al. (lery1 (1998), 1999b ) where it is shown that cylindrical collimation is a general asymptotic behavior of magnetized outflows surrounded by an external pressure, due in the present case to the outer part of the molecular cloud or the interstellar medium.
Fig.3 presents a zoom of the streamlines in the axial region, viewed from above. Streamlines start in the equatorial region and end in the axial region at small angles ($`\theta =0.005`$ to 0.15). In addition to the meridional behavior shown previously in Fig.2, each streamline makes a spiraling approach to the axis and then emerges in the form of a helix wrapped about the axis of symmetry. The closer in angle to the equator a streamline starts, the larger the angle of rotation it makes around the axis, the closer it gets to the central mass, and the nearer to the axis it asymptotically emerges. On the other hand, for streamlines beginning well above the equator, rotation is very nearly negligible and the path remains almost in a poloidal plane.
Indirect measurements of the magnetic field in protostars are presently available (inferred from the radio flux from gyro-synchrotron emission by Ray et al. Ray (1997), and Hughes hughes (1999)). Such measurements can provide a critical test of models. In Fig.4 the angular dependences of the components of the magnetic field are shown in Gauss at a distance of $`5\times 10^4`$ AU for the same illustrative example of a low mass protostar. Magnetic fields of the order of a milligauss or less are obtained with a peak around 1 mGauss that coincides in angle with the radial turning point of the flow. As required by the quadrupolar geometry $`B_\varphi `$ and $`B_\theta `$ vanish both on the equator and on the axis, while $`B_r`$ vanishes only on the axis and at the opening angle, where the largest field intensity exists.
One remarkable prediction of these models is that the magnetic field at a given radius varies dramatically with angle (and hence probably with inclination to the line of sight). The values range from 10 microGauss to a milliGauss at $`10^4`$ AU, and from $`10^{-2}`$ to a few Gauss at 20 to 40 AU, in agreement with some observations (Ray et al. Ray (1997)). Peak field strengths may reach values as high as 1 to 100 Gauss at 1 AU (e.g. Hughes hughes (1999)), as shown in Fig.5. There is also a weaker dependence on the self-similar index, as is also shown in Fig.5. These predicted magnetic field strengths are surprisingly large. This is nevertheless consistent with the existing (rare) observations and reinforces the idea that magnetic fields play a major role around young stellar objects (Donati et al. donal97 (1997), Guenther & Emerson gue96 (1996)).
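The steep radial growth of the field can be checked against the self-similar scalings (this is our own consistency check, not a relation quoted from FH1). Since the Alfvénic Mach numbers depend on $`\theta `$ alone, $`B^2/4\pi \rho v^2`$ is constant along $`r`$ at fixed angle; with $`v\propto \sqrt{GM/r}`$ and $`\rho \propto r^{2\alpha -1/2}`$ (the latter suggested by the coefficient $`1+2\alpha `$ in Eq. 33), this gives

$$B(r,\theta )\propto \sqrt{\rho }v\propto r^{\alpha -3/4}.$$

For $`\alpha =-0.3`$ this is $`r^{-1.05}`$, so a 1 mGauss field at $`10^4`$ AU extrapolates to about $`(10^4)^{1.05}\simeq 1.6\times 10^4`$ times larger, i.e. roughly 15 Gauss, at 1 AU, consistent with the range quoted above.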
### 3.3 Massive Protostars
To adapt the model for high mass protostars, we must find solutions with a higher self-similar density (according to Eq. 15) using the method outlined in Sect. 3.1, and adjust the other variables, mainly the magnetic field, until the boundary conditions are satisfied. For radiative high mass protostars, we might expect not to need a magnetic field as strong as in the low mass case to either launch or collimate the outflows, since strong pressure gradients are produced by radiative heating (see also Shepherd & Churchwell shch (1996), Churchwell church (1997)). However the virial-isothermal model that we give below has a stronger magnetic field.
#### 3.3.1 The Virial-Isothermal Case
Fig.6 shows the self-similar variables for a virial-isothermal massive object. This solution is also completely super-Alfvénic, just as is the previous low mass example that is overplotted with dashed lines. Although the overall opening angle of the flow (as measured by the angle at which the radial velocity changes sign) is the same in both cases, we observe that massive protostars produce faster (scaled) radial velocities throughout the positive range. There is also a more negative $`u_\theta `$ throughout this range which leads to an enhanced ‘collimation’, corresponding to an increased axial density of material. In addition, the peak rotation at the ‘turning point’ and the infall velocity just before this peak are both larger for the massive star. The smaller Alfvénic Mach numbers also show that the magnetic field is more important in this flow. This seems consistent with the ability to confine the higher physical temperature.
Our high mass virial-isothermal protostar yields larger scaled velocities than does the low mass object. This is mainly due to the larger luminosity in the high mass case. Sufficiently close to the axis both the low ($`1M_{\odot }`$) and high ($`30M_{\odot }`$) mass objects can produce $`300\,\mathrm{km\,s^{-1}}`$. However as noted previously it is not clear that streamlines at such small angles are part of the present flow.
#### 3.3.2 The Radiative Case
Massive molecular outflows present large bolometric luminosities, and therefore radiation probably plays an important role in the dynamics. Consequently radiative heating has to be investigated, consistently taking into account the diffusion equation. The radiation field and the temperature are no longer constant with angle, and the radiation flux is not purely radial. The index $`\alpha _f`$ in Eq. 7 is chosen to be negative ($`-0.1`$, as in FH1) and so simulates a slight radiation loss. The numerical method is similar to that used above. The self-similar radiation flux $`f(\theta )`$ is fixed to be zero at $`\theta _{min}`$, and the non-zero value at $`\theta _{max}`$ measures the radiation ‘loss’ from the self-similar region.
Fig.7 shows a solution for the radiative case where $`\alpha =0.2`$, $`a=0`$ and $`b=2`$. The temperature $`\mathrm{\Theta }`$ is at its maximum (0.5) in the jet region and decreases towards the equator. The components of the radiation flux, and thus the required heating, are much smaller than in FH1.
Fig.7 presents two illustrative radiative solutions for low ($`M\simeq 0.3M_{\odot }`$) and high mass ($`M\simeq 7M_{\odot }`$) cases. Streamlines in the poloidal plane are overplotted on contours of the hydrogen number density in the upper part of the figure, while the intensity of the radiation field is represented at the bottom. Fig.7 clearly shows the saddle type singularity of the central point. We see that a Keplerian disc naturally appears as part of the global solution, as does the collimated outflow.
\[Figure available in GIF format because of its size. Caption: Streamlines and contours of the hydrogen number density (on a logarithmic scale) for low and high mass radiative solutions with $`\alpha =0.2`$. The density levels shown lie between $`10^3`$ and $`10^5`$ cm<sup>-3</sup> for the low mass case, and between $`10^5`$ and $`10^7`$ cm<sup>-3</sup> in the other case. Length scales are in AU. The lower panel presents the total intensity of the radiation field (on a logarithmic scale).\]
The radiation field is smaller in the massive case in Fig.7. This is remarkable, since one would expect significant radiative heating around massive protostars. However, we have chosen the same components of the self-similar velocity for both cases, as well as the same self-similar temperature and radiation flux as input parameters. Therefore Fig.7 shows that the radiation field has to be larger in low mass outflows in order to attain the same scaled velocity as in the massive case. Particularly in the latter case, the global shape of the radiation field is rather similar to the solution obtained by Madej et al. (MLH (1987)), where an anisotropic radiation field was produced by scattering in thick accretion discs. It was shown in their article that collimation was clearly displayed by the radiation field in the polar region. We also find, in the massive case, that the radiation field increases away from the equator. On the other hand the low mass case shows an almost spherical symmetry, except on the axis.
Radiative heating increases the pressure gradient pushing outwards, and therefore peak and asymptotic velocities along streamlines are larger, as represented in Fig.8. This figure shows the variation of the total velocity with integrated distance along a streamline for both radiative and virial-isothermal cases with the same input parameters. We begin the integration in the equatorial region at about $`10^4`$ AU from the protostar and integrate out to $`10^6`$ AU in the axial region, where we attain a final angle of 0.1 radian (see FH1 for comparison). The velocity is maximum at the turning point ($`u_r=0`$), which is the closest point to the source, and is almost completely toroidal there. The inclusion of radiative heating provides a net acceleration of material which clearly increases the asymptotic axial velocity. Moreover the asymmetry in this graph indicates that the angular variation dominates the radial decrease, as dictated by the self-similar forms. In fact this asymmetry is the clearest indication of the production of high velocity outflow from the relatively slow (but massive) infall.
Global differences in the solutions between the virial-isothermal and radiative cases remain as described in FH1. The temperature is maximum in the jet region and decreases towards the equator. The radial component of the radiation flux decreases monotonically in the same direction, while its $`\theta `$-component increases and is always positive.
In order to show the influence of the mass on the solutions, we present in Fig.9 the variation of the total velocity with integrated distance along streamlines, as defined in the previous plot, for different masses. For all the curves, the input parameters are kept the same ($`\alpha =0.2`$, $`\mathrm{\Theta }=0.5`$) except for the mass of the central object and therefore the self-similar density $`\mu `$. Even the initial components of velocity and magnetic field remain the same. Only the starting distance of integration from the central object varies, in order to differentiate the various curves. The larger luminosities of massive objects explain the larger velocities. In the solutions shown in Fig.9, the total velocity is multiplied by approximately a factor of 5 as material flows from the equatorial region to the axial region. The efficiency of heating as a driving mechanism grows with the input values of the temperature $`\mathrm{\Theta }`$ and of the components of the self-similar radiation flux $`f`$. Indeed, a factor of 10 or more can be obtained just by increasing these parameters. Moreover, the figure shows that the highest velocity components ($`100\,\mathrm{km\,s^{-1}}`$) are the most tightly collimated along the axis ($`\theta =0.02`$). Thus the most massive protostars produce the fastest flows, with maximum radial velocities in the axial region.
We conclude that radiative heating combined with Poynting flux driving provides an efficient mechanism for producing high velocity outflows when the opacity is dominated by dust.
### 3.4 Parameter Study
\[Figure available in GIF format because of its size. Caption: Monte Carlo shooting: plots of the maxima of $`u_\varphi `$ at the turning point ($`u_r=0`$) against the maxima of $`u_\theta `$ for different values of the temperature parameter ($`\mathrm{\Theta }`$=0.01; 0.16; 0.26; 0.36), with $`\alpha =0.2`$.\]
We use a Monte Carlo exploration of our parameter space in order to study general trends in the model. The self-similar index and the temperature are fixed and the six input parameters (velocity and magnetic field scaled components and the density) are randomly chosen. The conditions presented in Sect. 3.1 are required for any solution to be considered ‘good’. One should note that there exists a broad range of solutions that possess favorable characteristics. Only the most significant results are reported here.
We plot the maxima of $`u_\varphi `$ as functions of the maxima of $`u_\theta `$ for various values of the self-similar temperature ($`\mathrm{\Theta }`$=0.01;0.16;0.26;0.36), and for a given index ($`\alpha =0.2`$). These maxima occur at the turning point where the radial velocity vanishes. Each point shown here represents a ‘good’ solution. Rotation is directly measured by $`u_\varphi `$, and the tendency for material to go towards the axis, i.e. the collimation, is related to $`u_\theta `$. We find, as a global trend, that when $`(u_\theta )_{max}`$ decreases, $`(u_\varphi )_{max}`$ remains almost constant ($`(u_\varphi )_{max}\simeq 0.25(u_\theta )_{max}+\mathrm{constant}`$) until $`(u_\theta )_{max}`$ reaches a threshold value. Then $`(u_\varphi )_{max}`$ decreases faster than $`(u_\theta )_{max}`$, which is what was anticipated analytically for super-Alfvénic flow when $`\alpha =1/4`$ (see appendix B). In addition, there is a limiting $`(u_\varphi )_{max}`$ for each $`(u_\theta )_{max}`$ at a given temperature, below which solutions are not found. When $`\mathrm{\Theta }`$ increases, which physically corresponds to larger pressure gradients, $`(u_\varphi )_{max}`$ decreases while $`(u_\theta )_{max}`$ becomes less negative. If the magnitude of the latter quantity is taken as a measure of the collimation, we see that rotation and collimation decrease together with increasing temperature, and therefore probably with the bolometric luminosity of the central object.
It is found that solutions are less influenced by density (and hence mass) than by temperature. Nevertheless the distribution of opening angles is not found to change its form with temperature. In particular the most probable value is always found essentially at the angle shown in the two sample solutions. The variation is such that the larger opening angles are found at the lower left of the band for each temperature, that is, with maximal poloidal collimation and the smallest possible maximum rotation.
It is also found that larger opening angles are associated with smaller magnetic fields. The corresponding results are not reported here, but in fact the variation of the opening angle is rather large for a relatively small magnetic field variation (a reduction of 25$`\%`$ in the magnetic field magnitude can widen the opening angle from $`35^{\circ }`$ to $`60^{\circ }`$). If the magnetic field varies secularly as the evolution proceeds, then a sequence of our models could be regarded as a series of ‘snapshots’ of the protostellar evolution. This evolutionary sequence would show the opening angle increasing with time, as the magnetic field becomes less important. This would be consistent with the notion that the youngest outflows are generally the most collimated. Indeed, this has been observationally verified by Velusamy and Langer (VeluLan (1998)), who provide evidence that outflows widen in time. Moreover, when the magnetic field reaches a lower threshold in our model, the solution becomes a pure outflow. Therefore, in our model, if the opening angle broadens sufficiently, it may ultimately cut off the accretion supplying new material to the outflow.
The angular dependence of the self-similar variables is represented in Fig.10. The only parameter that varies is the self-similar temperature. The figure (left panel) illustrates the fact that collimation decreases with temperature, as already mentioned. Moreover, variations of the self-similar temperature give rise to different density profiles in the infalling ‘disc’ region, as seen in the right-hand panel of Fig.10. The density decreases close to the equator for the largest values of the temperature, with a maximum well away from the equator. Such solutions accrete so rapidly in the equatorial plane that they have a reduced density relative to their rotationally supported surroundings. These solutions resemble the special case $`\alpha =1/4`$ discussed in appendix B. On the other hand the temperature has little effect on the density profile in the axial region, although it broadens the central ‘funnel’ substantially.
We note that solutions for cold flows ($`\mathrm{\Theta }=0`$) have also been obtained. In such a case, the circulation is powered only by Poynting flux since our Bernoulli integral (Eq. 32) applies. We do not expect asymmetric velocities at the two infinities on the stream lines. For the same set of parameters, the total velocity in the ‘interacting’ region is smaller for cold flows and the fast rotating region (where $`u_\varphi `$ is maximum) is narrower relative to finite pressure flows. Moreover the Alfvénic Mach numbers almost reach unity at the peak rotation point, showing that the flow is only slightly super-Alfvénic there, and that in fact the two terms in Eq. 32 are probably comparable.
The dependence on the self-similar index has also been investigated. The main results are illustrated by Fig.11, where the radial and longitudinal components of velocity are plotted for three values of the self-similar index but with the same self-similar temperature. For a more negative self-similar index one gets dramatically smaller $`\theta `$\- and $`\varphi `$\- velocities (recall that $`\alpha =-1/2`$ yields pure radial accretion) and a smaller change in the radial velocity.
## 4 Observational Consequences of the Model: Synthetic $`{}^{13}CO`$ Spectral Lines and Maps.
In this Section, we compute $`{}^{13}CO`$ (J=$`1-0`$) emission lines for several of the outflow solutions discussed in Sect. 3. Specifically, we provide examples of spectra, channel maps, position-velocity diagrams, and intensity-velocity diagrams for one example of each of the following: the low mass solution of Sect. 3.2, the radiative high mass solution of Sect. 3.3.2, and the virial-isothermal high mass solution of Sect. 3.3.1. The importance of this analysis is that it allows us to directly compare the observable features of our models with real observational data. However, we cannot fully explore the observational consequences of our models in the present paper. Our purpose in this Section will be mainly to outline the qualitative features of our models, leaving a complete exploration of the parameter space for the second paper in our series.
The method that we follow is essentially the same as that outlined in FH2, except that the code used there has been substantially improved in a number of respects. Firstly, we are now able to generate results with much higher spectral and spatial resolution. Secondly, although we compute spectra on a grid of pencil beams through the outflow source, we convolve our spectra with a Gaussian telescope beam to more accurately simulate observational results. FH2, on the other hand, did not perform this convolution, which resulted in many spuriously sharp features in their maps. Thirdly, we do not embed our solutions in a background of molecular gas to simulate the molecular cloud which presumably surrounds the outflow. FH2 demonstrated that the presence of background material has no significant effect on the spectra or maps, except in the lowest, and hence least interesting, velocity channels. Since we are primarily interested in emission at relatively high velocities, corresponding to the outflow, emission or absorption by relatively slow moving background gas is of no consequence. As in FH2, the primary limitation of our analysis is that we assume local thermodynamic equilibrium. Although this is probably not strictly true, we do expect the $`{}^{13}CO`$ level populations to be approximately in equilibrium with the local temperature provided that the optical depth is at least moderately high. Therefore, we hope to capture at least the character of the emission, if not the precise intensity.
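In outline, each pencil-beam spectrum is a discrete LTE transfer sum along the line of sight. The following is a minimal sketch of that sum, not the actual code: the opacity normalisation `kappa0` and the Gaussian local profile width `sigma_v` are our own illustrative stand-ins for the $`{}^{13}CO`$ opacity treatment.

```python
import numpy as np

def pencil_beam_spectrum(v_ch, s, n_mol, T, v_los, sigma_v, kappa0=1.0):
    """Radiation temperature versus channel velocity for one pencil beam.
    LTE in the Rayleigh-Jeans limit: each cell contributes T*(1 - e^-dtau),
    attenuated by the optical depth of the cells in front of it (index 0 is
    taken to be nearest the observer)."""
    ds = np.gradient(s)                                    # path length per cell
    spectrum = np.zeros(len(v_ch))
    for i, v in enumerate(v_ch):
        phi = np.exp(-0.5 * ((v - v_los) / sigma_v) ** 2)  # local line profile
        dtau = kappa0 * n_mol * phi * ds                   # optical depth per cell
        tau_fg = np.concatenate(([0.0], np.cumsum(dtau)[:-1]))
        spectrum[i] = np.sum(T * (1.0 - np.exp(-dtau)) * np.exp(-tau_fg))
    return spectrum
```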
### 4.1 Low Mass Solutions
We have assumed a mass of $`1M_{\odot }`$ for the low mass solution, which implies a fiducial radius of $`r_o=1400`$ AU using Eq. 13. We have also chosen an inclination angle of $`45^{\circ }`$ between the symmetry axis of the outflow and the plane of the sky. With the mass and $`r_o`$ determined, the density, temperature, and velocity are determined at any position by the dimensionless parameters $`\mu (\theta )`$ and $`\mathrm{\Theta }(\theta )`$, and the radial scalings given by Eqs. 1-7. Therefore, by constructing a line of sight through the cloud, we can compute the emission (expressed as a radiation temperature) as a function of velocity parallel to the line of sight; examples of these spectra are shown in Fig.12. We have computed spectra on a $`61\times 120`$ grid of map positions across the outflow.
We find that the spectra shown in Fig.12 vary substantially with map position across the outflow. Spectra computed near the map projection of the outflow axis typically show relatively high velocity wings that may extend up to several $`\mathrm{km\,s^{-1}}`$, but the emission generally becomes too weak to show past $`2\,\mathrm{km\,s^{-1}}`$. We note that the strongest emission always occurs at low velocities, with relatively weak emission in the wings, as is the case for real outflows. Spectra may contain either a single wing or both red and blue-shifted wings, depending on whether a particular line of sight crosses only a single lobe of the outflow or both. There are also many cases where line wings are absent, but well-defined “shoulders” are present on the low velocity dominated spectra.
In the equatorial regions of the maps, double-peaked spectra are found, with relatively low velocity peaks. These spectra are mainly due to the radial infall that dominates in the equatorial regions. It is unlikely that the double-peaked profiles are due to rotation, since radial infall velocities exceed the rotational velocities at all angles $`\theta `$ except very near the radial turning point of the outflow. Furthermore, many of the spectra have a slight asymmetry in which the blue-shifted peak shows stronger emission. This is suggestive of the well-known outflow asymmetry, which is due to the more efficient self-absorption of blue-shifted emission by overlying material moving towards the observer (Shepherd & Churchwell shch (1996), Gregersen et al. greg (1997), Zhou et al. zhou (1996)). The effect is slight in this case since the low mass solution, with a $`1M_{\odot }`$ central object, produces a peak optical depth of only $`0.2`$ along most lines of sight. The massive solutions, discussed in Sect. 3.3.2 and 3.3.1, produce much higher optical depths and lead to a much more pronounced asymmetry.
Fig.13 shows a set of channel maps for the low mass solution. To make these synthetic maps, we have integrated the emission over $`1\,\mathrm{km\,s^{-1}}`$ wide channels from $`-6.5`$ to $`6.5\,\mathrm{km\,s^{-1}}`$. We have deliberately excluded the emission with $`|v|<1\,\mathrm{km\,s^{-1}}`$, since we find that the lowest velocity emission is essentially featureless. We have convolved the maps with a Gaussian telescope beam with a FWHM of $`800`$ AU; for example, this corresponds to a $`5`$ arcsec beam at a distance of $`160`$ pc.
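Schematically, each panel of such a map is a velocity integral of the spectral cube followed by a convolution with the telescope beam; a sketch (array names and units are our own):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def channel_map(cube, v_ch, v_lo, v_hi, beam_fwhm_pix):
    """Integrate a (ny, nx, nv) cube of radiation temperatures over one
    velocity channel, then smooth with a Gaussian beam of the given FWHM
    in map pixels (FWHM = 2.355 sigma for a Gaussian)."""
    dv = v_ch[1] - v_ch[0]
    in_channel = (v_ch >= v_lo) & (v_ch < v_hi)
    moment = cube[:, :, in_channel].sum(axis=2) * dv   # K km/s
    return gaussian_filter(moment, sigma=beam_fwhm_pix / 2.355)
```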
The outflow is apparent at all velocities shown in our maps. We obtain a relatively wide outflow “cone” at the lowest velocities ($`0.5`$ to $`1.5\,\mathrm{km\,s^{-1}}`$) and very well collimated jet-like emission at velocities between $`1.5`$ and $`3.5\,\mathrm{km\,s^{-1}}`$. At higher velocities, the jet-like feature breaks up into outflow “spots” that move away from the central object as the velocity increases. The most important feature of these channel maps is that the opening angle of the outflow gradually decreases as the magnitude of the velocity increases. As discussed in FH2, this effect is entirely due to the angular velocity sorting of the outflow solutions. Our models always have the property that the outflow velocity increases towards the axis of symmetry. Thus, our models naturally produce outflows in which most of the material is poorly collimated and moves at relatively low velocities, but the fastest jet-like components are very well collimated towards the axis of symmetry. This property has, in fact, been observed in a number of outflow sources (Bachiller et al. bachiller (1990), Guilloteau et al. Guilloteau (1992), Gueth et al. Gueth (1996)).
Several authors have noted that molecular outflows seem to be characterized by a power-law dependence of total integrated intensity as a function of velocity (Rodriguez et al. Rodriguez (1982), Masson & Chernin Masson (1992), Chandler et al. Chandler (1996)). In Fig.14, we have summed all of the spectra in order to show how the intensity is distributed with velocity. The solid line in the figure represents the blue-shifted emission, while the dashed line represents the red-shifted emission. We note that there are no important differences, due to the somewhat low optical depth of our solution. We find that the intensity $`I`$ is well fit by a power law $`I\propto v^{-3.5}`$ over all velocities greater than approximately $`0.2\,\mathrm{km\,s^{-1}}`$. This power-law behavior has, in fact, been found for several real outflows, and the index is in reasonable agreement with the available data (Cabrit et al. cab (1996), and Richer et al. PPIV (1998)). It remains to be seen, however, how sensitive the power-law index is to the parameters of our model. This important issue will be resolved in the second paper in this series, where we more completely explore the line emission of our models.
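The quoted index is just the slope of a straight-line fit in log-log space above the stated velocity cutoff; for instance (a sketch, names ours):

```python
import numpy as np

def intensity_velocity_index(v, I, v_min=0.2):
    """Fit I ~ v**gamma over v > v_min; for the low mass solution the text
    finds gamma close to -3.5."""
    m = (v > v_min) & (I > 0)
    gamma, _ = np.polyfit(np.log10(v[m]), np.log10(I[m]), 1)
    return gamma
```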
An important feature of Fig.14 is that the emission falls very rapidly with increasing velocity. For example, the intensity at even a rather modest velocity of $`3\,\mathrm{km\,s^{-1}}`$ is only $`0.01\%`$ of the peak intensity. At some velocity, the emission will fall below the noise threshold that is always present in real observations. Past this velocity, the emission shown in Fig.13 would be undetectable.
In Fig.15, we show a velocity-position diagram for a cut along the outflow axis. We have actually computed spectra along several cuts parallel to the outflow axis and convolved them with a Gaussian beam to produce a single velocity-position diagram. We find that there is substantial emission at low velocities along the entire cut. The highest velocities are found near the map centre, as must be the case since $`v\propto r^{-1/2}`$ by Eq. 1. We note that the emission contours in this figure are logarithmically spaced; thus, we find that the most intense emission occurs at low velocities at all map positions.
The outflow is apparent in Fig.15 by the emission extending out to approximately $`\pm 8\,\mathrm{km\,s^{-1}}`$ on either side of the outflow. Moreover, the general appearance of the position-velocity diagram is suggestive of a globally accelerated flow, since higher velocities generally appear at greater distances from the central source. How can this be reconciled with the fact that velocity decreases with radius in our model? It was shown in FH2 that the apparent acceleration is due to the unique velocity sorting that is predicted by our model. The radial velocity always increases with decreasing polar angle near the outflow axis. Our cut crosses progressively smaller angles with increasing distance from the central object, and therefore encounters higher velocity material.
The ragged appearance of the highest velocity contours is entirely due to the limitations of our numerical procedures at high velocities. In our model, the highest velocity components are extremely localized near the central star and the outflow axis. Thus, whether or not a pencil beam passes through such a component is a matter of chance when the spatial extent falls below the grid spacing. This apparently happens when $`|v|\gtrsim 5\,\mathrm{km\,s^{-1}}`$.
### 4.2 Massive Solutions
#### 4.2.1 The Radiative Case
It is useful to compare the $`{}^{13}CO`$ emission of the radiative solution discussed in Sect. 3.3.2 with the low mass solution discussed above. We have assumed a mass of $`10M_{\odot }`$, which implies a fiducial radius of $`r_o=7900`$ AU according to Eq. 13. We find, heuristically, that the peak optical depth in the line centre decreases with increasing central mass. Since the mass is a free parameter, we have tuned it to achieve realistic optical depths of order unity. The mass is larger than for the low mass solution discussed above, but the effects of radiative heating are more likely to become important in outflows surrounding more massive protostars, in any case. The only other parameter that we have changed is the FWHM of the Gaussian beam that we convolve with our solution; here, we can use a slightly larger and more realistic 10 arcsec beam without smearing out the details of the solution. As in Sect. 4.1, we have computed spectra on a $`61\times 120`$ grid of map positions, assuming a $`45^{\circ }`$ inclination angle.
Qualitatively, we find a great deal of similarity with the low mass solution. The line profiles shown in Fig.16 have similar structure, and the outflow shown in Fig.17 is collimated to a similar degree. The main difference is the velocity of the flow. In the radiative case, we find that there is significant emission out to at least $`10\,\mathrm{km\,s^{-1}}`$, and we find some low intensity emission out to $`15\,\mathrm{km\,s^{-1}}`$; this is most clearly illustrated by the position velocity diagram shown in Fig.19. Examining the intensity-velocity diagram shown in Fig.18, we find that the intensity $`I\propto v^{-3.6}`$ for velocities greater than about $`0.5\,\mathrm{km\,s^{-1}}`$. It is striking that nearly the same power law index should be obtained for both the low mass and radiative solutions. Admittedly, we do not fully understand the reasons for this similarity in behavior at present. We also doubt that this sort of behavior is a generic feature of our models. Nevertheless, we are encouraged that at least some of our models are capable of reproducing such realistic observed properties.
#### 4.2.2 The Virial-Isothermal Case
We have also computed the $`{}^{13}CO`$ emission of the high mass solution discussed previously in Sect. 3.3.1. We find that the optical depth is extremely high unless the mass of the central protostar is chosen to be rather large; therefore, we have used a mass of $`100M_{\odot }`$ and a corresponding fiducial radius of $`r_o=44000`$ AU to compute the spectra. Even using such a large central mass, we find that the peak optical depth in the line centre is very high, with values exceeding 5 in many positions. Such high optical depths cause real difficulty for the present version of our code, since any given line of sight encounters the “photosphere”, where the optical depth at some velocity is approximately unity, rather abruptly. Thus, a great deal of care was taken to ensure that the solution was well-sampled along each line of sight. We note that the corresponding increase in computational time required us to decrease the spatial resolution of our maps. Here, we employ a $`41\times 80`$ grid, as opposed to the $`61\times 120`$ grid used elsewhere. The inclination angle is $`45^{\circ }`$, as in the previous two cases.
The spectra shown in Fig.20 are much more complex than the spectra of the low mass or radiative solutions. Many show multiple sharp peaks and very extended wing emission out to well past $`20\,\mathrm{km\,s^{-1}}`$ (as in Shepherd & Churchwell shch (1996) and Gregersen et al. greg (1997) for CO maps of molecular outflows, and Doeleman et al. doeleman (1999) for SiO masers). We are quite certain that the multiple peaks are real, and not numerical artifacts, since we have verified that their location and appearance are independent of the spectral resolution and of the degree to which a solution is sampled along a line of sight. It remains unclear to us in detail why such jagged peaks should be present, although it must depend on the variation of optical depth, density and velocity field within the ‘beam’. This behavior should be studied as a function of inclination to the line of sight.
Despite the unusual appearance of the spectra, the channel maps shown in Fig.21 are quite reasonable. A well-collimated outflow is apparent at velocities greater than about $`10\,\mathrm{km\,s^{-1}}`$. The position velocity diagram shown in Fig.23 shows significant emission at most positions out to approximately $`20\,\mathrm{km\,s^{-1}}`$. We obtain emission at higher velocities (up to approximately $`40\,\mathrm{km\,s^{-1}}`$), but this emission is restricted to the central map positions nearest the protostar. The intensity-velocity diagram (Fig.22) again shows a power law behavior, but the slope is different from that obtained in the previous two cases; here we find that $`I\propto v^{-4.1}`$, which is still inside the range allowed by the available observations (Cabrit et al. cab (1996), Richer et al. PPIV (1998)).
## 5 Discussion
### 5.1 The Alfvénic Point
All the solutions presented previously were completely super-Alfvénic. Here we report a trans-Alfvénic solution in order to show why such solutions are generally not preferred. In a trans-Alfvénic solution, the outflow size is reduced and the ultimate velocities are smaller. Therefore this type of solution is less efficient at producing high velocity outflows. Entirely sub-Alfvénic solutions can also be found, but they admit only pure accretion. Fig.24 shows the density profile for two super-Alfvénic solutions (the two uppermost plots, A and B) and one trans-Alfvénic solution (C). Case (A) presents a model where the density is well above the critical Alfvénic density. In this case, the Alfvénic Mach number is larger than unity. The second case (B) has almost the critical density at the turning point, and consequently the Alfvénic Mach number almost reaches unity at this point. However the solution always remains super-Alfvénic. Finally the lower panel (case C) displays a trans-Alfvénic solution, showing that the size of the outflow has been reduced drastically. Therefore the most interesting solutions for circulation flow appear to be the super-Alfvénic ones.
### 5.2 Comparison with FH1
Fiege & Henriksen (fiege1 (1996)a, FH1) have shown that $`r`$-self-similar quadrupolar circulation can be a relatively realistic model for bipolar outflow. They did not, however, include the Poynting flux that can assist the outflow, as in the present paper. The inclusion of this effect allows us to obtain faster and more collimated outflows. The path followed by material also differs from FH1, as shown in Fig.25, where streamlines from FH1 and from the present models are plotted. All the solutions start from approximately the same initial position in the equatorial region with the same set of parameters. Streamlines from the present model get closer to the source than the FH1 solution, for both radiative and virial-isothermal cases. Thus, in the present model, the flow passes relatively close to the central mass. The limiting speed is about the escape speed from the central mass (and the jet speed) of a few hundred $`\mathrm{km\,s^{-1}}`$. We note that the most collimated of the three solutions is the radiative case.
### 5.3 Stability Analysis
The steady configurations presented in this article should develop strong shocks in the axial region due to the strong collimation created by magnetic and pressure forces. Moreover, current-carrying jets, as in the present model, are liable to Kelvin-Helmholtz (KH), pressure driven (PD) and magnetic instabilities driven by the electrical current. These so-called current-driven (CD) instabilities have been studied only recently in the context of astrophysical jets (Appl appl (1996), Appl et al. LBA (1999)). We use the results of the latter linear stability analysis in order to get a rough estimate of CDI and KHI in our magnetic configuration. Growth rates $`\mathrm{\Gamma }`$ of the different modes show that the CD helical mode ($`m=1`$) dominates the CD pinching mode ($`m=0`$) and that CD and KH instabilities become comparable when the Mach number approaches unity. This is the case in the present self-similar model. The CD instability (with a typical wavelength $`\lambda _{CD}\simeq \frac{2}{5}R_{outflow}`$, $`R_{outflow}`$ being the characteristic size of the outflow) should be located on a current sheet where reconnection and particle acceleration might occur. KH pinching modes ($`\lambda _{KH}\simeq 3\times R_{outflow}`$) should dominate KH helical modes, and consequently give rise to internal shocks in the outflow. All these results require numerical simulations, which we plan to study in another paper.
## 6 Conclusions
In this paper we have presented a model based on the $`r`$-self-similarity assumption applied to the basic equations of ideal axisymmetric and stationary MHD, including Poynting flux. Detailed comparisons with the observations have been computed and characteristic scales of the problem have been given as functions of source properties. The luminosity needed for radiative heating is smaller even while driving faster outflows than in the previous model (FH1). The model geometry implies a natural connection between the fast ionized jets seen near the polar axis of the wind, and the slower and less-collimated molecular outflows that surround them, although the jets may be partly due to central activity not included in our model.
The most massive protostars produce the fastest flows with maximum radial velocities in the axial region. Radiative heating produces faster outflows compared to the virial-isothermal case, for both low and high mass objects. Larger opening angles are associated with smaller magnetic fields. Consequently, a gradual evolutionary loss of magnetic flux may result in outflows that widen as they age.
Synthetic spectral lines from $`{}^{13}CO`$ (J=$`1-0`$) allow direct comparison with observational results via channel maps, maps of total emission, position-velocity and intensity-velocity diagrams. The model reproduces the observational features well. Due to internal instabilities in the most collimated ‘jet’ part of the flow, the time evolution of the steady model should give rise to regularly spaced knots, and possibly to excitation, ionization and non-thermal particle acceleration due to field annihilation and shock dissipation. Thus the model in its current state of development shows that radiative heating combined with Poynting flux driving is efficient in producing high velocity outflows when the characteristic luminosity of the forming star (as deduced from observations) is used. More extensive searches in parameter space, the detailed fit to specific sources, and time dependent modeling all remain to be done. Although we do not discuss the regions excluded from the self-similar region of the flow, the current correspondence to observational features is sufficiently exact that we expect simultaneous infall/outflow (‘circulation’) to be an essential part of any realistic model.
###### Acknowledgements.
We would like to thank Sylvie Cabrit for constructive and useful discussions.
## Appendix A The Equations
The mass flux conservation equation remains the same as in FH1:
$$\left(1+2\alpha \right)\mu u_r+\frac{1}{\mathrm{sin}\theta }\frac{d}{d\theta }\left(\mu u_\theta \mathrm{sin}\theta \right)=0,$$
(33)
while the self-similar variable $`y`$ is simply replaced by its poloidal component in the following set of equations:
Magnetic Flux conservation:
$$\frac{\left(\alpha +5/4\right)u_r}{y_p}+\frac{1}{\mathrm{sin}\theta }\frac{d}{d\theta }\left(\frac{u_\theta \mathrm{sin}\theta }{y_p}\right)=0$$
(34)
Radial component of momentum equation:
$`u_\theta {\displaystyle \frac{du_r}{d\theta }}\left(1-M_{ap}^{-2}\right)-\left(u_{\theta }^{2}+u_{\varphi }^{2}\right)\left(1-{\displaystyle \frac{\alpha +1/4}{M_{ap}^2}}\right)`$
$`-{\displaystyle \frac{u_r^2}{2}}-\left({\displaystyle \frac{3}{2}}-2\alpha \right)\mathrm{\Theta }+1+{\displaystyle \frac{u_ru_\theta }{M_{ap}^2y_p}}{\displaystyle \frac{dy_p}{d\theta }}=0`$ (35)
Other equations of the system contain terms with mixed toroidal and poloidal components; Angular Momentum conservation:
$`{\displaystyle \frac{1}{u_\varphi }}{\displaystyle \frac{du_\varphi }{d\theta }}\left(1-{\displaystyle \frac{1}{M_{ap}M_{a\varphi }}}\right)+{\displaystyle \frac{u_r}{u_\theta }}\left(1/2-{\displaystyle \frac{\alpha +1/4}{M_{ap}M_{a\varphi }}}\right)`$
$`+\mathrm{cot}\theta \left(1-{\displaystyle \frac{1}{M_{ap}M_{a\varphi }}}\right)+{\displaystyle \frac{1}{y_\varphi M_{ap}M_{a\varphi }}}{\displaystyle \frac{dy_\varphi }{d\theta }}=0`$ (36)
Faraday’s law plus zero comoving electric field:
$$\frac{d(u_\varphi u_\theta )}{d\theta }+\left[\alpha -\frac{1}{4}\right]u_\varphi u_r+u_\varphi u_\theta \frac{d}{d\theta }\mathrm{ln}\left[\frac{1}{y_p}-\frac{1}{y_\varphi }\right]=0$$
(37)
$`\theta `$-component of momentum equation:
$`{\displaystyle \frac{u_ru_\theta }{2}}\left(1-{\displaystyle \frac{2\alpha +1/2}{M_{ap}^{2}}}\right)+u_{\varphi }^{2}\mathrm{cot}\theta \left(M_{a\varphi }^{-2}-1\right)`$
$`+{\displaystyle \frac{u_r}{M_{ap}^{2}}}{\displaystyle \frac{du_r}{d\theta }}+u_\theta {\displaystyle \frac{du_\theta }{d\theta }}+{\displaystyle \frac{u_\varphi }{M_{ap}^2}}{\displaystyle \frac{du_\varphi }{d\theta }}-{\displaystyle \frac{u_{r}^{2}}{y_pM_{ap}^{2}}}{\displaystyle \frac{dy_p}{d\theta }}`$
$`-{\displaystyle \frac{u_{\varphi }^{2}}{y_\varphi M_{a\varphi }^{2}}}{\displaystyle \frac{dy_\varphi }{d\theta }}+{\displaystyle \frac{d\mathrm{\Theta }}{d\theta }}+{\displaystyle \frac{\mathrm{\Theta }}{\mu }}{\displaystyle \frac{d\mu }{d\theta }}=0`$ (38)
To treat the radiative heating we either hold $`\mathrm{\Theta }`$ constant (virial-isothermal case) as discussed above, or we include the equations of radiative diffusion (radiative case) as in FH1. The equations (17), (18) and (22) of FH1 continue to apply provided that the optical depth given above is $`\ge 2/3`$.
## Appendix B The First integrals
Equations (33) and (34) together yield the first integral
$$\mu (u_\theta \mathrm{sin}\theta )^{\frac{1/4-\alpha }{5/4+\alpha }}y_p^{\frac{1+2\alpha }{5/4+\alpha }}\equiv q_1/4\pi ,$$
(39)
where $`y_p`$ replaces $`y`$ in FH1. Moreover directly from Eq. 33 and the poloidal stream-line equation $`dr/rd\theta =u_r/u_\theta `$ we have the stream-line integral
$$r^{(1+2\alpha )}\mu u_\theta \mathrm{sin}\theta \equiv \eta _1.$$
(40)
From this we see immediately that $`\alpha >-1/2`$ if we are to have quadrupolar stream-lines for which $`r\to \infty `$ as $`u_\theta \to 0`$. A second stream-line integral follows from Eq. 34 and the stream-line equation, but it is not independent of Eqs. 39 and 40, since it follows by eliminating $`\mu `$ between these two. The limit $`\alpha <1/4`$ follows immediately from Eq. 39 if we require finite $`\mu `$, $`u_\theta =0`$ and odd symmetry in the magnetic field at the equator. The integral then implies that $`y_p\to \infty `$ and so the field passes through zero in the equatorial plane. This conclusion is avoided on the polar axis since the radial velocity is free to become very large there. This choice has the additional merit that the equator is super-Alfvénic as well as the axis.
Using Eq. 34 together with Faraday’s law (37) we obtain the integral
$$u_\varphi \left[\frac{1}{y_p}-\frac{1}{y_\varphi }\right]\left(\frac{\mathrm{sin}\theta }{y_p}\right)^{\frac{1/4-\alpha }{5/4+\alpha }}u_\theta ^{\frac{3}{2(5/4+\alpha )}}=q_2.$$
(41)
Note that the constant $`q_2`$ is very interesting because it can be used to measure the strength of the electric field which is in fact
$$E=\left[\frac{GM}{cr_o}\right]\sqrt{\frac{M}{r_o^3}}\left(\frac{r}{r_o}\right)^{\alpha -\frac{5}{4}}\left[\frac{1}{y_p}-\frac{1}{y_\varphi }\right]\left(u_\varphi \times u_p\right).$$
(42)
The integral is not present in FH1 since the electric field is everywhere zero there.
In general we cannot find an angular momentum integral explicitly (this is related to the difficulty of expressing scale invariance in an action principle; B. Gaffet, private communication), but in the special case of similarity index $`\alpha =1/4`$ such an explicit integral exists in the form (cf FH1):
$$\frac{u_\theta \mathrm{sin}\theta }{M_{ap}^2}\equiv q_3\left(u_\varphi \mathrm{sin}\theta \right)^3\left(1-\frac{1}{M_{ap}M_{a\varphi }}\right)^3.$$
(43)
Although this is a very special case not normally related to our numerical solutions (the mass density is constant in radius, so that eventually the dominance of the central mass is broken as $`r`$ increases), it is instructive to consider it further. Eq. 39 becomes simply $`\mu =(q_1/4\pi )y_p^{-1}`$ so that $`M_{ap}^2=q_1y_p`$ and $`M_{a\varphi }^2=q_1y_\varphi ^2/y_p`$. Then Eqs. 41 and 43 can be solved together for $`y_p`$ and $`y_\varphi `$ to give
$$\frac{1}{y_p}=\frac{q_2q_3\mathrm{sin}^2\theta +q_1q_3u_\varphi u_\theta \mathrm{sin}^2\theta }{q_3u_\varphi u_\theta \mathrm{sin}^2\theta +u_\theta ^2/u_\varphi ^2},$$
(44)
and
$$\frac{1}{y_\varphi }=\frac{q_1q_3u_\varphi ^4\mathrm{sin}^2\theta -q_2}{q_3u_\varphi ^4\mathrm{sin}^2\theta +u_\varphi u_\theta }.$$
(45)
We therefore observe that near the equator $`y_p`$ can be infinite only if $`u_\varphi \to 0`$ faster than $`u_\theta `$. This feature is shared with our numerical solutions, but of course the equator is strictly outside the domain of the solution. From the second equation we see that $`y_\varphi `$ tends to zero at the equator under the same conditions. If one were to insist that $`u_\varphi \ne 0`$ at the equator, then Eq. 43 shows that we need $`M_{a\varphi }M_{ap}=1`$ there, or, by the above Mach number definitions, $`y_\varphi =1/q_1`$. Our solutions for $`y_p`$ and $`y_\varphi `$ show that this is only possible if $`q_2=0`$, that is, with zero Poynting flux. Thus it seems from our analysis of this special case (however, we always find $`u_\varphi =0`$ at the equator in our solutions) that the presence of a non-zero Poynting flux is not compatible with a Keplerian equatorial disc. The material there is falling radially towards the star. This is a possible non-linear end state for an instability that couples disc rotation to Alfvén waves propagating out of the plane (Shu et al. shu94 (1994), Dendy et al. dentagger (1998)).
# Reply to comment on “Dynamic scaling in the spatial distribution of persistent sites”
In their comment on our paper, Ben-Naim and Krapivsky claim to have presented numerical evidence against our conclusions. We show that their claims are not valid, for the following reasons, which we discuss here.
(i) The arguments given in against non-universality are incorrect.
(ii) The apparent disagreement between the results arises possibly from a difference in the initial conditions used.
(iii) We present new numerical results, which support our earlier conclusions.
In , the authors emphasise that their scaling function is independent of all initial conditions. This is a surprising result because, unlike the case of the particle density $`n(t)`$, the prefactor $`\mathrm{\Gamma }`$ appearing in the asymptotic expression $`P(t)=\mathrm{\Gamma }t^{-\theta }`$ depends on the initial density $`n_0`$. To show this, we first note that for an initial particle density $`n_0`$, the effects of the annihilation reaction will be felt only after a time interval $`t_0\sim (Dn_0^2)^{-1}`$. For $`t\gg t_0`$, $`n(t)\sim t^{-1/2}`$ and so $`P(t)=P(t_0)(t/t_0)^{-\theta }`$. On the other hand, for $`t\ll t_0`$, $`n(t)\simeq n_0`$ so that $`P(t)\simeq P(0)e^{-\alpha n_0\sqrt{Dt}}`$ in this regime (following an argument given in ). Here $`P(0)`$ is the initial density of persistent sites and $`\alpha `$ is a numerical constant. Combining the two, one finds $`\mathrm{\Gamma }\propto P(0)n_0^{-2\theta }`$, which we have verified in simulations. (Similar arguments can be used to show that the prefactor for the particle density $`n(t)`$ is independent of $`n_0`$).
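Explicitly, matching the two regimes at $`t\simeq t_0`$ and using $`n_0\sqrt{Dt_0}\simeq 1`$ gives

$$\mathrm{\Gamma }=P(t_0)t_0^\theta \simeq P(0)e^{-\alpha }\left(Dn_0^2\right)^{-\theta }\propto P(0)n_0^{-2\theta },$$

which is the dependence quoted above; this one-line matching uses nothing beyond the two limiting forms just stated.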
It is easy to see that $`1/2`$ cannot be a universal value for the dynamical exponent $`z`$ proposed in , contrary to the claims in . To show this, let us consider the $`q`$-state Potts model in the limit $`q\rightarrow \mathrm{\infty }`$, the dynamics of which is given by the $`A+A\rightarrow A`$ reaction, which also has $`n(t)\sim t^{-1/2}`$. It is known from the exact solution that $`\theta =1`$ for this model. If we now follow the arguments in , we find that their scaling function tends to a constant for $`x\ll 1`$ and decays exponentially for $`x\gg 1`$ (where $`x=lt^{-1/2}`$). Recomputing the persistent fraction $`P(t)`$ using this distribution yields the inconsistent result $`P(t)\sim t^{-1/2}`$.
Coming to the numerics, first of all we would like to point out that the $`L(t)`$ curves in for $`n_0=0.5`$ and $`n_0=0.8`$ are almost identical, unlike the corresponding plots given in our paper. This is possibly due to a difference in the initial condition: the entire lattice being taken as persistent at $`t=0`$, i.e., $`P(0)=1`$ instead of $`P(0)=1-n_0`$ as in . In the former case, the fragmentation of the lattice into clusters of persistent and non-persistent sites is delayed by one MC step (for $`D=1/2`$). Consequently, the effective particle density is only $`n_0^{\prime }=n_0-\delta n_0`$, where $`\delta n_0`$ is the decrease in density over one time step. We have found numerically that for $`n_0=0.2`$, 0.5 and 0.8, $`n_0^{\prime }\simeq 0.15`$, 0.25 and 0.29, respectively. In this small range, it is difficult to make any inference regarding the universality of exponents.
Finally, we present more numerical results (Fig. 1) which endorse the conclusions of our paper. In particular, we see that as $`n_0\rightarrow 1`$, $`z\rightarrow \frac{\theta }{n_0}`$ (Table I), in agreement with our earlier observations.
# HEAT CAPACITY MEASUREMENTS IN PULSED MAGNETIC FIELDS
## I Technique
The 60 Tesla Long-Pulse (60TLP) magnet was recently commissioned at the Los Alamos National Laboratory. This magnet produces a flat-top field for a period of 100 ms at 60 Tesla, and for longer times at lower fields, e.g. 0.5 s at 35 Tesla. During the entire pulse, the magnetic field varies at a maximum ramp rate of $`dB/dt\simeq 400`$ Tesla/sec. Together, these properties allow for the development of brand-new tools to study materials in pulsed magnetic fields. Heat capacity measurement is one of these tools. We have built a probe made of plastic materials that allows us to perform heat capacity measurements at temperatures between 1.6 K and 20 K in fields up to 60 Tesla. To maximize the available experimental space a novel vacuum tapered seal was developed. The conical plug part of the joint is made out of G-10, and the matching vacuum can is made out of 1266 Stycast epoxy. The differential thermal contraction between the parts aids in producing a superfluid-tight joint. The simple construction of the joint resulted in a 16 mm diameter experimental region.
The main parts of the probe inside the vacuum can are the temperature regulated block (TRB) and the silicon heat capacity platform with sample, thin film resistive heater, and bare chip Cernox thermometer. The TRB is made of 2850 Stycast epoxy chosen for its fair thermal conductivity, and is thermally connected to the bath via two dozen thin (gauge 44 and 38) 4-inch long copper wires. The heat capacity platform is suspended with nylon strings, and the electrical connections to the thermometer and heater provide a thermal link as well. The resulting thermal equilibrium time constants for the TRB and the platform were measured to be on the order of a few minutes. Therefore, during the magnetic field pulse, which lasts for about 2 seconds, both the TRB and the heat capacity platform can be regarded as thermally isolated from the bath and from each other, i.e., under adiabatic conditions. The third time constant $`\tau _{st}`$ is that of the heat capacity stage including sample, platform, thermometer, and heater. The sample’s internal thermal relaxation time constant $`\tau _{int}`$ can be less than a millisecond at a temperature of a few Kelvin. However, it grows rapidly as the temperature is increased. At the low-temperature end $`\tau _{st}`$ can increase substantially due either to an increase in $`\tau _{int}`$ (electronic or nuclear magnetic entropy) or to the boundary thermal resistance between different constituents of the stage. The temperature interval between 1 K and 20 K is therefore a convenient starting point for developing heat capacity measurements in pulsed magnetic fields.
We use a heat pulse method to measure heat capacity, where a known amount of heat is delivered to the sample using a chip resistor as a heater element. The heat capacity stage must come to equilibrium both before and after the heat pulse is delivered, while the magnetic field remains constant. The flat field plateau of the 60TLP magnet allows this to occur. The temperature of the stage is measured with a Cernox chip resistance thermometer, which was calibrated both in DC fields up to 30 Tesla and in pulsed fields up to 60 Tesla. The heat capacity of the sample is then determined as the ratio of the heat delivered to the sample to the change in its temperature. The low ramp rate of the long-pulse magnet (in comparison with a short-pulse, capacitor-driven magnet with a total pulse time of about 10 ms) reduces the unavoidable eddy current heating in metallic samples. However, during a magnetic field sweep the temperature does not stay constant even in the total absence of eddy current heating, due to the magnetocaloric effect. The heat capacity stage is thermally isolated from the bath, and remains under adiabatic conditions during the magnetic field pulse. The dependence of the temperature of the stage on the magnetic field during the pulse is then given by the expression
$$\left(\frac{\delta T}{\delta H}\right)_S=-\frac{T}{C_H}\left(\frac{\delta M}{\delta T}\right)_H,$$
(1)
where $`T`$ is the temperature, $`H`$ the magnetic field, $`M`$ the magnetization, and $`C_H`$ the specific heat of the sample; the subscripts $`S`$ and $`H`$ indicate constant entropy and constant magnetic field, respectively. Eq. (1) describes the process of adiabatic demagnetization cooling, where $`\left(\delta M/\delta T\right)_H`$ is negative and therefore $`\left(\delta T/\delta H\right)_S`$ is positive. Such a system warms as the field is ramped up, and cools reversibly during the ramp-down portion of the magnetic field pulse. However, the magnetization can in general also increase with temperature. Such a sample would then cool during the ramp-up, and warm reversibly during the ramp-down of the magnetic field. Below we show examples of both types of behavior.
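Eq. (1) can also be inverted: given independently measured magnetization data, the specific heat follows from the adiabatic $`T(H)`$ trace recorded during the ramps. A minimal sketch (array and function names are our placeholders, not part of the actual acquisition software):

```python
# Sketch: specific heat from the magnetocaloric trace,
#   C_H = -T * (dM/dT)_H / (dT/dH)_S.
# H and T_trace are the field and stage temperature sampled during a ramp;
# dM_dT(T, H) interpolates independently measured magnetization data.
import numpy as np

def specific_heat_from_ramp(H, T_trace, dM_dT):
    dT_dH = np.gradient(T_trace, H)                       # (dT/dH)_S along the ramp
    dMdT = np.array([dM_dT(T, h) for T, h in zip(T_trace, H)])
    return -T_trace * dMdT / dT_dH                        # C_H at each ramp point
```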
## II Results
The first heat capacity experiments in the 60TLP magnet were performed on a single crystal of metallic YbInCu<sub>4</sub>, grown from an In-Cu flux as described previously. This system undergoes a first-order valence transition at 42 K in zero field. The specific volume increases by 0.5% upon cooling through the phase transition, with an accompanying rise in the Kondo temperature $`T_K`$ from 25 K to 500 K. It is believed that, unlike the case of Ce, where the phase transition is described within a Kondo-collapse scenario, the valence transition in YbInCu<sub>4</sub> is driven by band-structure effects. The complete magnetic field–temperature phase diagram was obtained in DC Bitter magnets at NHMFL/Tallahassee and in capacitor-driven pulsed magnets at NHMFL/Los Alamos. This work showed that the transition can be suppressed down to T = 0 K with an applied field of 34.3 Tesla.
The length of the flat top can be close to 0.5 s in the 60TLP magnet for magnetic fields less than or equal to 35 Tesla. When $`\tau _{st}`$ is much smaller than the length of the plateau, a sequence of heat pulses can be delivered to the stage within the flat portion of the field profile, with sufficient time for the calorimeter to come to equilibrium before and after each of the heat pulses. This situation is illustrated by Fig. 1(a) for a field of 20 Tesla, where a sequence of five 10 ms long heat pulses was delivered to the heat capacity stage during the plateau of a single magnetic field pulse. The thermometer comes to equilibrium well before the next heat pulse is delivered, and the temperature is determined before and after each of the pulses. In this way a series of five C<sub>H</sub>(T) data points is collected in a single “shot” experiment, as the initial temperature for each of the heat capacity experiments is increased due to the previous heat pulse. The data from this and one zero-field “shot” are shown in Fig. 1(b). We fit the data with a sum of T-linear (electronic) and T-cubic (phononic) terms $`AT+BT^3`$. For zero field we obtain $`A=49.5\pm 0.4`$ $`\mathrm{mJ}/\mathrm{molK}^2`$ and $`B=(0.85\pm 0.03)`$ $`\mathrm{mJ}/\mathrm{molK}^4`$. The value of A is in excellent agreement with available data in the literature. At 20 Tesla we obtain $`A=80\pm 5`$ $`\mathrm{mJ}/\mathrm{molK}^2`$ and $`B=(0.81\pm 0.07)`$ $`\mathrm{mJ}/\mathrm{molK}^4`$. The magnitude of the cubic term due to phonons is field-independent, as expected. The increase of the linear term with field is likely due to the scaling with magnetic field observed for various properties of YbInCu<sub>4</sub>.
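For illustration, a sketch of the analysis chain for such a “shot” — one $`C=Q/\mathrm{\Delta }T`$ point per heat pulse, followed by a least-squares fit of $`AT+BT^3`$; all numbers below are hypothetical placeholders, not the measured data:

```python
# Sketch of the "shot" analysis: one C = Q/(T2 - T1) point per heat pulse,
# then a least-squares fit of C(T) = A*T + B*T**3. Placeholder numbers only.
import numpy as np

Q = 2.0e-4                                       # J per 10 ms heater pulse (hypothetical)
T_before = np.array([2.0, 2.6, 3.3, 4.1, 5.0])   # K, equilibrium before each pulse
T_after  = np.array([2.6, 3.3, 4.1, 5.0, 6.0])   # K, equilibrium after each pulse

C = Q / (T_after - T_before)                     # heat capacity at the midpoint...
T = 0.5 * (T_before + T_after)                   # ...temperature of each pulse

X = np.column_stack([T, T ** 3])                 # linear least squares in (T, T^3)
(A, B), *_ = np.linalg.lstsq(X, C, rcond=None)
print(A, B)
```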
Another way to increase the data acquisition rate relies on the programmable nature of the 60TLP magnet. A series of plateaus at different magnetic fields can be produced within a single experiment. Fig. 2 displays the data for one such experiment on YbInCu<sub>4</sub>, with four plateaus at 25, 30, 35, and 40 Tesla, each 130 ms long. At each of the magnetic field plateaus a heat pulse is applied to the stage, and a heat capacity experiment is performed. In addition, as the field was changed between 30 and 35 Tesla through the first-order phase boundary, the temperature of the sample was observed to go down on the up-sweep, and up on the down-sweep, due to the magnetocaloric effect. These features are very sharp, and allow direct determination of the phase diagram of YbInCu<sub>4</sub>. This is yet another complementary way to collect data using the heat capacity apparatus. It was not possible to measure heat capacity in the high-field phase due to a large increase in $`\tau _{st}`$.
Preliminary measurements of the heat capacity of UBe<sub>13</sub> and Ce<sub>3</sub>Bi<sub>4</sub>Pt<sub>3</sub> were performed up to 60 Tesla. Fig. 3 shows the temperature variation of UBe<sub>13</sub> and Ce<sub>3</sub>Bi<sub>4</sub>Pt<sub>3</sub> as a function of field during magnetic field pulses to 60 T. This figure illustrates the magnetocaloric effect under the adiabatic conditions of our apparatus. The temperature of the UBe<sub>13</sub> sample increases from 4 K up to 10 K during the ramp up, and returns to 4 K during the ramp down in a very reversible fashion. The opposite is true for Ce<sub>3</sub>Bi<sub>4</sub>Pt<sub>3</sub>, which is colder at 60 T than at 0 T. With available magnetization data it should be possible to calculate the specific heat of these compounds during the ramp portions of the field pulse, providing information in addition to the direct heat capacity measurements at the field plateaus.
## Conclusion
We have demonstrated the feasibility of heat capacity measurements in the pulsed magnetic fields provided by the 60 Tesla Long Pulse magnet at NHMFL/LANL. Direct measurement of heat capacity at field plateaus was clearly demonstrated for a variety of compounds. It appears that thermal equilibrium can be achieved in some compounds even during the field sweep, given the low ramp rates of which the 60TLP magnet is capable. In this situation specific heat data can be obtained via the magnetocaloric effect from the temperature vs. field traces and magnetization data for such compounds. Other types of thermal relaxation-related experiments, such as thermal conductivity and Seebeck effect measurements, are also under development.
## Acknowledgments
This work was conducted under the auspices of the Department of Energy. It was also supported by the In-House Research Program of the NHMFL.
# Extragalactic Radio Sources and CMB Anisotropies
## 1. Introduction
In the last fifteen years, deep VLA surveys have made it possible to extend direct determinations of radio source counts down to $`\mu `$Jy levels at 1.41, 4.86 and 8.44 GHz. At these frequencies, counts now cover about 7 orders of magnitude in flux and reach areal densities of several sources arcmin<sup>-2</sup>.
At bright fluxes, the radio source population is dominated by classical, strongly evolving, powerful radio galaxies (Fanaroff-Riley classes I and II) and quasars, whose counts begin to converge below $`\sim 100`$ mJy. The VLA surveys, however, have revealed a flattening in differential source counts (normalized to Euclidean ones) below a few mJy at 1.41 GHz (Condon & Mitchell 1984), at 4.86 GHz (Donnelly et al. 1987; Fomalont et al. 1991), and, most recently, also at 8.44 GHz (Windhorst et al. 1993, 1995; Partridge et al. 1997; Kellermann et al. 1999; Richards et al. 1998).
Several scenarios have been developed to interpret this “excess” in the number counts of faint radio sources: a non-evolving population of local ($`z<0.1`$) low-luminosity galaxies (Wall et al. 1986); strongly evolving normal spirals (Condon 1984, 1989); and actively star-forming galaxies (Windhorst et al. 1985, 1987; Danese et al. 1987; Rowan–Robinson et al. 1993).
Thus, the currently available deep source counts are more than sensitive enough to include any radio source of the familiar steep and “flat”-spectrum classes contributing to fluctuations detectable by any of the forthcoming space borne CMB anisotropy experiments (see Toffolatti et al., 1998; De Zotti & Toffolatti, 1998). Extrapolations in flux density are not required: the real issue is the spectral behaviour of sources, since existing surveys extend only up to 8.4 GHz and hence a substantial extrapolation in frequency is necessary to reach the frequency bands of the MAP and Planck Surveyor missions. The point has to be carefully discussed, since important spectral features, carrying information on physical conditions of sources, are expected at cm to mm wavelengths. These include the transition from optically thick to thin synchrotron emission for “flat”-spectrum sources, the steepening of the synchrotron spectrum due to radiation energy losses by the relativistic electrons, and the mm-wave excesses due to cold dust emission.
On the other hand, future space missions will also provide complete samples of the extremely interesting classes of extragalactic radio sources characterized by inverted spectra (i.e. flux density increasing with frequency), which are very difficult to detect in radio frequency surveys. Strongly inverted spectra up to tens of GHz can be produced in very compact, high electron density regions, by synchrotron or free-free absorption. This is the case for GHz peaked spectrum radio sources (GPS), which are currently receiving an increasing amount of interest. Also of great interest are advection dominated sources (ADS), which turn out to have a particularly hard radio emission spectrum.
In §2 we briefly discuss the spectral properties, at mm and sub-mm wavelengths, of the different classes of sources mentioned above. In §3 we deal with number counts while, in §4, we present estimates of the angular power spectrum of intensity and polarization fluctuations due to discrete extragalactic sources and discuss the effect of clustering. In §5 we summarize our main conclusions.
## 2. Radio sources at mm and sub-mm wavelengths
### 2.1. Compact, “flat”-spectrum radio sources
The observed spectral energy distributions (SEDs) of “flat-”spectrum radio sources (compact radio galaxies, radio loud QSOs, BL Lacs) generally have a gap at mm/sub-mm wavelengths (see Figure 1). Those sources which have data in this interval frequently show a dip in the mm region, indicative of a cross-over of two components.
The spectral shape carries a good deal of extremely interesting information on the physical properties of sources. For example, in flow models of compact radio sources the spectrum steepens at the frequency at which the radiative cooling time equals the outflow time (cf. Begelman et al. 1984); for “hot spots”, this typically lies in the millimeter or far-IR part of the spectrum, while, in cocoons or extended regions of lower surface brightness, the break moves down to lower frequencies.
According to the basic model of Blandford & Rees (1974) and Scheuer (1974), which is supported by a large body of observational evidence, the spectral break frequency, $`\nu _b`$, at which the synchrotron spectrum steepens, is related to the magnetic field $`B`$ and to the “synchrotron age” $`t_s`$ (in Myr) by $`\nu _b\simeq 96(30\mu \text{G}/B)^3t_s^{-2}`$ GHz. Thus, the systematic multifrequency study at the Planck and MAP frequencies will provide a statistical estimate of the radio source ages and of the evolution of the spectrum with cosmic time: both are pieces of information of great physical importance. Various evolutionary models of the radio emission spectrum have been proposed based on different assumptions (“one-shot” or continuous injection of relativistic electrons, complete or no isotropization of the pitch-angle distribution; see Myers & Spangler 1985 for a summary). These models strongly differ in the form of the falloff above $`\nu _b`$; hence measurements at mm and sub-mm wavelengths will provide crucial information on the physical effects operating in radio sources.
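As a numerical illustration (our own sketch, not code from the cited works), the relation can be inverted to estimate a synchrotron age from an observed break frequency:

```python
# Inverting nu_b ~ 96 (30 muG / B)^3 t_s^-2 GHz to estimate a synchrotron
# age t_s (in Myr) from an observed break frequency; illustrative only.
def synchrotron_age_Myr(nu_b_GHz, B_muG):
    return ((96.0 / nu_b_GHz) * (30.0 / B_muG) ** 3) ** 0.5

# A break at 100 GHz in a 30 muG field corresponds to t_s ~ 1 Myr;
# weaker fields push the same break frequency to much larger ages.
print(synchrotron_age_Myr(100.0, 30.0))   # ~0.98
print(synchrotron_age_Myr(100.0, 10.0))   # ~5.1
```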
Also, many compact “flat”-spectrum sources are observed to become optically thin at high radio frequencies. Correspondingly, their spectral index steepens to values ($`\alpha \simeq 0.75`$) typical of extended, optically thin sources.
In the case of blazars (Brown et al. 1989) the component dominating at cm wavelengths is rather “quiescent” (variations normally occur on timescales of years) and has a spectral turnover at $`\sim 2`$–5 cm, where the transition between optically thick and optically thin synchrotron emission occurs. At higher frequencies the emission is dominated by a violently variable “flaring” component, which rises and decays on timescales of days to weeks, and has a spectral turnover at mm/sub-mm wavelengths. The “quiescent” emission may be identified with emission from the bulk of a relativistic jet aligned close to the line of sight, while the flaring component may arise in smaller regions of enhanced emission within the jet, where emitting electrons are injected or reaccelerated. The study of the flaring component is clearly central to understanding the mechanisms responsible for variability in radio-loud active nuclei; the mm/sub-mm region is crucial in this respect, since it is in this region that the flare spectra become self-absorbed.
It is known from VLBI studies that the apparently smooth “flat” spectra of compact radio sources are in fact the combination of emissions from a number of components with varying synchrotron self absorption turnover frequencies which are higher for the denser knots. It may be argued (Lawrence 1997) that the mm/sub-mm region measures sub-parsec scale, unresolved regions, including the radio core.
Excess far-IR/sub-mm emission, possibly due to dust, is often observed from radio galaxies (Knapp & Patten 1991). Planck data will make it possible to assess whether this is a general property of these sources; this would have interesting implications for the presence of interstellar matter in the host galaxies, generally identified with giant ellipticals, which are usually thought to be devoid of interstellar matter. Observations of large mm fluxes attributed to dust emission have been reported for several distant radio galaxies (see Mazzei & De Zotti, 1996 and references therein). The inferred dust masses are 1–2 orders of magnitude higher than found for nearby radio galaxies. The two components (synchrotron and dust emission) may well have different evolution properties.
### 2.2. GHz Peaked Spectrum radio sources
Current predictions on number counts do not explicitly include sources with strongly inverted spectra, peaking at mm wavelengths, that would be either missing from, or strongly underrepresented in, low frequency surveys and may be difficult to distinguish spectrally from fluctuations in the CMB (Crawford et al. 1996).
The recent study of GHz Peaked Spectrum radio sources (GPS) by O’Dea and Baum (1997) has revealed a fairly flat distribution of peak frequencies extending out to 15 GHz in the rest frame, suggesting the existence of a hitherto unknown population of sources peaking at high frequency (see also Lasenby 1996). The host galaxies appear to be a homogeneous class of giant ellipticals with old stellar populations, whereas GPS quasars present a different redshift distribution and have radio morphologies quite unlike those of GPS galaxies (Snellen et al. 1998a,b,c).
It is very hard to guess how common such sources may be. Snellen (1997) exploited the sample of de Vries et al. (1997) to estimate a count of $`22\pm 10\text{Jy}^{3/2}\text{sr}^{-1}`$ for sources having peak frequencies between 1 and 8 GHz and peak flux densities between 2 and 6 Jy. He also found that counts of GPS sources are only slowly decreasing with increasing peak frequency in that range. If indeed the distribution of peak frequencies extends up to several tens of GHz while keeping relatively flat, it is conceivable that from several tens to hundreds of GPS sources will be detected by Planck, whereas MAP could detect only a few of them.
Therefore, although these rare sources will not be a threat for studies of CMB anisotropies, we may expect that Planck surveys will provide crucial information about their properties. GPS sources are important because they may be the younger stages of radio source evolution (Fanti et al. 1995; Readhead et al. 1996) and may thus provide insight into the genesis and evolution of radio sources; alternatively, they may be sources which are kept very compact by unusual conditions (high density and/or turbulence) in the interstellar medium of the host galaxy (van Breugel et al. 1984).
### 2.3. Advection-dominated sources
Another very interesting class of inverted spectrum radio sources is that characterized by advection-dominated emission (Narayan & Yi 1995; Fabian & Rees 1995; Di Matteo & Fabian 1997). Convincing observational evidence for the presence of super-massive black holes (BH) in many nearby galaxies has been accumulating in recent years (Kormendy & Richstone 1995; Magorrian et al. 1998; Ho 1999). These data seem to imply that an important, possibly dominant, fraction of all massive galaxies with an appreciable spheroidal component (hence the E/S0 galaxies in particular, but also early-type spirals) host a black hole with a mass roughly proportional to that of the hosting spheroid. Franceschini et al. (1998) have found a tight relationship between the BH mass and both the nuclear and total radio flux at centimetric wavelengths. The radio flux turns out to be proportional to $`M_{\mathrm{BH}}^{2.53}`$. This is consistent with the radio centimetric flux being contributed by cyclo-synchrotron emission in an advection-dominated accretion flow. The latter should correspond to a situation in which the accretion rate is low, as typically expected for the low-density ISM in local early-type galaxies.
A property that distinguishes this emission is an inverted spectrum with spectral index $`\alpha =0.4`$ up to a frequency of 100–200 GHz, followed by fast convergence (far-IR and optical emission is expected to be very weak). Based on the analysis by Franceschini et al. (1998), we would expect that some sources of this kind may be detected by Planck at 70 and 100 GHz. This assumes that the advection-dominated flow evolves in redshift as $`(1+z)^3`$, as suggested by analyses of faint radio-optical samples of E/S0’s. In spite of the limited statistics, this would be a way to test for the presence of massive BH’s and for the evolution of the ISM in galaxies up to moderate redshifts.
### 2.4. Free-free self absorption cutoffs in AGN
High frequency free-free cutoffs may be present in AGN spectra. Ionized gas in the nuclear region absorbs radio photons up to a frequency:
$$\nu _{\mathrm{ff}}\simeq 50\frac{g}{5}\frac{n_e}{10^5\mathrm{cm}^{-3}}\left(\frac{T}{10^4\mathrm{K}}\right)^{-3/4}l_{\mathrm{pc}}^{1/2}\mathrm{GHz}.$$
(1)
Free-free absorption cutoffs at frequencies $`>10`$GHz may indeed be expected, in the framework of the standard torus scenario for type 1 and type 2 AGN, for radio cores seen edge on, and may have been observed in some cases (Barvainis & Lonsdale, 1998). They provide constraints on physical conditions in the parsec scale accretion disk or infall region for the nearest sources of this kind.
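A direct numerical form of Eq. (1), with default values chosen to reproduce the $`\nu _{\mathrm{ff}}\simeq 50`$ GHz scale (an illustrative sketch of our own):

```python
# Numerical form of Eq. (1); the default values reproduce nu_ff ~ 50 GHz.
def nu_ff_GHz(g=5.0, n_e=1e5, T=1e4, l_pc=1.0):
    return 50.0 * (g / 5.0) * (n_e / 1e5) * (T / 1e4) ** -0.75 * l_pc ** 0.5

print(nu_ff_GHz())                 # 50.0
print(nu_ff_GHz(n_e=5e5, T=2e4))   # denser, hotter gas: cutoff near 149 GHz
```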
## 3. Counts of radio sources at cm to mm wavelengths
Source counts are presently available only at cm wavelengths. In carrying out extrapolations to mm wavelengths, several effects need to be taken into account. On one side, the majority of sources with flat or inverted spectra at 5 GHz have spectral turnovers below 90 GHz (Kellermann & Pauliny-Toth 1971; Owen & Mufson 1977). This is not surprising since, as noted above, astrophysical processes work to steepen the high frequency source spectra.
On the other side, high-frequency surveys preferentially select sources with harder spectra. For power-law differential source counts, $`n(S,\nu _0)=k_0S^{-\gamma }`$, and a Gaussian spectral index distribution with mean $`<\alpha >_0`$ and dispersion $`\sigma `$, the counts at a frequency $`\nu `$ are given by $`n(S,\nu )=n(S,\nu _0)(\nu /\nu _0)^{\alpha _{\mathrm{eff}}}`$ with (Kellermann 1964; Condon 1984): $`\alpha _{\mathrm{eff}}=<\alpha >_0+\mathrm{ln}(\nu /\nu _0)\sigma ^2(1-\gamma )^2/2`$. Estimates neglecting the dispersion of spectral indices underestimate the counts by a factor $`\mathrm{exp}[\mathrm{ln}^2(\nu /\nu _0)\sigma ^2(1-\gamma )^2/2]`$. The spectral index distribution between 5 and 90 GHz determined by Holdaway et al. (1994) has $`\sigma =0.34`$; for Euclidean counts, $`\gamma =2.5`$, the correction then amounts to about a factor of 3.
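The quoted factor can be verified directly (a sketch evaluating the formula above for $`\sigma =0.34`$, $`\gamma =2.5`$, between 5 and 90 GHz):

```python
# Checking the quoted "factor of about 3": count correction due to the
# spectral-index dispersion when extrapolating counts from 5 to 90 GHz.
import math

def count_correction(nu, nu0, sigma, gamma):
    return math.exp(math.log(nu / nu0) ** 2 * sigma ** 2 * (1 - gamma) ** 2 / 2)

print(count_correction(90.0, 5.0, 0.34, 2.5))   # ~2.96
```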
A good fraction of the observed spread of spectral indices is due to variability, whose rms amplitude, in the case of blazars, increases with frequency, reaching a factor of about 1.5 at a few hundred GHz (Impey & Neugebauer 1988). In some cases, variations by a factor of 2 to 3, or more, have been observed (e.g. 3C345: Stevens et al. 1996; PKS$`\mathrm{\hspace{0.17em}0528}+134`$: Zhang et al. 1994; $`0738+545`$ and 3C279: Reich et al. 1998). The highest frequency outbursts are expected to be associated with the earliest phases of the flare evolution. Since the rise of the flare is often rather abrupt (timescale of weeks), such outbursts were probably frequently missed.
In view of the uncertainties on the spectra of radio-selected AGNs, and of the poor knowledge of the number counts and spectral properties of the inverted-spectrum sources discussed in §2, an accurate modelling of radio source counts at $`\nu \gtrsim 100`$ GHz is currently impossible. However, the simple model adopted by Toffolatti et al. (1998) appears to be remarkably successful.
These authors adopted the luminosity evolution scheme of Danese et al. (1987), who considered three classes of sources: powerful radio galaxies and radio-loud quasars (distinguishing between extended, steep-spectrum, and compact, “flat”-spectrum sources), and evolving starburst/interacting galaxies. The average spectral index, $`\alpha `$ ($`S\propto \nu ^{-\alpha }`$), of “flat”-spectrum sources was taken to be $`\alpha =0.0`$ for $`\nu \le 200`$ GHz, with a steepening to $`\alpha =0.75`$ at higher frequencies. As for “steep”-spectrum sources (elliptical, S0 and starburst galaxies), whose contribution to source counts is actually minor in the whole frequency range of interest in connection with the MAP and Planck missions, the radio power–spectral index relation determined by Peacock and Gull (1981) was adopted. This simple recipe has made it possible to reproduce, without any adjustment of the parameters, the deep counts at 8.44 GHz (Windhorst et al. 1993; Partridge et al. 1997), which were produced several years after the model (see Figure 2).
This scenario, implying a substantial contribution of active star-forming galaxies to sub-mJy counts at cm wavelengths, is consistent with the results by Kron et al. (1985) and Thuan & Condon (1987) indicating that the optical counterparts of sub-mJy sources are mainly faint blue galaxies, often showing peculiar optical morphologies indicative of high star formation activity and of interaction and merging phenomena. Moreover, the spectra are similar to those of the star–forming galaxies detected by IRAS (Franceschini et al. 1988; Benn et al. 1993). On the other hand, a recent work by Gruppioni, Mignoli & Zamorani (1999), reaching a fainter magnitude limit in the spectroscopic identifications of sources selected at 1.4 GHz with a flux limit of 0.2 mJy, finds that the majority of their radio sources are likely to be early type galaxies.
Optical identifications of sources at $`\mu `$Jy flux levels (Hammer et al. 1995; Windhorst et al., 1995; Richards et al., 1998) show that they are made mainly of star-forming galaxies, with a smaller fraction of ellipticals, late-type galaxies with nuclear activity and local spiral galaxies.
It may be noted that the counts predicted by Toffolatti et al. (1998) at frequencies above 100 GHz may be somewhat depressed by the assumption of a spectral break at 200 GHz for all “flat”-spectrum sources, while examples are known of sources with a flat or inverted spectrum up to 1000 GHz. In view of this, we updated the baseline model of Toffolatti et al. by assuming that the steepening of the spectral indices to $`\alpha =0.75`$ occurs at $`\nu =1000`$ GHz.
### 3.1. Comparison with other estimates of source counts
Sokasian et al. (1999) have produced sky maps of bright radio sources at frequencies up to 300 GHz by means of detailed individual fits of the spectra of a large number of sources compiled from several catalogs, including the all-sky sample of sources with $`S_{5\mathrm{GHz}}\ge 1`$ Jy (Kühr et al. 1981). Their estimated number of sources brighter than $`S_{90\mathrm{GHz}}=0.4`$ Jy is about a factor of two below that predicted by Toffolatti et al. (1998). A very similar result was obtained by Holdaway et al. (1994) based on the observed distribution of 8.4–90 GHz spectral indices. On the other hand, these empirical estimates yield, strictly speaking, lower limits, since sources with inverted spectra may be under-represented in the primary sample. Furthermore, in the presence of substantial variability, estimates using mean fluxes underpredict actual counts of bright sources.
A comparison of the $`\mathrm{log}N`$–$`\mathrm{log}S`$ of radio sources with those predicted for dusty galaxies (even allowing for very different evolutionary scenarios: De Zotti et al. 1999, their Figure 1) shows an abrupt change in the populations of bright sources observed in channels above and below $`\sim 1`$ mm: radio sources dominate at the longer wavelengths and dusty galaxies at the shorter ones. This is due to the steep increase with frequency of the dust emission spectrum in the mm/sub-mm region (typically $`S_\nu \propto \nu ^{3.5}`$), which makes the crossover between the radio and dust emission components only weakly dependent on their relative intensities; moreover, dust temperatures tend to be higher for distant high-luminosity sources, partially compensating for the effect of redshift.
At, say, $`\nu \gtrsim 200`$–300 GHz dusty galaxies dominate the number counts of extragalactic sources. We only touch on this point briefly in the next subsection and defer to the comprehensive reviews of Guiderdoni et al. (1999) and of Mann et al. (1999) for a thorough discussion.
### 3.2. Counts of dusty galaxies
Although the situation is rapidly improving, thanks to the deep ISO counts at $`175\mu `$m (Kawara et al. 1997; Puget et al. 1999), to the preliminary counts at $`850\mu `$m with SCUBA on JCMT (see Mann et al. 1999, for a review on the subject) and to the important constraints from measurements of the far-IR to mm extragalactic background (Schlegel et al. 1998; Hauser et al. 1998; Fixsen et al. 1998), first detected by Puget et al. (1996), current estimates are affected by bigger uncertainties than in the case of radio sources.
In fact, predicted counts are more sensitive to the poorly known evolutionary properties, because of the boosting effect of the strongly negative K-corrections. The most extensive surveys, carried out by IRAS at $`60\mu `$m, cover a limited range in flux and are rather uncertain at the faint end (Hacking & Houck 1987; Gregorich et al. 1995; Bertin et al. 1997). It is then not surprising that the predictions of recent models differ by substantial factors.
Again, substantial extrapolations in frequency are required, and have to deal with the poor knowledge of the spectrum of galaxies in the mm/sub-mm region; the $`1.3`$mm/$`60\mu `$m flux ratios of galaxies are observed to span about a factor of 10 (Chini et al. 1995; Franceschini & Andreani 1995).
## 4. Angular power spectrum of source fluctuations
The effect of radio sources as a limiting factor for the detection of primordial CMB anisotropies has been extensively analyzed by many authors (see, e.g., Franceschini et al. 1989; Blain & Longair 1993; Danese et al. 1996; Gawiser & Smoot 1997). More recently, detailed analyses of the problem have been worked out by Toffolatti et al. (1998) for the Planck Surveyor mission, and by Refregier, Spergel & Herbig (1999) for the MAP mission.
The relevant formalism is readily available in the literature (see, e.g., De Zotti et al. 1996; Tegmark & Efstathiou 1996; Toffolatti et al. 1998; Scott & White 1999) and we skip it here. A Poisson distribution of extragalactic point sources produces a simple white-noise power spectrum, with the same power in all multipoles, so that their contribution to fluctuations in a unit logarithmic multipole interval increases with $`\ell `$ as $`\ell (\ell +1)C_\ell \propto \ell ^2`$ (for large values of $`\ell `$), while, at least for the standard inflationary models, which are consistent with the available anisotropy detections, the function $`\ell (\ell +1)C_\ell `$ yielded by primordial CMB fluctuations is approximately constant for $`\ell \lesssim 100`$, then oscillates and finally decreases quasi-exponentially for $`\ell \gtrsim 1000`$ ($`\theta \lesssim 10^{\prime }`$). Hence confusion noise due to discrete sources will dominate at small enough angular scales.
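A schematic sketch of the Poisson term: the white-noise power is the second moment of the counts below the source-removal limit (the power-law count normalization below is an arbitrary illustration, not a fit to the models shown in Figure 3).

```python
# White-noise power from Poisson-distributed sources below a removal limit:
#   C_poisson = integral_0^{S_cut} S^2 n(S) dS  (per steradian),
# flat in ell, so that ell(ell+1)C_ell grows as ell^2.
import numpy as np

def poisson_cl(S_cut_Jy, k=60.0, gamma=2.5, S_min_Jy=1e-6):
    """n(S) = k * S**-gamma per Jy per sr; illustrative normalization only."""
    S = np.logspace(np.log10(S_min_Jy), np.log10(S_cut_Jy), 2000)
    return np.trapz(S ** 2 * k * S ** -gamma, S)   # Jy^2 sr^-1

for cut in (1.0, 0.1):
    print(cut, poisson_cl(cut))   # deeper source removal -> lower Poisson power
```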
Figure 3 shows the expected angular power spectra of the extragalactic foregrounds components at 30, 100 and 217 GHz, corresponding to the central frequencies of three Planck channels; also, MAP has channels centered at 30 and 94 GHz. The two lines for each population (radio sources and far-IR sources) are obtained assuming that sources can be identified and removed down to fluxes of 1 or 0.1 Jy.
Radio sources give the most relevant contribution to CMB fluctuations up to, say, $`200`$ GHz, whereas dusty galaxies dominate at higher frequencies. At 217 GHz, the Poisson fluctuation level due to far-IR selected sources predicted by the model of Guiderdoni et al. (1998) is higher than that of radio selected sources, whereas it falls below the radio component if we adopt the Toffolatti et al. model. As discussed by De Zotti et al. (1999), the latter model falls somewhat short of the best current estimate of the far-IR to mm extragalactic background (Fixsen et al. 1998), while model E by Guiderdoni et al. tends to exceed it.
Figure 3 also indicates that source removal is much more effective in reducing the Poisson fluctuation level at $`\lambda \gtrsim 1`$ mm. In fact, in this wavelength range fluctuations are dominated by the brightest sources below the detection limit, while at shorter wavelengths the dominant population are evolving dusty galaxies whose counts are so steep that a major contribution to fluctuations comes from fainter fluxes.
Fluctuations in the Galactic emissions at $`|b|>30^{\circ }`$ are also plotted in Figure 3, for comparison. They are represented by long$`+`$short dashes (dust), dots$`+`$short dashes (free-free), and dots$`+`$long dashes \[synchrotron: we have plotted both the power spectrum derived by Tegmark & Efstathiou (1996), $`C_\ell \propto \ell ^{-3}`$, and that observed in the Tenerife patch, $`C_\ell \propto \ell ^{-2}`$ (Lasenby 1996); the latter is significantly lower than the former in the range of scales where it has actually been measured, but has a flatter slope so that it may become relatively more important on small scales\]. The heavy dashed line flattening at large $`\ell `$ shows the power spectrum of anisotropies due to the Sunyaev-Zeldovich effect computed by Atrio-Barandela & Mücket (1998) adopting a lower limit of $`10^{14}M_{\odot }`$ for cluster masses, a present ratio $`r_{\mathrm{virial}}/r_{\mathrm{core}}=10`$, and $`ϵ=0`$.
For $`\ell <300`$, corresponding to angular scales $`>30^{\prime }`$ \[$`\ell \simeq 180^{\circ }/\theta (\mathrm{deg})`$\], diffuse Galactic emissions dominate total foreground fluctuations even at high Galactic latitudes. These are minimum at $`\nu \simeq 70`$ GHz (Kogut 1996). For larger values of $`\ell `$ the dominant contribution is from extragalactic sources; their minimum contribution to the anisotropy signal occurs around 150–200 GHz. Therefore, the minimum in the global power spectrum of foreground fluctuations moves, at high Galactic latitudes, from about 70 GHz for $`\ell <300`$ to 150–200 GHz at higher values of $`\ell `$. The detailed spectral and evolutionary behaviour of sources determines the exact value of the frequency of the minimum foreground fluctuations; this frequency decreases with decreasing Galactic latitude (De Zotti et al. 1999).
On the other hand, in the frequency range 50–200 GHz, a higher contamination level due to source confusion could be produced by a still undetected population of sources whose emission peaks at $`\sim 100`$ GHz (Gawiser, Jaffe & Silk, 1999). In any case, the discussion in §1 and §2 indicates that unknown source populations are unlikely to give a fluctuation level much higher than the presently estimated one.
### 4.1. The effect of clustering
Toffolatti et al. (1998) found that the contribution of clustering to fluctuations due to extragalactic sources is generally small in comparison with the Poisson contribution. However, the latter, in the case of radio sources, comes mostly from the brightest sources below the detection limit, while the clustering term is dominated by fainter sources. Hence an efficient subtraction of radio sources decreases the Poisson term much more effectively than the clustering term, which thus becomes relatively more and more important.
In the case of a power-law angular correlation function ($`w(\theta )\propto \theta ^{1-\gamma }`$), the power spectrum of intensity fluctuations is $`C_\ell \propto \ell ^{\gamma -3}`$ (Peebles 1980; eq. 58.13). If this behaviour extends to large enough angular scales, i.e. to small enough values of $`\ell `$, the clustering signal, for a power-law index not much steeper than the canonical value $`\gamma \simeq 1.8`$, will ultimately become larger than the Poisson anisotropy. On the other hand, for large values of $`\theta `$, $`w(\theta )`$ is expected to drop below the above power-law approximation, and $`C_\ell `$ will correspondingly break down.
The preliminary estimates (Toffolatti et al., in preparation) reported in Figure 4 are based on the correlation functions of radio sources derived by Loan et al. (1997) and by Magliocchetti et al. (1999). The different slopes at large values of $`\ell `$ reflect different values of $`\gamma `$: the upper curve corresponds to the larger value ($`\gamma =2.5`$) obtained by Magliocchetti et al. (1999).
Scott & White (1999) have recently shown that if dusty galaxies cluster like the $`z\simeq 3`$ Lyman break galaxies (Giavalisco et al. 1998), at frequencies $`\gtrsim 217`$ GHz the anisotropies due to clustering may exceed the Poisson ones on all scales accessible to Planck; in the 353 GHz ($`850\mu `$m) channel the clustering signal may exceed the primordial CMB anisotropies on scales smaller than about $`30^{\prime }`$. It should be noted, however, that current models (Toffolatti et al. 1998; Guiderdoni et al. 1998) strongly suggest a broad redshift distribution of the sources contributing to the autocorrelation function of the intensity fluctuations, implying a strong dilution of the clustering signal. A further substantial overestimate of the effect of clustering may follow from the extrapolation to degree scales, with constant slope, of the angular correlation function determined on scales of up to a few arcmin. Our preliminary estimate, shown in Figure 4, has been obtained by assuming that the angular correlation function of Lyman break galaxies substantially steepens at $`\theta \gtrsim 6^{\prime }`$, consistent with the data of Giavalisco et al. (1998).
### 4.2. Polarization
The polarized spectra have not been studied as extensively as the intensity spectra. For a power-law electron energy spectrum $`dN/dE=N_0E^{-p}`$, the polarization level, $`\mathrm{\Pi }`$, yielded by a uniform magnetic field is $`\mathrm{\Pi }=3(p+1)/(3p+7)`$ (LeRoux 1961). For a typical high-frequency value of $`p\simeq 3`$, $`\mathrm{\Pi }\simeq 75\%`$. Non-uniformities of the magnetic fields and differential Faraday rotation decrease the polarization level. However, the Faraday rotation optical depth is proportional to $`\nu ^{-2}`$, so that Faraday depolarization is negligible at the high frequencies relevant for Planck and MAP.
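A small sketch evaluating these polarization estimates (the numbers are illustrative):

```python
# LeRoux (1961) polarization degree for a uniform field, and the rms polarized
# fluctuation estimated as the mean polarization times the intensity
# fluctuation (the approximation used later in this section).
def pol_degree(p):
    return 3.0 * (p + 1.0) / (3.0 * p + 7.0)

print(pol_degree(3.0))     # 0.75, i.e. ~75% for p ~ 3

mean_Pi = 0.05             # ~5%, typical of flat-spectrum radio sources
dI_rms = 1.0               # intensity fluctuation, arbitrary units
print(mean_Pi * dI_rms)    # corresponding rms polarization fluctuation
```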
Bouchet et al. (1998) have analysed the possibility of extracting the power spectrum of CMB polarization fluctuations in the presence of polarized Galactic foregrounds using a multifrequency Wiener filtering of the data. They concluded that the power spectrum of E-mode polarization of the CMB can be extracted from Planck data with fractional errors $`\lesssim 10`$–30% for $`50\lesssim \ell \lesssim 1000`$. The B-mode CMB polarization, whose detection would unambiguously establish the presence of tensor perturbations (primordial gravitational waves), can be detected by Planck with signal-to-noise $`\simeq 2`$–4 for $`20\lesssim \ell \lesssim 100`$ by averaging over a 20% logarithmic range in multipoles.
Polarization of extragalactic radio sources, not considered by Bouchet et al. (1998), is an important issue as well. Flat spectrum radio sources are typically 4-7% polarized at cm and mm wavelengths (Nartallo et al. 1998; Aller et al. 1999). For random orientations of the magnetic fields of sources along the field of view, the rms polarization fluctuations are approximately equal to intensity fluctuations times the mean polarization degree (De Zotti et al., in preparation).
Our preliminary estimate of Figure 5 assumes a mean polarization of radio sources of 5%, close to the main peak of the percentage polarization distribution of the sample studied by Nartallo et al. (1998).
Measurements of polarized thermal emission from dust are only available for interstellar clouds in our own Galaxy. The distribution of observed polarization degrees of dense clouds at 100$`\mu `$m shows a peak at $`2\%`$ (Hildebrand 1996). We have adopted this value for mean polarization of dusty galaxies at mm/sub-mm wavelengths; this is likely to be an upper limit since the global percentage polarization from a galaxy is the average of contributions from regions with different polarizing efficiencies and different orientations of the magnetic field with respect to the plane of the sky; all this works to decrease the polarization level in comparison with the mean of individual clouds.
At 217 GHz the power spectrum obtained on the basis of the source counts of Guiderdoni et al. (model E) is also shown (upper thin dashed line). Note that, since the emission of radio sources appears to be more polarized than that of dusty galaxies, radio sources give the dominant contribution to extragalactic polarization fluctuations up to 217 GHz. Anyway, Figure 5 shows that the main limitations to CMB polarization measurements come from Galactic emissions, not from extragalactic sources.
A brief summary on polarization fluctuations of Galactic emissions can be found in De Zotti et al. (1999). The power spectrum of the polarized component of Galactic synchrotron emission, shown in Fig. 5, has been obtained from the corresponding temperature fluctuation power spectrum (as estimated by Tegmark & Efstathiou 1996) assuming the signal to be 40% polarized. As for dust emission, we have taken up the estimates by Prunet et al. (1998) for the E-mode; the extrapolations to 100 and 30 GHz are made adopting their dust emission spectrum. Free-free emission is not polarized. However, Thompson scattering by electrons in the HII regions where it is produced may polarize it tangentially to the edges of the electron cloud (Keating et al. 1998). The polarization level is expected to be small, with an upper limit of approximately 10% for an optically thick cloud.
## 5. Conclusions
The very deep VLA surveys have made it possible to extend radio source counts down to $`\mu `$Jy levels at $`\lambda \simeq 3`$ cm; the bulk of radio sources in the Universe have probably been detected (Haarsma & Partridge 1998). On the other hand, estimates of the counts and of fluctuations due to extragalactic sources at mm/sub-mm wavelengths are made uncertain by the poor knowledge of their spectral properties in this spectral region as well as by the possibility that source populations with strongly inverted spectra may show up. However, as mentioned in §3, estimates based on very different approaches agree to within a factor of 2 up to $`\sim 100`$ GHz.
Conservative estimates indicate that, in the frequency range 50–200 GHz, extragalactic foreground fluctuations are well below the expected amplitude of CMB fluctuations on all angular scales covered by the Planck and MAP missions.
Current data on the angular correlation function of radio sources imply that fluctuations due to clustering are generally small in comparison with Poisson fluctuations; however, the relative importance of the former increases if sources are subtracted from the sky maps down to faint flux levels.
Fluctuations due to clustering may be more important at frequencies higher than $`\nu 200300`$ GHz, if high redshift dusty galaxies cluster like the Lyman break galaxies at $`z3`$.
The polarized emission of extragalactic sources will not be a threat for measurements of the polarized component of primordial CMB anisotropies; much stronger limitations come from Galactic synchrotron and dust emissions.
Moreover, future space missions like Planck (and, to a lesser extent, MAP) will not only provide all-sky maps of the primordial CMB anisotropies but also unique information on the physics of compact radio sources, and in particular on the physical conditions in their most compact components (transition from optically thick to optically thin synchrotron emission, ageing of relativistic electrons, high-frequency flares) and on their relationship with emissions at higher energies (SSC versus EC models). They will also allow us to study the population properties of inverted-spectrum sources like GPS sources, sources with high-frequency free-free self-absorption, ADS, etc.
#### Acknowledgments.
LT thanks the organizing committee and the Sloan Foundation for their warm hospitality. We thank B. Guiderdoni for kindly providing us with the source counts predicted by his model E. The angular power spectrum of CMB anisotropies has been calculated by CMBFAST 2.4.1 (Seljak U., & Zaldarriaga M., 1996). We gratefully acknowledge the long-standing, very fruitful collaboration on extragalactic foregrounds with L. Danese, A. Franceschini and P. Mazzei. We thank N. Mandolesi for useful discussions on the capabilities of the Planck Surveyor mission and G.L. Granato for kindly providing us with Figure 1. LT also thanks F. Bouchet, R.D. Davies, E. Gawiser, E. Guerra, B. Partridge, A. Refregier and D. Scott for helpful discussions and comments during his stay in Princeton. This research has been supported in part by grants from ASI and CNR. LT acknowledges partial financial support from the Spanish DGES, project PB95–1132–C02–02, and Spanish CICYT Acción Especial n. ESP98-1545-E.
## References
Aller M.F, et al., 1999, ApJ, in press (astro-ph/9810485)
Atrio-Barandela F., & Mücket J.P, 1998, ApJ, in press (astro-ph/9811158)
Barvainis R., & Lonsdale C., 1998, AJ, 394, 248
Begelman M.C., Blandford R.D., & Rees M.J., 1984, Rev. Mod. Phys., 56, 255
Benn C.R., et al., 1993, MNRAS, 263, 98
Bennett C.L., et al., 1995, BAAS 187.7109
Bersanelli M., et al., 1996, COBRAS/SAMBA, Report on Phase A Study, ESA Report D/SCI(96)3.
Bertin E., Dennefeld M. & Moshir M., 1997, A&A, 323, 685
Blain A.W., & Longair, M.S., 1993, MNRAS, 264, 509
Blandford R.D., & Rees M.J., 1974, MNRAS, 165, 395
Bouchet, F.R., Prunet S., & Sethi S.K., 1998, MNRAS, 302, 663
Brown L.M.J., et al., 1989, ApJ, 340, 129
Chini R., Krügel E., Lemke R. & Ward-Thompson D., 1995, A&A, 295, 317
Condon J.J., 1984, ApJ, 287, 461
Condon J.J., 1989, ApJ, 338, 13
Condon J.J., & Mitchell, K.J., 1984, AJ, 89, 610
Crawford T., Marr J., Partridge R.B. & Strauss M.A., 1996, ApJ, 460, 225
Danese L., De Zotti G., Franceschini A., Toffolatti L., 1987, ApJ, 318, L15
Danese L., et al., 1996, Astrophys. Lett & Comm, 33, 257
De Zotti G., et al., 1996, Astrophys. Lett & Comm, 35, 289
De Zotti G., et al., 1999, Proc. of the Workshop ”3 K Cosmology”, AIP Conference Series, (astro-ph/9902103), in press
De Zotti G., & Toffolatti, L., 1998, Proc. of the Workshop ”The Cosmic Microwave Background and the Planck Mission”, Astrophys. Lett & Comm., in press (astro-ph/9812069)
de Vries, W.H., Barthel, P.D., O’Dea, C.P., 1997, A&A, 321, 105
Di Matteo T., & Fabian A.C., 1997, MNRAS, 286, L50
Donnelly, R.H., Partridge, R.B. & Windhorst, R.A., 1987, ApJ, 321, 94
Elvis M., et al., 1994, ApJS, 95, 1
Fabian A.C., & Rees M.J., 1995, MNRAS, 277, L55
Fanti C., et al., 1995, A&A, 302, 317
Fixsen D.J., Dwek E., Mather J.C., Bennett C.L. & Shafer R.A., 1998, ApJ, 508, 123
Fomalont E.B., et al., 1997, ApJ, 475, L5
Fomalont E.B., Windhorst R.A., Kristian J.A., Kellermann K.I., 1991, AJ, 102, 1258
Franceschini A., & Andreani P., 1995, ApJ, 440, L5
Franceschini A., Danese L., De Zotti G., & Xu, C., 1988, MNRAS, 233, 175
Franceschini A., Toffolatti L., Danese L. & De Zotti G., 1989, ApJ, 344, 35
Franceschini A., Vercellone, S., Fabian, A.C, 1998, MNRAS, 297, 817
Gawiser E., Jaffe, A., &, Silk, J., 1999, ApJ, submitted (astro-ph/9811148)
Gawiser E., & Smoot G.F., 1997, ApJ, 480, L1
Giavalisco M., et al., 1998, ApJ, 503, 543
Gregorich D.T., et al., 1995, AJ, 110, 259
Gruppioni C., Mignoli M., & Zamorani G., 1999, MNRAS, in press (astro-ph/9811309)
Guiderdoni B., et al., 1999, these proceeedings
Guiderdoni B., Hivon E., Bouchet F.R. & Maffei B., 1998, MNRAS, 295, 877
Haarsma D.B., & Partridge R.B., 1998, ApJ, 503, 5
Hacking P. & Houck J.R., 1987, ApJS, 63, 311
Hammer F., et al., 1995, MNRAS, 276, 1085
Hauser M.G., et al., 1998, ApJ, 508, 25
Hildebrand R.H., 1996, in Polarimetry of the Interstellar Medium, ASP Conf. Ser., Vol. 97, ed. W.G. Roberge & D.C.B. Whittet, p. 254
Ho L.C., 1999, ApJ, 510, 631
Holdaway M.A., Owen F.N. & Rupen M.P., 1994, NRAO report
Impey C.D. & Neugebauer G., 1988, A. J., 95, 307
Kawara K., et al., 1997, ESA SP-401, 285
Keating B., et al., 1998, ApJ, 495, 580
Kellermann K.I., 1964, ApJ, 140, 969
Kellermann K.I., et al., 1999, in preparation
Kellermann K.I. & Pauliny-Toth I.I.K., 1971, Ap.L, 8, 153
Knapp G.R., & Patten B.M., 1991, AJ, 101, 1609
Kogut A., 1996, in “Microwave Background Anisotropies”, eds F.R. Bouchet, R. Gispert & B. Guiderdoni, Ed. Frontières, p. 445
Kormendy J., & Richstone D., 1995, ARA&A, 33, 581
Kron R.J., Koo D.C., & Windhorst, R.A., 1985, A&A, 146, 38
Kühr, H., Witzel, A., Pauliny-Toth, I.I.K., & Nauber, U., 1981, A&AS, 45, 367
Lasenby A. N., 1996, in “Microwave Background Anisotropies”, eds F.R. Bouchet, R. Gispert & B. Guiderdoni, Ed. Frontières, p. 453
Lawrence A., 1997, Proc. ESA Symp. “The Far Infrared and Submillimeter Universe”, ESA SP-401, p. 127
LeRoux E., 1961, Ann. Astrophys., 24, 71
Loan A.J., Wall J.V., & Lahav, O., 1997, MNRAS, 286, 994
Magliocchetti M., Maddox S.J., Lahav O., & Wall, J.V., 1999, MNRAS, in press (astro-ph/9806342)
Magorrian J., et al., 1998, AJ, 115, 2285
Mann R., et al., 1999, these proceedings
Mazzei P., & De Zotti, G., 1996, MNRAS, 279, 535
Myers S.T., & Spangler S.R., 1985, ApJ, 291, 52
Narayan R., & Yi I., 1995, ApJ, 444, 231
Nartallo R., et al., 1998, MNRAS, 297, 667
O’Dea C.P., & Baum S.A., 1997, AJ, 113, 148
Owen F.N. & Mufson S.L., 1977, AJ, 82, 776
Partridge R.B., et al., 1997, ApJ, 483, 38
Peacock J.A., & Gull S.F., 1981, MNRAS, 196, 611
Peebles, P.J.E., The Large Scale Structure of the Universe, Princeton University Press, 1980
Prunet S., et al., 1998, A&A, 339, 187
Puget J.-L., et al., 1996, A&A, 308, L5
Puget J.L., et al., 1999, A&A, in press, (astro-ph/9812039)
Readhead A.C.S., Taylor G.B., Pearson T.J., Wilkinson P.N., 1996, ApJ, 460, 634
Refregier A., Spergel D.N., & Herbig T., 1999, ApJ, submitted, (astro-ph/9806349)
Reich, W., et al., 1998, A&AS, 131, 11
Richards E.A., et al., 1998, AJ, 116, 1039
Rowan-Robinson M., et al., 1993, MNRAS, 263, 123
Scheuer P.A., 1974, MNRAS, 166, 513
Schlegel D.J., Finkbeiner D.P., Davis M., 1998, ApJ, 500, 525
Scott D., & White M., 1999, A&A, in press (astro-ph/9808003)
Seljak U., & Zaldarriaga, M. 1996, ApJ, 469, 437
Snellen, I.A.G., 1997, Ph. D. Thesis, University of Leiden
Snellen, I.A.G., et al., 1998a, A&A, 333, 70
Snellen, I.A.G., et al., 1998b, A&AS, 131, 435
Snellen, I.A.G., et al., 1998c, MNRAS, 301, 985
Sokasian A., et al., 1999, ApJ, submitted (astro-ph/9811311)
Stevens J.A., et al., 1996, ApJ, 466, 158
Tegmark M. & Efstathiou G., 1996, MNRAS, 281, 1297
Toffolatti L., et al., 1995, Astrophys. Lett & Comm., 32, 125
Toffolatti L., et al., 1998, MNRAS, 297, 117
Thuan T.X., & Condon J.J., 1987, ApJ, 322, L9
van Breugel W., Miley G., Heckman T., 1984, AJ, 89, 5
Wall J.V., Benn C.R., Grueff G., & Vigotti M., 1986, Highlights Astr., 7, 345
Windhorst, R.A., et al., 1985 ApJ, 289, 494
Windhorst R.A., et al., 1993, ApJ, 405, 498
Windhorst, R.A., et al., 1995, Nature, 375, 471
Windhorst, R.A., Dressler, A., Koo, D.C., 1987, ”Observational Cosmology”, IAU Symposium N 124, (Dordrecht:Reidel), p. 573.
Zhang Y.F., et al., 1994, ApJ, 432, 91
# ZBroker : A Query Routing Broker for Z39.50 Databases
#### KEYWORDS:
query routing, bibliographic databases, digital libraries
## 1. INTRODUCTION
As the number of information sources on the Internet increases rapidly, users are beginning to experience difficulty in locating the information sources relevant to their search requirements. These information sources could be document collections, SQL databases, or other kinds of databases. Although many web search engines are available on the Internet, most of them are only useful for discovering individual web pages rather than information sources such as document collections and SQL databases, and they cannot easily be extended to index the content of such sources.
Given a user query and a set of information sources at different locations, query routing refers to the general problem of selecting from a large set of accessible information sources the ones relevant to a given query, and evaluating the query on the selected sources. The software agent that performs query routing on the Internet is known as a query routing broker. In this paper, we describe a query routing broker developed for bibliographic databases supporting Z39.50 protocol. This query routing broker, called ZBroker, is currently being developed at the Centre for Advanced Information Systems, Nanyang Technological University.
Z39.50 is an application-level communication protocol adopted by the International Organization for Standardization (ISO) and is designed to support remote access to bibliographic databases maintained by public libraries. Z39.50 is widely supported by library system vendors. It specifies a uniform interface for a Z39.50 client application to search and retrieve information from bibliographic databases managed by different Z39.50 servers. In other words, Z39.50 allows bibliographic databases implemented by different vendors on different computer hardware to look and behave the same to the client application. A bibliographic database with Z39.50 query support is also known as a Z39.50 database. A listing of public libraries supporting Z39.50 access can be found at http://www.mun.ca/library/cat/z3950hosts.htm.
The objective of the ZBroker project is to design and implement a software agent capable of routing bibliographic queries on the Internet populated by hundreds of bibliographic databases. A bibliographic database provides important meta-information about the material found in a library. Apart from allowing users to physically locate reading materials on the shelves, bibliographic databases play an important part in the learning and research processes that users have to go through. Usually, a user first refers to a bibliographic database before identifying the reading material that is relevant to his or her interest. In the context of the Internet, a researcher can search multiple bibliographic databases to reveal reading material that may not be available in his or her library. The researcher can further request an inter-library loan to obtain the desired material.
### Scope of Work
To enable Internet users to easily perform query routing tasks for bibliographic databases, a few functionalities have to be supported by the ZBroker:
* Acquisition of content knowledge: This refers to summarizing the content of a set of bibliographic databases in order to capture the knowledge about their content.
* Database ranking based on a given bibliographic query: When a bibliographic query is given to ZBroker, it should rank the bibliographic databases according to their relevance computed from the respective content knowledge. At present, the degree of relevance is measured by the estimated result size returned by a bibliographic database for the given query.
* Maintenance of content knowledge: As the bibliographic databases evolve, their content may change and this leads to the need to maintain the content knowledge previously acquired by the ZBroker.
* User-friendly interface: ZBroker should provide an easy-to-use web interface to the users. The user interface can support queries on multiple Z39.50 databases selected by the users who will be guided by the database ranks computed by ZBroker.
Although Z39.50 supports queries on multiple bibliographic attributes such as title, author, subject, ISBN, ISSN, etc., not all Z39.50 servers implement the same query capabilities. In our project scope, we first consider boolean queries that involve title, author and/or subject attributes. These are the attributes that are searchable in most Z39.50 databases. The query results returned by the Z39.50 databases are unranked. Once a set of Z39.50 databases has been ranked for a given query, ZBroker will present the location and rank information of the respective Z39.50 databases. When a bibliographic database fails to support a particular query, it will be assigned the lowest (least relevant) rank by ZBroker.
In this project, we experimented with a query sampling technique to acquire the content knowledge of a Z39.50 database. The sampling is performed by submitting training queries to the Z39.50 database and summarizing the returned results. The salient feature of this sampling technique is that it does not require modification to the existing Z39.50 servers, thus not compromising the autonomy of databases owned by different institutions. Nevertheless, it is crucial that the sampled content knowledge demonstrates a certain level of accuracy in order for ZBroker to perform well.
### Paper Outline
The rest of this paper is structured as follows. In Section 2, we provide a brief survey of the related work. Section 3 presents the system architecture of ZBroker. Section 4 describes the information captured by the ZBroker’s content knowledge and database ranking technique. The sampling technique and content knowledge maintenance are given in Sections 5 and 6 respectively. Section 7 gives an overview of the ZBroker’s user interface. The performance issues of ZBroker are discussed in Section 8. Section 9 concludes the paper and presents the future work.
## 2. RELATED WORKS
Query routing in the context of document collections has been studied by several researchers. Most of these works involve the performance evaluation of different query routing techniques. Nevertheless, to the best of our knowledge, there has not been a query routing broker developed for Z39.50 databases. In this section, we survey some system research works that are related to query routing.
### Federated Searcher
Federated Searcher is a query routing broker implemented by Virginia Tech for mediating user queries to multiple heterogeneous search engines. Each search engine has to provide a description about its site using a specially designed XML markup language known as SearchDB. Federated Searcher has been implemented for the Networked Digital Library of Theses and Dissertations (NDLTD) (see http://www.theses.org). Instead of storing the content information about each site, Federated Searcher captures the types of documents indexed by each site, interface information, and general information about the search engine used by each site. Federated Searcher also includes a translation request protocol that facilitates multilingual searches. Note that the site description required by Federated Searcher is manually created. Hence, it is difficult for the site description to capture complete content knowledge about the site.
### STARTS Protocol
STARTS Protocol was proposed by Stanford University researchers to support meta-searching or query routing on the Internet. The protocol defines source selection, distributed query evaluation and query result merging as the three main query routing tasks. In order to handle largely incompatible search engines, STARTS requires a standard set of metadata to be exported from different database servers. The acquisition of this metadata information has been incorporated into STARTS. When a database server supports the STARTS protocol, it is able to provide detailed statistics about its collection upon request by a query routing broker, thus allowing the broker to have first-hand, detailed information about the database content. Nevertheless, it may not be possible to have all database servers support STARTS and the exchange of detailed site information, as this may require modification to the existing database servers and their existing information retrieval protocols.
While STARTS requires some level of cooperation among database vendors and owners in order to have their database servers providing useful meta-data for query routing purposes, ZBroker adopts a less intrusive approach to solicit meta-data from the database servers by sending queries to probe the database content. We believe, however, that the two approaches can co-exist together to provide a more complete set of query routing solutions for different types of database servers.
### Glossary of Servers Server Project
In the GlOSS (Glossary of Servers Server) project, a keyword-based distributed database broker system is proposed and it can route queries containing a set of keyword field-designation pairs where the field-designation could be author, title, etc. The number of documents containing each term for each field-designation is stored and used to estimate the rank of each database. The databases considered by GlOSS are unstructured document collections. The main assumption behind GlOSS is that terms appearing in different documents of a collection follow independent and uniform distributions. GlOSS has been implemented for the Networked Computer Science Technical Reference Library (see http://www.ncstrl.org for the NCSTRL project).
## 3. SYSTEM ARCHITECTURE
In this section, we present an overall design of ZBroker. ZBroker consists of a few core modules as shown in Figure 1.
The ZBroker’s web-based user interface is driven by a Common Gateway Interface (CGI) Program which allows Internet users to input bibliographic queries to be routed. The module also presents query results returned from bibliographic databases selected by the users. A detailed description of ZBroker’s user interface will be given in Section 7.
To access remote bibliographic databases using the Z39.50 protocol, a common gateway supporting Z39.50 application programming interface (API) is required. In the ZBroker project, we have chosen to use Z-Client, a gateway implemented by Harold Finkbeiner<sup>1</sup><sup>1</sup>1 The software is available at ftp://lindy.stanford.edu/pub/z3950/zclient.tar.gz.. Built upon Z39.50 API functions, Z-Client provides methods that submit queries to specified Z39.50 databases, and parse the returned query results into some required formats. Each Z39.50 database is identified by its Internet address, port number, and a database id.
Database Ranker (DB-Ranker) is designed to rank Z39.50 databases according to their relevance to the user query submitted through the user interface and the CGI program. The database ranks are computed based on statistical information about the content of participating Z39.50 databases. This information is collectively known as the Content Knowledge. The computed database ranks will be returned to the user interface program for meaningful presentation. The database ranking formula adopted by DB-Ranker resembles that of GlOSS, but it only makes use of a smaller set of database records sampled by training queries. A detailed description of this ranking formula will be given in Section 4.
Content Sampler samples the participating Z39.50 databases and constructs the content knowledge about them. The sampling procedure consists of submitting training queries to the Z39.50 databases via Z-Client and collecting the query results. To cope with the evolving content of these Z39.50 databases and to enlarge the range of sampled bibliographic records, Content Sampler accumulates user queries submitted to ZBroker and selects the appropriate ones as new training queries.
Figure 2 illustrates the four sub-components of Content Sampler, namely, query evaluator, content summarizer, record filter, and query filter. The query evaluator is responsible for retrieving training queries from a training query library, formatting and submitting them to the remote Z39.50 databases. This is done via Z-Client. Upon receiving a training query result set, the query evaluator passes the bibliographic records in the result set to the content summarizer which updates the content knowledge about the database concerned. To ensure that only unique bibliographic records are given to the content summarizer, the record filter will be invoked to discard bibliographic records that have already been returned by previous training queries. We maintain for each Z39.50 database a record id database which keeps the system ids of all records that have been examined by the content summarizer.
In Figure 2, we also indicate that user queries submitted to the ZBroker user interface can be captured by the User Query Library. The user queries accumulated in this manner can later be deployed as training queries if they can potentially return new records not yet considered by the content summarizer. While the records can be filtered by the record filter, it would save the content sampler much processing overhead if the filtering could be performed on the user queries before they are added to the training query library. This filtering task is the responsibility of the query filter. We will describe query filtering in more detail in Section 5.
## 4. CONTENT SUMMARIZATION AND DATABASE RANKING
In this section, we describe the ranking technique adopted by the ZBroker’s database ranker. The ranking technique in turn determines the summarized information to be included in the content knowledge.
In ZBroker, databases are ranked according to their estimated relevance scores computed from some statistics about the databases. The relevance score formula, known as Training Query Result Summary using GlOSS (TQG), resembles that proposed by the GlOSS project at Stanford University, except that it is applied to a set of records sampled from the database. Given a query $`q`$, the TQG formula assigns the estimated relevance score, $`\mathcal{R}_{db_i,q}`$, to database $`db_i`$ as follows:
###### Definition 1
$$\mathcal{R}_{db_i,q}=\widehat{Size}_{(db_i,q)}=N_i^{\prime }\prod _{k=1}^{|A|}\prod _{j=1}^{|A_k|}\frac{TF_{i,j,k}^{\prime }}{N_i^{\prime }}$$
where $`|A_k|`$ denotes the number of search terms on attribute $`A_k`$ specified by the query $`q`$; $`|A|`$ denotes the number of attributes involved in the query $`q`$; $`TF_{i,j,k}^{\prime }`$ denotes the tuple frequency of the $`j`$th search term in attribute $`A_k`$ computed from the set of records sampled from database $`db_i`$; and $`N_i^{\prime }`$ denotes the total number of sampled records for database $`db_i`$.
Note that the tuple frequency of a search term in an attribute refers to the number of records in the database containing the term in the specified attribute. Clearly, the number of sampled records $`N_i^{\prime }`$ is always smaller than $`N_i`$, the actual number of records in database $`db_i`$. Since $`TF_{i,j,k}^{\prime }\le N_i^{\prime }`$, the estimated relevance score ranges between 0 and $`N_i^{\prime }`$.
Therefore, both $`TF_{i,j,k}^{}`$’s and $`N_i^{}`$ have to be computed by the content summarizer for each database and be kept in the content knowledge for ranking purposes. In the following section, we will describe how database content sampling is performed by the query evaluator and record filter of ZBroker.
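To make the computation concrete, the following minimal Python sketch evaluates Definition 1 for one database. The dictionary-based data layout and the function name are our own illustration, not ZBroker’s actual storage format:

```python
def tqg_score(query, tf, n_sampled):
    """Estimate the result size of a conjunctive query on one database.

    `query` maps each attribute to its list of search terms, e.g.
    {"title": ["digital", "library"], "subject": ["retrieval"]};
    `tf` maps attribute -> term -> sampled tuple frequency TF'_{i,j,k};
    `n_sampled` is N'_i, the number of records sampled from database i.
    """
    if n_sampled == 0:
        return 0.0
    score = float(n_sampled)
    for attribute, terms in query.items():
        for term in terms:
            # A term never seen in the sample gets frequency 0,
            # so the whole product (and hence the score) vanishes.
            freq = tf.get(attribute, {}).get(term, 0)
            score *= freq / n_sampled
    return score

# Rank two hypothetical databases for one query.
query = {"title": ["digital", "library"]}
db_a = {"title": {"digital": 40, "library": 25}}   # N' = 1000
db_b = {"title": {"digital": 5, "library": 2}}     # N' = 800
ranks = sorted([("A", tqg_score(query, db_a, 1000)),
                ("B", tqg_score(query, db_b, 800))],
               key=lambda pair: pair[1], reverse=True)
print(ranks)   # database A ranks above database B
```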
## 5. DATABASE CONTENT SAMPLING AND RECORD FILTERING
To sample the content of databases, training queries have to be generated and used to extract records from the databases. There are essentially two possible approaches to create such training queries. One can either create synthetic training queries or collect user queries as training queries. In the current implementation, we have chosen to sample database content using both kinds of queries. Synthetic training queries have been generated to bootstrap the construction of content knowledge. User queries, on the other hand, have been used together with the initial set of synthetic queries to capture changes in the Z39.50 databases’ content as they evolve.
In this section, we describe the generation of synthetic queries, and the filtering processes that are applied to the user queries and the bibliographic records returned by these queries.
### Synthetic Training Query Generation
In order to ensure that the synthetic queries provide a good coverage of the Z39.50 database content, we require them to be simple enough that a fair number of records can be sampled from the participating databases. Hence, a small number of search terms is used in these synthetic queries. At present, ZBroker keeps a database of 3000 synthetic training queries generated from the bibliographic records of the Nanyang Technological University (NTU) Library<sup>2</sup><sup>2</sup>2The web page of Nanyang Technological University library is available at:(http://web.ntu.ac.sg/library/).. We downloaded 218,000 bibliographic records from the NTU library, and generated the synthetic queries by repeating the following steps (a code sketch of the procedure follows the list):
* Step 1: Randomly select a bibliographic record.
* Step 2: Randomly decide whether to use title, subject or both attributes.
* Step 3: Extract from the selected record up to four distinct terms from the chosen attribute(s). Since a bibliographic record may have several values for the subject attribute, all extracted subject terms must come from the same value. Moreover, stop words are not allowed to be included among the extracted terms.
* Step 4: The extracted title and/or subject terms form the search terms for a new synthetic training query.
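The four steps above can be summarized in a short sketch. The record format and the stop-word list below are placeholders of our own, not the actual NTU record layout:

```python
import random

STOP_WORDS = {"the", "a", "an", "of", "and", "in"}  # placeholder list

def make_synthetic_query(records):
    """Generate one training query following Steps 1-4 above."""
    record = random.choice(records)                          # Step 1
    attributes = random.choice(
        [["title"], ["subject"], ["title", "subject"]])      # Step 2
    query = {}
    for attribute in attributes:                             # Step 3
        value = record[attribute]
        if attribute == "subject":
            # All extracted subject terms must come from the same value.
            value = random.choice(value)
        terms = sorted({t for t in value.lower().split()
                        if t not in STOP_WORDS})
        if not terms:
            continue
        k = random.randint(1, min(4, len(terms)))
        query[attribute] = random.sample(terms, k)  # up to 4 distinct terms
    return query                                             # Step 4

records = [{"title": "digital library design",
            "subject": ["information systems", "software engineering"]}]
print(make_synthetic_query(records))
```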
### Deriving New Training Queries from User Queries
When a user query can potentially return new records not yet considered by the content sampler, ZBroker will deploy the user query as a new training query and use it together with other training queries in sampling the database content. Before a user query can be deployed as a training query, we have to determine if it returns a result set that is a subset of that of any other existing training query.
###### Definition 2
A query $`q_1`$ is said to be result-subsumed by query $`q_2`$ with respect to a database DB if the result set of query $`q_1`$ from DB is a subset of that of query $`q_2`$.
Owing to the heterogeneity of query formats and database content, it is difficult to decide if a user query is result-subsumed by some existing training query. In our ZBroker project, we therefore adopt a more restrictive definition of the subsumption relationship between queries. This restrictive definition is possible as each user query handled by ZBroker consists of a conjunction of search terms and is known as a conjunctive query.
###### Definition 3
Two selection predicates $`p_1(A_1=val_1)`$ and $`p_2(A_2=val_2)`$ are said to match (predicate matching) if $`A_1=A_2`$, where $`A_1`$ and $`A_2`$ are attribute names, and each $`val_i`$ ($`i=1`$ or $`2`$) represents a conjunctive set of terms.
###### Definition 4
Let $`q`$ be a conjunctive query, and let $`P(q)`$ denote the set of all search predicates in $`q`$. A query $`q_1`$ is predicate-subsumed by another query $`q_2`$ if $`P(q_2)\subseteq P(q_1)`$ (i.e., every predicate $`p_{2j}`$ in $`q_2`$ has a matching predicate $`p_{1j}`$ in $`q_1`$) and $`val_{2j}\subseteq val_{1j}`$, where $`val_{ij}`$ refers to the distinct term set in predicate $`p_{ij}`$ for query $`q_i`$ ($`i=1`$ or $`2`$).
###### Example 1
Let query $`q_1`$ = (title = “digital library”) and query $`q_2`$ = (title = “digital”); then $`q_1`$ is predicate-subsumed by $`q_2`$, as the term “digital” in $`q_2`$ can also be found in the matching predicate in $`q_1`$.
Query $`q_1`$ = (title = “database management project” and subject = “database management”) is predicate-subsumed by $`q_2`$ = (title = “database management”).
On the other hand, $`q_3`$ = (title = “computer management”) is not predicate-subsumed by $`q_4`$ = (title = “business management”).
Hence, when a user query is found not predicate-subsumed by any existing training query, it will be included as a new training query. The verification of predicate-subsumption is performed by the query filter as shown in Figure 2. The query filter examines all user queries that have been logged in the user query library, and determines the predicate-subsumption relationship between the existing training queries and these queries.
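Since the queries are conjunctive, the test of Definition 4 reduces to a few set comparisons. A minimal Python sketch (the dictionary representation of queries is an assumption of ours) could read:

```python
def predicate_subsumed(q1, q2):
    """Return True if q1 is predicate-subsumed by q2 (Definition 4).

    A query maps an attribute name to the set of conjunctive search
    terms on that attribute, e.g. {"title": {"digital", "library"}}.
    """
    for attribute, terms2 in q2.items():
        terms1 = q1.get(attribute)
        if terms1 is None:        # q2 has a predicate with no match in q1
            return False
        if not terms2 <= terms1:  # val_{2j} must be a subset of val_{1j}
            return False
    return True

q1 = {"title": {"digital", "library"}}
q2 = {"title": {"digital"}}
print(predicate_subsumed(q1, q2))  # True, as in Example 1

q3 = {"title": {"computer", "management"}}
q4 = {"title": {"business", "management"}}
print(predicate_subsumed(q3, q4))  # False
```

A user query would then be added to the training query library only when this test fails against every existing training query.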
### Record Filtering
Despite the effort in ensuring that no training query is redundant, it is still possible for two training queries to return overlapping result sets. Such duplicate bibliographic records, if not discarded, will lead to incorrect tuple frequencies in the content knowledge.
In ZBroker, a Record Id Database is therefore maintained for each Z39.50 database, and it is used by the record filter module to detect and discard duplicate records. For most Z39.50 databases, unique ids are assigned to their records and it is relatively easy to extract these ids from the training query results and keep them in the record id databases for record filtering. Nevertheless, not all Z39.50 databases provide such record ids. For a Z39.50 database that does not provide record ids, ZBroker determines the uniqueness of records by using either both author and title, or ISBN. To efficiently filter records using the record id databases, the record ids are stored as binary search trees in the databases. Only for those records that do not come with ids are binary search trees for author, title and ISBN created.
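As a rough illustration of the filtering logic (a Python set stands in here for the binary search trees, and the record field names are assumed):

```python
class RecordFilter:
    """Pass on only records not yet seen for one Z39.50 database."""

    def __init__(self):
        self.seen = set()

    def _key(self, record):
        # Prefer the server-assigned record id; otherwise fall back
        # to ISBN, then to the (author, title) pair, as described above.
        if record.get("id"):
            return ("id", record["id"])
        if record.get("isbn"):
            return ("isbn", record["isbn"])
        return ("at", record.get("author"), record.get("title"))

    def filter_new(self, records):
        fresh = []
        for record in records:
            key = self._key(record)
            if key not in self.seen:
                self.seen.add(key)
                fresh.append(record)
        return fresh
```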
### Global Term Id Assignment
As mentioned in Section 4, the ZBroker’s content knowledge stores for each Z39.50 database the tuple frequencies for terms appearing in different attributes of the database. To efficiently access the tuple frequencies, we require ids to be assigned to the terms. If a different set of term ids were created for each Z39.50 database, separate disk space would be required for storing the mappings between terms and their ids. To reduce the amount of storage space required, ZBroker assigns unique global term ids to the terms found in all Z39.50 databases. Hence a Global Term Dictionary has been created to maintain the term ids for each attribute.
Using the global term dictionary of an attribute, a single unique global term id can be assigned to a term found in the attribute of any Z39.50 databases. A Global Term Id Manager, which is part of content sampler, is responsible for assigning the global term ids. Using the global term ids, the tuple frequencies in the content knowledge can be retrieved efficiently.
When ZBroker constructs the content knowledge, several instances of content samplers may run simultaneously, one for each Z39.50 database. To regulate concurrent accesses to the global term dictionary, each content sampler process has to acquire a lock before the dictionary can be accessed. By holding the lock, it prevents other content samplers from accessing the dictionary thereby protecting the integrity of content knowledge.
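A minimal sketch of such a lock-protected dictionary follows; threads stand in for ZBroker’s per-database sampler processes, and the class and method names are our own:

```python
import threading

class GlobalTermDictionary:
    """Assign one global id per (attribute, term) pair."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ids = {}  # (attribute, term) -> global term id

    def term_id(self, attribute, term):
        # The lock serializes concurrent samplers, so no two distinct
        # terms can ever be assigned the same global id.
        with self._lock:
            key = (attribute, term)
            if key not in self._ids:
                self._ids[key] = len(self._ids)
            return self._ids[key]
```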
## 6. CONTENT KNOWLEDGE MAINTENANCE
The maintenance of content knowledge is essential for two reasons. It allows ZBroker to keep an up-to-date content knowledge about the participating Z39.50 databases. It also improves the accuracy of database ranking as new training queries are submitted. In the operational mode, ZBroker receives new incoming queries submitted by the users. The routing of these user queries has to be performed based on the content knowledge constructed prior to these queries. Since not all records that are relevant to the queries have been captured by the content knowledge, it is possible that inaccurate database ranks will be suggested by ZBroker. To maintain the content knowledge, ZBroker adopts two updating schedules for the content knowledge, i.e., daily and monthly updates.
* Daily Update: To improve the accuracy of database ranking, ZBroker collects further statistics about each Z39.50 database by submitting a batch of filtered user queries to all Z39.50 databases at the end of each day. The new information collected enables ZBroker to return more accurate database rankings when similar user queries are given in the near future.
* Monthly Update: When new records are added to the Z39.50 databases, it is not possible for ZBroker to be alerted. Hence, ZBroker must re-sample the Z39.50 databases using all training queries, including the original synthetic training queries and filtered user queries. At present, ZBroker performs this update monthly.
## 7. WEB-BASED USER INTERFACE
In this section, we describe the web-based user interface of ZBroker that has been implemented. The user interface is now available at http://www.cais.ntu.edu.sg:8000/~jxu/z3950. The main functionalities of the user interface include:
* Allowing users to formulate queries to be routed for a list of accessible Z39.50 databases;
* Ranking the databases for a given user query, and allowing the user to select the target Z39.50 databases for query submission.
* Forwarding the user query to the selected databases and presenting the query results to the user.
At present, a list of eleven Z39.50 databases at different locations can be routed by ZBroker as shown in Figure 3.
### Query Formulation
The user interface supports the formulation of user queries by requiring the users to supply search terms to the three queryable attributes: Author, Title and Subject. Multiple terms supplied to a queryable attribute are combined together with the “and” semantics, and the search predicates on different attributes are “and” together as well. Figure 4 depicts a formulated query that searches for records containing “information” and “retrieval” in their titles, and “system” in their subject.
Once a user submits a query, the contents of the three fields will be transferred to the ZBroker’s web user-interface program. The CGI program extracts the query and invokes the database ranker, which in turn returns the database ids according to their ranks. Based on the ordered database list returned by the database ranker, the user-interface program presents the ranked databases within the left frame of the web page as shown in Figure 5.
### Presentation of Query Results
As shown in Figure 5, the databases ranked by ZBroker are listed according to their relevance scores. At this point, the ranks are suggested by ZBroker without sending the query to the remote Z39.50 databases. The user can choose to submit the query to one or more Z39.50 databases by checking the boxes for the databases, and specifying the number of records to be retrieved from each database. ZBroker will forward the query to the selected databases via Z-Client, and display the returned result sets from the databases in multiple frames on the right, one for each selected database, as shown in Figure 5. Users can also view the detailed information of a bibliographic record in the result sets by following the record links embedded in the result frames, as shown in Figure 6.
## 8. PERFORMANCE ISSUES OF ZBROKER - A PRELIMINARY ANALYSIS
In this section, we present a preliminary analysis of the performance of ZBroker. The analysis focuses on examining the accuracy of query routing, the overheads of content sampling and the storage requirement of ZBroker. Since ZBroker can only construct its knowledge about each Z39.50 database by drawing records from the database using a large number of training queries, it is expected that the sampling process incurs a significant amount of overhead, including network and CPU overheads. In addition, ZBroker requires storage space for its content knowledge, training queries, global term dictionaries, and record id databases. We will give some statistics in this section to show the amount of sampling overhead and the storage requirement.
### Accuracy of Database Ranking
At present, the accuracy of the database ranks suggested by ZBroker has been encouraging. Most of the time, ZBroker is able to suggest relevant databases in response to a user query. Nevertheless, since the same boolean query can be interpreted and evaluated differently by the participating Z39.50 databases, one may find the actual result sizes returned by the databases inconsistent with the computed database ranks. For example, for a query involving search terms on the title attribute, some Z39.50 database servers may impose the search criteria not only on the title attribute but also on other related attributes, leading to more records being included in the query results. In these cases, a lowly ranked database may appear to return many more records than a highly ranked database, but many of these records do not carry the search terms in their title attribute.
### Overhead of Content Sampling
The overhead of sampling some Z39.50 databases is tabulated in the second column of Table 1. The table indicates the number of hours taken to sample each database. The sampling overhead ranges from 6 to 159 hours (about 7 days) depending mostly on the network delays involved between ZBroker and the remote Z39.50 databases.
Figure 7 depicts the number of new records sampled versus the total number of records sampled over time when content sampling was performed on the Boston University Library. The figure indicates that a large number of new records can be sampled in the first hour of sampling. It is interesting to note that the number of new records decreases as sampling continues. This is further substantiated by Figure 8, which shows that the average percentage of new records sampled per hour decreases over time. Although the figures are derived from sampling the Boston University Library, a similar observation can be made for other Z39.50 databases. The diminishing-return effect suggests that early queries play a more important role in probing the database content. By examining the rate of return, indicated by the average percentage of new records sampled per hour, one could determine if the database content has been adequately sampled.
### Storage Requirement
For each Z39.50 database, ZBroker maintains the content knowledge consisting of tuple frequencies for the attributes author, title and subject. ZBroker also maintains global term dictionaries and a record id database for each Z39.50 database. As shown in Table 2, a total of 137 MBytes of storage space has been used to maintain the eleven record id databases (one for each Z39.50 database). The size of a record id database directly depends on the size of the Z39.50 database concerned.
It takes only 2.8 MBytes to store the tuple frequencies in the content knowledge for each database. The global term dictionaries also do not occupy much storage space, and they scale up only sub-linearly due to common terms appearing in different Z39.50 databases.
## 9. CONCLUSIONS AND FUTURE WORKS
In this paper, we describe a query routing system for Z39.50 databases that provide only a Z39.50 query interface to their content. This system, known as ZBroker, demonstrates the viability of using a sampling technique to obtain content knowledge about Z39.50 databases. We have outlined the architecture, design and implementation of ZBroker. ZBroker is designed to cope with changes in the database content. It also includes a web-based user interface that supports easy query formulation and is able to forward queries to multiple selected Z39.50 databases.
During the implementation of ZBroker, we encountered several difficulties due to heterogeneous Z39.50 server implementations. Although Z39.50 is a standard that library systems from different vendors are required to comply with, there exists wide variation among Z39.50 implementations at different libraries. For example, “CD-ROM” might be interpreted as “CD ROM” by some servers, but as a single term by others. This inconsistency will introduce some inaccuracy into the query routing results.
### Future Works
As part of the future works, we plan to improve ZBroker in a number of ways:
* Note that ZBroker is not designed to be the only solution for routing queries to Z39.50 databases. For Z39.50 databases that can export their full content information for query routing, it may be more appropriate to adopt the STARTS approach of ranking databases. We plan to look into how a hybrid query routing system incorporating both STARTS and content sampling techniques can be developed.
* We plan to conduct more detailed performance evaluation experiments for ZBroker in order to determine the appropriate strategies to reduce the amount of time spent in content sampling and to improve upon the database ranking techniques.
# A Compton Backscattering Polarimeter for Measuring Longitudinal Electron Polarization
## Introduction
The NIKHEF Compton polarimeter has been constructed to measure the longitudinal polarization of electrons stored in the AmPS ring. The polarized electrons are provided by a recently commissioned polarized electron source (PES) \[cbp:bol96\]. While Compton backscattering polarimeters are used to measure the polarization of transversely polarized stored electron beams \[cbp:pla89, cbp:bar93\], NIKHEF’s detector was the first to measure the polarization of a longitudinally polarized stored beam \[cbp:igo96\].
In this technique, a circularly polarized photon beam (polarization $`S_3`$, energy $`E_\lambda `$) is backscattered from a stored polarized electron beam (polarization $`P_e`$, energy $`E_e`$).
The cross section for Compton scattering of circularly polarized photons from longitudinally polarized electrons can be written as
$$\frac{d\sigma }{dE_\gamma }=\frac{d\sigma _0}{dE_\gamma }[1+S_3P_z\alpha _{3z}(E_\gamma )],$$
(1)
where $`\frac{d\sigma _0}{dE_\gamma }`$ follows from the energy spectrum for unpolarized electrons and photons and $`P_z`$ represents the longitudinal component of the electron polarization. For a given $`E_\lambda `$ and $`E_e`$ the asymmetry can be written as,
$$A(E_\gamma )=\frac{N_L(E_\gamma )-N_R(E_\gamma )}{N_L(E_\gamma )+N_R(E_\gamma )}=\mathrm{\Delta }S_3P_z\alpha _{3z}(E_\gamma )$$
(2)
where $`N_L(E_\gamma )`$ ($`N_R(E_\gamma )`$) is the number of photons with energy $`E_\gamma `$ with incident left (right) handed helicity, and $`\mathrm{\Delta }S_3`$ is the difference between the two polarization states, divided by two. $`P_e`$ is determined by taking $`P_z`$ as a free parameter and fitting the measured asymmetry with eq. 2. The relation between $`P_z`$ and $`P_e`$ is determined by the lattice of the storage ring.
## Layout of the polarimeter
A schematic layout of the Compton polarimeter is shown in fig. 1. The polarimeter consists of a laser system with its associated optical system and a detector for the detection of backscattered photons.
Laser photons are produced by a 10 W CW Ar-ion laser, operated at 514 nm. Part of the mirrors in the optical path can be controlled remotely, in order to optimize the overlap of the electron and laser beam. A quarter-wave plate is used to convert the initially linearly polarized photons to circularly polarized. A Pockels cell is used to switch the helicity between left and right, while a half-wave plate can be inserted in the optical path to check for false asymmetries by reversing the sign of the Compton asymmetry.
Laser photons interact with stored electrons in the straight section (length $`\sim 3`$ m) between the first and second dipoles (bending angles $`11.25^{\circ }`$) after the internal target facility. The backscattered photons leave the interaction region traveling in the same direction as the electrons of the beam and are separated from them after the second dipole. They are detected in a gamma detector, consisting of a block of $`100\times 100\times 240`$ mm<sup>3</sup> pure CsI.
A chopper mounted immediately after the Ar-ion laser is used to block the laser light for $`1/3`$ of the time for background measurements. The chopper is operated at 75 Hz and also generates the driving signal for the Pockels cell.
## Results
The storage ring could only be operated with a 10% partial snake \[cbp:ohm96\]. Therefore, it was necessary to perform all measurements with an electron beam energy of 440 MeV, resulting in a maximum energy for the Compton photons of 7.04 MeV. This energy is lower than that of the design specification (500–900 MeV), resulting in a poor energy resolution. To reduce background at this rather low energy, we performed all measurements with beam currents smaller than 15 mA. The rate of backscattered photons was on the order of 8 kHz/mA at full laser power, in agreement with simulations.
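As a cross-check of this number — using only standard Compton kinematics, not a formula quoted in this paper — the maximum backscattered photon energy for a head-on collision is

$$E_\gamma ^{\mathrm{max}}=\frac{4\gamma _e^2E_\lambda }{1+4\gamma _eE_\lambda /m_e},\qquad \gamma _e=\frac{E_e}{m_e}\simeq 861,$$

so that with $`E_\lambda \simeq 2.41`$ eV (514 nm) one finds $`E_\gamma ^{\mathrm{max}}\simeq 7.15/1.016\simeq 7.03`$ MeV, reproducing the 7.04 MeV quoted above to within rounding.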
To minimize the effects of false asymmetries (induced by a small steering effect of the Pockels Cell), we performed sets of six independent measurements to determine the electron polarization. Three measurements were done with different electron polarizations injected into the ring (positive helicity, unpolarized and negative helicity). These measurements were repeated with the half-wave plate of the polarimeter inserted in the optical path. The measurements with unpolarized electrons were used to determine and correct for false asymmetries, while the insertion of the half-wave plate was done as a consistency check. Figure 2 shows the asymmetry before and after correction for false asymmetries.
To determine the stability of the polarimeter, one measurement was repeated nine times. To exclude any sensitivity to variations in the polarization of the injected electrons or the spin life time, those measurements were performed with unpolarized electrons. The total measurement time was $`\sim `$ 90 min, while a full set of six measurements normally takes about 60 min. The results are shown in fig. 3 and show good stability on this time scale.
The long-term stability is determined from polarization measurements done typically once a day. These measurements are sensitive not only to variations of the polarimeter, but also to any other time-dependent effect such as a degradation of the cathode used at PES. The results (see fig. 3) show no trend in the polarization of the electrons, indicating a good long-term stability for all components.
The polarimeter has been used successfully to optimize the settings of the Z-manipulator at PES. After the optimization, the spin life time ($`\tau `$) and initial polarization ($`P_0`$) have been determined by combining the data of nine measurements of six minutes each. The combined data have been rebinned as a function of time and the polarization has been determined for each bin separately. We found $`P_0=61.6\pm 1.4\text{\% (statistical)}`$, and $`\tau =4500_{-1600}^{+5900}\text{ s}`$. The spin life time is in agreement with our calculations. The polarization measured with the Mott polarimeter at PES was $`82\pm 5\%`$. The difference between the polarization measured by the Mott polarimeter and by the Compton polarimeter may be caused by depolarization due to the focusing solenoids in the linac or by depolarizing resonances during damping of the beam.
## Conclusions
Here, we describe the results of extensive tests done with a Compton backscattering electron polarimeter. The tests have been performed at an electron energy of 440 MeV and with a partial snake. The results show that it is possible to operate the polarimeter in a reliable manner over a period of weeks. Furthermore, the polarimeter has been used to map out the full dependence of the polarization of stored electrons on the settings of the Z-manipulator, and to determine the spin life time and the depolarization during acceleration and injection of the electrons.
## Acknowledgment
This work was supported in part by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), the Swiss National Foundation, the National Science Foundation under Grants No. PHY-9316221 (Wisconsin), PHY-9200435 (Arizona State) and HRD-9154080 (Hampton), Nato Grant No. CRG920219. and HCM Grant Nrs. ERBCHBICT-930606 and ERB4001GT931472.
# Comment on “Diffusion of Ionic Particles in Charged Disordered Media”
In a recent Letter, Mehrabi and Sahimi discuss motion of ions in charged disordered media, presenting a variety of results obtained by Monte Carlo simulation on a lattice . Their observations of mean square displacements $`R^2(t)`$ suggest that this model exhibits anomalous diffusion in three dimensions:
$$R^2(t)\sim (\mathrm{const})\,t^{1-\delta }\quad \mathrm{as}\ t\to \mathrm{\infty }.$$
(1)
They observe the same behavior in one and two dimensions, but do not present results for $`\delta `$. Mehrabi and Sahimi also make the physically-surprising claim that a suitably-defined “short-time diffusivity” can actually *increase* with increasing disorder strength (see Fig. 3 of Ref. ). Exact bounds, renormalization group calculations, and previous numerical simulations are inconsistent with these results in three dimensions.
At low concentrations of mobile ions, the Green function for a diffusing ion should obey the diffusion equation:
$$\frac{\partial c_v(\mathbf{r},t)}{\partial t}=D_0\nabla ^2c_v+\beta D_0\nabla \cdot [c_v\nabla v(\mathbf{r})].$$
(2)
Here $`c_v`$ is the Green function of a single ion in a given realization of the quenched random potential $`v`$, $`D_0`$ is the “bare,” short-time diffusivity, and $`\beta `$ is the inverse temperature. The mean square displacement is given by $`R_v^2(t)=\int d^d\mathbf{r}\,|\mathbf{r}|^2c_v(\mathbf{r},t)`$. The observable mean square displacement is given by an average over all realizations of the disorder: $`R^2(t)=\langle R_v^2(t)\rangle `$. The effective diffusion coefficient is defined in $`d`$ dimensions by $`D=\mathrm{lim}_{t\to \mathrm{\infty }}R^2(t)/(2dt)`$.
Mehrabi and Sahimi model the disorder by a quenched Gaussian random potential field. The statistics of this potential field are chosen so that they obey bulk charge neutrality: $`\widehat{\chi }_{vv}(\mathbf{k})=\gamma /[k^2(k^2+\kappa ^2)]`$. Here the potential–potential correlation function is $`\chi _{vv}(\mathbf{r})=\langle v(\mathbf{0})v(\mathbf{r})\rangle `$, $`\kappa `$ is an inverse correlation length, and $`\gamma `$ is a measure of the density of defects. The Fourier transform in $`d`$ dimensions is given by $`\widehat{f}(\mathbf{k})=\int d^d\mathbf{r}\,f(\mathbf{r})\mathrm{exp}(i\mathbf{k}\cdot \mathbf{r})`$.
The single-ion, random diffusion model is a well-studied one in statistical physics, and a variety of exact results are known. First, there is an exact bound for the diffusivity in this system in any dimension :
$$\frac{D}{D_0}\ge \mathrm{exp}[-\beta ^2\chi _{vv}(0)].$$
(3)
Calculating this bound in three dimensions, one finds $`D/D_0\ge \mathrm{exp}[-\beta ^2\gamma /(4\pi \kappa )]`$. This result implies that the motion is diffusive in three dimensions, *i.e.* $`D>0`$. The motion is also diffusive at finite ion concentrations, since the dynamical exponent is 2 . Therefore, the motion should be asymptotically diffusive in three dimensions. Indeed, previous careful simulations by Dean, Drummond, and Horgan on related models have confirmed the bound . Moreover, these simulations have shown that Deem and Chandler’s single-ion prediction
$$\frac{D}{D_0}=\mathrm{exp}[-\beta ^2\chi _{vv}(0)/d]$$
(4)
is accurate to at least moderate disorder strengths. In fact, this equation is correct to second order in $`\beta ^2\chi _{vv}(0)`$ in all dimensions and is exact in one dimension. Note that, as expected physically, the diffusion constant *decreases* with increasing disorder strength.
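For completeness, the variance entering the three-dimensional bound follows from the quoted correlation spectrum in one line (assuming the standard Fourier normalization):

$$\chi _{vv}(0)=\int \frac{d^3k}{(2\pi )^3}\frac{\gamma }{k^2(k^2+\kappa ^2)}=\frac{\gamma }{2\pi ^2}\int _0^{\mathrm{\infty }}\frac{dk}{k^2+\kappa ^2}=\frac{\gamma }{4\pi \kappa },$$

so that $`D/D_0\ge \mathrm{exp}[-\beta ^2\gamma /(4\pi \kappa )]>0`$. In two dimensions the same integral diverges logarithmically at small $`k`$, which is why the bound carries no information there.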
The situation is more interesting in two dimensions, where anomalous diffusion can occur \[the bound in Eq. (3) vanishes\]. Indeed, field-theoretic treatments have shown that the exponent in Eq. (1) is continuously variable and is given exactly by $`\delta =1/[1+8\pi \kappa ^2/(\beta ^2\gamma )]`$ . This scaling has been confirmed by numerical simulations . At finite ion concentrations, the anomalous diffusion persists at high temperature , although the mobile ions may partially screen the disorder. A Kosterlitz-Thouless transition can occur at low temperature .
# Pre-critical Chiral Fluctuations in Nuclear Medium — precursors of chiral restoration at $`\rho _B\ne 0`$ — An expanded version of this work is given in: T. Hatsuda, T. Kunihiro and H. Shimizu, nucl-th/9810022.
## 1 Introduction
When chiral symmetry is partially restored in nuclear medium, one can expect a large fluctuation of the quark condensate $`\langle \overline{q}q\rangle `$, i.e., the order parameter of the chiral transition. This implies that there arises a softening of a collective excitation in the scalar–isoscalar channel, leading to (1) partial degeneracy of the scalar–isoscalar particle (traditionally called the $`\sigma `$-meson) with the pion, and (2) a decrease of the decay width of $`\sigma `$ due to the phase space suppression, caused by (1), in the reaction $`\sigma \to 2\pi `$. (The significance of the $`\sigma `$ meson is discussed in ; see also .)
Although it is not a simple task to identify the $`\sigma `$ meson in free space, there will be a chance to see the elusive particle more clearly in nuclei, where chiral symmetry might be partially restored. Some experiments to produce the sigma meson in a nucleus were proposed in .
One should, however, notice that describing the system in terms of the meson mass and its width may become inadequate because of the strong interaction of the meson with the environmental particles. Hence the proper observable to describe the system becomes the spectral function. Recently, a calculation of the spectral function in the sigma channel has been performed with the $`\sigma `$–$`2\pi `$ coupling incorporated in the linear $`\sigma `$ model at finite $`T`$; it was shown that the enhancement of the spectral function in the $`\sigma `$ channel just above the two-pion threshold is the most distinct signal of the softening .
In this report, we shall show that the spectral enhancement associated with the partial chiral restoration takes place also at finite baryon density close to $`\rho _0=0.17\,\mathrm{fm}^{-3}`$.
## 2 Model calculation
Before entering into the explicit model-calculation, let us describe the general features of the spectral enhancement near the two-pion threshold. Consider the propagator of the $`\sigma `$-meson at rest in the medium: $`D_\sigma ^{-1}(\omega )=\omega ^2-m_\sigma ^2-\mathrm{\Sigma }_\sigma (\omega ;\rho )`$, where $`m_\sigma `$ is the mass of $`\sigma `$ at the tree level, and $`\mathrm{\Sigma }_\sigma (\omega ;\rho )`$ contains the loop corrections in the vacuum as well as in the medium. The corresponding spectral function is given by
$`\rho _\sigma (\omega )=-\pi ^{-1}\mathrm{Im}D_\sigma (\omega ).`$ (1)
Near the two-pion threshold, $`\mathrm{Im}\mathrm{\Sigma }_\sigma \propto \theta (\omega -2m_\pi )\sqrt{1-\frac{4m_\pi ^2}{\omega ^2}}`$ in the one-loop order. When chiral symmetry is being restored, $`m_\sigma ^{*}`$ (the “effective mass” of $`\sigma `$, defined as a zero of the real part of the propagator, $`\mathrm{Re}D_\sigma ^{-1}(\omega =m_\sigma ^{*})=0`$) approaches $`m_\pi `$. Therefore, there exists a density $`\rho _c`$ at which $`\mathrm{Re}D_\sigma ^{-1}(\omega =2m_\pi )`$ vanishes even before the complete $`\sigma `$–$`\pi `$ degeneracy takes place; namely $`\mathrm{Re}D_\sigma ^{-1}(\omega =2m_\pi )=[\omega ^2-m_\sigma ^2-\mathrm{Re}\mathrm{\Sigma }_\sigma ]_{\omega =2m_\pi }=0`$. At this point, the spectral function can be solely represented by the imaginary part of the self-energy;
$`\rho _\sigma (\omega \simeq 2m_\pi )=\frac{-1}{\pi \mathrm{Im}\mathrm{\Sigma }_\sigma }\propto \frac{\theta (\omega -2m_\pi )}{\sqrt{1-\frac{4m_\pi ^2}{\omega ^2}}},`$ (2)
which shows an enhancement of the spectral function at the $`2m_\pi `$ threshold. We remark that this enhancement is generically correlated with the partial restoration of chiral symmetry.
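The shape of the enhancement can be made explicit by a simple expansion, added here for illustration. Writing $`\omega =2m_\pi (1+\epsilon )`$ with $`0<\epsilon \ll 1`$, one has

$$1-\frac{4m_\pi ^2}{\omega ^2}\simeq 2\epsilon ,\qquad \text{so that}\qquad \rho _\sigma (\omega )\propto \frac{1}{\sqrt{2\epsilon }}=\sqrt{\frac{m_\pi }{\omega -2m_\pi }},$$

an integrable inverse-square-root singularity just above the threshold, i.e., a pronounced but finite peak.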
To make the argument more quantitative, let us evaluate $`\rho _\sigma (\omega )`$ in a toy model, namely the SU(2) linear $`\sigma `$-model:
$`\mathcal{L}=\frac{1}{4}\mathrm{tr}[\partial M\partial M^{\dagger }-\mu ^2MM^{\dagger }-\frac{2\lambda }{4!}(MM^{\dagger })^2-h(M+M^{\dagger })],`$ (3)
where tr is for the flavor index and $`M=\sigma +i\vec{\tau }\cdot \vec{\pi }`$. Although the model has only a limited number of parameters and is not a precise low energy representation of QCD, we emphasize that it does describe the pion dynamics qualitatively well up to 1 GeV, as shown by Chan and Haymaker , where the Padé approximant is used for the scattering matrix. The coupling constants $`\mu ^2,\lambda `$ and $`h`$ have been determined in the vacuum to reproduce $`f_\pi =93`$ MeV, $`m_\pi =140`$ MeV as well as the s-wave $`\pi `$–$`\pi `$ scattering phase shift in the one-loop order.
The nucleon sector and the interaction with the nucleon of the mesons are given by,
$`\mathcal{L}_I(N,M)=-g\chi \overline{N}U_5N-m_0\overline{N}U_5N,`$ (4)
where we have used a polar representation $`\sigma +i\vec{\tau }\cdot \vec{\pi }\gamma _5\equiv \chi U_5`$ for convenience. The first term in (4), with the coupling constant $`g`$, is a standard chiral invariant coupling in the linear $`\sigma `$ model. Although the second term with a new parameter $`m_0`$ is not usually taken into account in the literature, it is also chiral invariant and non-singular, so there is no compelling reason to dismiss it.
Under the dynamical breaking of chiral symmetry in the vacuum ($`\langle \sigma \rangle _{\mathrm{vac}}\equiv \sigma _0\ne 0`$), eq. (4) is expanded in terms of $`\sigma /\sigma _0`$ and $`\vec{\pi }/\sigma _0`$: it is found that the standard constraint $`g_s=g_p`$ is relaxed without conflicting with chiral symmetry due to the term with $`m_0`$; the term proportional to $`m_0\pi ^2`$ appears so as to preserve chiral symmetry.
In the following, we treat the effect of the meson-loop as well as the baryon density as a perturbation to the vacuum quantities. Therefore, our loop-expansion is valid only at relatively low densities. The full self-consistent treatment of the problem requires systematic resummation of loops similar to what was developed at finite $`T`$ .
We parametrize the chiral condensate in nuclear matter $`\langle \sigma \rangle `$ as $`\langle \sigma \rangle \equiv \sigma _0\mathrm{\Phi }(\rho )`$. In the linear density approximation, $`\mathrm{\Phi }(\rho )=1-C\rho /\rho _0`$ with $`C=(g_\mathrm{s}/\sigma _0m_\sigma ^2)\rho _0`$. Instead of using $`g_\mathrm{s}`$, we use $`\mathrm{\Phi }`$ as a basic parameter in the following analysis. The plausible value of $`\mathrm{\Phi }(\rho =\rho _0)`$ is 0.7–0.9 .
## 3 Results and discussions
The spectral function, together with $`\mathrm{Re}D_\sigma ^{-1}(\omega )`$, calculated with a linear sigma model is shown in Figs. 1 and 2: the characteristic enhancement of the spectral function is seen just above $`2m_\pi `$. It is also to be noted that a large enhancement of the spectral function near $`2m_\pi `$ is seen even before the in-medium $`\sigma `$-meson mass $`m_\sigma ^{*}`$ and $`m_\pi `$ become degenerate, i.e., before the chiral-restoring point.
To confirm the threshold enhancement, measuring $`2\pi ^0`$ and $`2\gamma `$ in experiments with hadron/photon beams off heavy nuclear targets is useful. Measuring $`\sigma \to 2\pi ^0\to 4\gamma `$ is experimentally feasible , and is free from the $`\rho `$ meson background inherent in the $`\pi ^+\pi ^{-}`$ measurement. Measuring the 2 $`\gamma `$’s from the electromagnetic decay of the $`\sigma `$ is interesting because of the small final state interactions, although the branching ratio is small.<sup>1</sup><sup>1</sup>1 One needs also to fight with a large background of photons mainly coming from $`\pi ^0`$s. Nevertheless, if the enhancement is prominent, there is a chance to find the signal. When $`\sigma `$ has a finite three-momentum, one can detect dileptons through the scalar–vector mixing in matter: $`\sigma \to \gamma ^{*}\to e^+e^{-}`$.
Recently the CHAOS collaboration measured the $`\pi ^+\pi ^\pm `$ invariant mass distribution $`M_{\pi ^+\pi ^\pm }^A`$ in the reaction $`A(\pi ^+,\pi ^+\pi ^\pm )X`$ with the mass number $`A`$ ranging from 2 to 208: they observed that the yield for $`M_{\pi ^+\pi ^{-}}^A`$ near the 2$`m_\pi `$ threshold is close to zero for $`A=2`$, but increases dramatically with increasing $`A`$. They identified that the $`\pi ^+\pi ^{-}`$ pairs in this range of $`M_{\pi ^+\pi ^{-}}^A`$ are in the $`I=J=0`$ state. The $`A`$ dependence of the invariant mass distribution presented in near the 2$`m_\pi `$ threshold has a close resemblance to our model calculation in Figs. 1 and 2, which suggests that this experiment may already provide a hint about how the partial restoration of chiral symmetry manifests itself at finite density.<sup>2</sup><sup>2</sup>2See for other approaches to explain the CHAOS data.
We remark that (d, <sup>3</sup>He) reactions are also useful for exploring the spectral enhancement because of the large incident flux. The incident kinetic energy $`E`$ of the deuteron in the laboratory system is estimated to be $`1.1\,\mathrm{GeV}<E<10`$ GeV to cover the spectral function in the range $`2m_\pi <\omega <750`$ MeV.
To make the calculation more realistic, one needs to incorporate the two-loop diagrams, which are, however, expected to hardly change the enhancement near the two-pion threshold discussed here.
In conclusion, we would like to express our sincere thanks to Prof. Yazaki for his interest in our work summarized in and the encouragement given to us for a long time.
# Indications for Factorization and $`\mathrm{Re}V_{ub}<0`$ from Rare B Decay Data
## Abstract
Surveying known hadronic rare B decays, we find that the factorization approximation can give a coherent account of $`K\pi `$, $`\pi \pi `$ and $`\rho ^0\pi ^+`$ data and give predictions for the $`\omega ^0\pi ^+`$, $`\rho \pi `$ and $`K^{*}\pi `$ modes, if $`\mathrm{Re}V_{ub}`$ is taken as negative (in the standard phase convention) rather than positive. As further confirmation, we expect a lower $`\mathrm{sin}2\beta `$ value at B Factories as compared to current fits, and $`B_s`$ mixing close to LEP bounds at SLD and CDF.
The last few years have been quite exciting for the field of hadronic rare B decays . The observation of the exclusive $`B\to \eta ^{\prime }K^+`$, $`\eta ^{\prime }K^0`$, $`K^+\pi ^{-}`$, $`K^0\pi ^+`$, and $`K^+\pi ^0`$ modes gives definite support for $`b\to s`$ penguins, while the $`\omega h^+`$ and especially the newly observed $`\rho ^0\pi ^+`$ mode indicate that tree level hadronic $`b\to u`$ transitions do occur. In contrast, the limits on the $`\varphi K^+`$ and $`\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^0`$ modes are rather stringent .
Admittedly, much uncertainty clouds the theory of hadronic rare B decays. The effective Lagrangian that describes b quark decay is better understood, but the subsequent evolution of the decayed B meson into specific light two body hadronic final states is certainly very complicated, while our understanding of long distance QCD is limited. The usual approach is to assume factorization, then use parameters such as $`N_{\mathrm{eff}.}\equiv N_C\ne 3`$ to fit and quantify the apparent deviations from this assumption. The picture is further muddled by the possibility of rescattering between hadronic final states (FSI). Attempts have been made to take most uncertainties into account and project into the future on the many effective two body modes, where the experimental outlook is rather bright. But, can the navigation chart be simplified? In this Letter we make such an attempt at understanding present data.
We find a simple, coherent and therefore attractive view that can account for current trends in data, especially the $`K\pi `$, $`\pi \pi `$ and $`V\pi `$ ($`V=\rho `$, $`\omega `$ and $`K^{*}`$) modes: naive factorization works without resort to $`N_{\mathrm{eff}.}`$ or FSI, but only with $`\mathrm{cos}\gamma `$ negative, where $`\gamma =\mathrm{arg}(V_{ub}^{*})`$ in the standard phase convention . Smaller light quark masses may also help. Semi-quantitative predictions can be made which could be tested in the near future.
Current fits to the KM matrix elements, however, seem to favor $`\mathrm{cos}\gamma >0`$. The preference comes largely from the limit on $`\mathrm{\Delta }m_{B_s}/\mathrm{\Delta }m_{B_d}`$, where the hadronic uncertainty is restricted to $`\xi ^2\equiv f_{B_s}^2B_{B_s}/f_{B_d}^2B_{B_d}`$, which is probably the least uncertain. With the more conservative $`\mathrm{\Delta }m_{B_s}>10.2`$ ps<sup>-1</sup> at 95% C.L., which also corresponds to the current best single experiment sensitivity, some room is allowed for $`\mathrm{cos}\gamma <0`$. But with $`\mathrm{\Delta }m_{B_s}>12.4`$ ps<sup>-1</sup> from combining LEP, CDF and SLD data, one gets $`\gamma \simeq 60^{\circ }`$–$`70^{\circ }`$ with $`\sim 10^{\circ }`$ errors, and $`\mathrm{cos}\gamma `$ seems definitely positive. We note that the 95% C.L. contour of one of the fits has a tail extending towards $`\mathrm{cos}\gamma <0`$, and would extend further if one enlarged the error on $`\xi `$. It may be prudent, therefore, to allow for the possibility that $`\mathrm{cos}\gamma <0`$ might still be the case in Nature. The current fit result may be implying that $`B_s`$ mixing is not far around the corner. In any case we should keep in mind that $`\gamma `$ is the most challenging unitarity angle to measure at B Factories, and any handle one may gain should be welcome.
When 1997 data suggested $`K^0\pi ^+>K^+\pi ^{-}`$, a method for constraining $`\gamma `$ was proposed . With 1998 data, the $`K^+\pi ^0`$ mode was observed while the $`K^0\pi ^+`$ rate came down , and both branching ratios are now similar to $`K^+\pi ^{-}\simeq 1.4\times 10^{-5}`$. Although the method of Ref. is no longer effective, it was pointed out that the 1998 data suggest $`\mathrm{cos}\gamma <0`$ and prefer a small or no FSI phase . Following this trail, we find that a negative $`\mathrm{cos}\gamma `$ could also explain the absence of the $`\pi ^+\pi ^{-}`$ mode, the prominence of $`\rho ^0\pi ^+`$ over $`\omega ^0\pi ^+`$ and $`K^0\pi ^+`$, as well as predict emerging trends in the $`\pi \pi `$, $`\rho \pi `$ and $`K^{*}\pi `$ modes.
Let us retrace the main points of Ref. . We give the average $`K\pi `$ branching ratios vs. $`\gamma `$ in Fig. 1(a) for $`m_s=105`$ and 200 MeV. The light quark mass $`m_s`$ enters through the penguin $`O_6`$ operator via relations between the axial current and pseudoscalar density matrix elements. We see that $`K^+\pi ^{-}\simeq K^0\pi ^+\simeq K^+\pi ^0`$ prefers a larger $`m_s`$, and can only be achieved (allowing for some experimental uncertainty) for $`\gamma \simeq 90^{\circ }`$–$`130^{\circ }`$, or $`\mathrm{cos}\gamma <0`$. Although the electroweak penguin (EWP) plays a crucial role in raising the $`K^+\pi ^0`$ rate, the change in sign of $`\mathrm{cos}\gamma `$ was important in allowing $`K^+\pi ^{-}`$ to reach above $`K^0\pi ^+`$.
With present fit values for $`V_{ub}`$, one expects $`\pi ^+\pi ^0<\pi ^+\pi ^{-}\simeq 1\times 10^{-5}`$. Instead, one finds $`\pi ^+\pi ^{-}<0.84\times 10^{-5}`$ and a weaker limit on $`\pi ^+\pi ^0`$ due to a larger event yield. Compared to the strength of the $`K\pi `$ modes, they pose some problem for theory. Again, the traditional approach is to resort to $`N_{\mathrm{eff}.}`$ or FSI, or a smaller $`|V_{ub}|`$. We find, rather interestingly, that a simple flip in sign of $`\mathrm{cos}\gamma `$ not only explains the smallness of the $`\pi ^+\pi ^{-}`$ mode, but also allows for $`\pi ^+\pi ^0>\pi ^+\pi ^{-}`$, without need for a very small $`N_{\mathrm{eff}.}`$ or large $`\pi ^+\pi ^{-}\to \pi ^0\pi ^0`$ rescattering . The amplitude for the $`\overline{B}^0\to \pi ^+\pi ^{-}`$ mode is,
$`\sqrt{2}𝒜_{\pi ^+\pi ^{-}}=iG_Ff_\pi F_0(m_B^2-m_\pi ^2)\left\{V_{ud}^{*}V_{ub}a_1-V_{td}^{*}V_{tb}[a_4+a_{10}+(a_6+a_8)R_1]\right\},`$ (1)
where $`F_0=F_0^{B\pi }(m_\pi ^2)`$ is a $`B\pi `$ (BSW) form factor, $`a_i`$’s are combinations of Wilson coefficients , and $`R_1=2m_\pi ^2/[(m_b-m_u)(m_u+m_d)]`$. It is clear that tree–penguin ($`T`$–$`P`$) interference for the $`K\pi `$ and $`\pi \pi `$ modes differs in sign, because the KM factors $`\mathrm{Re}(V_{ts}^{*}V_{tb})\simeq -A\lambda ^2`$ and $`\mathrm{Re}(V_{td}^{*}V_{tb})\simeq A\lambda ^3(1-\rho )`$ have opposite sign. This observation is independent of the factorization assumption. As a consequence, if the $`K^+\pi `$ rates are enhanced for $`\mathrm{cos}\gamma <0`$, the $`\pi ^+\pi ^{-}`$ rate gets suppressed. In contrast, the $`\pi ^+\pi ^0`$ mode is mainly $`T`$ plus small EWP terms, hence its $`\gamma `$ dependence is weak. Analogous to the $`K\pi `$ case, $`u`$ and $`d`$ quark masses enter through $`R_1`$. We plot $`Br`$ vs. $`\gamma `$ for $`\pi \pi `$ modes in Fig. 1(b) for $`m_d=2m_u=`$ 3 and 6.4 MeV. These quark masses are at the $`m_b`$ scale, and are within the range given by the Particle Data Group . It is clear that $`\pi ^+\pi ^{-}<\pi ^+\pi ^0`$ is not impossible for $`\mathrm{cos}\gamma <0`$ if $`m_{u,d}`$ are on the lighter side. In this case, however, $`P`$ would become comparable to $`T`$, complicating the mixing-dependent CP study in the $`B^0\to \pi ^+\pi ^{-}`$ channel. We note that in general the $`\pi ^0\pi ^0`$ mode is very small, which would not be the case if $`\pi ^+\pi ^{-}`$ is suppressed by rescattering into $`\pi ^0\pi ^0`$.
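To make the sign flip concrete, the two KM factors can be evaluated directly in the Wolfenstein parametrization. The minimal sketch below (Python; the values $`\lambda \approx 0.22`$, $`A\approx 0.8`$, $`\rho \approx 0.2`$ are illustrative assumptions, not taken from the fits quoted above) simply exhibits the opposite signs:

```python
# Sign check of the KM factors driving tree-penguin interference.
# Wolfenstein values below are illustrative assumptions, not fit results.
lam, A, rho = 0.22, 0.8, 0.2

re_vts_vtb = -A * lam**2               # Re(V_ts^* V_tb) ~ -A lambda^2
re_vtd_vtb = A * lam**3 * (1.0 - rho)  # Re(V_td^* V_tb) ~  A lambda^3 (1 - rho)

# Opposite signs: if cos(gamma) < 0 enhances the b -> s (K pi) modes,
# it simultaneously suppresses the b -> d (pi+ pi-) mode.
print(re_vts_vtb, re_vtd_vtb)
```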
The $`\rho ^0\pi ^+`$ mode has just been observed at the sizable rate of $`(1.5\pm 0.5\pm 0.4)\times 10^{-5}`$ , and is seemingly larger than $`\omega ^0\pi ^+\sim 1\times 10^{-5}`$ as indicated in . Both are at odds with the results of Ref. for $`N_C=3`$. Can changing the sign of $`\mathrm{cos}\gamma `$ help? Dropping EWP terms (but not numerically), the $`B^{-}\to \rho ^0(\omega ^0)\pi ^{-}`$ amplitude is
$`𝒜_{V^0\pi ^{-}}=G_Fm_V\epsilon \cdot p_\pi \left\{f_\pi A_0\left[V_{ud}^{*}V_{ub}a_1-V_{td}^{*}V_{tb}(a_4+a_6Q_1)\right]+f_VF_1\left[V_{ud}^{*}V_{ub}a_2\pm V_{td}^{*}V_{tb}a_4\right]\right\},`$ (2)
where $`Q_1=-2m_\pi ^2/[(m_b+m_u)(m_u+m_d)]`$ is opposite in sign to $`R_1`$ of Eq. (1), and $`A_0=A_0^{BV}(m_\pi ^2)`$ and $`F_1=F_1^{B\pi }(m_V^2)`$ are BSW form factors . The $`+/-`$ sign for the last term is for $`\rho ^0/\omega ^0`$, and is traced to the $`\mp d\overline{d}`$ content (PDG convention) of $`\rho ^0`$ and $`\omega ^0`$ when $`\pi ^+`$ comes from the spectator quark in a $`\overline{b}\to \overline{d}d\overline{d}`$ transition. As shown in Fig. 2(a), it splits $`\rho ^0\pi ^+`$ upwards from $`\omega ^0\pi ^+`$ for $`\mathrm{cos}\gamma <0`$. Because the difference between the two amplitudes is otherwise minute, this is a test for $`\mathrm{cos}\gamma <0`$ independent of normalization.
The normalization is still of some concern for $`N_C=3`$. To see how it might come about, we note that the $`a_4+a_6Q_1`$ term fortuitously cancels to within 10% for $`m_u+m_d=9.6`$ MeV. But if $`m_u+m_d=4.5`$ MeV for example, then $`a_4+a_6Q_1\simeq -a_6>0`$, which would push up $`\rho ^0\pi ^+`$ and $`\omega ^0\pi ^+`$ for $`\mathrm{cos}\gamma <0`$ (see Fig. 2(a)). Scaling up $`f_\pi A_0^{BV}`$ now by $`\sim 20`$–30% brings these rates above $`1\times 10^{-5}`$. For higher $`m_u+m_d`$ values a larger $`f_\pi A_0^{BV}`$ value is needed. The other possibility of scaling up $`f_VF_1^{B\pi }`$ runs against the (updated ) limit $`\varphi ^0K^+<0.59\times 10^{-5}`$, which is proportional to $`f_\varphi F_1^{BK}`$ in amplitude. This mode is also plotted in Fig. 2(a), and a slight reduction of $`f_\varphi F_1^{BK}`$ seems to be needed. The $`\varphi ^0K^+`$ rate is unaffected by $`m_{u,d,s}`$ since the $`\varphi ^0`$ vector meson cannot come from the spectator quark in $`B^+`$ decay.
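The near-cancellation can be checked with a two-line estimate. In the sketch below (Python), the effective coefficients $`a_4\approx -0.031`$ and $`a_6\approx -0.045`$ are typical values assumed for illustration, not the precise numbers behind the figures:

```python
# Illustrative check of the a_4 + a_6*Q_1 cancellation (masses in GeV;
# a_4 and a_6 are assumed typical effective coefficients).
m_pi, m_b, m_u = 0.14, 4.88, 0.0042
a4, a6 = -0.031, -0.045

for mu_plus_md in (0.0096, 0.0045):  # m_u + m_d = 9.6 MeV vs 4.5 MeV
    Q1 = -2.0 * m_pi**2 / ((m_b + m_u) * mu_plus_md)
    print(mu_plus_md, a4 + a6 * Q1)
# ~0.007 (near cancellation) for 9.6 MeV, but ~0.05 ~ -a_6 for 4.5 MeV,
# reproducing the trend described above.
```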
For $`\mathrm{cos}\gamma >0`$ and $`N_{\mathrm{eff}.}=3`$ ($`2`$) one expects the combined $`\rho ^\pm \pi ^{\mp }`$ (separating $`B^0`$ from $`\overline{B}^0`$ decay requires tagging) and $`\rho ^+\pi ^0`$ rates to be $`\sim 7`$ ($`4`$) and $`\sim 3`$ ($`2`$) times the $`\rho ^0\pi ^+`$ rate, respectively, which are very sizable. It is interesting that, while the $`\rho ^0\pi `$ rates are enhanced for $`\mathrm{cos}\gamma <0`$, the $`B\to \rho ^+\pi `$ rates are suppressed. Thus, lower $`\rho ^+\pi ^{-}/\rho ^0\pi ^+`$ and $`\rho ^+\pi ^0/\rho ^0\pi ^+`$ ratios would also suggest that $`\mathrm{cos}\gamma <0`$ is preferred. We plot these effects in Fig. 2(b), again for $`m_d=2m_u=`$ 3 and 6.4 MeV. Note that the $`B^0\to \rho ^+\pi ^{-}`$ mode is insensitive to $`m_{u,d}`$. The combined $`Br(B^0\to \rho ^\pm \pi ^{\mp })`$ is still likely to be over $`4`$ times larger than $`\rho ^0\pi ^+`$, and since the final state contains only one $`\pi ^0`$, it should be observed soon \[See Note Added.\].
Experimental sensitivities in $`\rho \pi `$, $`K^{*}\pi `$ and $`\rho K`$ modes are similar. With the $`\rho ^0\pi ^+`$ observation, a limit on $`K^{*0}\pi ^+`$ is also reported. The event yields suggest that $`K^{*0}\pi ^+>\rho ^0\pi ^+`$ is unlikely, which seems again at odds with factorization results for $`\mathrm{cos}\gamma >0`$. While too early to draw a conclusion, our earlier argument suggests that $`\rho ^0\pi ^+>K^{*0}\pi ^+`$ is possible for $`\mathrm{cos}\gamma <0`$, especially since $`K^{*0}\pi ^+`$ is insensitive to $`\gamma `$ and perhaps suppressed by $`f_{K^{*}}F_1^{B\pi }`$ like the $`\varphi K`$ mode. We plot all the $`K^{*}\pi `$ modes in Fig. 3(a). The $`\gamma `$ dependence is similar to the $`K\pi `$ modes of Fig. 1(a), but there is no sensitivity to $`m_s`$ since the $`K^{*}`$ is produced by vector currents. Thus, independent of $`m_s`$ and normalization, we predict that $`K^{*+}\pi ^{-}>K^{*+}\pi ^0\sim K^{*0}\pi ^+`$ \[See Note Added.\] for $`\mathrm{cos}\gamma <0`$, while $`K^{*0}\pi ^0`$ is $`\sim `$ a factor of two lower. In contrast, $`\gamma \sim 60^{\circ }`$–$`70^{\circ }`$ would give $`K^{*0}\pi ^+\gtrsim K^{*+}\pi ^{-}>K^{*+}\pi ^0\sim K^{*0}\pi ^0`$.
The $`\rho K`$ modes are analogous to $`K^{*}\pi `$ but with the vector meson coming from the spectator quark. The tree contribution is color suppressed, so the rates are very sensitive to the penguin combination $`a_4+a_6Q`$, where $`Q=-2m_K^2/[(m_b+m_q)(m_q+m_s)]`$. For $`m_s=105`$ MeV, this term again largely cancels. Together with smaller form factors, the $`\rho K`$ modes are in general much lower than the $`K^{*}\pi `$ modes, with $`\rho ^0K^0`$ the largest for $`\mathrm{cos}\gamma <0`$. The cancellation between $`a_4`$ and $`a_6`$, however, is less effective for larger $`m_s`$, which could enhance (suppress) the $`\rho K^+`$ ($`\rho K^0`$) modes considerably for $`\mathrm{cos}\gamma <0`$, as can be seen from Fig. 3(b). Thus, they could provide useful tests for $`m_s`$. Note that if the prominence of $`\rho ^0\pi ^+`$ is in part due to a larger $`A_0^{B\rho }`$, then some of the $`\rho K`$ modes could be $`\sim 0.5\times 10^{-5}`$. However, these modes are too sensitive to $`m_s`$ for one to make firm predictions.
For the very prominent $`\eta ^{\prime }K`$ modes, the $`g^{*}g\to \eta ^{\prime }`$ “anomaly” effect that seems to account for semi-inclusive $`B\to \eta ^{\prime }+X_s`$, though still controversial, has to be treated properly. However, we do not know how to treat the possible $`|\overline{s}gq\rangle `$ Fock component of the $`K`$ meson. Since in general penguins dominate, the rates are not very sensitive to $`\gamma `$, but one still has the nice feature that $`\eta ^{\prime }K^+`$ could be enhanced by 10–20% over $`\eta ^{\prime }K^0`$ for $`\mathrm{cos}\gamma <0`$.
Direct CP asymmetries ($`a_{\mathrm{CP}}`$) can arise via penguin absorptive parts. The $`K\pi `$ modes have been discussed elsewhere . The CP eigenstate $`\pi ^+\pi ^{-}`$ may have $`a_{\mathrm{CP}}\sim 15`$ (10)% for $`\mathrm{cos}\gamma <(>)0`$, opposite in sign to that of the $`K^{(*)}\pi `$ modes, and its measurement requires tagging . The $`a_{\mathrm{CP}}`$ for $`\pi ^+\pi ^0`$ is very small since the strong penguin is absent by isospin symmetry. The $`K^{*}\pi `$ and $`\rho \pi `$ modes are interesting since $`T/P`$ and $`P/T`$ are respectively of order 20–30%. As shown in Fig. 4, $`a_{\mathrm{CP}}`$s for $`\mathrm{cos}\gamma <0`$ would be smaller (larger) in $`K^{*+}\pi `$ and $`\rho \pi ^+`$ ($`\rho ^+\pi `$) compared to the $`\mathrm{cos}\gamma >0`$ case , and would again test our conjecture. The $`a_{\mathrm{CP}}`$s for $`K^{*0}\pi `$ are small, but like the $`K^0\pi `$ modes a sizable $`a_{\mathrm{CP}}`$ would signal the presence of FSI phases . The large $`a_{\mathrm{CP}}`$ in $`\rho ^0\pi ^0`$ corresponds to a very small rate and requires tagging to measure.
We offer some remarks before closing. First, as shown in Fig. 2(a), we are still unable to account for the $`\omega ^0K^+`$ rate . However, at the present level of statistics, and out of $`𝒪(10)`$ measurements or limits, having a problem or two is perhaps a virtue. Second, we have not discussed $`VV`$ modes. They in general depend on several $`BV`$ form factors, while their detection would likely come after prominent $`PP`$ and $`VP`$ modes. There is some indication for the $`\varphi K^{*}`$ mode , but being pure $`b\to s`$ penguin, it has little bearing on $`\gamma `$. Third, the electroweak penguins have been numerically included. They are in general less significant than varying $`\mathrm{cos}\gamma `$. Fourth, a larger $`a_2`$ (or lower $`N_{\mathrm{eff}.}`$) can enhance the $`h^+\pi ^0`$ ($`h=\pi `$, $`K`$, $`\rho `$ and $`K^{*}`$) and $`\rho ^0\pi ^+`$, $`\omega ^0\pi ^+`$ modes. Fifth, although we have kept a range for light quark masses, we note that for $`\mathrm{cos}\gamma <0`$, lower $`m_u`$, $`m_d`$ and $`m_s`$ values lead to interesting results such as further suppressing (enhancing) the $`\pi ^+\pi ^{-}`$ ($`\rho ^0\pi ^+`$ and $`\omega ^0\pi ^+`$) mode(s), but making the $`\rho K`$ modes difficult to predict. They also suggest the ordering $`K^+\pi ^{-}>K^0\pi ^+\sim K^+\pi ^0>K^0\pi ^0`$ for the $`K\pi `$ modes. Finally, it is surprising that factorization seems to account for present data if one simply changes $`\mathrm{cos}\gamma `$ from positive to negative, although the latter change runs against fits to KM matrix elements . That something as simple as factorization would work for rare hadronic B decays should be welcome, and it is further encouraging that the conjecture can be tested as more data unfold, where one can perhaps even contemplate making a more systematic fit to model parameters in the near future. If the $`\mathrm{cos}\gamma `$ value from such fits continues to be at odds with updated CKM fits, we may be in store for some exciting physics at the B Factories or elsewhere. For example, $`\mathrm{sin}2\beta `$ would be lower than the CKM fit prediction and more consistent with $`\mathrm{cos}\gamma <0`$, and $`B_s`$ mixing would be measured soon at the Tevatron and/or SLD, or else we may have new physics.
In conclusion, we find the surprising result that a simple change in sign for $`\mathrm{cos}\gamma `$ from current fit values can account for present rare B decay data within the factorization approximation. The size of the $`K\pi `$ modes and the newly observed $`\rho ^0\pi ^+`$ mode, the absence of $`\pi ^+\pi ^{-}`$ (perhaps below $`\pi ^+\pi ^0`$) etc., can all be due to having constructive rather than destructive tree-penguin interference, or vice versa. Prominence of $`\rho ^0\pi ^+`$ probably implies a larger $`A_0^{BV}`$ form factor, while absence of $`\varphi ^0K^+`$ suggests a smaller $`F_1^{BP}`$, which may also contribute to the absence of $`K^{*0}\pi ^+`$. Chief predictions for $`\mathrm{cos}\gamma <0`$ are: $`\rho ^0\pi ^+>\omega ^0\pi ^+`$, $`K^{*+}\pi ^{-}>K^{*0}\pi ^+`$, reduced but still prominent $`\rho ^+\pi ^{-}/\rho ^0\pi ^+`$ and $`\rho ^+\pi ^0/\rho ^0\pi ^+`$ ratios, and $`K^+\pi ^{-}>K^0\pi ^+`$ if $`m_s`$ is on the lighter end. One expects a lower $`\mathrm{sin}2\beta `$ value at B Factories compared to current fit results, and $`B_s`$ mixing close to present LEP bounds.
This work is supported in part by grants NSC 88-2112-M-002-033, NSC 88-2112-M-002-041 and NSC 88-2112-M-001-006 of the Republic of China. We thank J. Alexander, H.Y. Cheng, Y.S. Gao, W. Marciano, A. Soni, J. Smith, W. Sun, F. Würthwein and L. Wolfenstein for discussions. WSH thanks the Theory Group of Brookhaven National Lab for partial support, and K. Berkelman, G. Brandenburg and E. Thorndike for hospitality during frequent visits to the CLEO Collaboration at Cornell University.
Note Added.
After this work was posted, CLEO announced the measurement of $`Br(B\to \rho ^\pm \pi ^{\mp })=(3.5_{-1.0}^{+1.1}\pm 0.5)\times 10^{-5}`$ and $`Br(B\to K^{*+}\pi ^{-})=(2.2_{-0.6-0.5}^{+0.8+0.4})\times 10^{-5}`$, which further confirm our conjecture that $`\mathrm{cos}\gamma <0`$. The ratio $`\rho ^\pm \pi ^{\mp }/\rho ^0\pi ^+\sim 2.3`$ turns out to be less than the 4 which we had advocated. From hindsight, since $`𝒜(B^0\to \rho ^+\pi ^{-})\propto F_1^{B\pi }`$, this can be attributed to our observation that $`A_0^{BV}`$ is enhanced to account for the $`\rho ^0\pi ^+`$ rate, while $`F_1^{B\pi }`$ is suppressed as indicated by the $`\pi ^+\pi ^{-}`$ and $`\varphi K^+`$ nonobservation.
# On the Influence of the Environment in the Star Formation Rates of a Sample of Galaxies in Nearby Compact Groups
## 1 Introduction
There has been considerable discussion in the literature about the possibility that compact groups are chance alignments within looser groups or clusters, rather than genuinely dense aggregates of galaxies (Mamon 1995, Ostriker et al. 1995). The typical median value of the velocity dispersions of the well known Hickson Compact Groups is about 200 km sec<sup>-1</sup> (Hickson 1982); since this value is of the order of the typical rotational velocity of the disks of spirals, strong interaction effects would be expected between their members. In fact, Mendes de Oliveira & Hickson (1994) reported morphological signs of interactions for a large fraction of galaxies within compact groups, which was interpreted in favor of the bound system hypothesis. However, Walke & Mamon (1989) interpreted this result suggesting that a large number of binaries combined with projected background galaxies would give the same ratio of interactions as the one found for the compact groups.
Observations devoted to disentangling the real nature of compact groups have been carried out by several authors at different wavelength ranges: Optical (Moles et al. 1994, Pildis et al. 1995), far infrared (Hickson et al. 1989, Sulentic & de Mello Rabaça 1993, Venugopal 1995, Allam et al. 1996, Verdes-Montenegro et al. 1998), X rays (Ebeling et al. 1994, Saracco & Ciliegi 1995, Ponman et al. 1996), radio (Williams & Rood 1987, Menon 1995, Huchtmeier 1997). Nevertheless, the problem still remains unclear.
The study of the Star Formation Rates (SFRs) of the galaxies in compact groups could also supply information concerning their real nature. It is known that interactions between disk galaxies can modify their SFRs under some circumstances of spin alignment, velocity differences and impact parameter (Mihos et al. 1991). In particular, if the interaction is strong enough to morphologically disturb the disks, an enhancement in the SFR is expected. This theoretical result has been observationally confirmed for nearby pairs (Kennicutt et al. 1987, Laurikainen & Moles 1989) and for samples of peculiar galaxies (e.g. Larson & Tinsley 1978, Mazzarella et al. 1991). Moles et al. (1994), using broad band photometric data, performed a statistical study of the SFRs of a large sample of galaxies in compact groups, obtaining a slight enhancement compared to normal galaxies but with less star formation activity than paired galaxies. This result suggested that the star formation properties of compact groups of galaxies are not dominated by the effects of strong interactions.
However, broad band optical colors are only sensitive to the SFRs on time scales of the order of 10<sup>8</sup>yr. Better indicators of the evolution of the SFR on shorter timescales are the FIR luminosities (Telesco 1988, Mazzarella et al. 1991) and the luminosity of the hydrogen recombination lines (e.g. Kennicutt 1983). In this work, the SFRs of a sample of galaxies in compact groups from the “Catalog of Compact Groups of Galaxies” (Hickson 1982) were compared with those of a sample of field galaxies in order to study the influence of the compact group environment on the SFRs of galaxies. The star formation properties of the most interesting groups have already been studied in detail in previous papers (Iglesias-Páramo & Vílchez 1997a, 1997b, 1998). Also the complete sample of groups has been presented in a previous article (Vílchez & Iglesias-Páramo 1998a). In §2 we describe our sample of groups and all the information relevant to the acquisition and reduction of the data. The observational results of our work together with the implications on the nature of the compact groups are described in §3 and §4. Finally, §5 contains the conclusions of this work.
## 2 Data Reduction and Photometric Results
The sample of galaxies was previously described by Vílchez & Iglesias-Páramo (1998a), together with their H$`\alpha `$ images and the details of the acquisition and reduction of the data.
Table 1 shows the H$`\alpha `$ photometry of all the accordant<sup>1</sup><sup>1</sup>1Those galaxies with radial velocity within 1000 km sec<sup>-1</sup> of the median velocity of the group galaxies of our sample. The data are corrected for Galactic extinction following Burstein & Heiles (1984) and using a standard extinction law (Rieke & Lebofsky 1985). Column (2) shows the Galactic extinction in the $`B`$ band. Column (3) shows the absolute $`B`$ magnitude of the galaxies from de Vaucouleurs et al. (1991). Column (4) shows the H$`\alpha `$ luminosity expressed in erg sec<sup>-1</sup>. Upper limits correspond to $`3\sigma `$ over the sky level. Given that the FWHM of the filters is about 50Å, we could not avoid the contamination due to the \[Nii\] lines, so hereafter we will refer to H$`\alpha +`$\[Nii\] as H$`\alpha `$. Column (5) shows the observational uncertainty of the logarithm of the H$`\alpha `$ luminosity, computed as the quadratic sum of the observational plus the Poissonian errors. However, as commented above, there are additional sources of uncertainty in the H$`\alpha `$ luminosity: The internal extinction, the presence of the \[Nii\] lines, the emission of the galactic nuclei and the uncertainty in the continuum level. Young et al. (1996) estimated the total uncertainty due to these factors at $`\sim `$20% of the real value. Column (6) shows the equivalent width of H$`\alpha `$ expressed in Å. The equivalent width of H$`\alpha `$ was computed following this formula
$$EW(\mathrm{H}\alpha )=\frac{C_\alpha }{C_{cont}}\times W_f,$$
(1)
where $`C_\alpha `$ is the number of counts from the galaxy in the net H$`\alpha `$ frame, $`C_{cont}`$ is the number of counts of the galaxy in the scaled continuum frame and $`W_f`$ is the FWHM of the filter in Å. This gives a reasonable estimation of the total H$`\alpha `$ equivalent width, since we are including the whole galaxy.
Column (7) shows the present day SFR expressed in solar masses per year. This parameter only accounts for the stars heavier than $`10M_{}`$ and has been computed following Kennicutt (1983):
$$SFR(\geq 10M_{})=\frac{L(\mathrm{H}\alpha )}{7.02\times 10^{41}\mathrm{erg}\mathrm{sec}^{-1}}M_{}\mathrm{yr}^{-1}$$
(2)
assuming an IMF of the form:
$$\psi (m)\propto \{\begin{array}{cc}m^{-1.4}\hfill & (0.1\leq m\leq 1M_{})\hfill \\ m^{-2.5}\hfill & (1\leq m\leq 100M_{})\hfill \end{array}$$
(3)
Column (8) expresses the confidence level of the measured values for the H$`\alpha `$ luminosities: A value of 0 indicates that the galaxy was well isolated. A value of 1 indicates that the galaxy may have been slightly contaminated by light from a nearby star or a nearby companion. A value of 2 means that the flux of the galaxy was strongly contaminated by a saturated star or that strong difficulties arose in the determination of the sky. All the data are corrected for Galactic extinction but not for internal extinction.
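As a minimal sketch of Eqs. (1) and (2) (Python; the counts, flux, velocity and the pure-Hubble-flow distance are illustrative assumptions, not values from Table 1):

```python
import math

H0 = 100.0            # km s^-1 Mpc^-1 (pure Hubble flow)
MPC_IN_CM = 3.086e24  # cm per Mpc

def halpha_equivalent_width(c_alpha, c_cont, w_f=50.0):
    """Eq. (1): EW(Halpha) = (C_alpha / C_cont) * W_f, in Angstrom."""
    return c_alpha / c_cont * w_f

def halpha_luminosity(flux_cgs, velocity_kms):
    """L(Halpha) in erg s^-1 from a flux in erg s^-1 cm^-2, with d = v/H0."""
    d_cm = velocity_kms / H0 * MPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

def sfr_above_10msun(l_halpha):
    """Eq. (2): SFR in stars heavier than 10 Msun, in Msun yr^-1."""
    return l_halpha / 7.02e41

print(halpha_equivalent_width(1.2e4, 3.0e5))   # ~2 Angstrom for these toy counts
l_ha = halpha_luminosity(2.0e-13, 4000.0)      # toy flux and velocity
print(math.log10(l_ha), sfr_above_10msun(l_ha))
```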
Two of the galaxies, HCG44C and HCG92C, are classified as Seyfert galaxies (Huchra & Burg 1992). The H$`\alpha `$ luminosity listed in Table 1 accounts for the contribution due to the whole galaxy. However, for the subsequent analysis, we reject the nuclear point source contribution to the H$`\alpha `$ luminosity of these galaxies because the ionizing source responsible for the nuclear H$`\alpha `$ photons may well be non-thermal. The total nuclear contributions of HCG44C and HCG92C amount to 68% and 78% of the global H$`\alpha `$ luminosity respectively. Also, HCG68c was found to show LINER-type emission (Giuricin et al. 1990) in the nucleus. The nuclear emission in this galaxy amounts to 13% of the total H$`\alpha `$ emission. Galaxies HCG16a and HCG16b have been reported to show a LINER-type spectrum. However, it was shown by Vílchez & Iglesias-Páramo (1998a), that HCG16a exhibits a ring of circumnuclear emission but it does not show a nuclear H$`\alpha `$ emission region. Thus, we argue that this galaxy could have been misclassified as a LINER due to the low resolution of the spectrum analyzed or to the fact that the H$`\alpha `$ line could be absorbed by the underlying population of the bulge. The nuclear emission region in HCG16b contributes more than 90% to the total emission of this galaxy. For the subsequent analysis of the SFRs we will neglect the non-thermal contributions to the H$`\alpha `$ luminosities of the galaxies HCG16b, 44c, 68c and 92c.
## 3 Observational Results
### 3.1 H$`\alpha `$ Equivalent Widths
One of the tools available to study the SFRs of the galaxies in our sample is the H$`\alpha `$ equivalent width. It is well known that this parameter is related to the star formation history of a galaxy, i.e. to the ratio of the current to past SFR, and also to the high mass end of the IMF (e.g. Kennicutt et al. 1994 and references therein).
We have selected the sample of galaxies by Kennicutt & Kent (1983, hereafter KK83) to compare the distribution of the H$`\alpha `$ equivalent widths with the one for our sample. This sample of disk galaxies is the largest published with measured equivalent widths. For our purposes, galaxies belonging to the Virgo cluster have been removed from the KK83 sample. Also, we have removed the galaxies that were observed with an aperture smaller than the diameter of the galaxy. Figure 1 shows the histograms of the H$`\alpha `$ equivalent widths for both samples. The KK83 distribution is narrower than the compact group one. This effect could be due to the fact that the morphological compositions of the two samples are not homogeneous. However, the median values of the distributions are almost coincident: $`\mathrm{log}EW=1.41`$ and $`1.45`$ for the KK83 and the compact group samples respectively.
In Figure 2 we have also plotted the distribution of the H$`\alpha `$ equivalent widths for the galaxies in our sample binned by Hubble type. The vertical bars indicate the ranges covered by the galaxies in the KK83 sample for each bin. The Y axis also shows the birthrate parameter – defined in Kennicutt et al. (1994) as a parameterization of the star formation history of a disk galaxy –. There are no significant differences between the two samples, except for the extreme H$`\alpha `$ equivalent width measured for the Im merging galaxy in HCG31.
### 3.2 Present Day SFRs
As has been shown in the previous section, no statistical differences in the star formation history between the compact groups sample and the KK83 sample were found from their H$`\alpha `$ equivalent widths. In this section we try to shed light on the possible influence of the environment on the present day SFRs of the galaxies in compact groups as compared to a sample of field galaxies. For this purpose we have built an extended sample of disk galaxies from several sources in the literature: Young et al. (1996), KK83, Hunter & Gallagher (1986) and Miller & Hodge (1994). For galaxies with more than one estimation of the H$`\alpha `$ flux, the most recent was chosen.
The distances of the galaxies were derived from the Kraan-Korteweg Catalogue (1986), or assuming a pure Hubble flow with $`H_0=100`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> for those galaxies not included in this catalogue. The H$`\alpha `$ luminosities and absolute magnitudes were calculated using these distances. The data were corrected for Galactic extinction following Burstein & Heiles (1984). No further correction was applied to the H$`\alpha `$ fluxes so that we could compare them directly to our own data. However, as the amount of internal extinction is not expected to present large variations from a mean value for the two samples and taking into account that the main conclusions of our work are statistical, we claim that the effect of internal extinction will hardly affect the results of this paper.
We have discarded from the field sample those galaxies belonging to the Virgo Cluster and to any of the Abell clusters as well as paired galaxies, merged galaxies and galaxies presenting nearby satellites. Thus, our field sample is free of environmental effects. The fractional abundance of galaxies of each morphological type is very similar to that of the compact group sample, so that any effect that depends on the composition of the samples is excluded. Table 2 contains the main observational properties of the galaxies of the field sample. Columns (1) to (5) contain selected properties of the galaxies.
Given that the compact group and the field samples cover a large range in magnitudes and masses, we have normalized the H$`\alpha `$ luminosities to the $`B`$ luminosity in order to compare their current SFRs. Figure 3 shows $`L_{\mathrm{H}\alpha }`$ against the absolute $`B`$ magnitude for the galaxies of our sample – open squares – and the field sample – asterisks –. As this figure shows, no galaxies fainter than $`M_B=-14`$ are present in the compact group sample. This bias is mostly due to the selection criterion for compact groups which limits the magnitude difference between two galaxies in the group to 3 magnitudes. However, the high luminosity limits are similar for the two samples. Also, it can be seen that for magnitudes brighter than $`M_B=-17`$, the scatter in the distribution of the compact group galaxies is larger than for the field galaxies.
Figure 4 shows the corresponding histograms of $`L_{\mathrm{H}\alpha }/L_B`$ for both samples (the upper histogram shows the distribution for the compact group sample and the lower one shows the distribution for the field sample). The irregular galaxies of the two samples have been highlighted with grey bins. It can be seen that the irregular galaxies from the compact group sample extend over one order of magnitude whereas the irregulars from the field sample extend over almost two orders of magnitude in $`L_{\mathrm{H}\alpha }/L_B`$. This effect reflects the selection criterion mentioned above for Hickson compact groups. Thus, no low surface brightness irregulars are present in the compact groups of our sample. Both distributions show a maximum around $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B\sim -3`$. The histogram of compact groups also shows a secondary maximum at around $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B\sim -4`$ which is due to the existence of spiral galaxies with a low level of H$`\alpha `$ emission. However, although this secondary maximum cannot be claimed with a great statistical significance because of the small number of data points, it is clear that this feature does not have a counterpart in the histogram of field galaxies. The highest values of $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$ are also found in the compact group distribution. The median values of $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$ are $`-2.85`$ for the sample of galaxies in compact groups and $`-2.92`$ for the field sample. These values are not significantly different given the overall errors in $`L_{\mathrm{H}\alpha }/L_B`$. The significance level obtained was 0.57 after applying the Kolmogorov-Smirnov test to both distributions.
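The comparison just described is a standard two-sample Kolmogorov-Smirnov test; a minimal sketch follows (Python with scipy; the arrays are placeholders for the measured $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$ values, not the actual data):

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder values for log(L_Halpha / L_B) in the two samples.
log_ratio_groups = np.array([-2.4, -2.8, -3.1, -2.9, -4.0, -2.6, -3.0])
log_ratio_field = np.array([-2.7, -2.9, -3.0, -2.8, -3.2, -2.9, -3.1])

stat, p_value = ks_2samp(log_ratio_groups, log_ratio_field)
# A large significance level (0.57 in the text) means the hypothesis that
# both samples are drawn from the same distribution cannot be rejected.
print(stat, p_value)
```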
A further criterion to compare the SFRs of different samples of galaxies is the SFR per unit surface area of the galaxies. Given that for disk galaxies most of the SFR is concentrated in the disk, this should be a better indicator of the level of star formation, since for the earlier spirals the blue luminosity has a strong contribution from the bulge, where very little star formation activity normally occurs. Figure 5 shows the histogram of the present day SFRs per unit area for the galaxies of the compact group sample (a) and for the field sample (b). The grey bins in the plot correspond to the irregulars in both samples. The areas of the galaxies were parameterized according to the product of the lengths of the major and minor axes following de Vaucouleurs et al. (1991). The maxima of the distributions are slightly displaced: $`\mathrm{log}SFR/Area=-8.9M_{}\mathrm{yr}^{-1}\mathrm{pc}^{-2}`$ for the compact group sample and $`\mathrm{log}SFR/Area=-8.3M_{}\mathrm{yr}^{-1}\mathrm{pc}^{-2}`$ for the field sample. Moreover, the distribution corresponding to the compact group galaxies is less symmetric and broader than that of the field galaxies and seems to be weighted towards low SFRs per unit area. The distributions of the irregular galaxies in the two samples are quite different: the field one covers a range of about 3 orders of magnitude whereas the compact group one is restricted to the high SFRs per unit surface area, again due to the selection criterion mentioned above. The median values of the distributions are $`\mathrm{log}SFR/Area=-8.66M_{}\mathrm{yr}^{-1}\mathrm{pc}^{-2}`$ for the compact group sample and $`\mathrm{log}SFR/Area=-8.42M_{}\mathrm{yr}^{-1}\mathrm{pc}^{-2}`$ for the field sample. The significance level obtained was 0.50 after applying the Kolmogorov-Smirnov test. The present day SFR per unit surface area was found to be enhanced in samples of interacting galaxies (Bushouse 1987, Kennicutt et al. 1987, Laurikainen & Moles 1989). However, the same result does not hold for the galaxies in our compact group sample. Concerning the shape of the distributions, the sample of compact groups shows an excess of low H$`\alpha `$ luminosity spirals which does not have a counterpart in the distribution for the field galaxies.
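The surface-density normalization used here is straightforward; a sketch (Python; the optical axes, distance and small-angle conversion are illustrative assumptions):

```python
import math

def log_sfr_per_area(sfr_msun_yr, major_arcmin, minor_arcmin, distance_mpc):
    """log10 of SFR per unit area in Msun yr^-1 pc^-2, with the area
    parameterized by the product of the optical major and minor axes."""
    pc_per_arcmin = distance_mpc * 1.0e6 * math.pi / (180.0 * 60.0)
    area_pc2 = (major_arcmin * pc_per_arcmin) * (minor_arcmin * pc_per_arcmin)
    return math.log10(sfr_msun_yr / area_pc2)

print(log_sfr_per_area(1.0, 3.0, 2.0, 20.0))  # ~ -8.3 for these toy numbers
```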
The main result found in this section is that there is no global enhancement either in the star formation histories of the galaxies in the compact groups sample or in their present day SFRs, when compared to galaxies of the field sample. This suggests that although interactions among galaxies in compact groups are expected to occur, their characteristic signature should be different from a straight enhancement of the SFR of the galaxies involved.
### 3.3 Total H$`\alpha `$ Luminosity of the Compact Groups
In this section we analyze the total H$`\alpha `$ luminosity of the groups normalized to their $`B`$ luminosity. In the previous section we have found that the individual disk galaxies in our sample do not show a significant enhancement in the present day SFR with respect to the galaxies in our field sample. An enhancement in the normalized H$`\alpha `$ luminosity was found, however, for the early-type galaxies of the compact group sample (see Vílchez & Iglesias-Páramo 1998b) compared to the sample of field early-type galaxies. This enhancement was attributed to the accretion of gas by the early-type galaxies in the groups from the outer envelopes of gas-rich galaxies during close passages. In this section we will test whether there is any environmental effect related to the dynamical state of the groups which affects the total H$`\alpha `$ emission of a given group.
For this purpose, we built synthetic groups composed of galaxies taken from the field sample. We have added to our field sample the sample of field ellipticals and lenticulars listed in Vílchez & Iglesias-Páramo (1998b) in order to cover all the morphological types. For each real group of our sample a set of synthetic groups was constructed with the same number of galaxies and fractional abundance of morphological types as the real group, and restricted by the condition that the maximum difference in absolute magnitude of the members of the synthetic groups is three, following the selection criterion imposed by Hickson (1982). Using these criteria, for each real group of our sample we selected all the possible synthetic groups of galaxies from the field sample. Then, the ratio $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$ was computed for all the groups. The final result is plotted in Figure 6. This Figure shows the ratio $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$ for the groups of our sample represented by open squares. The median value of this ratio for the corresponding synthetic groups is indicated by open triangles. Vertical error bars correspond to one standard deviation in $`\mathrm{log}L_{\mathrm{H}\alpha }/L_B`$.
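A Monte Carlo variant of this construction is easy to write down (Python; the dictionary-based data layout and the random draws are illustrative choices — the actual analysis enumerated all possible synthetic groups):

```python
import random

def synthetic_ratios(real_group_types, field, n_draws=1000):
    """Draw synthetic groups from the field sample with the same size and
    morphological mix as a real group, keeping only draws whose members
    span at most 3 mag in M_B (Hickson's criterion). Each field galaxy is
    a dict with keys 'type', 'M_B', 'L_Ha', 'L_B' (assumed layout)."""
    by_type = {}
    for gal in field:
        by_type.setdefault(gal['type'], []).append(gal)
    ratios = []
    while len(ratios) < n_draws:
        trial = [random.choice(by_type[t]) for t in real_group_types]
        mags = [gal['M_B'] for gal in trial]
        if max(mags) - min(mags) <= 3.0:        # selection criterion
            total_ha = sum(gal['L_Ha'] for gal in trial)
            total_b = sum(gal['L_B'] for gal in trial)
            ratios.append(total_ha / total_b)   # total L_Ha / L_B of the group
    return ratios
```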
As can be seen, 6 of the groups – HCG16, 31, 54, 92, 93 and 100 – show a $`L_{\mathrm{H}\alpha }/L_B`$ ratio higher than $`1\sigma `$ above the median values of the distribution of synthetic groups, 4 of the groups – HCG7, 30, 44 and 61 – show a $`L_{\mathrm{H}\alpha }/L_B`$ ratio lower than $`1\sigma `$ below the median values and 6 of the groups – HCG2, 23, 37, 68, 79 and 95 – lie within $`1\sigma `$ of the median values of the synthetic groups. Of the 6 groups that lie more than $`1\sigma `$ above the median values of the synthetic groups, 5 of them – HCG16, 31, 54, 92 and 100 – lie even above the highest value reached by any of the corresponding synthetic groups. These 5 groups are composed by nearly 100% of spirals or irregulars. On the contrary, all those groups that lie lower than $`1\sigma `$ below the median values contain a lower fraction of late-type galaxies.
Four of the groups of our sample have a first-ranked<sup>2</sup><sup>2</sup>2Brightest galaxy in the group elliptical – HCG37, 79, 93 and 95 – whereas two of them have a first-ranked S0 – HCG61 and 68 –. These groups do not show any preference for a value of $`L_{\mathrm{H}\alpha }/L_B`$ above or below the median value of the synthetic groups. Thus, although the ellipticals in the groups show a higher value of $`L_{\mathrm{H}\alpha }/L_B`$ than the field ellipticals, the presence of a bright elliptical in the group is not the key factor controlling the total $`L_{\mathrm{H}\alpha }/L_B`$ of the group.
There are 4 groups in the compact group sample that contain interacting pairs showing clear tidal tails: HCG16, 31, 92 and 95. Three of these groups show $`L_{\mathrm{H}\alpha }/L_B`$ $`1\sigma `$ above the median value of the synthetic groups. Thus it seems that when interactions are strong enough to disrupt the disks of the galaxies in such a way that tidal features are developed, a clear enhancement in the present day SFR of at least one of the galaxies involved is observed.
It can be expected that the H$`\alpha `$ emission of the groups may be related to their gaseous content. Several authors have studied the Hi content of the HCGs and have very recently generated an almost complete database for the catalogue. The Hi masses for all the groups of our sample are available in the literature (see Williams & Rood 1987, Huchtmeier 1997). With these published data we plotted the ratio $`L_{\mathrm{H}\alpha }/L_B`$ against $`M`$(Hi)/$`L_B`$ for the groups of our sample in Figure 7, and we found a slight correlation in the expected sense that the H$`\alpha `$ luminosity increases with the Hi content (both relative to the total $`B`$ luminosity). However, the data show a large scatter for the groups with a higher content of Hi. This means that Hi is necessary for a high SFR, but some groups with a high level of Hi show a low value of $`L_{\mathrm{H}\alpha }/L_B`$ and thus a low SFR.
The molecular gas content of disk galaxies is known to be a better indicator of the present day star formation activity of galaxies. Perea et al. (1997) carried out a program devoted to studying the relationship between the quantity of molecular gas and the $`B`$ luminosity for isolated galaxies and they found a nonlinear dependence between these two quantities for spiral galaxies spanning 4 orders of magnitude. Furthermore, Verdes-Montenegro et al. (1998) found that the CO and FIR properties of a distance limited complete sample of Hickson compact group galaxies were surprisingly similar to those of isolated spirals. This result agrees well with our finding that the present day SFRs of the compact group galaxies are on average indistinguishable from those of the field galaxies.
As compact groups are supposed to be dynamical entities, and given that some interactions are known to enhance the SFRs of disk galaxies, one might expect some connection between the dynamical properties of the groups and the SFRs of the member galaxies. In order to explore any possible relationship between the global SFR of the groups and their dynamical state, we have plotted the ratio $`L_{\mathrm{H}\alpha }/L_B`$ against four different dynamical parameters of the groups. Figure 8 shows the ratio $`L_{\mathrm{H}\alpha }/L_B`$ plotted against (a) the velocity dispersion, (b) the dimensionless crossing time, (c) the median projected separation and (d) the mass-to-light ratio. No correlation with the $`L_{\mathrm{H}\alpha }/L_B`$ luminosities was found in any of the four plots.
As plot (a) shows, the range in velocity dispersion covered by the groups in our sample is around 200 km sec<sup>-1</sup>, which is of the order of the typical values for the rotation velocity of disk galaxies. This fact implies that the damage produced by the interactions should be maximal under these conditions. However, the lack of correlation with the normalized H$`\alpha `$ luminosity means that some other parameter governing the interaction plays a role in the regulation of the star formation of galaxies. The range of crossing times covered by the groups of our sample is very extended, as plot (b) shows, and again no correlation was found with the normalized H$`\alpha `$ luminosity. The number of interactions is related to the crossing time in the sense that the larger the crossing time, the fewer interactions occur. Thus, this result means that the number of interactions that a galaxy is likely to experience is not an important factor that controls the SFRs of the galaxies in the groups. The median projected separations of the groups are shown in plot (c). As this plot shows, there is no correlation between this parameter and the normalized H$`\alpha `$ luminosity. This result suggests that the impact parameter alone does not regulate the star formation induced by interactions. Finally, plot (d) shows the mass to luminosity ratio of the groups against the normalized H$`\alpha `$ luminosity. The absence of correlation between these two quantities means that the total amount of dark matter is not a key parameter controlling the SFRs of the galaxies in the group. The overall result of this analysis is that the dynamical parameters considered above have no influence on the SFR of the system as a whole.
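A non-parametric rank correlation is a natural way to quantify the "no correlation" statement for each panel; a sketch (Python with scipy; the arrays are random placeholders for the 16 groups, not the measured values):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
log_lha_lb = rng.normal(-3.0, 0.5, 16)   # placeholder group ratios
sigma_v = rng.uniform(50.0, 250.0, 16)   # placeholder dispersions, km/s

rho, p = spearmanr(log_lha_lb, sigma_v)
# |rho| near 0 with a large p-value corresponds to the absence of
# correlation reported for all four dynamical parameters.
print(rho, p)
```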
Thus, in this section we have seen that the total normalized H$`\alpha `$ luminosities of the compact groups are on average very similar to those computed for the sample of synthetic groups of field galaxies. Only when interactions are strong enough to develop tidal tails is the level of H$`\alpha `$ emission enhanced in the compact groups with respect to the synthetic ones.
## 4 Discussion
In the previous sections, we have shown that the present day SFRs of the disk galaxies of our sample of compact groups are not enhanced on average with respect to the values measured for the field sample. However, the distribution of normalized H$`\alpha `$ luminosities is broadened compared to that of the field sample, meaning that the group environment is modifying the star formation properties of the galaxies in the groups. In particular, when tidal features are present, the galaxies in the groups tend to present the highest values of their present day SFR.
As demonstrated by the study of Mendes de Oliveira and Hickson (1994), and as can easily be inferred from the values of the relevant dynamical parameters of the groups listed in the previous section, it is clear that galaxy interactions are ubiquitous in most of the HCGs. Thus, from the environmental point of view, interactions are expected to be one of the most efficient agents affecting star formation in group galaxies. However, what is not straightforward is predicting the actual effect of interactions on the present day SFR. Published simulations of induced star formation and interactions (Olson & Kwan 1990, Mihos et al. 1991) predict that only in some cases is the SFR expected to be enhanced during this process, whereas the same simulations predict that the SFR could even be depleted for given combinations of parameters describing the interaction. The results found in this work agree with these predictions, in the sense that we can explain the broadening of the normalized H$`\alpha `$ luminosity as the result of the modification of the SFR due to interactions. The galaxies very luminous in H$`\alpha `$ present in some of our groups would correspond to the privileged cases of interaction in which the SFR is enhanced.
Our result also agrees with the previous finding by Hashimoto et al. (1998), who found that the highest levels of star formation are more prevalent in the intermediate environment of poor clusters than in either the field or rich clusters. Tidal features are unlikely to be produced during interactions in rich clusters because their velocity dispersions are around 750 km sec<sup>-1</sup> (Zabludoff et al. 1990), that is, several times higher than for compact groups, thus preventing disrupting interactions between galaxies. For the field galaxies, interactions are not likely to happen because of the lack of neighboring galaxies.
Further information can be extracted combining the Hi content of the spirals and the H$`\alpha `$ luminosities of the early-type galaxies in our sample of compact groups. Williams & Rood (1987) and Huchtmeier (1997) reported that most of the spirals in the Hickson compact groups are Hi deficient. This result can be explained assuming that most interactions between galaxies just produce harassment in their external halos. The depletion of the gaseous halos would result in no changes or an inhibition of the SFR of the galaxies. The gas lost by the disk galaxies could be accreted by the more massive ellipticals and lenticulars, where it could result in some new extra star formation after compression during the accretion process. This would explain the enhancement in the H$`\alpha `$ luminosity found for the early-type galaxies of this sample (Vílchez & Iglesias-Páramo, 1998b). However, in order to establish this hypothesis, more observational data on early-type galaxies are required.
Concerning the dynamical nature of the compact groups of galaxies, the order of magnitude of their lifetimes still remains unclear: Barnes (1989) showed that if we assume that the dark matter is attached to the individual galaxies, the lifetimes of compact groups of galaxies should be no longer than a few crossing times. The final fate of the groups would be a massive elliptical galaxy originated by merging of the initial galaxies. But, if this were the case, we should see more merger remnants with strongly enhanced SFRs instead of a majority of normal groups showing present day SFRs similar to the field galaxies, as is actually observed. This problem can be resolved by assuming that the bulk of dark matter is mostly arranged in a central halo and shared by all the galaxies in the group. The recent simulations by Gómez-Flechoso (1997) indicate that if the galaxies are immersed in a dark matter envelope, the lifetime of the group could be as large as $`10^{10}`$yr without substantial changes in the structure of the galaxies except in their outer parts. She made numerical simulations of a compact group of four galaxies, assuming that dark matter was mostly placed in a common halo and assuming that the galaxies are aggregates of particles with velocity dispersions between 100 and 200 km sec<sup>-1</sup>. She found that the sizes of the galaxies are reduced by less than 30% after $`10^{10}`$yr and that the aspect of the groups remained almost similar to the initial one. The assumption that galaxies are composed systems, rather than point particles, coupled with a new mathematical formulation of the dynamical friction, can enlarge the timescales to as much as double the classical estimates.
Finally, assuming that the lifetimes of compact groups are so large, we can still address the question of the morphological transformation of spirals into spheroidals in clusters of galaxies, proposed by Moore et al. (1998). These authors suggest that after approximately 5 Gyr, the spirals in clusters transform into spheroidal systems due to encounters with bright galaxies and the cluster’s tidal field. The transformation is mostly due to the cumulative effect of encounters at speeds of several thousands of km sec<sup>-1</sup>. However, the typical velocity dispersions in compact groups, of the order of 200 km sec<sup>-1</sup>, make the probability of high velocity encounters very low, and thus there is little room for galaxy transformation via harassment. In fact, although it is known that the fraction of spirals in compact groups is lower than in the field (Hickson 1982), to date no faint spheroidals have been reported in compact groups.
## 5 Conclusions
In this paper we have studied the effects of the environment on the SFRs of a sample of disk galaxies in compact groups, using H$`\alpha `$ equivalent widths and luminosities to derive their star formation histories and present day SFRs respectively. A direct comparison with a sample of field galaxies has yielded the following results:
The present day SFRs of the galaxies in compact groups are slightly modified by the environment, but there is no overall enhancement with respect to the field galaxies. This finding agrees with previous results from theoretical simulations that only some kinds of interactions are able to enhance the SFR of the galaxies involved, whereas the rest keep it unchanged or even tend to inhibit it.
The total H$`\alpha `$ luminosity of the groups relative to their $`B`$ luminosity is not significantly enhanced compared to the values of a sample of synthetic groups of field galaxies. In fact, most of the groups show normalized H$`\alpha `$ luminosities below or equal to the ones shown by their corresponding synthetic groups. The $`L_{\mathrm{H}\alpha }/L_B`$ ratio is slightly correlated with the $`M`$(Hi)/$`L_B`$ ratio in the sense that the groups that show a high level of $`L_{\mathrm{H}\alpha }/L_B`$ also have a high content of Hi. However, some of the groups show a high content of Hi but quite a low level of $`L_{\mathrm{H}\alpha }/L_B`$. No clear correlations were found between the total $`L_{\mathrm{H}\alpha }/L_B`$ ratio of the groups and several relevant dynamical parameters, a fact that suggests that the exact dynamical state of the groups does not control the SFRs of the member galaxies.
Finally, our results appear compatible with a scenario for compact groups of galaxies in which galaxies are embedded in a dark matter halo that enlarges the lifetime of the group, preventing the galaxies from rapid merging and collapse, which would otherwise result in significantly enhanced SFRs for the observed compact groups.
We thank John Beckman for a careful reading of the last version of this document and for interesting comments and suggestions. Thanks must also be given to Mariángeles Gómez-Flechoso for interesting comments about dynamical friction. The INT and the JKT are operated on the island of La Palma by the RGO in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. The 2.2m Telescope is operated by the MPIA in the Spanish Observatorio de Calar Alto. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# 1 INTRODUCTION
## 1 INTRODUCTION
I would like to begin this talk by asking a very simple question: Did the Universe start “small”? The naive answer is: Yes, of course! However, a serious answer can only be given after defining the two keywords in the question: What do we mean by “start”? and What is “small” relative to? In order to be on the safe side, let us take the “initial” time to be a bit larger than Planck’s time, $`t_P\sim 10^{-43}\mathrm{s}`$. Then, in standard Friedmann–Robertson–Walker (FRW) cosmology, the initial size of the (presently observable) Universe was about $`10^{-2}\mathrm{cm}`$. This is of course tiny w.r.t. its present size ($`10^{28}\mathrm{cm}`$), yet huge w.r.t. the horizon at that time, i.e. w.r.t. $`l_P=ct_P\sim 10^{-33}\mathrm{cm}`$. In other words, a few Planck times after the big bang, our observable Universe consisted of $`(10^{30})^3=10^{90}`$ Planckian-size, causally disconnected regions.
More precisely, soon after $`t=t_P`$, the Universe was characterized by a huge hierarchy between its Hubble radius and inverse temperature on one side, and its spatial-curvature radius and homogeneity scale on the other. The relative factor of (at least) $`10^{30}`$ appears as an incredible amount of fine-tuning on the initial state of the Universe, corresponding to a huge asymmetry between time and space derivatives. Was this asymmetry really there? And if so, can it be explained in any more natural way?
It is well known that a generic way to wash out inhomogeneities and spatial curvature consists in introducing, in the history of the Universe, a long period of accelerated expansion, called inflation . This still leaves two alternative solutions: either the Universe was generic at the big bang and became flat and smooth because of a long post-bangian inflationary phase; or it was already flat and smooth at the big bang as the result of a long pre-bangian inflationary phase.
Assuming, dogmatically, that the Universe (and time itself) started at the big bang, leaves only the first alternative. However, that solution has its own problems, in particular those of fine-tuned initial conditions and inflaton potentials. Besides, it is quite difficult to base standard inflation on the only known candidate theory of quantum gravity, superstring theory. Rather, as we shall argue, superstring theory gives strong hints in favour of the second (pre-big bang) possibility through two of its very basic properties, the first in relation to its short-distance behaviour, the second from its modifications of General Relativity even at large distance. Let us briefly comment on both.
## 2 (Super)String inspiration
### 2.1 Short Distance
Since the classical (Nambu–Goto) action of a string is proportional to the area $`A`$ of the surface it sweeps, its quantization must introduce a quantum of length $`\lambda _s`$ through:
$$S/\hbar =A/\lambda _s^2.$$
(1)
This fundamental length, replacing Planck’s constant in quantum string theory , plays the role of a minimal observable length, of an ultraviolet cut-off. Thus, in string theory, physical quantities are expected to be bound by appropriate powers of $`\lambda _s`$, e.g.
$`H^2\sim R\sim G\rho <\lambda _s^{-2}`$
$`k_BT/\hbar <c\lambda _s^{-1}`$
$`R_{comp}>\lambda _s.`$ (2)
In other words, in quantum string theory (QST), relativistic quantum mechanics should solve the singularity problems in much the same way as non-relativistic quantum mechanics solved the singularity problem of the hydrogen atom by putting the electron and the proton a finite distance apart. By the same token, QST gives us a rationale for asking daring questions such as: What was there before the big bang? Certainly, in no other present theory, can such a question be meaningfully asked.
### 2.2 Large Distance
Even at large distance (low-energy, small curvatures), superstring theory does not automatically give Einstein’s General Relativity. Rather, it leads to a scalar-tensor theory of the JBD variety. The new scalar particle/field $`\varphi `$, the so-called dilaton, is unavoidable in string theory, and gets reinterpreted as the radius of a new dimension of space in so-called M-theory . By supersymmetry, the dilaton is massless to all orders in perturbation theory, i.e. as long as supersymmetry remains unbroken. This raises the question: Is the dilaton a problem or an opportunity? My answer is that it is possibly both; and while we can try to avoid its potential dangers, we may try to use some of its properties to our advantage … Let me discuss how.
In string theory $`\varphi `$ controls the strength of all forces, gravitational and gauge alike. One finds, typically:
$$l_P^2/\lambda _s^2\sim \alpha _{gauge}\sim e^\varphi ,$$
(3)
showing the basic unification of all forces in string theory and the fact that, in our conventions, the weak-coupling region coincides with $`\varphi \ll -1`$. In order not to contradict precision tests of the Equivalence Principle and of the constancy of the gauge and gravitational couplings in the recent past (possibly meaning several million years!) we require the dilaton to have a mass and to be frozen at the bottom of its own potential today. This does not exclude, however, the possibility of the dilaton having evolved cosmologically (after all the metric did!) within the weak coupling region where it was practically massless. The amazing (yet simple) observation is that, by so doing, the dilaton may have inflated the Universe!
A simplified argument, which, although not completely accurate, captures the essential physical point, consists in writing the ($`k=0`$) Friedmann equation:
$$3H^2=8\pi G\rho ,$$
(4)
and in noticing that a growing dilaton (meaning through (3) a growing $`G`$) can drive the growth of $`H`$ even if the energy density of standard matter decreases in an expanding Universe. This new kind of inflation (characterized by growing $`H`$ and $`\varphi `$) has been termed dilaton-driven inflation (DDI). The basic idea of pre-big bang cosmology is thus illustrated in Fig. 1: the dilaton started at very large negative values (where it is massless), ran over a potential hill, and finally reached, sometime in our recent past, its final destination at the bottom of its potential ($`\varphi =\varphi _0`$). Incidentally, as shown in Fig. 1, the dilaton of string theory can easily roll up (rather than down) potential hills, as a consequence of its non-standard coupling to gravity.
DDI is not just possible. It exists as a class of (lowest-order) cosmological solutions thanks to the duality symmetries of string cosmology. Under a prototype example of these symmetries, the so-called scale-factor duality , a FRW cosmology evolving (at lowest order in derivatives) from a singularity in the past is mapped into a DDI cosmology going towards a singularity in the future. Of course, the lowest order approximation breaks down before either singularity is reached. A (stringy) moment away from their respective singularities, these two branches can easily be joined smoothly to give a single non-singular cosmology, at least mathematically. Leaving aside this issue for the moment (see Section V for more discussion), let us go back to DDI. Since such a phase is characterized by growing coupling and curvature, it must itself have originated from a regime in which both quantities were very small. We take this as the main lesson/hint to be learned from low-energy string theory by raising it to the level of a new cosmological principle of “Asymptotic Past Triviality”.
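To make the duality map explicit, one can quote the standard four-dimensional vacuum example from the string cosmology literature (shown here for illustration rather than derived in this talk). Scale-factor duality acts on a spatially flat background as

$$a\to \frac{1}{a},\varphi \to \varphi -6\mathrm{ln}a,$$

and, combined with time reversal $`t\to -t`$, it maps the decelerated expanding solution $`a(t)\propto t^{1/\sqrt{3}}`$ into the accelerated, growing-dilaton DDI branch

$$a(t)=(-t)^{-1/\sqrt{3}},e^\varphi =(-t)^{-(\sqrt{3}+1)},t<0,$$

for which both $`H=\dot{a}/a`$ and $`\varphi `$ grow as $`t\to 0^{-}`$.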
## 3 Asymptotic Past Triviality
The concept of Asymptotic Past Triviality (APT) is quite similar to that of “Asymptotic Flatness”, familiar from General Relativity . The main differences consist in making only assumptions concerning the asymptotic past (rather than future or space-like infinity) and in the additional presence of the dilaton. It seems physically (and philosophically) satisfactory to identify the beginning with simplicity (see e.g. entropy-related arguments concerning the arrow of time). What could be simpler than a trivial, empty and flat Universe? Nothing of course! The problem is that such a Universe, besides being uninteresting, is also non-generic. By contrast, asymptotically flat/trivial Universes are initially simple, yet generic in a precise mathematical sense. Their definition involves exactly the right number of arbitrary “integration constants” (here full functions of three variables) to describe a general solution (one with some general, qualitative features, though). This is why, by its very construction, this cosmology cannot be easily dismissed as being fine-tuned.
It is useful to represent the situation in a Carter–Penrose diagram, as in Fig. 2. Here past infinity consists of two pieces: time-like past infinity, which is shrunk to a point $`I^{-}`$, and past null-infinity, $`\mathcal{I}^{-}`$, represented by a line at $`45`$ degrees. Note that this region of the diagram is “non-physical” in FRW cosmology, since it lies behind (i.e. before) the big bang singularity (also shown in the diagram). Instead, we shall be giving initial data infinitesimally close to $`I^{-}`$ and $`\mathcal{I}^{-}`$, and ask whether they will evolve in such a way as to generate a physically interesting big bang-like state at some later time. Generating so much from so little looks a bit like a miracle. However, we will argue that it is precisely what should be expected, owing to well-known classical and quantum gravitational instabilities.
## 4 Inflation as a classical gravitational instability
The assumption of APT entitles us to treat the early history of the Universe through the classical field equations of the low-energy (because of the small curvature) tree-level (because of the weak coupling) effective action of string theory. For simplicity, we will illustrate here the simplest case of the gravi-dilaton system already compactified to four space-time dimensions. Other fields and extra dimensions will be mentioned below, when we discuss observable consequences. The (string frame) effective action then reads:
$$\mathrm{\Gamma }_{eff}=\frac{1}{2\lambda _s^2}\int d^4x\sqrt{-g}\,e^{-\varphi }\left(R+\partial _\mu \varphi \partial ^\mu \varphi \right).$$
(5)
In this frame, the string-length parameter $`\lambda _s`$ is a constant and the same is true of the curvature scale at which we have to supplement eq. (5) with corrections. Similarly, string masses, when measured with the string metric, are fixed, while test strings sweep geodesic surfaces with respect to that metric. For all these reasons, even if we will allow metric redefinitions in order to simplify our calculations, we shall eventually turn back to the string frame for the physical interpretation of the results. We stress, however, that, while our intuition is not frame independent, physically measurable quantities are.
Even assuming APT, the problem of determining the properties of a generic solution to the field equations implied by (5) is a formidable one. Very luckily, however, we are able to map our problem into one that has been much investigated, both analytically and numerically, in the literature. This is done by going to the so-called “Einstein frame”. For our purposes, it simply amounts to the field redefinition
$$g_{\mu \nu }=g_{\mu \nu }^{(E)}e^{\varphi -\varphi _0},$$
(6)
in terms of which (5) becomes:
$$\mathrm{\Gamma }_{eff}=\frac{1}{2l_P^2}\int d^4x\sqrt{-g^{(E)}}\left(R^{(E)}-\frac{1}{2}g_{(E)}^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi \right),$$
(7)
where $`\varphi _0`$ is the present value of the dilaton and $`l_P=\lambda _se^{\varphi _0/2}`$ is the present value of Planck’s length.
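For completeness, here is a sketch of the algebra (standard conformal-rescaling identities in four dimensions, up to signature conventions): writing (6) as $`g_{\mu \nu }=e^{2\omega }g_{\mu \nu }^{(E)}`$ with $`2\omega =\varphi -\varphi _0`$, one has $`\sqrt{-g}=e^{4\omega }\sqrt{-g^{(E)}}`$ and $`R=e^{-2\omega }\left(R^{(E)}-6\Box _{(E)}\omega -6(\partial \omega )_{(E)}^2\right)`$, so that

$$\sqrt{-g}\,e^{-\varphi }\left[R+(\partial \varphi )^2\right]=e^{-\varphi _0}\sqrt{-g^{(E)}}\left[R^{(E)}-3\Box _{(E)}\varphi -\frac{3}{2}(\partial \varphi )_{(E)}^2+(\partial \varphi )_{(E)}^2\right].$$

Because the overall factor $`e^{-\varphi _0}`$ is constant, the $`\Box _{(E)}\varphi `$ term is a total derivative and drops out of the action, leaving $`R^{(E)}-\frac{1}{2}(\partial \varphi )_{(E)}^2`$; the prefactor $`\frac{1}{2\lambda _s^2}e^{-\varphi _0}=\frac{1}{2l_P^2}`$ then reproduces (7).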
Our problem is thus reduced to that of studying a massless scalar field minimally coupled to gravity. Such a system has been considered by many authors, in particular by Christodoulou , precisely in the regime of interest to us. In line with the APT postulate, in the analogue gravitational collapse problem, one assumes very “weak” initial data with the aim of finding under which conditions gravitational collapse later occurs. Gravitational collapse means that the (Einstein) metric (and the volume of 3-space) shrinks to zero at a space-like singularity. However, typically, the dilaton blows up at that same singularity. Given the relation (6) between the Einstein and the (physical) string metric, we can easily imagine that the latter blows up near the singularity as implied by DDI.
How generically does this happen? In this connection it is crucial to recall the singularity theorems of Hawking and Penrose , which state that, under some general assumptions, singularities are inescapable in GR. One can look at the validity of those assumptions in the case at hand and finds that all but one are automatically satisfied. The only condition to be imposed is the existence of a closed trapped surface (a closed surface from where future light cones lie entirely in the region inside the surface). Rigorous results show that this condition cannot be waived: sufficiently weak initial data do not lead to closed trapped surfaces, to collapse, or to singularities. Sufficiently strong initial data do. But where is the border-line? This is not known in general, but precise criteria do exist for particularly symmetric space-times, e.g. for those endowed with spherical symmetry. However, no matter what the general collapse/singularity criterion will eventually turn out to be, we do know that:
* it cannot depend on an over-all additive constant in $`\varphi `$;
* it cannot depend on an over-all multiplicative factor in $`g_{\mu \nu }`$.
This is a simple consequence of the invariance (up to an over-all factor) of the effective action (7) under shifts of the dilaton and rescaling of the metric (these properties depend crucially on the validity of the tree-level low-energy approximation and on the absence of a cosmological constant).
We conclude that, generically, some regions of space will undergo gravitational collapse, will form horizons and singularities therein, but nothing, at the level of our approximations, will be able to fix either the size of the horizon or the value of $`\varphi `$ at the onset of collapse. When this is translated into the string frame, one is describing, in the region of space-time within the horizon, a period of DDI in which both the initial value of the Hubble parameter and that of $`\varphi `$ are left arbitrary. These two initial parameters are very important, since they determine the range of validity of our description. In fact, since both curvature and coupling increase during DDI, at some point the low-energy and/or tree-level description is bound to break down. The smaller the initial Hubble parameter (i.e. the larger the initial horizon size) and the smaller the initial coupling, the longer we can follow DDI through the effective action equations and the larger the number of reliable e-folds that we shall gain.
This does answer, in my opinion, the objections raised recently to the PBB scenario according to which it is fine-tuned. The situation here actually resembles that of chaotic inflation . Given some generic (though APT) initial data, we should ask which is the distribution of sizes of the collapsing regions and of couplings therein. Then, only the “tails” of these distributions, i.e. those corresponding to sufficiently large, and sufficiently weakly coupled regions, will produce Universes like ours, the rest will not. The question of how likely a “good” big bang is to take place is not very well posed and can be greatly affected by anthropic considerations.
In conclusion, we may summarize recent progress on the problem of initial conditions by saying that : Dilaton-driven inflation in string cosmology is as generic as gravitational collapse in General Relativity. At the same time, having a sufficiently long period of DDI amounts to setting upper limits on two arbitrary moduli of the classical solutions.
Our scenario is illustrated in Figs. 3 and 4, both taken from Ref. . In Fig. 3, I show, for the spherically symmetric case, a Carter–Penrose diagram in which generic (but asymptotically trivial) dilatonic waves are given around time-like ($`I^{-}`$) and null ($`\mathcal{I}^{-}`$) past-infinity. In the shaded region near $`I^{-},\mathcal{I}^{-}`$, a weak-field solution holds. However, if a collapse criterion is met, an apparent horizon, inside which a cosmological (generally inhomogeneous) PBB-like solution takes over, forms at some later time. The future singularity of the PBB solution at $`t=0`$ is identified with the space-like singularity of the black hole at $`r=0`$ (remember that $`r`$ is a time-like coordinate inside the horizon). Figure 4 gives a $`(2+1)`$-dimensional sketch of a possible PBB Universe: an original “sea” of dilatonic and gravity waves leads to collapsing regions of different initial size, possibly to a scale-invariant distribution of them. Each one of these collapses is reinterpreted, in the string frame, as the process by which a baby Universe is born after a period of PBB inflationary “pregnancy”, with the size of each baby Universe determined by the duration of its pregnancy, i.e. by the initial size of the corresponding collapsing region. Regions initially larger than $`10^{13}\mathrm{cm}`$ can generate Universes like ours, smaller ones cannot.
A basic difference between the large numbers needed in (non-inflationary) FRW cosmology and the large numbers needed in PBB cosmology should be stressed at this point. In the former, the ratio of two classical scales, e.g. of total curvature to its spatial component, which is expected to be $`O(1)`$, has to be taken as large as $`10^{60}`$. In the latter, the above ratio is initially $`O(1)`$ in the collapsing/inflating region, and ends up being very large in that region thanks to DDI. However, the common order of magnitude of these two classical quantities is a free parameter, and is taken to be much larger than a classically irrelevant quantum scale.
We can visualize analogies and differences between standard and pre-big bang inflation by comparing Figs. 5a and 5b. In these, we sketch the evolution of the Hubble radius and of a fixed comoving scale (here the one corresponding to the part of the Universe presently observable to us) as a function of time in the two scenarios. The common feature is that the fixed comoving scale was “inside the horizon” for some time during inflation, and possibly very deeply inside at its onset. Also, in both cases, the Hubble radius at the beginning of inflation had to be large in Planck units and the scale of homogeneity had to be at least as large. The difference between the two scenarios is just in the behaviour of the Hubble radius during inflation: increasing in standard inflation (a), decreasing in string cosmology (b). This is what makes PBB’s “wine glass” more elegant, and stable! Thus, while standard inflation is still facing the initial-singularity question and needs a non-adiabatic phenomenon to reheat the Universe (a kind of small bang), PBB cosmology faces the singularity problem later, combining it with the exit and heating problems (discussed in Sections V and VIB, respectively).
In the end, what saves PBB cosmology from fine-tuning is (not surprisingly!) supersymmetry. This is what protects us from the appearance of a cosmological constant in the weak-coupling regime. Even a relatively small cosmological constant would invalidate our scale-invariance arguments and force DDI to be very short . Thus, amusingly, while an effective cosmological constant is at the basis of standard (post-big bang) inflation, its absence in the weak coupling region is at the basis of PBB inflation. This may allow us to speculate that the absence (or extreme smallness) of the present cosmological constant may be related to a mysterious degeneracy between the perturbative and the non-perturbative vacuum of superstring theory.
## 5 The exit problem/conjecture
We have argued that, generically, DDI, when studied at lowest order in derivatives and coupling, evolves towards a singularity of the big bang type. Similarly, at the same level of approximation, the non-inflationary solutions emerge from a singularity. Matching these two branches in a smooth, non-singular way has become known as the (graceful) exit problem in string cosmology . It is, undoubtedly, the most important theoretical problem facing the whole PBB scenario.
There has been quite some progress recently on the exit problem. However, for lack of space, I shall refer the reader to the literature for details. Generically speaking, toy examples have shown that DDI can flow, thanks to higher-curvature corrections, into a de-Sitter-like phase, i.e. into a phase of constant $`H`$ (curvature) and constant $`\dot{\varphi }`$. This phase is expected to last until loop corrections become important (see next section) and give rise to a transition to a radiation-dominated phase. If these toy models serve as an indication, a full exit can only be achieved at large coupling and curvature, a situation that should be described by the newly invented M-theory .
It was recently pointed out that the reverse order of events is also possible. The coupling may become large before the curvature. In this case, at least for some time, the low-energy limit of M-theory should be adequate: this limit is known to give $`D=11`$ supergravity and is therefore amenable to reliable study. It is likely, though not yet clear, that, also in this case, strong curvatures will have to be reached before the exit can be completed. In the following, we will assume that:
* the big bang singularity is avoided thanks to the softness of string theory;
* full exit to radiation occurs at strong coupling and curvature, according to a criterion given in Section VIB.
## 6 Observable relics and heating the pre-bang Universe
### 6.1 PBB relics
Since there are already several review papers on this subject (e.g. ), I will limit myself to mentioning the most recent developments, after recalling the basic physical mechanism underlying particle production in cosmology . A cosmological (i.e. time-dependent) background coupled to a given type of (small) inhomogeneous perturbation $`\mathrm{\Psi }`$ enters the effective low-energy action in the form:
$$I=\frac{1}{2}\int d\eta \,d^3x\,S(\eta )\left[\mathrm{\Psi }^{\prime 2}-\left(\nabla \mathrm{\Psi }\right)^2\right].$$
(8)
Here $`\eta `$ is the conformal-time coordinate, and a prime denotes $`\partial /\partial \eta `$. The function $`S(\eta )`$ (sometimes called the “pump” field) is, for any given $`\mathrm{\Psi }`$, a given function of the scale factor $`a(\eta )`$, and of other scalar fields (four-dimensional dilaton $`\varphi (\eta )`$, moduli $`b_i(\eta )`$, etc.), which may appear non-trivially in the background.
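In practice the amplification is computed by canonically normalizing the fluctuation (a standard step, sketched here for later reference): setting $`\widehat{\mathrm{\Psi }}=S^{1/2}\mathrm{\Psi }`$, each Fourier mode obeys

$$\widehat{\mathrm{\Psi }}_k^{\prime \prime }+\left[k^2-\frac{(S^{1/2})^{\prime \prime }}{S^{1/2}}\right]\widehat{\mathrm{\Psi }}_k=0,$$

so the whole effect of the background enters through the effective potential $`(S^{1/2})^{\prime \prime }/S^{1/2}`$: modes with $`k^2`$ well above it evolve adiabatically, while modes that cross it are amplified, with Bogoliubov coefficients fixed by the time dependence of $`S(\eta )`$.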
While it is clear that a constant pump field $`S`$ can be reabsorbed in a rescaling of $`\mathrm{\Psi }`$, and is thus ineffective, a time-dependent $`S`$ couples non-trivially to the fluctuation and leads to the production of pairs of quanta (with equal and opposite momenta). One can easily determine the pump fields for each one of the most interesting perturbations. The result is:
$`\mathrm{Gravity}\mathrm{waves},\mathrm{dilaton}`$ $`:`$ $`S=a^2e^{-\varphi }`$
$`\mathrm{Heterotic}\mathrm{gauge}\mathrm{bosons}`$ $`:`$ $`S=e^{-\varphi }`$
$`\mathrm{Kalb}\text{–}\mathrm{Ramond},\mathrm{axions}`$ $`:`$ $`S=a^{-2}e^{-\varphi }.`$ (9)
A distinctive property of string cosmology is that the dilaton $`\varphi `$ appears in some very specific way in the pump fields. The consequences of this are very interesting:
* For gravitational waves and dilatons, the effect of $`\varphi `$ is to slow down the behaviour of $`a`$ (remember that both $`a`$ and $`\varphi `$ grow in the pre-big bang phase). This is the reason why those spectra are quite steep and give small contributions at large scales. Thus one of the most robust predictions of PBB cosmology is a small tensor component in the CMB anisotropy (this, however, refers just to first-order tensor perturbations; the mechanism, described below, of seeding CMB anisotropy through axions would also give a tensor and a vector contribution whose relative magnitude is being computed). The reverse is also true: at short scales, the expected yield in a stochastic background of gravitational waves is much larger than in standard inflationary cosmology. This is easily understood: in standard inflation the GW spectrum is either flat or slowly decreasing (as a function of frequency). Since COBE data set a limit on the GW contribution at large scales, this bound holds a fortiori at shorter scales, such as those of interest for direct GW detection. Thus, in standard inflation, one expects
$$\mathrm{\Omega }_{GW}<10^{-14}.$$
(10)
Since the GW spectra of PBB cosmology are “blue”, the bound by COBE is automatically satisfied, with no implication on the GW yield at interesting frequencies. Values of $`\mathrm{\Omega }_{GW}`$ in the range of $`10^{-6}`$–$`10^{-7}`$ are possible in some regions of parameter space, which, according to some estimates of sensitivities , could be inside detection capabilities in the near future.
* For gauge bosons there is no amplification of vacuum fluctuations in standard cosmology, since a conformally flat metric (of the type forced upon us by inflation) decouples from the electromagnetic (EM) field precisely in $`D=3+1`$ dimensions. As a very general remark, apart from pathological solutions, the only background field which, through its cosmological variation, can amplify EM (more generally gauge-field) quantum fluctuations is the effective gauge coupling itself . By its very nature, in the pre-big bang scenario the effective gauge coupling inflates together with space during the PBB phase. It is thus automatic that any efficient PBB inflation brings together a huge variation of the effective gauge coupling and thus a very large amplification of the primordial EM fluctuations . This can possibly provide the long-sought origin for the primordial seeds of the observed galactic magnetic fields. Notice, however, that, unlike GW, EM perturbations interact quite considerably with the hot plasma of the early (post-big bang) Universe. Thus, converting the primordial seeds into those that may have existed at the proto-galaxy formation epoch is by no means a trivial exercise. Work is in progress to try to adapt existing codes to the evolution of our primordial seeds.
* Finally, for Kalb–Ramond fields and axions, $`a`$ and $`\varphi `$ work in the same direction and spectra can be large even at large scales . An interesting fact is that, unlike the GW spectrum, that of axions is very sensitive to the cosmological behaviour of internal dimensions during the DDI epoch. On one side, this makes the model less predictive. On the other, it tells us that axions represent a window over the multidimensional cosmology expected generically from string theories, which must live in more than four dimensions. Curiously enough, the axion spectrum becomes exactly HZ (i.e. scale-invariant) when all the nine spatial dimensions of superstring theory evolve in a rather symmetric way . In situations near this particularly symmetric one, axions are able to provide a new mechanism for generating large-scale CMB anisotropy and LSS.
A recent calculation of the effect gives, for massless axions,
$$l(l+1)C_l\simeq O(1)\left(\frac{H_{max}}{M_P}\right)^4(\eta _0k_{max})^{-2\alpha }\frac{\mathrm{\Gamma }(l+\alpha )}{\mathrm{\Gamma }(l-\alpha )},$$
(11)
where $`C_l`$ are the usual coefficients of the multipole expansion of $`\mathrm{\Delta }T/T`$
$$\left\langle \mathrm{\Delta }T/T(\vec{n})\,\mathrm{\Delta }T/T(\vec{n}^{\prime })\right\rangle =\sum _l(2l+1)C_lP_l(\mathrm{cos}\theta ),$$
(12)
and the parameters $`H_{max},k_{max},\alpha `$ are defined by the primordial axion energy spectrum in critical units as:
$$\mathrm{\Omega }_{ax}(k)=\left(\frac{H_{max}}{M_P}\right)^2(k/k_{max})^\alpha .$$
(13)
In string theory, as repeatedly mentioned, we expect $`H_{max}/M_P\simeq M_s/M_P\simeq 1/10`$ and $`\eta _0k_{max}\simeq 10^{30}`$, while the exponent $`\alpha `$ depends on the explicit PBB background with the above-mentioned HZ case corresponding to $`\alpha =0`$. The standard tilt parameter $`n=n_s`$ ($`s`$ for scalar) is given by $`n=1+2\alpha `$ and is found, by COBE, to lie between $`0.9`$ and $`1.5`$, corresponding to $`0<\alpha <0.25`$ (a negative $`\alpha `$ leads to some theoretical problems). With these inputs we can see that the correct normalization ($`C_2\simeq 10^{-10}`$) is reached for $`\alpha \simeq 0.2`$, which is just in the middle of the allowed range. In other words, unlike in standard inflation, we cannot predict the tilt, but when this is given, we can predict (again unlike in standard inflation) the normalization.
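As a quick numerical illustration of the $`l`$-dependence in (11) (a sketch: only the $`\mathrm{\Gamma }`$-function factor is evaluated here, while the overall normalization and all cosmological transfer effects are ignored), one can check that the multipole spectrum is flat for $`\alpha =0`$ and acquires a gentle blue tilt, roughly $`l^{2\alpha }`$, for the preferred $`\alpha \simeq 0.2`$:

```python
# Sketch: l-dependence of the axion-seed multipole spectrum, eq. (11).
# Only Gamma(l+alpha)/Gamma(l-alpha) is evaluated; the constant prefactor
# (H_max/M_P)^4 (eta_0 k_max)^(-2 alpha) is omitted.
from math import lgamma, exp

def shape(l, alpha):
    """Gamma(l+alpha)/Gamma(l-alpha), via log-gammas for numerical stability."""
    return exp(lgamma(l + alpha) - lgamma(l - alpha))

for alpha in (0.0, 0.2):
    vals = [shape(l, alpha) for l in (2, 10, 100, 1000)]
    print(f"alpha={alpha}: " + ", ".join(f"{v:.3f}" for v in vals))

# alpha=0.0 gives 1.000 at every l (flat, Harrison-Zeldovich-like);
# alpha=0.2 grows roughly like l**(2*alpha), a mild blue tilt.
```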
Our model, being of the isocurvature type, bears some resemblance to the one recently advocated by Peebles and, like his, is expected to contain some calculable amount of non-Gaussianity, which is being calculated and will be checked by the future satellite measurements (MAP, PLANCK).
* Many other perturbations, which arise in generic compactifications of superstrings, have also been studied, and lead to interesting spectra. For lack of time, I will refer to the existing literature .
### 6.2 Heat and entropy as a quantum gravitational instability
Before closing this section, I wish to recall how one sees the very origin of the hot big bang in this scenario. One can easily estimate the total energy stored in the quantum fluctuations, which were amplified by the pre-big bang backgrounds. The result is, roughly,
$$\rho _{quantum}\simeq N_{eff}H_{max}^4,$$
(14)
where $`N_{eff}`$ is the effective number of species that are amplified and $`H_{max}`$ is the maximal curvature scale reached around $`t=0`$. We have already argued that $`H_{max}\simeq M_s=\lambda _s^{-1}`$, and we know that, in heterotic string theory, $`N_{eff}`$ is in the hundreds. Yet this rather huge energy density is very far from critical, as long as the dilaton is still in the weak-coupling region, justifying our neglect of back-reaction effects. It is very tempting to assume that, precisely when the dilaton reaches a value such that $`\rho _{quantum}`$ is critical, the Universe will enter the radiation-dominated phase. This PBBB (PBB bootstrap) constraint gives, typically:
$$e^{\varphi _{exit}}\simeq 1/N_{eff},$$
(15)
i.e. a value for the dilaton close to its present value.
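The estimate (15) follows in one line from rough orders of magnitude (a sketch, dropping all numerical factors): the critical density is $`\rho _c\sim M_P^2H^2`$, and with $`M_P^2=l_P^{-2}=\lambda _s^{-2}e^{-\varphi }`$ and $`H\sim H_{max}\sim \lambda _s^{-1}`$, the condition $`\rho _{quantum}\sim \rho _c`$ becomes

$$N_{eff}\,\lambda _s^{-4}\sim e^{-\varphi }\lambda _s^{-4}\quad \Rightarrow \quad e^{\varphi _{exit}}\sim 1/N_{eff}.$$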
The entropy in these quantum fluctuations can also be estimated following some general results . The result for the density of entropy $`S`$ is, as expected
$$S\simeq N_{eff}H_{max}^3.$$
(16)
It is easy to check that, at the assumed time of exit given by (15), this entropy saturates a recently proposed holography bound . This also turns out to be a physically acceptable value for the entropy of the Universe just after the big bang: a large entropy on the one hand (about $`10^{90}`$); a small entropy for the total mass and size of the observable Universe on the other, as often pointed out by Penrose . Thus, PBB cosmology neatly explains why the Universe, at the big bang, looks so fine-tuned (without being so) and provides a natural arrow of time in the direction of higher entropy.
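The saturation claim is easy to check with the same rough estimates (again a sketch, with all numerical factors dropped): within one Hubble volume $`H_{max}^{-3}`$, eq. (16) gives an entropy $`S\sim N_{eff}`$, while the holographic bound for that region is the boundary area in Planck units,

$$S_{hol}\sim \frac{H_{max}^{-2}}{l_P^2}=e^{-\varphi }\lambda _s^{-2}H_{max}^{-2}\sim N_{eff}\quad \mathrm{for}\quad e^{\varphi }\sim 1/N_{eff},\;H_{max}\sim \lambda _s^{-1},$$

so the two coincide precisely at the exit time assumed in (15).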
## 7 Conclusions
* Pre-big bang (PBB) cosmology is a “top–down” rather than a “bottom–up” approach to cosmology. This should not be forgotten when testing its predictions.
* It does not need to invent an inflaton, or to fine-tune its potential; inflation is “natural” thanks to the duality symmetries of string cosmology.
* It makes use of a classical gravitational instability to inflate the Universe, and of a quantum instability to warm it up.
* The problem of initial conditions “decouples” from the singularity problem; it is classical, scale-free, and unambiguously defined. Issues of fine tuning can be addressed and, I believe, answered.
* The spectrum of large-scale perturbations has become more promising through the invisible axion of string theory, while the possibility of explaining the seeds of galactic magnetic fields remains a unique prediction of the model.
* The main conceptual (technical?) problem remains that of providing a fully convincing mechanism for (and a detailed description of) the pre-to-post-big bang transition. It is very likely that such a mechanism will involve both high curvatures and large coupling and should therefore be discussed in the (yet to be fully constructed) M-theory . New ideas borrowed from such theory and from D-branes could help in this respect.
* Once/if this problem is solved, predictions will become more precise and robust, but, even now, with some mild assumptions, several tests are (or will soon become) possible, e.g.
+ the tensor contribution to $`\mathrm{\Delta }T/T`$ should be very small (see, however, the caveat in Section VI);
+ some non-Gaussianity in $`\mathrm{\Delta }T/T`$ correlations is expected, and calculable;
+ the axion-seed mechanism should lead to a characteristic acoustic-peak structure, which is being calculated;
+ it should be possible to convert the predicted seed magnetic fields into observables by using some reliable code for their late evolution;
+ a characteristic spectrum of stochastic gravitational waves is expected to surround us and could be large enough to be measurable within a decade or so.
# Is topological Skyrme Model consistent with the Standard Model?
Afsar Abbas
Institute of Physics
Bhubaneswar-751005, India
e-mail: afsar@iopb.res.in
Abstract
The topological Skyrme model is known to give a successful description of baryons. As a consistency check, here it is shown that in view of the recent discovery of charge quantization as an intrinsic and basic property of the Standard Model and the color dependence arising therein, the Skyrme Model is indeed completely consistent with the Standard Model.
It is well known that in SU($`N_c`$) Quantum Chromodynamics in the limit of $`N_c`$ going to infinity the baryons behave as solitons in an effective meson field theory . A popular candidate for such an effective field theory is the topological Skyrme Model . It has been extensively studied for two or more flavours and it has been shown that the resemblance of the topological soliton to the baryon in the quark model in the large $`N_c`$ limit is very strong . Its baryon number and fermionic character are also well understood .
Theoretically the most well studied and experimentally the best established model of particle physics is the Standard Model (SM) based on the group $`SU(3)_c\otimes SU(2)_L\otimes U(1)_Y`$ . The model consists of a priori several disparate concepts which are brought together to give the SM its structure as a whole. The successes of the SM are many; however, it is believed to have a few shortcomings. It has been a folklore in particle physics that the electric charge is not quantized in the SM. It was felt that one has to go to the Grand Unified Theories to obtain quantization of the electric charge. It turned out to be a false accusation against the SM. It was clearly and convincingly demonstrated in 1989/1990 that the electric charge is actually quantized in the SM . The author showed that the property of charge quantization in the SM requires the complete machinery which goes in to make it. The SM property of having anomaly cancellation generation by generation, the breaking of symmetry spontaneously through a Higgs doublet which also generates all the masses etc., all go into bringing in quantization of the electric charge in SM. These facts are important as there were several attempts to demonstrate charge quantization in SM using only part of the whole scheme, e.g. using only anomaly cancellation . The flaws in such logic have been pointed out by the author .
Also analytically the author obtained the color dependence of the electric charge in the SM as
$$Q(u)=Q(c)=Q(t)=\frac{1}{2}(1+\frac{1}{N_c})$$
(1)
$$Q(d)=Q(s)=Q(b)=-\frac{1}{2}(1-\frac{1}{N_c})$$
(2)
For $`N_c`$ = 3 this gives the correct charges. A short derivation of the result is given in the Appendix. It was also demonstrated by the author that these were the correct charges to use in studies of QCD for arbitrary $`N_c`$. This was contrary to many who had been using static (i.e. independent of color) charges 2/3 and -1/3 .
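As a quick numerical illustration (a trivial check, but it makes the color “anatomy” explicit), eqs. (1) and (2) interpolate between the familiar values at $`N_c=3`$ and quite different charges at large $`N_c`$:

```python
# Sketch: color dependence of the SM quark charges, eqs. (1)-(2).
from fractions import Fraction

def Q_up(Nc):
    return Fraction(1, 2) * (1 + Fraction(1, Nc))

def Q_down(Nc):
    return -Fraction(1, 2) * (1 - Fraction(1, Nc))

for Nc in (3, 5, 7):
    print(f"Nc={Nc}: Q(u)={Q_up(Nc)}, Q(d)={Q_down(Nc)}")

# Nc=3 reproduces Q(u)=2/3 and Q(d)=-1/3; as Nc -> infinity the charges
# tend to +1/2 and -1/2, unlike the static (color-independent) 2/3 and -1/3.
```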
Hence in addition to the other well-known properties of the SM, I would like to stress that the quantization of the electric charge and the structure of the electric charge arising therein, especially its color dependence, should be treated as an intrinsic property of the SM. Consistency with the SM should be an essential requirement for phenomenological models which are supposed to work at low energies and for any extensions of the SM which should be relevant at high temperatures, especially in the context of the early universe.
The color dependence of the electric charge shown above should be viewed in two independent but complementary ways. Firstly, for $`N_c\ne 3`$ it is different from the static charges Q(u)=2/3 and Q(d)=-1/3. Secondly, even for $`N_c=3`$ it should be viewed as providing an anatomic view of the internal structure of the electric charge, meaning how 2/3 and -1/3 are built up and in what way the three colors contribute to it. For example, the SM is making the statement that in 1/3 the 3 is not entirely due to the 3 of the QCD group $`SU(3)_c`$. However this is what the SU(5) Grand Unified Theory says , wherein $`Q(d)=-1/3=-\frac{1}{N_c=3}`$. This is in conflict with the SM expression where $`Q(d)=-1/3=-\frac{1}{2}(1-\frac{1}{N_c=3})`$. Hence the expression for the electric charge can be a very discriminating and restrictive tool for extensions beyond the SM. This has been used to check consistency of various models in a fruitful manner .
Quite clearly, low energy phenomenological models of hadrons should be consistent with the SM in all respects. Is this true for the topological Skyrme Model? It shall be demonstrated below that the answer to the question in the title of the paper is in the affirmative.
To do so let us start with the Skyrme Lagrangian
$$L_S=\frac{f_\pi ^2}{4}Tr(L_\mu L^\mu )+\frac{1}{32e^2}Tr[L_\mu ,L_\nu ]^2$$
(3)
where $`L_\mu =U^{\dagger }\partial _\mu U`$ . The U field for the three flavour case for example is
$`U(x)=exp[\frac{i\lambda ^a\varphi ^a(x)}{f_\pi }]`$
with $`\varphi ^a`$ the pseudoscalar octet of $`\pi `$, K and $`\eta `$ mesons. In the full topological Skyrme model this is supplemented with a Wess-Zumino effective action
$$\mathrm{\Gamma }_{WZ}=\frac{i}{240\pi ^2}\int _\mathrm{\Sigma }d^5x\,ϵ^{\mu \nu \alpha \beta \gamma }Tr[L_\mu L_\nu L_\alpha L_\beta L_\gamma ]$$
(4)
on the surface $`\mathrm{\Sigma }`$. Let the field U be transformed by the charge operator Q as
$`U(x)\to e^{i\mathrm{\Lambda }Q}U(x)e^{-i\mathrm{\Lambda }Q},`$
where all the charges are counted in units of the absolute value of the electronic charge.
Making $`\mathrm{\Lambda }=\mathrm{\Lambda }(x)`$ a local transformation the Noether current is
$$J_\mu ^{em}(x)=j_\mu ^{em}(x)+j_\mu ^{WZ}(x)$$
(5)
where the first one is the standard Skyrme term and the second is the Wess-Zumino term
$$j_\mu ^{WZ}(x)=\frac{N_c}{48\pi ^2}ϵ_{\mu \nu \lambda \sigma }Tr\,L^\nu L^\lambda L^\sigma \left(Q+U^{\dagger }QU\right)$$
(6)
In the standard way we take the U(1) of electromagnetism as a subgroup of the three flavour SU(3). Its generators can be found by canonical methods. As the charge operator can be simultaneously diagonalized along with the third component of isospin and hypercharge, we write it as
$$Q=\left(\begin{array}{ccc}q_1& 0& 0\\ 0& q_2& 0\\ 0& 0& q_3\end{array}\right)$$
The electric charges of the pseudoscalar octet mesons are known. These give
$$q_1-q_2=1,q_2=q_3$$
(7)
Hence one obtains
$$Q=(q_2+\frac{1}{3})\mathrm{𝟏}_{3x3}+\frac{1}{2}\lambda _3+\frac{1}{2\sqrt{3}}\lambda _8$$
(8)
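A direct matrix check of eq. (8) is immediate (a sketch; the value $`q_2=Q(d)`$ at $`N_c=3`$ is used purely for illustration):

```python
# Sketch: verify that eq. (8) reproduces Q = diag(q2+1, q2, q2).
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])                 # Gell-Mann lambda_3
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)  # Gell-Mann lambda_8

q2 = -1.0 / 3.0                                  # e.g. Q(d) at Nc = 3
Q = (q2 + 1.0 / 3.0) * np.eye(3) + 0.5 * lam3 + lam8 / (2.0 * np.sqrt(3.0))

print(np.diag(Q))   # -> [ 0.6667 -0.3333 -0.3333], i.e. (q2+1, q2, q2)
```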
In the standard way we use $`U=A(t)U_c(𝐱)A(t)^{-1}`$ where A is the collective coordinate. We obtain the B=1 electric charge from the Skyrme term in terms of the left-handed generators $`L_\alpha `$ only as
$$Q^{em}=\frac{1}{2}\left(L_3-(A^{\dagger }\lambda _3A)_8\frac{N_cB(U_c)}{\sqrt{3}}\right)+\frac{1}{2\sqrt{3}}\left(L_8-(A^{\dagger }\lambda _8A)_8\frac{N_cB(U_c)}{\sqrt{3}}\right)$$
(9)
The Wess-Zumino term contributes
$$Q^{WZ}=N_cB(U_c)\left(q_2+\frac{1}{3}+\frac{1}{2\sqrt{3}}(A^{\dagger }\lambda _3A)_8+\frac{1}{6}(A^{\dagger }\lambda _8A)_8\right)$$
(10)
Hence the total electric charge is
$$Q=I_3+\frac{1}{2}Y+(q_2+\frac{1}{3})N_cB(U_c)$$
(11)
For the hypercharge we take $`Y=\frac{N_c}{3}`$ and, demanding that the proton charge be unity for any arbitrary value of $`N_c`$, we find that $`q_2`$ is equal to Q(d) as given in eq. (2); hence all the correct color dependent electric charges as demanded by the Standard Model are reproduced by the Skyrme model. It is heartening to conclude that the Skyrme model is fully consistent with the Standard Model.
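The cancellation that makes the proton charge exactly one for arbitrary $`N_c`$ is easily verified symbolically (a sketch of the bookkeeping, taking $`I_3=1/2`$, $`Y=N_c/3`$ and $`q_2=Q(d)`$ from eq. (2)):

```python
# Sketch: proton charge from eq. (11) for arbitrary Nc.
import sympy as sp

Nc = sp.symbols("N_c", positive=True)
q2 = -sp.Rational(1, 2) * (1 - 1 / Nc)   # q2 = Q(d), eq. (2)
I3 = sp.Rational(1, 2)                   # proton isospin projection
Y = Nc / 3                               # hypercharge for B = 1

Q_proton = I3 + Y / 2 + (q2 + sp.Rational(1, 3)) * Nc
print(sp.simplify(Q_proton))   # -> 1, independently of Nc
```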
Acknowledgement
The author would like to thank Dr. Hans Walliser (Siegen) for pointing out that the proper color dependent hypercharge along with the requirement of unit color-independent charge for the proton be used to obtain correct charges in the Skyrme model.
Appendix
To demonstrate charge quantization as an intrinsic property of the SM, the complete machinery which makes the SM is required. As required by the SM one has the repetitive structure for each generation of the fermions. Let us start by looking at the first generation of quarks and leptons (u, d, e, $`\nu `$) and assign them to $`SU(N_c)\otimes SU(2)_L\otimes U(1)_Y`$ representations as follows .
$$q_L=\left(\begin{array}{c}u\\ d\end{array}\right)_L,(N_c,2,Y_q)$$
$$u_R;(N_c,1,Y_u)$$
$$d_R;(N_c,1,Y_d)$$
$$l_L=\left(\begin{array}{c}\nu \\ e\end{array}\right);(1,2,Y_l)$$
$$e_R;(1,1,Y_e)$$
(12)
$`N_c`$ = 3 corresponds to the Standard Model case. To keep things as general as possible this brings in five unknown hypercharges.
Let us now define the electric charge in the most general way in terms of the diagonal generators of $`SU(2)_L\otimes U(1)_Y`$ as
$$Q^{\prime }=a^{\prime }I_3+b^{\prime }Y$$
(13)
We can always scale the electric charge once as $`Q=\frac{Q^{\prime }}{a^{\prime }}`$ and hence ($`b=\frac{b^{\prime }}{a^{\prime }}`$)
$$Q=I_3+bY$$
(14)
In the SM $`SU(N_c)\otimes SU(2)_L\otimes U(1)_Y`$ is spontaneously broken through the Higgs mechanism to the group $`SU(N_c)\otimes U(1)_{em}`$ . In this model the Higgs is assumed to be a doublet $`\varphi `$ with arbitrary hypercharge $`Y_\varphi `$. The isospin $`I_3=-\frac{1}{2}`$ component of the Higgs develops a nonzero vacuum expectation value $`<\varphi >_o`$. Since we want the $`U(1)_{em}`$ generator Q to be unbroken we require $`Q<\varphi >_o=0`$. This right away fixes b in (14) and we get
$$Q=I_3+(\frac{1}{2Y_\varphi })Y$$
(15)
Next one requires that the fermion masses arise through Yukawa couplings and also demands that the triangular anomaly cancels (to ensure renormalizability of the theory) (see for details); one obtains all the unknown hypercharges in terms of the unknown Higgs hypercharge $`Y_\varphi `$. Ultimately $`Y_\varphi `$ cancels out and one obtains the correct charge quantization as follows.
$$q_L=\left(\begin{array}{c}u\\ d\end{array}\right)_L,Y_q=\frac{Y_\varphi }{N_c},$$
$$Q(u)=\frac{1}{2}(1+\frac{1}{N_c}),Q(d)=-\frac{1}{2}(1-\frac{1}{N_c})$$
$$u_R,Y_u=Y_\varphi (1+\frac{1}{N_c}),Q(u_R)=\frac{1}{2}(1+\frac{1}{N_c})$$
$$d_R,Y_d=-Y_\varphi (1-\frac{1}{N_c}),Q(d_R)=-\frac{1}{2}(1-\frac{1}{N_c})$$
$$l_L=\left(\begin{array}{c}\nu \\ e\end{array}\right),Y_l=-Y_\varphi ,Q(\nu )=0,Q(e)=-1$$
$$e_R,Y_e=-2Y_\varphi ,Q(e_R)=-1$$
(16)
A repetitive structure gives charges for the other generations of fermions also .
Note that the generalized Gell-Mann–Nishijima expression of the SU(6) (flavour) quark model is consistent with the above SM expressions (eqs. (1) and (2)). One takes $`B=\frac{1}{N_c}`$ in the expression $`Q=I_3+\frac{1}{2}(B+S+C+b+t)`$ with the standard values of S, C, b, t for the quarks .
References
1. E. Witten, Nucl. Phys. B 160 (1979) 57
2. T. H. R. Skyrme, Proc. Roy. Soc. London A 260 (1961) 127; Nucl. Phys. 31 (1962) 556
3. R. E. Marshak, “Conceptual foundations of modern particle physics”, World Scientific, Singapore, 1993
4. G. Karl and J. E. Paton, Phys. Rev. D 30 (1984) 238
5. S. J. Perantonis, Phys. Rev. D 37 (1988) 2687
6. A. P. Balachandran, G. Marmo, B. S. Skagerstam and A. Stern, “Classical topology and quantum states”, World Scientific, Singapore, 1991
7. A. Abbas, Phys.Lett. B 238, (1990) 344
8. A. Abbas, J.Phys. G 16, (1990) L163
9. A. Abbas, Hadronic J. 15 (1992) 475
10. A. Abbas, Nuovo Cim. 106 A, (1993) 985
11. A. Abbas, Ind. J. Phys. 67 A (1993) 541
12. H. Walliser, Phys. Lett. B 432 (1998) 15
# |Δ𝑰|=3/2 Decays of Hyperons in Chiral Perturbation Theory
ISU-HET-99-1. Talk presented at DPF ‘99, Los Angeles, 5-9 January 1999.
## I Introduction
Nonleptonic decays of hyperons have been studied by various authors in the framework of chiral perturbation theory ($`\chi `$PT). For the hyperons belonging to the baryon octet, the decay modes are $`\mathrm{\Sigma }^+\to n\pi ^+,`$ $`\mathrm{\Sigma }^+\to p\pi ^0,`$ $`\mathrm{\Sigma }^{-}\to n\pi ^{-},`$ $`\mathrm{\Lambda }\to p\pi ^{-},`$ $`\mathrm{\Lambda }\to n\pi ^0,`$ $`\mathrm{\Xi }^{-}\to \mathrm{\Lambda }\pi ^{},`$ and $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\pi ^0.`$ Calculations of the dominant $`|\mathrm{\Delta }𝑰|=1/2`$ amplitudes of these decays have led to mixed results . Specifically, the theory can give a good description of either the S-waves or the P-waves, but not both simultaneously. Now, while these amplitudes have been much studied in $`\chi `$PT, the same cannot be said of their $`|\mathrm{\Delta }𝑰|=3/2`$ counterparts. In view of the situation in the $`|\mathrm{\Delta }𝑰|=1/2`$ sector, it is instructive to carry out a similar analysis of the $`|\mathrm{\Delta }𝑰|=3/2`$ amplitudes. Such an analysis has been done recently , and some of its results will be presented here.
In the baryon-decuplet sector, only the $`\mathrm{\Omega }^{-}`$ hyperon decays weakly. For $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ decays, a purely $`|\mathrm{\Delta }𝑰|=1/2`$ weak interaction would imply the ratio of decay rates $`\mathrm{\Gamma }(\mathrm{\Omega }^{-}\to \mathrm{\Xi }^0\pi ^{-})/\mathrm{\Gamma }(\mathrm{\Omega }^{-}\to \mathrm{\Xi }^{-}\pi ^0)=2.`$ Instead, this ratio is measured to be approximately $`2.7`$ , which seems to suggest that the $`|\mathrm{\Delta }𝑰|=1/2`$ rule is violated in $`\mathrm{\Omega }^{-}`$ decays . This situation has recently been examined in some detail using $`\chi `$PT. The result will also be presented here, for the couplings generating the $`|\mathrm{\Delta }𝑰|=3/2`$ decays of the $`\mathrm{\Omega }^{-}`$ also contribute to the octet-hyperon decays.
To apply $`\chi `$PT to interactions involving the lowest-lying mesons and baryons, we employ the heavy-baryon formalism . In this approach, the theory has a consistent chiral expansion, and the octet and decuplet baryons in the effective chiral Lagrangian are described by velocity-dependent fields. We include the decuplet baryons in the Lagrangian because the octet-decuplet mass difference is small enough to make their effects significant on the low-energy theory .
## II $`|\mathrm{\Delta }𝑰|=3/2`$ Decays of Octet Hyperons
The leading-order chiral Lagrangian for the strong interactions is well known , and so we will discuss only the weak sector. Within the standard model, the $`|\mathrm{\Delta }S|=1`$, $`|\mathrm{\Delta }𝑰|=3/2`$ weak transitions are induced by an effective Hamiltonian that transforms as $`(27_\mathrm{L},1_\mathrm{R})`$ under chiral rotations. At lowest order in $`\chi `$PT, the Lagrangian that describes such weak interactions of baryons and has the required transformation properties is
$`\mathcal{L}^\mathrm{w}=\beta _{27}T_{ij,kl}\left(\xi \overline{B}_v\xi ^{\dagger }\right)_{ki}\left(\xi B_v\xi ^{\dagger }\right)_{lj}+\delta _{27}T_{ij,kl}\xi _{kd}\xi _{bi}^{\dagger }\xi _{le}\xi _{cj}^{\dagger }\left(\overline{T}_v^\mu \right)_{abc}\left(T_{v\mu }\right)_{ade}+\mathrm{h}.\mathrm{c}.,`$ (1)
where $`\beta _{27}`$ ($`\delta _{27}`$) is the coupling constant for the baryon-octet (baryon-decuplet) sector, and $`T_{ij,kl}`$ is the tensor that projects out the $`|\mathrm{\Delta }S|=1`$, $`|\mathrm{\Delta }𝑰|=3/2`$ transitions (further details are given in Ref. ).
We now turn to the calculation of the amplitudes. In the heavy-baryon approach, the amplitude for the decay $`B\to B^{\prime }\pi `$ can be written as
$`\mathrm{i}\mathcal{M}_{B\to B^{\prime }\pi }=G_\mathrm{F}m_\pi ^2\overline{u}_{B^{\prime }}\left(𝒜_{BB^{\prime }\pi }^{(\mathrm{S})}+2k\cdot S_v𝒜_{BB^{\prime }\pi }^{(\mathrm{P})}\right)u_B,`$ (2)
where the superscripts refer to S- and P-wave contributions, the $`u`$’s are baryon spinors, $`k`$ is the outgoing four-momentum of the pion, and $`S_v`$ is the velocity-dependent spin operator .
At tree level, $`𝒪(1)`$ in $`\chi `$PT, contributions to the amplitudes come from diagrams each with a weak vertex from $`^\mathrm{w}`$ in (1) and, for the P-waves, a vertex from the lowest-order strong Lagrangian. At next order in $`\chi `$PT, there are amplitudes of order $`m_s`$, the strange-quark mass, arising both from one-loop diagrams with leading-order vertices and from counterterms. Currently there is not enough experimental input to determine the value of the counterterms. For this reason, we follow the approach that has been used for the $`|\mathrm{\Delta }𝑰|=1/2`$ amplitudes and calculate only nonanalytic terms up to $`𝒪(m_s\mathrm{ln}m_s)`$. These terms are uniquely determined from the one-loop amplitudes because they cannot arise from local counterterm Lagrangians. With a complete calculation at next-to-leading order, it would be possible to fit all the amplitudes (as was done in Ref. for the $`|\mathrm{\Delta }𝑰|=1/2`$ sector), but we feel that this exercise is not instructive given the large number of free parameters available. In this work, we limit ourselves to study the question of whether the lowest-order predictions are subject to large higher-order corrections.
To compare our theoretical results with experiment, we introduce the amplitudes
$`s=𝒜^{(\mathrm{S})},p=|𝒌|𝒜^{(\mathrm{P})},`$ (3)
in the rest frame of the decaying baryon. From these amplitudes, we can extract for the S-waves the $`|\mathrm{\Delta }𝑰|=3/2`$ components
$`\begin{array}{c}S_3^{(\mathrm{\Lambda })}=\frac{1}{\sqrt{3}}\left(\sqrt{2}s_{\mathrm{\Lambda }\to n\pi ^0}+s_{\mathrm{\Lambda }\to p\pi ^{-}}\right),S_3^{(\mathrm{\Xi })}=\frac{2}{3}\left(\sqrt{2}s_{\mathrm{\Xi }^0\to \mathrm{\Lambda }\pi ^0}+s_{\mathrm{\Xi }^{-}\to \mathrm{\Lambda }\pi ^{-}}\right),\\ S_3^{(\mathrm{\Sigma })}=\sqrt{\frac{5}{18}}\left(s_{\mathrm{\Sigma }^+\to n\pi ^+}-\sqrt{2}s_{\mathrm{\Sigma }^+\to p\pi ^0}-s_{\mathrm{\Sigma }^{-}\to n\pi ^{-}}\right),\end{array}`$ (6)
and the $`|\mathrm{\Delta }𝑰|=1/2`$ components (for $`\mathrm{\Lambda }`$ and $`\mathrm{\Xi }`$ decays)
$`S_1^{(\mathrm{\Lambda })}=\frac{1}{\sqrt{3}}\left(s_{\mathrm{\Lambda }\to n\pi ^0}-\sqrt{2}s_{\mathrm{\Lambda }\to p\pi ^{-}}\right),S_1^{(\mathrm{\Xi })}=\frac{\sqrt{2}}{3}\left(s_{\mathrm{\Xi }^0\to \mathrm{\Lambda }\pi ^0}-\sqrt{2}s_{\mathrm{\Xi }^{-}\to \mathrm{\Lambda }\pi ^{-}}\right),`$ (7)
as well as analogous ones for the P-waves. We can then compute from data the ratios collected in Table I, which show the $`|\mathrm{\Delta }𝑰|=1/2`$ rule for hyperon decays. The experimental values for $`S_3`$ and $`P_3`$ are listed in the column labeled “Experiment” in Table II.
To begin discussing our theoretical results, we note that our calculation yields no contributions to the S-wave amplitudes $`S_3^{(\mathrm{\Lambda })}`$ and $`S_3^{(\mathrm{\Xi })}`$, as shown in Table II. This only indicates that the two amplitudes are predicted to be smaller than $`S_3^{(\mathrm{\Sigma })}`$ by about a factor of three because there are nonvanishing contributions from operators that occur at the next order, $`𝒪(m_s/\mathrm{\Lambda }_{\chi \mathrm{SB}})`$, with $`\mathrm{\Lambda }_{\chi \mathrm{SB}}\sim 1\mathrm{GeV}`$ being the scale of chiral-symmetry breaking. (An example of such operators is considered in Refs. .) The experimental values of $`S_3^{(\mathrm{\Lambda })}`$ and $`S_3^{(\mathrm{\Xi })}`$ are seen to support this prediction.
The other four amplitudes are predicted to be nonzero. They depend on the two weak parameters $`\beta _{27}`$ and $`\delta _{27}`$ of $`^\mathrm{w}`$ (as well as on parameters from the strong Lagrangian, which are already determined), with $`\delta _{27}`$ appearing only in loop diagrams. Since we consider only the nonanalytic part of the loop diagrams, and since the errors in the measurements of the P-wave amplitudes are larger than those in the S-wave amplitudes, we can take the point of view that we will extract the value of $`\beta _{27}`$ by fitting the tree-level $`S_3^{(\mathrm{\Sigma })}`$ amplitude to experiment, and then treat the tree-level P-waves as predictions and the loop results as a measure of the uncertainties of the lowest-order predictions.
Thus, we obtain $`\beta _{27}=0.068\sqrt{2}f_\pi G_\mathrm{F}m_\pi ^2,`$ and the resulting P-wave amplitudes are placed in the column labeled “Tree” in Table II. These lowest-order predictions are not impressive, but they have the right order of magnitude and differ from the central value of the measurements by at most three standard deviations. For comparison, in the $`|\mathrm{\Delta }𝑰|=1/2`$ case the tree-level predictions for the P-wave amplitudes are completely wrong , differing from the measurements by factors of up to 30.
To address the reliability of the leading-order predictions, we look at our calculation of the one-loop corrections, presented in two columns in Table II. The numbers in the column marked “Octet” come from all loop diagrams that do not have any decuplet-baryon lines, with $`\beta _{27}`$ being the only weak parameter in the diagrams. Contributions of loop diagrams with decuplet baryons depend on one additional constant, $`\delta _{27}`$, which cannot be fixed from experiment as it does not appear in any of the observed weak decays of a decuplet baryon. To illustrate the effect of these terms, we choose $`\delta _{27}=\beta _{27},`$ a choice consistent with dimensional analysis and the normalization of $`^\mathrm{w}`$, and collect the results in the column labeled “Decuplet”.
We can see that some of the loop corrections in Table II are comparable to or even larger than the lowest-order results even though they are expected to be smaller by about a factor of $`M_K^2/(4\pi f_\pi )^2\simeq 0.2.`$ These large corrections occur when several different diagrams yield contributions that add up constructively, resulting in deviations of up to an order of magnitude from the power-counting expectation. This is an inherent flaw in a perturbative calculation where the expansion parameter is not sufficiently small. We can, therefore, say that these numbers are consistent with naive expectations.
Although the one-loop corrections are large, they are all much smaller than their counterparts in $`|\mathrm{\Delta }𝑰|=1/2`$ transitions, where they can be as large as 15 times the lowest-order amplitude in the case of the P-wave in $`\mathrm{\Sigma }^+\to n\pi ^+.`$ In that case, the discrepancy was due to an anomalously small lowest-order prediction arising from the cancellation of two nearly identical terms .
In conclusion, we have presented a discussion of $`|\mathrm{\Delta }𝑰|=3/2`$ amplitudes for hyperon nonleptonic decays in $`\chi `$PT. At leading order these amplitudes are described in terms of only one weak parameter. This parameter can be fixed from the observed value of the S-wave amplitudes in $`\mathrm{\Sigma }`$ decays. After fitting this number, we have predicted the P-waves and used our one-loop calculation to discuss uncertainties of the lowest-order predictions. Our predictions are not contradicted by current data, but current experimental errors are too large for a meaningful conclusion. We have shown that the one-loop nonanalytic corrections have the relative size expected from naive power counting. The combined efforts of E871 and KTeV experiments at Fermilab could give us improved accuracy in the measurements of some of the decay modes that we have discussed and allow a more quantitative comparison of theory and experiment.
## III $`|\mathrm{\Delta }𝑰|=3/2`$ Decays of the $`\mathrm{\Omega }^{-}`$
In the heavy-baryon formalism, we can write the amplitude for $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ as
$`\mathrm{i}\mathcal{M}_{\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi }=G_\mathrm{F}m_\pi ^2\overline{u}_\mathrm{\Xi }𝒜_{\mathrm{\Omega }^{-}\mathrm{\Xi }\pi }^{(\mathrm{P})}k_\mu u_\mathrm{\Omega }^\mu \equiv G_\mathrm{F}m_\pi ^2\overline{u}_\mathrm{\Xi }{\displaystyle \frac{\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }}^{(\mathrm{P})}}{\sqrt{2}f_\pi }}k_\mu u_\mathrm{\Omega }^\mu ,`$ (8)
where the $`u`$’s are baryon spinors, $`k`$ is the outgoing four-momentum of the pion, and only the dominant P-wave piece of the amplitude is included. We will consider only the P-wave because, experimentally, the asymmetry parameter in these decays is small and consistent with zero , indicating that they are dominated by a P-wave.
From the measured decay rates, we obtain
$`𝒜_{\mathrm{\Omega }^{-}\to \mathrm{\Xi }^{-}\pi ^0}^{(\mathrm{P})}=(3.31\pm 0.08)\mathrm{GeV}^{-1},𝒜_{\mathrm{\Omega }^{-}\to \mathrm{\Xi }^0\pi ^{-}}^{(\mathrm{P})}=(5.48\pm 0.09)\mathrm{GeV}^{-1}.`$ (9)
Upon defining the $`|\mathrm{\Delta }𝑰|=1/2,\mathrm{\hspace{0.17em}3}/2`$ amplitudes
$`\alpha _1^{(\mathrm{\Omega })}\equiv \frac{1}{\sqrt{3}}\left(\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^{-}}^{(\mathrm{P})}+\sqrt{2}\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^0}^{(\mathrm{P})}\right),\alpha _3^{(\mathrm{\Omega })}\equiv \frac{1}{\sqrt{3}}\left(\sqrt{2}\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^{-}}^{(\mathrm{P})}-\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^0}^{(\mathrm{P})}\right),`$ (10)
respectively, we can extract the ratio
$`\alpha _3^{(\mathrm{\Omega })}/\alpha _1^{(\mathrm{\Omega })}=-0.072\pm 0.013,`$ (11)
which is higher than the corresponding ratios in octet-hyperon decays listed in Table I, but not significantly so.
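The numbers in (9)-(11) are easy to reproduce (a sketch; both measured amplitudes are taken positive, the small phase-space difference between the two channels is ignored, and the common factor $`\sqrt{2}f_\pi `$ relating $`\alpha `$ to $`𝒜`$ cancels in the ratio):

```python
# Sketch: Omega -> Xi pi bookkeeping from eqs. (9)-(11).
from math import sqrt

A_ximinus = 3.31   # A(Omega- -> Xi- pi0), GeV^-1
A_xizero  = 5.48   # A(Omega- -> Xi0 pi-), GeV^-1

# Ratio of rates, assuming equal phase space:
print("Gamma(Xi0 pi-)/Gamma(Xi- pi0) ~", (A_xizero / A_ximinus) ** 2)  # ~2.7

# Isospin combinations, eq. (10):
a1 = (A_ximinus + sqrt(2.0) * A_xizero) / sqrt(3.0)
a3 = (sqrt(2.0) * A_ximinus - A_xizero) / sqrt(3.0)
print("alpha3/alpha1 =", a3 / a1)   # ~ -0.072, as in eq. (11)
```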
Although the size of this ratio is not clear evidence for violation of the $`|\mathrm{\Delta }𝑰|=1/2`$ rule in $`\mathrm{\Omega }^{-}`$ decays, it leads to a different question, that of the compatibility of the measurements of these decays and those of the octet-hyperon decays. To address this question, we will first extract a $`|\mathrm{\Delta }𝑰|=3/2`$ coupling from $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ decays and then examine its contribution to the octet-hyperon decays.
Employing standard group-theory techniques, we find two different operators that transform as $`(27_\mathrm{L},1_\mathrm{R})`$ and generate $`\mathrm{\Delta }S=1,`$ $`|\mathrm{\Delta }𝑰|=3/2`$ transitions involving $`\mathrm{\Omega }^{-}`$ fields. We write them as
$`\mathcal{L}_1^\mathrm{w}=T_{ij,kl}\xi _{ka}\xi _{lb}\left(𝒞_{27}I_{ab,cd}+𝒞_{27}^{\prime }I_{ab,cd}^{\prime }\right)\xi _{ci}^{\dagger }\xi _{dj}^{\dagger },`$ (12)
where $`𝒞_{27}`$ and $`𝒞_{27}^{\prime }`$ are the weak parameters for the two operators, the baryon fields are contained in the tensors $`I`$ and $`I^{\prime }`$, and additional details can be found in Ref. . This Lagrangian contains the terms
$`\mathcal{L}_{\mathrm{\Omega }^{-}B\varphi }^\mathrm{w}`$ $`=`$ $`{\displaystyle \frac{𝒞_{27}}{f}}\mathrm{\hspace{0.17em}6}\left(\sqrt{2}\overline{\mathrm{\Sigma }}_v^{-}\partial ^\mu K^0+2\overline{\mathrm{\Sigma }}_v^0\partial ^\mu K^+-2\overline{\mathrm{\Xi }}_v^{-}\partial ^\mu \pi ^0+\sqrt{2}\overline{\mathrm{\Xi }}_v^0\partial ^\mu \pi ^+\right)\mathrm{\Omega }_{v\mu }^{-}`$ (14)
$`+{\displaystyle \frac{𝒞_{27}^{\prime }}{f}}\mathrm{\hspace{0.17em}2}\left(\sqrt{2}\overline{\mathrm{\Sigma }}_v^{-}\partial ^\mu K^0-2\overline{\mathrm{\Sigma }}_v^0\partial ^\mu K^+-2\overline{\mathrm{\Xi }}_v^{-}\partial ^\mu \pi ^0+\sqrt{2}\overline{\mathrm{\Xi }}_v^0\partial ^\mu \pi ^+\right)\mathrm{\Omega }_{v\mu }^{-}.`$
From this expression, one can see that the decay modes $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ measure the combination $`3𝒞_{27}+𝒞_{27}^{\prime }.`$ Since the decays $`\mathrm{\Omega }^{-}\to \mathrm{\Sigma }K`$ are kinematically forbidden, and since three body decays of the $`\mathrm{\Omega }^{-}`$ are poorly measured, it is not possible at present to extract these two constants separately.
At tree level, the P-wave amplitudes arise from contact diagrams generated by $`\mathcal{L}_1^\mathrm{w}`$ in (12) and are given by
$`\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^{-}}^{(\mathrm{P})}=-4\sqrt{2}\left(3𝒞_{27}+𝒞_{27}^{\prime }\right),\alpha _{\mathrm{\Omega }^{-}\mathrm{\Xi }^0}^{(\mathrm{P})}=\mathrm{\hspace{0.33em}4}\left(3𝒞_{27}+𝒞_{27}^{\prime }\right).`$ (15)
The value of the constant $`3𝒞_{27}+𝒞_{27}^{\prime }`$ is then found to be
$`3𝒞_{27}+𝒞_{27}^{\prime }=(8.7\pm 1.6)\times 10^{-3}G_\mathrm{F}m_\pi ^2.`$ (16)
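Numerically this extraction is straightforward to reproduce (a sketch, with $`f_\pi \simeq 92.4\mathrm{MeV}`$ assumed; note that at tree level the 27-plet contributes only to $`\alpha _3`$, since eq. (15) gives $`\alpha _1=0`$, so the constant is fixed by the $`|\mathrm{\Delta }𝑰|=3/2`$ combination alone):

```python
# Sketch: extracting 3*C27 + C27' from the Omega data via alpha_3, eq. (15).
from math import sqrt

f_pi = 0.0924                       # GeV (assumed value)
A_ximinus, A_xizero = 3.31, 5.48    # GeV^-1, central values from eq. (9)

# alpha = sqrt(2) * f_pi * A, from the definition in eq. (8):
alpha_ximinus = sqrt(2.0) * f_pi * A_ximinus
alpha_xizero  = sqrt(2.0) * f_pi * A_xizero

alpha3 = (sqrt(2.0) * alpha_ximinus - alpha_xizero) / sqrt(3.0)
# Tree level, from eq. (15): alpha3 = -4*sqrt(3)*(3 C27 + C27').
X = -alpha3 / (4.0 * sqrt(3.0))
print("3 C27 + C27' ~ %.1e (units of G_F m_pi^2)" % X)   # ~ 8.7e-3
```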
This value is consistent with power counting, being suppressed by approximately a factor of $`\mathrm{\Lambda }_{\chi \mathrm{SB}}`$ with respect to the parameter $`\beta _{27}`$ previously discussed.
We now address the question of the size of the contribution of $`\mathcal{L}_1^\mathrm{w}`$ in (12) to the $`|\mathrm{\Delta }𝑰|=3/2`$ decays of octet hyperons at one loop. We again keep only the nonanalytic terms of the loop results. As an illustration of the effect of these terms on the octet-hyperon decays, we present numerical results in Table III, where we look at four simple scenarios to satisfy Eq. (16) in terms of only one parameter. Interestingly, there are no contributions to $`S_3^{(\mathrm{\Lambda })}`$ and $`S_3^{(\mathrm{\Xi })}`$ as before, and so only the amplitudes predicted to be nonzero are displayed. For comparison, we show in the same Table the experimental value of the amplitudes as well as the best theoretical fit at $`𝒪(m_s\mathrm{log}m_s)`$ obtained in Ref. .
The new terms calculated here (with $`\mu =1\mathrm{GeV}`$), induced by $`\mathcal{L}_1^\mathrm{w}`$, are of higher order in $`m_s`$ and are therefore expected to be at most comparable to the best theoretical fit. A quick glance at Table III shows that in some cases the new contributions are much larger. Another way to gauge the size of the new contributions is to compare them with the experimental error in the octet-hyperon decay amplitudes. Since the theory provides a good fit at $`𝒪(m_s\mathrm{log}m_s)`$ , we would like the new contributions (which are of higher order in $`m_s`$) to be at most at the level of the experimental error. From Table III, we see that in some cases the new contributions are significantly larger than these errors. In a few cases they are significantly larger than the experimental amplitudes. All this indicates to us that the measured $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ decay rates imply a $`|\mathrm{\Delta }𝑰|=3/2`$ amplitude that may be too large and in contradiction with the $`|\mathrm{\Delta }𝑰|=3/2`$ amplitudes measured in octet-hyperon decays.
Nevertheless, it is premature to conclude that the measured values for the $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ decay rates must be incorrect because, strictly speaking, none of the contributions to octet-baryon decay amplitudes is proportional to the same combination of parameters measured in $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ decays, $`3𝒞_{27}+𝒞_{27}^{\prime }.`$ It is possible to construct linear combinations of the four amplitudes $`S_3^{(\mathrm{\Sigma })}`$, $`P_3^{(\mathrm{\Sigma })}`$, $`P_3^{(\mathrm{\Lambda })}`$ and $`P_3^{(\mathrm{\Xi })}`$ that are proportional to $`3𝒞_{27}+𝒞_{27}^{\prime }.`$ We find that the most sensitive one is
$$\left(S_3^{(\mathrm{\Sigma })}-4.2P_3^{(\mathrm{\Xi })}\right)_{\mathrm{Exp}}=0.2\pm 0.1,$$
(17)
where we have simply combined the errors in quadrature. The contribution from $`\mathcal{L}_1^\mathrm{w}`$ to this combination is
$$\left(S_3^{(\mathrm{\Sigma })}-4.2P_3^{(\mathrm{\Xi })}\right)_{\mathrm{Theory},\mathrm{new}}\approx \mathrm{\hspace{0.33em}13}\left(3𝒞_{27}+𝒞_{27}^{\prime }\right)\approx \mathrm{\hspace{0.33em}0.1},$$
(18)
which falls within the error in the measurement.
Our conclusion is that the current measurement of the rates for $`\mathrm{\Omega }^{-}\to \mathrm{\Xi }\pi `$ implies a $`|\mathrm{\Delta }𝑰|=3/2`$ amplitude that appears large enough to be in conflict with measurements of $`|\mathrm{\Delta }𝑰|=3/2`$ amplitudes in octet-hyperon decays. However, within current errors and without any additional assumptions about the relative size of $`𝒞_{27}`$ and $`𝒞_{27}^{\prime }`$, the two sets of measurements are not in conflict.
## ACKNOWLEDGMENTS
The material presented here has been drawn from recent papers done in collaboration with A. A. El-Hady and G. Valencia. This work was supported in part by DOE under contract number DE-FG02-92ER40730.
## 1 Introduction
Recently it was noted that topological $`K`$-theory can be usefully employed to describe D-brane charges . In this paper we shall introduce new technical tools which give a refinement of $`K`$-theory (more precisely, a holomorphic version of $`K`$-theory), at the cost of less general applicability. As an example of their application, we apply these tools to a $`𝐙_2`$ subgroup of T-duality (identified with a Fourier-Mukai transform) and show how they can be used to understand Fourier-Mukai transforms beyond the subclass of sheaves usually considered in the physics literature.
It has been observed elsewhere (for example, ) that branes supported on complex submanifolds of complex varieties are naturally described in terms of coherent sheaves. We shall describe how Grothendieck groups of coherent sheaves, the holomorphic version of $`K`$-theory referred to above, can be used to describe D-branes, in the case that all D-branes are wrapped on complex submanifolds. Since we ultimately wish to study T-duality realized as a Fourier-Mukai transform, and Fourier-Mukai transforms are defined, in general, on derived categories, not individual sheaves, we shall also discuss derived categories. In particular, we shall point out a physical interpretation of objects of a derived category, and give a physically-motivated map from objects of a derived category to Grothendieck group elements. We conclude with a discussion of T-duality symmetries in terms of Fourier-Mukai transforms. In particular, we shall examine how Grothendieck groups can be used to extend the action of Fourier-Mukai transforms beyond the class of W.I.T. sheaves considered previously in the physics literature. For completeness, we have also included a short appendix on the basics of topological $`K`$-theory. We suspect the application of these technical tools may have much broader applicability (to the study of Kontsevich’s mirror conjecture, for example), but unfortunately we shall have little to say on such extensions.
We shall only consider D-branes in type II theories, which are described by the $`K`$-theory of complex vector bundles . We will not usually work with space-filling D-branes, and so we shall not concern ourselves with tadpole-cancellation issues.
In it was noted that branes can only consistently wrap a submanifold when the normal bundle to the submanifold admits a $`\text{Spin}^c`$ structure. In this paper we will only work in complex geometry, and as all $`U(N)`$ bundles admit a canonical $`\text{Spin}^c`$ structure \[4, appendix D\], we shall never have to consider this subtlety in this paper. (For a more thorough discussion of the $`\text{Spin}^c`$ constraint in the context of type II compactifications with vanishing cosmological constant, see .)
We shall assume throughout this paper that all varieties are smooth and projective. (For example, all complex tori appearing will implicitly be assumed to be abelian varieties.)
Since the publication of , several other papers have appeared on topological $`K`$ theory and D-branes . We should also mention that the work built upon the earlier works . We have also been informed that another discussion of T-duality in the context of $`K`$-theory will appear in . Also, as this paper was being finalized, another paper on T-duality and $`K`$-theory appeared .
## 2 Grothendieck groups
It has recently been argued by E. Witten that D-brane charges should be understood in terms of topological $`K`$-theory . In this paper, we shall argue that in certain cases it is more useful to work with Grothendieck groups of coherent sheaves. In this particular section we shall define Grothendieck groups, then in later sections we shall show their relation to derived categories and describe how they can give insight into a $`𝐙_2`$ subgroup of T-duality realized as a Fourier-Mukai transformation.
In this paper we shall only work on complex varieties, and will only wrap branes on (complex) subvarieties. This constraint reduces us to a proper subset of all possible D-brane configurations, but by making this restriction we will be able to use more powerful tools. For example, in these circumstances we can make some strong statements concerning supersymmetric vacuum configurations of a D-brane \[2, section 4.2\]. Consider a set of $`N`$ branes on some Kähler variety of dimension $`n`$. If $`F`$ is the curvature of the connection on the $`U(N)`$ bundle, and $`J`$ the Kähler form, then in order to get a supersymmetric vacuum some necessary conditions<sup>1</sup><sup>1</sup>1The attentive reader will note that these are almost, but not quite, two necessary conditions for supersymmetric heterotic vacua. on $`F`$ are \[2, section 4.2\]
$`F`$ $`\in `$ $`\mathrm{\Omega }^{1,1}`$ (1)
$`F\wedge J^{n-1}`$ $`=`$ $`\lambda J^n`$ (2)
for some constant $`\lambda `$.
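A standard aside, in our own notation (writing $`ℰ`$ for the $`U(N)`$ bundle and $`X`$ for the variety, and not tracking the overall normalization, which depends on one's curvature and volume conventions): taking the trace of equation (2) and integrating over $`X`$ shows that $`\lambda `$ is fixed by topological data alone,

$$\lambda \propto \mu (ℰ)=\frac{\mathrm{deg}ℰ}{\text{rank }ℰ},\hspace{1em}\mathrm{deg}ℰ=\int _Xc_1(ℰ)\wedge J^{n-1},$$

so equation (2) is the Hermitian-Einstein condition, and its solvability on a fixed holomorphic bundle is governed by precisely the Mumford-Takemoto (semi)stability that recurs below (the Donaldson-Uhlenbeck-Yau theorem).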
Given a $`C^{\mathrm{\infty }}`$ bundle $`ℰ`$ (with a fixed Hermitian structure) on a complex manifold, there is a one-to-one correspondence between connections $`D_A`$ on $`ℰ`$ that satisfy equation (1) (in other words, holomorphic connections) and holomorphic structures on $`ℰ`$ \[18, section VII.1\]. Thus, specifying a bundle with a fixed holomorphic structure is equivalent to specifying a $`C^{\mathrm{\infty }}`$ bundle with a choice of holomorphic connection. (If in addition the holomorphic connection satisfies equation (2), then the corresponding holomorphic bundle will be Mumford-Takemoto semistable.)
Thus, within the context of the restriction to complex subvarieties and holomorphic bundles, the specification of a holomorphic bundle on some subvariety is equivalent to specifying a complex $`C^{\mathrm{\infty }}`$ bundle together with a choice of holomorphic connection on the bundle – data associated with a D-brane.
Instead of working with topological $`K`$-theory, which only encodes $`C^{\mathrm{\infty }}`$ bundles, it can be advantageous to work with a “holomorphic” version of $`K`$-theory, which implicitly encodes not only choices of $`C^{\mathrm{\infty }}`$ bundles, but also specific choices of (holomorphic) connections on the bundles. Such a holomorphic version of topological $`K`$-theory exists, and is known as a Grothendieck group (of locally free sheaves).
Before we actually define Grothendieck groups, we need to make some general observations. The motivation given above for working with Grothendieck groups is clearly rather weak, but in later sections we shall give stronger arguments. We pointed out that the conditions for a supersymmetric D-brane vacuum on a complex Kähler manifold imply that the connection on the $`C^{\mathrm{\infty }}`$ bundle is holomorphic, so the combined $`C^{\mathrm{\infty }}`$ bundle plus connection can be described equivalently in terms of a holomorphic bundle. However, when we start working with configurations of both branes and antibranes, we should not expect conditions for a supersymmetric vacuum to be of great relevance, and so it is not completely clear from this description that Grothendieck groups are necessarily useful objects. We shall see later that working with Grothendieck groups gives us a natural arena in which to examine T-duality, for example, so by working with them we do gain some useful insight.
Before defining Grothendieck groups, another technical observation should be made. In order to specify a supersymmetric vacuum for a D-brane, we must specify not just any holomorphic connection, but one which is Hermitian-Einstein (equation (2)). Thus, to specify a supersymmetric vacuum, not any holomorphic bundle will do, but only those which are Mumford-Takemoto semistable. Note this means that given a general element of the Grothendieck group, there are not one but two reasons why it may fail to describe a supersymmetric vacuum – not only because of the simultaneous presence of branes and antibranes, but also because the (holomorphic) bundles are not necessarily Mumford-Takemoto semistable.
Strictly speaking there are two distinct Grothendieck groups relevant here, which we shall denote $`K^0(X)`$ and $`K_0^{\prime }(X)`$ . We shall first define both, then point out that in reasonably nice circumstances they are isomorphic. To distinguish Grothendieck groups from topological $`K`$-theory, we shall use $`K^0(X)`$ and $`K_0^{\prime }(X)`$ to denote Grothendieck groups and $`K(X)`$ to denote topological $`K`$-theory.
The Grothendieck group $`K_0^{\prime }(X)`$ of coherent sheaves is defined to be the free abelian group on coherent sheaves on $`X`$, modulo elements $`ℱ-ℱ^{\prime }-ℱ^{\prime \prime }`$, where $`ℱ`$, $`ℱ^{\prime }`$, and $`ℱ^{\prime \prime }`$ are coherent sheaves related by short exact sequences of the form
$$0\to ℱ^{\prime }\to ℱ\to ℱ^{\prime \prime }\to 0$$
The Grothendieck group $`K^0(X)`$ of locally free sheaves is defined to be the free abelian group on locally free sheaves on $`X`$, modulo elements $`ℰ-ℰ^{\prime }-ℰ^{\prime \prime }`$, where $`ℰ`$, $`ℰ^{\prime }`$, and $`ℰ^{\prime \prime }`$ are locally free sheaves related by short exact sequences of the form
$$0\to ℰ^{\prime }\to ℰ\to ℰ^{\prime \prime }\to 0$$
More formally \[19, prop. 4.4\], $`K^0`$ is a contravariant functor from the category of noetherian schemes to the category of rings. Note in passing that these definitions of $`K_0^{\prime }`$ and $`K^0`$ are closely analogous to the definition of topological $`K^0`$.
For further information on Grothendieck groups (and their relation to derived categories, which shall appear shortly), see for example .
It can be shown (,\[22, exercise III.6.9\]) that on a smooth projective variety $`X`$, the natural map $`K^0(X)\to K_0^{\prime }(X)`$ is an isomorphism. In the rest of this paper we shall assume that we are always working on a smooth projective variety, and so we shall use $`K^0`$ and $`K_0^{\prime }`$ more or less interchangeably. We shall also often refer to “the” Grothendieck group.
The reader may wonder how precisely Grothendieck groups are related to topological $`K`$-theory. In order to get some insight into the relation between these objects, let us consider an example. Suppose $`X`$ is a smooth compact Riemann surface. It is straightforward to compute<sup>2</sup><sup>2</sup>2Using the Atiyah-Hirzebruch spectral sequence. See \[21, section 2\]. that topological $`K^0(X)=𝐙^2`$. By contrast \[22, exercise II.6.11\], the Grothendieck group $`K_0^{\prime }(X)=\text{Pic }X\oplus 𝐙`$. Although the topological $`K`$-theory groups and Grothendieck groups are not identical, they are still closely related. Note for example that for $`X`$ a smooth Riemann surface, $`\text{Pic }X`$ is an extension of $`𝐙`$ by $`\text{Jac }X`$, so the Grothendieck group $`K_0^{\prime }(X)=\text{Pic }X\oplus 𝐙`$ includes the topological $`K`$-theory group $`K^0(X)=𝐙\oplus 𝐙`$ as a subset. In other words, in this example the Grothendieck group contains more information than topological $`K^0`$. This certainly agrees with the intuition we outlined earlier – the Grothendieck group should contain information not only about the choice of $`C^{\mathrm{\infty }}`$ bundle, but also about the precise choice of (holomorphic) connection on that bundle.
In general it is easy to see that the Grothendieck group $`K^0(X)`$ maps into topological $`K(X)`$: one can map a locally free sheaf to a smooth bundle, essentially just by forgetting the holomorphic structure. The attentive reader might be concerned that this map is not well-defined – in the definition of $`K^0`$, $`ℰ`$ is identified with $`ℰ^{\prime }\oplus ℰ^{\prime \prime }`$ if $`ℰ`$ is an extension of either $`ℰ^{\prime }`$ or $`ℰ^{\prime \prime }`$ by the other, whereas in topological $`K^0`$ we only identify split extensions. However, it is a standard fact that any extension of continuous vector bundles splits \[23, section 3.9\] (whereas not every extension of holomorphic bundles splits holomorphically), so in fact the obvious map is well-defined. Unfortunately, in general this map will not be surjective. One way to see this is to note that Chern classes of a holomorphic bundle on a projective variety $`X`$ live only in a subset of $`H^{*}(X,𝐙)`$ – in particular, $`c_i\in H^{(i,i)}(X)\cap H^{2i}(X,𝐙)`$ – whereas Chern classes of an arbitrary $`C^{\mathrm{\infty }}`$ complex bundle are not so restricted.
In topological $`K`$-theory, one can define $`K^1`$ in addition to $`K^0`$. There are also holomorphic versions of $`K^1`$, though they are rather more obscure \[24, chapter 13\]. We shall not use these holomorphic versions of $`K^1`$, though for completeness we list them here.
Define \[24, chapter 13\] $`K_1^{\prime }(X)`$ to be the free abelian group on pairs $`(ℱ,\rho )`$ where $`ℱ`$ is a coherent sheaf on $`X`$ and $`\rho :ℱ\to ℱ`$ is an isomorphism, modulo elements $`(ℱ,\rho )-(ℱ^{\prime },\rho ^{\prime })-(ℱ^{\prime \prime },\rho ^{\prime \prime })`$ where $`ℱ`$, $`ℱ^{\prime }`$, and $`ℱ^{\prime \prime }`$ are coherent sheaves related by short exact sequences of the form
$$0\to ℱ^{\prime }\to ℱ\to ℱ^{\prime \prime }\to 0$$
and also modulo $`(ℱ,\rho \psi )-(ℱ,\rho )-(ℱ,\psi )`$.
Define \[24, chapter 13\] $`K^1(X)`$ analogously to $`K_1^{\prime }(X)`$, that is, to be the free abelian group on pairs $`(ℰ,\rho )`$ where $`ℰ`$ is a locally free sheaf on $`X`$ and $`\rho :ℰ\to ℰ`$ is an isomorphism, modulo elements $`(ℰ,\rho )-(ℰ^{\prime },\rho ^{\prime })-(ℰ^{\prime \prime },\rho ^{\prime \prime })`$ where $`ℰ`$, $`ℰ^{\prime }`$, and $`ℰ^{\prime \prime }`$ are locally free sheaves related by short exact sequences of the form
$$0\to ℰ^{\prime }\to ℰ\to ℰ^{\prime \prime }\to 0$$
and also modulo $`(ℰ,\rho \psi )-(ℰ,\rho )-(ℰ,\psi )`$.
In passing, note that these definitions are closely analogous to a definition of topological $`K^1`$ used recently in, for example, .
## 3 Derived categories
Ultimately in this paper we would like to study the action of T-duality (realized as a Fourier-Mukai transform) on brane/antibrane configurations. However, Fourier-Mukai transforms are defined on derived categories of coherent sheaves, not individual sheaves, in general. In special cases<sup>3</sup><sup>3</sup>3Indeed, previously in the physics literature authors have only considered these special cases when discussing Fourier-Mukai transforms. one can make sense out of the action of a Fourier-Mukai transform on an individual sheaf; however, to discuss Fourier-Mukai transforms in generality, one must turn to derived categories.
Because of our interest in T-duality, we shall now discuss derived categories and their physical relevance. In particular, we shall show how an element of a Grothendieck group can be obtained from an object in a derived category (in a physically meaningful manner). In the next section, we shall put this map to use in studying T-duality in terms of Fourier-Mukai transformations. As usual, we shall be implicitly working over complex varieties and with holomorphic bundles, and so we shall assume that all tachyons, viewed as bundle maps, are holomorphic.
Recall from that given a coincident brane/antibrane pair, with bundles $`ℰ`$ and $`ℱ`$ respectively, and a tachyon field $`T:ℰ\to ℱ`$, then the resulting brane charge one would actually be left with in vacuum is (at least morally) the Grothendieck group element given by $`\mathrm{ker}T-\text{coker }T`$, or equivalently $`H^0-H^1`$, where the $`H^i`$ are the cohomology of the complex
$$0\to ℰ\stackrel{T}{\to }ℱ\to 0$$
(Note that since we are working in complex geometry, $`\mathrm{ker}T`$ and $`\text{coker }T`$ make sense as sheaves<sup>4</sup><sup>4</sup>4A technical note: we shall implicitly restrict to complexes whose cohomology sheaves are coherent.)
One can also imagine working with more general complexes of bundles. These would be described as a sandwich of alternating branes and anti-branes. For example, let $`ℰ^{\bullet }`$ denote a complex of bundles
$$\cdots \stackrel{T_{i-1}}{\to }ℰ^i\stackrel{T_i}{\to }ℰ^{i+1}\stackrel{T_{i+1}}{\to }ℰ^{i+2}\stackrel{T_{i+2}}{\to }\cdots $$
(where, by definition of complex<sup>5</sup><sup>5</sup>5 Note that, for example, $`T_{2j+1}T_{2j}`$ is a map from the total sheaf on the brane (equation (3)) back into itself, whereas tachyons should only map branes to antibranes and vice-versa. Thus, in order to consistently break up the tachyon between the total brane (3) and antibrane (4) into an interweaving series of maps, as will be mentioned shortly, we must demand that $`T_{j+1}T_j=0`$, i.e., that the maps define a complex., $`T_{j+1}T_j=0`$) such that the $`ℰ^{2i}`$ all live on (coincident) branes and the $`ℰ^{2i+1}`$ all live on (coincident) anti-branes. Put another way, the total sheaf on the brane is
$$\underset{n}{\bigoplus }ℰ^{2n}$$
(3)
and the total sheaf on the antibrane is
$$\underset{n}{\bigoplus }ℰ^{2n+1}$$
(4)
with the tachyon potential broken up into an interweaving series of maps between the brane and antibrane. After cancelling as much as possible, one is left with an element of $`K_0^{\prime }(X)`$ given by
$$\left[\underset{n}{\bigoplus }H^{2n}\right]-\left[\underset{n}{\bigoplus }H^{2n+1}\right]$$
Clearly such a complex encodes a lot of physically-irrelevant information. Indeed, the complex $`ℰ^{\bullet }`$ described above is physically identical to the complex
$$\cdots \stackrel{0}{\to }H^{j-1}\stackrel{0}{\to }H^j\stackrel{0}{\to }H^{j+1}\stackrel{0}{\to }\cdots $$
and even the complex
$$0\to \underset{n}{\bigoplus }H^{2n}\stackrel{0}{\to }\underset{n}{\bigoplus }H^{2n+1}\to 0$$
The only physically relevant aspect of the complex is its image in the Grothendieck group.
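As a toy illustration of how much information collapses (a sketch of ours, with finite-dimensional vector spaces standing in for sheaves and sympy doing the linear algebra): the alternating sum of cohomology dimensions of a complex equals the alternating sum of the dimensions of its terms, so the Grothendieck-group image is blind to the details of the tachyon maps.

```python
from sympy import Matrix, zeros

# a three-term complex  0 -> Q^3 --T0--> Q^3 --T1--> Q^2 -> 0  with T1*T0 = 0;
# even slots play the role of branes, odd slots of antibranes
dims = [3, 3, 2]
T0 = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
T1 = Matrix([[0, 0, 1], [0, 0, 0]])
maps = [T0, T1]
assert T1 * T0 == zeros(2, 3)

# dim H^i = dim ker(T_i) - dim im(T_{i-1})
h = []
for i, d in enumerate(dims):
    ker = d - maps[i].rank() if i < len(maps) else d
    im = maps[i - 1].rank() if i > 0 else 0
    h.append(ker - im)

print(h)  # [1, 0, 1]
assert sum((-1)**i * x for i, x in enumerate(h)) \
    == sum((-1)**i * d for i, d in enumerate(dims))  # both equal 2
```

Replacing $`T_0`$, $`T_1`$ by any other maps with $`T_1T_0=0`$ changes the individual $`H^i`$ but never the alternating sum – the toy analogue of the statement above.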
Although the only physically relevant aspect of a complex of branes and anti-branes is its Grothendieck-group image, we shall nevertheless find it useful to work in terms of complexes in the next section.
Since we are working in algebraic geometry, the attentive reader may wonder why we are restricting to locally free sheaves, rather than considering general coherent sheaves. For example, we could identify a torsion sheaf with a lower-dimensional D-brane. The difficulty is that we wish to speak of maps between the worldvolumes described by tachyons, and although open strings connecting branes and antibranes of the same dimension certainly contain tachyon modes, open strings connecting branes and antibranes of distinct dimension need not contain tachyon modes – whether a tachyon is actually present varies from case to case. Thus, we are restricting to locally free sheaves on worldvolumes all of the same dimension.
There exists a useful mechanism for working with complexes of holomorphic bundles, and more generally, holomorphic sheaves. This tool is known as a derived category.
A derived category of coherent sheaves on some variety $`X`$ is a category whose objects are complexes of sheaves on $`X`$, such that the cohomology sheaves of the complexes are coherent. A derived category of (bounded complexes of) sheaves on $`X`$ is denoted $`D^b(X)`$. The subcategory defined by complexes of sheaves with coherent cohomology is denoted $`D_c^b(X)`$. In general, not all derived categories are derived categories of sheaves; however, all the derived categories we shall describe in this paper are derived categories of sheaves.
A proper explanation of derived categories is well beyond the scope of this paper – for more information, see for example . However we shall mention one useful fact in passing. Morphisms of chain complexes that preserve cohomology (so-called quasi-isomorphisms) descend to isomorphisms in the derived category, so intuitively the reader might, very loosely<sup>6</sup><sup>6</sup>6Technically this is incorrect; however, for those readers unwilling to delve into technicalities, this description does give some handle on matters., imagine that any two complexes in the derived category with isomorphic cohomology groups are themselves considered isomorphic.
It has been speculated previously in the physics literature that derived categories were relevant for physics . In the context of holomorphic bundles, we now have an explicit correspondence.
Note that derived categories, just like complexes, contain a great deal of physically irrelevant information. We do not need to know the full cohomology of a complex of sheaves, but only the formal difference
$$\left(\underset{n}{\bigoplus }H^{2n}\right)-\left(\underset{n}{\bigoplus }H^{2n+1}\right)$$
In other words, the only physically relevant part of an object in a derived category is its image in the Grothendieck group of coherent sheaves.
The attentive reader should be slightly bothered by our use of derived categories to describe sandwiches of branes and antibranes. In these brane/antibrane sandwiches, we implicitly assumed that all the branes and antibranes were of the same dimension (equivalently, that we had a locally free sheaf on the worldvolume of each). By contrast, the objects of a derived category are complexes of more or less arbitrary sheaves, whose cohomology groups are coherent sheaves. Naively, it would seem that our brane/antibrane sandwich construction can only sense a small portion of the possible objects of a derived category.
However, this is not the case. Any bounded complex of coherent sheaves on a smooth variety is quasi-isomorphic to a bounded complex of locally free sheaves, that is, admits a chain map to a complex of locally free sheaves such that the chain map preserves the cohomology of the complex. (This is known formally as a Cartan-Eilenberg resolution of the complex \[25, section 5.7\].) Since quasi-isomorphisms descend to isomorphisms in the derived category, we see that any complex of coherent sheaves is isomorphic (within the derived category) to a complex of locally free sheaves.
There is one further technical problem that might bother the attentive reader. We have just argued that any complex of coherent sheaves can be equivalently described by a complex of locally free sheaves, and so in terms of a brane/antibrane sandwich. However, the objects of a derived category are not precisely complexes of coherent sheaves, but rather complexes of sheaves whose cohomology sheaves are coherent. In the special case of sheaves on smooth projective varieties, we strongly suspect that the two categories are equivalent, but we do not have a rigorous argument to support that claim.
## 4 T-duality
Now that we have introduced relevant technical machinery, we shall discuss T-duality. It is often said that a $`𝐙_2`$ subgroup of T-duality is realized via Fourier-Mukai transforms , and in the present context we shall find a natural setting for this ansatz. We shall begin by giving a physical motivation for the identification of a $`𝐙_2`$ subgroup of T-duality with a Fourier-Mukai transform, then go through a number of technical results on Fourier-Mukai transforms, and finally conclude with a discussion of why precisely one needs Grothendieck groups and derived categories to discuss Fourier-Mukai transforms on general D-brane configurations.
### 4.1 Physical motivation
Before we begin discussing Fourier-Mukai transforms in technical detail, we shall discuss in a pair of examples why precisely it is sometimes claimed that a $`𝐙_2`$ subgroup of the T-duality group acting on branes wrapped on complex algebraic tori is realized as a Fourier-Mukai transform. Note that for D-branes wrapped on $`T^{2g}`$, when we speak of T-duality we mean T-duality along each of the $`2g`$ $`S^1`$’s in $`T^{2g}`$. Since we have T-dualized an even number of times, we will always take type IIA back to type IIA, and type IIB back to type IIB.
1) Consider a rank $`N`$ bundle on $`T^2`$, with $`c_1=0`$ – in other words, an $`SU(N)`$ bundle on $`T^2`$. This precisely corresponds to $`N`$ $`Dp`$-branes wrapped on $`T^2`$, with no immersed $`D(p-2)`$-brane charge. One expects that T-duality should map this to a configuration of $`N`$ $`D(p-2)`$-branes, with support only at points on $`\widehat{T}^2`$.
Indeed, this is precisely what we find. For reasonably nice<sup>7</sup><sup>7</sup>7In notation to be defined shortly, W.I.T.<sub>1</sub>. $`SU(N)`$ bundles $`ℰ`$ on $`T^2`$, the Fourier-Mukai transform is a skyscraper sheaf on $`\widehat{T}^2`$, supported at points.
2) Consider a rank $`N`$ bundle $`ℰ`$ on $`T^4`$ – in other words, a $`U(N)`$ bundle on $`T^4`$. This precisely corresponds to $`N`$ $`Dp`$-branes wrapped on $`T^4`$, with immersed $`D(p-2)`$-brane charge $`c_1(ℰ)`$, and with $`D(p-4)`$-brane charge given by $`ch_2(ℰ)=(1/2)c_1(ℰ)^2-c_2(ℰ)`$. Under T-duality we expect $`D(p-4)`$-brane charge<sup>8</sup><sup>8</sup>8A small clarification is in order. Given some $`Dp`$-brane, there are two ways to get, say, $`D(p-4)`$-brane charge: (i) add a $`D(p-4)`$-brane (add a torsion sheaf, in more algebraic language), and (ii) modify $`ch_2`$ of the bundle on the $`Dp`$-brane worldvolume. More globally one expects the moduli space to be more or less reducible, with these options corresponding to distinct components. For simplicity we only discuss option (ii) in the example above. on $`T^4`$ to become $`Dp`$-branes wrapped on $`\widehat{T}^4`$, and $`Dp`$-branes wrapping $`T^4`$ to become $`D(p-4)`$-brane charge on $`\widehat{T}^4`$. Thus, we expect the T-dual to this configuration to be another bundle $`\widehat{ℰ}`$ on the dual $`T^4`$, of rank $`-ch_2(ℰ)`$, and with $`ch_2(\widehat{ℰ})=-\text{rank }ℰ`$.
Indeed, this is precisely what we find. For reasonably nice<sup>9</sup><sup>9</sup>9In notation to be defined shortly, W.I.T.<sub>1</sub>. bundles $`ℰ`$ on $`T^4`$, the dual is a bundle $`\widehat{ℰ}`$ of<sup>10</sup><sup>10</sup>10The equations shown correct typographical errors in equation (3.2.16) of . We would like to thank Kentaro Hori for pointing out these errors to us.
$`\text{rank }\widehat{ℰ}`$ $`=`$ $`-ch_2(ℰ)`$
$`=`$ $`c_2(ℰ)-(1/2)c_1(ℰ)^2`$
$`c_1(\widehat{ℰ})`$ $`=`$ $`\sigma (c_1(ℰ))`$
$`ch_2(\widehat{ℰ})`$ $`=`$ $`-\text{rank }ℰ`$
where $`\sigma :H^2(T^4,𝐙)\to H^2(\widehat{T}^4,𝐙)`$ is an isomorphism.
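As a concrete arithmetic instance (illustrative numbers of our own choosing): a rank-two bundle $`ℰ`$ on $`T^4`$ with $`c_1(ℰ)=0`$ and $`c_2(ℰ)=3`$ has $`ch_2(ℰ)=-3`$, so its transform $`\widehat{ℰ}`$ is a rank-three bundle with $`c_1(\widehat{ℰ})=0`$ and $`ch_2(\widehat{ℰ})=-2`$, i.e. $`c_2(\widehat{ℰ})=2`$: two wrapped branes carrying three units of $`D(p-4)`$-brane charge trade places with three wrapped branes carrying two units.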
Thus, at least in these two examples, the usual claim that T-duality of branes is realized by Fourier-Mukai transform seems to check out.
In discussions of Fourier-Mukai transforms in the physics literature, a single sheaf is mapped to a single sheaf. This is not the most general way that Fourier-Mukai transforms act; it is also not the most natural. In general, Fourier-Mukai transforms act on derived categories of coherent sheaves, that is, they act on complexes of sheaves. One can act on a single sheaf $`ℱ`$ by using the trivial complex
$$0\to ℱ\to 0$$
but in general the Fourier-Mukai transform will not be another trivial complex, but a much more complicated complex. In the next section we shall give the general technical definition of a Fourier-Mukai transform, then describe the special cases in which it has a well-defined action on individual coherent sheaves.
### 4.2 Technical definitions
Let $`X`$ and $`\widehat{X}`$ be projective varieties (not necessarily tori, for the moment). A Fourier-Mukai transform is a functor $`𝒯`$ between (in fact, an equivalence of) the derived categories $`D^b(X)`$ and $`D^b(\widehat{X})`$. More precisely, if $`\pi _1:X\times \widehat{X}\to X`$ and $`\pi _2:X\times \widehat{X}\to \widehat{X}`$ are the obvious projections, then for any $`𝒫\in \text{Ob }D^b(X\times \widehat{X})`$, we can define a Fourier-Mukai functor<sup>11</sup><sup>11</sup>11In general, any equivalence of derived categories $`D^b(X)`$ and $`D^b(\widehat{X})`$ for any smooth projective varieties $`X`$ and $`\widehat{X}`$ can be written in the form of equation (5) for some $`𝒫\in \text{Ob }D^b(X\times \widehat{X})`$ .
$$\underset{¯}{R}\pi _{2*}\left(𝒫\stackrel{𝐋}{\otimes }\pi _1^{*}(-)\right):D^b(X)\to D^b(\widehat{X})$$
(5)
In the special case that $`𝒫`$ is a locally free sheaf on $`X\times \widehat{X}`$ (the only case we shall consider), the Fourier-Mukai functor simplifies to become the right-derived functor<sup>12</sup><sup>12</sup>12 As an aside, it is perhaps worth mentioning that conditions for a locally free sheaf $`𝒫`$ to define an equivalence of categories via equation (6) are known . The locally free sheaf $`𝒫`$ defines an equivalence of categories via equation (6) precisely when for all points $`x\in X`$, $`𝒫_x`$ is simple, $`𝒫_x=𝒫_x\otimes \omega _{\widehat{X}}`$ (where $`\omega _{\widehat{X}}`$ is the dualizing sheaf on $`\widehat{X}`$), and for any two distinct points $`x_1`$, $`x_2`$ of $`X`$ and any integer $`i`$, one has $`\text{Ext}_{\widehat{X}}^i(𝒫_{x_1},𝒫_{x_2})=0`$ .
$$\underset{¯}{R}\pi _{2*}(𝒫\otimes \pi _1^{*}(-)):D^b(X)\to D^b(\widehat{X})$$
(6)
We shall denote this functor by $`𝒯:D^b(X)\to D^b(\widehat{X})`$, and we shall usually restrict to $`D_c^b(X)`$ and $`D_c^b(\widehat{X})`$.
In the remainder of this section, we shall specialize to the case that $`X`$ and $`\widehat{X}`$ are dual projective complex tori, and that $`𝒫`$ is the Poincaré bundle on $`X\times \widehat{X}`$.
Although Fourier-Mukai transforms are defined on derived categories, that is, on complexes of sheaves, there is a way to make sense out of their action on individual sheaves in special cases, and this is the specialization usually invoked in the physics literature. First, note that given any coherent sheaf $`ℱ`$, we can define the trivial complex
$$0\to ℱ\to 0$$
(7)
thus we can map individual coherent sheaves into the class of objects of a derived category. We say a coherent sheaf $`ℱ`$ is W.I.T.<sub>n</sub> if
$$R^i\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℱ\right)=0$$
for all $`i`$ except $`i=n`$. Then, the Fourier-Mukai transform of a sheaf $`ℱ`$, identified with an object of the derived category via the trivial complex (7), is another sheaf (also defined via (7)), given by
$$\widehat{ℱ}=R^n\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℱ\right)$$
Moreover, it can be shown that if $`ℱ`$ is W.I.T.<sub>n</sub> for some $`n`$, then $`\widehat{ℱ}`$ is also W.I.T.$`_{n^{\prime }}`$ for some $`n^{\prime }`$, and moreover $`\widehat{\widehat{ℱ}}=(-1)^{*}ℱ`$, where $`(-1)`$ multiplies all coordinates on the torus by $`-1`$ . Clearly, those coherent sheaves that are W.I.T.<sub>n</sub> for some $`n`$ have well-behaved dualization properties, and so physicists speaking of Fourier-Mukai transformations usually assume the sheaves in question are all W.I.T. For example, in the examples at the beginning of this section, it was assumed that the coherent sheaves given were W.I.T.<sub>1</sub>. However, not all coherent sheaves of interest are W.I.T., and for the more general case one needs the more general methods outlined in this paper. We shall speak to the more general case, and the precise relevance of the W.I.T. condition, in a later section.
### 4.3 Action on Grothendieck groups
Although Fourier-Mukai transforms are defined on derived categories, they factor into an action on Grothendieck groups of coherent sheaves, in a manner that should be suggested by the physical setup of section 3. Let $`\alpha _X:D_c^b(X)\to K_0^{\prime }(X)`$ be defined as the map that takes a complex of sheaves into the alternating sum of the cohomologies of the complex, i.e.,
$$\alpha _X:ℰ^{\bullet }\mapsto \left[\underset{n}{\bigoplus }H^{2n}(ℰ^{\bullet })\right]-\left[\underset{n}{\bigoplus }H^{2n+1}(ℰ^{\bullet })\right]$$
(the same map we introduced in more physical terms in section 3) and let $`𝒯:D_c^b(X)\to D_c^b(\widehat{X})`$ denote the Fourier-Mukai transform; then we have a commutative diagram
$$\begin{array}{ccc}\text{Ob }D_c^b(X)& \stackrel{\alpha _X}{\to }& K_0^{\prime }(X)\\ 𝒯\downarrow & & \downarrow 𝒯_K\\ \text{Ob }D_c^b(\widehat{X})& \stackrel{\alpha _{\widehat{X}}}{\to }& K_0^{\prime }(\widehat{X})\end{array}$$
where $`𝒯_K:K_0^{\prime }(X)\to K_0^{\prime }(\widehat{X})`$ is defined by
$$𝒯_K(ℱ)=\underset{i}{\sum }(-1)^iR^i\pi _{2*}(𝒫\otimes \pi _1^{*}ℱ)$$
In other words, the action of Fourier-Mukai transforms on derived categories factors into an action on Grothendieck groups of coherent sheaves.
### 4.4 A sign ambiguity
The attentive reader will notice there is a minor sign ambiguity in our presentation of Fourier-Mukai transformations. One typically defines the inverse of a Fourier-Mukai transformation with minor sign asymmetries relative to the original transformation \[30, section 3.2\], just as inverses of Fourier transformations are often defined with relative signs. By contrast, we have presented Fourier-Mukai transformations in an implicitly symmetric fashion, which means our results can only be interpreted physically up to a $`𝐙_2`$ ambiguity.
In order to describe this sign problem more precisely, let us reconsider the two examples given at the beginning of the section, being somewhat more careful about signs.
1) Consider a holomorphic rank $`N`$ bundle $`ℰ`$ on $`T^2`$, with $`c_1=0`$ – in other words, an $`SU(N)`$ bundle on $`T^2`$. As mentioned earlier, we assume $`ℰ`$ is W.I.T.<sub>1</sub>, so
$$R^0\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℰ\right)=0$$
A close examination of our definition of Fourier-Mukai transform reveals that, as an element of $`K_0^{\prime }(\widehat{T}^2)`$, the Fourier-Mukai transform of $`ℰ`$ is not precisely the torsion sheaf $`\widehat{ℰ}=R^1\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℰ\right)`$ but rather the virtual torsion sheaf $`-\widehat{ℰ}\in K_0^{\prime }(\widehat{T}^2)`$.
2) Consider a holomorphic rank $`N`$ bundle $`ℰ`$ on $`T^4`$ – in other words, a $`U(N)`$ bundle on $`T^4`$. Assume the complex structure on $`T^4`$ is such that the $`T^4`$ is projective. Earlier we mentioned that the Fourier-Mukai transform of $`ℰ`$ is a bundle $`\widehat{ℰ}`$ on $`\widehat{T}^4`$, in the case that $`ℰ`$ is W.I.T.<sub>1</sub>, namely
$`R^0\pi _{2*}(𝒫\otimes \pi _1^{*}ℰ)`$ $`=`$ $`0`$
$`R^2\pi _{2*}(𝒫\otimes \pi _1^{*}ℰ)`$ $`=`$ $`0`$
A close examination of our definition of the Fourier-Mukai transform reveals that, as an element of $`K_0^{\prime }(\widehat{T}^4)`$, the Fourier-Mukai transform of $`ℰ`$ is not precisely the bundle $`\widehat{ℰ}=R^1\pi _{2*}(𝒫\otimes \pi _1^{*}ℰ)`$ but rather the virtual bundle $`-\widehat{ℰ}\in K_0^{\prime }(\widehat{T}^4)`$.
As it has been presented so far, this sign problem could naively be cured by redefining the Fourier-Mukai transform. Unfortunately, the difficulty is much deeper. Consider applying a Fourier-Mukai transform twice. If we are studying branes wrapped on $`T^{2g}`$, then this means T-dualizing along each of the $`2g`$ $`S^1`$’s in $`T^{2g}`$ twice, and so intuitively we should return to where we started. According to , $`𝒯^2=(-1)^{*}[g]`$ as an action on $`D^b(T^{2g})`$. The $`[g]`$ formally shifts all complexes $`g`$ places to the right, and the $`(-1)`$ multiplies all complex coordinates on the torus by $`-1`$. This descends to an action on the Grothendieck group that, for $`g`$ odd, switches signs (naively exchanging branes and antibranes), and for $`g`$ even, leaves the Grothendieck group essentially invariant.
Thus, if we apply Fourier-Mukai transform twice, then we do not get precisely the same element of the Grothendieck group we started with, but rather an element differing by a sign. Thus, we can clearly identify Fourier-Mukai transforms with T-duality only up to a $`𝐙_2`$. As mentioned earlier, this is not a fundamental difficulty, but merely reflects the fact that we have defined the Fourier-Mukai transform symmetrically with respect to a torus and its dual, rather than with sign asymmetries that are often introduced.
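To make the sign concrete in the simplest case, here is a small sketch (ours) of the charge bookkeeping on $`T^2`$. We take as given the standard fact that, for the classical Fourier-Mukai kernel on an elliptic curve ($`g=1`$), the induced action of $`𝒯_K`$ on the charges $`(\text{rank},\mathrm{deg})`$ is $`(r,d)\mapsto (d,-r)`$, up to the convention-dependent signs just discussed:

```python
def t_k(charge):
    """Induced Fourier-Mukai action on (rank, degree) charges on T^2."""
    r, d = charge
    return (d, -r)

E = (2, 3)              # two wrapped branes, three units of lower-brane charge
once = t_k(E)           # (3, -2): mixed signs, i.e. branes and antibranes
twice = t_k(once)       # (-2, -3) == -E: for g odd, applying T twice flips the sign
assert twice == (-E[0], -E[1])
print(once, twice)
```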
### 4.5 Non-W.I.T. sheaves
Earlier we gave the definition of W.I.T. sheaves, and noted that for W.I.T. sheaves, Fourier-Mukai transforms simplify greatly – their action becomes well defined on individual W.I.T. sheaves, and one does not need the full technology of derived categories and/or Grothendieck groups. In prior physics literature on Fourier-Mukai transforms, all sheaves were typically assumed to be W.I.T., for precisely this reason. Unfortunately, not all the coherent sheaves that one would like to study are W.I.T. – not even all supersymmetric D-brane vacua are W.I.T. – and for the more general case one needs the more sophisticated methods reviewed in this paper. In this section we shall work through an example of a non-W.I.T. sheaf, and speak to the relationship between the W.I.T. condition and Mumford-Takemoto semistability.
First, let us construct an easy explicit example of a non-W.I.T. sheaf. Consider the sheaf $`ℰ\oplus 𝒯`$ on $`T^2`$, where $`ℰ`$ is a W.I.T. rank $`N`$ bundle of $`c_1=0`$, and $`𝒯`$ is a torsion sheaf supported at $`N^{\prime }`$ points on $`T^2`$. It is easy to check that this sheaf is not W.I.T. As described earlier, the Fourier-Mukai transform of $`ℰ`$ is $`-\widehat{ℰ}=-R^1\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℰ\right)\in K_0^{\prime }(\widehat{T}^2)`$, and the Fourier-Mukai transform of $`𝒯`$ is a rank $`N^{\prime }`$ bundle $`\widehat{𝒯}`$ on $`\widehat{T}^2`$. Thus, the Fourier-Mukai transform of the sheaf $`ℰ\oplus 𝒯`$ is the virtual sheaf $`\widehat{𝒯}-\widehat{ℰ}\in K_0^{\prime }(\widehat{T}^2)`$. In other words, the Fourier-Mukai transform of a non-W.I.T. sheaf is not an honest sheaf, but rather some general element of the Grothendieck group of coherent sheaves (or, depending on the reader’s preference, the derived category of coherent sheaves) on the dual algebraic torus.
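In the same charge bookkeeping used in the sketch of section 4.4 (again assuming the induced action $`(r,d)\mapsto (d,-r)`$ on $`T^2`$), the example reads:

```python
def t_k(charge):
    """Induced Fourier-Mukai action on (rank, degree), as before."""
    r, d = charge
    return (d, -r)

N, N_prime = 4, 3
E = (N, 0)               # W.I.T. rank-N bundle with c_1 = 0
T = (0, N_prime)         # torsion sheaf supported at N' points
total = (E[0] + T[0], E[1] + T[1])
print(t_k(total))        # (3, -4) = (N', -N): a rank-N' bundle minus N points
```

The mixed signs in $`(N^{\prime },-N)`$ are exactly the statement above: no single honest sheaf carries this charge.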
What is the physics buried in the mathematical example above? The coherent sheaf $`ℰ\oplus 𝒯`$ cannot be a supersymmetric vacuum configuration – it corresponds to non-dissolved D0-branes inside D2-branes. The Fourier-Mukai transformation takes this non-supersymmetric configuration, involving only branes, to another non-supersymmetric configuration, but (at least naively) involving both branes and antibranes. At first blush it seems very surprising that T-duality could map a configuration of only branes to one involving both branes and antibranes. However, on both sides of the duality we have a nonsupersymmetric configuration, and perhaps more importantly, it is not clear how to distinguish a configuration of D0-branes and D2-antibranes from a configuration of D0- and D2-branes. We shall return to this issue after making a closer examination of the W.I.T. condition.
The reader might well ask, what is the precise relationship between Mumford-Takemoto semistability and the W.I.T. condition? For example, the reader may be tempted to suspect that supersymmetric brane vacua are W.I.T. and so have easy Fourier-Mukai transformations, in other words, that a locally-free sheaf that is Mumford-Takemoto semistable (and therefore satisfies necessary conditions for a supersymmetric vacuum for a brane) must be W.I.T. Unfortunately this does not seem to be the case in general .
Under what circumstances is a torsion-free, Mumford-Takemoto semistable sheaf $`ℰ`$ also W.I.T.? First, let us specialize to the case that $`ℰ`$ is Mumford-Takemoto stable, not just semistable, and that<sup>13</sup><sup>13</sup>13It is interesting that W.I.T. and stability of a torsion-free sheaf $`ℰ`$ correlate somewhat more naturally when $`\text{det }ℰ`$ is trivial; one is tempted to wonder if there is any connection to the fact that overall $`U(1)`$’s decouple from $`U(N)`$ in the AdS/CFT correspondence (see, for example, ). $`c_1(ℰ)=0`$. In this case, $`ℰ`$ can have no (holomorphic) sections, as such a section would make $`𝒪`$ a subsheaf of the same slope as $`ℰ`$, whose existence would contradict Mumford-Takemoto stability. Similarly, if $`ℒ`$ is any flat line bundle, then $`ℰ\otimes ℒ`$ cannot have a section, as the section would define $`ℒ^{-1}`$ as a subsheaf, and we would have the same contradiction as for $`𝒪`$. Thus, for any flat line bundle $`ℒ`$, $`H^0(ℰ\otimes ℒ)=0`$, and so<sup>14</sup><sup>14</sup>14In this subsection we shall be slightly sloppy about computing right derived functors. For a more detailed examination of their properties, see for example \[22, section III.12\]. $`R^0\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℰ\right)=0`$. Now, for any torsion-free sheaf $`ℰ`$, $`ℰ`$ is Mumford-Takemoto stable if and only if $`ℰ^{\vee }`$ is also Mumford-Takemoto stable \[39, lemma 4.5\]; consequently, by Serre duality we have that on an $`n`$-(complex-)dimensional torus, $`H^n(ℰ\otimes ℒ)=0`$ for any flat line bundle $`ℒ`$ by the same arguments as above, and so $`R^n\pi _{2*}\left(𝒫\otimes \pi _1^{*}ℰ\right)=0`$.
Thus, a torsion-free, Mumford-Takemoto stable sheaf on $`T^4`$ of $`c_1=0`$ is necessarily W.I.T.<sub>1</sub>. Unfortunately one does not get such statements in greater generality. For example, on higher-dimensional tori, there is no good reason why a torsion-free, Mumford-Takemoto stable sheaf of $`c_1=0`$ should be W.I.T., and in general we expect that they will not be W.I.T.
So far in our discussion of the relationship between the W.I.T. condition and Mumford-Takemoto stability, we have only spoken about stable sheaves. How would one deal with Mumford-Takemoto semistable sheaves that are not stable? After all, these can also satisfy the conditions for a supersymmetric D-brane vacuum. As noted in , when using a properly semistable sheaf, physics sees a split sheaf with stable factors. Thus, questions regarding Fourier-Mukai transforms and W.I.T. conditions for semistable sheaves can be reduced to questions regarding direct sums of stable sheaves.
In general, therefore, there does not seem to be a simple relationship between Mumford-Takemoto stability and the W.I.T. condition. If we follow the usual wisdom that a $`𝐙_2`$ subgroup of T-duality is identified with Fourier-Mukai transformation, then one consequence is that T-duals of some supersymmetric D-brane vacua naively involve both branes and antibranes.
Some care is required in interpreting Grothendieck group elements, however. A standard example from topological $`K`$-theory should make possible subtleties more clear. Let $`ℰ`$ be a rank $`k`$ $`C^{\mathrm{\infty }}`$ vector bundle on a $`k`$-dimensional manifold $`M`$; then there exists a rank $`k`$ $`C^{\mathrm{\infty }}`$ bundle $`ℱ`$ such that $`ℰ\oplus ℱ=1`$, where $`1`$ denotes the trivial rank $`2k`$ bundle \[23, exercise 3.3e, p. 39\]. Ordinarily, following , one would assume that $`1-ℱ`$ was necessarily a non-supersymmetric configuration of both branes and antibranes, but here we see that even without tachyon condensation, sometimes a brane/antibrane configuration is equivalent to a configuration of only branes<sup>15</sup><sup>15</sup>15For a simpler example of a brane/antibrane configuration equivalent to a configuration of only branes without tachyon condensation, consider the $`K`$-theory element $`(ℰ\oplus ℱ)-ℱ`$. This is clearly the same as the branes-only configuration $`ℰ`$, without tachyon condensation. The example discussed above is merely a more sophisticated version of this case.
Thus, even without tachyon condensation, sometimes naively nontrivial elements of topological $`K`$-theory, and also Grothendieck groups, are equivalent to trivial elements. We suspect (though we have not proven) that this is what is happening in Fourier-Mukai transformations of non-W.I.T. supersymmetric D-brane vacua – one gets a naively nontrivial element of the Grothendieck group of coherent sheaves, which is subtly equivalent to a configuration involving only branes.
## 5 Conclusions
In this paper we have argued that it can be useful to consider D-brane charges in terms of Grothendieck groups and, to a lesser extent, derived categories. We began by defining Grothendieck groups and listing some basic properties, then briefly outlined derived categories and displayed a physically natural map from objects of a derived category to a Grothendieck group of coherent sheaves. We concluded with a discussion of T-duality in terms of Fourier-Mukai transforms, and argued that to understand the action of Fourier-Mukai transforms even on general supersymmetric vacua, one needed the technology of Grothendieck groups and derived categories.
Derived categories have previously entered the physics literature through Kontsevich’s mirror conjecture , in which mirror symmetry was conjectured to be realizable as an equivalence of certain derived categories. It would be interesting to see if any insight could be gained by working instead with Grothendieck groups. Perhaps Grothendieck groups would be more relevant for the open-string mirror symmetry proposed in .
Derived categories might conceivably play a role in giving a solid justification to certain proposed analogues of T-duality. For example, in , a T-duality symmetry was conjectured that exchanged branes wrapped on general algebraic surfaces. This hypothesized T-duality-analogue might conceivably be justified in terms of an equivalence of derived categories on algebraic surfaces, or even an automorphism of derived categories on a single<sup>16</sup><sup>16</sup>16In particular, it has been argued that a variety $`X`$ can be more or less reconstructed from $`D_c^b(X)`$ if either its canonical sheaf or its anticanonical sheaf is ample, so for some algebraic surfaces, such as del Pezzo surfaces, the only possible T-duality-analogues of the form proposed above would necessarily map the surface into itself. algebraic surface, in which case one would presumably get not a $`𝐙_2`$ subgroup of some continuous family of symmetries, but only a (discrete) $`𝐙_2`$ T-duality-analogue. (A more specialized form of this conjecture, given essentially for algebraic K3s, was stated in .) One might even speculate that the existence of equivalences of derived categories of coherent sheaves on distinct Calabi-Yau’s might signal the existence of some mirror-symmetry-analogue for branes, analogous to that proposed in .
The description of D-branes in terms of $`K`$ theory given in may also yield interesting new insights via string-string duality. For example, consider the duality relating IIA compactified on $`K3`$ to a heterotic string on $`T^4`$. A D2-brane wrapped on a curve in $`K3`$, for example, is interpreted as a particle on the heterotic side. What is the heterotic interpretation of a wrapped D2 brane/antibrane pair? Presumably the heterotic dual to such a configuration is a massive heterotic string state . Thus, the interpretation of D-branes in terms of $`K`$ theory may give rise to a new geometric interpretation of massive heterotic states, for example.
It is somewhat tempting to speculate that massive heterotic string states may have, at least sometimes, an interpretation in terms of topological $`K`$ theory or Grothendieck groups of the space that the heterotic string is compactified on. In such an event, it would seem likely that isomorphisms of derived categories on distinct Calabi-Yau’s may correspond immediately to some limit of (0,2) mirror symmetry, in which all $`B`$-fields are turned off and $`\alpha ^{}`$ is small.
## 6 Acknowledgements
We would like to thank P. Aspinwall, T. Gomez, K. Hori, P. Horja, A. Knutson, D. Morrison, B. Pardon, R. Plesser, M. Stern, and E. Witten for useful conversations.
## Appendix A Notes on topological $`K`$-theory
For more information on topological $`K`$-theory, see for example .
Given a compact complex manifold $`X`$, the group $`K(X)`$ is the free abelian group on isomorphism classes of complex vector bundles on $`X`$, modulo elements of the form $`[ℰ_0\oplus ℰ_1]-[ℰ_0]-[ℰ_1]`$, where $`ℰ_0`$, $`ℰ_1`$ are complex vector bundles on $`X`$ and $`[ℰ]`$ denotes the isomorphism class of $`ℰ`$. Put another way, elements of $`K(X)`$ are “virtual bundles” of the form $`ℰ-ℱ`$. More generally, it is straightforward to see that $`K`$ defines a contravariant functor from the category of compact spaces to the category of abelian groups.
What is $`K(\text{point})`$? A vector bundle on a point is completely determined by its rank, so it should be clear that $`K(\text{point})\cong 𝐙`$.
The reduced $`K`$-ring of $`X`$, denoted $`\stackrel{~}{K}(X)`$, is defined to be the kernel of the natural projection $`K(X)\to K(\text{point})`$.
Let $`X`$ be a topological space, $`Y`$ some subset of $`X`$. There exists a notion of relative K-theory, denoted $`K(X,Y)`$, which is defined as $`K(X,Y)=\stackrel{~}{K}(X/Y)`$. (If $`Y`$ is a closed subset of $`X`$, then $`X/Y`$ is essentially a space obtained by collapsing $`Y`$ down to a single point. For more information see for example \[48, section 0.2\].) As the reader may well guess, $`K(X,\mathrm{\varnothing })=K(X)`$.
Define the suspension of a topological space $`X`$, denoted $`SX`$, to be the quotient of $`X\times I`$ (where $`I=[0,1]`$, the unit interval) obtained by collapsing $`X\times \{0\}`$ to one point and $`X\times \{1\}`$ to another point. For example, $`SS^n=S^{n+1}`$.
For $`n`$ positive, define $`K^{-n}(X/Y)=\stackrel{~}{K}(S^n(X/Y))`$. In particular, $`K^{-1}(\text{point})=0`$.
Bott periodicity is simply the statement that for any compact Hausdorff space $`X`$, $`K^{-n}(X)\cong K^{-n-2}(X)`$. Similarly, $`K^{-n}(X/Y)\cong K^{-n-2}(X/Y)`$.
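For example (standard values, which we quote without proof): $`\stackrel{~}{K}(S^1)=0`$ while $`\stackrel{~}{K}(S^2)\cong 𝐙`$, generated by $`H-1`$ with $`H`$ the hyperplane line bundle on $`S^2`$ regarded as the complex projective line. Combining these with $`SS^n=S^{n+1}`$, Bott periodicity then gives

$$\stackrel{~}{K}(S^{2n})\cong 𝐙,\hspace{1em}\stackrel{~}{K}(S^{2n+1})=0$$

for all $`n\ge 1`$.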
We defined $`K^n`$ above for $`n`$ negative only; however, by Bott periodicity we can now define $`K^n`$ for arbitrary integer $`n`$:
$`K^n(X)`$ $`=`$ $`K^0(X)\text{ for n even}`$
$`K^n(X)`$ $`=`$ $`K^{-1}(X)\text{ for n odd}`$
and similarly for $`K^n(X,Y)`$, and so forth.
$`K^1(X)`$ has an alternative definition, described in \[46, section II.3\]. Consider the category whose objects are pairs $`(ℰ,\alpha )`$ where $`ℰ`$ is a bundle and $`\alpha :ℰ\to ℰ`$ is an isomorphism, and whose morphisms $`(ℰ,\alpha )\to (ℰ^{\prime },\alpha ^{\prime })`$ are given by maps $`h:ℰ\to ℰ^{\prime }`$ such that the following commutes:
$$\begin{array}{ccc}ℰ& \stackrel{\alpha }{\to }& ℰ\\ h\downarrow & & \downarrow h\\ ℰ^{\prime }& \stackrel{\alpha ^{\prime }}{\to }& ℰ^{\prime }\end{array}$$
Define the sum of two objects $`(ℰ,\alpha )`$ and $`(ℰ^{\prime },\alpha ^{\prime })`$ to be $`(ℰ\oplus ℰ^{\prime },\alpha \oplus \alpha ^{\prime })`$. Define a pair $`(ℰ,\alpha )`$ to be elementary if $`\alpha `$ is homotopic to the identity within automorphisms of $`ℰ`$. Now we finally have the definitions in hand to define $`K^1(X)`$. Define $`K^1(X)`$ to be the free abelian group on objects in the category, modulo the equivalence relation $`(ℰ,\alpha )\sim (ℰ^{\prime },\alpha ^{\prime })`$ if and only if there exist elementary pairs $`(ℱ,\beta )`$ and $`(ℱ^{\prime },\beta ^{\prime })`$ such that $`(ℰ,\alpha )+(ℱ,\beta )\cong (ℰ^{\prime },\alpha ^{\prime })+(ℱ^{\prime },\beta ^{\prime })`$.
For notational purposes, let $`[ℰ,\alpha ]`$ denote the equivalence class of the pair $`(ℰ,\alpha )`$ in $`K^1(X)`$. Then it can be shown that $`[ℰ,\alpha \beta ]=[ℰ,\alpha ]+[ℰ,\beta ]`$.
It is possible to define a product on elements of $`K`$-theory; it has the properties
$`K^0(X)\otimes K^0(X)`$ $`\to `$ $`K^0(X)`$
$`K^0(X)\otimes K^1(X)`$ $`\to `$ $`K^1(X)`$
$`K^1(X)\otimes K^1(X)`$ $`\to `$ $`K^0(X)`$
By this point the reader has no doubt noticed the similarity between the groups $`K^n(X)`$ and $`H^n(X)`$. In fact, $`K`$-theory is an example of a “generalized” cohomology theory. More precisely, cohomology theories can be defined axiomatically , and $`K`$-theory satisfies all the axioms for a cohomology theory except one (the dimension axiom).
## 1 Introduction
Geometric phases have received a great deal of attention since their description by Berry . The reasons are clear: they are a fundamental property of many quantum mechanical systems, and they have a beautiful description in terms of differential geometry and fiber bundles which is directly related to gauge field theory (see for example ). Their physical importance was known long before the excitement about them in the mid 80’s , , . In spite of all the attention, there have been few worked-out examples, and for those that have been worked out the descriptions have not been straightforward. The most well-known example is that of a two state system, namely a magnetic dipole in a magnetic field. This was the original example given by Berry . Wilczek and Zee originally pointed out that there could exist non-abelian geometric phases . Later, people studied fermionic systems with a quadrupole Hamiltonian .
Uhlmann later developed machinery, namely a parallel transport , for describing the non-abelian geometric phases associated with density matrices. However, this was never applied to three state systems. Arvind et al. and Khanna et al. studied the geometric phases for three state systems that involve pure state density matrices , with a parameterization that was somewhat ad hoc. Mostafazadeh looked at a way of calculating the non-abelian geometric phases for a three state system with a two-fold degeneracy, also with ad hoc coordinates. These topics will be brought together here.
In this paper the objective is to use explicit $`SU(3)`$ representations to extend and/or simplify several aspects of three state systems.
1. The expression for the density matrix for three state systems.
2. The identification of the parameter spaces of these systems.
3. The calculation of the abelian geometric phases for three state systems.
4. The calculation of the non-abelian geometric phases of three state systems with a two-fold degeneracy.
An obvious example of a three state system is a spin-one particle in a magnetic field. If this external magnetic field is “slowly” rotating, then we may have the conditions for an adiabatic change in phase. For a proper description of what “slowly” means in this context, see . Unless otherwise stated, this paper will be concerned with adiabatic geometric phases, although with some work this could be extended to non-adiabatic phase changes.
## 2 The Density Matrix for a System with Three Quantum States
As is demonstrated in the next section, the density matrix can be parameterized by the action of an $`SU(3)`$ transformation. This will prove convenient for many calculations and is immediately generalizable to a system with an arbitrary number of states; see and below for a discussion.
The density matrix for general pure state three-level systems is given in and . It can be represented in the following way: let $`\psi `$ be a state in a three dimensional complex Hilbert space $`ℋ^{(3)}`$. The density matrix is the matrix $`\rho `$ described by (in analogy with two state systems):
$$\rho =\psi \psi ^{\dagger }=|\psi \rangle \langle \psi |=\frac{1}{3}(1+\sqrt{3}\stackrel{}{n}\cdot \stackrel{}{\lambda })$$
(1)
$$\psi \in ℋ^{(3)},\hspace{1em}(\psi ,\psi )=1.$$
Here the dagger denotes the hermitian conjugate, $`\stackrel{}{n}`$ is a real eight dimensional unit vector, $`\stackrel{}{\lambda }`$ represents the eight Gell-Mann matrices.
The dot product is the ordinary sum over repeated indices, $`n^r\lambda _r`$. The $`(,)`$ is the inner product on the space $`ℋ^{(3)}`$. The pure state density matrix satisfies:
$$\rho ^{\dagger }=\rho ^2=\rho \ge 0,\hspace{1em}\text{Tr}\rho =1.$$
This is equivalent to the following conditions on $`n`$:
$$n^{*}=n,\hspace{1em}n\cdot n=1,\hspace{1em}n\star n=n.$$
(2)
The star product is defined by
$$(a\star b)_i=\sqrt{3}d_{ijk}a_jb_k$$
(3)
where the $`d_{ijk}`$ are the components of the completely symmetric tensor appearing in the anticommutation relations
$$\{\lambda _i,\lambda _j\}=\frac{4}{3}\delta _{ij}+2d_{ijk}\lambda _k.$$
Explicitly the nonzero $`d_{ijk}`$ are
$$d_{118}=d_{228}=d_{338}=-d_{888}=\frac{1}{\sqrt{3}},\hspace{1em}d_{448}=d_{558}=d_{668}=d_{778}=-\frac{1}{2\sqrt{3}},$$
$$d_{146}=d_{157}=-d_{247}=d_{256}=d_{344}=d_{355}=-d_{366}=-d_{377}=\frac{1}{2}.$$
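These values can be checked mechanically (a sketch of ours, using numpy and the standard Gell-Mann matrices): the anticommutator above, together with $`\mathrm{Tr}(\lambda _i\lambda _j)=2\delta _{ij}`$, gives $`d_{ijk}=\frac{1}{4}\mathrm{Tr}\left(\{\lambda _i,\lambda _j\}\lambda _k\right)`$:

```python
import itertools
import numpy as np

lam = [np.array(m, dtype=complex) for m in [
    [[0,1,0],[1,0,0],[0,0,0]],     [[0,-1j,0],[1j,0,0],[0,0,0]],
    [[1,0,0],[0,-1,0],[0,0,0]],    [[0,0,1],[0,0,0],[1,0,0]],
    [[0,0,-1j],[0,0,0],[1j,0,0]],  [[0,0,0],[0,0,1],[0,1,0]],
    [[0,0,0],[0,0,-1j],[0,1j,0]],  np.diag([1,1,-2])/np.sqrt(3)]]

def d(i, j, k):
    """d_ijk = (1/4) Tr({lambda_i, lambda_j} lambda_k)."""
    anti = lam[i-1] @ lam[j-1] + lam[j-1] @ lam[i-1]
    return np.trace(anti @ lam[k-1]).real / 4

# list the independent nonzero components (d is totally symmetric)
for i, j, k in itertools.combinations_with_replacement(range(1, 9), 3):
    if abs(d(i, j, k)) > 1e-12:
        print(f"d_{i}{j}{k} = {d(i, j, k):+.4f}")

assert np.isclose(d(8, 8, 8), -1/np.sqrt(3))   # spot checks against the list above
assert np.isclose(d(2, 4, 7), -0.5)
```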
## 3 Parameter Spaces for Three State Systems
The parameter space of states of the three state systems can easily be seen to be coset spaces of $`SU(3)`$. The Euler angle parameters are a particularly convenient way in which to see this.
A representation of the coset space $`SU(3)/U(2)`$ and of the density matrix for the pure states of a three state system may be obtained in terms of the Euler parameters given in . There the group $`SU(3)`$ is parameterized by
$$D(\alpha ,\beta ,\gamma ,\theta ,a,b,c,\varphi )=e^{(i\lambda _3\alpha )}e^{(i\lambda _2\beta )}e^{(i\lambda _3\gamma )}e^{(i\lambda _5\theta )}e^{(i\lambda _3a)}e^{(i\lambda _2b)}e^{(i\lambda _3c)}e^{(i\lambda _8\varphi )}.$$
With this parameterization and the explicit representation of the corresponding adjoint representation in terms of the Euler angle parameters in , a parameterization of the density matrix of the three state system may be obtained by the following projection which is analogous to the Hopf map given in :
$$x=\pi (D)=D\left[\frac{1}{3}(1-\sqrt{3}\lambda _8)\right]D^{-1}.$$
(4)
Here $`x\in SU(3)/U(2)`$ and $`D`$ represents a point in the space $`SU(3)`$. This projection is clearly invariant under the right action of a $`U(2)`$ operation defined by $`U\in U(2)`$ with
$$U=e^{(i\lambda _3a^{})}e^{(i\lambda _2b^{})}e^{(i\lambda _3c^{})}e^{(i\lambda _8\varphi ^{})}.$$
This then defines the projection from $`SU(3)`$ to $`SU(3)/U(2)`$. Since the second term in equation (4) is simply an adjoint action on $`\lambda _8`$, it can be read directly from the equations given in . There the matrix $`R_{ij}`$ that satisfies
$$D\lambda _iD^{-1}=R_{ij}\lambda _j$$
(5)
was given explicitly.
One may of course note that the projection operator is not unique. Any diagonal $`3\times 3`$ matrix with a single one on the diagonal and zeros elsewhere would be invariant under a $`U(2)`$ subgroup and would represent a pure state. It is, however, rather convenient in this parameterization to use this particular matrix:
$$\frac{1}{3}(1-\sqrt{3}\lambda _8)=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right),$$
so that it is clear that the upper left $`2\times 2`$ block of zeros will be unaffected by an $`SU(2)`$ transformation acting in that block. Any other matrix with a single $`1`$ on the diagonal and zeros elsewhere could be substituted and would still be invariant under (another) $`SU(2)`$. The invariance with respect to an overall phase extends this to the $`U(2)`$ invariance.
Now equation (4) can be rewritten as
$$x=\left[\frac{1}{3}(1-\sqrt{3}R_{8j}\lambda _j)\right]=\left[\frac{1}{3}(1+\sqrt{3}n_j\lambda _j)\right]$$
(6)
where we identify $`n_j=-R_{8j}`$ as the components of a vector that satisfies the properties given in equation (2). This can be viewed as an arbitrary rotation of the vector $`\lambda _8`$ under the adjoint action of the group (equation (5)), and of course it is now clear that $`x`$ is identified with $`\rho `$.
The vector $`\stackrel{}{n}`$ has the following components.
$`n_1`$ $`=`$ $`-R_{81}=-{\displaystyle \frac{\sqrt{3}}{2}}\mathrm{cos}2\alpha \mathrm{sin}2\beta \mathrm{sin}^2\theta `$
$`n_2`$ $`=`$ $`-R_{82}={\displaystyle \frac{\sqrt{3}}{2}}\mathrm{sin}2\alpha \mathrm{sin}2\beta \mathrm{sin}^2\theta `$
$`n_3`$ $`=`$ $`-R_{83}={\displaystyle \frac{\sqrt{3}}{2}}\mathrm{cos}2\beta \mathrm{sin}^2\theta `$
$`n_4`$ $`=`$ $`-R_{84}={\displaystyle \frac{\sqrt{3}}{2}}\mathrm{cos}(\alpha +\gamma )\mathrm{cos}\beta \mathrm{sin}2\theta `$
$`n_5`$ $`=`$ $`-R_{85}=-{\displaystyle \frac{\sqrt{3}}{2}}\mathrm{sin}(\alpha +\gamma )\mathrm{cos}\beta \mathrm{sin}2\theta `$
$`n_6`$ $`=`$ $`-R_{86}=-{\displaystyle \frac{\sqrt{3}}{2}}\mathrm{cos}(\alpha -\gamma )\mathrm{sin}\beta \mathrm{sin}2\theta `$
$`n_7`$ $`=`$ $`-R_{87}=-{\displaystyle \frac{\sqrt{3}}{2}}\mathrm{sin}(\alpha -\gamma )\mathrm{sin}\beta \mathrm{sin}2\theta `$
$`n_8`$ $`=`$ $`-R_{88}={\displaystyle \frac{3}{2}}\mathrm{sin}^2\theta -1`$ (7)
From this, using the equations $`n_i=\frac{\sqrt{3}}{2}\psi ^{\dagger }\lambda _i\psi `$, it follows that
$$\psi =e^{i\chi }\left(\begin{array}{c}e^{i(\alpha +\gamma )}\mathrm{cos}\beta \mathrm{sin}\theta \\ -e^{-i(\alpha -\gamma )}\mathrm{sin}\beta \mathrm{sin}\theta \\ \mathrm{cos}\theta \end{array}\right).$$
(8)
This may be recognized as the third column of the $`SU(3)`$ matrix $`D`$ above, thus agreeing with the calculation given in and coming full circle in the analysis. In this case the overall phase $`\chi `$ may be identified as $`-2\varphi /\sqrt{3}`$ in the matrix $`D`$. In section 5 it will become clear why this works and it will be generalized for the case of non-abelian geometric phases.
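These expressions are easy to verify numerically. A minimal sketch (assuming the sign conventions of eq. (8) above; the overall phase $`\chi `$ is dropped since it cancels in the $`n_i`$):

```python
import numpy as np

def gell_mann():
    """The eight standard Gell-Mann matrices lambda_1 .. lambda_8."""
    L = np.zeros((9, 3, 3), dtype=complex)
    L[1][0, 1] = L[1][1, 0] = 1
    L[2][0, 1], L[2][1, 0] = -1j, 1j
    L[3][0, 0], L[3][1, 1] = 1, -1
    L[4][0, 2] = L[4][2, 0] = 1
    L[5][0, 2], L[5][2, 0] = -1j, 1j
    L[6][1, 2] = L[6][2, 1] = 1
    L[7][1, 2], L[7][2, 1] = -1j, 1j
    L[8] = np.diag([1, 1, -2]) / np.sqrt(3)
    return L[1:]

rng = np.random.default_rng(0)
alpha, beta, gamma, theta = rng.uniform(0, 2 * np.pi, 4)

# Pure state of eq. (8), overall phase dropped.
psi = np.array([np.exp(1j * (alpha + gamma)) * np.cos(beta) * np.sin(theta),
                -np.exp(-1j * (alpha - gamma)) * np.sin(beta) * np.sin(theta),
                np.cos(theta)])

n = np.array([np.sqrt(3) / 2 * (psi.conj() @ lam @ psi).real
              for lam in gell_mann()])

print(np.isclose(n @ n, 1.0))                               # n . n = 1
print(np.isclose(n[0], -np.sqrt(3) / 2 * np.cos(2 * alpha)
                 * np.sin(2 * beta) * np.sin(theta) ** 2))  # n_1 of eq. (7)
print(np.isclose(n[7], 1.5 * np.sin(theta) ** 2 - 1))       # n_8 of eq. (7)

rho = (np.eye(3) + np.sqrt(3) * np.einsum('i,ijk->jk', n, gell_mann())) / 3
print(np.allclose(rho @ rho, rho))                          # rho is pure
```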
Although many of the details have not been worked out for $`SU(n)`$ groups, (the Euler angle parameters, the adjoint representation, etc.) the method of identifying the space of the parameters is the same (see ). For a system with $`n`$ states, one may express a general diagonal density matrix in terms of the squared elements of an $`n-1`$ sphere. Then to take it to a general basis, one acts with the appropriate $`SU(n)`$ matrix. The result is always a subset of $`SU(n)/T^{n-1}`$, where $`T^{n-1}`$ is the maximal ($`n-1`$) torus for the group. If there are degenerate eigenvalues in the matrix, this space is reduced. For example in the case of three states discussed shortly, the parameter space is a subset of $`SU(n)/(SU(2)\times U(1))`$ since there exists a two-fold degeneracy. For an $`m`$-fold degeneracy we reduce the space by $`SU(m)`$. In the case of an adiabatic approximation, we will see this is a proper subset, but were we to relax this condition, the space would be isomorphic to these spaces, not subsets. In this way, one may identify a necessary condition for non-abelian geometric phases, namely the existence of the degeneracy and thus an $`SU(m)`$ factor in the denominator of the above coset expression.
Using this parameterization one gains essentially nothing over the expression of the Bloch sphere for two-state systems. In that case the common parameterization of the Bloch sphere,
$$\left(\begin{array}{cc}a& 0\\ 0& 1-a\end{array}\right)$$
is really no different than the one presented here,
$$\left(\begin{array}{cc}\mathrm{cos}^2\theta & 0\\ 0& \mathrm{sin}^2\theta \end{array}\right)$$
except that positivity is automatic. However in the case of three state systems, we have
$$\left(\begin{array}{ccc}\mathrm{cos}^2\theta \mathrm{sin}^2\varphi & 0& 0\\ 0& \mathrm{sin}^2\theta \mathrm{sin}^2\varphi & 0\\ 0& 0& \mathrm{cos}^2\varphi \end{array}\right).$$
(9)
This is a convenient parameterization since the analogous Bloch sphere would have parameters with a non-rectangular domain. The parameterization given here (see also ) then helps with the analysis of three state density matrices and their corresponding entropy .
## 4 Connection, Curvature, and Abelian Geometric Phases
In the spirit and notation of Nakahara , we can now derive the connection one form, the curvature and the Geometric Phase of the three state system. The connection one form, sometimes called Berry’s connection, can be written in terms of $`\psi `$ in the following way. Define the total phase to be
$$\phi \equiv \oint 𝒜$$
$$𝒜=𝒜_\mu dx^\mu =i\langle \psi |d|\psi \rangle ,$$
(10)
where $`d`$ is the ordinary exterior derivative. Using equation (8), this becomes
$$𝒜=d\chi +\mathrm{sin}^2\theta [\mathrm{cos}^2\beta (d\alpha +d\gamma )-\mathrm{sin}^2\beta (d\alpha -d\gamma )].$$
(11)
This agrees with reference if the following identifications are made with those quantities on the left being those of reference and those on the right being ours.
$$\eta \leftrightarrow \chi ,\qquad \theta \leftrightarrow \theta ,\qquad \chi _1\leftrightarrow \alpha +\gamma ,\qquad \chi _2\leftrightarrow \alpha -\gamma .$$
The corresponding curvature two form is given by
$`F`$ $`=`$ $`d𝒜=id\psi ^{\dagger }\wedge d\psi `$ (12)
$`=`$ $`\mathrm{sin}2\theta \mathrm{cos}^2\beta d\theta \wedge d(\alpha +\gamma )-\mathrm{sin}^2\theta \mathrm{sin}2\beta d\beta \wedge d(\alpha +\gamma )`$
$`-\mathrm{sin}2\theta \mathrm{sin}^2\beta d\theta \wedge d(\alpha -\gamma )-\mathrm{sin}^2\theta \mathrm{sin}2\beta d\beta \wedge d(\alpha -\gamma ).`$
This then, is the analogue of the “solid angle formula” for the two state systems. In other words, the integral of this curvature two form gives the geometric phase, just as
$$\phi _g=\frac{1}{2}\mathrm{\Omega }$$
in two state systems, where $`\mathrm{\Omega }`$ is the solid angle for the two sphere. The geometric phase is just the integral of the connection one form without the overall phase factor $`\chi `$, that is,
$`\phi _g`$ $`=`$ $`{\displaystyle \oint \mathrm{sin}^2\theta [\mathrm{cos}^2\beta (d\alpha +d\gamma )-\mathrm{sin}^2\beta (d\alpha -d\gamma )]}`$ (13)
$`=`$ $`{\displaystyle \oint [\mathrm{sin}^2\theta \mathrm{cos}2\beta d\alpha +\mathrm{sin}^2\theta d\gamma ]},`$
which again, agrees with .
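As a concrete illustration, eq. (13) can be evaluated numerically along a specific closed loop. The sketch below assumes a loop in which only $`\alpha `$ advances by $`2\pi `$ at fixed $`\beta `$, $`\gamma `$ and $`\theta `$ (so $`d\beta =d\gamma =0`$), for which the closed form is $`\phi _g=2\pi \mathrm{sin}^2\theta \mathrm{cos}2\beta `$:

```python
import numpy as np

# Loop: alpha goes from 0 to 2*pi at fixed beta, gamma, theta.
beta, theta = 0.4, 0.7
alphas = np.linspace(0, 2 * np.pi, 2001)
d_alpha = np.diff(alphas)

# Pullback of eq. (13) to this loop: sin^2(theta) cos(2 beta) d(alpha).
integrand = np.sin(theta) ** 2 * np.cos(2 * beta) * np.ones(len(d_alpha))
phi_g = np.sum(integrand * d_alpha)

print(phi_g, 2 * np.pi * np.sin(theta) ** 2 * np.cos(2 * beta))  # agree
```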
## 5 Non-abelian Geometric Phases
In this section a novel way of obtaining geometric phases for 3-state systems is given. This method is a generalization and simplification of the method presented in , and a generalization of the method given in the previous section. The way the connection one-forms for the 3-state systems are derived here uses the fact that the state space of the system can be expressed in terms of the group $`SU(3)`$. This enables the calculation of the forms without diagonalization of the Hamiltonian. In effect, the Hamiltonian is taken to be in diagonal form initially. It is then “undiagonalized” by an $`SU(3)`$ action which takes it into a general non-diagonal hermitian matrix. This method has the advantage of being potentially generalizable to other states, not just eigenstates of the Hamiltonian. (Of course, one has to be careful of what the adiabatic assumption means then. This is well described in .) It also has the advantage of being generalizable to $`SU(n)`$. Whereas one does not, in general, have a closed-form way of finding the eigenvalues of an $`n\times n`$ matrix, one would still be able to use $`SU(n)`$ matrices and derive the connection forms for an $`n`$-state system. (Again, see .)
The aim is to find the adiabatic non-abelian geometric phase associated to the two-fold degeneracy of energy eigenvalues of the general Hamiltonian for a 3-state system. These are the simplest non-abelian geometric phases.
Let $`H(t)=H(\stackrel{}{R}(t))`$ be the time dependent Hamiltonian of the system and let $`E_n(t)`$ be its eigenvalues. If the Hamiltonian is periodic in time with period $`T`$, the curve $`C:[0,T]\to M`$ is closed. Here $`M`$ is the manifold parameterized by the coordinates $`\stackrel{}{R}`$. In the adiabatic approximation, $`n`$ labels the eigenstates, $`|\psi \rangle `$, of the Hamiltonian and does not change. This means there is a unitary matrix $`U(n)`$ relating $`|\psi (T)\rangle `$ and $`|\psi (0)\rangle `$ which is given by
$$e^{-\frac{i}{\hbar }\int _0^TE_n(t)dt}𝒫\left[e^{i\oint _CA_n}\right].$$
Here $`𝒫`$ is the path-ordering operator and $`A_n`$ is a Lie algebra valued (connection) one-form whose matrix elements are locally given by:
$$A_n^{ab}=i\langle n,a,\stackrel{}{R}|d|n,b,\stackrel{}{R}\rangle .$$
(14)
It is important to note that the Hamiltonian is a $`3\times 3`$ Hermitian matrix which can be viewed as an element of the algebra of $`SU(3)`$, i.e.,
$$H(\stackrel{}{R})=b\sum _{i=0}^8R^i\lambda _i,$$
where $`R^i`$ are real parameters, the $`\lambda _i`$ are $`\lambda _0=1\mathrm{l}_{3\times 3}`$ and the Gell-Mann matrices of Table (1). Here the constant $`b`$ is taken to be one. The adiabaticity assumption may then be expressed as $`T\gg 1`$.
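The coefficients $`R^i`$ of any Hermitian $`H`$ follow from the trace orthogonality of the $`\lambda _i`$, namely $`\mathrm{Tr}(\lambda _i\lambda _j)=2\delta _{ij}`$ for $`i,j\geq 1`$. A minimal sketch (with $`b=1`$, as above):

```python
import numpy as np

def gell_mann():
    """lambda_0 = identity plus the eight standard Gell-Mann matrices."""
    L = np.zeros((9, 3, 3), dtype=complex)
    L[0] = np.eye(3)
    L[1][0, 1] = L[1][1, 0] = 1
    L[2][0, 1], L[2][1, 0] = -1j, 1j
    L[3][0, 0], L[3][1, 1] = 1, -1
    L[4][0, 2] = L[4][2, 0] = 1
    L[5][0, 2], L[5][2, 0] = -1j, 1j
    L[6][1, 2] = L[6][2, 1] = 1
    L[7][1, 2], L[7][2, 1] = -1j, 1j
    L[8] = np.diag([1, 1, -2]) / np.sqrt(3)
    return L

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (X + X.conj().T) / 2                  # generic Hermitian Hamiltonian

L = gell_mann()
# Tr(lambda_i lambda_j) = 2 delta_ij for i,j >= 1; Tr(lambda_0^2) = 3.
R = np.array([np.trace(H @ L[0]).real / 3] +
             [np.trace(H @ L[i]).real / 2 for i in range(1, 9)])

H_rebuilt = sum(R[i] * L[i] for i in range(9))
print(np.allclose(H, H_rebuilt))          # True: H = sum_i R^i lambda_i
```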
The Hamiltonian, $`H`$, can be expressed in terms of the diagonalized Hamiltonian, $`H_D`$.
$$H(\stackrel{}{R})=U(\stackrel{}{R})H_DU^{-1}(\stackrel{}{R}),$$
where $`U(\stackrel{}{R})\in SU(3)`$ and
$$H_D=\left(\begin{array}{ccc}E_1& 0& 0\\ 0& E_1& 0\\ 0& 0& E_3\end{array}\right).$$
In this form it is obvious that $`M\cong ℂP^2`$ and, what is more, it is clear from the parameterization of $`D`$ above that only the angles $`\alpha ,\beta ,\gamma `$ and $`\theta `$ will remain, since $`\lambda _1,\lambda _2,\lambda _3`$ and $`\lambda _8`$ commute with $`H_D`$. Explicitly, the Hamiltonian in undiagonalized form, $`H`$, is given by
$`H_{11}`$ $`=`$ $`E_1(\mathrm{cos}^2\beta \mathrm{cos}^2\theta +\mathrm{sin}^2\beta )+E_3\mathrm{cos}^2\beta \mathrm{sin}^2\theta `$
$`H_{12}`$ $`=`$ $`(E_1-E_3)e^{2i\alpha }\mathrm{cos}\beta \mathrm{sin}\beta \mathrm{sin}^2\theta `$
$`H_{13}`$ $`=`$ $`(E_3-E_1)e^{i(\alpha +\gamma )}\mathrm{cos}\beta \mathrm{sin}\theta \mathrm{cos}\theta `$
$`H_{21}`$ $`=`$ $`(E_1-E_3)e^{-2i\alpha }\mathrm{cos}\beta \mathrm{sin}\beta \mathrm{sin}^2\theta `$
$`H_{22}`$ $`=`$ $`E_1(\mathrm{sin}^2\beta \mathrm{cos}^2\theta +\mathrm{cos}^2\beta )+E_3\mathrm{sin}^2\beta \mathrm{sin}^2\theta `$
$`H_{23}`$ $`=`$ $`(E_1-E_3)e^{-i(\alpha -\gamma )}\mathrm{sin}\beta \mathrm{sin}\theta \mathrm{cos}\theta `$
$`H_{31}`$ $`=`$ $`(E_3-E_1)e^{-i(\alpha +\gamma )}\mathrm{cos}\beta \mathrm{sin}\theta \mathrm{cos}\theta `$
$`H_{32}`$ $`=`$ $`(E_1-E_3)e^{i(\alpha -\gamma )}\mathrm{sin}\beta \mathrm{sin}\theta \mathrm{cos}\theta `$
$`H_{33}`$ $`=`$ $`E_1\mathrm{cos}^2\theta +E_3\mathrm{sin}^2\theta `$
It can easily be shown that these angles parameterize $`ℂP^2`$. In this way one can easily identify the patches needed for certain circumstances. This is analogous to the calculation here.
As is well known, the matrix that diagonalizes $`H`$ is composed of its eigenvectors. Therefore, given that $`H=UH_DU^{-1}`$, i.e., $`H_D=U^{-1}HU`$, we have our $`|\psi \rangle `$s, the eigenvectors of $`H`$; they are
$$\left(\begin{array}{c}e^{i(\alpha +\gamma )}\mathrm{cos}\beta \mathrm{cos}\theta \\ -e^{-i(\alpha -\gamma )}\mathrm{sin}\beta \mathrm{cos}\theta \\ -\mathrm{sin}\theta \end{array}\right),\left(\begin{array}{c}e^{i(\alpha -\gamma )}\mathrm{sin}\beta \\ e^{-i(\alpha +\gamma )}\mathrm{cos}\beta \\ 0\end{array}\right),\left(\begin{array}{c}e^{i(\alpha +\gamma )}\mathrm{cos}\beta \mathrm{sin}\theta \\ -e^{-i(\alpha -\gamma )}\mathrm{sin}\beta \mathrm{sin}\theta \\ \mathrm{cos}\theta \end{array}\right).$$
One can check that these are already orthonormal due to the fact that $`USU(3)`$.
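These properties are easy to confirm numerically. The sketch below (for arbitrary illustrative angles and eigenvalues, with the sign conventions used above) checks orthonormality and reconstructs two of the matrix elements of $`H`$ listed earlier:

```python
import numpy as np

E1, E3 = -1.0, 2.0                       # illustrative eigenvalues
a, b, g, th = 0.3, 0.8, 1.1, 0.5         # alpha, beta, gamma, theta

c, s = np.cos, np.sin
v1 = np.array([np.exp(1j*(a+g))*c(b)*c(th), -np.exp(-1j*(a-g))*s(b)*c(th), -s(th)])
v2 = np.array([np.exp(1j*(a-g))*s(b),        np.exp(-1j*(a+g))*c(b),        0])
v3 = np.array([np.exp(1j*(a+g))*c(b)*s(th), -np.exp(-1j*(a-g))*s(b)*s(th),  c(th)])

U = np.column_stack([v1, v2, v3])
H = U @ np.diag([E1, E1, E3]) @ U.conj().T   # H = U H_D U^{-1}

print(np.allclose(U.conj().T @ U, np.eye(3)))       # columns orthonormal
print(np.isclose(H[0, 1],
                 (E1 - E3) * np.exp(2j*a) * c(b) * s(b) * s(th)**2))  # H_12
print(np.isclose(H[2, 2], E1 * c(th)**2 + E3 * s(th)**2))             # H_33
```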
Now all that needs to be done is calculate the connection forms given by (14). These are given by
$$A_1=\mathrm{cos}2\beta \mathrm{cos}^2\theta d\alpha +\mathrm{cos}^2\theta d\gamma ,$$
and
$$A_2=\left(\begin{array}{cc}-\mathrm{cos}2\beta d\alpha -d\gamma & e^{2i\gamma }[\mathrm{sin}2\beta \mathrm{sin}\theta d\alpha -i\mathrm{sin}\theta d\beta ]\\ e^{-2i\gamma }[\mathrm{sin}2\beta \mathrm{sin}\theta d\alpha +i\mathrm{sin}\theta d\beta ]& \mathrm{cos}2\beta \mathrm{sin}^2\theta d\alpha +\mathrm{sin}^2\theta d\gamma \end{array}\right).$$
This is an expression in terms of $`SU(3)`$ Euler angle coordinates. We can generalize this by using the expression (9). This allows us to express the density matrix for an $`n`$-state system in terms of the Euler angle coordinates and the components of the $`n-1`$ sphere along the diagonal and an overall scale factor. Thus the eigenvalues need not be those of the Hamiltonian but of any observable. Then a similar analysis holds for states that are not eigenvectors of the Hamiltonian but eigenvectors of another observable with the caution that, as stated before, one must be careful of what one means by an adiabatic approximation.
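The associated non-abelian phase for a closed curve $`C`$ is the path-ordered exponential $`𝒫[e^{i\oint _CA_2}]`$. A minimal sketch of the ordered-product recipe (assuming a loop in which only $`\alpha `$ varies, so only the $`d\alpha `$ component of $`A_2`$ contributes; on this particular loop that component matrix is constant, but the same loop code applies to a general discretized path):

```python
import numpy as np
from scipy.linalg import expm

beta, gamma, theta = 0.6, 0.2, 0.9

# d(alpha)-component of A_2 above (d(beta) = d(gamma) = 0 on this loop).
A_alpha = np.array([
    [-np.cos(2*beta),
     np.exp(2j*gamma) * np.sin(2*beta) * np.sin(theta)],
    [np.exp(-2j*gamma) * np.sin(2*beta) * np.sin(theta),
     np.cos(2*beta) * np.sin(theta)**2]])

# P exp(i oint A) as an ordered product of short-segment exponentials.
n_steps = 2000
d_alpha = 2 * np.pi / n_steps
W = np.eye(2, dtype=complex)
for _ in range(n_steps):
    W = expm(1j * A_alpha * d_alpha) @ W

print(np.allclose(W.conj().T @ W, np.eye(2)))   # the holonomy is unitary
print(np.round(W, 4))
```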
## 6 Conclusions/Comments
The diagonalized density matrices can be parameterized by the squared elements of the sphere (an $`n-1`$ sphere for a system of $`n`$ states) combined with an $`SU(n)`$ action. This novel parameterization helps to identify the parameter spaces of these systems. The spaces are isomorphic to subspaces of $`SU(n)/T^{n-1}`$ for all eigenvalues unique, to subspaces of $`SU(n)/(SU(2)\times T^{n-2})`$ for one two-fold degeneracy, $`SU(n)/(SU(3)\times T^{n-3})`$ for one three-fold degeneracy, etc. This is because the density matrix and Hamiltonian are both in the algebra of the group and can be represented as $`UAU^{\dagger }`$, where $`U\in SU(n)`$ and $`A`$ is the diagonal density matrix or Hamiltonian. When this is the Hamiltonian, we immediately know the eigenvectors because they are the columns of the matrix that diagonalizes the Hamiltonian. This enables the evaluation (in principle) of the geometric phases for the $`n`$-state systems. Here we have shown this explicitly for the case of three quantum states.
In this analysis the Euler angle parameterization has been extremely useful and although its generalization to $`SU(n)`$ is possible, the decomposition into components of the spheres and $`SU(n)`$ actions is independent of the parameterization.
In , applications to multi-pole Hamiltonians were discussed. I would like to add that there are phenomenological nuclear physics models that use $`SU(3)`$. These multi-pole Hamiltonians are expressible in terms of the differential operators in , and . The author expects to perform a further analysis of the relations to those and other multi-pole Hamiltonians in the near future.
## Acknowledgements
I would like to thank Alonso Botero, Luis J. Boya, Richard Corrado, Mark Mims, and E. C. G. Sudarshan for many insightful discussions. I would like to thank the DOE for its support under grant number DOE-ER-40757-123.
UICHEP-TH/99-2 , IP/BBSR/99-5
Comment on “Self-Isospectral Periodic Potentials and Supersymmetric Quantum Mechanics”
Uday Sukhatme
Department of Physics, University of Illinois at Chicago, Chicago, Illinois 60607
Avinash Khare
Institute of Physics, Sachivalaya Marg, Bhubaneswar 751005, Orissa, India
## Abstract
We show that the formalism of supersymmetric quantum mechanics applied to the solvable elliptic function potentials $`V(x)=mj(j+1)\mathrm{sn}^2(x,m)`$ produces new exactly solvable one-dimensional periodic potentials.
In a recent paper, Dunne and Feinberg have systematically discussed various aspects of supersymmetric quantum mechanics (SUSYQM) as applied to periodic potentials. In particular, they defined and developed the concept of self-isospectral periodic potentials at length. Basically, a one dimensional potential $`V_{-}(x)`$ of period $`2K`$ is said to be self-isospectral if its supersymmetric partner potential $`V_+(x)`$ is just the original potential up to a discrete transformation - a translation by any constant amount, a reflection, or both. An example is translation by half a period, that is $`V_+(x)=V_{-}(x-K)`$. In this sense, a self-isospectral potential is somewhat trivial, since application of the SUSYQM formalism to it yields nothing new. The main example considered in ref. is the class of elliptic function potentials
$$V(x)=mj(j+1)\mathrm{sn}^2(x,m),\qquad j=1,2,3,\dots $$
(1)
Here $`\mathrm{sn}(x,m)`$ is a Jacobi elliptic function of real elliptic modulus parameter $`m`$ $`(0\leq m\leq 1)`$. From now on, for simplicity, the argument $`m`$ is suppressed. The Schrödinger equation of the given elliptic potential is the well-known Lamé equation . There are $`j`$ bound bands (whose edges have known energies) followed by a continuum band. In ref. it is claimed that the potentials given in eq. (1) are self-isospectral. The purpose of this comment is to point out that although the $`j=1`$ potential is self-isospectral, this is not the case for higher values of $`j`$. Indeed, for $`j\geq 2`$, we claim that SUSYQM generates new exactly solvable periodic problems.
Taking the case $`j=2`$, and shifting the potential by a constant so that the ground state has zero energy gives
$$V_{-}(x)=-2-2m+2\delta +6m\mathrm{sn}^2(x),\qquad \delta =\sqrt{1-m+m^2}.$$
(2)
The band edge energies and Bloch wave functions $`\psi _n^{(-)}(x)`$ are given in Table 1. The superpotential is
$$W\equiv -\frac{d}{dx}\mathrm{log}\psi _0^{(-)}(x)=\frac{6m\mathrm{sn}(x)\mathrm{cn}(x)\mathrm{dn}(x)}{1+m+\delta -3m\mathrm{sn}^2(x)},$$
(3)
The supersymmetric partner potentials $`V_\pm (x)`$ are related to $`W(x)`$ via $`V_\pm (x)=W^2(x)\pm dW/dx.`$ Hence, the potential $`V_+`$ is given by
$$V_+(x)=-V_{-}(x)+\frac{72m^2\mathrm{sn}^2(x)\mathrm{cn}^2(x)\mathrm{dn}^2(x)}{[1+m+\delta -3m\mathrm{sn}^2(x)]^2}.$$
(4)
Using SUSYQM and the known eigenfunctions $`\psi _n^{(-)}(x)`$ of $`V_{-}(x)`$ one can immediately write down the corresponding un-normalized eigenfunctions $`\psi _n^{(+)}(x)`$ of $`V_+(x)`$.
$$\psi _0^{(+)}(x)=\frac{1}{\psi _0^{(-)}(x)},\qquad \psi _n^{(+)}(x)=\left(\frac{d}{dx}+W(x)\right)\psi _n^{(-)}(x).$$
(5)
We have computed the band edge eigenfunctions of $`V_+(x)`$ and give them in Table 1. Our expression for $`V_+(x)`$ \[eq. (4)\] does not agree with eq. (29) in ref. . We have checked the correctness of our results by direct substitution into the Schrödinger equation, and by noting that in the limit of $`m\to 1`$, our $`V_+(x)\to 4-2\mathrm{sech}^2x`$, which indeed is the supersymmetric partner of $`V_{-}(x,m=1)=4-6\mathrm{sech}^2x`$ .
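These checks are straightforward to reproduce numerically; a minimal sketch (using scipy's Jacobi elliptic functions and finite differences, for an arbitrary illustrative modulus $`m`$) verifies that $`\psi _0^{(-)}`$ is a zero-energy eigenfunction of $`V_{-}`$ and that $`V_+=W^2+dW/dx`$:

```python
import numpy as np
from scipy.special import ellipj

m = 0.7                                    # illustrative modulus
delta = np.sqrt(1 - m + m**2)
x = np.linspace(0.5, 2.5, 4001)
h = x[1] - x[0]
sn, cn, dn, _ = ellipj(x, m)

Vm = -2 - 2*m + 2*delta + 6*m*sn**2        # V_-(x) of eq. (2)
psi0 = 1 + m + delta - 3*m*sn**2           # zero-energy ground state

# -psi0'' + V_- psi0 should vanish identically.
d2 = (psi0[2:] - 2*psi0[1:-1] + psi0[:-2]) / h**2
print(np.max(np.abs(-d2 + Vm[1:-1] * psi0[1:-1])))   # ~ 0 (discretization)

# Superpotential of eq. (3) and partner potential of eq. (4).
W = 6*m*sn*cn*dn / (1 + m + delta - 3*m*sn**2)
Vp = -Vm + 72*m**2*sn**2*cn**2*dn**2 / (1 + m + delta - 3*m*sn**2)**2
Wp = (W[2:] - W[:-2]) / (2*h)              # dW/dx by central differences
print(np.max(np.abs(Vp[1:-1] - (W[1:-1]**2 + Wp))))  # V_+ = W^2 + dW/dx
```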
Proceeding in the same way, we have also obtained a new periodic potential $`V_+(x)`$ corresponding to the $`j=3`$ case of eq. (1). Here, the ground state wave function is
$$\psi _0^{(-)}(x)=\mathrm{dn}(x)[1+2m+\delta _1-5m\mathrm{sn}^2(x)]$$
and the corresponding superpotential is
$$W=\frac{m\mathrm{sn}(x)\mathrm{cn}(x)}{\mathrm{dn}(x)}\frac{[2m+\delta _1+11-15m\mathrm{sn}^2(x)]}{[2m+\delta _1+1-5m\mathrm{sn}^2(x)]}.$$
(6)
The partner potentials $`V_\pm (x)`$ turn out to be
$$V_{-}(x)=-2-5m+2\delta _1+12m\mathrm{sn}^2(x),\qquad \delta _1\equiv \sqrt{1-m+4m^2},$$
and
$$V_+(x)=-V_{-}(x)+\frac{2m^2\mathrm{sn}^2(x)\mathrm{cn}^2(x)}{\mathrm{dn}^2(x)}\frac{[2m+\delta _1+11-15m\mathrm{sn}^2(x)]^2}{[2m+\delta _1+1-5m\mathrm{sn}^2(x)]^2}.$$
(7)
Clearly, the potential $`V_{-}(x)`$ is not self-isospectral. In fact, $`V_{-}(x)`$ and $`V_+(x)`$ are distinctly different periodic potentials which have the same seven band edges corresponding to three bound bands and a continuum band .
Although in this comment we have only focused on the $`j=2,3`$ cases, it is clear that SUSYQM provides a way of generating new solvable problems for all higher $`j`$ values. This is an exciting result given the extreme scarcity of analytically solvable periodic potentials. Indeed, a further extension to even more general potentials involving Jacobi elliptic functions yields additional quasi exactly solvable periodic potentials . Partial financial support from the U.S. Department of Energy is gratefully acknowledged.
Table 1: Band Edge Eigenstates of $`V_\pm `$ for $`j=2`$ $`[\delta \equiv \sqrt{1-m+m^2},B\equiv 1+m+\delta ]`$
| n | $`E_n`$ | $`\psi _n^{(-)}`$ | $`[B-3m\mathrm{sn}^2(x)]\psi _n^{(+)}`$ |
| --- | --- | --- | --- |
| 0 | 0 | $`m+1+\delta -3m\mathrm{sn}^2(x)`$ | 1 |
| 1 | $`2\delta -1-m`$ | $`\mathrm{cn}(x)\mathrm{dn}(x)`$ | $`\mathrm{sn}(x)[6m-(m+1)B+m\mathrm{sn}^2(x)(2B-3-3m)]`$ |
| 2 | $`2\delta -1+2m`$ | $`\mathrm{sn}(x)\mathrm{dn}(x)`$ | $`\mathrm{cn}(x)[B+m\mathrm{sn}^2(x)(3-2B)]`$ |
| 3 | $`2\delta +2-m`$ | $`\mathrm{sn}(x)\mathrm{cn}(x)`$ | $`\mathrm{dn}(x)[B+\mathrm{sn}^2(x)(3m-2B)]`$ |
| 4 | $`4\delta `$ | $`m+1-\delta -3m\mathrm{sn}^2(x)`$ | $`\mathrm{sn}(x)\mathrm{cn}(x)\mathrm{dn}(x)`$ |
# 3D Spinodal Decomposition in the Inertial Regime
## Abstract
We simulate late-stage coarsening of a 3D symmetric binary fluid using a lattice Boltzmann method. With reduced lengths and times, $`l`$ and $`t`$ respectively (with scales set by viscosity, density and surface tension) our data sets cover $`1\lesssim l\lesssim 10^5`$, $`10\lesssim t\lesssim 10^8`$. We achieve Reynolds numbers approaching $`350`$. At Re $`\gtrsim 100`$ we find clear evidence of Furukawa’s inertial scaling ($`l\sim t^{2/3}`$), although the crossover from the viscous regime ($`l\sim t`$) is very broad. Though it cannot be ruled out, we find no indication that Re is self-limiting ($`l\sim t^{1/2}`$) as proposed by M. Grant and K. R. Elder$`[`$Phys. Rev. Lett. 82, 14 (1999)$`]`$.
(Received 25 February 1999)
PACS numbers: 64.75+g, 07.05.Tp, 82.20.Wt
When an incompressible binary fluid mixture is quenched far below its spinodal temperature, it will phase separate into domains of different composition. Here we consider only fully symmetric 50/50 mixtures in three dimensions, for which these domains will, at late times, form a bicontinuous structure, with sharp, well-developed interfaces. The late-time evolution of this structure remains incompletely understood despite theoretical , experimental and simulation work.
As emphasized by Siggia and Furukawa , the physics of spinodal decomposition involves capillary forces, viscous dissipation, and fluid inertia. Thus, assuming that no other physics enters, the control parameters are interfacial tension $`\sigma `$, fluid mass density $`\rho `$, and shear viscosity $`\eta `$. From these can be constructed only one length, $`L_0=\eta ^2/\rho \sigma `$ and one time $`T_0=\eta ^3/\rho \sigma ^2`$. We define the lengthscale $`L(T)`$ of the domain structure at time $`T`$ via the structure factor $`S(k)`$ as $`L=2\pi \int S(k)𝑑k/\int kS(k)𝑑k`$. The exclusion of other physics in late stage growth then leads us to the dynamical scaling hypothesis : $`l=l(t)`$, where we use reduced time and length variables, $`l\equiv L/L_0`$ and $`t\equiv (T-T_{int})/T_0`$. Since dynamical scaling should hold only after interfaces have become sharp, and transport by molecular diffusion ignorable, we have allowed for a nonuniversal offset $`T_{int}`$; thereafter the scaling function $`l(t)`$ should approach a universal form, the same for all (fully symmetric, deep-quenched, incompressible) binary fluid mixtures.
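For reference, this first-moment definition of $`L`$ is straightforward to apply to a spherically averaged $`S(k)`$; a minimal sketch (with a toy structure factor sharply peaked at $`k_0`$, for which $`L\approx 2\pi /k_0`$):

```python
import numpy as np

def domain_size(k, S):
    """L = 2*pi * int S(k) dk / int k S(k) dk (first-moment definition)."""
    dk = k[1] - k[0]                 # assumes a uniform k grid
    return 2 * np.pi * np.sum(S) * dk / (np.sum(k * S) * dk)

# Toy example: a narrow peak at k0 gives L close to 2*pi/k0.
k = np.linspace(0.01, 2.0, 2000)
k0 = 0.5
S = np.exp(-0.5 * ((k - k0) / 0.02) ** 2)
print(domain_size(k, S), 2 * np.pi / k0)
```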
It was argued further by Furukawa that, for small enough $`t`$, fluid inertia is negligible compared to viscosity, whereas for large enough $`t`$ the reverse is true. Dimensional analysis then requires the following asymptotes:
$$l=bt\hspace{1em}\text{for}\hspace{1em}t\ll t^{\ast }$$
(1)
$$l=ct^{2/3}\hspace{1em}\text{for}\hspace{1em}t\gg t^{\ast }$$
(2)
where, if dynamical scaling holds, the amplitudes $`b,c`$ (and the crossover time $`t^{\ast }`$) are universal. The Reynolds number, conventionally defined as $`\mathrm{R}e=(\rho /\eta )LdL/dT=l\dot{l}`$, becomes indefinitely large in the inertial regime, Eq. (2).
In a recent paper, Grant and Elder have argued that the Reynolds number cannot, in fact, continue to grow indefinitely. If so, Eq. (2) is not truly the large $`t`$ asymptote, which must instead have $`l\sim t^\alpha `$ with $`\alpha \leq \frac{1}{2}`$. Grant and Elder argue that at large enough Re, turbulent remixing of the interface will limit the coarsening rate , so that Re stays bounded. A saturating Re (which they estimate as Re $`\sim 10`$–$`100`$) would require any $`t^{2/3}`$ regime to eventually cross over to a limiting $`t^{1/2}`$ law. But if a single length scale $`l\sim t^{1/2}`$ is involved, a saturating Re implies balance between viscous and inertial terms ($`\sim t^{-3/2}`$), while the driving term (interfacial tension) remains much larger than either ($`\sim t^{-1}`$). This suggests a failure of scaling altogether, with at least two length scales relevant at late times. In any case, the arguments of Grant and Elder are far from rigorous; the coarsening interfaces could remain one step ahead of the remixing despite an ever-increasing Re which, if applied to a static interfacial structure, would break it up. Thus Eq. (2) cannot yet be ruled out as a limiting law.
In what follows we present the first large-scale simulations of 3D spinodal decomposition to unambiguously attain a regime in which inertial forces dominate over viscous ones. We find direct evidence for Furukawa’s $`lt^{2/3}`$ scaling, Eq. (2). Although a further crossover to a regime of saturating Re cannot be ruled out, we find no evidence for this up to Re $`350`$. Our work, which is of unprecedented scope, also probes the viscous scaling regime $`[`$Eq. (1)$`]`$, and the nature of the crossover between this and Eq. (2). Full details of our results and of the simulation algorithm will be published elsewhere.
Our simulations use a lattice Boltzmann (LB) method with the following model free energy:
$$F=\int 𝑑𝐫\left\{-\frac{A}{2}\varphi ^2+\frac{B}{4}\varphi ^4+\stackrel{~}{\rho }\mathrm{ln}\stackrel{~}{\rho }+\frac{\kappa }{2}|\nabla \varphi |^2\right\},$$
(3)
in which $`A`$, $`B`$ and $`\kappa `$ are parameters that determine quench-depth ($`A/B\sim 1`$ for a deep quench) and interfacial tension ($`\sigma =\sqrt{8\kappa A^3/9B^2}`$); $`\varphi `$ is the usual order parameter (the normalized difference in number density of the two fluid species); $`\stackrel{~}{\rho }`$ is the total fluid density, which remains (virtually) constant throughout .
The simulation code follows closely that of (for details see ) and uses a cubic lattice with nearest and next-nearest neighbor interactions (D3Q15). It was run on Cray T3D and Hitachi SR-2201 parallel machines with system sizes up to $`256^3`$. The LB method allows the user to choose $`\eta ,\sigma ,\rho `$ (we set $`\rho =1`$ without loss of generality), along with the order-parameter mobility $`M`$ defined via $`\dot{\varphi }=M\nabla ^2(\delta F/\delta \varphi )`$. Although it plays no role in the arguments leading to Eqs. (1) and (2), $`M`$ must be chosen with some care to ensure that at late times (a) the algorithm remains stable, (b) the local interfacial profiles remain close to equilibrium (so that $`\sigma `$ is well-defined), and (c) the direct contribution of diffusion to coarsening is negligibly small. Table I shows the parameters used for our eight $`256^3`$ runs.
In all runs, the interface width is $`\xi \simeq 5\sqrt{\kappa /2A}\simeq 3`$ in lattice units. This was found to be the minimum acceptable to obtain an accurately isotropic surface tension. To minimise diffusive effects, data for which the diffusive contribution to the growth rate was greater than 2% was discarded ; this corresponded to a minimum value of $`L`$ of $`15<L_{min}<24`$, depending on the run parameters. The large size of our runs allowed a ruthless attitude to finite size effects: we use no data with $`L>\mathrm{\Lambda }/4`$, with $`\mathrm{\Lambda }`$ the linear system size. In our $`256^3`$ runs, these filters mean that the good data from any single run lies within $`20\lesssim L\leq 64`$, a comparable range to previous studies . Datasets of high and low $`L_0`$ are well fit respectively by $`\alpha =1`$ and $`\alpha =2/3`$ (see Fig.1).
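For orientation, the dependence of the interfacial quantities on the free-energy parameters is easy to tabulate; the sketch below uses purely illustrative parameter values (assumptions, not the actual entries of Table I):

```python
import numpy as np

def interface_params(A, B, kappa):
    """Interfacial tension and width for the free energy of eq. (3)."""
    sigma = np.sqrt(8 * kappa * A**3 / (9 * B**2))
    xi = 5 * np.sqrt(kappa / (2 * A))      # width measure used in the text
    return sigma, xi

# Hypothetical deep-quench parameters with A/B ~ 1 (not from Table I):
A, B, kappa = 0.0625, 0.0625, 0.04
sigma, xi = interface_params(A, B, kappa)
L0 = 1.0**2 / (1.0 * sigma)                # L0 = eta^2/(rho sigma), eta = rho = 1
T0 = 1.0**3 / (1.0 * sigma**2)             # T0 = eta^3/(rho sigma^2)
print(sigma, xi, L0, T0)
```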
However, as emphasized by Jury et al. , meaningful tests of scaling are best made not by looking at single data sets but by combining those of different parameter values. To this end, the good data from each run were fit to $`L=B(T-T_{int})^\alpha `$, so as to extract an intercept $`T_{int}`$; we then transformed the data to reduced physical units $`l`$ and $`t`$ defined above. The exponent $`\alpha `$ was first allowed to float freely; this gave reproducible values at large $`l`$ and $`t`$ (e.g., $`\alpha =0.69`$ and $`0.67`$ for the last two data sets in Table 1), but more scattered ones at small $`l`$ and $`t`$ ($`\alpha =0.88`$, $`0.86`$ and $`1.16`$ for the first three data sets). In the latter region the floating fit is relatively poorly conditioned; it also gives large relative errors in $`T_{int}`$ (see Fig.1). In contrast, fits to $`\alpha =1`$ for these three data sets gave much better data collapse with consistent values of $`b`$ ($`b=0.073`$, $`0.073`$ and $`0.072\pm 0.01`$). Thus we are confident of $`\alpha =1`$ in this region. For the remaining data sets we estimate errors in individual exponent values at around 10% and in reduced time $`t`$ around 3% to 10%. Figure 2 shows all our data sets on a single plot using reduced variables $`l`$ and $`t`$. Such a plot is necessarily log-log, since our data sets span seven decades in $`t`$ and five in $`l`$, a range which exceeds all previous studies combined.
These LB results are fully consistent with the existence of a single underlying scaling curve $`l=l(t)`$, in which viscous ($`l=bt`$) and inertial ($`l=ct^{2/3}`$) asymptotes are connected by a long crossover whose breadth justifies our use of a single floating exponent $`\alpha `$ in the fits used above to extract $`T_{int}`$ for each run. Although we cannot rule it out for still larger times $`t`$, we see no evidence for a further crossover to a regime with asymptotic exponent $`\alpha \leq 1/2`$ as demanded by Grant and Elder .
Before considering our results in more detail, we discuss their relation to others previously published. We restrict attention to those 3D data sets for which reliable estimates of $`L_0`$ and $`T_0`$ exist . Datasets of Laradji et al. and of Bastea and Lebowitz are shown on Fig.2 (fitted to $`\alpha =1`$ ). These lie in an $`l,t`$ range ($`1\lesssim l\lesssim 20`$) in which our own data shows viscous (linear) scaling $`[`$Eq. (1)$`]`$; both data sets were claimed to confirm the linear law by their authors, but with differing values of $`b=0.13`$, 0.3. Our own $`b`$ values are lower than either (see above and Fig.2). As noted above, we took special care to ensure that the diffusive contribution to coarsening was small; we have found that, for matching $`L_0,T_0`$ values, LB data sets similar to those of Refs. can be generated using too large a mobility $`M`$. We hypothesize therefore that both data sets have strong residual diffusion, leading to an overestimate of $`b`$. Likewise the data of Appert et al. , which lies in the crossover regime of our scaling plot, asymptotes to our data from above; this suggests that their fitted exponent $`\alpha \simeq 2/3`$ is too low because of diffusion.
A different explanation, based on a possible nonuniversality of the physics of topological reconnection of domains, was suggested by Jury et al. , whose dissipative particle dynamics (DPD) results also appear in Fig.2 (inset) . These authors found that each data set was well fit by a linear scaling, Eq. (1), but with a systematic increase of the $`b`$ coefficient upon moving from upper right to lower left in the scaling plot. Their alternative suggestion was that their own data, and that of Refs., were part of an extremely broad crossover region, $`1\lesssim t\lesssim 10^4`$ in reduced time. Our LB data support the idea of a broad crossover, but instead place it at $`10^2\lesssim t\lesssim 10^6`$. Note that, unlike those of Refs., all the data sets of Jury et al. do lie very close to our own (Fig.2 inset). Since the two simulation methods are entirely different, this lends support to the idea of a universal scaling, although the fact that each DPD run is best fit by a locally linear growth law does not . The latter could be partly due to finite size effects; to obtain enough data, Jury et al. included results up to $`L=\mathrm{\Lambda }/2`$, whereas we reject all data with $`L>\mathrm{\Lambda }/4`$.
The arguments of Ref. involve the intrusion of a second length scale, alongside $`L_0`$, which in the LB context is the interfacial width $`\xi `$ (or more generally, a molecular scale). The ratio $`h=\xi /L_0`$ for real fluids is in the range $`0.05`$ (water) to $`10^{-7}`$ (glycerol). In simulations, $`\xi `$ cannot be smaller than the lattice spacing, and the inertial region is achieved by setting $`L_0\sim 1`$, so $`h\sim 1`$. In this sense our interface is “unnaturally thick”: simulation runs that enter the inertial regime do so directly from a diffusive one, without an intervening viscous regime. However, this should not matter if $`l(t)`$ follows a universal curve, as our results (in contrast to Ref.), in fact suggest. But the microscopic length still plays an interesting role, as follows. As a fluid neck stretches thinner and thinner before breaking, it shrinks laterally to the scale $`\xi `$; diffusion then takes over to finish the job of reconnection. So, although our work involves length scales where the direct contribution of diffusion to domain growth is negligible, we must ensure that it is handled correctly at smaller scales. This factor limits the accessible range of $`l`$ and $`t`$, not only at the lower but also at the upper end .
The breadth of the viscous-inertial crossover is somewhat less extreme when expressed in terms of Re (see above); our data span $`0.1\lesssim `$Re$`\lesssim 350`$ and the crossover region is roughly $`1\lesssim `$Re$`\lesssim 50`$. Re values (at $`L\simeq 50`$) for each run are shown in Fig.3 against reduced time $`t`$. Data are consistent with Re $`\sim t^{1/3}`$ as predicted from Eq. (2). Note that, in simulating high Re flows, one should strive to ensure that the dissipation scale (defined as $`\lambda _d=(\eta ^3/ϵ\rho ^3)^{1/4}`$, with $`ϵ`$ the energy dissipation per unit volume) always remains larger than the lattice spacing. This ensures that any turbulent cascade (whose shortest scale is $`\lambda _d`$) remains fully resolved by the grid. Equating dissipation with the loss of interfacial energy, one has $`ϵ\sim -d(\sigma /L)/dT`$ and so, in reduced units, $`\lambda _d\sim (l^2/\dot{l})^{1/4}`$. Comparable $`ϵ`$ values are found directly from our simulated velocity data; and $`\lambda _d`$ remains larger than the grid size for all our runs .
A decisive check that we really are simulating a regime where inertial forces dominate over viscous ones, is based directly on the velocity fields found in our simulations . From these we calculated rms values of the individual terms in the Navier-Stokes equation ($`\rho =1`$), $`\left(\partial 𝐯/\partial t+𝐯\cdot \nabla 𝐯\right)=\eta \nabla ^2𝐯-\nabla \cdot 𝐏.`$ Here $`𝐏`$, the pressure tensor, contains the driving terms arising from interfacial tension. Ratios $`R_1=\|\partial 𝐯/\partial t\|_{rms}/\|\eta \nabla ^2𝐯\|_{rms}`$ and $`R_2=\|𝐯\cdot \nabla 𝐯\|_{rms}/\|\eta \nabla ^2𝐯\|_{rms}`$, were then computed; these can be seen in Fig.3.
The ratio $`R_2`$ is closely related to the Reynolds number Re: it differs in representing length and velocity measures based on the rms fluid flow rather than on the interface dynamics and, because the length scales associated with the velocity gradients are smaller than the domain size, is significantly smaller than Re. The dominance (by a factor ten) of inertial over viscous forces is, at late times, nonetheless clear (Fig.3).
We finally ask whether, at the largest Re values we can reach, there is in fact significant turbulence in the fluid flow. One quantitative signature of turbulence is the skewness $`S`$ of the longitudinal velocity derivatives; this is close to zero in laminar flow but approaches $`S=-0.5`$ in fully developed turbulence . We do detect increasingly negative $`S`$ as Re is increased but reach only $`S\simeq -0.3`$ for Re $`\simeq 350`$ . This suggests that at our highest Re’s, turbulence is at most partially developed – a view confirmed by visual inspection of velocity maps . Grant and Elder’s suggestion of an eventual transition to turbulent remixing thus remains open.
In conclusion, we have presented LB simulation data for 3D spinodal decomposition which spans an unprecedented range of reduced time and length scales. At $`t\lesssim 10^2`$ (Re $`\lesssim 1`$) we observe linear scaling, as announced in the previous literature . This is followed by a long crossover ($`10^2\lesssim t\lesssim 10^6`$, or $`1\lesssim `$Re$`\lesssim 50`$) connecting to a regime in which inertial forces clearly dominate over viscous ones (see Fig.3); our work is the first to unambiguously probe this regime in 3D. In the region so far accessible ($`10^6\lesssim t\lesssim 10^8`$, or $`50\lesssim `$Re$`\lesssim 350`$) Furukawa’s prediction of $`t^{2/3}`$ scaling is obeyed, to within simulation error. An open issue is whether this regime marks the final asymptote or whether a further crossover occurs to a turbulent remixing regime (saturating Re) as proposed by Grant and Elder . If it does, we have shown that any limiting value of Re must significantly exceed their estimate of $`10`$–$`100`$.
We thank Craig Johnston, Simon Jury, David McComb, Patrick Warren and Julia Yeomans for valuable discussions. Work funded in part under the EPSRC E7 Grand Challenge.
# Higgs production with large transverse momentum in hadronic collisions at next-to-leading order
Work partly supported by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12 - MIHT) and the Swiss National Foundation.
## Abstract
Inclusive associated production of a light Higgs boson ($`m_\mathrm{H}\lesssim m_t`$) with one jet in $`pp`$ collisions is studied in next-to-leading order QCD. Transverse momentum ($`p_\mathrm{T}\geq 30\mathrm{GeV}`$) and rapidity distributions of the Higgs boson are calculated for the LHC in the large top-quark mass limit. It is pointed out that, as much as in the case of inclusive Higgs production, the $`K`$-factor of this process is large ($`\simeq 1.6`$) and depends weakly on the kinematics in a wide range of transverse momentum and rapidity intervals. Our result confirms previous suggestions that the production channel $`p+p\to H+\mathrm{jet}\to \gamma +\gamma +\mathrm{jet}`$ gives a measurable signal for Higgs production at the LHC in the mass range $`100`$–$`140\mathrm{GeV}`$, crucial also for the ultimate test of the Minimal Supersymmetric Standard Model.
preprint: ETH-TH/99-06 February 1999
Recent results from LEP and the SLC indicate that the Higgs boson of the Standard Model might be light. A fit to the precision data has given the values $`m_\mathrm{H}=76_{-47}^{+85}\mathrm{GeV}`$, corresponding to $`m_\mathrm{H}\leq 262\mathrm{GeV}`$ at the $`95\%`$ confidence level, whereas a direct search at LEP200 gives the lower limit as $`90\mathrm{GeV}\lesssim m_\mathrm{H}`$ . In addition, a crucial theoretical upper limit exists on the mass of the light neutral scalar Higgs boson of the Minimal Supersymmetric Standard Model $`m_h\lesssim 130\mathrm{GeV}`$. It is, therefore, significant that one attempts to get the best possible signals in the light mass range of $`100\mathrm{GeV}\leq m_\mathrm{H}\leq 140\mathrm{GeV}`$ at the LHC. Simulation studies carried out by ATLAS and CMS have shown that assuming a low integrated luminosity of $`3\times 10^4\mathrm{pb}^{-1}`$, even in the case of the “gold-plated” decay channel into two photons, the signal significance ($`S/\sqrt{B}`$) is only around 5 . This conclusion depends on the value of the $`K`$-factor (a conservative value $`K`$=1.5 was used) and some plausible assumptions on the size of the background. The calculation of the next-to-leading order (NLO) corrections to the background is not yet complete and the contribution of the NNLO subprocess $`gg\to \gamma \gamma `$ is large . The complete NNLO analysis is extremely laborious, but appears to be feasible. Actually, for the full numerical control the background has to be calculated in NNNLO, which is completely beyond the scope of presently available techniques. Fortunately, the ambiguity in the value of the background to the signal is suppressed by the square root appearing in the definition of the signal significance. This situation, which is not completely satisfactory, can be improved by studying the $`\gamma +\gamma +\mathrm{jet}(\mathrm{s})`$ final states (the study of Higgs production in association with a jet was first suggested in the context of improving $`\tau `$ reconstruction in the $`\tau ^+\tau ^-`$ decay channel ); this offers several advantages. The photons are more energetic than in the case of the inclusive channel and the reconstruction of the jet in the calorimeter allows a more precise determination of the interaction vertex, improving the efficiency and mass resolution. Furthermore the existence of a jet in the final state allows for a new type of event selection and a more efficient background suppression. In addition, the necessary control of the background contributions can probably be already achieved by the inclusion of the NLO corrections (the matrix elements are known ). In a recent phenomenological study it has been found that these advantages appear to be able to compensate the loss in production rates, provided one gets a large $`K`$-factor also for this process. The presentation of the NLO QCD corrections for this process is the main purpose of this letter.
The production process $`gg\to H`$ is given by loop diagrams in the Born approximation, since the gluons interact with the Higgs boson via virtual quark loops . The exact calculation of the NLO corrections is rather complex . Fortunately, the effective field theory approach obtained in the large top mass limit with effective gluon–gluon–Higgs coupling gives an accurate approximation (with or without QCD corrections) with an error less than 5%, provided $`m_\mathrm{H}\lesssim 2m_\mathrm{t}`$ . It has been checked in LO, by an explicit calculation, that the approximation remains valid also for the production of Higgs bosons with large transverse momentum, provided both $`m_\mathrm{H}`$ and $`p_\mathrm{T}`$ are smaller than $`m_\mathrm{t}`$ . It is therefore plausible to assume that the approximation remains valid also if we include NLO QCD corrections. Recently, in this approximation and using the helicity method, the transition amplitudes relevant to the NLO corrections have been analytically calculated for all the contributing subprocesses (loop corrections and bremsstrahlung ).
The available NLO matrix elements contain soft and collinear singularities and therefore do not allow for a direct numerical evaluation of the physical cross section. In the past few years, exploiting the universal structure of the soft and collinear contributions, several efficient algorithms have been suggested to obtain finite cross section expressions from the singular NLO matrix elements. We have used the method of ref. and implemented it into a numerical Monte Carlo style program which allows to calculate any infrared-safe physical quantity for the inclusive production of a Higgs boson with one jet in NLO accuracy.
In this paper we report some of our results obtained for proton–proton collisions with $`\sqrt{S}=14\mathrm{TeV}`$. For the strong coupling constant at NLO (LO) we use the standard two-loop (one-loop) form with $`\mathrm{\Lambda }_{QCD}`$ set to the value used in the analysis of the parton distribution function under consideration. Our default choice for the factorization and renormalization scales is $`Q_0^2=(m_\mathrm{H}^2+p_\mathrm{T}^2)`$, where $`p_\mathrm{T}`$ is the transverse momentum of the Higgs boson. Here, unless stated otherwise, we consider the case of $`m_\mathrm{H}=120`$ GeV and $`p_\mathrm{T}>30`$ GeV in the kinematical region where the perturbative result can be applied without having to consider low-$`p_\mathrm{T}`$ resummation effects.
Most of our curves have been obtained with MRST (ft08a) parton distribution functions, but we will also show some results using CTEQ(4M) and GRV98 . To compare the leading with the NLO results, for consistency, we use the corresponding LO parton distributions from each set. We shall discuss only results for the inclusive production cross section of a Higgs boson with large transverse momentum, although as we mentioned above the Monte Carlo program allows to study any infrared-safe quantity, including the implementation of different jet algorithms and experimental cuts.
In Fig. 1(a) we show the $`p_\mathrm{T}`$ distribution of the NLO and LO cross sections using MRST parton densities at three different scales $`Q=\mu Q_0`$, with $`\mu =0.5,1,2`$. In this figure one can see three important points. First, the radiative corrections are large; second, there is a reduction in the scale dependence when going from LO to NLO; third, the improvement in the scale dependence is still not completely satisfactory. The same features can be observed in more detail in Fig. 1(b), where the LO and NLO cross sections integrated for $`p_\mathrm{T}`$ larger than 30 and 70 GeV are shown as a function of the renormalization/factorization scale. Both the LO and NLO cross sections increase monotonically with decreasing $`\mu `$ scale, down to the limiting value where perturbative QCD can still be applied. In NLO the scale dependence has a maximum and its position characterizes the stability of the NLO perturbative results. In our case, as a result of the very large positive radiative corrections, the position of the maximum is shifted down to small $`\mu `$ values where the perturbative treatment is not valid, indicating that the stability of the NLO result is not completely satisfactory. In the usual range of variation of $`\mu `$ from 0.5 to 2, however, the LO scale uncertainty amounts up to $`\pm 35\%`$, whereas at NLO this is reduced to $`\pm 20\%`$, indicating the relevance of the QCD corrections. This feature of the scale dependence in NLO is very much the same as the one in the case of inclusive Higgs production .
In Fig. 2 we show the ratio
$$K=\frac{\mathrm{\Delta }\sigma _{NLO}}{\mathrm{\Delta }\sigma _{LO}}$$
(1)
of the next-to leading and the LO cross sections ($`K`$-factor) for the three different sets of parton distributions: MRST, CTEQ and GRV98, as a function of the transverse momentum and the rapidity of the Higgs boson. We can see that the $`K`$-factor is in the range 1.5–1.6 and it is almost constant (within 15% accuracy) for a large range of $`p_\mathrm{T}`$ and $`y`$. In the $`p_\mathrm{T}`$ distribution the variation never exceeds 10%, whereas it is a bit larger for the $`y`$ distribution at large $`|y|`$.
The ratios of the NLO cross sections
$$R=\frac{\mathrm{\Delta }\sigma _{\mathrm{CTEQ},\mathrm{GRV}}}{\mathrm{\Delta }\sigma _{\mathrm{MRST}}}$$
(2)
computed by using CTEQ and GRV98 over the one obtained by using MRST parton densities are also shown. From there, it is possible to see that the differences in the $`K`$-factors basically come from variations in the LO cross sections, mostly because of the value of $`\mathrm{\Lambda }_{QCD}`$ used in each set.
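Both quantities are simple bin-by-bin ratios of differential cross sections; the sketch below uses invented, purely illustrative numbers (assumptions, not the actual MRST/CTEQ/GRV98 results) just to show the bookkeeping:

```python
import numpy as np

# Hypothetical d(sigma)/d(pT) values in pb/GeV (illustrative only).
pT            = np.array([30.,  50.,  70., 100.])
dsig_LO_MRST  = np.array([1.00, 0.45, 0.20, 0.07])
dsig_NLO_MRST = np.array([1.58, 0.72, 0.32, 0.11])
dsig_NLO_CTEQ = np.array([1.62, 0.74, 0.33, 0.11])

K = dsig_NLO_MRST / dsig_LO_MRST       # K-factor of eq. (1), bin by bin
R = dsig_NLO_CTEQ / dsig_NLO_MRST      # parton-density ratio of eq. (2)
print(K)                               # roughly flat, ~1.6
print(R)
```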
We conclude that the properties of the $`K`$-factors found for large transverse momentum Higgs production are very similar to the ones obtained for the total inclusive Higgs production. They are about the same size, they show the same scale dependence, and the $`K`$-factor changes mildly with changing $`m_\mathrm{H},y`$ and $`p_\mathrm{T}`$. Its large value and its surprising independence from the kinematics might be interpreted as an evidence for some universal origin of the large radiative corrections . This requires further theoretical understanding.
We have not included in our analysis the contributions from electroweak reactions, which can increase the cross section by about 10% when suitable cuts are applied . Nevertheless it is worth noticing that since QCD corrections to electroweak boson fusion are substantially smaller than the ones corresponding to gluon fusion, the significance of the electroweak contributions is reduced at NLO.
In Fig. 3 we show NLO cross section values for the physics signal $`p+p\to H+\mathrm{jet}\to \gamma +\gamma +\mathrm{jet}`$ as a function of the Higgs mass (with a reference value for the branching ratio given by Br$`(H\to \gamma \gamma )=2.18\times 10^{-3}`$ for $`m_\mathrm{H}=120`$ GeV ). For comparison, the cross section values of the physics signal $`p+p\to H\to \gamma +\gamma `$ are also shown. From there it is possible to see that the loss in production rate due to the transverse momentum cut of $`p_\mathrm{T}>30`$ GeV is less than a factor of 2 for the range of masses considered.
In conclusion, we have pointed out that, much as in the case of inclusive Higgs production, the cross section values of the associated production of a Higgs boson with a jet are increased by a $`K`$-factor of 1.5–1.6 given by NLO QCD radiative corrections. Our result confirms previous suggestions that the production channel $`p+p\to H+\mathrm{jet}\to \gamma +\gamma +\mathrm{jet}`$ gives a measurable signal for Higgs production at the LHC in the mass range $`100`$–$`140\mathrm{GeV}`$, crucial also for the ultimate test of the Minimal Supersymmetric Standard Model.
We are grateful to M. Spira for discussions. One of us (DdeF) would like to thank S. Frixione for helpful comments.
GLUON CORRELATION MOMENTS RATIO IN THE INSTANTON FIELD
V.KUVSHINOV AND R.SHULYAKOVSKY
Institute of Physics, National Academy of Sciences of Belarus
Minsk 220072 Scaryna av.,70
E-mail: kuvshino@dragon.bas-net.by and
shul@dragon.bas-net.by
The instanton-induced multiple events in high energy collisions are considered in nonperturbative quantum chromodynamics (QCD). Here we obtain the unusual behaviour of the ratio of correlation moments $`H_q`$ for such processes, which can be used for the experimental search for instantons.
As is known, Yang-Mills gauge theories have a highly degenerate vacuum structure at the classical level . Quantum tunnelling transitions between different vacuum states are associated with instantons .
An experimental search for QCD instantons is already under way at HERA (DESY, Hamburg) in electron-proton deep inelastic scattering . The theoretically predicted features of the instanton channel of multiple production are the following: high parton multiplicity (about $`10÷20`$ at HERA ); an isotropic parton distribution in the instanton rest frame and a uniform distribution of quark flavours ; a specific behaviour of the total cross section and of the two-particle correlation function .
Here we study the behaviour of the ratio of correlation moments $`H_q=K_q/F_q`$ as a new criterion for instanton identification, where $`F_q`$, $`K_q`$ and $`H_q`$ denote the factorial, cumulant and so-called $`H_q`$-moments, respectively. The ratio $`H_q`$ is a more sensitive quantity for distinguishing multiplicity distributions .
In the quasiclassical approximation, a Poisson distribution was obtained for the probability of producing $`n`$ gluons in instanton-induced multigluon final states . In this case the results are trivial: $`G(z)=e^{A[z-1]}`$, $`F_q=1`$, $`K_q=\delta _{q1}`$, $`H_q=\delta _{q1}`$.
Taking into account the first quantum correction, one obtains the following formula for the generating function :
$$G(z)\equiv \sum _{n=0}^{\infty }P_nz^n=e^{A[z-1]}\frac{1+Bz^2}{1+B},\qquad A=\frac{4\pi }{\alpha _0}\left(\frac{1-x^{\ast }}{x^{\ast }}\right)^2,$$
$$B=\frac{2\pi }{\alpha _0}\left(\frac{1-x^{\ast }}{x^{\ast }}\right)^3,\qquad x^{\ast }\approx 0.5÷1.$$
$`(1)`$
where the strong coupling constant $`\alpha _0=\alpha (\rho _{cut})`$, $`\rho _{cut}`$ is the instanton size cut-off, and $`x^{\ast }`$ is the Bjorken variable of the parton–parton collision.
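The normalized moments follow from the generating function by differentiation at $`z=1`$: $`F_q=G^{(q)}(1)/\overline{n}^q`$ and $`K_q=(d^q\mathrm{ln}G/dz^q)|_{z=1}/\overline{n}^q`$, with $`\overline{n}=G^{\prime }(1)`$. A minimal sketch (using sympy; the values of $`\alpha _0`$ and $`x^{\ast }`$ below are illustrative assumptions within the quoted range):

```python
import sympy as sp

z = sp.symbols('z')
alpha0, xstar = 0.12, 0.6                      # illustrative assumptions
A = 4 * sp.pi / alpha0 * ((1 - xstar) / xstar) ** 2
B = 2 * sp.pi / alpha0 * ((1 - xstar) / xstar) ** 3

G = sp.exp(A * (z - 1)) * (1 + B * z**2) / (1 + B)   # eq. (1)
nbar = sp.diff(G, z).subs(z, 1)                      # mean multiplicity

def H_moment(q):
    Fq = sp.diff(G, z, q).subs(z, 1) / nbar**q           # factorial moment
    Kq = sp.diff(sp.log(G), z, q).subs(z, 1) / nbar**q   # cumulant moment
    return float(Kq / Fq)

for q in range(2, 8):
    print(q, H_moment(q))   # H_2 < 0; the full q-dependence is what Fig. 1 shows
```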
By direct calculation we obtain the behaviour of the $`H_q`$-moments as a function of $`q`$ (Fig. 1). The $`H_q`$-moments are negative and have their first minimum at $`q=2`$. Such a dependence of $`H_q`$ on the rank $`q`$ may serve as a new criterion for identifying instantons in experiment.
Fig. 1. $`H_q`$-moments as functions of their rank.
The reason is the unusual and specific position of the first minimum for this nonperturbative process, which, as our estimates show, is not shifted by the next quantum corrections in the chosen interval of variables. Perturbative QCD calculations, confirmed by experimental data for ordinary multiple production of different types, give the first minimum at $`q=5`$. Such a clear distinction between the perturbative and nonperturbative calculations is of principal importance from both the experimental and theoretical points of view.
The authors gratefully acknowledge partial support from the Basic Science Foundation of Belarus (Projects M96-023, F97-013).
|
no-problem/9902/astro-ph9902226.html
|
ar5iv
|
text
|
# Determining the Galactic mass distribution using tidal streams from globular clusters
## 1 Introduction
There are currently several important problems and related disputes which depend on our understanding of the mass distribution in the Galaxy. In the inner Galaxy, the maximum disk controversy revolves around determining the relative contributions of disk and halo to the measured rotation curve (e.g. Debattista & Sellwood 1998; Tremaine & Ostriker 1998). The measurement of the rotation curve is itself controversial (Olling & Merrifield 1998). At intermediate distances in the halo, the interpretation of microlensing searches for dark matter candidates depends fairly strongly on the shape of the Galaxy (Alcock et al 1997). Finally, cosmological models make strong predictions for global halo structure (e.g. Dubinski & Carlberg 1991; Navarro, Frenk & White 1997). These results help fuel maximum disk arguments but also predict the shape of the halo at large distances. Resolving these issues requires significant improvements in our understanding of the mass distribution in the Galaxy.
A variety of methods have been used to estimate the mass of the Galaxy in different regions (see Fich & Tremaine 1991 for a review). In the inner Galaxy, estimates typically rely on measurements from observable material in the disk, including HI within the solar radius and open clusters, OB associations and planetary nebulae beyond the solar radius. Consequently, the mass distribution above and below the disk is relatively poorly known (Dehnen & Binney 1998). In the outer Galaxy, estimates typically rely on the dynamics of satellites and are subject to uncertainties regarding whether individual objects are bound to the Galaxy (e.g. Leo I) and whether or not the entire distribution is in equilibrium given the long orbital timescales (e.g. Little & Tremaine 1989; Kochanek 1996).
If we could perform experiments to determine the mass, we would choose an ensemble of test particles and study their motion in time under the influence of the Galactic gravitational field. Although we cannot do this, it has been pointed out that tidal streams trace orbits in the potential (e.g. Lynden-Bell 1982; Kuhn 1993; Johnston et al 1998), and can therefore be used to determine the mass distribution giving rise to that potential. In this sense, tidal streams are analogous to streak lines which are used to trace steady fluid flow (Batchelor 1967).
However, with the sole exception of the Magellanic stream– whose origin and dynamics remain controversial (Moore & Davis 1994)– full-fledged tidal streams remain unobserved. Nevertheless, there is an abundance of theoretical work which predicts the existence of tidal streams and other substructure, either from fully disrupted, infalling satellites (e.g. Tremaine 1993; Johnston, Hernquist & Bolte 1996) or from visible satellites such as globular clusters which undergo mass loss as they orbit in the Galaxy (e.g. Gnedin & Ostriker 1997; Murali & Weinberg 1997; Vesperini 1997). In addition, the hierarchical picture of structure formation predicts that Galactic halos should contain a significant amount of debris from accreted substructure, suggesting that the halo is filled with tidal streams (Johnston et al 1996).
Recent observations have finally begun to reveal traces of tidal streams and substructure in the Galactic stellar halo. The Sagittarius dwarf provides an archetype for satellite accretion (Ibata, Gilmore & Irwin 1994) and ongoing observations are attempting to reveal the associated stream (Mateo et al 1998). Other observations have revealed moving groups and phase space substructure (e.g. Majewksi, Munn & Hawley 1996) and extra-tidal stars surrounding globular clusters (Grillmair et al 1995). Given proposed astrometric satellites, SIM and GAIA, the possibilities for using tidal streams to probe Galactic structure appear to have multiplied dramatically (Johnston et al 1998; Zhao et al 1999).
Given this motivation, we discuss in this paper the use of tidal streams from globular clusters, as suggested by Grillmair (1997), to probe the mass and potential of the Galaxy. Theoretical work on cluster evolution (e.g. Gnedin & Ostriker 1997; Murali & Weinberg 1997; Vesperini 1997) suggests that globulars with mass $`M_c\lesssim 10^5M_{\odot }`$ and Galactocentric radius $`R_g\lesssim 20\mathrm{kpc}`$ will provide excellent candidates for stream measurements because they tend to lose mass through the combined effects of internal relaxation, tidal heating and post-collapse heating of the core. This suggests that many good candidates are available. We first summarize in §2 the dynamics of tidal streams created by mass loss from a satellite orbiting in an external potential. Then, in §3, we develop two methods for determining the Galactic mass and potential using tidal streams. Tests of these methods using Pal 5 as a model cluster and of the observational requirements are presented in §4. The interpretation and importance of the results as well as further possibilities are discussed in §5. In particular, we point out that recent proper motion determinations from plate measurements and Hipparcos data provide a sample of globular clusters which are good candidates for stream observations and mass determinations.
## 2 Dynamics of tidal streams
Mass loss from globular clusters is driven by a combination of internal relaxation, tidal heating and post-collapse heating of the core (e.g. most recently Gnedin & Ostriker 1997; Murali & Weinberg 1997; Vesperini 1997). Escaping stars reach the inner and outer Lagrange points nearly at rest and evaporate from the system. These particles have slight energy offsets from the center-of-mass of the system due to the small difference in potential energy determined by the finite size of the satellite (Tremaine 1993; Johnston 1998). The cluster center-of-mass energy $`E_c\approx \mathrm{\Phi }_G(|R_c|)`$, the center-of-mass potential energy at perigalacticon; this defines a first-order, dimensionless energy correction for escaping stars:
$$\delta =\frac{\mathrm{\Phi }_G(|R_c\pm r_t|)-\mathrm{\Phi }_G(|R_c|)}{\mathrm{\Phi }_G(|R_c|)}\approx \pm \frac{r_t}{R_c}\frac{d\mathrm{ln}|\mathrm{\Phi }_G|}{d\mathrm{ln}R_c}\approx \pm |\chi |\frac{r_t}{R_c}.$$
(1)
The parameter $`\delta =ϵ/\mathrm{\Phi }_G(R_c)`$, where $`ϵ`$ is the energy scale defined by Johnston (1998). The parameter $`|\chi |\sim 1`$ ($`\chi =1`$ for a Kepler potential). For the typical globular clusters we will consider below, $`r_t\sim 50\mathrm{pc}`$ and $`R_c\sim 10\mathrm{kpc}`$, so that $`\delta \sim 0.005`$, i.e. less than a 1% correction. Therefore the mean motion of the stream is nearly indistinguishable from that of the center-of-mass of the satellite.
Johnston (1998) finds that the energy offsets of stripped material lie in the range $`0`$–$`2\delta `$ and are sharply peaked about $`\delta `$. Thus the distribution of total energies lies in the range $`E_{com}\lesssim E_s\lesssim (1+2\delta )E_{com}`$ (for positive $`\delta `$). Therefore, the velocity spread in the stream falls in the range $`V_{com}\lesssim V_s\lesssim \sqrt{1+2\delta }V_{com}`$. For $`\delta =0.005`$ and $`V_{com}\approx 220\mathrm{km}\mathrm{s}^{-1}`$, the velocity spread is $`\mathrm{\Delta }v\approx 1\mathrm{km}\mathrm{s}^{-1}`$.
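These numbers are easy to check; a minimal sketch of the arithmetic (in Python, using the illustrative values quoted above):

```python
import numpy as np

r_t = 0.05    # tidal radius [kpc], i.e. ~50 pc
R_c = 10.0    # perigalactic distance [kpc]
chi = 1.0     # |chi| ~ 1 (exactly 1 for a Kepler potential)

delta = chi * r_t / R_c                          # Eq. (1), to first order
V_com = 220.0                                    # center-of-mass speed [km/s]
dV = (np.sqrt(1.0 + 2.0 * delta) - 1.0) * V_com  # width of the velocity range
print(f"delta = {delta:.3f}, velocity spread ~ {dV:.1f} km/s")  # 0.005, ~1 km/s
```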
### 2.1 Simulated tidal streams
The phase space coordinates of individual stream stars are determined by the mass loss rate and fine-grained distribution of particle positions and velocities at the Lagrange points. Once particles are injected into the stream, they phase mix according to the collisionless Boltzmann equation; from the fine-grained evolution, one can calculate the velocity and density structure along the stream (Tremaine 1998; Helmi & White 1999).
Here we generate streams using both simple N-body simulations and an analytic approximation to the characteristics of the projected stream based on the discussion of energetics given above. In the quasistatic evolution of globular clusters, the mass loss rate is small so the potential remains very nearly spherical and constant over the timescales considered here. Therefore the N-body simulations use a fixed, Plummer-law satellite with test particle orbits integrated along the satellite’s orbit in the Galaxy.
With N-body calculations, it is difficult to accurately reproduce expected mass loss rates from globular clusters given the importance of internal relaxation and its dependence on the stellar mass spectrum as well as the possible importance of core heating in evaporating clusters (e.g. Gnedin & Ostriker 1997; Murali & Weinberg 1997). Therefore, to increase our flexibility, we adopt a simple Gaussian approximation to simulate the projected characteristics of tidal streams for use in the projected orbit fits discussed below. Of course, the dynamics of the stream are best studied through direct orbit integration and N-body simulation: below we compare this approximation with the results of a simulation to ensure that the approach is reasonable.
In this approximation, we adopt a mass loss rate to specify the number of stars in the stream and assume that the stream stars have Gaussian distributions of: 1) orbital phases about the current phase of the satellite; 2) line-of-sight, radial velocities about the line-of-sight, radial velocity of the satellite at the star’s phase; 3) angular offsets from the angular position of the satellite at the star’s phase. The phase distribution is very narrow for recent mass loss; this corresponds to a distribution clumped about the satellite. The phase distribution is very broad for mass loss in the distant past; this corresponds to a uniform phase distribution or a phase-mixed stream. The dispersion of the radial velocity distribution is given roughly by the velocity range determined from the energy spread. The angular dispersion corresponds to the angular size of the system at the star’s orbital phase.
In practice, we choose phase by sampling time along the orbit, since the azimuthal phase angle $`w=\mathrm{\Omega }t`$, where $`\mathrm{\Omega }`$ is the azimuthal frequency of the orbit. The Galactic latitude of the satellite at this time is chosen as the phase variable (this is usually unique for small angular scales; sometimes it may be necessary to choose a different independent variable, e.g. if the stream makes a loop on the sky in $`l`$). Then radial velocity and angular variates are generated assuming means given by the center-of-mass coordinates at this phase. This provides a reasonable approximation for the characteristics of a stream from a globular cluster; for a larger satellite, it would be necessary to account for the offset of the stream from the orbit of the satellite (Johnston 1998). The approximation is useful because we can arbitrarily change the number of stars in the stream and their phase distribution in order to explore observational possibilities. A minimal sketch of this procedure is given below.
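The following sketch implements the stream generator just described, under the assumptions above. The function `orbit` is a hypothetical placeholder for an actual orbit integration (it must return the satellite's longitude, latitude and radial velocity at a given time); the default dispersions are the Pal 5-like values adopted in §4.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def realize_stream(n_stars, t_now, orbit,
                   sigma_t=5e6, sigma_vr=1.0, sigma_b=10.0 / 60.0):
    """Draw n_stars projected stream stars about the satellite's orbit.

    orbit(t) -> (l, b, vr): the satellite's Galactic longitude [deg],
    latitude [deg] and radial velocity [km/s] at time t [yr] (vectorized).
    sigma_t [yr], sigma_vr [km/s], sigma_b [deg] are the Gaussian widths.
    """
    t = t_now + rng.normal(0.0, sigma_t, n_stars)   # 1) spread in orbital phase
    l, b, vr = orbit(t)                             # center-of-mass coordinates
    b = b + rng.normal(0.0, sigma_b, n_stars)       # 3) angular offsets
    vr = vr + rng.normal(0.0, sigma_vr, n_stars)    # 2) line-of-sight offsets
    return l, b, vr
```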
We choose mass loss rates in the range
$$\frac{\dot{M}}{M}\equiv \lambda =(0.1\text{–}1.0)\times 10^{-10}\mathrm{yr}^{-1}.$$
(2)
The mass loss rates imply that clusters have lost roughly $`40`$–$`60\%`$ of their initial mass for fixed $`\lambda `$. The average rate is consistent with (and even somewhat lower than) recent calculations of the evolution of relatively low mass clusters, $`M_c\lesssim 10^5M_{\odot }`$ (Gnedin & Ostriker 1997; Murali & Weinberg 1997; Johnston, Sigurdsson & Hernquist 1998). For convenience, we define the parameter $`\lambda _{10}`$ to have units $`10^{-10}\mathrm{yr}^{-1}`$ so that $`0.1\lesssim \lambda _{10}\lesssim 1`$. We assume a mean stellar mass $`m=0.5M_{\odot }`$ for the cluster to define the number of stars in the stream. The importance of the stellar content of the stream is discussed below.
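For orientation, a back-of-the-envelope count of the stream population implied by these rates (the cluster mass and elapsed time below are assumed, Pal 5-like values):

```python
M_c = 1.0e4      # cluster mass [M_sun], assumed
m = 0.5          # mean stellar mass [M_sun]
lam = 0.5e-10    # lambda [1/yr], i.e. lambda_10 = 0.5
T = 2.5e9        # elapsed time [yr]

n_stream = lam * T * M_c / m    # valid while lambda*T << 1
print(f"~{n_stream:.0f} stream stars")   # ~2500 for these assumed values
```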
## 3 Using tidal streams for mass and potential determinations
We develop two methods which can be used to determine the mass and potential of the Galaxy using tidal streams. The first method assumes that a tidal stream very nearly follows a streamline in the Galactic potential, which is a very good approximation for a globular cluster. With complete phase space data for stream stars, we can determine the Galactic mass and potential directly from the observations without any modeling. This approach is a generalization of rotation-curve mass measurements to non-circular orbits. The upcoming space astrometry missions, SIM and GAIA, promise to give phase-space coordinates for nearby clusters ($`d\lesssim 5\mathrm{kpc}`$) and their tidal streams which will be sufficiently accurate to use this method effectively.
The second method involves fitting a model stream curve to stream data where we know the full position and velocity information for the cluster but only know projected positions and radial velocities for stream stars. Here we are motivated by the possibility of combining the results of various plate measurement programs (e.g. Dinescu et al 1999) and the Hipparcos results on globular cluster proper motions (Odenkirchen et al 1997) with current ground-based observational capabilities. We discuss the possibilities in more detail below.
### 3.1 The streamline approximation
Tidal streams approximately trace the orbit of their parent satellite in the Galaxy. Lynden-Bell (1982), Kuhn (1993) and Johnston et al (1998) point out that, with full phase space information, we can approximately measure the potential difference along the stream by measuring the kinetic energy difference between different positions, since the force field is conservative. For globular clusters, the streamline approximation should be quite accurate since the energy spread in the stream is quite small.
For completeness, we present the basic derivation of the equation of energy conservation (Bernoulli’s equation) for a streamline starting with the equation of motion. The acceleration of a particle in a gravitational potential is given by Newton’s law:
$$\frac{d𝐯}{dt}=-\nabla \mathrm{\Phi }(𝐫).$$
(3)
Because the orbit is parameterized by time, this equation provides little information about the potential of the Galaxy. However, rewriting it in Lagrangian form (assuming a static potential),
$$\frac{d𝐯}{dt}=\frac{\partial 𝐯}{\partial 𝐫}\frac{d𝐫}{dt}=\frac{\partial 𝐯}{\partial 𝐫}𝐯=-\nabla \mathrm{\Phi }(𝐫),$$
(4)
we parameterize the motion in terms of the path of the particle. The acceleration of the particle may therefore be determined from the velocity history of the particle along the path.
While the path of an individual star is unknown, a tidal stream traces the path of a hypothetical particle in the Galactic potential. Since we can, in principle, determine positions and velocities of material along the stream, we can calculate the acceleration at a point using equation (4).
Equation (4) is equivalent to the equation of motion for an element of an incompressible fluid in a static potential (Batchelor 1967). In this context, we may view a tidal stream as the manifestation of a streamline. By observing the velocity field along the streamline, we can determine the gravitational potential which produces it.
With no symmetry assumption, we can obtain general expressions which define the potential along the path. Taking the line integral of equation (4) along the path, we determine the potential difference between two points along the curve
$$\mathrm{\Phi }(𝐫_1)-\mathrm{\Phi }(𝐫_0)\equiv \mathrm{\Delta }\mathrm{\Phi }_{01}=-\int _{𝐫_0}^{𝐫_1}d𝐫\cdot \frac{\partial 𝐯}{\partial 𝐫}\cdot 𝐯.$$
(5)
Since the velocity field is irrotational (except for the possibility of contamination by binaries and spin in the satellite itself), this can be written
$$\mathrm{\Delta }\mathrm{\Phi }_{01}=-\int d𝐫\cdot \frac{d(v^2/2)}{d𝐫}=\frac{1}{2}[v^2(𝐫_0)-v^2(𝐫_1)].$$
(6)
This is simply a statement of energy conservation (Bernoulli’s equation) but makes the point that a measurement of the difference in the kinetic energy of two points along the stream is equivalent to a measurement of the potential difference or work done between the two points. Thus, if we can measure an ensemble of tidal streams, we will be able to reconstruct the Galactic potential in a fairly unbiased manner.
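In practice, the estimator of equation (6) is trivial to apply once full space velocities are available; a minimal sketch (velocities assumed to be in km/s, so the potential difference comes out in (km/s)²):

```python
import numpy as np

def delta_phi(v0, v1):
    """Phi(r1) - Phi(r0) from Eq. (6), given full space velocities [km/s]
    of stream material at the two positions; result in (km/s)^2."""
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    return 0.5 * (np.dot(v0, v0) - np.dot(v1, v1))

# e.g. delta_phi([200.0, 50.0, 0.0], [180.0, 40.0, 0.0]) -> 4250.0 (km/s)^2
```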
Now, assuming a spherically symmetric potential, we easily obtain the mass from the velocity gradient:
$$M(r)=-\frac{r^2}{G}𝐯\cdot \frac{\partial 𝐯}{\partial 𝐫}\cdot \widehat{r}.$$
(7)
For measurement purposes, the equation of energy conservation provides a more convenient way to estimate the spherical mass. Rewriting $`𝐯\cdot \partial 𝐯/\partial 𝐫`$ assuming zero vorticity, we find
$$M(r)=-\frac{r^2}{G}\frac{\partial (v^2/2)}{\partial 𝐫}\cdot \widehat{r}.$$
(8)
For a circular orbit, we recover the usual formula for rotation curve measurements:
$$M(r)=\frac{rv_c^2}{G},$$
(9)
where $`v_c`$ is the circular rotation velocity.
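A minimal numerical sketch of the estimator of equation (8), assuming galactocentric radii and speeds of stream stars are in hand (a straight-line fit to $`v^2/2`$ versus $`r`$ suffices when the run is nearly linear, as in the example of §4.1; in general one would fit a smooth curve):

```python
import numpy as np

G = 4.301e-6   # gravitational constant [kpc (km/s)^2 / M_sun]

def mass_from_stream(r, v2):
    """M(r) = -(r^2/G) d(v^2/2)/dr, Eq. (8).

    r  : galactocentric radii of stream stars [kpc]
    v2 : squared speeds [km^2/s^2]
    Returns the mass [M_sun] interior to the median radius of the sample.
    """
    slope = np.polyfit(np.asarray(r), 0.5 * np.asarray(v2), 1)[0]
    return -np.median(r) ** 2 * slope / G
```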
By considering only the radial force component in equation (7), we have ignored the information provided in the other directions. In general, accurate determination of the velocity field as a function of 3-dimensional position along the stream directly gives the 3 components of the gravitational acceleration. This, in turn, provides information on the asphericity of the mass distribution. In fact, the non-radial components of the acceleration can be determined most accurately since they only depend on differences of angular coordinates, which are determined very accurately. For complete generality, we can simply take the divergence of equation (4) and obtain a dynamical form of Gauss’ law:
$$\frac{𝐯}{𝐫}𝐯=\frac{1}{2}^2v^2=^2\mathrm{\Phi }(𝐫)=4\pi G\rho .$$
(10)
This, of course, has the disadvantage that second derivatives are required so that the data must be very accurate. Thus it does not appear to be of immediate practical use.
### 3.2 Fitting the projected stream
Given the difficulty of obtaining a complete set of data, we suggest a statistical approach to local mass determinations. What we describe is a procedure for fitting a projected tidal stream to the observational data using a $`\chi ^2`$ estimator. This is similar to the method described by Johnston et al (1998) but only requires projected positions and radial velocities for material along the stream and does not depend on the structure of the satellite.
Suppose we have a satellite with determined position and velocity, and we observe an associated stellar stream for which we can measure only projected positions and radial velocities of individual stars. Given a model mass distribution defined by some set of parameters $`\theta `$ and the satellite position and velocity, we can integrate the satellite trajectory forward and backward in phase for each set $`\theta `$ and find the trajectory which best fits the stream data. It is straightforward to use a $`\chi ^2`$ estimator, choosing Galactic longitude $`\ell `$ as the independent phase variable and Galactic latitude $`b`$ and radial velocity $`v_r`$ as dependent variables:
$$\chi ^2=\sum _i^N\left(\frac{b_i-b(\ell _i|\theta )}{\sigma _{b,i}}\right)^2+\sum _i^N\left(\frac{v_{r,i}-v_r(\ell _i|\theta )}{\sigma _{v,i}}\right)^2;$$
(11)
as usual, this defines the logarithm of the joint probability (strictly speaking, the joint probability density of measuring $`b_i`$ and $`v_{r,i}`$ at $`\ell _i`$, given the model). Note that it is straightforward to generalize this procedure when more information is available.
The dispersions $`\sigma _{b,i}`$ and $`\sigma _{v,i}`$ include contributions from the width and velocity dispersion in the stream and uncertainties in the observations. Observational uncertainties are negligibly small for $`\sigma _{b,i}`$ but potentially dominate $`\sigma _{v,i}`$. In addition to the physical dispersion in position and velocity, the angular length of the observed stream and the number of stars observed strongly determine the quality of the mass determination. We consider the role of these factors below.
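A minimal sketch of the estimator of equation (11); the routine `model` is a hypothetical placeholder which must return the model stream's latitude and radial velocity at the observed longitudes, e.g. by interpolating a trajectory integrated in the trial potential:

```python
import numpy as np

def chi2(theta, l, b, vr, sigma_b, sigma_v, model):
    """Eq. (11) for stream data (l, b, vr) and trial parameters theta.

    model(l, theta) -> (b_model, vr_model) at the longitudes l, from an
    orbit integrated in the mass model defined by theta (here (M_0, alpha)).
    sigma_b, sigma_v combine the intrinsic stream dispersions with the
    observational uncertainties, as discussed above.
    """
    b_mod, vr_mod = model(l, theta)
    return (np.sum(((b - b_mod) / sigma_b) ** 2)
            + np.sum(((vr - vr_mod) / sigma_v) ** 2))
```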
#### 3.2.1 Including proper motion uncertainties
In presenting the $`\chi ^2`$-estimator, we have implicitly assumed that all available data are determined to high precision. In general, this may not be the case: indeed, we currently face fairly broad uncertainties in proper motion measurements for individual clusters. It is, nevertheless, straightforward to include measurement uncertainties in the curve-fitting procedure from a Bayesian point of view.
For the specific example of proper motion uncertainties, we can add two parameters to our model: namely $`\mu _\alpha `$ and $`\mu _\delta `$, the proper motions measured with respect to right ascension and declination, respectively. Since we have estimates and uncertainties for these two quantities, by Bayes’ theorem (e.g. Martin 1971) the probability of the model given the data becomes
$$P(\theta ^{\prime })\propto \underset{i}{\prod }P_i(\theta ^{\prime })P(\mu _\alpha )P(\mu _\delta );$$
(12)
in other words, the relative probability of any set of parameters $`\theta ^{\prime }`$, given the data, is the joint probability of the data given the model, $`\prod _iP_i(\theta ^{\prime })=\mathrm{exp}(-\chi ^2/2)`$, multiplied by the prior probabilities of the proper motions, $`P(\mu )=\mathrm{exp}[-(\mu -\mu _0)^2/\sigma _\mu ^2]`$, where $`\mu `$ denotes either $`\mu _\alpha `$ or $`\mu _\delta `$ and $`\mu _0`$ the respective mean. To return to the original set of parameters $`\theta `$, we project over $`\mu _\alpha `$ and $`\mu _\delta `$:
$$P(\theta )\propto \int d\mu _\alpha \int d\mu _\delta P(\theta ^{\prime }).$$
(13)
We examine the influence of proper motion uncertainties in §4.2.1.
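A minimal sketch of this marginalization, implemented as a brute-force sum over a proper-motion grid (the routine `chi2_fit`, which must re-fit the projected orbit for each trial proper motion, is a hypothetical placeholder):

```python
import numpy as np

def marginal_prob(theta, chi2_fit, mu_a0, mu_d0, sig_a, sig_d, n=21, nsig=3.0):
    """Eqs. (12)-(13): weight exp(-chi^2/2) by the proper-motion priors and
    sum over a grid of (mu_alpha, mu_delta) around the measured values."""
    p = 0.0
    for ma in np.linspace(mu_a0 - nsig * sig_a, mu_a0 + nsig * sig_a, n):
        for md in np.linspace(mu_d0 - nsig * sig_d, mu_d0 + nsig * sig_d, n):
            prior = np.exp(-((ma - mu_a0) / sig_a) ** 2
                           - ((md - mu_d0) / sig_d) ** 2)
            p += np.exp(-0.5 * chi2_fit(theta, ma, md)) * prior
    return p   # proportional to P(theta); normalize over the theta grid
```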
## 4 Mass estimates in spherical potentials
For illustrative purposes, it is simplest to adopt spherical, scale-free mass models for the Galaxy:
$$M(<r)=M_0\left(\frac{r}{r_0}\right)^\alpha ,$$
(14)
where $`M_0`$ is the mass interior to some assumed radius $`r_0`$ and $`\alpha `$ gives the slope of the cumulative mass distribution. The model has two parameters $`M_0`$ and $`\alpha `$.
In the discussion, we point out several candidate clusters to which we can apply the spherical mass estimator discussed here. Among these is Pal 5: for definiteness in specifying satellite initial conditions, we adopt the best estimates of its space motion. Pal 5 is a low mass globular cluster, $`M_c\sim 10^4M_{\odot }`$, at high Galactic latitude and distance $`d=20\mathrm{kpc}`$, on the far side of the Galactic center, with a fairly elongated orbit. There are two conflicting proper motion determinations (Cudworth & Majewski 1993; Scholz et al 1998) but both indicate that this cluster has just passed apogalacticon and is not associated with the Sgr dwarf galaxy, as had been suggested by Ibata et al (1997).
Here we adopt the more recent proper motion $`(\mu _\alpha ,\mu _\delta )=(-1.0\pm 0.3,-2.7\pm 0.4)\mathrm{mas}/\mathrm{yr}`$ and 3-dimensional velocity determined by Scholz et al (1998; see their Table 2) and integrate the orbit in a spherical potential as defined above with $`M_0=2\times 10^{11}M_{\odot }`$, $`r_0=20\mathrm{kpc}`$ and $`\alpha =1`$ (singular isothermal sphere). The orbital period is $`2.4\times 10^8\mathrm{yr}`$. We convert to galactocentric quantities using the formulae given in Johnson & Soderblom (1987), assuming a solar distance $`R_{\odot }=8.5\mathrm{kpc}`$, a rotation velocity of $`220\mathrm{km}/\mathrm{s}`$ and the ‘basic solar motion’ (Mihalas & Binney 1982). Although the Galactic mass distribution along the entire orbit is not spherical, we are only interested in the prospects for mass determination from observations on relatively small angular scales, $`\lesssim 10^o`$. The stream material over this angular range remains high above the Galactic plane and extends roughly $`5\mathrm{kpc}`$ in length, so that deviations from spherical symmetry will be small.
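A minimal sketch of this orbit integration in the adopted scale-free model (the initial conditions below are illustrative placeholders, not the actual galactocentric phase-space coordinates derived for Pal 5):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 4.301e-6                         # [kpc (km/s)^2 / M_sun]
KMS_TO_KPC_PER_YR = 1.0227e-9        # 1 km/s expressed in kpc/yr
M0, r0, alpha = 2.0e11, 20.0, 1.0    # adopted model: singular isothermal sphere

def rhs(t, w):
    """w = (x, y, z [kpc], vx, vy, vz [km/s]); t in yr."""
    r, v = w[:3], w[3:]
    rr = np.linalg.norm(r)
    acc = -G * M0 * (rr / r0) ** alpha / rr ** 3 * r   # -G M(<r)/r^2 r_hat
    return np.concatenate([v, acc]) * KMS_TO_KPC_PER_YR

w0 = np.array([7.6, 0.0, 16.5, -50.0, 100.0, 0.0])     # illustrative only
sol = solve_ivp(rhs, (0.0, 2.4e8), w0, rtol=1e-9, dense_output=True)
```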
### 4.1 With full phase space information
Here we generate an example stream using an N-body simulation to see if the mass is recovered correctly. Figure 1 shows that we do obtain the correct mass. In this case, the satellite orbit was started 5 radial periods in the past (approximately $`1.2\mathrm{Gyr}`$) so that there have been 5 perigalactic passages. Most of the mass loss occurred in the first passage, so that material has had time to drift away from the satellite, spreading over an angular extent $`|\mathrm{\Delta }\mathrm{\Theta }|<3^o`$. This is advantageous because, near the satellite, the stream stars have more complicated dynamics since they are still far from their asymptotic energy distribution; mass determinations using particles near the satellite will be biased. Distances greater than twice the tidal radius should be adequate to ensure the proper behavior.
In general, measuring the potential difference between two points along a stream is easiest because we need only compute the kinetic energy difference. The mass determination is somewhat more difficult because we must measure velocities at neighboring points and then take differences between these two points to evaluate the derivative. Clearly this can be sensitive to noise. Here, $`v^2`$ happens to have an approximately linear dependence on $`r`$, so determining the slope is easy. In general, the relationship will be more complicated since $`\mathrm{\Delta }v^2\propto \mathrm{\Delta }\mathrm{\Phi }`$. One approach would be to fit a smooth curve to the data and use that to extract physical quantities.
Figure 1 shows that at the distance of Pal 5, the uncertainties in distance measurements for SIM and GAIA will dominate, since $`\sigma _d\approx 1.6(d/20\mathrm{kpc})^2\mathrm{kpc}`$. Thus, although absolute distances are known with 10% accuracy, the relative distances of stream stars can be highly uncertain. Since the uncertainties scale quadratically with distance, the uncertainty in the relative distances of stream stars (for fixed angular size) improves linearly as the cluster distance decreases. This suggests that the method can be best used for fairly nearby clusters, $`d\lesssim 5\mathrm{kpc}`$, to determine the 3 components of the gravitational acceleration near the disk.
### 4.2 In projection
For convenience, we use the Gaussian approximation described above to generate realizations of streams in projection. As a simple check, we compare the results of an N-body simulation with a stream generated using this procedure. The simulation was started 3 radial periods in the past, so that the satellite has had 3 perigalactic passages. The realization has phase dispersion $`\sigma _t=5\times 10^6\mathrm{yr}`$, line-of-sight radial velocity dispersion $`\sigma _{V_r}=1\mathrm{km}\mathrm{s}^{-1}`$ and latitude dispersion $`\sigma _b=10^{\prime }`$. The phase dispersion $`\sigma _t`$ is chosen simply by inspection of the N-body simulation. The others are defined by the characteristics of Pal 5. Figure 2 shows that the agreement is reasonable.
Our view of a tidal stream is determined by the mass loss history of the satellite as well as the angular extent of the observations. If little mass loss has occurred recently, then material will be well mixed and more difficult to detect near the satellite. As an example, Figure 3 shows the observed characteristics of a stream with 56 fully phase-mixed stars in an angular range $`|\mathrm{\Delta }\mathrm{\Theta }|<10^o`$.
This provides somewhat of a worst-case scenario because there has been no recent mass loss and because there are very few stars in the stream. Nevertheless, although the orbits have indistinguishable spatial projections for all values of $`M_0`$ on this scale, the radial velocities provide a strong discriminant. This is not surprising since, physically, the measured mass depends sensitively on the velocities: $`M\propto v^2`$.
To quantify this statistically, we generate fits to streams using the orbit estimator, equation (11). Figure 4 shows the confidence intervals in the $`\alpha `$–$`M_0`$ parameter space for the indicated particle number and $`\mathrm{\Delta }\mathrm{\Theta }`$ with fully phase-mixed debris. These results suggest that under extremely poor conditions the mass determination is highly uncertain; under somewhat better conditions it is possible to constrain the mass and mass distribution to within several percent.
Assuming fully phase-mixed debris is equivalent to assuming no recent mass loss. As another example, we assume a low rate of mass loss $`\lambda _{10}=0.125`$ over the last 10 perigalactic passages (roughly $`2.5\mathrm{Gyr}`$) so that debris still clusters near the satellite and has a smaller angular extent. Figure 5 shows the appearance of the stream in projection at the present time over an interval $`|\mathrm{\Delta }\mathrm{\Theta }|<10^o`$.
Figure 6 shows that a fit to this realization gives fairly tight confidence surfaces, but only somewhat tighter than those of the phase-mixed, $`\lambda _{10}=0.5`$, $`|\mathrm{\Delta }\mathrm{\Theta }|<5^o`$ fit given in Figure 4 (lower left). Although there are 4 times as many stars in this fit, most of them are clumped near the satellite: this nullifies the leverage of the additional stars. The angular extent provides the most leverage in fitting the projected orbit.
#### 4.2.1 The effect of proper motion uncertainties
For illustrative purposes, we have so far assumed that available measurements are perfect: i.e. that there are no uncertainties in these quantities. To be more realistic, we can apply the Bayesian approach presented in §3.2.1 to account for measurement uncertainties. At present, proper motion uncertainties dominate, especially for Pal 5: here we consider their effect on the estimated mass.
Table 1 shows how confidences in the estimated value of $`M_0`$, denoted $`\overline{M}`$, change when we include proper motion uncertainties in the fits. Proper motion uncertainties are defined relative to the mean measured proper motion; “actual” refers to the uncertainties given by Scholz et al. (1998). The error bars $`\sigma _{\overline{M}}`$ define 95% confidence intervals about $`\overline{M}`$.
As the table shows, 1% proper motion uncertainties have little effect on the fit: the mass estimate is unbiased and has tight, symmetric confidence levels which are Gaussian. However, as we decrease the proper motion accuracy, the fits degrade and estimates of $`M_0`$ appear to become biased to lower values and confidences become highly skewed. Qualitatively, the biasing results because, at lower $`M_0`$, there is a broad range of $`(\mu _\alpha ,\mu _\delta )`$ within the uncertainties which give acceptable fits; at the true value, only a small range gives an acceptable fit. Therefore, in projection, it appears that the lower value of $`M_0`$ is more likely. The surfaces cut off rapidly because the range of proper motions giving acceptable fits moves far outside of the range of likely proper motions determined from the observations.
The situation is rather bad for proper motion uncertainties comparable to those which currently plague Pal 5: $`\overline{M}`$ is quite far from its true value. By extending the angular length of the observations, the situation improves considerably because differences in orbit become more pronounced over larger angles on the sky. Ultimately, however, the exact nature of this behavior depends on the particular cluster under study and must be considered on a case-by-case basis.
## 5 Discussion and conclusions
We have presented two methods for measuring the mass and potential of the Galaxy using tidal streams from globular clusters. The method with full phase space information is closely related to previous work by Lynden-Bell (1982), Kuhn (1993) and Johnston et al. (1998) and clarifies the idea of using tidal streams as potentiometers. It is both a generalization of rotation curve measurements to non-circular orbits and a dynamical statement of Gauss’ law. In principle, direct measurements of the local gravitational acceleration and density field can be obtained. Of course, it should be emphasized that extremely accurate data are required so current and near-future observations will provide only a limited sample of objects to use with this method.
Analysis of the orbit-fitting method indicates that important preliminary information can be obtained from incomplete phase-space information using current observational capabilities. However, we must be careful to include measurement uncertainties in our analysis. In particular, excessive proper motion uncertainties can introduce biases into estimates. In a broader context, we face the issue of bias in any parametric analysis of the Galactic mass distribution, regardless of measurement error: we must be careful not to ignore systematic biases introduced by adopting any particular model for the Galaxy.
The results presented above suggest that several good candidates in the globular cluster population are available for halo mass determinations at intermediate distances, $`R\approx 20\mathrm{kpc}`$. These include Pal 5, as mentioned above; NGC 4147, NGC 5024 and NGC 5466 from the Hipparcos sample; and possibly other clusters from ongoing plate measurement programs (e.g. Dinescu et al 1999). These clusters may prove particularly interesting because of their large distance from the Galactic plane: they could provide information on the mass distribution in relatively uncharted territory. Moreover, they provide independent measurements of roughly the same region of the Galaxy because of their spatial proximity.
The Hipparcos sample contains 15 clusters in all. The clusters which we have not mentioned lie closer to the disk and Sun and, therefore, have much better proper motion determinations and are easier to observe. In continuing the work presented here, we will develop detailed models of tidal streams for these clusters in realistic, axisymmetric models of the Milky Way (Kuijken & Dubinski 1995). These clusters will provide excellent candidates for probing the mass distribution very close to the disk. In particular, the astrometric accuracy of the SIM and GAIA satellites within $`20\mathrm{kpc}`$ should allow precise distance determinations to clusters and their streams.
There are clearly several uncertainties in the analysis presented above. The principal uncertainty lies in determining cluster mass loss rates, which are difficult to model precisely. A related uncertainty is the stellar content of the stream. On the one hand, low mass clusters provide good candidates because they tend to lose fair amounts of mass through relaxation and tidal heating. However, a large fractional mass loss from a low mass cluster is still a relatively small number of stars. Moreover, the stellar content of the stream will play a strong role in its detectability. Above we adopted $`m=0.5M_{\odot }`$, which provides a fair number of stars in the stream. However, at $`20\mathrm{kpc}`$, these stars would have $`I\approx 22`$. It would be extremely difficult– if not impossible– to obtain $`\mathrm{km}\mathrm{s}^{-1}`$ radial velocity accuracies for such faint stars. The best example that we know of is Vogt et al.’s (1995) radial velocity study of $`I=18`$–$`19`$ giants in Leo II with Keck, which required $`\sim 10`$ minute exposures. If, instead, the present mass function has flattened substantially through dynamical evolution (e.g. Pal 5; Smith et al. 1986), then many of the stream stars near the cluster may be bright, higher mass stars. This reduces the number of stream stars for our adopted mass loss rates; but, if the mass loss rate is somewhat higher, the situation may prove ideal.
Proper motions for these distant clusters also remain fairly uncertain in spite of the success of Hipparcos. There is presumably no hope of improving stellar proper motion measurements before the next generation of space-based, astrometric satellites. However, we note that, in the past, there has been some effort to detect water maser emission from giants in globular clusters (Frail & Beasley 1994). Although unsuccessful, the possibility of improving proper motion determinations using VLBI warrants renewed searches with deeper detection limits.
The discussion and analysis presented above suggest that important problems of the structure of the inner Galaxy may be fruitfully addressed by searching for tidal streams from relatively nearby globular clusters. Moreover, with upcoming missions such as SIM (scheduled for 2005) and GAIA (scheduled for 2009), it is essential to establish groundwork for the larger scale studies advocated by Johnston et al (1998) and Zhao et al (1998). We expect that, with present observational capabilities, important progress can be made by focusing attention on the clusters discussed herein. With these missions still relatively far in the future, the progress made now can help resolve important questions and serve as an invaluable guide.
We thank Bill van Altena and the referee, Kathryn Johnston, for helpful discussion and acknowledge support from NSERC and the Fund for Astrophysical Research.
# Local Electronic Structure and High Temperature Superconductivity.
## I Introduction
High temperature superconductivityBM is obtained by adding charge carriers into a highly-correlated antiferromagnetic insulating state. Despite the fact that there is a large “Fermi surface” containing all of the pre-existing holes and the doped holes,arpes it is impossible to understand the behavior of the system and, in particular, the origin of high temperature superconductivity unless the nature of the doped-insulating state is incorporated into the theory. In particular, the Fermi liquid theory of the normal state and the BCS theory of the superconducting state, which are so successful for conventional metals, were not designed for doped insulators, and they do not apply to the high temperature superconductors. (Section II.) Consequently it is necessary to develop a new mechanism and many-body theory of high temperature superconductivity.
In our view, the physics of the insulator and the doped insulator, including antiferromagnetism and superconductivity, is driven by a lowering of the zero-point kinetic energy.pwa This is well known for the antiferromagnetic state but, in addition, the motion of a single hole in an antiferromagnet is frustrated because it stirs up the spins and creates strings of ferromagnetic bonds. Consequently, a finite density of holes forms self-organized structures, designed to lower the zero-point kinetic energy. This is accomplished in three stages: a) the formation of charge inhomogeneity (stripes), b) the creation of local spin pairs, and c) the establishment of a phase-coherent high-temperature superconducting state. The zero-point kinetic energy is lowered along a stripe in the first stage, and perpendicular to the stripe in the second and third stages.
Static or dynamical charge inhomogeneity,zaan ; losala ; ute ; chayes ; erica1 or “topological doping” topo is quite common for doped correlated insulators. In $`d`$ dimensions, the charge forms one-dimensional arrays of $`(d-1)`$-dimensional structures that are also antiphase domain walls for the background spins. In $`d=1`$ there is an array of charge solitons,review whereas, in $`d=2`$, there are linear “rivers of charge” (stripes) threading through the antiferromagnetic background.losala ; ute ; chayes In $`d=3`$ there are arrays of charged planeschayes ; erica1 , as observed in the manganates.mang These self-organized structures, which may be fluctuating or form ordered or glass phases, are a consequence of the tendency of the correlated antiferromagnet to expel the doped holes, and they lead to a lowering of the zero-point kinetic energy. The theoretical arguments that lead to this picture will be summarized in Sec. III.
It is clear that any new many-body theory must be based on the local electronic structure and there are strong indications of a link to high temperature superconductivity. First of all, in LSCO and YBCO the value of T<sub>c</sub> is inversely proportional to the spacing between stripes in underdoped and optimally doped materials.yamada ; bb Secondly, $`\mu `$SR experiments weidinger ; niedermeyer have found evidence for a phase in which superconductivity coexists with a cluster spin glass. In YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>, the spin freezing temperature goes to zero when the superconducting T<sub>c</sub> is more than 50K. It is difficult to see how these two phases could coexist unless there is a glass of metallic stripes dividing the CuO<sub>2</sub> planes into randomly-coupled antiferromagnetic regions. A new mechanism and many-body theory of superconductivity, based on local charge inhomogeneity has been developed,badmetals ; nature ; ekz ; erica2 and there is substantial experimental support for the overall picture, as described in subsequent sections.
## II BCS Many-Body Theory
There are several reasons why the Fermi liquid theory of the normal state and the BCS theory of the superconducting state do not apply to the high temperature superconductors:
1) In BCS theory, the superfluid density $`n_s`$ is given by all electrons in the Fermi sea, whereas, in the high temperature superconductors, $`n_s`$ is proportional to the density of doped holes.
2) The outstanding success of BCS theory stems from the existence of sharp quasiparticles. However, an analysis of the temperature dependence of the resistivity shows that the quasiparticle concept does not apply to many synthetic metals, including the high temperature superconductors.badmetals ; pphmf This idea is supported by angular resolved photoemission spectroscopy (ARPES) which shows no sign of a normal-state quasiparticle peak near the points $`(0,\pm \pi )`$ and $`(\pm \pi ,0)`$ where high temperature superconductivity originates.norman ; darpes
3) If there are no quasiparticles, there is no Fermi surface in the usual sense of a discontinuity in the occupation number $`n_\stackrel{}{k}`$ at $`T=0`$. This undermines the very foundation of the BCS mean-field theory, which is a Fermi surface instability.
4) In BCS theory, pairing and phase coherence take place at the same temperature T<sub>c</sub>, and a good estimate of T<sub>c</sub> is given by $`\mathrm{\Delta }_0/2`$, where $`\mathrm{\Delta }_0`$ is the energy gap measured at zero temperature. However, this criterion does not give good estimates of T<sub>c</sub> for the high temperature superconductors, especially for underdoped materials: $`\mathrm{\Delta }_0/2T_c`$ varies with doping and can be much greater than one. Rather, the value of T<sub>c</sub> is determined by the onset of phase coherencenature ; erica2 and is governed by the zero-temperature value of the “phase stiffness”, $`V_0\equiv (\mathrm{\hbar }c)^2a/16\pi (e\lambda (0))^2`$, which sets the energy scale for the spatial variation of the superconducting phase (a rough numerical sketch of this scale is given after this list). Here $`\lambda (T)`$ is the penetration depth and $`a`$ is a microscopic length scale that depends on the dimensionality of the material.nature
5) A major problem for any mechanism of high temperature superconductivity is how to achieve a high pairing scale in the presence of the repulsive Coulomb interaction, especially in a doped Mott insulator in which there is poor screening. In the high temperature superconductors, the coherence length is no more than a few lattice spacings, so neither retardation nor a long-range attractive interaction is effective in overcoming the bare Coulomb repulsion. Nevertheless ARPES darpes shows that the major component of the gap function is proportional to $`\mathrm{cos}k_x-\mathrm{cos}k_y`$. It follows that, in real space, the gap function and hence, in BCS theory, the net pairing force, is a maximum for holes separated by one lattice spacing, where the bare Coulomb interaction is very large ($`\sim `$ 0.5 eV, allowing for atomic polarization). It is not easy to find a source of an attraction that is strong enough to overcome the Coulomb force at short distances and achieve a high transition temperature in a natural way by the usual Cooper pairing.
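As promised in point 4, a rough numerical sketch of the phase-stiffness scale $`V_0`$ (in Gaussian units; the values of $`a`$ and $`\lambda (0)`$ below are illustrative assumptions, not fits to a particular material; the result indicates only the order of magnitude of the phase-ordering scale):

```python
import numpy as np

hbar_c = 1973.0    # hbar*c [eV Angstrom]
e2 = 14.4          # e^2 [eV Angstrom], Gaussian units
a = 15.0           # microscopic length scale [Angstrom], assumed
lam0 = 1400.0      # penetration depth lambda(0) [Angstrom], assumed

V0 = hbar_c ** 2 * a / (16.0 * np.pi * e2 * lam0 ** 2)   # [eV]
print(f"V0 ~ {V0 * 1e3:.0f} meV ~ {V0 / 8.617e-5:.0f} K")   # order 10^2 K
```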
Clearly there is a need for a new mechanism and many-body theory to explain high temperature superconductivity.
## III Topological Doping
It is well known that the motion of a single hole in an antiferromagnet is frustrated by the creation of strings of broken bonds.lev This idea is supported by ARPES, which found that the bandwidth of a single hole is controlled by the exchange integral $`J`$, rather than the hopping amplitude $`t`$.wells
When there is a finite density of holes, the system strives to relieve this frustration and lower its kinetic energy. If the holes were neutral the system would separate into a hole-free antiferromagnetic phase and a hole-rich (magnetically disordered or possibly ferromagnetic) phase, in which the holes are mobile and the cost in exchange energy is less than the gain in kinetic energy.ekl ; marder ; manousakis In practice the holes are charged, but macroscopic phase separation can take place whenever the dopants are mobile, as in oxygen-doped and photo-doped materials. We have reviewed the experimental evidence for this behavior elsewhere.physicaC ; losala More recent experiments exploring oxygen doping in detail have been carried out by Wells et al..wells2
When the dopants are immobile, charged holes can do no more than phase separate locally, by forming arrays of linear metallic stripes losala ; ute ; chayes which are “topological” in nature, since they are antiphase domain walls for the antiferromagnetic background spins.topo ; landau This structure lowers the kinetic energy along the stripe but makes it more difficult, if anything, for a single hole to move perpendicular to the stripe direction. A hop transverse to a stripe takes the hole far above Fermi energy.ekz However, as we shall see, pairs of holes can move more easily transverse to a stripe, and they lower their kinetic energy first by forming spin pairs and, at a lower temperature, by making the system a high temperature superconductor.
It has been argued that charge stripes are energetically impossible because the driving energies are unable to overcome the Coulomb repulsion.phillips However, charge modulation is inevitable if the short-range interactions give a negative compressibility, $`\kappa `$, as they do between the spinodals of a system that, otherwise, would undergo phase separation. A general expression for the Debye screening length is $`\lambda _D=\sqrt{ϵ/4\pi e^2n^2\kappa }`$, where $`ϵ`$ is the dielectric constant, $`e`$ is the charge and $`n`$ is the density. When $`\kappa <0`$, $`\lambda _D`$ is imaginary, which indicates that the ground state is unstable to a density modulation.pol ; neto Of course it requires a more detailed microscopic calculation to obtain the physical length scale.
The existence of charge and spin stripes in the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> family was established in an elegant series of experiments on La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> by Tranquada and co-workers.jtran In a Landau theory of the phase transition,landau the spin order parameter $`\stackrel{}{S}_\stackrel{}{q}`$ and the charge order parameter $`\rho _\stackrel{}{Q}`$ first couple in third order ($`\stackrel{}{S}_\stackrel{}{q}\stackrel{}{S}_\stackrel{}{q}\rho _\stackrel{}{Q}`$), so the ordering vectors must satisfy $`\stackrel{}{Q}=2\stackrel{}{q}`$ or, in other words, the wavelength of the spin modulation is twice that of the charge modulation. This relation is found to be satisfied experimentally,jtran and it implies that the charge stripes also form antiphase domain walls in the magnetic order, which gives the precise meaning of the concept of topological doping.topo The observation of essentially ordered stripes allowed a study of the evolution of the spin and charge order parameters, which not only provided input into the mechanism of stripe formation by showing that they are charge driven, but also established that inelastic incommensurate magnetic peaks observed previously cheong in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> were produced by fluctuating stripes. Recently, inelastic incommensurate magnetic peaks have been observed in underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> by neutron scattering experiments,mook thereby establishing that YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> and the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> family have a common spin structure.
By now, the prediction of metallic stripeslosala ; ute has been confirmed in all families of materials in which extensive neutron scattering experiments have been performed (LSCO and YBCO). There is growing evidence of similar behavior in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>: preliminary neutron scattering experiments show incommensurate magnetic peaks, and there is ARPES evidenceshenstripe of spectral weight transfer associated with stripes. Also, a calculation of the effects of stripes in ARPES experiments markku produced regions of degenerate states and a flat section of the “Fermi surface” near $`(0,\pm \pi )`$ and $`(\pm \pi ,0)`$, as observed experimentally.shen ; anl ; norman
## IV Spin Pairing
The existence of a cluster spin-glass state for a substantial range of doping in the high temperature superconductorsweidinger ; niedermeyer implies that the stripe dynamics is slow and that the motion of holes along the stripe is much faster than the fluctuation dynamics of the stripe itself. Thus an individual stripe may be regarded as a finite piece of one-dimensional electron gas (1DEG) located in an active environment of the undoped spin regions between the stripes. Then it is appropriate to start out with a discussion of an extended 1DEG in which the singlet pair operator $`P^{\dagger }`$ may be written
$$P^{\dagger }=\psi _{1\uparrow }^{\dagger }\psi _{2\downarrow }^{\dagger }-\psi _{1\downarrow }^{\dagger }\psi _{2\uparrow }^{\dagger },$$
(1)
where $`\psi _{i,\sigma }^{\dagger }`$ creates a right-going ($`i=1`$) or left-going ($`i=2`$) fermion with spin $`\sigma `$. In one dimension, the fermion operators of a 1DEG may be expressed in terms of Bose fields and their conjugate momenta ($`\varphi _c(x),\pi _c(x)`$) and ($`\varphi _s(x),\pi _s(x)`$) corresponding to the charge and spin collective modes respectively. In particular, the pair operator $`P^{\dagger }`$ becomes review
$$P^{\dagger }\propto e^{i\sqrt{2\pi }\theta _c}\mathrm{cos}\left(\sqrt{2\pi }\varphi _s\right),$$
(2)
where $`\partial _x\theta _c\propto \pi _c`$. In other words, there is an operator relation in which the amplitude of the pairing operator depends on the spin fields only and the (superconducting) phase is a property of the charge degrees of freedom. Now, if the system acquires a spin gap, the amplitude $`\mathrm{cos}\left(\sqrt{2\pi }\varphi _s\right)`$ acquires a finite expectation value, and superconductivity will appear when the charge degrees of freedom become phase coherent. Clearly, in one dimension, the temperature at which the spin gap forms is generically distinct from the phase ordering temperature because phase order is destroyed by quantum fluctuations, even at zero temperature.review
We emphasize that we are not dealing with a simple 1DEG, for which a spin gap occurs only if there is an attractive interaction in the spin degrees of freedom.review The 1DEG on the stripe is in contact with an active (spin) environment, and we have shown that pair hopping between the 1DEG and the environment will generate a spin gap in both the stripe and the environment, even for purely repulsive interactions.ekz The same mechanism gives rise to spin gaps in spin ladders. Also, although the theory was worked out for an infinite 1DEG in an active environment, it is known from numerical calculations on finite-size systems that the conclusions are correct for any property that has a length scale small compared to the size of the system. Here, we use the theory only to establish the existence of a spin gap, which corresponds to a length scale of a few lattice spacings. Once a spin gap has been formed, the problem is reduced to the physics of the superconducting phase and its quantum conjugate (the number density), and high temperature superconductivity emerges when phase order is established.badmetals ; nature ; erica2
Experimentally the formation of an amplitude of the order parameter is indicated by a peak in $`(T_1T)^{-1}`$ (where $`T_1`$ is the spin-lattice relaxation time),julien and by ARPES,arpesgap both of which are consistent with spin pairing. A drop in the specific heatloram and a pseudogap in the $`c`$-axis optical conductivity,homes both of which indicate that the charge is involved, occur at a higher temperature in underdoped materials, and are symptoms of the onset of stripe correlations.ekz
## V Phase Coherence
High temperature superconductivity is established when there is coherent motion of a pair from stripe to stripe.ekz This final step in the reduction of the zero-point kinetic energy is equivalent to establishing phase order, and it determines the value of T<sub>c</sub>, especially in underdoped and optimally doped materials.badmetals ; nature ; erica2
It is sometimes argued that thermal phase fluctuations are excluded because the Coulomb interaction moves them up to the plasma frequency, $`\omega _p`$, via the Anderson-Higgs mechanism. This argument, if correct, also would imply that critical phenomena near to T<sub>c</sub> cannot display 3d-XY behavior. An explicit calculation shows why this objection is incorrect. The Fourier transform of the Lagrangian density in the long wavelength limit has the formbadmetals ; wagen
$$\mathcal{L}(\stackrel{}{k},\omega )=\frac{1}{2}\stackrel{}{k}^2a^2\left[V_0(\omega )+V_1\omega ^2ϵ(\omega )\right]\theta ^2(\stackrel{}{k},\omega )$$
(3)
where $`ϵ(\omega )`$ is the dielectric function at $`\stackrel{}{k}=0`$, $`V_1=\mathrm{\hbar }^2a/16\pi e^2`$, and $`V_0(0)\equiv V_0`$ is the classical phase stiffness, defined above. At high frequency
$$ϵ(\omega )=ϵ_{\mathrm{\infty }}-\frac{\omega _p^2}{\omega ^2}.$$
(4)
Note that $`V_0(\omega )`$ vanishes at high frequency and, in general, it does not contribute to the plasma frequency. Then phase fluctuations occur at a frequency $`\omega _p/\sqrt{ϵ_{\mathrm{\infty }}}`$. At the same time, for $`\omega =0`$, the Lagrangian has the form $`\mathcal{L}=\mathrm{const}.\times \stackrel{}{k}^2`$, as required for classical phase fluctuations. In general, it is necessary to do a renormalization group calculation to obtain the zero-frequency limit, and we have shown that, for sufficiently good screening (large dielectric function), the behavior of the system is given by the classical ($`V_0`$) part of the Lagrangian.badmetals The unusual form of the Lagrangian stems from the use of the dual phase-number representation, in which the $`V_0`$ term represents the kinetic energy (pair hopping) and the $`V_1`$ term is the potential energy (Coulomb interaction).
We have solved the following model of classical phase fluctuationsnature ; erica2 :
$$H=-J_{\parallel }\underset{<ij>_{\parallel }}{\sum }\left\{\mathrm{cos}(\theta _{ij})+\delta \mathrm{cos}(2\theta _{ij})\right\}-\underset{<kl>_{\perp }}{\sum }\left\{J_{\perp }^{kl}\mathrm{cos}(\theta _{kl})\right\},$$
(5)
where the first sum is over nearest neighbor sites within each plane, and the second sum is over nearest neighboring planes. The values of the constants $`J_{\parallel }`$ and $`\delta `$ are taken to be isotropic within each plane and the same for every plane. The coupling between planes, $`J_{\perp }^{kl}`$, is different for crystallographically distinct pairs of neighboring planes.
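A minimal sketch of the energy functional of equation (5) on a small periodic lattice (a single interplane coupling is used here for simplicity; the general case assigns distinct $`J_{\perp }^{kl}`$ to crystallographically distinct pairs of planes):

```python
import numpy as np

def xy_energy(theta, J_par=1.0, delta=0.0, J_perp=0.1):
    """Energy of Eq. (5) for phases theta with shape (nx, ny, nz);
    axes 0, 1 run within a plane, axis 2 labels planes; periodic boundaries."""
    E = 0.0
    for ax in (0, 1):                          # in-plane nearest-neighbor bonds
        d = theta - np.roll(theta, -1, axis=ax)
        E -= J_par * np.sum(np.cos(d) + delta * np.cos(2.0 * d))
    d = theta - np.roll(theta, -1, axis=2)     # interplane bonds
    E -= J_perp * np.sum(np.cos(d))
    return E
```

Such an energy function can be fed directly to a Metropolis Monte Carlo loop to locate the phase-ordering transition.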
The results of this final stage of the calculation are in good agreement with experiment. For a reasonable range of parameter values (as constrained by the magnitudes of the penetration depths in different directions) the model gives a good estimate of T<sub>c</sub> and its evolution with doping. It also explainserica2 the temperature dependence of the superfluid density, obtained for a range of doping by microwave measurements.hardy
The phase diagram itself is consistent with this picture. The physics evolves in three stages. Above the superconducting transition temperature there are two crossovers, which are quite well separated in at least some underdoped materials. The upper crossover is indicated by the onset of short-range magnetic correlations and by the appearance of a pseudogaphomes in the $`c`$-axis optical conductivity (perpendicular to the CuO<sub>2</sub> planes) which might possibly indicate the establishment of a stripe glass phase. The lower crossover is where a spin gap or pseudogap (which is essentially the amplitude of the superconducting order parameter) is formed. Finally, superconducting phase order is established at T<sub>c</sub> and, in fact, determines the value of T<sub>c</sub>.nature ; erica2
Acknowledgements: We acknowledge frequent discussions with J. M. Tranquada. This work was supported at UCLA by the National Science Foundation grant number DMR93-12606 and, at Brookhaven, by the Division of Materials Sciences, U. S. Department of Energy under contract No. DE-AC02-98CH10886.
# Modeling The Time Variability of Accreting Compact Sources
## 1 INTRODUCTION
The study of the physics of accretion onto compact objects (neutron stars and black holes), whether in galactic (X-ray binaries) or extragalactic systems (Active Galactic Nuclei), involves length scales much too small to be resolved by current technology or that of the foreseeable future. As such, this study is conducted mainly through the theoretical interpretation of spectral and temporal observations of these systems, much in the way that the study of spectroscopic binaries has been used to deduce the properties of the binary system members and the elements of their orbit. In this endeavor, the first line of attack in unfolding their physical properties is the analysis of their spectra. For this class of objects, and in particular the Black Hole Candidate (BHC) sources, a multitude of observations have indicated that their energy spectra can be fitted very well by Comptonization of soft photons by hot electrons; the latter are “naturally” expected to be present in these sources, a result of the dissipation of the accretion kinetic energy within an accretion disk. It is thus generally agreed upon that Comptonization is the process by which the high energy ($`>`$ 2–100 keV) spectra of these sources are formed, with the study of this process hence receiving great attention over the past couple of decades (see e.g. Sunyaev & Titarchuk 1980; Titarchuk 1994; Hua & Titarchuk 1995).
It is well known, however, that the Comptonization spectra cannot provide, in and of themselves, any information about the size of the scattering plasma, because they depend (in the case of optically thin plasmas) on the product of the electron temperature and the total probability of photon scattering in the plasma, a quantity proportional to its Thomson depth. Therefore, they cannot provide any clues about the dynamics of accretion of the hot gas onto the compact object, which require knowledge of the density and velocity as functions of radius. To determine the dynamics of accretion, one needs, in addition to the spectra, time variability information. It is thought, however, that such information may not be terribly relevant, because it is generally accepted that the X-ray emission originates at the smallest radii of the accreting flow and, as such, time variability would simply reflect the dynamical or scattering time scales of the emission region, of order a msec for galactic accreting sources and $`10^5`$–$`10^7`$ times longer for AGN.
Recent RXTE (Focke, private communication) as well as older HEAO-1 observations (Meekins et al. 1984) of the BHC Cyg X-1, which resolved X-ray flares of duration $`\sim `$ a few msec, appear to provide a validation of our simplest expectations. At the same time, however, the X-ray fluctuation Power Spectral Densities (PSD) of accreting compact sources generally contain most of their power at frequencies $`\omega <1`$ Hz, far removed from the kHz frequencies expected on the basis of the arguments given above. Flares of a few msec in duration, while present in the X-ray light curves of Cyg X-1, contribute but a very small fraction of its overall variability as manifest in its PSD, which exhibits very little power at frequencies $`>`$ 30 Hz (Cui et al. 1997). Interestingly, this form of the PSD has been reported, in addition to Cyg X-1, also in the BHC GX 339-4 and Nova Muscae GINGA data (see e.g. Miyamoto et al. 1991).
This discrepancy between the observed and the expected distribution of the variability power of BHC sources hints that one may have to revise the notion that the entire hard X-ray emission of these sources derives from a region a few Schwarzschild radii in size, and indicates the need for more detailed models of the timing properties of these systems. In this respect, models of the timing properties of accreting compact sources have been rather limited (with the exception of models of the quasi-periodic oscillations), the reason being (on the theoretical side) the largely aperiodic character of their light curves and (on the experimental side) the lack of sufficiently large area detectors, in conjunction with high telemetry rates, which would provide high timing resolution data. As a consequence, the study and modeling of this class of sources has concentrated mainly on their spectra, whose S/N ratios can be improved by longer exposure times. Thus, the literature associated with models of the spectra of BHC sources is substantially larger than that of their timing models.
Much of the earlier work in modeling the aperiodic light curves of BHC sources was kinematic in character, aimed at their deconvolution into elementary events with the goal of providing fits to the observed PSDs. Thus, Lochner (1989) was able to reproduce the observed PSDs as the ensemble of incoherent, exponential shots of durations ranging from 0.01 to 1 sec, while Lochner, Swank & Szymkoviak (1991) searched (unsuccessfully) for a low dimensional attractor in the light curve of Cyg X-1. In a similar fashion, Abramowicz et al. (1991) produced model light curves with PSDs similar to those observed, resulting from a large number of bright spots in an accretion disk. More recently, a number of dynamical models have appeared in the literature: Chen & Taam (1995) produce time variability as a result of hydrodynamic instabilities of an accretion disk, while Takeuchi, Mineshige & Negoro (1995) use a model of self-organized criticality to simulate accretion onto the compact object. Both these models provide reasonable fits to the observed PSD shapes (however, not necessarily to their normalizations) by producing a modulation of the accretion rate onto the compact object.
Models of the type just described generally aim to account for only the simplest of variability tests, namely the PSD, and they usually have enough freedom to match the shape of the observed PSDs. However, in most of them the associated light curves are very different from those observed, testimony to the fact that the power spectrum erases all the phase information available in the signal and that very different light curves can in fact have identical PSDs. We are aware of only one effort to keep track of, and search for correlations in, the phases of the different Fourier components in the light curves of low luminosity AGN (Krolik, Done & Madejski 1993). The conclusion of this search was that the process responsible for the AGN variability (at least for the objects studied in the above reference) is incoherent, in the sense that the phases of the various Fourier components are uncorrelated and apparently random.
In this respect, one should note that the very fact that the spectra of these sources are due to Comptonization of soft photons has several direct implications concerning the phases of photons of different energies: because, on average, it takes longer to produce a high energy photon than a lower energy one, the energy of the escaping photons increases with their residence time in the scattering medium. Thus, while there may not be any coherence in the absolute phases of the various Fourier components of the light curves of this class of sources, the relative phases between photons in two different energy bands may not be random, provided that the observed radiation is produced by a single hot electron cloud rather than a large number of individual, disjoint sites. As a result, the hard photon light curves should lag those of softer photons by amounts which depend on the photon scattering time in the plasma. If the sources are of rather limited spatial extent and the scattering takes place only in the vicinity of the compact object, then the corresponding time lags should be roughly constant (independent of the Fourier frequency) and of order of the scattering time in this medium, i.e. $`10^{-3}`$ sec for galactic sources and correspondingly larger for AGN. On the other hand, if the scattering medium is extended, the lags should cover a temporal range which depends on the characteristics of the scattering medium.
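A rough, standard Comptonization estimate (quoted here only as an order-of-magnitude, non-relativistic guide) makes this expectation concrete: the mean fractional energy gain per scattering is $`\mathrm{\Delta }E/E\simeq 4kT_e/m_ec^2`$, so the number of scatterings needed to boost a photon from energy $`E_1`$ to $`E_2`$ is $`N_s\simeq \mathrm{ln}(E_2/E_1)/\mathrm{ln}(1+4kT_e/m_ec^2)`$, and the expected hard lag is

$$\mathrm{\Delta }t\simeq N_st_c\simeq t_c\frac{\mathrm{ln}(E_2/E_1)}{\mathrm{ln}(1+4kT_e/m_ec^2)},$$

with $`t_c`$ the photon scattering time in the medium; for $`kT_e\simeq 100`$ keV and one to two decades in photon energy this amounts to several scattering times, i.e. to msec lags for a compact galactic source.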
Searches for these lags in the light curves of the BHC sources Cyg X-1 and GX 339-4 in the GINGA (Miyamoto et al. 1988, 1991) and the more recent RXTE data (Cui et al. 1997b) have detected their presence and, more importantly, discovered that the hard time lags $`\mathrm{\Delta }t`$ depend on the Fourier period $`P`$, increasing roughly linearly with $`P`$ from $`\mathrm{\Delta }t<0.001`$ sec at $`P\simeq 0.05`$ sec to $`\mathrm{\Delta }t<0.1`$ sec for $`P\simeq 10`$ sec. These long lags, and in particular their dependence on the Fourier period $`P`$, are very difficult to interpret in the context of a model in which the X-ray emission is due to soft photon Comptonization in the vicinity of the compact object; in such a model the hard lags should simply reflect the photon scattering time in the specific region (i.e. $`\sim `$ msec) and, moreover, they should be independent of the Fourier frequency. While this type of time lag was found originally in the light curve of the BHC Cyg X-1, similar lag behavior has been recorded for the transient J0422+32 (Grove et al. 1998) and for the source GRS 1758-258 (Smith et al. 1997). Finally, indicating that these results may not be universal, the data of Wilms et al. (1997) show that the lags associated with Cyg X-1 during their observation increase much more gradually.
Motivated by the discrepancies between our expectations of X-ray variability based on simple dynamical models and the observed form of their PSDs and their frequency dependent hard lags, we have revisited the issue of time variability of BHC sources (Kazanas, Hua & Titarchuk 1997, hereafter KHT; Hua, Kazanas & Titarchuk 1997, hereafter HKT; Hua, Kazanas & Cui 1999, hereafter HKC). The central point of our considerations has been that, contrary to the prevailing notions about the timing behavior of accreting sources, features in the observed PSD correspond not to variations in the accretion rate but rather to properties of the electron density distribution of an extended ($`>10^3R_s`$, rather than a compact $`<10R_s`$) hot electron corona. With this assumption in place, features in the PSDs translate into well defined properties of the scattering corona which can in principle be checked for consistency with observations. Thus we have proposed that the low frequency ($`\nu <1`$ Hz) break of the PSD to white noise is associated with the outer edge of the Comptonizing corona, while its inner edge is very close to the compact source. We further indicated that the power-law-like PSDs depend on both the density profile and the total Thomson depth of the corona, implying correlations between the energy and the variability power spectra which can be sought in the data.
Using this extended corona model, we were able to show explicitly that the observed energy spectra convey rather limited information about the structure of the scattering medium (KHT, HKC) and that very different hot electron configurations can indeed yield the same energy spectrum as long as the total probability of scattering remains the same across the different configurations.
We have also shown (HKC) that the fact that the lags between different energy bands depend on the differential probability of scattering at a given range of radii (contrary to the spectrum which depends on the total probability) affords a means of probing the density structure of the corona through the study of the lag dependence on the Fourier period $`P`$. Finally, we were able to show that due to the linearity of Compton scattering, the coherence function (as defined by Vaughan & Nowak 1996) of the extended hot corona configuration is very close to one, in agreement with observations (Vaughan & Nowak 1996), suggesting, in addition, that the properties of the scattering hot electron cloud remain constant over the observation time scales (Hua, Kazanas & Titarchuk 1997).
Thus the timing observations indicate, on one hand, that the absolute phases of the light curves of accreting compact objects are random, implying an incoherent process, while on the other hand the relative phases between different energy bands are extremely well correlated, indicating a coherent underlying process. The purpose of the present paper is to produce model light curves of accreting compact sources compatible with these apparently contradictory aspects of their coherence as well as with the observed PSDs. We also examine the structure of the latter (including the presence of QPOs) and their dependence on the sources’ luminosity in the general context of these models. Having provided a prescription for producing model light curves, we subsequently examine their properties in the time rather than the frequency domain and indicate tests which may confirm or disprove the fundamentals of our model.
In §2 we outline the extended corona configuration under consideration and present models of the Comptonization response function and the corresponding power spectra associated with it. We also elaborate on the fundamental tenet of our models, namely the association of PSD features with corresponding features in the density structure of the extended corona, by providing a generic QPO model within this framework. In §3 we provide a prescription for generating model light curves using the coronal response function and generate a number of such curves for different values of the model parameters. With the model light curves at hand, we further elaborate our analysis by computing their attributes and comparing them to observation: in §4 we compute phase lags as a function of the Fourier frequency and in §5 the associated autocorrelation and skewness functions. Finally, in §6 the results are summarized and discussed.
## 2 THE EXTENDED CORONA AND ITS RESPONSE
Kazanas, Hua & Titarchuk (1997) proposed that the timing properties of BHC discussed above, i.e. the PSD and the form and frequency dependence of the time or phase lags, can be easily accounted for with the assumption that all associated variability is due to the Compton scattering of soft photons in an extended, non-uniform corona surrounding the compact object and spanning several decades in radius. Specifically, they proposed the following density profile for the Comptonizing medium:
$$n(r)=\{\begin{array}{cc}n_1\hfill & \text{for }rr_1\hfill \\ n_1(r_1/r)^p\hfill & \text{for }r_2>r>r_1\hfill \end{array}$$
$`(1)`$
where the power index $`p>0`$ is a free parameter; $`r`$ is the radial distance from the center of the spherical corona; $`r_1`$ and $`r_2`$ are its inner and outer radii respectively.
KHT considered $`p`$ to be an arbitrary parameter; however, as indicated in that reference, the value $`p=1`$ allows for scattering of the photons with equal probability over the entire extent of the corona, introducing time lags at every available frequency with equal probability, a fact which is in agreement with the observations; hence they considered $`p=1`$ the fiducial value of this parameter. However, they also considered different values of $`p`$, in particular the value $`p=3/2`$, as it corresponds to the density profiles of the currently popular Advection Dominated Accretion Flows (ADAF; Narayan & Yi 1994).
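The special role of $`p=1`$ can be spelled out in one line: the differential Thomson depth across a shell at radius $`r`$ is

$$d\tau =n(r)\sigma _Tdr=n_1\sigma _Tr_1\frac{dr}{r}=n_1\sigma _Tr_1\,d(\mathrm{ln}r),$$

so that every logarithmic interval of radius carries the same scattering probability; for $`p>1`$ ($`p<1`$) the depth is instead dominated by the innermost (outermost) shells.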
### 2.1 The Shot Profiles
The time response of the given corona, i.e. the flux of photons escaping in a given energy range as a function of time following the injection of a $`\delta `$-function of soft photons at its center at $`t=t_0`$, has been computed using the Monte Carlo code of Hua (1997). As discussed in KHT and in HKC, its form, $`g(t)`$, ignoring its rising part, can be approximated by a Gamma distribution function, i.e. a function of the form
$$g(t)=\{\begin{array}{cc}t^{\alpha -1}e^{-t/\beta },\hfill & \text{if }t\ge 0\text{;}\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}$$
$`(2)`$
where $`t`$ is the time; $`\alpha >0`$ and $`\beta >0`$ are parameters determining the shape of the light curves, which depend on the scattering depth $`\tau _0`$ and the photon escape energy (see Figures 1a & 1b in KHT). As indicated in KHT, for small values of the total depth of the corona, $`\tau _0\lesssim 1`$, $`\alpha -1=-p`$, while as $`\tau _0`$ increases the power law part of the curve becomes progressively flatter, i.e. $`|\alpha -1|<p`$. For the cloud with $`p=1`$ in Eq. (1), $`\alpha `$ is small compared to 1 and $`\beta `$, determined by the outer edge of the scattering cloud, is taken to be of the order of 1 second, so that the light curves have an exponential cutoff at $`\simeq 1`$ second. For a uniform cloud, $`p=0`$, $`\alpha =1`$ and the light curve is a pure exponential (leading to a PSD proportional to $`\omega ^{-2}`$); one should note, though, that the precise value of $`\alpha `$ depends also on the photon energy. The simplified form of the response function given by equation (2) allows one to compute analytically its Fourier transform (HKC)
$$G(\omega )=\frac{\mathrm{\Gamma }(\alpha )\beta ^\alpha }{\sqrt{2\pi }}(1+\beta ^2\omega ^2)^{-\alpha /2}e^{i\alpha \theta }.$$
$`(3)`$
where $`\mathrm{\Gamma }(x)`$ is the Gamma function, $`\omega `$ is the Fourier frequency and $`\theta `$ is the phase angle, with $`\mathrm{tan}\theta =\beta \omega `$.
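As a consistency check of Eqs. (2) and (3), the following minimal numerical sketch (our own illustration, assuming numpy and scipy are available) compares the squared modulus of the FFT of the simplified shot with the analytic form; the two agree in shape, flat below $`\omega \simeq 1/\beta `$ and falling as a power law above it:

```python
import numpy as np
from scipy.special import gamma

# Numerical check of the simplified shot of Eq. (2) against Eq. (3).
alpha, beta = 0.4, 10.0            # fiducial shape parameters (sec)
dt, T = 1e-3, 400.0                # sampling step and record length (sec)
t = np.arange(dt, T, dt)           # start at dt: g diverges at t=0 for alpha < 1
g = t**(alpha - 1.0) * np.exp(-t / beta)

G = np.fft.rfft(g) * dt            # approximates the continuous transform
freq = np.fft.rfftfreq(len(g), d=dt)
omega = 2.0 * np.pi * freq
psd_num = np.abs(G)**2
psd_ana = gamma(alpha)**2 * beta**(2 * alpha) * (1.0 + (beta * omega)**2)**(-alpha)

# Same shape up to a constant factor and discretization error:
band = (freq > 1e-2) & (freq < 1e1)
print(np.std(np.log(psd_num[band] / psd_ana[band])))   # small
```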
However, the form of the realistic response functions is more complicated. Figure 1 shows the shape of the response function as computed using the Monte Carlo code (Hua 1997). The parameters used in the Monte Carlo calculation were $`r_1=6.35\times 10^{-3}`$ light sec $`=1.9\times 10^8`$ cm, $`r_2=6.24`$ light sec $`=1.87\times 10^{11}`$ cm, the electron temperature was taken to be $`T_e=100`$ keV, the Thomson depth $`\tau _0=1`$ and the density at the bottom of the corona $`n_1=10^{15}`$ cm<sup>-3</sup>. The soft photons were chosen randomly from a Planck distribution of temperature 1 keV and the data give the flux of Comptonized photons in the 35–60 keV energy range.
To simulate the precise form of the shots in the coronae we consider, we use the more complex function
$$g(t)=\{\begin{array}{cc}A_1(1-B_1x^b)x^\gamma ,\hfill & \text{if }x\equiv t/t_0\le 1\text{;}\hfill \\ A_2(1+B_2x^{-b})x^{\alpha -1}e^{-(xt_0/\beta )^3},\hfill & \text{if }x\equiv t/t_0>1\text{,}\hfill \end{array}$$
$`(4)`$
where $`b,\gamma >0`$. The parameters $`A_2,B_1,B_2`$ are given in terms of the arbitrary normalization $`A_1`$ and the form parameters $`\alpha ,\beta ,\gamma ,t_0`$ and $`b`$ by the requirement that the function $`g(t)`$ and its first derivative be continuous at $`x=1`$, with that point also being a local maximum. The parameters $`\alpha ,\beta `$ have the same meaning as those used in Eq. (2) above, while $`t_0`$ and $`\gamma `$ indicate, respectively, the time at which the response function achieves its maximum value (of order $`r_1/c`$) and the rate at which this maximum is achieved. The cut-off form of $`g(t)`$ is steeper than exponential, to mimic the detailed curve produced by the Monte Carlo simulation. One should note the additional parameter $`b`$, which regulates the “sharpness” of the transition from the rising to the falling part of the model response function. The values of the parameters used in fitting the response function were $`\alpha =0.4,\gamma =1.5`$, $`\beta =10`$ sec, $`t_0=0.02`$ sec, $`b=1`$. In the same figure we also present the PSD which corresponds to a single shot with this specific form of the response function. As will be argued in the next section, under certain assumptions this is also the PSD of the entire light curve.
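To make the matching explicit, here is a minimal numerical sketch (ours; it assumes the reading of Eq. (4) given above, with the correction term decaying as $`x^{-b}`$ in the falling branch) which fixes $`B_1,B_2,A_2`$ from the three conditions at $`x=1`$:

```python
import numpy as np

# Fix the constants of the two-branch shot profile, Eq. (4):
# g and g' continuous at x = 1, and g'(1) = 0 (local maximum).
alpha, gamma_, b = 0.4, 1.5, 1.0     # form parameters from the fit
beta, t0, A1 = 10.0, 0.02, 1.0       # sec; A1 is the arbitrary normalization
eps3 = (t0 / beta)**3                # (t0/beta)^3, tiny here

# Rising branch g_-(x) = A1*(1 - B1*x^b)*x^gamma: g_-'(1) = 0 gives
B1 = gamma_ / (b + gamma_)
# Falling branch g_+(x) = A2*(1 + B2*x^-b)*x^(alpha-1)*exp(-(x*t0/beta)^3):
# d(ln g_+)/dx = 0 at x = 1 gives -b*B2/(1+B2) + (alpha-1) - 3*eps3 = 0
c = (alpha - 1.0) - 3.0 * eps3
B2 = c / (b - c)
# g_+(1) = g_-(1) fixes A2
A2 = A1 * (1.0 - B1) / ((1.0 + B2) * np.exp(-eps3))

def g(x):
    """Model shot profile of Eq. (4), with x = t/t0 (x > 0)."""
    x = np.asarray(x, dtype=float)
    rise = A1 * (1.0 - B1 * x**b) * x**gamma_
    fall = A2 * (1.0 + B2 * x**(-b)) * x**(alpha - 1.0) * np.exp(-(x * t0 / beta)**3)
    return np.where(x <= 1.0, rise, fall)

print(g(np.array([1.0 - 1e-6, 1.0 + 1e-6])))   # continuous at the peak
```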
The particular light curve and the associated values of the corresponding fits should be considered only as fiducial values. Monte Carlo runs with smaller values of $`r_1`$ gave shots achieving their peak flux at proportionally shorter time scales. Thus, fits of corona response functions with $`r_1=4.77\times 10^{-4}`$ light sec $`=1.43\times 10^7`$ cm and $`n_1=10^{16}`$ cm<sup>-3</sup> gave rise times of order $`t_0=0.001`$ sec.
### 2.2 The Power Spectra
As will be discussed in the next section, the PSDs of the model X-ray light curves of accreting neutron stars and BHC sources which we present reflect, to a large extent, the properties of the response functions of the corresponding coronae; we therefore feel that a short discussion of their form is, at this point, necessary. To begin with, we note that in the limit of an infinitely sharp turn-on of the shots under consideration, the Fourier transform of $`g(t)`$ is given by $`G(\omega )`$ of Equation (3), and therefore the PSD, under these conditions, can be computed analytically (HKC)
$$|G(\omega )|^2=\frac{\mathrm{\Gamma }(\alpha )^2\beta ^{2\alpha }}{2\pi }(1+\beta ^2\omega ^2)^{-\alpha }.$$
$`(5)`$
As discussed in KHT, these PSDs consist of a power law section, whose slope is related to the corresponding slope of the power law segment of the response function, flattening to white noise for $`\omega <1/\beta `$. Since we have adopted the value $`\alpha \simeq 0.5`$ in the form of the response function, the PSD consists mainly of a power law segment of slope $`2\alpha -2\simeq -1`$, i.e. flicker noise, representative of the power law segment of the time response function. However, for the case of more realistic shots which have a finite rise time, such as those shown in fig. 1, the computation of the PSD can no longer be done analytically. Moreover, the finite size of the shot rise time introduces additional structure in the PSD, which manifests itself as a break in its high frequency portion. In figure 1 we present, in addition to the profiles of the response functions, the corresponding PSDs (short-dashed lines).
It is of interest to compare the shape of the PSD to those associated with real data of the BHC Cyg X-1, GX 339-4 and Nova Muscae (Miyamoto et al. 1992, fig. 1). One can see that, in addition to the flicker noise behavior, our model also produces the steepening observed at higher frequencies (see also Cui et al. 1997 for RXTE data of Cyg X-1 and Grove et al. 1998 for OSSE data of GRO J0422+32). The similarity of the model PSD shapes to the data notwithstanding, the most important feature of the present model is, in our view, the physical association between specific PSD features and the physical characteristics of the hot electron corona responsible for the production of the high energy radiation through Comptonization (most notably its size and radial density slope, though the latter is determined unequivocally only through the hard lags, HKC).
We would like to comment specifically on the effect of the parameter $`b`$, the shot “sharpness” parameter, on the form of the PSD; as one might expect, for values $`b<1`$ this parameter can affect significantly the form of the PSD, rendering it steeper than expected on the basis of the value of the parameter $`\alpha `$. However, the realistic light curves produced by the Monte Carlo simulation of Hua (1997) are fit quite well with values of $`b\simeq 1`$, and we have therefore adopted this value for this parameter in the remainder of this work.
### 2.3 The QPOs
Given that the present model purports to provide a rather generic account of the variability of accreting sources, we feel that, even at this early stage of its development, it should also be able to provide a generic account of the features occasionally occurring in the PSDs of these sources, known as Quasiperiodic Oscillations (QPOs). These features have attracted the attention of both theorists and observers, because it was considered that their quasiperiodic nature would lead to clues about the dynamics of accretion onto the compact object not available in their largely aperiodic light curves.
Since our present goal is not the study of QPOs, we forgo any discussion, references or models associated with this subject matter, with the exception of the reviews by van der Klis (1995, 1998), wherein the interested reader can find more about the QPO phenomenology, systematics and additional references. However, we would like to demonstrate that the tenet of our model - namely that features of the PSD correspond to features in the electron density of the hot corona - can indeed provide an account of the QPO phenomenon and some of its systematics.
In fig. 2a we present the response function of the following configuration: two concentric, uniform shells, each of $`\tau _0=1,kT_e=100`$ keV, one extending from $`r=0`$ to $`r=1`$ light second, the second from $`r_s=10`$ light seconds to $`r_s+\mathrm{\Delta }r`$ with $`\mathrm{\Delta }r/r_s\ll 1`$. Soft photons of energy 0.1 keV are released at the center, $`r=0`$, of the configuration. Each panel of fig. 2a corresponds to the response function for photons in the energy range denoted in the figure. One can distinguish a cut-off at $`t\simeq 1`$ sec, indicative of the exponential drop-off of the photons escaping from the inner shell, and a second one at $`t\simeq 10`$ sec due to escape through the outer thin shell. One should also note the presence of an additional peak at $`t\simeq 20`$ sec, corresponding to photons which were reflected by the outer shell (rather than transmitted through it), escaping at the opposite side of the configuration. It is our contention that the presence of these photons is responsible for at least some of the aspects of the QPO phenomenon.
In fig. 2b we exhibit the power spectra corresponding to the shots given in fig. 2a. There is an apparent QPO peak at $`\nu \simeq 1/20`$ sec<sup>-1</sup>, as well as harmonically spaced peaks, indicative of the power associated with the time scale $`t_0\simeq 2r_s/c`$, corresponding to the light crossing time of the outer shell. This additional power in the PSD is thus related to the spatial rather than the timing properties of an otherwise stationary configuration with generally stochastic soft photon injection. As shown in figure 2, the QPO contribution increases with photon energy, a fact also in agreement with observation. This model provides a straightforward account of this effect: the larger the photon energy, the greater the number of scatterings it has undergone and therefore the greater the probability that it has crossed the outer shell, thereby increasing the QPO contribution to the PSD. We view this dependence of the QPO fractional power on the photon energy as an interesting interplay between the spectral and timing properties which the present model can easily accommodate within its general framework.
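A toy version of this configuration (our own illustrative sketch, not the Monte Carlo calculation itself) already exhibits the effect: approximating the inner-sphere response by an exponential of $`\simeq 1`$ sec and adding a weak echo of the same shape delayed by $`2r_s/c=20`$ sec produces a PSD with peaks at harmonics of 1/20 sec<sup>-1</sup>:

```python
import numpy as np

# Toy two-shell response: inner-sphere decay plus a weak echo delayed by
# the light-crossing time 2*r_s/c of the outer shell (20 sec for r_s = 10).
dt = 0.01
t = np.arange(0.0, 200.0, dt)
core = np.exp(-t / 1.0)                  # escape from the inner sphere
echo = np.zeros_like(t)
echo[t >= 20.0] = 0.3 * np.exp(-(t[t >= 20.0] - 20.0) / 1.0)  # reflected photons
g = core + echo

freq = np.fft.rfftfreq(len(t), d=dt)
psd = np.abs(np.fft.rfft(g) * dt)**2
# PSD = |core FT|^2 * |1 + 0.3*exp(-2*pi*i*f*20)|^2: the interference factor
# modulates the smooth continuum with period 1/20 Hz, i.e. harmonic peaks.
ismax = np.r_[False, (psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:]), False]
print(freq[ismax][:5])                   # ~ multiples of 0.05 Hz
```

The echo amplitude (here 0.3) plays the role of the energy-dependent reflection probability discussed above; increasing it strengthens the QPO peaks.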
The presence of multiple QPOs in the PSD of accreting sources implies, therefore, within our model, the presence of multiple shells similar to that responsible for the PSD of figure 2. We do not know at this point the dynamics which would lead to the presence of such features; the purpose of the present note is simply to indicate that their presence can lead to QPO-like features similar to those observed. Furthermore, while features in the spatial electron distribution can indeed produce QPOs, one should be cautioned that they are not necessarily the sole cause of all QPO features, and that more conventional models may actually be responsible for a number of them. We are in the process of studying these possibilities in greater depth (Kazanas & Hua in preparation).
## 3 THE LIGHT CURVES
As discussed in the introduction, it is our opinion that one of the major obstacles in understanding the dynamics of accretion onto compact objects is the difficulty of providing concrete models of their observed time variability which could be easily compared to observation; in our view, this lack of models is largely due to the aperiodic character of their light curves, which offers few clues about the mechanism responsible for their formation. It is our contention that the light curves associated with the high energy emission of accreting compact sources are due, to a large extent, to the stochastic nature of Compton scattering in an optically thin, hot, extended medium, which is responsible for the formation of their spectra, coupled with a (probably) stochastic injection of soft photons to be Comptonized. Our proposal, therefore, is that the observed light curves consist of the incoherent superposition of elementary events, each triggered by the injection of a soft photon pulse into the extended corona of hot electrons discussed above. To simplify matters, we assume that the soft photon injection takes place at the center of the corona and is of vanishing duration; the resulting high energy light curve should therefore be the incoherent sum of pulses with shapes given by the corresponding response function discussed in the previous section.
Following this prescription, one can easily produce model light curves of the resulting high energy emission. These will have the form
$$F(t)=\sum _{i=1}^{N}Q_ig(t-\tau _i)$$
$`(6)`$
The variable $`\tau _i`$ in the above equation is a random variable indicating the injection times of the individual shots, while $`Q_i`$ is their normalization, a quantity which in our specific model depends on the number of soft photons injected in each particular shot event. Clearly, one could in principle arrange for any form of the PSD by choosing the values of the parameters $`Q_i,\tau _i`$ from appropriately defined distributions. While the values of $`Q_i`$ and $`\tau _i`$ may in fact be associated with certain distributions, we have no a priori knowledge of such distributions and they are in no way restricted by any compelling dynamical arguments. Therefore, in order to avoid introducing extraneous information into our time series, we have chosen a constant value for all $`Q_i`$’s, while the values of the $`\tau _i`$’s are chosen to be Poisson distributed with a given constant rate. As such, the intervals between successive $`\tau _i`$’s are chosen using the relation $`\mathrm{\Delta }\tau _i=-ft_0\mathrm{log}R_i`$, where $`R_i`$ is a random number uniformly distributed between 0 and 1 and $`f`$ is a real number indicating the mean time between shots in terms of their rise time $`t_0`$.
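In concrete terms, the prescription amounts to a few lines of code. The following minimal sketch (ours; for brevity it uses the simplified profile of Eq. (2) in place of the full form of Eq. (4), and all names are our own) implements Eq. (6):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(42)

def light_curve(T=2000.0, dt=0.01, t0=0.02, f=3.0, Q=1.0,
                alpha=0.4, beta=10.0):
    """Model light curve of Eq. (6): Poisson-injected shots of a common
    profile; f is the mean time between shots in units of the rise time t0."""
    n = int(T / dt)
    # exponential waiting times with mean f*t0 -> Poisson injection times
    waits = -f * t0 * np.log(rng.random(int(2 * T / (f * t0)) + 100))
    taus = np.cumsum(waits)
    taus = taus[taus < T]
    impulses = np.zeros(n)
    np.add.at(impulses, (taus / dt).astype(int), Q)
    # sampled shot profile (simplified Eq. (2) form), truncated at 5*beta
    tk = np.arange(dt, 5.0 * beta, dt)
    kernel = tk**(alpha - 1.0) * np.exp(-tk / beta)
    return np.arange(n) * dt, fftconvolve(impulses, kernel)[:n]

t, F = light_curve(f=3.0)
print(F.mean(), F.std())   # a steady state is reached after ~beta seconds
```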
In Figures 3a and 3b we present two such model light curves. The parameters of the shots used in constructing these curves were the same as those fitting the response function of figure 2a. The two figures differ in the value of the parameter $`f`$, i.e. the parameter which indicates the mean arrival time between shots. The value of $`f`$ was set to $`f=3`$ in figure 3a and $`f=10`$ in figure 3b. The two curves were produced with exactly the same sequence of random numbers, a fact which can be discerned by identifying corresponding features in the two light curves. The random character of the parameter $`\tau _i`$ then leads to an apparently incoherent light curve in the sense discussed by Krolik, Done & Madejski (1993), i.e. of random absolute phases as a function of the Fourier frequency.
In figures 4a and 4b we present “zoomed-in” sections of the light curves of Figs. 3a and 3b, corresponding to the same ordinal in the sequence of random shots. The shape of the shots becomes progressively more asymmetric as the value of $`f`$ increases, since a larger $`f`$ allows the contribution of an individual shot to the light curve to decrease substantially before the next one appears, thus manifesting the underlying asymmetry of the individual shots. At the same time, this also leads to an increase in the RMS amplitude of the variability. One should note that the model light curves presented in these figures consist exclusively of the superposition of shots which die out like power laws rather than exponentials in time, without the presence of a d.c. component. The brief rising part of the light curve lasts for a time interval of $`\beta `$ seconds, i.e. for the time it takes the earliest shot to die out, and it is a transient associated with the turn-on process. Beyond this point a steady state of rather well defined mean is established, which however is not particularly smooth but is characterized by large, aperiodic oscillations, the result of the random arrival and superposition of shots of shapes similar to those of Fig. 1.
It is apparent from the above that an RMS value for the light curve can be established only for time scales $`t>\beta `$. We thus propose that observations in which the RMS values of the corresponding light curves depend on the observation interval indicate that these light curves have not been sampled for sufficiently long intervals, i.e. $`t\lesssim \beta `$, even if the accretion rate has remained constant during this interval. Conversely, the sampling time over which a well defined RMS value of the light curve can be established could be used to estimate the time scale $`\beta `$ and thus the size of the high energy emitting source. This size could then be compared to that obtained through the study of the lags (see HKC) for consistency.
The present model allows for several different ways of implementing a change in the RMS amplitude of the model light curves. These are, in principle, related to observables and could provide novel insights concerning the variability and dynamics of accreting sources. The simple prescription for generating the light curves given above indicates clearly that increasing the value of $`f`$, i.e. the mean time between shots in units of their rise time, results in larger RMS fluctuations of the light curves, should the remaining attributes of the response function stay the same. This particular case is discussed in more detail in §5.2, as it is expected to be correlated with the skewness of the light curves.
Here we discuss the additional possibility of changes in the RMS fluctuations of the light curves effected by changes in the value of the rise time of the shots, $`t_0`$. For example, decreasing $`t_0`$ while keeping the other parameters (size, temperature, density) of the hot corona constant would lead to an increase in the source luminosity and a corresponding decrease in the RMS fluctuation amplitude, without any additional changes in the spectral or temporal properties of the source.
However, if the time scales $`t_0`$ and $`\beta `$ are indeed related to length scales associated with the corona’s inner and outer edges, these would generally depend on the macroscopic parameters associated with the accretion flow, most notably the accretion rate. To illustrate with an example the insights which can be gained by comparing such simple models to observation, we assume that the inner and outer radii of the corona, $`r_1,r_2`$ (and for that matter the entire flow within the corona), scale proportionally to $`v_{ff}\tau _{\mathrm{cool}}`$, where $`v_{ff}`$ is the free-fall velocity and $`\tau _{\mathrm{cool}}`$ the local cooling time scale. Assuming the latter to be inversely proportional to the local density, a prescription in accordance with, say, the ADAF models of Narayan & Yi (1994), all the length scales of the corona, expressed in Schwarzschild radii, should then be inversely proportional to the accretion rate of the flow, measured in units of its Eddington value. Therefore, an increase in $`\dot{m}`$ would lead not only to a decrease in $`t_0`$, and therefore in the RMS amplitude of fluctuations, but also to a concomitant decrease of all other lengths of the system (e.g. its outer edge), resulting in an increase of the PSD frequencies corresponding to these features (e.g. the PSD low frequency break).
Such correlations between the source luminosity and the timing characteristics, while they may not be a universal phenomenon, have apparently been observed in at least several sources. For example, van der Hooft et al. (1996) indicate that an increase in the luminosity of the Black Hole Candidate GRO J1719-24 leads to an increase in the frequency of the QPO at 0.04 Hz by a factor of $`\simeq 4`$, while at the same time the RMS amplitude decreases by exactly the same amount, preserving the total variability power in the sense that the product $`\omega |F(\omega )|^2`$ remains constant. These authors indicate that scaling down all the frequencies of the PSD obtained when the source was in its high state leads to a PSD indistinguishable from that obtained when this source had a much lower luminosity. Similar general trends have also been observed in the transient source GRO J0422+32 (Grove et al. 1998), indicating that this behavior does not represent an isolated phenomenon associated with a particular source. At the same time, the simple example discussed above indicates how modeling the aperiodic variability of these sources could lead to new insights into, and probes of, the dynamics of accretion onto compact objects.
Concerning the morphology of the light curves given in figs. 3a and 3b, eye inspection reveals shots with a variety of time scales, and indeed a distribution of such shots can be found, should one care to view and model them as such (see e.g. Focke & Swank 1998). However, none of these additional time scales or shots have been used as input in this particular simulation. They result simply from the superposition of a large number of shots with power law tails which extend to $`\simeq 1`$ second. Zooming in to the highest time resolution, one can indeed discern shots with rise times of the order of $`t_0\simeq 10^{-3}`$ sec, in agreement with observations (Meekins et al. 1984). These shots are indeed the individual elementary events (with the power-law extended tails) which comprise our model light curves. However, given the rather limited time range over which one can distinguish the contribution of an individual shot over the flux produced by the incoherent sum of their ensemble (or the detector statistics), one would most likely attempt to fit their shape with an exponential rather than a power law, because their long, extended tails cannot be discerned in the data. However, while fits in the time domain fail to uncover the true structure of these shots because of their crowding, the PSD can achieve this very easily and reliably.
The above analysis makes clear that our model for the variability of accreting sources stands in stark contrast to other models put forth to date to account for the “flicker noise” character of the observed variability of BHC (Chen & Taam 1995; Takeuchi, Mineshige & Negoro 1995): rather than attributing the observed PSD characteristics entirely to variations in the mass flux onto the compact object, it attributes them, for the most part, to the spatial distribution of electrons in the Comptonizing cloud and the stochastic nature of Compton scattering. The accretion rate is no doubt variable, especially at the shortest intervals - smallest radii (e.g. the soft photon injection events); however, its variability is consistent with a constant “averaged-out” accretion rate on longer time scales, as indicated by the coherence measurements and their interpretation (HKT). More importantly, this analysis provides a direct association between the physical properties of the scattering “corona” and specific features of the PSD, in particular its low and high frequency breaks. Since the scattering properties of the extended corona (i.e. $`p,T_e,n_1,\tau ,r_2`$) affect both the spectral and the temporal properties of the emitted radiation in a well defined fashion, this model implies the presence of certain well defined spectro-temporal correlations and provides the motivation to search for and model them in detail. Because such correlations will have to be related eventually to the dynamics of accretion, they could serve as a means for probing these dynamics along the lines of the simple example given above. Finally, the apparently random character of the observed light curves (Lochner et al. 1991) is attributed to the “random” (Poisson) injection of soft photons and the stochastic nature of the Comptonization process.
## 4 THE TIME AND PHASE LAGS
The fact that the variability associated with the light curves of figure 3 is due in part to Compton scattering, rather than, for instance, the modulation of the accretion rate, affords an additional probe of the properties of the scattering medium, namely the study of the time or phase lags between the light curves of two different energy bands as a function of the Fourier frequency. This particular issue has already been discussed in KHT, HKT and, in greater detail, in HKC. It bears on the fact that time lags between photons of different energies depend on the scattering time of the plasma within which the Compton scattering takes place. In a corona with the density profile given by Equation (1), there is a linear relation between the scattering time and the crossing time of a given decade in radius, while in addition the probability of scattering within a given (logarithmic) range in radius is constant (for $`p=1`$). Consider now the light curves in two different energy bands: the escaping photons in the higher of the two energy bands suffer, on average, a larger number of scatterings, which, for $`p=1`$, take place with equal probability at all radii; the information about the radii at which the additional scatterings took place is imprinted in the Fourier structure of these time lags. The linearity between the scattering and the crossing times then suggests that the lags grow proportionally to the Fourier period $`P`$.
We have repeated the lag analysis outlined in the previous references, this time with model light curves appropriate for two different energies, generated artificially using the algorithm described in the previous section. These were generated as sums of shots as demanded by Eq. (6), with the function $`g(t)`$ in each sum being the response function corresponding to a given photon energy. As noted in KHT, the very fact that higher energies require longer residence times of the photons in the scattering cloud leads to small but significant (from the point of view of the lags; see HKC) differences in their corresponding response functions. In particular, the power law part of the response function is slightly flatter (larger $`\alpha `$) and extends to slightly larger times (larger $`\beta `$). While eye inspection cannot discern any difference in the shapes of the light curves corresponding to the two different energies, the differences are easily manifest in the Fourier decomposition of the time lags. As pointed out in HKC, these differences in the individual shot profiles suffice to produce lags in general agreement with observation.
Figure 5 presents the phase lags of our model light curves as a function of the Fourier frequency. Their magnitude and Fourier frequency dependence are very similar to those corresponding to the galactic BHC analyzed by Miyamoto et al. (1991) and to those of the X-ray transient source GRO J0422+32 (Grove et al. 1998). The corresponding time lags of the light curves so generated are obtained by simply dividing the phase lags by the Fourier frequency. The fact that the phase lags are almost constant over the entire range in Fourier frequency of figure 5 implies that the corresponding time lags will be roughly proportional to the Fourier period $`P`$, as discussed in HKC.
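For completeness, we also sketch the lag computation itself (our own implementation of the standard cross-spectral recipe, cf. Vaughan & Nowak 1996): the two bands are built from a common set of injection times but slightly different shot parameters, as in the caption of figure 5, and the lag at each frequency is the phase of the segment-averaged cross spectrum:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
dt, T = 0.01, 4000.0
n = int(T / dt)

# one common set of Poisson injection times for both bands (mean spacing 0.06 s)
waits = -0.06 * np.log(rng.random(int(2 * T / 0.06)))
taus = np.cumsum(waits); taus = taus[taus < T]
imp = np.zeros(n); np.add.at(imp, (taus / dt).astype(int), 1.0)

def band(alpha, beta):
    """Band light curve: the impulse train convolved with its shot profile."""
    tk = np.arange(dt, 5.0 * beta, dt)
    return fftconvolve(imp, tk**(alpha - 1.0) * np.exp(-tk / beta))[:n]

soft, hard = band(0.50, 16.0), band(0.55, 16.0)   # cf. caption of Figure 5

def phase_lags(a, b, nseg=64):
    """Phase by which b lags a; numpy's e^{-2*pi*i*f*t} convention means a
    positive delay shows up as a negative raw cross-spectrum angle."""
    m = n // nseg
    C = 0.0
    for k in range(nseg):
        sa = np.fft.rfft(a[k*m:(k+1)*m] - a[k*m:(k+1)*m].mean())
        sb = np.fft.rfft(b[k*m:(k+1)*m] - b[k*m:(k+1)*m].mean())
        C = C + np.conj(sa) * sb
    return np.fft.rfftfreq(m, d=dt)[1:], -np.angle(C[1:])

fr, lag = phase_lags(soft, hard)
# time lag = lag / (2*pi*fr); a near-constant phase lag corresponds to time
# lags growing roughly in proportion to the Fourier period P = 1/fr.
```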
One could apply the arguments presented in the previous section on the dependence of the PSD shape and features on the accretion rate to infer the dependence of the lags on the accretion rate; the contraction of all scales associated with the corona argued there should also be reflected in a decrease of the corresponding lags with increasing accretion rate. As indicated by Cui et al. (1997), such a dependence has been observed in Cyg X-1, providing further evidence in favor of this specific model.
## 5 THE AUTOCORRELATION AND TIME SKEWNESS FUNCTIONS
In addition to the information provided by the Power Spectral Densities (PSD) and the phase or time lags discussed above, insights into the nature of the variability of BHC sources can also be obtained from moments of the light curves in the time rather than the frequency domain. These have been used in the analysis of the light curves of accreting compact objects several times in the past (Lochner et al. 1991). Since we are able to produce models of these light curves, we feel it is instructive to compute the corresponding statistics associated with them, so that one can directly compare them to those associated with the light curves obtained from observation. At present we pay particular attention to two such statistics, the autocorrelation function and the time skewness of the light curves.
### 5.1 The Autocorrelation Function
This statistic provides a measure of the dependence of the flux (counting rate), $`F(t)`$, of the source at a given time $`t`$ on its flux, $`F(t+\tau )`$, a time $`\tau `$ later. Assuming that the source variability consists of flares of a particular time scale, the autocorrelation function provides a measure of this time scale. While the information contained in the autocorrelation function is related to that of the PSD through the Wiener-Khinchin theorem, this statistic is also commonly used to gauge the variability of accreting sources and, for purposes of comparison, it is instructive to provide its form for the model light curves we produce in conjunction with the PSD.
Given that our model light curves are the incoherent sum of a large number of shots of the form given by Eq.(4), in order to exhibit directly the effect of the random injection of shots used to produce the light curve, we have chosen to compute the autocorrelation function in two ways: (a) through the convolution of the response function $`g(t)`$ with itself, i.e.
$$ACF(\tau )=\int _0^{\infty }g(t)g(t+\tau )\,dt$$
$`(7)`$
where $`\tau `$ is the associated time lag. (b) directly from the model light curves produced by the procedure described above. Considering that the light curve consists of measurements of the flux $`F`$ at $`N`$ points in time, denoted as $`t_i`$, separated in time by an interval $`\mathrm{\Delta }\tau `$, the autocorrelation function at a given lag, $`\tau =u\mathrm{\Delta }\tau `$, is given by the sum (ignoring normalization factors)
$$ACF(\tau )=\sum _{i=1}^{N-u}[F(t_i)-\overline{F}][F(t_i+u\mathrm{\Delta }\tau )-\overline{F}]$$
$`(8)`$
where $`\overline{F}`$ is the mean value of the flux over the interval we consider.
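Both estimates are straightforward to implement; a minimal sketch (ours, with normalizations omitted as in the text) is:

```python
import numpy as np

def acf_from_response(g, max_lag):
    """Eq. (7), discretized: overlap of the shot profile with itself."""
    return np.array([np.sum(g[:len(g) - u] * g[u:]) for u in range(max_lag)])

def acf_from_lightcurve(F, max_lag):
    """Eq. (8): lagged products of the mean-subtracted light curve."""
    dF = F - F.mean()
    return np.array([np.sum(dF[:len(dF) - u] * dF[u:]) for u in range(max_lag)])

# Example with the simplified shot of Eq. (2) and the parameters of Figure 6:
dt = 0.01
t = np.arange(dt, 100.0, dt)
g = t**(0.38 - 1.0) * np.exp(-t / 10.0)     # alpha = 0.38, beta = 10 sec
acf = acf_from_response(g, 2000)
print(acf[:5] / acf[0])                      # normalized; falls off over ~beta
```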
The results of these two procedures are shown in figures 6a and 6b. As can be seen, the autocorrelation functions computed by the two procedures are consistent with each other, with the fluctuations at the largest lags of fig. 6b due to the statistical nature of our light curve. The latter is also very similar to the autocorrelation function computed from the Cyg X-1 light curve by Meekins et al. (1984) and Lochner et al. (1991). The similarity of the autocorrelation function to that obtained from the observations further corroborates the model we have just presented.
At this point we would like to note that the data associated with the autocorrelation function are usually presented (e.g. Meekins et al. 1984; Lochner et al. 1991) in a linear time coordinate; we believe that this presentation masks most of the interesting physics, which is contained in the interval near zero. It is our contention that in the study of systems whose variability apparently spans several decades in frequency, like the accreting compact objects considered in the present note, the use of logarithmic rather than linear coordinates is instrumental; it is only in terms of the former that one can capture the entire range of the physical processes involved. The forms of the autocorrelation functions corresponding to those of Figs. 6a and 6b in a logarithmic time coordinate are given in Figs. 6c and 6d.
### 5.2 The Time Skewness Function
This statistic measures the time asymmetry of a given light curve, i.e. whether the latter is composed of pulses having a sharper rise than decay or vice versa. Because the autocorrelation function is symmetric in the lag variable $`\tau `$, it cannot give any such information about the shape of the light curve. This property can be assessed by computing moments of the light curve higher than the second. In particular, the third moment, $`Q(\tau )`$ ($`\tau =u\mathrm{\Delta }\tau `$), as defined in Priedhorsky et al. (1979), provides the proper statistic. For a light curve given as an array of the flux $`F(t_i)`$, as described in the previous subsection, the skewness is given by the sum (again ignoring normalization factors)
$$Q(\tau )=\sum _{i=1}^{N-u}[F(t_i)-\overline{F}][F(t_i+u\mathrm{\Delta }\tau )-\overline{F}][F(t_i)-F(t_i+u\mathrm{\Delta }\tau )]$$
$`(9)`$
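A direct transcription of Eq. (9) (our sketch, again with the normalization omitted) reads:

```python
import numpy as np

def skewness_function(F, max_lag):
    """Time skewness Q(tau) of Eq. (9), tau = u*(sampling step).
    Q identically ~0 signals a time-symmetric light curve; a systematic
    sign over a range of lags signals asymmetric (e.g. fast-rise) shots."""
    dF = F - F.mean()
    Q = np.empty(max_lag)
    for u in range(max_lag):
        a, b = dF[:len(dF) - u], dF[u:]
        Q[u] = np.sum(a * b * (a - b))   # note F(t_i) - F(t_i+u*dtau) = a - b
    return Q
```

Applied to the light curves of §3, larger values of $`f`$ yield a larger and more extended non-zero $`Q(\tau )`$, as discussed below.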
Given that our model light curves are sums of shots with a unique time profile, one can infer a priori several properties of the resulting light curves: the form of the shot profile (Eq. 4) suggests that, since the shots are asymmetric in time, the corresponding light curves should also be asymmetric, at least in situations in which the contribution of individual shots can be perceived. However, the presence of the long power-law tails associated with the individual shots suggests that over sufficiently long time scales, which encompass a large number of shots, the light curves should be largely symmetric in time. This argument is borne out both by inspection of our model light curves and by computation of the skewness parameter.
The prescription for creating model light curves discussed in section 3 offers the possibility of testing the above arguments by producing light curves with the proper characteristics, through variation of one or more of their control parameters. The specific parameter in this case is the mean time between shots, $`f`$. In figure 7 we present the skewness function of light curves corresponding to two different values of this parameter, namely $`f=2,10`$. As expected, for the small value of this parameter the light curves are indeed symmetric, as can be assessed both by inspection and by the value of the skewness. For $`f=10`$ the light curves become distinctly asymmetric out to time lags of roughly $`10t_0`$, beyond which they appear again symmetric, due to the superposition of a large number of shots.
This specific property then lends itself to observational testing: one should note that large values of $`f`$ lead not only to non-zero short time scale skewness, but also to large RMS fluctuations of the light curve, as alluded to in §3; in fact, the larger the $`f`$-value, the larger the corresponding RMS fluctuations and also the value of the lag $`\tau `$ out to which the skewness, $`Q(\tau )`$, of the light curve remains non-zero. To the best of our knowledge, such a correlation has never been proposed or sought in the data. It would be of interest to see to what extent it is borne out by observations.
## 6 CONCLUSIONS, DISCUSSION
We have presented above a general prescription for producing model light curves of accreting compact objects. Within our model, the observed aperiodic variability of these light curves is due to the stochastic nature of the Comptonization process, in conjunction with the injection of soft photons near the compact object by a Poisson process. Our prescription thus accounts naturally for the apparent lack of coherence in the absolute phases of the observed light curves and for the apparent high coherence in the relative phases of the light curves of two different energy bands, since both the hard and the soft shots have the same origin in the impulsive injection of the soft photons. Concerning the most common test of variability, namely the PSD, our model relates it to spatial rather than timing properties of these systems, in particular to the spatial distribution of electrons in the Comptonizing hot corona. Thus it produces PSDs in agreement with observation and provides a novel framework within which one can easily accommodate the existence of QPOs and some of their systematics and dependence on the sources’ luminosity. Within the present framework for the variability of accreting sources, their timing and spectral properties are intimately related, in a way that could allow one to probe the dynamics of accretion onto the compact object through a combined spectro-temporal analysis. Finally, the model light curves produced using the prescription indicated above have a morphology in general agreement with the observed aperiodic variability of this class of sources.
The morphology of the light curves has been examined by computation of two statistical properties in the time domain, namely the autocorrelation function and the time skewness. These, as computed for our model light curves, appear to be in good agreement with their (albeit limited) published forms corresponding to the light curves of the BHC Cyg X-1. Considering the simplicity of our models (a single type of shot, a Poisson distribution of injection times), we are quite surprised that they work as well as they do. Our investigation points, in addition, to a correlation between the RMS variability and the skewness of the corresponding light curves, which we feel should be tested against the observations.
The models presented herein draw heavily on the ideas presented in KHT, HKT and HKC, namely of very extended ($`10^3`$–$`10^4\mathrm{R}_S`$) hot electron coronae with a power law dependence of the electron density on radius. In our view, the importance of these models lies in the implied direct association between features in the observed PSDs and time lags (i.e. features of their Fourier domain characteristics) and features in the spatial domain (i.e. size, radial density structure). Such an association is not dictated by any of the, albeit very few, alternative models of BHC variability and, to the best of our knowledge, neither has it been proposed before in the literature.
It is of interest that both the sources’ sizes and density structures, as deduced from our models (see also KHT, HKT, Hua et al. 1997), are incommensurate with those predicted by the most popular models. These models generally require the sources’ size to be only a few R<sub>S</sub> (Shapiro, Lightman & Eardley 1976), leading to a very narrow range in density (which in the present framework can be considered uniform) and therefore to an equally narrow range in time lags, in disagreement with observation. The recent, popular ADAF models (Narayan & Yi 1994) are indeed extended in radius, as demanded by our models, but their density profiles are proportional to $`r^{-3/2}`$, rather than the $`r^{-1}`$ profiles preferred by our fits to most (but not all) of the time lag observations obtained to date (HKC), suggesting that a variant of these models may be more appropriate for describing the dynamics of accretion in these sources. It is nonetheless important to point out that both of these density profiles have been associated with data from the same source (Cyg X-1) at different observing periods, indicating that, according to the present model, the structure of a given source can vary drastically as a function of time.
We believe that the present model is sufficiently well defined and makes concrete enough predictions to allow its falsification or confirmation by more detailed observations. As such, we expect observations in the time domain to be of vital importance. Because the timing and spectral properties are intimately related within our model, testing this particular paradigm would most likely involve a combination of spectral and temporal correlations. We have provided fits of our models to a small set of observations and derived the corresponding physical parameters of the scattering coronae for these systems; these indicate a great departure from our previous notions as to what they should be. We do not yet know how to justify the values of the parameters obtained by our models; however, this is not the purpose of the present paper. We simply hope that these models will stimulate additional observational scrutiny and re-analysis of the data within their framework, leading possibly to novel insights which will further our understanding of the dynamics of accretion onto compact objects.
Last but not least, these models will have to be modified to incorporate additional spectral features, such as the reflection features and the Fe lines, which the more conventional models of thin, cold disks and their associated hot coronae have addressed so far with significant success.
We would like to thank W. Focke, C. Shrader and W. Zhang for a number of interesting and informative discussions on the variability of accreting black holes.
Figure Captions
Figure 1. The response function of a corona with $`p=1`$ and $`r_1=6.35\times 10^{-3}`$ light sec $`=1.9\times 10^8`$ cm, $`r_2=6.24`$ light sec $`=1.87\times 10^{11}`$ cm, $`T_e=100`$ keV, $`\tau _0=1`$ and $`n_1=10^{15}`$ cm<sup>-3</sup> (long dashed curve), along with its fit by Eq. (4) with $`\alpha =0.4,\gamma =1.5`$, $`\beta =10`$ sec, $`t_0=0.02`$ sec, $`b=1`$ (solid curve), and the corresponding power spectrum (short dashed curve).
Figure 2. The response functions (2a) and the corresponding power spectra (2b) associated with the configuration consisting of a uniform sphere of $`\tau _0=1`$ and radius $`r=1`$ light second, surrounded by a thin shell of the same Thomson depth $`\tau _s=1`$ at $`r=10`$ light seconds.
Figure 3. Model light curves constructed using the prescription of Eq. (6). Both curves use $`\alpha =0.5,\gamma =1.5`$, $`\beta =1.5`$ sec, $`t_0=0.001`$ sec, $`b=1`$ but two different values of the mean arrival time between shots. The corresponding parameter $`f`$ takes the values $`f=3`$ (Fig. 3a) and $`f=10`$ (Fig. 3b).
Figure 4. “Zoomed-in” sections of the light curves of Figs. 3a and 3b respectively. The two sections correspond to the same ordinal of the random shots at the beginning of each figure.
Figure 5. The phase lags corresponding to two sets of light curves generated as described in the text. The light curves had the following parameters (a) $`\alpha =0.5`$, $`\beta =16`$ sec, $`t_0=0.02`$ sec. (b) $`\alpha =0.55`$, $`\beta =16`$ sec, $`t_0=0.02`$ sec. (c) $`\alpha =0.6`$, $`\beta =16`$ sec, $`t_0=0.02`$ sec. The two curves correspond to the lags between: (a) - (b) (solid line) and (a) - (c) (dotted line).
Figure 6. (a) The autocorrelation function computed using the corona response function $`g(t)`$ (Eq. 7) with $`\alpha =0.38`$, $`\beta =10`$ sec and $`t_0=0.02`$ sec. (b) The autocorrelation function computed from a model light curve, generated according to Eq. (6) with $`g(t)`$ as above. (c) Same as in figure 6a plotted in logarithmic time coordinate. (d) Same as in figure 6b plotted in logarithmic time coordinate.
Figure 7. The skewness function as computed for two model light curves generated using Eq. (6) with $`g(t)`$ parameters equal to those of figure 6 and two different values of $`f`$, $`f=2`$ (a) and $`f=10`$ (b). The change in the sign of skewness with $`f`$ at short time scales is apparent in the figure.
QUANTUM GENERATION OF THE NON-ABELIAN
$`SU(N)`$ GAUGE FIELDS
P. I. Fomin, T. Yu. Kuzmenko
Bogolyubov Institute for Theoretical Physics,
National Academy of Sciences of Ukraine,
14 b Metrologichna Str., Kyiv-143, 252143, Ukraine
e-mail: tanya@ap3.bitp.kiev.ua
## Abstract
A generation mechanism for the non-Abelian gauge fields in the $`SU(N)`$ gauge theory is investigated. We show that the $`SU(N)`$ gauge fields ensuring the local invariance of the theory are generated at the quantum level only, due to the nonsmoothness of the scalar phases of the fundamental spinor fields. Expressions for the gauge fields are obtained in terms of the nonsmooth scalar phases.
PACS: 11.15.–q
Nowadays, the gauge principle occupies a significant place in quantum field theory. According to this principle, the fundamental interactions of elementary particles are transferred by gauge fields. The existence of these fields is considered to be necessary for ensuring the local gauge symmetries. The local $`U(1)`$ gauge symmetry in quantum electrodynamics was first discovered by Weyl. The non-Abelian local gauge symmetries and the corresponding gauge fields were introduced by Yang and Mills. The structure of the weak and strong interactions was later established on the basis of this approach.
It is commonly supposed that the gauge principle must necessarily be a consequence of the requirement that the gauge symmetry be local. However, it was shown that, in the framework of classical field theory, the local gauge invariance can be ensured without the introduction of nontrivial gauge fields, i.e., vector fields with nonzero field strengths. It is sufficient to introduce only the gradient vector field $`\partial _\mu B(x)`$, as a ”compensative field”, with zero strength $`(\partial _\mu \partial _\nu -\partial _\nu \partial _\mu )B(x)=0`$. Such a field does not contribute to dynamics. From the viewpoint of the classification of fields by spin, the scalar field $`B(x)`$ corresponds to spin zero and the gradient vector field $`\partial _\mu B(x)`$ is longitudinal. True vector gauge fields $`A_\mu `$ are transversal fields corresponding to spin one. Gauge invariance of the theory means that the longitudinal part of the vector gauge fields does not contribute to dynamics.
If so, what is the real cause of the existence of gauge fields and interactions? In Ref., the ”quantum gauge principle” was formulated in the context of quantum electrodynamics. This principle holds that the Abelian $`U(1)`$ gauge fields are generated at the quantum level only, and that the generation of these fields is related to the nonsmoothness of the field trajectories in the Feynman path integrals, by which the field quantization is determined. In this paper, we investigate the mechanism of non-Abelian $`SU(N)`$ gauge field generation. It is shown that nontrivial non-Abelian vector fields are generated because of the nonsmoothness of the field trajectories for the scalar phases of the spinor fields in the $`SU(N)`$ gauge theory.
Let us consider the Lagrangian for free spinor fields
$$L=i\overline{\psi }^j\gamma ^\mu \partial _\mu \psi ^j-m\overline{\psi }^j\psi ^j,$$
(1)
where $`j=1,2,\dots ,N`$. In what follows the index $`j`$ will be omitted.
The Lagrangian (1) is invariant under global non-Abelian $`SU(N)`$-transformations
$$\psi ^{\prime }(x)=e^{it^a\omega _a}\psi (x),\qquad \overline{\psi }^{\prime }(x)=\overline{\psi }(x)e^{-it^a\omega _a},$$
(2)
where $`t^a`$ are the $`SU(N)`$ group generators, $`\omega _a=\mathrm{const}`$, $`a=1,2,\dots ,N^2-1`$.
In the framework of classical theory, physical fields are known to be described by sufficiently smooth functions. Considering smooth local $`SU(N)`$-transformations
$$\psi ^{\prime }(x)=e^{it^a\omega _a(x)}\psi (x),\qquad \overline{\psi }^{\prime }(x)=\overline{\psi }(x)e^{-it^a\omega _a(x)},$$
(3)
we obtain that the transformed Lagrangian differs from the original one by the term:
$$\mathrm{\Delta }L=i\overline{\psi }(x)e^{-it^a\omega _a(x)}\gamma ^\mu \left(\partial _\mu e^{it^b\omega _b(x)}\right)\psi (x).$$
(4)
In Ref., it was shown that the local gauge invariance of the transformed Lagrangian can be ensured by introducing scalar fields $`B_a(x)`$. To put it another way, the Lagrangian
$$L=i\overline{\psi }\gamma ^\mu \partial _\mu \psi +i\overline{\psi }(x)e^{-it^aB_a(x)}\gamma ^\mu \left(\partial _\mu e^{it^bB_b(x)}\right)\psi (x)-m\overline{\psi }\psi $$
is invariant under the transformations (3) provided that the fields $`B_a(x)`$ transform as:
$$e^{it^aB_a^{\prime }(x)}=e^{it^aB_a(x)}e^{-it^b\omega _b(x)}.$$
The introduced scalar fields $`B_a(x)`$ do not contribute to dynamics, since they do not give rise to nonzero strengths and can be excluded by means of the smooth point transformations of the field variables $`\psi \mathrm{exp}\left(it^aB_a\right)\psi `$ . Thus we need not compensate the term (4) by introducing nontrivial vector fields $`A_\mu ^a`$ that do not reduce to gradients of scalar functions.
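Indeed, in terms of the smoothly redefined field $`\chi (x)=e^{it^aB_a(x)}\psi (x)`$, the unitarity of $`e^{it^aB_a}`$ reduces the above Lagrangian to the free form

$$L=i\overline{\chi }\gamma ^\mu \partial _\mu \chi -m\overline{\chi }\chi ,$$

which makes explicit that the scalar fields $`B_a(x)`$ carry no dynamics of their own.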
The situation changes in the quantum approach. In the Feynman formulation of quantum field theory the transition amplitudes are expressed by path integrals that are defined over nonsmooth field trajectories . In this context the Lagrangian (1) and its symmetries are defined on the class of nonsmooth functions $`\psi (x)`$, corresponding to nonsmooth trajectories in the path integrals. Strictly speaking, the derivatives involved in the Lagrangian (1) are discontinuous functions. From the physics standpoint, the nonsmoothness of the field trajectories is related to quantum fluctuations of the local fields. Feynman integrals, as a rule, are additionally specified by an implicit switch to "smoothed-out" approximations . In this case the degrees of freedom corresponding to gauge vector fields are lost. Here we show that, as in quantum electrodynamics , in the non-Abelian $`SU(N)`$ gauge theory these degrees of freedom can be explicitly taken into account when the "smoothing" of nonsmooth fields is carried out more carefully.
Let us approximate nonsmooth functions $`\theta ^a(x)`$ by smooth functions $`\omega ^a(x)`$:
$$\theta ^a(x)=\omega ^a(x)+\mathrm{\cdots }$$
In order to write down the next term of the "smoothed-out" representation of the nonsmooth functions $`\theta ^a(x)`$ it is necessary to consider the behaviour of the first derivatives of $`\theta ^a(x)`$. The derivatives $`\partial _\mu \theta ^a(x)`$ at the nonsmoothness points of $`\theta ^a(x)`$ are discontinuous functions. Since the derivatives $`\partial _\mu \omega ^a(x)`$ are continuous functions, they approximate the behaviour of the derivatives of $`\theta ^a(x)`$ poorly. Let us denote the difference between them by $`\theta _\mu ^a(x)`$ and write $`\partial _\mu \theta ^a(x)`$ as follows:
$$\partial _\mu \theta ^a(x)=\partial _\mu \omega ^a(x)+\theta _\mu ^a(x).$$
(5)
Since the nonsmooth fields $`\theta _\mu ^a(x)`$ do not reduce to gradients of smooth scalar fields, they are nontrivial vector fields that give rise to nonzero field strengths:
$$\partial _\mu \theta _\nu ^a(x)-\partial _\nu \theta _\mu ^a(x)\ne 0.$$
Therefore the fields $`\partial _\mu \theta ^a(x)`$ involve additional degrees of freedom which are related to the nonsmoothness of $`\theta ^a(x)`$. It should be noted that the fields $`\theta _\mu ^a(x)`$ are determined only up to the ambiguity in the choice of $`\omega ^a(x)`$.
Integrating the left- and right-hand sides of Eq.(5) along a space-like contour $`(P)`$ we obtain:
$$\theta ^a(x)=\omega ^a(x)+\underset{(P)}{\overset{x}{\int }}dy^\mu \,\theta _\mu ^a(y).$$
Let us now consider $`\theta ^a(x)`$ as scalar phases of the spinor fields $`\psi (x)`$ realizing the fundamental representation of the $`SU(N)`$ gauge group and separate out these phase degrees of freedom in an explicit form:
$$\psi (x)=e^{it^a\theta _a(x)}\psi _0(x),$$
(6)
where the spinor fields $`\psi _0`$ are the representatives of the class of gauge-equivalent fields , $`e^{it^a\theta _a}`$ is a unitary $`N\times N`$ matrix. Then, provided the Lagrangian (1) is determined on the class of nonsmooth functions $`\psi (x)`$, using Eq.(6) we obtain:
$$L=i\overline{\psi }_0\gamma ^\mu \partial _\mu \psi _0+i\overline{\psi }_0e^{-it^a\theta _a}\gamma ^\mu \left(\partial _\mu e^{it^b\theta _b}\right)\psi _0-m\overline{\psi }_0\psi _0.$$
(7)
Represent the matrix $`e^{it^a\theta _a}`$ as a superposition of the unit matrix $`I`$ and $`SU(N)`$ group generators $`t^a`$:
$$e^{it^a\theta _a}=CI+iS_at^a.$$
(8)
Since $`t^a`$ are traceless matrices normalized by Tr$`(t^at^b)=\frac{1}{2}\delta ^{ab}`$, the coefficients $`C`$ and $`S_a`$ in Eq.(8) are given by:
$$C=\frac{1}{N}\text{Tr}\left(e^{it^a\theta _a}\right),\qquad S_a=-2i\,\text{Tr}\left(t^ae^{it^b\theta _b}\right).$$
(9)
Then taking into account the commutation rules for $`SU(N)`$ group generators we can write down:
$$e^{-it^a\theta _a}\partial _\mu e^{it^b\theta _b}=it^a\left\{\overline{C}\partial _\mu S_a-\overline{S}_a\partial _\mu C+(f_{abc}-id_{abc})\overline{S}^b\partial _\mu S^c\right\},$$
(10)
where $`d_{abc}`$ ($`f_{abc}`$) are the totally symmetric (antisymmetric) structure constants of the $`SU(N)`$ group, and the overline denotes complex conjugation. It should be noted that terms proportional to the unit matrix are absent on the right-hand side of Eq.(10) because Tr$`\left(e^{-it^a\theta _a}\partial _\mu e^{it^b\theta _b}\right)=0`$.
Since the matrix $`e^{it^a\theta _a}`$ is unitary, the following equation is valid:
$$\overline{C}S_a-\overline{S}_aC+(f_{abc}-id_{abc})\overline{S}^bS^c=0.$$
(11)
Differentiating the left- and right-hand sides of Eq.(11) and using the antisymmetry of $`f_{abc}`$, we conclude that the expression in curly brackets in Eq.(10) is a real function. Thus this expression can be identified with the gauge fields:
$$A_\mu ^a\equiv \overline{C}\partial _\mu S^a-\overline{S}^a\partial _\mu C+(f^{abc}-id^{abc})\overline{S}_b\partial _\mu S_c.$$
(12)
Unlike the gauge field in electrodynamics , the fields $`A_\mu ^a`$ are nonlinear functions of $`\theta ^a(x)`$. As a consequence of the nonsmoothness of the phases $`\theta ^a(x)`$, the fields $`A_\mu ^a`$ are also not smooth. If we take into account only the first term on the right-hand side of relation (5), we obtain that the fields $`A_\mu ^a`$ do not contribute to the dynamics, as in classical field theory , and the degrees of freedom corresponding to gauge vector fields are lost. Taking $`\theta _\mu ^a(x)`$ into account enables us to interpret the fields $`A_\mu ^a`$ as nontrivial vector fields that give rise to nonzero field strengths:
$$\partial _\mu A_\nu ^a(x)-\partial _\nu A_\mu ^a(x)\ne 0.$$
By way of illustration let us consider the Yang-Mills $`SU(2)`$ gauge group. As a consequence of the anticommutation properties of the $`SU(2)`$ group generators, the coefficients $`C`$ and $`S_a`$ (see Eq.(9)) are given by:
$$C=\mathrm{cos}\left(\theta /2\right),S_a=2n_a\mathrm{sin}\left(\theta /2\right),$$
(13)
where
$$\theta =\sqrt{\theta _a\theta ^a},n_a=\theta _a/\theta ,a=1,2,3.$$
(14)
From Eqs.(13) and (14) it follows that the gauge fields $`A_\mu ^a`$ can be written as:
$$A_\mu ^a=n^a\partial _\mu \theta +\mathrm{sin}\theta \,(\partial _\mu n^a)+\mathrm{sin}^2(\theta /2)[𝐧\times \partial _\mu 𝐧]^a.$$
(15)
Expression (15) demonstrates explicitly the relation between the Yang-Mills gauge fields and the nonsmooth scalar phases of the spinor fields.
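As a quick numerical cross-check of the trace formulas (9) against the closed $`SU(2)`$ form (13)-(14), one may evaluate both for arbitrary test phases; the following is a minimal sketch using the matrix exponential (the test values are arbitrary assumptions):

```python
import numpy as np
from scipy.linalg import expm

# SU(2) generators t^a = sigma^a/2, normalized as Tr(t^a t^b) = delta^{ab}/2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [s / 2 for s in sigma]

theta_a = np.array([0.7, -1.2, 0.4])     # arbitrary test phases theta^a
theta = np.linalg.norm(theta_a)
n = theta_a / theta

U = expm(1j * sum(th * ta for th, ta in zip(theta_a, t)))

# coefficients from the trace formulas of Eq. (9)
C = np.trace(U) / 2
S = np.array([-2j * np.trace(ta @ U) for ta in t])

# compare with the closed SU(2) form of Eqs. (13)-(14)
print(np.allclose(C, np.cos(theta / 2)))           # True
print(np.allclose(S, 2 * n * np.sin(theta / 2)))   # True
```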
Let us obtain the transformation law for the vector fields (12). For this purpose we consider the infinitesimal smooth local transformations for the spinor fields:
$$\psi _0^{\prime }(x)=e^{it^a\omega _a(x)}\psi _0(x),\qquad \overline{\psi }_0^{\prime }(x)=\overline{\psi }_0(x)e^{-it^a\omega _a(x)}.$$
(16)
Then the Lagrangian (7) can be written as:
$$L=i\overline{\psi }_0^{\prime }\gamma ^\mu \partial _\mu \psi _0^{\prime }+i\overline{\psi }_0^{\prime }e^{it^a\omega _a}e^{-it^b\theta _b}\gamma ^\mu \partial _\mu \left(e^{it^c\theta _c}e^{-it^l\omega _l}\right)\psi _0^{\prime }-m\overline{\psi }_0^{\prime }\psi _0^{\prime }.$$
(17)
Defining the gauge fields $`A_\mu ^{a\prime }(x)`$ similarly to Eqs.(10) and (12) by the following equation:
$$it_aA_\mu ^{a\prime }(x)=e^{it^a\omega _a}e^{-it^b\theta _b}\partial _\mu \left(e^{it^c\theta _c}e^{-it^l\omega _l}\right),$$
(18)
we find that the transformed gauge fields $`A_\mu ^{a\prime }(x)`$ are related to the fields (12) as follows:
$$A_\mu ^{a\prime }(x)=A_\mu ^a(x)-\partial _\mu \omega ^a(x)-f_{abc}\omega ^b(x)A_\mu ^c(x).$$
(19)
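This follows by expanding the right-hand side of Eq.(18) to first order in $`\omega ^a(x)`$:

$$e^{it^a\omega _a}e^{-it^b\theta _b}\partial _\mu \left(e^{it^c\theta _c}e^{-it^l\omega _l}\right)=e^{it^a\omega _a}\left(it_bA_\mu ^b\right)e^{-it^c\omega _c}+e^{it^a\omega _a}\partial _\mu e^{-it^b\omega _b}\simeq it_a\left(A_\mu ^a-\partial _\mu \omega ^a-f_{abc}\omega ^bA_\mu ^c\right),$$

where the commutation rule $`[t_b,t_c]=if_{bca}t_a`$ and the total antisymmetry of $`f_{abc}`$ have been used.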
Consequently, in the framework of the considered scheme of gauge field generation, we recover the usual transformation law for the $`SU(N)`$ gauge fields, without local gauge invariance of the Lagrangian (7) having been imposed.
Using Eqs.(10) and (12) we obtain that the Lagrangian (7) takes the form:
$$L=i\overline{\psi }_0\gamma ^\mu \widehat{D}_\mu \psi _0-m\overline{\psi }_0\psi _0,$$
(20)
where $`\widehat{D}_\mu \equiv \partial _\mu +iA_\mu ^at_a`$ is the covariant derivative. It is easy to verify that the Lagrangian (20) is invariant under the transformations (16) and (19).
Therefore the gauge fields $`A_\mu ^a`$ ensuring the local $`SU(N)`$ gauge invariance of the Lagrangian (20) are generated because of the nonsmoothness of the field trajectories in the Feynman path integral. The nonsmoothness of the fields $`A_\mu ^a`$ corresponds to their quantum nature and means that these fields should also be quantized, i.e., functional integration is to be carried out over the variables $`A_\mu ^a(x)`$. However, the fields $`A_\mu ^a`$ in the Lagrangian (20) do not yet exhibit all the properties of physical fields, since in the absence of a kinetic term they cannot propagate in space.
An expression similar to the kinetic term can be obtained by calculating the effective action for the spinor fields described by the Lagrangian (20). Using the results of the calculations performed in Ref., we find the following expression for the kinetic term in the one-loop approximation
$$L_{\text{eff}}=\kappa \mathrm{ln}\frac{\mathrm{\Lambda }}{\mu _0}tr\widehat{F}_{\mu \nu }^2,\widehat{F}_{\mu \nu }=[\widehat{D}_\mu ,\widehat{D}_\nu ],$$
(21)
where $`\mathrm{\Lambda }`$ and $`\mu _0`$ are the ultraviolet and infrared cutoff momenta, respectively, and $`\kappa `$ is a numerical coefficient.
The formula (21) takes the usual form
$$L_{\text{eff}}=\frac{\hbar c}{8g^2}\text{tr}F_{\mu \nu }^2$$
upon identifying
$$g^2=\frac{\hbar c}{8\kappa \,\mathrm{ln}\frac{\mathrm{\Lambda }}{\mu _0}}.$$
(22)
The last equation relates the charge $`g`$ to the parameters $`\mathrm{\Lambda }`$ and $`\mu _0`$, as well as to the fundamental constants $`\hbar `$ and $`c`$, and thus demonstrates explicitly the quantum origin of the charge.
We note in conclusion that "compensating" gauge fields need not be introduced artificially for the local gauge invariance of the theory to be ensured. The vector gauge fields are generated through the nonsmoothness of the scalar phases of the fundamental spinor fields. From the viewpoint of the described scheme of gauge field generation, the gauge principle is an "automatic" consequence of the nonsmoothness of field trajectories in the Feynman path integral.
Acknowledgement
This work is supported in part by Swiss National Science Foundation Grant CEEC/NIS/96-98/7 IP 051219. One of the authors (PIF) is grateful to Professor H. Leutwyler for kind hospitality at the ITP of Bern University. We would like to thank Yu. Shtanov for several helpful comments and a careful reading of the manuscript.
# Molecular gas and the dynamics of galaxies
## 1. CO and H<sub>2</sub> content of galaxies
Although the molecular component is a key parameter for the star formation history and the evolution of galaxy disks, their H<sub>2</sub> content is still very uncertain, mainly because the bulk of the gas is not seen directly, but through questionable tracers such as the CO lines. The wider surveys now available, together with observations of starbursts and more sensitive observations of low metallicity galaxies, have revealed how variable the CO-to-H<sub>2</sub> conversion ratio can be; it was previously thought to be constant within $`\pm `$ 50% (Young & Scoville 1991). Also, the first surveys were oriented towards star forming galaxies, selected from far-infrared (IRAS) samples, and those happened to be rich CO emitters (according to the well established FIR-CO correlation). When more galaxies are included, the derived H<sub>2</sub>/HI mass ratio in galaxies becomes lower than 1, the previously established value, and is around 0.2 on average (Casoli et al 1998).
The dependence of the molecular content on type has also been refined; the apparent H<sub>2</sub>/HI mass ratio decreases monotonically from S0 to Sm galaxies, but whereas the extremes were once separated by a factor 20-30 (Young & Knezek 1989), this is now reduced to about 10 (cf fig.1, Casoli et al 1998). This gradient towards the late types might be due only to reduced CO emission, because of metallicity effects, and not to an intrinsic reduction of the H<sub>2</sub> content. When only the more massive objects are taken into account, this tendency with type does not appear: there is no gradient of molecular fraction. This supports the hypothesis that the gradient is due to metallicity, which is correlated with total mass.
The dependence of the CO-to-H<sub>2</sub> conversion factor X on metallicity has now been confirmed in many galaxies. In the Small Magellanic Cloud, X could be 10 times higher than the standard value of 2.3 10<sup>20</sup> mol cm<sup>-2</sup> (K.km/s)<sup>-1</sup> (Rubio et al 1993). The effect has also been seen in Local Group galaxies, such as M31, M33, NGC 6822 and IC10 (Wilson 1995). The physical explanation is complex, since the CO lines are optically thick, but the size of the clouds (and therefore the filling factor) decreases with metallicity, both because of the lower CO abundance itself and because of UV photodissociation enhanced by the depletion of dust. When the dust is depleted by a factor 20, there is only 10% less H<sub>2</sub> but 95% less CO (Maloney & Black 1988).
Another tracer of the molecular gas has been widely used in recent years: cold dust emission, through galaxy mapping with bolometers at 1.3mm. The technique is best suited to edge-on galaxies, since a nearby empty reference position must be frequently observed to eliminate atmospheric gradients. One of the first galaxies observed, NGC 891, has a radial dust emission profile exactly superposable on the CO-emission profile (Guelin et al 1993). At 3mm, the dust is completely optically thin, and the emission is therefore proportional to the dust abundance, or the metallicity Z. The identity between the dust and CO emission profiles tends to confirm the strong dependence of CO emission on Z. In some galaxies, the CO emission falls off radially even faster than the dust emission; this is the case for NGC 4565 and NGC 5907, for example (cf fig 2, Neininger et al 1996).
In those galaxies where the CO emission falls faster than the dust emission, the latter is correlated with the HI distribution (see also the case of NGC 1068, Papadopoulos & Seaquist 1999). This could be interpreted through two effects: either there is an extended diffuse molecular component, not visible in the CO lines because the H<sub>2</sub> density is not sufficient to excite them, or the CO abundance depends non-linearly on metallicity, i.e. decreases faster than the dust abundance, which is linear in Z. This might be the case in dwarf galaxies, as discussed in the next section.
## 2. CO in dwarf galaxies and Blue Compact starbursts
Much effort has been devoted to the detection of CO lines in blue compact dwarf galaxies (BCDGs): their high star formation rates are assumed to require a large H<sub>2</sub> content, and its detection would help to understand the star formation mechanisms and efficiencies. But the task has proved very difficult, since the objects have low masses and low metallicity. Arnault et al. (1988) already suggested that X varies as Z<sup>-1</sup>, and even as Z<sup>-2</sup> in a certain domain, but this was contested (Sage et al 1992). Recent results considerably improve the detection rate and the upper limits, because of the technical progress in the receivers (Barone et al 1998, Gondhalekar et al. 1998). They confirm the strong dependence of the CO emission on metallicity.
Taylor et al. (1998) have tried to detect CO in HI-rich dwarf galaxies in which the oxygen abundance was known. They have stringent upper limits on the most metal-poor galaxies, and conclude that the conversion factor must vary nonlinearly with metallicity, increasing sharply below roughly 1/10 of the solar metallicity \[12 + log (O/H) $`\lesssim `$ 7.9\] (cf figure 3).
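These scalings have a direct impact on the inferred gas content; a minimal numerical sketch (illustrative only: the metallicity values, the power-law index of X(Z), and the CO intensity below are assumptions, not fits to data) compares the H<sub>2</sub> column densities implied by X proportional to Z<sup>-1</sup> and to Z<sup>-2</sup>:

```python
# Illustrative only: scale the standard conversion factor with metallicity.
X0 = 2.3e20                     # standard X, mol cm^-2 (K km/s)^-1 (Sec. 1)

def N_H2(I_CO, Z_rel, power=1):
    """H2 column density for an assumed X = X0 * (Z/Z_sun)^(-power)."""
    return X0 * Z_rel**(-power) * I_CO

I_CO = 1.0                      # assumed CO(1-0) intensity, K km/s
for Z_rel in (1.0, 0.3, 0.1):   # solar, SMC-like, very metal-poor
    print(Z_rel, N_H2(I_CO, Z_rel, 1), N_H2(I_CO, Z_rel, 2))
```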
## 3. Low surface brightness galaxies: LSB
Low surface brightness galaxies may provide a clue to galaxy evolution processes: they appear to be unevolved galaxies, with large gas fractions, which may have formed their stars only during the second half of the Hubble time (McGaugh & de Blok 1997). Their metallicity is low, according to the correlation of metallicity with surface brightness (Vila-Costas & Edmunds 1992). Some can have, however, very large masses (and large rotational velocities, implying large dark matter fractions). To tackle the mystery of their low evolution rate, it is of prime importance to know their molecular content and their total gas surface density. De Blok & van der Hulst (1998) have made a search for CO line emission in 3 late-type LSB galaxies: they find a clear CO-emission deficiency, and conclude that there is a molecular gas deficiency. They argue that the conversion factor X has no reason to be higher than in normal late-type galaxies, since the star formation rate is smaller than in HSB galaxies, implying a lower UV flux that does not photodissociate the CO molecules as much. However, their derived upper limits on the M(H<sub>2</sub>)/M(HI) mass ratio fall in the same range as in HSB late-type galaxies; the question of their true molecular content is still open, the more so as early-type LSB galaxies are detected in CO. Note that there exists a large scatter in CO emission or H<sub>2</sub> content even among normal galaxies, and this is not well understood: in M81 for example, an interacting galaxy with particularly low H<sub>2</sub>/HI and normal metallicity, the molecular clouds appear to have different characteristics, with a lower velocity dispersion at a given size (Brouillet et al. 1998).
Even with a normal H<sub>2</sub>/HI mass ratio (close to 1), the gas surface density of these LSB galaxies is lower than the critical value for gravitational instabilities, and that is sufficient to explain the low efficiency of star formation (van der Hulst et al. 1993, van Zee et al. 1997). These low surface densities could come from the poor environment and the lack of companions (Zaritsky & Lorrimer 1993).
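For reference, the instability threshold invoked here is commonly expressed through the Toomre criterion for a gaseous disk,

$$\mathrm{\Sigma }_{crit}=\frac{\kappa c_s}{\pi G},$$

where $`\kappa `$ is the epicyclic frequency and $`c_s`$ the gas velocity dispersion; disks with gas surface density below this value are stable against axisymmetric gravitational instabilities, consistent with the low star formation efficiency of the LSB galaxies.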
## 4. Spiral Structure
It now becomes possible to map galaxies in the CO lines on large scales and with high spatial resolution, giving an overview of the molecular structure of a galaxy with a high dynamical range. With the On-The-Fly mapping procedure, Neininger et al. (1998a) have surveyed nearly half of the M31 disk in CO(1–0) with 23” (90pc) resolution, and they have detailed some remarkable regions at 2” resolution with the IRAM interferometer. Apart from emphasizing the fractal structure of the molecular gas over such a large range of scales, this study reveals a tight correlation between the CO arms and the dust lanes, and also with the HI arms. At this scale, the kinematics of the CO lines are dominated more by the star forming shells and the resulting chaotic motions in the arms than by large-scale streaming motions. In the global sense, the spiral structure of this galaxy is not well determined, due to projection effects, and it is possible that the ISM is concentrated in a ring rather than in spiral arms (see the ISOPHOT map from Haas et al 1998).
The BIMA interferometer group has undertaken a survey of 44 nearby galaxies (SONG), with large fields of view (mosaics of 7 fields of 3-4’) at 7” angular resolution (cf poster by Helfer et al., this conference). A large range of CO morphologies is detected, among spirals, bars and rings. The spiral structure of M51 is particularly detailed, and the arm-interarm contrast is deduced more accurately, including short-spacing data.
## 5. Centers of galaxies
High-resolution CO maps of galaxy centers (at 100pc or less) have been obtained with interferometers. One of the striking features of the central H<sub>2</sub>-component structure is the strong asymmetry observed, similar to what happens in the center of the Milky Way: the gas distribution is lopsided, and this appears to be the rule rather than the exception.
One of the best examples is the CO-rich M82 galaxy. Neininger et al (1998b) have recently obtained a 4” resolution <sup>13</sup>CO map with the IRAM interferometer. The map shows the same gross features as the <sup>12</sup>CO map made by Shen & Lo (1995), i.e. a compact central source and two offset maxima that could be the signature of an edge-on ring. However, the central peak has a low <sup>13</sup>CO/<sup>12</sup>CO ratio, possibly due to strong UV photodissociation (which selectively affects the rarer isotopic species).
There is a strong velocity gradient in the CO, which is also a general rule in spiral galaxies (cf Sofue et al. 1997). The dynamical centre coincides with the IR peak and is shifted 6” north-east of the compact <sup>13</sup>CO source, emphasizing the lopsidedness. The kinematics are also perturbed by star-formation shells, and a 130 pc-wide bubble of molecular gas is identified around the most luminous compact radio source in M 82.
Another example is the edge-on warped galaxy NGC 5907, mapped in CO at 3” resolution by Garcia-Burillo et al. (1997). Spiral structure can be resolved in the center, which means that the galaxy is not completely edge-on (or is strongly warped even in the molecular component). Non-circular motions are well explained by a bar rotating at a pattern speed of $`\mathrm{\Omega }_b`$ = 70 km/s/kpc (high enough to be that of a nuclear bar). The velocity gradient is high in the center, resulting from a massive and compact nuclear disk. This is interesting, since this galaxy would be classified as an early type according to its nuclear velocity gradient, but it has no bulge and is in fact classified as an Sc-Sd late type. The molecular distribution is significantly off-centered and lopsided in this galaxy as well.
## 6. Double bars and nuclear disks
Barred galaxies are in general conspicuous for their high concentrations of molecular gas in the center. They often possess nuclear disks, or large flattened gas concentrations in fast rotation, within the central 1kpc. This large molecular gas concentration can trigger starbursts, sometimes confined in nuclear rings (hot spots in H$`\alpha `$).
The molecular distribution can have several morphologies, according to the presence of zero, one or two inner Lindblad resonances (ILRs) in the center. When there are Lindblad resonances, the gas accumulates in a spectacular twin-peaks morphology (Kenney 1996), corresponding to the crossing of the x1 orbits parallel to the bar and the x2 orbits perpendicular to it, inside the ILR, generally materialised by a nuclear ring. This typical structure is nicely seen in NGC1530, mapped in CO by Reynaud & Downes (1997): the twin peaks are at the beginning of the characteristic thin dust lanes aligned with and leading the bar. In rare cases there is only a single peak, possibly due to the interaction with a companion (cf NGC5850, Leon et al 1999). When there is no ILR, in late-type galaxies for instance, or when the pattern speed is relatively high, there is no ring, and only a central concentration (cf NGC 7479, Laine et al 1998).
Bars within bars are a frequently observed phenomenon, easy to see on color-color plots or in NIR images of galaxies (to avoid dust extinction). The presence of an embedded bar has long been invoked as a mechanism to prolong the non-axisymmetry (and the resulting gravity torques) towards the center, to drive the gas inwards and fuel an AGN (Shlosman et al 1989). Simulations have described the numerical processes leading to the formation of double bars (Friedli & Martinet 1993, Combes 1994). In concentrating the mass towards the center, the first bar modifies the inner rotation curve, and the precessing rate ($`\mathrm{\Omega }-\kappa /2`$) of the $`m=2`$ elliptical orbits in the center increases to large values. This widens the region between the two ILRs, and mass accumulates on the perpendicular x2 orbits, which weakens the principal bar in this region. This strong differential $`\mathrm{\Omega }-\kappa /2`$ prevents the self-gravity from matching all precessing rates in the center, and decoupling occurs: two bars rotating at two different pattern speeds develop. Eventually, too large a mass accumulation in the center can destroy the bar (Hasan et al 1993). To probe this scenario in double-bar galaxies, it is important to know the pattern speeds of the two bars, and to derive the dynamical processes at play.
A prototype of double-bar galaxies is NGC4321, or M100 (cf Knapen & Beckman 1996). The study of its molecular cloud distribution has been done at high resolution towards its central parts, including the nuclear bar (Sakamoto et al. 1995, Garcia-Burillo et al 1998a). A small nuclear spiral structure has been detected inside the nuclear ring made of the star-forming hot spots. This morphology requires a model with a nuclear bar of very fast pattern speed ($`\mathrm{\Omega }`$ = 160 km/s/kpc, see fig 4, from Garcia-Burillo et al. 1998a).
Sometimes the gas in the center is also observed at high altitude above the plane (cf NGC 891, NGC 5907, or NGC 4013, Garcia-Burillo et al. 1999). This might be accounted for by processes associated with star formation rather than by purely dynamical mechanisms. The only exception could be gas on retrograde orbits.
Garcia-Burillo et al. (1998b) report for the first time the detection of a massive counterrotating molecular gas disk in the early-type spiral NGC3626. The CO emission is concentrated in a compact nuclear disk of average radius r $`\sim `$ 12” (1.2 kpc), rotating in the sense opposite to that of the stars, and in the same sense as the HII and HI gas (themselves counterrotating with respect to the stars). There is no evidence of a violent starburst in the center of the galaxy, which probably corresponds to the late stage of a merger.
## 7. Molecular content of cluster galaxies
Although many spiral galaxies in clusters are stripped of their HI gas, it has been established that they are not deficient in CO emission, and probably not in H<sub>2</sub> either (Kenney & Young 1989 for Virgo; Boselli et al. 1995, Casoli et al. 1998 for Coma). This can be understood since the HI is usually located in the outer parts of galaxies, where the gas is less bound and the tidal forces are larger; also, the HI gas is more diffuse and easier to displace by ram pressure.
In Hickson compact groups, the molecular gas is not deficient either, and is even enhanced in tidally interacting galaxies (Boselli et al. 1996, Leon et al. 1998, cf figure 5). In a particular Hickson group, Stephan’s Quintet, Xu & Tuffs (1998) have recently found evidence of a starburst going on outside the galaxies. From ISOCAM 15 $`\mu `$m, H$`\alpha `$ and NIR observations, they identified an outstandingly bright source about 25kpc away from the neighbouring galaxies, containing very young stars and corresponding to a star formation rate of about 0.7 solar masses per year. They propose that the starburst is triggered by the collision between a fast galaxy and the IGM; alternatively, this could be the formation of a tidal dwarf, through gravitational instabilities in a tidal tail (cf Duc & Mirabel 1998). Molecular gas (through CO emission) is difficult to detect so far from the nuclei (possibly because of metallicity gradients), and detections in tidal tails are rare, like that in the NGC 2782 (Arp 215) tail by Smith et al. (1998).
## 8. Ultra-luminous IRAS galaxies
A new survey of ultra-luminous infrared galaxies with a millimeter interferometer reveals that the molecular gas is confined in compact rotating nuclear disks or rings (Downes & Solomon 1998). The constraint that the gas mass be smaller than the dynamical mass imposes an excitation model in which the CO lines are only moderately optically thick ($`\tau =4-10`$) and subthermally excited, so that the CO-to-H<sub>2</sub> conversion ratio is about 5 times less than standard. In that case, the gas-to-dynamical mass fraction is 15%. The surface density of gas, however, is on average $`\mu _g`$/$`\mu _{tot}`$ = 30%. In some cases, the CO position-velocity diagrams clearly show a ring, with a central gap. In the particular case of Arp 220, there appear to be two bright sources embedded in a central nuclear disk; these are compact extreme starburst regions, more likely than the premerger nuclei, as was previously thought.
An often debated question is whether the huge far-infrared emission of the ultra-luminous galaxies is due to a starburst or to a monster (AGN). Frequently both are present simultaneously, since they are both the result of huge mass accumulation in the centers of galaxies. Using ISO data and diagnostic diagrams involving the ratio of high-to-low excitation mid-IR emission lines, together with the strength of the 7.7 $`\mu `$m “PAH” feature, Genzel et al. (1998) conclude that the far-infrared emission appears to be powered predominantly by starbursts: 70%-80% of the objects are predominantly powered by recently formed massive stars, and 20%-30% by a central AGN. Very high extinctions are measured towards these star-forming regions, supporting the high H<sub>2</sub> column densities derived from CO observations. In these objects, the active region is always sub-kpc in size.
## 9. Galaxies at high redshift
One of the most exciting results of these last years is the detection of galaxies at larger and larger redshifts, allowing one to tackle the evolution and history of star formation in the Universe. After the first discovery in CO lines of an object at $`z>2`$, the ultraluminous galaxy IRAS 10214+4724 (Brown & Vanden Bout 1992, Solomon et al 1992), there has been an extended search for more early starbursts, which resulted in the discovery of about 10 objects at high $`z`$ in CO lines (cf Table 1).
In fact the majority of these objects (if not all) are amplified by gravitational lenses, and this explains why they are detectable at all (see figure 6). The amplification is very helpful for detecting these remote objects, but the drawbacks are significant uncertainties in the amplification factors, and therefore in the total molecular content. The excitation of the gas is also uncertain, since the various CO line emissions may have different spatial extents and hence different amplifications.
One strategy to search for CO lines in high-$`z`$ galaxies has been to select objects already detected in the far-infrared or submm dust emission. Indeed, all objects in Table 1 were first detected in continuum; some (the SMM sources) were discovered in blank-field searches with the SCUBA bolometer on the JCMT (Hawaii) by Smail et al. (1997). Since the dust emission varies as $`\nu ^4`$ with frequency $`\nu `$ in the millimeter range (up to the maximum near 60$`\mu `$m), it becomes easier to detect galaxies at $`z`$ = 5 than at $`z`$ = 1 (cf Blain & Longair 1993). The millimeter domain is therefore a privileged one to follow the star-formation history as a function of redshift, and several surveys have been undertaken. Searches toward the Hubble Deep Field-North (Hughes et al 1998), and towards the Lockman hole and SSA13 (Barger et al 1998), have found a few sources, revealing an increase of starbursting galaxies with redshift. By extrapolation they correspond to a density of 800 sources per square degree above 3 mJy at 850 $`\mu `$m. This can already account for 50% of the cosmic infrared background (CIRB), estimated by Puget et al (1996) and Hauser et al (1998) from COBE data. Similar conclusions have been reached towards the CFRS fields by Eales et al. (1999).
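The strength of this negative K-correction can be illustrated with a short calculation (a sketch under simplified assumptions: a pure $`\nu ^4`$ dust spectrum at all redshifted frequencies, and an assumed flat cosmology; all numbers are illustrative):

```python
import numpy as np
from scipy.integrate import quad

H0, Om, OL, c = 70.0, 0.3, 0.7, 2.998e5          # assumed cosmology, km/s/Mpc

def lum_dist(z):                                 # luminosity distance, Mpc
    dc, _ = quad(lambda zp: c / (H0 * np.sqrt(Om * (1 + zp)**3 + OL)), 0.0, z)
    return (1.0 + z) * dc

def flux_850um(z):
    # observed flux ~ (1+z) * L_nu((1+z) nu_obs) / dL^2, with L_nu ~ nu^4
    return (1.0 + z)**5 / lum_dist(z)**2

for z in (0.5, 1.0, 2.0, 5.0):
    print(z, flux_850um(z) / flux_850um(1.0))    # flux relative to z = 1
```

The observed flux declines only slowly beyond $`z`$ = 1 and is in fact larger at $`z`$ = 5 than at $`z`$ = 1 under these assumptions.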
These first results show the potential of the millimeter domain, already with the present instruments. With the forthcoming next generation of mm telescopes, which will yield a factor 10-20 increase in sensitivity, it will be possible to detect not only huge starbursts but also more ordinary galaxies at high redshift (cf Combes et al 1999).
## References
Arnault P., Kunth D., Casoli F., Combes F. 1988 A&A 205, 41
Barger A.J., Cowie L.L., Sanders D.B. et al. 1998, Nature 394, 248
Barone L.T., Heithausen A., Fritz T., Klein U. 1999, in The Physics and Chemistry of the Interstellar Medium, Proceedings of the 3rd Cologne-Zermatt Symposium, Zermatt, September 22-25, 1998, ed. V. Ossenkopf
Barvainis R., Alloin D., Guilloteau S., Antonucci R. 1998, ApJ 492, L13
Barvainis R., Tacconi L., Antonucci R., Coleman P. 1994, Nature 371, 586
Blain A.W., Longair M.S. 1993, MNRAS 264, 509
Boselli A., Casoli F., Lequeux J. 1995 A&AS 110, 521
Boselli A., Mendes de Oliveira C., Balkowski C., Cayatte V., Casoli F. 1996 A&A 314, 738
Brouillet N., Kaufman M., Combes F., Baudry A., Basch F. 1998 A&A 333, 92
Brown R., Vanden Bout P. 1992, ApJ 397, L19
Casoli F., Dickey J., Kazes I. et al. 1996, A&AS 116, 193
Casoli F., Sauty S., Gerin M. et al. 1998, A&A 331, 451
Combes F., Maoli R., Omont M. 1999, A&A in press
Combes F. 1994, in Mass-transfer induced activity in galaxies, ed. I. Shlosman, Cambridge Univ. Press, p. 170
de Blok W.J.G., van der Hulst J.M. 1998, A&A 336, 49
Downes D., Neri R., Wiklind T., Wilner D.J., Shaver P. 1998, ApJ preprint (astro-ph/9810111)
Downes D., Solomon P.M., Radford S.J.E. 1995, ApJ 453, L65
Downes D., Solomon P.M. 1998 ApJ 507, 615
Duc P.A., Mirabel I.F. 1998 A&A 333, 813
Eales S.A., Lilly S.J., Gear W.K., Dunne L., Bond J.R., Hammer F., LeFevre O., Crampton D. 1999, ApJL in press (astro-ph/9808040)
Frayer D.T., Ivison R.J., Scoville N.Z., et al., 1998, ApJ 506, L7
Frayer D.T., Ivison R.J., Scoville N.Z., et al., 1999, ApJ preprint (astro-ph/9901311)
Friedli D., Martinet L. 1993 A&A, 277, 27
Garcia-Burillo S., Combes F., Neri R. 1999, A&A in press (astro-ph/9901068)
Garcia-Burillo S., Sempere M.J., Combes F., Neri R. 1998a A&A, 333, 864
Garcia-Burillo S., Sempere M.J, Bettoni D. 1998b ApJ 502, 235
Garcia-Burillo S., Guelin M., Neininger N. 1997 A&A 319, 450
Genzel R., Lutz D., Sturm E. et al. 1998 ApJ 498, 579
Gondhalekar P.M., Johansson L.E.B., Brosch N., Glass I.S., Brinks E. 1998 A&A 335, 152
Guelin M., Zylka R., Mezger P.G. et al. 1993, A&A 279, L37
Guilloteau S., Omont A., McMahon R.G., Cox P., PetitJean P. 1997, A&A 328, L1
Haas M., Lemke D., Stickel M. et al. 1998 A&A 338, L33
Hasan H, Pfenniger D., Norman C. 1993 ApJ 409, 91
Hauser M.G., Arendt R.G., Kelsall T., et al. 1998, ApJ 508, 25
Hughes D.H., Serjeant S., Dunlop J. et al. 1998, Nature 394, 241
Kenney J.D.P. 1996, in Barred galaxies, Astronomical Society of the Pacific Conference Series, Volume 91; edited by R. Buta, D. A. Crocker and B. G. Elmegreen, p.150
Kenney J.D.P., Young J.S. 1989 ApJ 344, 171
Knapen J.H., Beckman J.E. 1996, MNRAS 283, 251
Laine S., Shlosman I., Heller C.H. 1998, MNRAS 297, 1052
Leon S., Combes F., Friedli D. 1999, in ”Galaxy Dynamics”, Proceedings of Rutgers Conference, to appear in ASP Conf Series, ed. D. R. Merritt, M. Valuri, J.A. Sellwood
Leon S., Combes F., Menon T.K. 1998 A&A 330, 37
Maloney P., Black J.H. 1988, ApJ 325, 389
McGaugh S., de Blok W.J.G 1997 ApJ 481, 689
Neininger N., Guelin M., Ungerechts H. et al. 1998a, Nature 395, 871
Neininger N., Guelin M., Klein U. et al. 1998b, A&A 339, 737
Neininger N., Guelin M., Garcia-Burillo S. et al. 1996, A&A 310, 725
Ohta K., Yamada T., Nakanishi K., Kohno K., Akiyama M., Kawabe R. 1996, Nature 382, 426
Omont A., Petitjean P., Guilloteau S., McMahon R.G., Solomon P.M. 1996, Nature 382, 428
Papadopoulos P.P., Seaquist E.R. 1999, ApJ preprint (astro-ph/9901346)
Puget J.L., Abergel A., Bernard J-P. et al. 1996, A&A 308, L5
Reynaud D., Downes D. 1997, A&A, 319, 737
Rubio M., Lequeux J., Boulanger F. 1993, A&A 271, 9
Ryder S.D., Knapen J.H. 1999, MNRAS 302, L7
Sage L.J., Salzer J.J., Loose H.H., Henkel C. 1992 A&A 265, 19
Sakamoto K., Okumura S., Minezaki T. et al. 1995 AJ 110, 2075
Scoville N.Z., Padin S., Sanders D.B. et al. 1993, ApJ 415, L75
Scoville N.Z., Yun M.S., Windhorst R.A., Keel W.C., Armus L. 1997, ApJ 485, L21
Shen J.J., Lo K.Y. 1995 ApJ, 445, L99
Shlosman I., Frank J., Begelman M. 1989, Nature, 338, 45
Smail I., Ivison R.J., Blain A.W. 1997, ApJL 490, L5
Smith B.J., Struck C., Kenney J.D.P., Jogee S. 1998, AJ preprint (astro-ph/9811239)
Sofue Y., Tutui Y., Honma M, Tomita A. 1997 AJ 114, 2428
Solomon P.M., Downes D., Radford S.J.E., Barrett J.W. 1997, ApJ 478, 144
Solomon P.M., Downes D., Radford S.J.E. 1992, Nature 356, 318
Taylor C.L., Kobulnicky H.A., Skillman E.D. 1998, AJ 116, 2746
van der Hulst J.M., Skillman E.D., Smith T.R. et al. 1993 AJ, 106, 548
van Zee L., Haynes M.P., Salzer J.J. 1997 AJ 114, 2497
Vila-Costas M.B., Edmunds M.G. 1992, MNRAS 259, 121
Wilson C.D. 1995, ApJ 448, L97
Wink J.E., Guilloteau S., Wilson T.L. 1997, A&A 322, 427
Xu C., Tuffs R. 1998, ApJ preprint (astro-ph/9808344)
Young J., Knezek M. 1989, ApJ 347, L55
Young J., Scoville N.Z. 1991, ARA&A 29, 581
Zaritsky D., Lorrimer S.J. 1993, in The Evolution of Galaxies and Their Environment, Proceedings, NASA Ames Research Center, p. 82-83
# How to account for virtual arbitrage in the standard derivative pricing
## Appendix
In this appendix we give an informal derivation of the main equation (3) using the formalism of functional integrals.
We start with the Black-Scholes equation, with the rate of return on the riskless portfolio $`V-S\frac{\partial V}{\partial S}`$ given by $`r_0+x`$:
$$\frac{\partial V}{\partial t}+\frac{\sigma ^2S^2}{2}\frac{\partial ^2V}{\partial S^2}+(r_0+x)S\frac{\partial V}{\partial S}-(r_0+x)V=0,$$
$$V(S,t)|_{t=T}=\delta (S-S^{\prime }).$$
To simplify the calculation we change variables as $`\tau =T-t`$ and $`y=\mathrm{ln}S`$, which casts the previous equations in the form:
$$\frac{\partial V}{\partial \tau }=\frac{\sigma ^2}{2}\frac{\partial ^2V}{\partial y^2}+\left(r_0+x-\frac{\sigma ^2}{2}\right)\frac{\partial V}{\partial y}-(r_0+x)V,$$
$$V(y,\tau )|_{\tau =0}=\frac{1}{S^{\prime }}\delta (y-y^{\prime }).$$
The solution of the problem can be expressed as the following functional integral :
$$V(y,\tau )=\frac{1}{S^{\prime }}\int Dq\,D\frac{p}{2\pi }\,e^{\int _0^\tau \left(ip\dot{q}-\frac{\sigma ^2}{2}p^2-(r_0+x(\tau ))+i\left(r_0+x(\tau )-\frac{\sigma ^2}{2}\right)p\right)d\tau }$$
with the boundary conditions $`q(0)=y^{\prime }`$, $`q(\tau )=y`$. The functional integral form of the solution is extremely convenient for the purpose of averaging over trajectories $`x(\tau )`$, since it presents the dependence in explicit form. We, however, will use another trick. Let us first of all average the previous expression over realisations of the stochastic process $`x(\tau )`$ with fixed ends, i.e. when $`x(\tau )=x`$ and $`x(0)=0`$ (since there is no arbitrage at the expiration date of the contract and later). The probabilistic weight of a particular trajectory $`x(\tau )`$ for the Ornstein-Uhlenbeck process (1) is given by the expression
$$\int D\frac{\xi }{2\pi }\,e^{\int _0^\tau \left(i\xi \dot{x}-\frac{\mathrm{\Sigma }^2}{2}\xi ^2+i\lambda \xi x\right)d\tau }$$
which leads to the following result for the conditional average value of the contract
$$\overline{V}(x,y,\tau )=\frac{1}{S^{\prime }}\int Dq\,D\frac{p}{2\pi }\int Dx\,D\frac{\xi }{2\pi }\,e^{\int _0^\tau \left(i\xi \dot{x}+ip\dot{q}-\frac{\sigma ^2}{2}p^2-(r_0+x(\tau ))+i\left(r_0+x(\tau )-\frac{\sigma ^2}{2}\right)p-\frac{\mathrm{\Sigma }^2}{2}\xi ^2+i\lambda \xi x\right)d\tau }$$
$$q(0)=y^{\prime },\quad q(\tau )=y,\quad x(0)=0,\quad x(\tau )=x.$$
Now, instead of evaluating these integrals, we find a partial differential equation for $`\overline{V}(x,y,\tau )`$ :
$$\frac{\partial \overline{V}}{\partial \tau }=\frac{\sigma ^2}{2}\frac{\partial ^2\overline{V}}{\partial y^2}+\left(r_0+x-\frac{\sigma ^2}{2}\right)\frac{\partial \overline{V}}{\partial y}-(r_0+x)\overline{V}+\frac{\mathrm{\Sigma }^2}{2}\frac{\partial ^2\overline{V}}{\partial x^2}+\lambda \frac{\partial (x\overline{V})}{\partial x},$$
with the initial conditions
$$\overline{V}(x,y,\tau )|_{\tau =0}=\frac{1}{S^{\prime }}\delta (y-y^{\prime })\delta (x).$$
Returning to the initial variables $`t`$ and $`S`$, we obtain the problem (3):
$$\frac{\partial \overline{V}}{\partial t}+\frac{\sigma ^2S^2}{2}\frac{\partial ^2\overline{V}}{\partial S^2}+(r_0+x)S\frac{\partial \overline{V}(x,S,t)}{\partial S}-(r_0+x)\overline{V}+\frac{\mathrm{\Sigma }^2}{2}\frac{\partial ^2\overline{V}}{\partial x^2}+\lambda \frac{\partial (x\overline{V})}{\partial x}=0,$$
$$\overline{V}(x,S,t)|_{t=T}=\delta (x)\delta (S-S^{\prime }).$$
Integration of the solution over $`x`$ (to get the unconditional average) and the convolution with the final payoff complete the consideration.
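The construction above can also be checked by brute-force Monte Carlo: for each realisation of the Ornstein-Uhlenbeck bridge $`x(\tau )`$ (pinned at zero at expiry), the Black-Scholes price with the deterministic rate path $`r_0+x(\tau )`$ depends on the path only through the time-averaged rate, so each trajectory can be priced in closed form and the results averaged. A minimal sketch follows; all numerical parameter values ($`S_0`$, $`K`$, $`\lambda `$, $`\mathrm{\Sigma }`$, etc.) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Average the Black-Scholes price over Ornstein-Uhlenbeck bridges x(t),
# pinned at x(0) = x0 (current arbitrage return) and x(T) = 0 (expiry).
S0, K, T, r0, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
lam, Sig, x0 = 2.0, 0.05, 0.01          # OU parameters of Eq. (1), assumed
rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 100
dt = T / n_steps

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

x = np.full(n_paths, x0)
integral = np.zeros(n_paths)            # trapezoidal estimate of int x dt
for i in range(n_steps):
    integral += 0.5 * x * dt
    tau = T - (i + 1) * dt              # time remaining after this step
    a, v1 = np.exp(-lam * dt), Sig**2 * (1 - np.exp(-2 * lam * dt)) / (2 * lam)
    b, v2 = np.exp(-lam * tau), Sig**2 * (1 - np.exp(-2 * lam * tau)) / (2 * lam)
    if v2 > 0:                          # Gaussian step conditioned on x(T) = 0
        var = 1.0 / (1.0 / v1 + b * b / v2)
        mean = var * (a * x / v1)
    else:
        var, mean = 0.0, np.zeros(n_paths)
    x = mean + np.sqrt(var) * rng.normal(size=n_paths)
    integral += 0.5 * x * dt

rbar = r0 + integral / T                # effective rate per trajectory
print("virtual-arbitrage price:", bs_call(S0, K, T, rbar, sigma).mean())
print("pure Black-Scholes     :", bs_call(S0, K, T, r0, sigma))
```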
In a similar way various generalizations of the equations can be derived. It is clear that the functional integral is a very convenient tool for this kind of manipulation. We would like to note, however, that the functional integral formalism is full of subtleties which we have not emphasized here. It serves for fast derivations, not for proving the results. The proofs can be obtained in the routine framework of stochastic calculus.
## 1 Introduction
The weak phase $`\gamma =`$ Arg$`(V_{ub}^{\ast })`$ is presently the least well known quantity among the four parameters (three angles and a phase) of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Its determination, which is regarded to be more difficult than that of the other two angles of the CKM unitarity triangle , can provide a crucial test of the CKM mechanism for CP violation in the Standard Model. Several methods have been proposed to determine $`\gamma `$ from hadronic two-body $`B`$ decays. The methods which seem to be experimentally most feasible in the near future are based on applications of SU(3) flavor symmetry in $`B`$ decays into two light charmless pseudoscalars . These methods involve certain theoretical uncertainties, which are expected to be reduced when more data become available and when better theoretical understanding of hadronic $`B`$ decays is achieved.
In a first paper in a series, Gronau, London and Rosner (GLR) proposed to extract $`\gamma `$ by combining decay rate measurements of $`B^+\to K\pi `$ and $`B^+\to \pi \pi `$ with their charge-conjugates. SU(3) breaking, occurring in a relation between $`B\to \pi \pi `$ $`I=2`$ and $`B\to K\pi `$ $`I=3/2`$ amplitudes, was introduced through a factor $`f_K/f_\pi `$ when assuming that these amplitudes factorize. In its original version, suggested before the observation of the heavy top quark, the method of Ref. neglected electroweak penguin (EWP) contributions and certain rescattering effects. Subsequently, model calculations showed that due to the heavy top quark the neglected EWP terms were significant ; and recently these terms were related by SU(3) to the $`B\to K\pi `$ $`I=3/2`$ current-current amplitudes . This led to a modification of the GLR method, to be referred to as the GLRN method, which in the limit of flavor SU(3) symmetry includes EWP effects in a model-independent way. Corrections from SU(3) breaking, affecting the relation between EWP terms and current-current terms, were argued to be small .
Assuming that the above SU(3) breaking effects are indeed under control, there is still an uncertainty due to rescattering effects. To determine $`\gamma `$ from the above rates, one takes the $`B^+\to K^0\pi ^+`$ amplitude to be pure penguin, involving no term with weak phase $`\gamma `$. This assumption, which neglects quark annihilation and rescattering contributions from charmless intermediate states, was challenged by a large number of authors . Several authors proposed ways of controlling rescattering effects in $`B^\pm \to K^0\pi ^\pm `$ by relating them through SU(3) to the much enhanced effects in $`B^\pm \to K^\pm \overline{K}^0`$ (see also ). The charge-averaged rate of the latter processes can be used to set an upper limit on the rescattering amplitude in $`B^\pm \to K^0\pi ^\pm `$. While present limits are at the level of $`20-30\%`$ of the dominant penguin amplitude (depending somewhat on the value of $`\gamma `$), they are expected to be improved in the future. The smaller the rescattering amplitude is, the more precisely can $`\gamma `$ be determined from the GLRN method. A recent demonstration , based on a few possible rate measurements, seems to show that if the rescattering amplitude is an order of magnitude smaller than the dominant penguin amplitude in $`B^+\to K^0\pi ^+`$, the uncertainty in $`\gamma `$ is only about 5 degrees.
In the present Letter we reexamine in detail the uncertainty in $`\gamma `$ due to rescattering effects. Using a geometrical interpretation for the extraction of $`\gamma `$, we perform in Section 2 numerical simulations which cover the entire parameter space of the two relevant strong phases, the rescattering phase $`\varphi _A`$ and the relative phase $`\varphi `$ between $`I=3/2`$ current-current and penguin amplitudes. We find that, contrary to the demonstration made in , a 10$`\%`$ rescattering amplitude leads to an uncertainty in $`\gamma `$ as large as about $`14^{\circ }`$ around $`\varphi \approx 90^{\circ }`$. For certain singular cases no solution can be found for $`\gamma `$. We show that $`\varphi `$ can be determined rather precisely from the $`B^\pm \to K\pi `$ rate measurements , which could substantially reduce the error in $`\gamma `$ if values far from $`\varphi =90^{\circ }`$ were found.
It has been suggested to go one step beyond setting limits on rescattering contributions in $`A(B^\pm \to K^0\pi ^\pm )`$ and to eliminate them completely by using the charge-averaged rate measurement of $`B^\pm \to K^\pm K^0`$. Applying our geometrical formulation, we will show in Section 3 that the resulting determination of $`\gamma `$ is unstable under SU(3) breaking, which can introduce very large uncertainties in $`\gamma `$ .
Finally, in order to overcome these uncertainties, we have recently proposed to use, in addition to $`B^\pm \to K^\pm \overline{K}^0`$, also the processes $`B^\pm \to \pi ^\pm \eta _8`$ . Although this may be considered an academic exercise, mainly due to complicating $`\eta -\eta ^{\prime }`$ mixing effects, we will examine in Section 4 the precision of this method. We will show that, when neglecting $`\eta -\eta ^{\prime }`$ mixing, the theoretical error in $`\gamma `$ is reduced to a few degrees. We conclude in Section 5. An algebraic condition, used in Section 3 to eliminate rescattering effects by $`B^\pm \to K^\pm K^0`$ decays, is derived in an Appendix.
## 2 Rescattering uncertainty in $`\gamma `$ from $`B^\pm \to K\pi `$
The amplitudes for charged $`B`$ decays can be parameterized in terms of graphical contributions representing SU(3) amplitudes (we use the notations of ):
$`A(B^+\to K^0\pi ^+)`$ $`=`$ $`|\lambda _u^{(s)}|e^{i\gamma }(A+P_{uc})+\lambda _t^{(s)}(P_{ct}+P_3^{EW}),`$ (1)
$`\sqrt{2}A(B^+\to K^+\pi ^0)`$ $`=`$ $`|\lambda _u^{(s)}|e^{i\gamma }(-T-C-A-P_{uc})+\lambda _t^{(s)}(-P_{ct}+\sqrt{2}P_4^{EW}),`$ (2)
$`\sqrt{2}A(B^+\to \pi ^+\pi ^0)`$ $`=`$ $`-|\lambda _u^{(d)}|e^{i\gamma }(T+C),`$ (3)
where $`\lambda _{q^{\prime }}^{(q)}=V_{q^{\prime }b}^{\ast }V_{q^{\prime }q}`$ are the corresponding CKM factors. These amplitudes satisfy a triangle relation
$$\sqrt{2}A(B^+\to K^+\pi ^0)+A(B^+\to K^0\pi ^+)=\sqrt{2}\stackrel{~}{r}_u|A(B^+\to \pi ^+\pi ^0)|e^{i(\gamma +\xi )}\left(1-\delta _{EW}e^{-i\gamma }\right).$$
(4)
Here we denote $`\stackrel{~}{r}_u=(f_K/f_\pi )\lambda /(1-\lambda ^2/2)\approx 0.28`$ and $`\delta _{EW}=(3/2)|\lambda _t^{(s)}/\lambda _u^{(s)}|\kappa \approx 0.66`$ ($`\kappa \equiv -(c_9+c_{10})/(c_1+c_2)=8.8\times 10^{-3}`$), while $`\xi `$ is an unknown strong phase. The second term in the brackets represents the sum of EWP contributions to the amplitudes on the left-hand side . The factor $`f_K/f_\pi `$ accounts for factorizable SU(3) breaking effects.
The relation (4), together with its charge-conjugate counterpart, written for $`\stackrel{~}{A}(\overline{B}\to \overline{f})\equiv e^{2i\gamma }A(\overline{B}\to \overline{f})`$, are represented graphically by the two triangles OAA$`^{\prime }`$ and OBB$`^{\prime }`$ in Fig. 1. Here all amplitudes are divided by a common factor $`𝒜\equiv \sqrt{2}\stackrel{~}{r}_u|A(B^+\to \pi ^+\pi ^0)|e^{i(\gamma +\xi )}`$, such that the horizontal line $`OI`$ is of unit length and the radius of the circle is $`\delta _{EW}`$. Four of the sides of the two triangles are given by
$`x_{0+}`$ $`=`$ $`\frac{1}{\sqrt{2}\stackrel{~}{r}_u}\frac{|A(B^+\to K^0\pi ^+)|}{|A(B^+\to \pi ^+\pi ^0)|},\qquad x_{+0}=\frac{1}{\stackrel{~}{r}_u}\frac{|A(B^+\to K^+\pi ^0)|}{|A(B^+\to \pi ^+\pi ^0)|},`$ (5)
$`\stackrel{~}{x}_{0-}`$ $`=`$ $`\frac{1}{\sqrt{2}\stackrel{~}{r}_u}\frac{|A(B^{-}\to \overline{K}^0\pi ^{-})|}{|A(B^+\to \pi ^+\pi ^0)|},\qquad \stackrel{~}{x}_{-0}=\frac{1}{\stackrel{~}{r}_u}\frac{|A(B^{-}\to K^{-}\pi ^0)|}{|A(B^+\to \pi ^+\pi ^0)|}.`$
The relative orientation of the two triangles depends on $`\gamma `$ and is not determined from measurements of the sides alone. Assuming that the rescattering amplitude with weak phase $`\gamma `$ in $`B^+\to K^0\pi ^+`$ can be neglected, one takes the amplitude (1) to be given approximately by the second (penguin) term , which implies $`OB=e^{2i\gamma }OA`$ in Fig. 1. In this approximation, the weak phase $`\gamma `$ is determined by requiring that the angle ($`2\gamma `$) between $`OA`$ and $`OB`$ is equal to the angle ($`2\gamma `$) at the center of the circle .
In order to study the precision of determining the phase $`\gamma `$ in this way, as a function of the rescattering contribution which is being neglected, let us rewrite (1) in the form
$`A(B^+\to K^0\pi ^+)=-V_{cb}\left(1-\frac{\lambda ^2}{2}\right)p\,(1+ϵ_Ae^{i\varphi _A}e^{i\gamma }),\qquad p\equiv P_{ct}+P_3^{EW}`$ (6)
where $`ϵ_A`$ measures the magnitude of rescattering effects. In Fig. 1 the magnitude of these effects has a simple geometrical interpretation in terms of the distance of the point $`Y`$ from the origin $`O`$, $`ϵ_A=|YO|/|YA|`$, where $`YO`$ and $`YA`$ are the two components of the $`B^+\to K^0\pi ^+`$ amplitude carrying weak phases $`\gamma `$ and zero, respectively
$`YO=|\lambda _u^{(s)}|e^{i\gamma }[(A+P_{uc})-p]/𝒜,\qquad YA=-V_{cb}\left(1-\frac{\lambda ^2}{2}\right)p/𝒜.`$ (7)
The rescattering phase $`\varphi _A`$ is given by $`\varphi _A=`$Arg$`(YO/YZ)`$, where $`Z`$ is any point on the line bisecting the angle $`AYB`$. A second strong phase which affects the determination of $`\gamma `$ is $`\varphi `$, the relative strong phase between the penguin amplitude $`p`$ and the $`I=3/2`$ current-current amplitude $`T+C`$. In Fig. 1 this phase is given by $`\varphi =`$Arg$`(YZ/OI)`$.
Let us now investigate the dependence of the error in $`\gamma `$ when neglecting rescattering on the relevant hadronic parameters. Our procedure will be as follows. First we generate a set of amplitudes based on the geometry of Fig. 1 and on given values of the parameters $`\gamma ,ϵ,ϵ_A,\varphi _A`$ and $`\varphi `$; then we solve the equation $`\mathrm{cos}2\gamma =\mathrm{cos}(BOA)`$ and compare the output value of $`\gamma `$ with its input value. Here $`ϵ`$ is given in terms of the ratio of charge-averaged branching ratios
$$ϵ\equiv \frac{\lambda }{1-\lambda ^2/2}\frac{f_K}{f_\pi }\sqrt{\frac{2B(B^\pm \to \pi ^\pm \pi ^0)}{B(B^\pm \to K^0\pi ^\pm )}},$$
(8)
The geometrical construction in Fig. 1 is described by
$`YA=\frac{e^{i(\varphi -\gamma )}}{ϵ\sqrt{1+2ϵ_A\mathrm{cos}\varphi _A\mathrm{cos}\gamma +ϵ_A^2}}OI,\qquad OY=ϵ_Ae^{i(\varphi _A+\gamma )}YA,`$ (9)
implying a rate asymmetry between $`B^+\to K^0\pi ^+`$ and $`B^{-}\to \overline{K}^0\pi ^{-}`$.
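This generation-and-reconstruction procedure is straightforward to script; the following is a minimal sketch (illustrative only: the discrete ambiguities are resolved by a brute-force scan over all circle-intersection choices, and the input values are those of the illustration below):

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles in the complex plane."""
    d = abs(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    base = c1 + a * (c2 - c1) / d
    off = 1j * h * (c2 - c1) / d
    return [base + off, base - off]

# generate "data" from input parameters, following Eqs. (6) and (9)
g_in, eps, eps_A = np.radians(76.0), 0.24, 0.10
phi, phi_A, d_EW = np.radians(90.0), np.radians(0.0), 0.66

YA = np.exp(1j*(phi - g_in)) / (eps*np.sqrt(1 + 2*eps_A*np.cos(phi_A)*np.cos(g_in) + eps_A**2))
OY = eps_A * np.exp(1j * (phi_A + g_in)) * YA
OA = OY + YA                          # scaled A(B+ -> K0 pi+)
OB = OY + np.exp(2j * g_in) * YA      # scaled e^{2 i gamma} A(B- -> K0bar pi-)
Ap, Bp = 1 - d_EW*np.exp(-1j*g_in), 1 - d_EW*np.exp(1j*g_in)
x0p, xp0 = abs(OA), abs(Ap - OA)      # the four observable sides of Eq. (5)
x0m, xm0 = abs(OB), abs(Bp - OB)

def mismatch(g):
    """Deviation from cos(angle BOA) = cos(2g), reconstructing A, B with eps_A = 0."""
    As = circle_intersections(0, x0p, 1 - d_EW*np.exp(-1j*g), xp0)
    Bs = circle_intersections(0, x0m, 1 - d_EW*np.exp(+1j*g), xm0)
    combos = [abs(np.cos(np.angle(B / A)) - np.cos(2*g)) for A in As for B in Bs]
    return min(combos) if combos else np.inf

grid = np.radians(np.linspace(40.0, 110.0, 2801))
best = grid[np.argmin([mismatch(g) for g in grid])]
print("output gamma: %.1f deg  (input: 76.0 deg)" % np.degrees(best))
```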
For illustration, we take $`\gamma =76^{\circ }`$, $`ϵ=0.24`$ , $`ϵ_A=0.1`$ (which is a reasonable guess ), and we vary $`\varphi `$ and $`\varphi _A`$ in the range $`0^{\circ }\le \varphi \le 180^{\circ }`$, $`-90^{\circ }\le \varphi _A\le 270^{\circ }`$. The results of a search for solutions in the interval $`65^{\circ }\le \gamma \le 90^{\circ }`$ are presented in Fig. 2, which displays a twofold ambiguity. Fig. 2(a) shows the solution as a function of $`\varphi _A`$ for two values of $`\varphi `$, $`\varphi =60^{\circ }`$ and $`\varphi =90^{\circ }`$. Whereas for $`\varphi _A=\pm 90^{\circ }`$ the solution is very close to the input value, the deviation becomes maximal for $`\varphi _A=0^{\circ },180^{\circ }`$. This agrees with the geometry of Fig. 1, in which the largest rescattering effects are expected when $`YO`$ is parallel or anti-parallel to the line bisecting the angle $`BYA`$.
In a second plot, Fig. 2(b), we fix $`\varphi _A=0^{\circ }`$ and vary $`\varphi `$ over its entire range, which illustrates the maximal rescattering effect. We find two branches of the solution for $`\gamma `$, both of which deviate strongly from the input value $`\gamma =76^{\circ }`$ for values of $`\varphi `$ around $`90^{\circ }`$. At $`\varphi =90^{\circ }`$ there is no solution for $`ϵ_A=0.1`$ in the considered interval. We checked that the solution is restored and approaches the input value as the magnitude of $`ϵ_A`$ decreases to zero, as it should. Thus, the uncertainty in $`\gamma `$, seen both in Fig. 2(a) and Fig. 2(b) at $`\varphi _A=0^{\circ }`$ and around $`\varphi =90^{\circ }`$, is about $`14^{\circ }`$. It can be even worse in the singular cases where no solution for $`\gamma `$ can be found.
A variant of this method for determining $`\gamma `$, proposed recently in , was formulated in terms of two quantities $`R_{\ast }`$ and $`\stackrel{~}{A}`$, defined by
$`R_{\ast }\equiv \frac{B(B^\pm \to K^0\pi ^\pm )}{2B(B^\pm \to K^\pm \pi ^0)},`$ (10)
$`\stackrel{~}{A}\equiv \frac{B(B^+\to K^+\pi ^0)-B(B^{}\to K^{}\pi ^0)}{B(B^\pm \to K^0\pi ^\pm )}-\frac{B(B^+\to K^0\pi ^+)-B(B^{}\to \overline{K}^0\pi ^{})}{2B(B^\pm \to K^0\pi ^\pm )}.`$
These quantities do not contain $`𝒪(ϵ_A)`$ terms; their dependence on the rescattering parameter $`ϵ_A`$ appears only at order $`𝒪(ϵϵ_A)`$. Therefore, it was argued in , the determination of $`\gamma `$, by setting $`ϵ_A=0`$ in the expressions for $`R_{\ast }`$ and $`\stackrel{~}{A}`$, is insensitive to rescattering effects. This procedure gives two equations for $`\gamma `$ and $`\varphi `$, which can be solved simultaneously from $`R_{\ast }`$ and $`\stackrel{~}{A}`$. Using two pairs of input values for ($`R_{\ast },\stackrel{~}{A}`$) (corresponding to a restricted range for $`\varphi _A`$ and $`\varphi `$) seemed to indicate that the error in $`\gamma `$ for $`ϵ_A=0.08`$ is only about $`5^{\circ }`$. (The parameters used in are related to ours by $`\varphi =\varphi `$, $`\eta =\varphi _A+\pi `$, $`\overline{ϵ}_{3/2}=ϵ`$ and $`ϵ_a=ϵ_A`$.)
In Fig. 3 we show the results of such an analysis carried out over the entire parameter space of $`\varphi _A`$ and $`\varphi `$. Whereas the angle $`\varphi `$ can be recovered with small errors, the results for $`\gamma `$ show the same large rescattering effects for values of $`\varphi `$ around $`90^{\circ }`$ as in Fig. 2. (A slight improvement is the absence of a discrete ambiguity in the value of $`\gamma `$.) These results show that the large deviation of $`\gamma `$ from its physical value for $`\varphi =90^{\circ }`$ is a general phenomenon, common to all variants of this method. Some information about the size of the expected error can be obtained by first determining $`\varphi `$. Values not too close to $`90^{\circ }`$ would be an indication of a small error.
## 3 Eliminating rescattering by $`B^\pm \to K^\pm K^0`$
The amplitude for $`B^+\to K^+\overline{K}^0`$ is obtained from $`A(B^+\to K^0\pi ^+)`$ in (1) by a $`U`$-spin rotation
$`A(B^+\to K^+\overline{K}^0)`$ $`=`$ $`|\lambda _u^{(d)}|e^{i\gamma }(A+P_{uc})+|\lambda _t^{(d)}|e^{-i\beta }(P_{ct}+P_3^{EW}).`$ (11)
In the limit of SU(3) symmetry the amplitudes in (11) are exactly the same as those appearing in (1). In Fig. 4, $`A(B^+\to K^+\overline{K}^0)`$, scaled by the factor $`\lambda /(1-\lambda ^2/2)`$ (and divided by $`𝒜`$ as in Fig. 1), is given by the line $`OC`$, and its charge-conjugate is given by $`OD`$. We have shown in that knowledge of these two amplitudes allows one to completely eliminate the rescattering contribution $`A+P_{uc}`$ from the determination of $`\gamma `$. This is achieved by effectively replacing in the GLRN method the origin $`O`$ by the intersection $`Y`$ of the lines $`AC`$ and $`BD`$. $`\gamma `$ is determined by requiring that the angle ($`2\gamma `$) between $`YA`$ and $`YB`$ is equal to the angle ($`2\gamma `$) at the center of the circle.
The amplitude (11) can be decomposed into two terms carrying definite weak phases, in a form very similar to (6):
$`{\displaystyle \frac{\lambda }{1\lambda ^2/2}}A(B^+K^+\overline{K}^0)=V_{cb}\left(1{\displaystyle \frac{\lambda ^2}{2}}\right)p\left({\displaystyle \frac{\lambda ^2}{(1\lambda ^2/2)^2}}+ϵ_Ae^{i\varphi _A}e^{i\gamma }\right),`$ (12)
The ratio $`|CY|/|AY|=\lambda ^2/(1-\lambda ^2/2)^2`$ implies that the triangle $`AYB`$ is about 25 times larger than the triangle $`CYD`$. This will result in a large uncertainty in $`\gamma `$ even when the equality between the corresponding terms in the $`B^+\to K^0\pi ^+`$ and $`B^+\to K^+\overline{K}^0`$ amplitudes involves relatively small SU(3) violation.
The geometrical construction by which rescattering amplitudes can be completely eliminated in the SU(3) limit consists of three steps. (See Fig. 4. For an alternative suggestion, see .)
a) Determine the position of the point $`Y`$ as a function of the variable angle $`2\gamma `$ and the decay rates of $`B^\pm \to K\pi `$ and $`B^+\to \pi ^+\pi ^0`$. The point $`Y`$ is chosen on the mid-perpendicular of $`AB`$ such that the equality of the angles marked $`2\gamma `$ is preserved for any value of $`\gamma `$ (a convenient closed form is given after step c)).
b) Draw two circles of radii $`\lambda /(1-\lambda ^2/2)|A(B^\pm \to K^0K^\pm )|`$ centered at the origin $`O`$ (dashed-dotted circles in Fig. 4). The intersections of the lines $`AY`$ and $`BY`$ with these circles determine $`C`$ and $`D`$ respectively (up to a two-fold ambiguity), again as functions of $`\gamma `$.
c) The physical value of $`\gamma `$ is determined by the requirement $`|AC|=|BD|`$. This condition on $`\gamma `$ can be formulated in algebraic form, showing that only the charge-averaged rate of $`B^\pm \to K^\pm K^0`$ is needed. The condition is given by Eq. (15) in the Appendix.
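For concreteness, the scan over $`\gamma `$ implied by steps a)–c) can be sketched numerically as follows. The input apexes $`A`$, $`B`$ and radii $`R_1`$, $`R_2`$ (all in the normalization of Fig. 4), as well as the chosen branch of the two-fold ambiguity, are illustrative assumptions and not part of the original analysis.

```python
import numpy as np

def gamma_candidates(A, B, R1, R2, grid=np.linspace(0.05, 3.09, 3000)):
    """Scan the variable angle gamma for which |AC| = |BD| (step c).
    A, B: triangle apexes (2-vectors); R1, R2: radii of the dashed-dotted
    circles of Fig. 4.  All inputs are illustrative placeholders."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    M, d = 0.5 * (A + B), B - A
    u = np.array([-d[1], d[0]]) / np.hypot(*d)   # mid-perpendicular of AB
    half = 0.5 * np.hypot(*d)

    def cut(P, Y, R):
        # intersection of the line through P and Y with the circle |X| = R
        a = (P - Y) @ (P - Y)
        b = 2.0 * Y @ (P - Y)
        c = Y @ Y - R * R
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None
        s = (-b + np.sqrt(disc)) / (2.0 * a)     # one branch of the ambiguity
        return Y + s * (P - Y)

    mismatch = []
    for g in grid:
        Y = M + (half / np.tan(g)) * u           # enforces angle(AYB) = 2*gamma
        C, D = cut(A, Y, R1), cut(B, Y, R2)
        if C is None or D is None:
            continue
        mismatch.append((g, np.hypot(*(C - A)) - np.hypot(*(D - B))))
    m = np.array(mismatch)
    if m.size == 0:
        return np.array([])
    flips = np.where(np.diff(np.sign(m[:, 1])) != 0)[0]
    return m[flips, 0]                           # gamma values with |AC| = |BD|
```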
Let us examine the precision of this method for $`ϵ_A=0.1`$ at $`\varphi \sim 90^{\circ }`$, for which the simpler method of Sec. 2 receives large rescattering corrections. In Fig. 5(a) we show the left-hand side of Eq. (15) as a function of the variable $`\gamma `$ at $`\varphi =90^{\circ }`$ for several values of $`\varphi _A`$. The value of $`\gamma `$ is obtained from the condition that the left-hand side of this equation vanishes. In the absence of SU(3) breaking this method reproduces precisely the physical value of $`\gamma `$ ($`\gamma =76^{\circ }`$) for all values of $`\varphi _A`$. However, SU(3) breaking effects can become important, to the point of completely spoiling this method. We simulate these effects by taking the amplitudes $`p`$ and $`a=A+P_{uc}`$ in $`B^\pm \to K^\pm K^0`$ (Eq. (11)) to differ by at most 30% from those in $`B^\pm \to K^0\pi ^\pm `$ (Eq. (1)). This expands the lines of Fig. 5(a) into bands of finite width, which give a range for the output value of $`\gamma `$.
In Fig. 5(b) we show the effects of SU(3) breaking on the determination of $`\gamma `$ as a function of $`\varphi _A`$ for $`\varphi =90^{\circ }`$. We see that for values of $`|\varphi _A|`$ larger than about $`25^{\circ }`$ the error on $`\gamma `$ is quite large. Thus, we conclude that for certain values of the strong phases the determination of $`\gamma `$ using this method is unstable under SU(3) breaking in the relation between $`B^+\to K^0\pi ^+`$ and $`B^+\to K^+\overline{K}^0`$.
## 4 The use of $`B^\pm \to \pi ^\pm \eta _8`$
In Ref. we proposed to use, in addition to $`B^+\to K^+\overline{K}^0`$, also $`B^+\to \pi ^+\eta _8`$ and their charge conjugates. Writing
$$A(B^+\to \pi ^+\eta _8)=|\lambda _u^{(d)}|e^{i\gamma }(-T-C-2A-2P_{uc})+|\lambda _t^{(d)}|e^{-i\beta }(P_{ct}+P_5^{EW}),$$
(13)
we find the triangle relation
$$A(B^+\to K^+\overline{K}^0)+\sqrt{\frac{3}{2}}A(B^+\to \pi ^+\eta _8)=\frac{1}{\sqrt{2}}A(B^+\to \pi ^+\pi ^0).$$
(14)
This relation and its charge conjugate provide another condition which determines the positions of the points $`C`$ and $`D`$. As in Section 3, the phase $`\gamma `$ is determined by the equation $`\mathrm{cos}(\angle BYA)=\mathrm{cos}2\gamma `$, where the point $`Y`$ is fixed by the intersection of the lines $`AC`$ and $`BD`$. General considerations, based on the relative sizes of the amplitudes involved, suggest that this method is relatively insensitive to SU(3) breaking effects .
We illustrate this in Fig. 6, where we show on the same plot the two sides of the equation $`\mathrm{cos}(\angle BYA)=\mathrm{cos}2\gamma `$ as functions of the variable $`\gamma `$. As in the method of Section 3, SU(3) breaking is simulated by taking the penguin ($`p`$) and annihilation ($`a`$) amplitudes in $`B^\pm \to K^\pm K^0`$ to differ by at most 30% (separately for their real and imaginary parts) from those in $`B^\pm \to K^0\pi ^\pm `$. The latter are used to construct the positions of the points $`C`$ and $`D`$. In the example of Fig. 6 we take $`ϵ_A=0.1,\varphi =90^{\circ },\varphi _A=45^{\circ }`$, for which the two methods described in Sections 2 and 3 were shown to lead to large errors in $`\gamma `$. For an input value $`\gamma =76^{\circ }`$, the output is given by the range $`74^{\circ }<\gamma <78^{\circ }`$, obtained from the intersection of the solid line with the band formed by the diamond points. We see that the error in $`\gamma `$ due to SU(3) breaking is less than $`\pm 2^{\circ }`$, which confirms the general arguments of . This scheme, or rather its analogous version using $`B^0`$ and $`B_s`$ decays , may prove useful for a determination of $`\gamma `$ in case the strong phases $`(\varphi ,\varphi _A)`$ turn out to have values which preclude the use of the two simpler methods.
## 5 Conclusion
We compared three ways of dealing with rescattering effects in $`B^\pm \to K^0\pi ^\pm `$, in order to achieve a precise determination of the weak phase $`\gamma `$ from these processes and $`B^\pm \to K^\pm \pi ^0`$. In the simplest GLRN method we find that large errors in $`\gamma `$ are possible for a particular range of the strong phases, $`\varphi \sim 90^{\circ }`$, even when the rescattering term is only at a level of 10$`\%`$. $`B^+`$ and $`B^-`$ decay rate measurements are expected to provide rather precise information on $`\varphi `$. Small errors in $`\gamma `$ would be implied if $`\varphi `$ turns out to be far from $`90^{\circ }`$. The second method, in which rescattering effects can be completely eliminated in the SU(3) limit by using also the charge-averaged $`B^\pm \to K^\pm K^0`$ rate, suffers from a sizable uncertainty due to SU(3) breaking. These uncertainties would be resolved in an ideal world where $`B^\pm \to \pi ^\pm \eta _8`$ can be measured, or alternatively by using corresponding $`B^0`$ and $`B_s`$ decays.
## 6 Appendix
The weak angle $`\gamma `$ is fixed in the method described in Sec. 3 by the condition $`|AC|=|BD|`$, or equivalently $`|YC|=|YD|`$. Explicitly, after some algebra, this can be written as an equation in $`\gamma `$:
$`2(1-x_0)^2\vec{Y}^2+2x_0(1-x_0)\,\vec{Y}\cdot (\vec{A}+\vec{B})+x_0^2(x_{0+}^2+\stackrel{~}{x}_0^2)-(y_{+0}^2+\stackrel{~}{y}_0^2)=0,`$ (15)
where $`x_0`$ is defined as the ratio of two CP rate differences
$$x_0\equiv (y_{+0}^2-\stackrel{~}{y}_0^2)/(x_{0+}^2-\stackrel{~}{x}_0^2)\stackrel{\mathrm{SU(3)}}{\longrightarrow }-\lambda ^2/(1-\lambda ^2/2)^2.$$
(16)
Here
$`y_{+0}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}{\displaystyle \frac{f_\pi }{f_K}}{\displaystyle \frac{|A(B^+\to K^+\overline{K}^0)|}{|A(B^+\to \pi ^+\pi ^0)|}},\stackrel{~}{y}_0={\displaystyle \frac{1}{\sqrt{2}}}{\displaystyle \frac{f_\pi }{f_K}}{\displaystyle \frac{|A(B^-\to K^-K^0)|}{|A(B^+\to \pi ^+\pi ^0)|}}`$ (17)
obey an SU(3) relation with the amplitudes (5) of $`B^\pm \to K\pi ^\pm `$
$`y_{+0}^2-\stackrel{~}{y}_0^2=-{\displaystyle \frac{\lambda ^2}{(1-\lambda ^2/2)^2}}(x_{0+}^2-\stackrel{~}{x}_0^2).`$ (18)
This implies that the CP rate differences in $`B^\pm \to K^0\pi ^\pm `$ and $`B^\pm \to K^\pm K^0`$ are equal and of opposite sign . We see that in the SU(3) limit the condition (15), which eliminates rescattering effects, requires only a measurement of the charge-averaged rate of $`B^\pm \to K^\pm K^0`$ and not the CP asymmetry in these processes .
To prove (15), let us consider two lines $`AY`$ and $`BY`$ cutting two circles of radii $`R_1`$, $`R_2`$ (centered at the origin) at points $`C`$ and $`D`$ respectively. The intersection points can be written as $`\vec{C}=\vec{Y}+x_1(\vec{A}-\vec{Y})`$ and $`\vec{D}=\vec{Y}+x_2(\vec{B}-\vec{Y})`$, where $`x_1,x_2`$ are solutions of the equations
$`(\vec{A}-\vec{Y})^2x_1^2+2x_1\vec{Y}\cdot (\vec{A}-\vec{Y})+(\vec{Y}^2-R_1^2)=0`$ (19)
$`(\vec{B}-\vec{Y})^2x_2^2+2x_2\vec{Y}\cdot (\vec{B}-\vec{Y})+(\vec{Y}^2-R_2^2)=0.`$ (20)
The condition $`|YC|=|YD|`$ is equivalent to requiring that these two equations have a common solution $`x_1=x_2`$. Obviously, if such a solution exists, it is given by
$`x_0={\displaystyle \frac{R_1^2-R_2^2}{2\vec{Y}\cdot (\vec{A}-\vec{B})}}={\displaystyle \frac{R_1^2-R_2^2}{\vec{A}^2-\vec{B}^2}},`$ (21)
where we used the equality $`(\vec{A}-\vec{Y})^2=(\vec{B}-\vec{Y})^2`$. Taking the sum of (19) and (20) with the value (21) for $`x`$ leads immediately to the condition (15).
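The algebra above is easy to check numerically: choosing $`Y`$ on the mid-perpendicular of a random pair $`A`$, $`B`$ (so that $`(\vec{A}-\vec{Y})^2=(\vec{B}-\vec{Y})^2`$ holds by construction) and building $`C`$, $`D`$ from a common $`x_0`$, Eqs. (15) and (21) must be satisfied identically. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=2), rng.normal(size=2)     # random apexes (|A| != |B|)
M, d = 0.5 * (A + B), B - A
Y = M + rng.normal() * np.array([-d[1], d[0]])    # Y on the mid-perpendicular,
                                                  # so (A-Y)^2 = (B-Y)^2 holds
x0 = rng.normal()
C, D = Y + x0 * (A - Y), Y + x0 * (B - Y)         # common solution x1 = x2 = x0
R1, R2 = np.hypot(*C), np.hypot(*D)

# Eq. (21)
assert np.isclose(x0, (R1**2 - R2**2) / (A @ A - B @ B))

# Eq. (15), with (x_{0+}^2 + x~_0^2) -> A^2 + B^2 and (y_{+0}^2 + y~_0^2) -> R1^2 + R2^2
lhs = (2 * (1 - x0)**2 * (Y @ Y)
       + 2 * x0 * (1 - x0) * (Y @ (A + B))
       + x0**2 * (A @ A + B @ B) - (R1**2 + R2**2))
assert np.isclose(lhs, 0.0)
```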
Acknowledgements. We thank R. Fleischer, M. Neubert and T.M. Yan for useful discussions. This work is supported by the National Science Foundation and by the United States - Israel Binational Science Foundation under Research Grant Agreement 94-00253/3.
# The radio emission from the Galaxy at 22 MHz
## 1 Introduction
The nonthermal radio continuum emission from the Galaxy at frequencies below 100 MHz is the synchrotron radiation of cosmic ray electrons with energies of order 1 GeV. Measurements of the emissivity and the spectral index of the emission provide direct information about the electron energy spectrum and the magnetic field strength in the Galaxy. At frequencies below about 40 MHz, the opacity of the thermal component of Galactic emission is often sufficient to absorb the much brighter background synchrotron emission, providing a means of estimating local synchrotron emissivities in the directions of H ii regions at known distances.
There have been numerous surveys made of Galactic emission below 500 MHz including: the all–sky map of Haslam et al. (1982) at 408 MHz; the northern sky maps of Turtle & Baldwin (1962: 178 MHz, −5° to +90°), of Milogradov-Turin & Smith (1973: 38 MHz, −25° to +70°) and of Caswell (1976: 10 MHz, −6° to +74°, 0^h to 16^h); and southern sky maps of Landecker & Wielebinski (1970: 150 MHz and 85 MHz, −15° to +15°), Hamilton & Haynes (1969: 153 MHz, +5° to −90°), Alvarez et al. (1997: 45 MHz, +19° to −90°), and Ellis (1982: 16.5 MHz, 0° to −90°).
The 22 MHz Telescope at the DRAO was used in the period 1965 to 1969 to measure the emission from discrete sources and to map the background radiation from our galaxy. The telescope has been described completely by Costain et al. (1969); only those details relevant to the present paper are given here. Flux densities of point sources have been published by Roger et al. (1969) and by Roger et al. (1986). The low-frequency spectra of these sources have been discussed by Roger et al. (1973). Technical problems prevented a satisfactory calibration of the Galactic emission, but these problems have now been circumvented and in this paper we present a map of the Galactic emission at 22 MHz between declinations −28° and +80° covering the complete range of right ascension, ∼73% of the sky. By comparing the 22 MHz map with the 408 MHz data from the all-sky survey of Haslam et al. (1982) we have prepared a map of the spectral index of the emission over the same area. We also derive values of the local synchrotron emissivity.
## 2 The telescope
The telescope was an unfilled aperture in the form of a T with dimensions 96$`\lambda `$ $`\times `$ 2.5$`\lambda `$ east-west (EW) and 32.5$`\lambda `$ $`\times `$ 4$`\lambda `$ north-south (NS). This array formed a nearly Gaussian beam of extent 1.1 $`\times `$ 1.7 degrees (EW $`\times `$ NS) at the zenith (declination 48.8°). The telescope was operated as a meridian transit instrument, steerable in declination between −30° and the North Celestial Pole by the adjustment of the phases of rows of dipole elements in the north-south direction. Away from the zenith, foreshortening of the array broadened the beam in this direction to 1.7° secant(Z), where Z is the zenith angle. The gain of the telescope in directions away from the zenith was further reduced by the response of the basic array element, a full-wave dipole $`\frac{\lambda }{8}`$ above a reflecting screen. Aperture grading by means of attenuators was used to keep sidelobes to a level of a few percent.
The pencil-beam response was formed by multiplying the signals from the EW and NS arms. A problem in T– and cross–format telescopes arises from the region in which the two arms intersect. If this region is removed, a broad negative response is produced around the narrow pencil beam, and the telescope filters out the lowest spatial frequencies, leading to a poor representation of the broadest angular components of the emission. To overcome this problem, the signals from the elements in the overlap region were split and fed to both the EW and NS arrays.
The gain of the antenna (in effect the ratio between the flux density of a radio source and the antenna temperature it produces in the main beam) was established using an assumed flux density of 29100 $`\pm `$ 1500 Jy for Cygnus A. Details of the original flux-density scale and the subsequent revision can be found in Roger et al. (1973) and in Roger et al. (1986).
## 3 Observations and basic data reduction
Observations were made in several modes during the working life of the telescope. In the early part of the observation period the telescope formed only one beam which could be moved in declination by operating the NS phasing switches. Long scans at fixed declination were made to observe the background emission as well as point sources, and short scans were made to measure the flux density of point sources with the beam switched frequently between different declinations. Only the scans of long duration were used to assemble the data presented in this paper. At a later stage, more automated phasing switches were added to permit rapid time–shared observations at five adjacent declinations. A large fraction of the present data was obtained from long scans with this equipment.
Scans were made at a set of standard declinations, chosen to sample the emission at half-beamwidth intervals. Since the NS beamwidth increased with increasing zenith angle, the standard declinations were not evenly spaced.
The observations were made mostly in the years 1965 to 1969 which covered a period of fairly low solar activity. Nevertheless, because of the low frequency of operation, the influence of the ionosphere on the observations was often large. Observations of point sources were affected by refraction, scintillation, and absorption in differing degrees. A correction factor for ionospheric absorption (primarily a daytime phenomenon) was derived from an on–site 22 MHz riometer and was applied to the data. Refraction amounting to a significant fraction of the (EW) beam was detected only near times of sunrise in the ionosphere when large horizontal gradients of electron density were present. Scintillation of “point sources” (sources less than 15′ diameter) was a frequent occurrence. Due to lack of correlation of the phase and amplitude variations over the extent of the telescope array during times of severe scintillation, flux densities could be seriously underestimated. To overcome this problem, many observations of each source were made and measurements of flux density were derived from only those observations judged to be the least affected. By contrast, most observations of the extended emission were largely unaffected by scintillation and, after correction for absorption, could be reliably averaged together. The data presented here are the result of averaging at least two observations, and in many parts of the sky up to five good observations contributed to the average.
Thus, the basic data set is a data array assembled from the averaged scans at the standard declinations. Because neighbouring averaged scans contained data observed at different times with varying conditions, the array displayed significant scanning artifacts. These were greatly reduced by the application of a Fourier filtering process in the declination dimension. The data were then interpolated onto a grid sampled at 1 minute intervals in right ascension and 15′ in declination.
## 4 Further data processing
Comparison of the resulting map with maps at higher frequencies suggested that the brightness temperature calibration of the 22 MHz map had a zenith–angle–dependent error. We used the 408 MHz all-sky survey of Haslam et al. (1982) for comparisons because these single-paraboloid observations would likely be free of such defects, but comparisons with other similar data sets would probably have produced similar conclusions.
For comparison with the 22 MHz map we convolved the 408 MHz data from its original resolution of 51′ to the variable beamwidth of our data. Calculation of a map of spectral index between the two frequencies indicated a systematic variation of spectral index with declination. Since such an effect is unlikely to be real, we suspected a calibration problem in the 22 MHz data. Although the cause is not fully understood, we believe that the response of the telescope to extended structure differed from its response to point sources in directions away from the zenith. This effect is discussed in Section 6.2. We believe, however, that the response of the telescope at the zenith (declination 48.8°) is well understood. At this declination, we plotted brightness temperature at 22 MHz against brightness temperature at 408 MHz (a T-T plot). We used data at all right ascensions except (a) near the Galactic plane in the Cygnus region (at ∼21^h, 48°) where strong absorption features are evident in the 22 MHz map (see below) and (b) regions around a few strong small-diameter sources. Fig. 1 shows this plot. The highest temperatures in Fig. 1 correspond to the Galactic plane in the anticentre (at 4^h 40^m, 48°) where there may be a small amount of absorption, causing the T-T plot to deviate from a straight line. Using all the data shown in Fig. 1 we derived a differential spectral index of 2.52±0.02. Restricting the fit to regions with $`T_{22}<50`$ kK (corresponding to right ascensions between 5^h 40^m and 18^h 10^m) the correlation is very tight and the differential spectral index is 2.57±0.02. These values of spectral index are close to the value expected in this frequency range from the work of Bridle (1967) and Sironi (1974), indicating that the temperature scale at 22 MHz is accurate. Furthermore, the extrapolation of the line fitted to the data points passes close to the origin of the T-T plot, giving us confidence that the zero level of the 22 MHz measurements is also well established at the zenith. T-T plot analyses at other declinations (again excluding areas of absorption) indicate that the zero level is acceptably correct throughout the declination range, but that the temperature scale varies (the accuracy of the zero level is discussed in Section 6.1).
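A minimal sketch of such a T-T analysis (the function name and data arrays are placeholders; the 22.25 MHz observing frequency is the value quoted in Sec. 6.1):

```python
import numpy as np

def tt_spectral_index(t22, t408, nu22=22.25, nu408=408.0):
    """Differential spectral index from a T-T plot: fit
    T22 = slope * T408 + offset over matched sky positions, then
    beta = ln(slope) / ln(nu408/nu22).  Returns (beta, offset in K)."""
    slope, offset = np.polyfit(np.asarray(t408), np.asarray(t22), 1)
    return np.log(slope) / np.log(nu408 / nu22), offset
```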
The temperature scale at declinations away from the zenith was adjusted using the following procedure. At each declination the average brightness temperature ratio between 22 MHz and 408 MHz was calculated (using T-T plots) over the range 8 to 16 hours in right ascension (to exclude extended absorption regions on the Galactic plane). The temperature scale at 22 MHz was then adjusted to make this ratio equal to that at the zenith (note that Fig. 1 shows data over a wider range of right ascension).
In order to cover the central regions of the Galaxy, we have included observations as far south as declination −28°, where the telescope was operating at a zenith angle of 76.8°. At these large zenith angles, there is some departure of the antenna gain function from its calculated value (see Costain et al. 1969). However, reliable flux densities of point sources have been obtained as far south as declination −17.4° (Roger et al. 1986) and we believe that our calibration procedure remains reasonably effective to the southern limit of our map.
## 5 Results
### 5.1 Map of the northern sky at 22 MHz
Figs. 2 to 6 show in equatorial coordinates a contoured gray-scale map of the 22 MHz emission from the sky between declinations −28° and 80° in five segments. Figs. 7 and 8 depict the data between Galactic latitudes −40° and +40° with the same contours and grayscale in Galactic coordinates, and with positions of extended Galactic sources indicated. Fig. 9 is a grayscale representation of the full data set in Aitoff projection of Galactic coordinates.
The brightness temperature of the 22 MHz emission varies from ∼17 kK towards a broad minimum about 50° off the Galactic plane at the longitude of the anticentre to over 250 kK on the plane near the Galactic centre. The brightness temperature near both north and south Galactic poles is approximately 27 kK. The Galactic plane itself is apparent over the full range of longitude from +1° to +244°. At various points along the plane, particularly at lesser longitudes, depressions are apparent in the emission. These represent thermal free-free absorption of bright synchrotron background emission by relatively nearby, opaque regions of dense ionized gas.
Two other large–scale features apparent in the maps are Loop I, the North Polar Spur (NPS), rising from the plane near longitude 30°, and Loop III, centred near longitude 87°.
We emphasize that the main value of the data lies in the representation of structure larger than the beam. The strongest point sources (Cas A, Cyg A, Tau A and Vir A) have been removed from the map. While other point sources remain in the maps, these data cannot be used to determine their flux densities. First, the ionospheric effects mentioned above cause the point sources to be very poorly represented in these maps. Second, the scaling applied after comparison with the 408 MHz data will have further affected the flux densities of point sources at declinations away from the zenith. Reliable point source flux densities are already available in the published lists referred to in Section 1.
### 5.2 Extended Galactic sources – supernova remnants
A number of extended supernova remnants are apparent in the data and the positions of these are indicated with labels in Figs. 7 and 8. The flux densities of most of the SNRs have been previously measured from the original observations and published in various papers. We have collected these and listed them in Table 1 together with new flux densities for two additional remnants not previously reported. One other SNR, HB21, is indicated in Fig. 8 but not listed in Table 1 because of difficulties in separating its emission from that of nearby confusing sources.
### 5.3 Absorption of the background emission – H ii regions
Depressions in the background emission near the Galactic plane are identified with a number of extended H ii regions, which at frequencies near 20 MHz will largely obscure background emission. We list the properties of 21 of these discrete absorption regions in Table 2, with positions plotted in Figs. 7 and 8.
Figs. 2 to 8 and particularly 9 show the extended trough of absorption between $`l=10`$° and $`l=40`$°. This trough undoubtedly extends to and past the Galactic centre but the increasingly extended N-S width of the telescope beam at large zenith angles was unable to fully resolve the feature below $`l=10`$°.
### 5.4 The spectral index of emission, 22 to 408 MHz
Fig. 10 shows a map of spectral index calculated from the final 22 MHz map and the 408 MHz map (Haslam et al. 1982), the latter convolved to the declination–dependent beamwidth of the 22 MHz telescope. The spectral index, $`\beta `$, as displayed, is related to the brightness temperatures at each frequency, $`T_{22}`$ and $`T_{408}`$, by the expression
$$\beta =log(T_{22}/T_{408})/log(408/22).$$
Because the 408 MHz map has been used to establish the variation of the 22 MHz temperature scale with declination, great care is needed in interpreting this map. The process of revising the 22 MHz scale could eliminate or reduce spectral index features between 8 and 16 hours right ascension with structure in the declination direction if they extend over a large range in this dimension. On the other hand, features in the spectral index map which have structure in the right ascension dimension are likely to be largely unaffected by the correction process. Similarly, spectral index features with structure in various directions, including most features which have counterparts in the individual maps, would be suspect only if distortions appeared in the declination dimension. No such artefacts are apparent. However, some “banding” in declination, particularly below −3°, can be seen in specific right ascension ranges. This effect is easily recognized as spurious and is probably related to zero level errors in the 22 MHz or, possibly, in the 408 MHz map. Such effects may be aggravated by the large zenith angles at which these low-declination regions were observed at 22 MHz.
Errors in the spectral index map can arise from zero-level or temperature-scale errors at either frequency. We deal with zero-level errors first. Taking 5 kK as a possible error in the 22 MHz data (see the detailed discussion in Section 6.1) we estimate the effects on the spectral index map. The large frequency separation between 22.25 and 408 MHz means that zero-level errors have relatively little effect: the $`\pm `$5 kK error will change spectral index by $`\pm `$0.1 at the sky minimum and by $`\pm `$0.01 on the brightest part of the Galactic plane. We tested the effect of an error of $`\pm `$5 kK on the map of Fig. 10 by computing maps with this error applied to the 22 MHz data in both senses. All the main features visible in Fig. 10 remain in both new maps. When we discuss the spectral features which we see in Fig. 10, we discuss only those which survive this test.
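These two numbers are easy to reproduce from the temperatures quoted in Sec. 5.1 (∼17 kK at the sky minimum, ∼250 kK on the brightest part of the plane); a quick check:

```python
import numpy as np

def dbeta(T22, dT=5000.0, ratio=408.0 / 22.0):
    """Spectral-index shift from a zero-level error dT (K) at 22 MHz."""
    return np.log((T22 + dT) / T22) / np.log(ratio)

print(round(dbeta(17000.0), 2))    # sky minimum:  ~0.09, i.e. about 0.1
print(round(dbeta(250000.0), 3))   # bright plane: ~0.007, i.e. about 0.01
```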
The effects of temperature scale errors are more difficult to assess. Once again, the large frequency separation is an asset. At the zenith (declination 48.8°) we have an independent determination of spectral index since we have not changed our data in any way at that declination. We estimate that the probable error there is 0.05, taking into account both systematic and random errors (amounting to a 16% difference in the temperature ratio $`T_{22}/T_{408}`$), and we assign this error to the whole map. Further errors at other declinations depend on the validity of the assumption on which our calibration of the 22 MHz temperature scale is based, the constancy of the spectral index of the Galactic emission over the region 8^h to 16^h, −28° to 80°. This can only be tested with an extensive study of spectral index using data at a number of frequencies, a study beyond the scope of this paper. Note that we have assumed the constancy of the differential spectrum: we are assuming that the Galactic component of the emission has constant spectral index over this region, which includes the Galactic pole and lies mostly at latitudes higher than ∼20°. The total spectrum includes an extragalactic (presumably isotropic) component of emission, and the total spectrum may still vary across this region, as it appears from Fig. 10 to do.
In the final analysis, the main value of our spectral index work is in the assessment of differences in spectral index between regions rather than in a precise determination of the spectral index of a given region. In this spirit, we make the following observations.
* In general, the map in Fig. 10 shows few large scale variations. The mean index away from the Galactic plane is 2.47. Within this area (latitude greater than ∼10°) the mean index of extended regions varies from values of 2.41 in the area of synchrotron minimum (10^h, 40°) to 2.54, near the position of Loop III, both with an rms variation of ∼0.03. On the Galactic plane, the flatter spectrum region coincident with the absorption trough at low longitudes is narrow in the latitude dimension. At all points on the Galactic plane it is notable that the spectral index is relatively constant over most of the broader synchrotron emission along the plane with no gradual decrease in spectral index as latitude decreases.
* The outline of the North Polar Spur at higher latitudes can be seen in the spectral index map. An integration over the area of this arc shows its emission to have an index of 2.51, slightly steeper than its surroundings by ∼0.03.
## 6 Factors affecting the accuracy
### 6.1 Zero–level offsets
The offsets of the T-T plots between 408 MHz and the final 22 MHz map are in the range $`\pm `$10 kK at 22 MHz. The mean of 11 offset values at intervals of 10° in declination from +79° to −21° is −5.1 kK with an rms deviation of 11.1 kK. This mean offset is of the same order as that derived for the T-T plot of zenith data (see Fig. 1). From these data alone, however, it is not possible to decide the extent of the true offsets in either the 22 MHz or the 408 MHz data. The 408 MHz data is suspected of having offsets as large as 2 K (Reich & Reich 1988) and may also have a baselevel correction of 5 K. If the average spectral index between 22 and 408 MHz is 2.50, these values correspond to 2.8 kK and 7.2 kK at 22 MHz.
### 6.2 Effects of antenna properties on the data
In this section we examine the possible effects on our data of our imperfect knowledge of the properties of the antennas. The antennas used at 22 MHz and 408 MHz are of different types, and need to be discussed separately.
If emission received in the sidelobes made a significant contribution to antenna temperature, then errors would result. These would be especially significant in the map of spectral index, because the sidelobe responses of the two very different antennas would receive emission from different parts of the sky. The greatest effect would occur in measurements of those parts of the sky where the brightness temperature is the lowest (around 9 hours, 30°) with emission from the bright Galactic plane being received in the far sidelobes of the antennas. As an example of such effects, Landecker & Wielebinski (1970), using the Parkes 64-m telescope at 150 MHz, found that about one third of the antenna temperature at the sky minimum arose from sidelobe contributions.
The response of the 22 MHz antenna is, in principle, completely calculable from the geometry of the array, and from the phasing and grading applied to the array elements. The angular size of the main beam proved to be very close to the calculated value. The net sidelobe solid angle should be zero, implying a beam efficiency of 1.0. Sidelobe responses, apart from their effects near strong sources, should not affect the measurement of the broad structure which is the focus of this paper. Measurements of the antenna response using the bright sources Cas A and Cyg A (Costain et al. 1969) bear out this expectation. The dynamic range of these measurements is about 30 dB, determined by the ratio of the flux density of Cyg A (29,100 Jy) to the confusion limit for the telescope (∼30 Jy). Sidelobes above this level were confined to the NS and EW planes (strictly a small circle EW, depending on the phasing in declination) and were alternately positive and negative, as expected, and close to the predicted amplitude. Sidelobe response fell below the detection limit within 10° of the main beam in the EW direction and below 1% within 18° of the main beam in the NS plane. These measurements verify our assertion that the performance of the telescope at the zenith is well understood (Cas A and Cyg A pass within 10° of the zenith at DRAO).
We are confident that the response of the antenna to the extended background was as predicted near the zenith because of the linear relationship between the 22 and 408 MHz brightness temperatures with the expected spectral index at that declination (48.8°). We know that the behaviour away from the zenith departed from the expected response for the extended emission features, but not for point sources. We tentatively attribute this to inadequately compensated mutual impedance effects between phased rows of dipoles in the array. At increasing zenith angles, these effects may have dominated the predicted response of individual dipoles above a reflecting screen.
The available measurements suggest, however, that the sidelobes of the complete telescope (as opposed to the individual groups of radiating elements) were still confined to the predicted regions, even at large zenith angles. If we assume that the sidelobe level was all positive and at the detection limit (−30 dB) in two 180° strips in the EW and NS directions, each equal in width to the main beam, then the beam efficiency would be 0.9, better than most reflector antennas. However, this is very much a worst-case assumption, and it is probably safe to conclude that the beam efficiency was ≳0.95. Furthermore, the largest sidelobes lie in the NS plane, and, when the main beam is measuring the coldest region of the sky, these sidelobes do not intersect the bright Galactic plane. We therefore feel justified in ignoring the effects of the sidelobe response of the 22 MHz telescope.
The 408 MHz data were not directly corrected for sidelobe contributions, but an indirect correction was applied (Haslam et al. 1982). The absolutely calibrated survey at 404 MHz made by Pauliny-Toth & Shakeshaft (1962) with a beam of about 7.5° was used to establish both the zero-level and the temperature scale of the 408 MHz survey data. Since the 404 MHz survey was corrected for sidelobe contributions, using it as a reference for the later survey roughly corrected those measurements for sidelobe contributions. The technique is valid in this case since the relationship between the measured antenna temperature and the sidelobe correction is, to a good approximation, linear. Checks of the effectiveness of this procedure were made subsequently by Lawson et al. (1987) and by Reich & Reich (1988) who convolved the 408 MHz data to the broad beams of horns and other low–sidelobe antennas used by Webster (1975) and Sironi (1974) to measure the Galactic emission at 408 MHz. In all cases the comparisons were satisfactory, indicating that sidelobe effects had been effectively removed from the 408 MHz data.
## 7 Discussion
### 7.1 Components of extended Galactic emission
Beuermann et al. (1985) have used the 408 MHz all-sky survey (Haslam et al. 1982) to produce a three-dimensional model of the Galactic radio emission using an unfolding procedure. In this model the Galaxy consists of a thick non-thermal radio disk in which a thin disk is embedded. The thick disk exhibits spiral structure, has an equivalent width of ∼3.6 kpc at the solar radius and accounts for ∼90% of the diffuse 408 MHz emission. Emission extends to at least 15 kpc from the Galactic centre, at which radius the thick disk has an equivalent width near 6 kpc. The thin disk, by comparison, appears in the model as a mixture of thermal and non-thermal emission also with spiral structure, but with an equivalent width of ∼370 pc, similar to that of the H i disk and of the distribution of H ii regions in the inner Galaxy.
Our comparison of the 22 MHz and 408 MHz maps shows a remarkable constancy of spectral index in the extended emission corresponding to the thick disk component over our full range of longitudes from ∼0° to ∼240°. The principal departures from this general tendency are (i) the slightly flatter spectral index in a broad area in the region of minimum Galactic emission at high latitudes toward the longitude of the Galactic anticentre and (ii) the somewhat steeper indices near Loop III and the outer rim of the NPS. In a similar comparison of the 408 MHz map with a map of 1420 MHz emission, Reich & Reich (1988) also note these general features. However, we see no indication in the lower frequency range for the steeper spectra seen by Reich & Reich in regions on the plane both near the Galactic centre and near longitude 130°. This suggests that any steepening of the spectra in these regions must be a higher frequency phenomenon with spectral curvature above 408 MHz.
Details of the spectral index variations associated with the loops of emission also differ in the two frequency ranges. Fig. 10 shows a slightly steeper index (by ∼0.03) for a substantial part of the arc forming the outer edge of the NPS. This contrasts with the 408–1420 MHz comparison (Reich & Reich 1988) which shows a steeper index in a relatively broad arc on the part of the NPS closest to the Galactic plane. Neither study indicates a difference between the spectral index of emission within the loop of the NPS and that outside the loop. The NPS has variously been considered as a nearby, very old supernova remnant (e.g. Salter 1983) and as a local magnetic “bubble” (Heiles 1998).
### 7.2 Absorption at lower longitudes
We have noted that at longitudes less than ∼40° there exists a continuous trough of absorption along the Galactic plane. We illustrate this in Fig. 11 which shows a map of the “quasi optical depth” at 22 MHz calculated from a comparison with the 408 MHz map on the assumption that the absorption is due entirely to cool ionized gas on the near side of the emission. (We define the quasi optical depth, $`\tau ^{\prime }`$, by the relation $`\tau ^{\prime }=\mathrm{ln}\left(T_{408}(408/22)^{\beta ^{\prime }}/T_{22}\right)`$, where $`\beta ^{\prime }`$ is the mean spectral index of the emission off the Galactic plane). This represents an underestimate of the true optical depth of the absorbing gas since (i) a proportion of the non-thermal emission will be on the near side of some absorption and, (ii) the kinetic temperature of the thermal gas will lessen the apparent depth of the absorption. A more accurate estimate of the true optical depth would require a modelling of the intermixed emission and absorption components which is beyond the scope of this paper. Nonetheless, it is obvious from Fig. 11 that the full angular width of the absorbing region is less than 3° which, at an assumed mean distance of 4 kpc, corresponds to a thickness of less than 250 pc. Thus, it is apparent that the absorption corresponds to the “thin disk component” of emission identified by Beuermann et al. (1985) as comprising the known disks of H ii regions, diffuse thermal continuum emission, diffuse recombination line emission and the distribution of atomic hydrogen.
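A direct transcription of this definition (the default $`\beta ^{\prime }=2.47`$ is the mean off-plane index from Sec. 5.4; the function name is ours):

```python
import numpy as np

def quasi_optical_depth(T22, T408, beta_off=2.47):
    """tau' = ln( T408 * (408/22)^beta' / T22 ); beta_off is the mean
    spectral index of the emission off the Galactic plane (Sec. 5.4)."""
    return np.log(T408 * (408.0 / 22.0) ** beta_off / T22)
```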
The extended absorption in the plane in the region of Cygnus between longitudes 70° and 90° is also shown in Fig. 11. Note that the region appears at least twice as extensive in latitude as the continuous trough, probably because much of the absorbing gas is at distances of 1 kpc or less.
### 7.3 Non-thermal emissivities in the plane
Several of the discrete H ii regions which appear in absorption at 22 MHz and which are listed in Table 2 can be used to estimate the emissivity of local synchrotron emission. We have calculated the emissivities for eight H ii regions at well–determined distances, which are sufficiently extended compared to the observing beam to ensure that only thermal radiation from the ionized gas and foreground non-thermal radiation contribute to the measured emission. An assumed contribution from the opaque ionized gas of 6000 K was subtracted from the brightness temperature in the depression and the result divided by the distance to the H ii region. The values of emissivity are presented in Table 3.
In the longitude range 85° to 205°, six H ii regions are at distances from 400–900 pc and the values of 22 MHz emissivity (a volume emissivity per unit line of sight of 1 K pc^-1 is equal to 4.93×10^-42 W m^-3 Hz^-1 sr^-1 at 22 MHz) range from 21 K pc^-1 to 60 K pc^-1 with a mean of 40.1 K pc^-1. Excluding two H ii regions, Sh220 and Sh264, which are more than 10° off the plane, but including IC1805, at a distance of 2.2 kpc, we find a mean emissivity of 30.2 K pc^-1 with an rms of 9.6 K pc^-1. These emissivities are comparable with similarly derived emissivities tabulated (at 10 MHz) by Rockstroh & Webber (1978). In addition, our value in the direction of IC1805, 20.9 K pc^-1, is close to the value of 18 K pc^-1 obtained by Roger (1969) using a detailed modelling of 22 and 38 MHz data for the IC1805–IC1848 complex.
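The emissivity estimate described above amounts to the following one-line computation; the 6000 K opaque-gas contribution is the assumption stated in the text, and the example input values are illustrative only:

```python
def emissivity_22mhz(T_depression, distance_pc, T_hii=6000.0):
    """Foreground synchrotron emissivity toward an opaque H ii region,
    in K pc^-1 (multiply by 4.93e-42 for W m^-3 Hz^-1 sr^-1).
    T_depression: 22 MHz brightness temperature (K) in the depression."""
    return (T_depression - T_hii) / distance_pc

# e.g. a 30 kK depression toward a region at 800 pc (illustrative numbers):
print(emissivity_22mhz(30000.0, 800.0))   # -> 30.0 K pc^-1
```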
However, there is a problem reconciling a mean value of local emissivity of 30 K pc^-1 with the model of Galactic emission of Beuermann et al. (1985) which assumes a lesser value of 15 K pc^-1 (11 K kpc^-1 at 408 MHz) at the solar radius. If we take the value of the brightness temperature at the Galactic poles (27 kK), subtract an extragalactic component of 6 kK (Lawson et al. 1987) and divide by the model’s half–equivalent–width of 1.8 kpc, we derive a mid-plane emissivity of only 11.7 K pc^-1, almost a factor of 3 less than our measured mean value. To reconcile our measurement with the model, one or more of the following must apply: (i) our measured local mean emissivity is greater than the typical value at the solar radius; (ii) the equivalent width of the “thick–disk” component is locally less than the model predicts; (iii) the extragalactic component of the polar emission is less than is estimated from extrapolations of extragalactic source counts at higher frequencies; and/or (iv) a zero–level correction should be added to the 22 MHz brightness temperatures. With regard to the extragalactic component of emission, we note that estimates are usually derived from source count (“log N–log S”) analyses at frequencies above 150 MHz (e.g. Lawson et al. 1987), extrapolated with an assumed spectral index $`\beta \sim 2.75`$. Analyses of source counts at substantially lower frequencies are needed for accurate estimates of the extragalactic component. We noted in Section 6.1 the possibility of a zero–level correction as indicated by T-T plot comparisons with 408 MHz data. In this regard, it is interesting to note that very low resolution measurements with scaled antennas at several low frequencies (Bridle 1967) predicted a brightness temperature at 22 MHz in the area of the North Galactic Pole 4 kK higher than our value. This is of the same magnitude and sense as the offset suggested by the T-T plot analysis.
We note the unusually high emissivity derived for the direction toward $`\zeta `$-Oph (Sh27), a relatively nearby complex some 23° above the plane at the longitude of ∼6°. Emission from this direction may include components from the North Polar Spur and from a minor spur that is most prominent near $`l=6`$°, $`b=14`$°, both of which may be foreground features. Also, it is possible that this somewhat diffuse region is not completely opaque at 22 MHz, in which case an unknown amount of background emission may contribute a spurious component to the emissivity.
Acknowledgements. We are indebted to several colleagues for their assistance in collecting and processing the observational data, and we particularly thank J. D. Lacey, J.H. Dawson and D.I. Stewart. We are also grateful to Dr. J.A. Galt for his encouragement at various stages of this project.
FIGURE CAPTIONS
Fig. 1 A T-T plot of 22 MHz and 408 MHz data for declination 48.8° and all right ascensions except (a) for the range 20^h to 22^h and (b) small regions around bright point sources.
Fig. 2 A map of the emission at 22 MHz, continued in Figs. 3 to 6. Contours of brightness temperature are at the following levels in kilokelvins: 14, 19, 24, 29, 34, 41, 48, 55, 65, 75, 85, 100, 115, 130, 150, 170, 190, 220, 250. Levels in bold have thick contour lines and are labelled. Regions affected by sidelobes of four strong sources are blanked out. The superimposed grid shows Galactic co-ordinates in steps of 30° in $`l`$ and $`b`$, labelled only where grid lines intersect the right-hand side and the top of the figure.
Fig. 3 Fig. 2 continued with the same contours and grayscale, showing the Galactic anticentre. A region around Tau A has been blanked out.
Fig. 4 Fig. 2 continued with the same contours and grayscale. A region around Cas A has been blanked out.
Fig. 5 Fig. 2 continued with the same contours and grayscale, showing the central regions of the Galaxy. Note the deep absorption trough along the plane near the Galactic centre. The North Polar Spur rises from the Galactic plane at $`l`$ = 30°. A region around Cyg A has been blanked out.
Fig. 6 Fig. 2 continued with the same contours and grayscale. The large circular feature is the North Polar Spur. A region around Vir A has been blanked out.
Fig. 7 The 22 MHz emission in Galactic coordinates in two segments with positions of prominent Galactic sources indicated. Contours, at the same levels shown in Figs. 2 to 6, are indicated with a bar scale. The superimposed grid shows equatorial co-ordinates (J2000) in steps of 2^h in right ascension and 30° in declination.
Fig. 8 Fig. 7 continued.
Fig. 9 An Aitoff projection of the 22 MHz emission in Galactic coordinates. The Galactic centre is at the map centre and grids are at 30° intervals in longitude and latitude, positive to the left and upwards respectively. Note the arc of the North Polar Spur rising from the plane near longitude $`+`$30°.
Fig. 10 A map of the spectral index from 22 MHz to 408 MHz in shaded grey levels as indicated by the bar scale. Regions near four strong sources are blanked out. The superimposed grid shows Galactic co-ordinates in steps of 30° in $`l`$ and $`b`$.
Fig. 11 The “quasi optical depth” at 22 MHz along the Galactic plane in the first quadrant, from a comparison of the 408 MHz and 22 MHz emissions, assuming all absorbing (thermal) gas is on the near side of the background synchrotron emission. Contours are at optical depths of 0.4, 0.8, 1.2, 1.6 and 2.0.
# Escape rate of a biaxial nanospin system in a magnetic field: first- and second-order transitions between quantum and classical regimes
## Abstract
We investigate the escape rate of the biaxial nanospin particle with a magnetic field applied along the easy axis. The model studied here is described by the Hamiltonian $`\mathcal{H}=-AS_z^2-BS_x^2-HS_z,(A>B>0)`$. By reducing this Hamiltonian to a particle one, we derive, for the first time, an effective particle potential for this model and find an analytical form of the phase boundary line between first- and second-order transitions, from which a complete phase diagram can be obtained. We also derive an analytical form of the crossover temperature as a function of the applied field at the phase boundary.
preprint: GTP-98-03
Recently the quantum-classical phase transition of the escape rate in nanospin systems has been studied intensively. One of the main issues in this subject is to determine whether the transition is first-order or second-order. In this regard, for a uniaxial spin system such as the high-spin molecular magnet $`\mathrm{Mn}_{12}\mathrm{Ac}`$, two models have been investigated comprehensively: one with a transverse field and the other with an arbitrarily directed field , described by the Hamiltonians $`\mathcal{H}=-DS_z^2-H_xS_x`$ and $`\mathcal{H}=-DS_z^2-H_xS_x-H_zS_z`$ respectively. In the first case, by using the method of particle mapping and the Landau theory of phase transitions, Chudnovsky and Garanin have shown that the transition order changes from first to second when the field parameter $`h_x\equiv H_x/(2SD)`$ is 0.25. In the case of the model with an arbitrarily directed field, Garanin et al. have obtained the phase boundary line $`h_{xc}=h_x(h_z)`$ to show the whole phase diagram, in which $`h_{xc}(0)=0.25`$ in the unbiased case and $`h_{xc}\sim (1-h_z)^{3/2}`$ in the strongly-biased limit.
For a biaxial spin system such as the iron cluster $`\mathrm{Fe}_8`$, Liang et al. considered a model without an applied field, $`\mathcal{H}=K(S_z^2+\lambda S_y^2),(0<\lambda <1)`$ . Using the coherent spin state representation they have shown that the coordinate-dependent effective mass leads to the first-order transition and that the change between the first- and second-order transitions occurs at the value $`\lambda =0.5`$ .
In this paper we study the phase transition of the escape rate of the biaxial spin system with a longitudinal field. We will first derive an effective particle potential by mapping the spin Hamiltonian onto particle one, which is the first derivation for the present model. Then, with the help of the recently developed method for the criterion of the transition order , we find an analytical form of the phase boundary line from which a complete phase diagram for the order of the phase transition is obtained.
Consider a nanospin particle with an applied field $`H`$ along the easy axis. If the spin particle is a biaxial spin system with $`XOZ`$ easy plane anisotropy and the easy $`Z`$-axis in the $`XZ`$-plane the Hamiltonian can be described by
$$\mathcal{H}=-AS_z^2-BS_x^2-HS_z$$
(1)
where the anisotropy constants satisfy $`A>B>0`$. Our model is equivalent to $`\mathcal{H}=K(S_z^2+\lambda S_y^2)-HS_x,(\lambda <1)`$ if we set $`A=K,B=(1-\lambda )K`$. In the following, for convenience, we introduce the dimensionless anisotropy parameter $`b\equiv B/A`$ and field parameter $`\alpha \equiv H/SA,(0<\alpha <2)`$, where $`S`$ is the spin number. For the iron cluster $`\mathrm{Fe}_8`$ in Ref.5, $`S=10,A=0.316`$ K, and $`B=0.092`$ K. We can reduce this spin problem to a particle one . The equivalent Schrödinger-like equation is derived as
$$-\frac{1}{2m}\frac{d^2\mathrm{\Psi }}{dx^2}+V(x)\mathrm{\Psi }=E\mathrm{\Psi },$$
(2)
where $`m=1/(2A)`$,
$$\mathrm{\Psi }(x)=\left(\frac{\mathrm{cn}x}{\mathrm{dn}x}\right)^S\mathrm{exp}\left[\frac{\alpha S}{2\sqrt{1-b}}\mathrm{tanh}^{-1}\left(\sqrt{1-b}\,\mathrm{sn}x\right)\right]\mathrm{\Phi }(x)$$
(3)
with
$$\mathrm{\Phi }(x)=\sum _{\sigma =-S}^{S}\frac{C_\sigma }{\sqrt{(S-\sigma )!(S+\sigma )!}}\left(\frac{\mathrm{sn}x+1}{\mathrm{cn}x}\right)^\sigma $$
is the particle wave function, and $`V(x)`$ is the effective particle potential given by
$$\frac{V(x)}{A}=\frac{\alpha ^2S^2\mathrm{cn}^2x-2\alpha bS(2S+1)\,\mathrm{sn}x-4bS(S+1)}{4\,\mathrm{dn}^2x}$$
(4)
in which $`\mathrm{sn}x,\mathrm{cn}x,`$ and $`\mathrm{dn}x`$ are the Jacobian elliptic functions with modulus $`k^2=1-b`$. This potential is shown in Fig.1. The local minimum represents a metastable state of the spin system described by the Hamiltonian (1). The potential preserves the symmetry of a rotation from the $`-Z`$ to the $`+Z`$-axis. Thus, the escape from the local minimum $`x_m`$ to the global minimum $`-x_m`$ corresponds to the inversion of the spin magnetization vector. For large spin such that $`S(S+1)\approx \stackrel{~}{S}^2\equiv (S+1/2)^2`$, it has a maximum at the point $`x_0=\mathrm{sn}^{-1}[-\alpha /(2(1-b))]`$. For a given value of $`b`$ the barrier height decreases as $`\alpha `$ increases and vanishes at $`\alpha =2(1-b)`$, which corresponds to the coercive field.
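For reference, Eq. (4) can be evaluated directly with standard Jacobi elliptic routines; the parameter values below (the Fe8 anisotropy ratio $`b=0.092/0.316`$ quoted above and an illustrative field $`\alpha =0.5`$) are only an example:

```python
import numpy as np
from scipy.special import ellipj

def V_over_A(x, S=10, b=0.092 / 0.316, alpha=0.5):
    """Eq. (4): V(x)/A; the Jacobi functions sn, cn, dn carry modulus
    k^2 = 1 - b (scipy's parameter m = k^2).  alpha = 0.5 is illustrative;
    b and S are the Fe8 values quoted in the text."""
    sn, cn, dn, _ = ellipj(np.asarray(x, float), 1.0 - b)
    num = (alpha * S * cn)**2 - 2 * alpha * b * S * (2 * S + 1) * sn \
          - 4 * b * S * (S + 1)
    return num / (4 * dn**2)

# the barrier top sits at x0 = sn^{-1}[-alpha/(2(1-b))] and merges with the
# metastable minimum as alpha -> 2(1-b), the coercive field
```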
The type of the phase transition is determined by the behavior of the Euclidean-time oscillation period $`\tau (E)`$, where $`E`$ is the energy, near the bottom of the Euclidean potential, which corresponds to the top of the potential barrier . When $`\tau (E)`$ decreases monotonically as $`E`$ approaches the barrier top, the transition from the quantum to the classical regime is second-order. If, however, $`\tau (E)`$ is not a monotonic function of energy, a first-order quantum-classical transition takes place. One can also argue that the condition for the first-order transition can be obtained by looking at the behavior of the oscillation period in Euclidean time as a function of the oscillation amplitude near the barrier top. In this case the first-order transition appears when the amplitude-dependent period $`\tau (a)`$, where $`a`$ is the amplitude, is smaller than the zero-amplitude period $`\tau (0)`$, which corresponds to the solution near the position of the sphaleron solution . The important feature of these approaches is that they rely on the shape of the potential near the top of the barrier. Thus, parameterizing the amplitude as the coefficient of the perturbation expansion, a sufficient condition for the first-order transition can be derived . Below, we show that the latter approach leads to an analytical form of the phase boundary between first- and second-order transitions for the present model.
Expanding the potential $`V(x)`$ near $`x=x_0`$ up to fourth-order we obtain
$$V(z+x_0)\approx a_1(x_0)z^2+a_2(x_0)z^3+a_3(x_0)z^4,$$
(5)
where $`z=x-x_0`$, $`a_1(x_0)=V^{\prime \prime }(x_0)/2(<0)`$, $`a_2(x_0)=V^{\prime \prime \prime }(x_0)/6(>0)`$, and $`a_3(x_0)=V^{\prime \prime \prime \prime }(x_0)/24`$. Following references 8 and 9, the criterion for the first-order transition can be obtained from the condition $`\tau (a)-\tau (0)<0`$, which is expressed by
$$-\frac{15}{4}\frac{a_2^2(x_0)}{a_1(x_0)}+3a_3(x_0)<0.$$
(6)
By replacing the inequality with an equality we can find the equation of the phase boundary line. For large spin number this is obtained to be
$$\alpha _c=2(1-b_c)\sqrt{\frac{1-2b_c}{1+b_c}},$$
(7)
where $`\alpha _c`$ and $`b_c`$ are the critical values of the field and anisotropy parameters on the phase boundary, respectively. In Fig.2 we have plotted $`b_c`$ as a function of $`\delta _c\equiv 1-\alpha _c/2`$. This picture displays a complete phase diagram in the whole range of the applied field. From the condition in Eq.(6) the first-order transition exists below the line. From the picture we immediately see $`b_c=0.5`$ in the unbiased case $`\delta _c=1`$. This is the same as the previously obtained result and confirms that the present approach is correct. In the strongly biased case, $`\delta _c\to 0`$ and $`b_c\to 0`$ since $`\alpha _c\to 2`$. Thus, in this limit, by expanding Eq.(7) for small $`b_c`$ we find the linear behavior $`b_c\approx 0.4\delta _c`$.
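Both limits quoted above follow directly from Eq. (7); a quick numerical check:

```python
import numpy as np

b = np.linspace(1e-6, 0.5, 200001)
alpha_c = 2 * (1 - b) * np.sqrt((1 - 2 * b) / (1 + b))   # Eq. (7)
delta_c = 1 - alpha_c / 2

print(delta_c[-1])          # 1.0 at b_c = 0.5: the unbiased case
print(b[0] / delta_c[0])    # -> 0.4 as delta_c -> 0: b_c ~ 0.4 delta_c
```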
For the critical temperature at the boundary between the first- and second-order transitions we use the formula $`2\pi T_c=\sqrt{-V^{\prime \prime }(x_0)/m}`$. For the present potential this becomes
$$\frac{T_c}{\stackrel{~}{S}A}=\frac{2\sqrt{3}b_c}{\pi }\sqrt{\frac{1-b_c}{1+b_c}}.$$
(8)
In Fig.3 we have plotted the $`T_c/\stackrel{~}{S}A`$ vs. $`\delta _c`$ graph, where the relation in Eq.(7) has been used. In the unbiased case Eq.(8) gives $`T_c/\stackrel{~}{S}A=0.318`$, which coincides with the value calculated from Eq.(13) in Ref.6. In the strongly biased limit it shows the linear behavior $`T_c/\stackrel{~}{S}A\approx 0.442\delta _c`$.
Very recently the same spin system has also been considered , but with a slightly different model: $`\mathcal{H}=-DS_z^2+BS_x^2-HS_z,(D,B>0)`$. Comparing this with our model we realize $`D=\lambda A`$. By using a perturbative approach with respect to $`b\equiv B/D`$ they obtained a phase diagram in the whole range of field, from which the linear dependences of $`b_c`$ and $`T_c/SD`$ on $`\delta _c`$ in the strongly biased case can be found. Since the perturbation parameter is $`b`$, the validity of this approach is limited to the range of small values of $`b`$, which requires the strong-bias limit for the change in the order of the transition. On the other hand, there is no restriction on $`b`$ in our approach. We believe that the present approach is more rigorous and the results are improved.
Finally, we comment on the experimental observation of the results. Using the anisotropy constants given in Ref.5 we have $`b_c=0.29`$, and thus $`\alpha _c=0.81`$ from Eq.(7) for which the critical field is estimated to be 1.9 $`\mathrm{T}`$, and the coercive field is 3.4 $`\mathrm{T}`$. From Eq.(8) the transition temperature on the phase boundary can also be calculated, and we find $`T_c=0.79`$ $`\mathrm{K}`$.
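These estimates can be reproduced from Eqs. (7) and (8); the conversion to tesla assumes $`g=2`$, an assumption not stated explicitly in the text:

```python
import numpy as np

A_K, B_K, S = 0.316, 0.092, 10              # Fe8 constants from Ref. 5 (kelvin)
b = B_K / A_K                                # -> 0.29
alpha_c = 2 * (1 - b) * np.sqrt((1 - 2 * b) / (1 + b))   # Eq. (7) -> 0.81
T_c = 2 * np.sqrt(3) * b / np.pi * np.sqrt((1 - b) / (1 + b)) \
      * (S + 0.5) * A_K                      # Eq. (8), S~ = S + 1/2 -> 0.79 K

to_tesla = 1.3807e-23 / (2 * 9.274e-24)      # k_B/(g mu_B) with g = 2
print(alpha_c * S * A_K * to_tesla)          # critical field -> ~1.9 T
print(2 * (1 - b) * S * A_K * to_tesla)      # coercive field -> 3.3-3.4 T
```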
To summarize, we have investigated the phase transition of the escape rate from the metastable state in a nanospin system with a magnetic field applied along the easy axis. By using the particle mapping we derived an effective particle potential. We obtained an analytical form for the equation of the phase boundary line between the first- and second-order transitions, and thus a complete phase diagram. In the strongly biased case we found a linear dependence of $`b_c`$, the dimensionless anisotropy parameter, on the applied field. We also obtained a diagram for the crossover temperature as a function of the applied field; it too shows a linear relation. The results obtained here can be used as a guide for experimental observation.
# Why $`T_c`$ is too high when antiferromagnetism is underestimated? — An understanding based on the phase-string effect
## Abstract
It is natural for a Mott antiferromagnet in the RVB description to become a superconductor in the doped metallic regime. But the issue of the superconducting transition temperature is highly nontrivial, as the AF fluctuations, in the form of RVB pair-breaking, are crucial in determining the phase coherence of the superconductivity. Underestimated AF fluctuations in a fermionic RVB state are the essential reason for an overestimate of $`T_c`$ in the same system. We point out that by starting with a bosonic RVB description, where both the long-range and short-range AF correlations can be accurately described, the AF fluctuations can effectively reduce $`T_c`$ to a reasonable value through the phase-string effect, by controlling the phase coherence of the superconducting order parameter.
It was first conjectured by Anderson that the ground state of the two-dimensional (2D) $`t`$-$`J`$ model may be described by some kind of resonating-valence-bond (RVB) state. Perhaps the most natural consequence of an RVB description is superconductivity once holes are introduced into the system, which is otherwise a Mott insulator, as the preformed spin pairs become mobile, i.e., carry charge like Cooper pairs.
Even though the RVB state was proposed to explain the then-newly-discovered high-$`T_c`$ superconductor in cuprates, the mean-field estimate of $`T_c`$ turned out to be way too high ($`\sim 1000`$ $`K`$ at doping concentration $`\delta \sim 0.1`$) as compared to the experimental values ($`\sim 100`$ $`K`$). Another drawback of the earlier fermionic RVB theory (where spins are in the fermionic representation) is that the antiferromagnetic (AF) correlations are always underestimated, which becomes obvious in the low-doping limit where long-range AF ordering (LRAFO) cannot be naturally recovered. Even at finite temperature, where the LRAFO is absent, the spin-lattice relaxation rate calculated based on those RVB theories shows the wrong temperature dependence as compared to that well known for the Heisenberg model, indicating an absence of AF fluctuations.
Intuitively, a fermionic RVB state should become a BCS-type superconductor at finite doping, where the RVB pairs are able to move around carrying charge. But since the RVB pair-breaking process corresponds to AF fluctuations in the insulating phase while also representing Cooper pair-breaking in the superconducting state, it is not difficult to see why the underestimate of AF fluctuations in the fermionic RVB theory would generally be related to an overestimated $`T_c`$.
Of course, the above-mentioned drawback in describing antiferromagnetism does not apply to all RVB theories. There actually exists an RVB state which can describe the AF correlations extremely well. As shown by Liang, Doucot, and Anderson, the trial wavefunction of RVB spins in the bosonic representation can reproduce an almost exact ground-state energy at half-filling (which implies a very accurate description of short-range spin-spin correlations). A simple mean-field theory of the bosonic RVB studied by Arovas and Auerbach (usually known as the Schwinger-boson theory) can easily recover the LRAFO at zero temperature and reasonable behavior of the magnetic properties at finite temperature.
Thus, one may classify two kinds of RVB states based on whether the spins are described in the fermionic or bosonic representation. In principle, both representations should be mathematically equivalent due to the constraint that at each site there can be only one spin. But once one tries to do a mean-field calculation by relaxing such a constraint, the two representations will result in qualitatively different consequences: in the fermionic representation, even an exchange of two spins with the same quantum number will lead to a sign change of the wavefunction, as required by fermionic statistics. At half-filling, this is apparently redundant, as the true ground-state wavefunction only changes sign when two opposite spins at different sublattice sites exchange with each other, known as the Marshall sign, which can be easily incorporated into the bosonic RVB description. This explains the great success of the bosonic RVB mean-field theory over the fermionic ones at half-filling.
Since the bosonic RVB description of antiferromagnetism is proven strikingly accurate at half-filling, one may wonder why we cannot extend such a formalism to the doped case by literally doping the Mott-insulating antiferromagnet into a metal (superconductor). In fact, people have tried this kind of approach based on the so-called slave-fermion representation, but the mean-field theories always lead to the so-called spiral phases, which are inherently unstable against charge fluctuations. In other words, an instability boundary seems to prevent a continuous evolution of the mean-field bosonic RVB description into a short-ranged spin liquid state at finite doping.
This implies that some singular effect must have been introduced by doping which was overlooked in those theories. Indeed, it was recently revealed that a hole hopping on the antiferromagnetic background always leaves a string of phase mismatches (disordered Marshall signs) along its path, which is non-repairable at low energy (because the spin-flip term respects the Marshall sign rule). The implication of the existence of the phase string is straightforward: a hole going through a closed loop will acquire a nontrivial Berry phase, and a quasiparticle picture no longer holds here. This explains why the mean-field theory in the slave-fermion representation, where the topological effect of the phase string is smeared out by the mean-field approximation, always results in an unphysical spiral-phase instability.
This barrier can be immediately removed once the nonlocal phase-string effect is explicitly incorporated into the Schwinger-boson, slave-fermion representation through a unitary transformation, resulting in the so-called phase-string formulation, where the mean-field treatment generalized from the Schwinger-boson mean-field state at half-filling becomes workable at finite doping. A metallic phase with short-range spin correlations can then be obtained, in which the ground state is, not surprisingly, always superconducting.
What becomes special here is that the phase-string effect introduces a phase-coherence factor to the superconducting order parameter:
$$\mathrm{\Delta }_{ij}^{SC}\propto \rho _s^0\mathrm{\Delta }^se^{\frac{i}{2}(\mathrm{\Phi }_i^s+\mathrm{\Phi }_j^s)}$$
(1)
where $`\mathrm{\Delta }^s`$ denotes the mean-field RVB order parameter for bosonic spinons and $`\rho _s^0\propto \delta `$ is the bare superfluid density determined by the holons (both spinons and holons are bosonic in this representation); $`i`$ and $`j`$ refer to two nearest-neighbor sites. The phase-coherence factor $`e^{\frac{i}{2}\mathrm{\Phi }_i^s}`$ is related to the spin degrees of freedom as follows
$$\mathrm{\Phi }_i^s=\underset{l\ne i}{\sum }\text{Im ln }(z_i-z_l)\underset{\alpha }{\sum }\alpha n_{l\alpha }^b$$
(2)
with $`n_{l\alpha }^b`$ defined as the number operator of spinons with spin index $`\alpha `$ at site $`l`$. The physical interpretation of the phase-coherence factor (2) is that each spinon contributes a phase vortex (anti-vortex).
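This vortex interpretation can be checked directly from Eq.(2). The minimal sketch below is our construction, taking $`\alpha =\pm 1`$ for up/down spinons; it accumulates $`\mathrm{\Phi }^s`$ along a small loop and recovers a winding of $`2\pi `$ per enclosed spinon:

```python
import numpy as np

def Phi_s(z, spinons):
    """Phase field of Eq.(2) at complex position z.
    spinons: list of (z_l, alpha_l), alpha_l = +1/-1 for up/down spinons."""
    return sum(alpha * np.angle(z - zl) for zl, alpha in spinons)

spinons = [(0.0 + 0.0j, +1), (5.0 + 0.0j, -1)]   # one spinon of each spin

# Walk a loop enclosing only the first spinon; measure the winding of Phi.
loop = 0.5 * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 2001))
phi = np.unwrap([Phi_s(z, spinons) for z in loop])
print((phi[-1] - phi[0]) / (2.0 * np.pi))        # -> 1.0: one vortex enclosed
```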
At zero temperature, when all spinons are paired, so are those vortices and anti-vortices, such that the superconducting order parameter $`\mathrm{\Delta }^{SC}`$ achieves phase coherence. At finite temperature, free excited spinons, i.e., dissolved vortices (anti-vortices), tend to induce a Kosterlitz-Thouless-type transition once the “rigidity” of the condensed holons breaks down, which may be estimated to occur when the number of excited spinons becomes comparable to the holon number.
The superconducting transition temperature obtained this way is shown in Fig. 1 versus a spinon characteristic energy scale $`E_g`$. The definition of $`E_g`$ is shown in Fig. 2, where the local ($`\mathbf{q}`$-integrated) dynamic spin susceptibility as a function of energy is given at $`\delta =0.143`$ (solid curve) at zero temperature. As compared to the undoped case, a resonance-like peak emerges at a low energy $`E_g`$ due to the phase-string effect. The doping dependence of $`E_g`$ is illustrated in the inset of Fig. 2.
Therefore, in the bosonic RVB state where the AF correlations are well described, the superconducting transition temperature is essentially decided by the low-lying spin characteristic energy. According to Fig. 2, $`J\sim 100`$ $`meV`$ gives rise to $`E_g=41`$ $`meV`$ at $`\delta =0.15`$, consistent with the neutron-scattering data for such a compound. Then at the same $`E_g`$, one finds $`T_c\sim 100`$ $`K`$ according to Fig. 1, which is very close to the experimental number for the optimally doped $`YBCO`$ compound.
The overall picture goes as follows. The bosonic RVB order parameter $`\mathrm{\Delta }^s`$ controls the short-range spin correlations, which reflect the “rigidity” of the whole phase, covering both the undoped and doped regimes and both the superconducting and normal (strange) metallic states. On the other hand, $`T_c`$ is basically determined by the phase coherence: for those preformed RVB pairs to become a true superconducting condensate, the extra phase frustration introduced by doping has to be suppressed. Here we see how AF fluctuations and superconductivity interplay: the former, in the form of RVB pair-breaking fluctuations, cause strong frustration on the charge part through the phase-string effect, and their energy scale thus imposes an upper limit on the transition temperature of the latter. It is interesting to see that AF fluctuations and superconducting condensation do compete with each other, although the driving force of superconductivity already exists in the Mott antiferromagnet in the form of RVB pairing.
To summarize, even though it is very natural for an RVB pairing description of the Mott-insulating antiferromagnet to develop a superconducting condensation in the neighboring metallic regime, the issue of the superconducting transition temperature is highly nontrivial, as AF fluctuations in the form of the RVB pair-breaking process are the key effect scrambling the phase coherence of the superconductivity. The underestimated AF fluctuations in a fermionic RVB state are the essential reason for an overestimate of $`T_c`$ in the same system. We pointed out that by starting with a bosonic RVB description, in which both the long-range and short-range AF correlations can be accurately described, the AF fluctuations can effectively reduce $`T_c`$ to a reasonable value through the phase-string effect controlling the phase coherence of the superconducting order parameter.
###### Acknowledgements.
This talk is based on a series of work done in collaboration with D. N. Sheng and C. S. Ting. I would like to acknowledge the support by the Texas ARP program No. 3652707 and the State of Texas through the Texas Center for Superconductivity at University of Houston.
# The H i Column Density Distribution Function at $`z=0`$: the Connection to Damped Ly$`\alpha `$ Statistics
## 1 Introduction
High column density absorbers seen in the spectra of background QSOs are referred to as Damped Ly$`\alpha `$ (DL$`\alpha `$) systems if the observed H i column density exceeds the value of $`N_{\mathrm{HI}}=2\times 10^{20}\mathrm{cm}^{-2}`$. The DL$`\alpha `$ absorption lines are on the square-root part of the curve of growth where damping wings dominate the profile and column densities can be determined accurately by fitting the line profiles. Wolfe (1995) argues that these systems at high redshifts are gas-rich disks in the process of contracting to present-day spiral galaxies. This idea is supported by the fact that the characteristic velocity profiles of metal lines and Lyman series lines in DL$`\alpha `$ systems are similar to those of sightlines through spiral galaxies at $`z=0`$. More recently, detailed modeling of DL$`\alpha `$ absorption profiles by Prochaska & Wolfe (1998) has shown that the DL$`\alpha `$ systems are consistent with rapidly rotating, thick disks. Note however that alternative models, like protogalactic clumps coalescing into dark matter halos (Haehnelt et al. 1997, Khersonsky & Turnshek 1996), can also explain the kinematics. The cosmological mass density of neutral gas in DL$`\alpha `$ systems at high redshift is comparable to the mass density of luminous matter in galaxies at $`z=0`$ (e.g. Lanzetta et al. 1995).
One of the best known statistical results of the study of QSO absorption line systems is the column density distribution function (CDDF) of neutral hydrogen. The function describes the chance of finding an absorber of a certain H i column density along a random line of sight per unit distance. An observational fact from high-$`z`$ Ly$`\alpha `$ studies is that the differential CDDF \[$`f(N_{\mathrm{HI}})`$\] can be described by a single power law of the form $`f(N_{\mathrm{HI}})\propto N_{\mathrm{HI}}^{-\alpha }`$, where $`\alpha \approx 1.5`$ over ten orders of magnitude in column density (e.g. Tytler 1987, Hu et al. 1995) from $`10^{12}\mathrm{cm}^{-2}`$ (Ly$`\alpha `$ forest) to $`10^{22}\mathrm{cm}^{-2}`$ (DL$`\alpha `$).
An integration over the distribution function gives the total cosmological neutral gas density as a function of redshift. The H i gas density relates to $`f(N_{\mathrm{HI}})`$ as $`\mathrm{\Omega }_{\mathrm{HI}}\propto \int _{N_1}^{N_2}N_{\mathrm{HI}}f(N_{\mathrm{HI}})𝑑N_{\mathrm{HI}}`$ and it is readily seen that $`\mathrm{\Omega }_{\mathrm{HI}}(N_{\mathrm{HI}})\propto N_2^{0.5}`$ if $`\alpha =1.5`$ and $`N_2\gg N_1`$. This implies that although the high column density systems are observationally rare, they contain the bulk of the neutral gas mass in the Universe. Because so few DL$`\alpha `$ systems are known ($`\sim 80`$), the uncertainties on $`\mathrm{\Omega }_{\mathrm{HI}}`$ and the CDDF for high column densities are large, especially if the measurements are split up into different redshift bins. But following the CDDF as a function of redshift is certainly very important in constraining models of complicated physical processes like star formation or gas feedback to the interstellar medium.
There are several reasons why the determination of $`f(N_{\mathrm{HI}})`$ at the present epoch is difficult. Due to the expansion of the Universe the expected number of absorbers along a line of sight decreases with decreasing redshift, the Ly$`\alpha `$ line is not observable from the ground for redshifts smaller than 1.65, and starlight and dust in the absorbing foreground galaxies hinder the identification of the background quasars. Gravitational lensing may also play a role as it can bring faint quasars into the sample which otherwise would not have been selected (e.g. Smette et al. 1997).
At the present epoch the largest repositories of neutral gas are clearly galaxies. No instance of a free-floating H i cloud not confined to the gravitational potential of a galaxy has yet been identified. It is therefore justified to use our knowledge of the local galaxy population to estimate the shape and normalization of the CDDF.
## 2 How to determine $`f(N_{\mathrm{HI}})`$ at $`z=0`$?
A simple but illustrative and instructive method is to take the analytical approach. This is illustrated in Figure 1. Here we represent the radial distribution of the neutral hydrogen gas in galaxies by both an exponential and a Gaussian model. The differential cross sectional area of an inclined ring with a column density in the range $`N`$ to $`N+dN`$ is given by $`d\mathrm{\Sigma }(N,i)=2\pi r(N)dr\mathrm{cos}i`$, where $`r(N)`$ is the radius at which a column density $`N`$ is seen, and $`i`$ is the inclination of the ring. We assume that the luminosity function $`\varphi (M)`$ of the local galaxy population can be described by a Schechter function as indicated in the upper right panel of Figure 1. The local $`f(N)`$ can be derived from $`\varphi (M)`$ and the area function $`d\mathrm{\Sigma }(N)`$ by taking the integral
$$f(N)=\frac{c}{H_0}\frac{_{M_{\mathrm{min}}}^{M_{\mathrm{max}}}\varphi (M)d\mathrm{\Sigma }(N)_i𝑑M}{dN},$$
(1)
where the subscript $`i`$ indicates an average over all inclinations. To evaluate this integral, the area function, or more generally the radial H i distribution, needs to be related to $`M`$. Here we adopt the relation $`\mathrm{log}M_{\mathrm{HI}}=A+BM_B`$ (following Rao & Briggs 1993) and assume that the central gas surface density in disks is not dependent on morphological type or luminosity. The resulting $`f(N)`$ for both models is shown in the lower left panel. The integral H i gas density in $`h_{100}\mathrm{g}\mathrm{cm}^{-3}`$ as a function of column density is shown in the lower right panel. This function can be calculated with $`\rho _{\mathrm{HI}}(N)=m_\mathrm{H}N\frac{H_0}{c}f(N)dN`$, where $`m_\mathrm{H}`$ is the mass of the hydrogen atom.
The Gaussian models yield a CDDF of the form $`f(N)\propto N^{-\alpha }`$, where $`\alpha =1`$ for $`N`$ smaller than the maximum column density seen in a face-on disk ($`N_{\mathrm{max}}`$) and $`\alpha =3`$ for the higher values of $`N`$. The exponential model gives a smoother function. The logarithmic slope is approximately $`-1.2`$ around $`N=10^{20}\mathrm{cm}^{-2}`$, slowly changing to $`-3`$ at higher column densities. In fact, it was shown already by Milgrom (1988) that $`f(N)\propto N^{-3}`$ for $`N>N_{\mathrm{max}}`$ for any radial surface density distribution. The lower right panel clearly illustrates that an overwhelming part of the total H i mass in the local Universe is associated with column densities close to $`N_{\mathrm{max}}`$.
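These slopes are straightforward to reproduce numerically. The Monte-Carlo sketch below is ours (all numbers illustrative): it throws random sightlines through randomly inclined, infinitely thin Gaussian disks and recovers the two power laws:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
N_face = 1e21               # face-on central column density N_max [cm^-2]
sigma = 1.0                 # Gaussian disk scale length (arbitrary units)

cos_i = rng.uniform(0.0, 1.0, n)                      # random orientations
r = 4.0 * sigma * np.sqrt(rng.uniform(0.0, 1.0, n))   # uniform over disk area
N_obs = N_face * np.exp(-r**2 / (2.0 * sigma**2)) / cos_i
w = cos_i                                             # sky-projected area weight

edges = np.logspace(18.0, 23.0, 51)
h, _ = np.histogram(N_obs, bins=edges, weights=w)
N_c = np.sqrt(edges[1:] * edges[:-1])
f = h / np.diff(edges)                                # f(N) ~ dSigma/dN

def slope(lo, hi):
    m = (N_c > lo) & (N_c < hi) & (f > 0)
    return np.polyfit(np.log10(N_c[m]), np.log10(f[m]), 1)[0]

print(slope(1e19, 3e20))   # -> about -1 below N_max
print(slope(3e21, 3e22))   # -> about -3 above N_max (Milgrom 1988)
```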
In addition to these simple models we also show the effect of disk truncation on the CDDF. The thin dashed line illustrates a Gaussian disk truncated at $`N_{\mathrm{HI}}=10^{19.5}\mathrm{cm}^{-2}`$, the level below which photo-ionization by the extragalactic UV-background is normally assumed to be important (e.g. Corbelli & Salpeter 1993, Maloney 1993). It appears that this truncation only seriously affects the CDDF below $`N_{\mathrm{HI}}=10^{19.5}\mathrm{cm}^{-2}`$. No significant changes occur at higher column densities.
A more reliable method than this analytical approach is to determine $`f(N_{\mathrm{HI}})`$ by using observed H i distributions. 21cm maps of nearby galaxies routinely reach sensitivity limits comparable to column densities that typify DL$`\alpha `$ absorbers. It therefore seems natural to calculate $`f(N_{\mathrm{HI}})(z=0)`$ simply by adding cross sectional areas as a function of $`N_{\mathrm{HI}}`$ for a large sample of galaxies for which 21cm synthesis observations are available. However, this approach is complicated by the fact that there is an enormous variation in sensitivity and angular resolution of the 21cm maps, and by the problem of choosing a fair and complete sample of galaxies. Most galaxies that have been studied extensively in the 21cm line were selected for having either a large H i diameter, so that the rotation curve can be sampled out to large galactocentric radii, or peculiarities such as polar rings or warps. Thus, most samples for which 21cm synthesis data exist are not representative of the galaxy population of the local Universe and would likely be biased against dwarf and low surface brightness galaxies.
## 3 The Ursa Major Cluster
The Ursa Major cluster of galaxies, studied extensively by Verheijen (1997), forms an ideal sample for an unbiased study of the column density distribution function. The Ursa Major cluster, at a distance of 15 Mpc, is different in many respects from famous, equally distant clusters like Virgo and Fornax. Ursa Major’s member galaxies show no concentration towards a central condensation and their velocity dispersion is exceptionally low, approximately 150 km/s. The estimated crossing time is half a Hubble time and hence the galaxies do not seem to be seriously affected by tidal interactions. In addition to this, there is a predominance of late type galaxies and the morphological mix of galaxies is indistinguishable from that in the field. This combination of properties implies that the Ursa Major cluster is in fact an overdensity of galaxies and not comparable to classical clusters of galaxies. This justifies the use of the Ursa Major cluster for the study of the shape of the CDDF of neutral hydrogen in the local Universe.
The Ursa Major cluster as defined by Tully et al. (1996) comprises a volume of 80 Mpc<sup>3</sup>, within which 80 galaxies are identified to date. For a complete sample of 62 galaxies intrinsically brighter than the Small Magellanic Cloud ($`M_B=-16.5^\mathrm{m}`$) 21cm synthesis observations have been performed with the WSRT<sup>1</sup><sup>1</sup>1The WSRT is operated by the Netherlands Foundation for Research in Astronomy (NFRA/ASTRON), with financial support by the Netherlands Organization for Scientific Research (NWO). H i has been detected by the WSRT in 49 galaxies. Details on observations and data reduction are described in Verheijen (1997).
An obvious advantage of using the UMa sample for this study is that all the member galaxies are at approximately the same distance. Therefore, the spatial resolution of the synthesis observations is constant for the whole sample. This simplifies the problem of assessing the influence of resolution on the determination of the CDDF and the comparison with the CDDF at high redshift.
The shape of the column density distribution function is determined by counting in each H i map the number of pixels per logarithmic bin of 0.1 dex in column density. The solid angle covered by pixels of a certain column density is then determined by multiplying the number of pixels by the angular pixel size, which varies slightly from galaxy to galaxy.
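In code the estimator is just a weighted histogram; the sketch below is ours (function names and the toy map are illustrative):

```python
import numpy as np

def cddf_shape(maps, pixel_sr, logN_edges):
    """Solid angle per logarithmic column-density bin, summed over H I maps.
    maps: list of 2-D arrays of N_HI [cm^-2], blanked pixels set to NaN;
    pixel_sr: the angular pixel size of each map [sr]."""
    omega = np.zeros(len(logN_edges) - 1)
    for m, px in zip(maps, pixel_sr):
        good = np.isfinite(m) & (m > 0)
        h, _ = np.histogram(np.log10(m[good]), bins=logN_edges)
        omega += h * px                  # solid angle per bin
    return omega                         # shape of f(N), not yet normalized

logN_edges = np.arange(19.0, 22.05, 0.1)   # 0.1 dex bins, as in the text
toy_map = np.full((64, 64), 1e20)          # placeholder map for a dry run
print(cddf_shape([toy_map], [2.4e-11], logN_edges).max())
```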
The disadvantage of using a galaxy sample taken from a clear cosmic overdensity is that the CDDF is not automatically normalized. If we naively assumed that the Ursa Major cluster is a representative part of the nearby Universe, we would overestimate the normalization of the CDDF by roughly a factor of 12. This factor is obtained by comparing the H i mass function of the cluster with that of the field galaxy population (Zwaan et al. 1997). The shape of the Ursa Major mass function is indistinguishable from that of the field, but the normalization, $`\theta ^{*}`$, is larger by a factor of $`12`$. Ideally, one would use a sample of galaxies with well understood selection criteria so that the normalization would occur automatically. Unfortunately, there are no such samples available for which H i synthesis observations with sufficient angular resolution have been performed. The HIPASS survey, a blind 21cm survey of the whole southern sky, will eventually yield a suitable galaxy sample for this purpose, if a representative subsample is followed up with the ATCA to obtain high spatial resolution maps.
There are several methods for normalizing the UMa CDDF. By assuming a local luminosity function (LF) or H i mass function (HIMF), each galaxy could be given a weight according to its absolute magnitude or H i mass. However, this method introduces extra uncertainty in the derived CDDF, due to uncertainties in the exact shape and normalization of the LF and the HIMF. Our preferred method of normalizing the CDDF is to scale the complete function, not the individual contributors to it. This can be achieved by scaling the integral H i mass density that is contained under the CDDF:
$$\rho _{\mathrm{HI}}=\int _{N_{\mathrm{min}}}^{N_{\mathrm{max}}}m_\mathrm{H}N\frac{H_0}{c}f(N)𝑑N.$$
(2)
By means of a blind 21cm survey Zwaan et al. (1997) determined $`\rho _{\mathrm{HI}}=5.8\times 10^7h_{100}\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}`$, a result that is in excellent agreement with earlier estimates based on optically selected galaxies. Note that dependencies on $`H_0`$ disappear in the final specification of the CDDF.
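Concretely, the normalization is a single scale factor fixed by Eq.(2); a minimal sketch (ours; cgs constants, $`h_{100}=1`$):

```python
import numpy as np

m_H, M_sun, Mpc = 1.6726e-24, 1.989e33, 3.0857e24   # cgs
c_over_H0 = (2.998e5 / 100.0) * Mpc                 # cm, for H0 = 100 km/s/Mpc
rho_target = 5.8e7 * M_sun / Mpc**3                 # Zwaan et al. (1997), g cm^-3

def normalize_cddf(N, f_shape, dN):
    """Scale an unnormalized f(N) so that the integral in Eq.(2) = rho_target."""
    rho_raw = np.sum(m_H * N * f_shape * dN) / c_over_H0
    return f_shape * (rho_target / rho_raw)
```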
## 4 The Column Density Distribution Function
Figure 2 shows the CDDF determined from the 21cm observations of the Ursa Major sample. From left to right the function is shown for three different resolutions of the H i maps: $`15^{\prime \prime }`$, $`30^{\prime \prime }`$, and $`60^{\prime \prime }`$. The solid line is the determined CDDF; the dashed lines indicate the quality of the measured column densities. Each pixel in the H i maps has an estimate of the signal to noise level assigned to it. In the determination of the CDDF we calculated an average S/N level for each bin in column density by averaging the S/N ratios for the individual pixels. The dashed lines show the average $`1\sigma `$ errors on the column densities and should be interpreted as horizontal errorbars. Nonetheless, they clearly overestimate the real uncertainties on the CDDF as many pixels are used in each bin (2500 independent beams for full resolution). The lines merely serve as an indicator of the quality of the measurements at each resolution. The thin solid line represents the CDDF for a Gaussian model, where $`f(N_{\mathrm{HI}})\propto N_{\mathrm{HI}}^{-1}`$ for $`N_{\mathrm{HI}}<10^{21}\mathrm{cm}^{-2}`$ and $`f(N_{\mathrm{HI}})\propto N_{\mathrm{HI}}^{-3}`$ for higher column densities.
When comparing the CDDFs at different resolution, it appears that the highest resolution maps yield the smoothest CDDF. This occurs because there the measurements of column density have the lowest S/N ratios. The low resolution (but high S/N) CDDF is in excellent agreement with the Gaussian model for $`10^{20}<N<10^{21.5}`$, but for higher column densities, the measured curve drops below the model since high column density peaks are smeared away. Going to higher resolutions leads to better agreement between the measured curve and the model for the highest $`N_{\mathrm{HI}}`$, and at $`15^{\prime \prime }`$ resolution the CDDF follows the $`N_{\mathrm{HI}}^{-3}`$ distribution up to $`10^{21.9}\mathrm{cm}^{-2}`$.
Besides beam smearing two other effects can cause a deviation from the $`N_{\mathrm{HI}}^{-3}`$ function. Firstly, the calculations assume that the gaseous disks are infinitely thin. Observations show that the bulk of the H i indeed resides in a thin layer with axis ratio $`<0.1`$ (Rupen 1991). The thin disk approximation is therefore valid for moderately inclined disks. However, the highest column densities in the models arise in highly inclined thin disks. A small degree of puffiness will prevent these high column densities from being observed. The second effect is H i self absorption. The theoretical calculation of the CDDF is based on the assumption that the optical depth of the neutral gas layer is negligible. Column densities much higher than the maximal column density in a face-on galaxy can only be seen in a highly inclined disk when the gas is optically thin. It is remarkable that the full resolution CDDF follows the $`f(N)\propto N_{\mathrm{HI}}^{-3}`$ line up to $`N_{\mathrm{HI}}=10^{21.9}`$, well above the value where H i self absorption is normally assumed to set in. For example, Dickey & Lockman (1990) calculate that an H i cloud with $`T=50\mathrm{K}`$ and an FWHM velocity dispersion of $`10\mathrm{km}\mathrm{s}^{-1}`$ becomes optically thick ($`\tau =1`$) at column densities $`N=10^{21}\mathrm{cm}^{-2}`$.
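That number is easy to verify from the standard 21cm relation $`N_{\mathrm{HI}}=1.823\times 10^{18}T_s\int \tau dv`$, with $`\int \tau dv=\sqrt{\pi /4\mathrm{ln}2}\tau _{\mathrm{peak}}\mathrm{\Delta }v_{\mathrm{FWHM}}`$ for a Gaussian line; a one-line check (ours):

```python
import numpy as np

def tau_peak(N_HI, T_spin=50.0, fwhm=10.0):   # [cm^-2], [K], [km/s]
    """Peak optical depth of a Gaussian 21cm line."""
    return N_HI / (1.823e18 * T_spin * np.sqrt(np.pi / (4.0 * np.log(2.0))) * fwhm)

print(tau_peak(1e21))   # -> 1.03: tau = 1 is indeed reached near 10^21 cm^-2
```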
Also shown in Figure 2 are the measurements of $`f(N_{\mathrm{HI}})`$ at high redshifts as determined by Storrie-Lombardi et al. (1997). We choose not to split up their high-$`z`$ sample into different redshift bins in order to retain reasonable signal to noise. The median redshift of the total DL$`\alpha `$ sample is $`z=2.5`$. A value of $`q_0=0.5`$ has been used here. Lower values of $`q_0`$ would not significantly change the slope of $`f(N_{\mathrm{HI}})`$ but would decrease the normalization by approximately a factor of 2. Strong redshift evolution of the CDDF from $`z=2.5`$ to the present is apparent. The intersection cross-section for H i column density $`<10^{21.2}\mathrm{cm}^{-2}`$ has decreased by a factor of 6 (factor 3 for $`q_0=0`$) from $`z=2.5`$ to $`z=0`$. Higher column densities show a larger decrease, the evolution accounting for a factor of 10 (5 for $`q_0=0`$). Lanzetta et al. (1995) report still stronger evolution of the higher column densities at higher redshift, although the highest column densities suffer from small number statistics and the effect is hardly seen by Storrie-Lombardi et al. (1997). The strong evolution of the higher column densities can be understood if gas consumption by star formation occurs most rapidly in regions of high neutral gas density (Kennicutt et al. 1994).
Rao & Briggs (1993) evaluated the CDDF at the present epoch by analyzing Arecibo observations of a sample of 27 galaxies with optical diameters in excess of $`7^{\prime }`$. Double-Gaussian fits to the observed radial H i distribution were used to calculate $`f(N_{\mathrm{HI}})`$. The disadvantage of this method is that the Gaussian fits automatically introduce the $`N_{\mathrm{HI}}^{-1}`$ behavior for low $`N_{\mathrm{HI}}`$ and $`N_{\mathrm{HI}}^{-3}`$ for high $`N_{\mathrm{HI}}`$. In the present study no modeling has been applied. The location of the change of the slope and the normalization are in excellent agreement between Rao & Briggs’ work and the Ursa Major determination.
## 5 Contribution of Low Surface Brightness Galaxies
It has been argued in the literature that low surface brightness (LSB) galaxies might contribute a considerable H i cross section. In particular, Linder (1998) explores a scenario in which the outskirts of galaxies are responsible for most of the cross section for low column density neutral gas ($`N_{\mathrm{HI}}<10^{20.3}\mathrm{cm}^{-2}`$). She concludes that Ly$`\alpha `$ absorber counts at low redshifts can be explained if LSB galaxies of moderate absolute luminosity with extended low density gas disks are included in the analysis. Contrary to this view, Chen et al. (1998) claim that extended disks of luminous galaxies can account for most of the observed Ly$`\alpha `$ lines below $`N_{\mathrm{HI}}=10^{20.3}\mathrm{cm}^{-2}`$. The contribution of dwarf and LSB galaxies to the cross section for high column density H i is also unclear. For instance, Rao & Turnshek (1998) show that there are no luminous spiral galaxies in the vicinity of the quasar OI 363, in whose spectrum they identify two low-$`z`$ DL$`\alpha `$ systems.
Here we evaluate the contribution of LSB galaxies to the cross section for high column density gas at $`z=0`$. First we have to address the problem of completeness. The Ursa Major sample is essentially a magnitude limited sample. Selection effects against LSB galaxies are therefore to be expected. Tully & Verheijen (1997) discuss the completeness of the sample by plotting the observed central surface brightness against the exponential disk scale length. Theoretical approximations of the visibility limits seem to describe the boundaries of the observed sample satisfactorily. We apply the same visibility limits to the H i selected galaxy sample of Zwaan et al. (1997) to estimate what fraction of the H i mass density in the Ursa Major cluster could be missed in the present study. It appears that galaxies below the optical detection limits of our Ursa Major sample probably contain 10% of the total H i density of the cluster. Not surprisingly, most of this missed H i density resides in LSB galaxies. Following Tully & Verheijen (1997), the separation between LSB and HSB galaxies is made at an extrapolated central surface brightness of $`18.5\mathrm{mag}\mathrm{arcsec}^{-2}`$ in the $`K^{\prime }`$-band, which roughly compares to $`22.0\mathrm{mag}\mathrm{arcsec}^{-2}`$ in the $`B`$-band.
Figure 3 illustrates the contribution of LSB galaxies to the cross section for high column density gas, relative to the total galaxy population. We have corrected for the incompleteness by adding extra cross section for the LSB galaxies, equally over all column densities, in such a way that the mass density in these galaxies increases by an amount equal to 10% of the total H i density. The left panel shows the CDDF, the right panel shows the cosmological mass density of H i as a function of column density. The full resolution data are used. LSB galaxies do not make a significant contribution to the cross section for column densities higher than $`N_{\mathrm{HI}}=10^{21.3}\mathrm{cm}^{-2}`$. Below that value they are responsible for approximately 25% of the cross section. The right panel shows that LSB galaxies make a minor contribution to the local neutral gas density, a conclusion very much in concordance with the results of Briggs (1997), Zwaan et al. (1997), Sprayberry (1998) and Côté et al. (1998).
## 6 Conclusions and Discussion
We have used the present knowledge of the nearby galaxy population to estimate the H i column density distribution function at $`z=0`$. It is shown that $`f(N_{\mathrm{HI}})`$ undergoes strong redshift evolution from $`z\approx 2.5`$ to the present, especially at the high column densities. The observed evolution in $`f(N_{\mathrm{HI}})`$ critically depends on whether the census of H i in the local Universe is complete. Surveys in H i and the optical indicate that the density of visible light and neutral gas is dominated by luminous, high surface brightness galaxies. The H i surveys routinely reach column density limits much lower than what is required to detect the $`z=0`$ counterparts of DL$`\alpha `$ systems. Since H i mass functions published to date typically lose sensitivity below $`M_{\mathrm{HI}}=10^7\mathrm{M}_{\odot }`$, the region of parameter space still open to hide a large amount of high column density gas is that of low H i masses. Observations to measure the space density of these small H i masses (H i clouds and extreme LSB dwarf galaxies) and to evaluate to what extent they contribute to the H i density and the CDDF of the local Universe are important next steps.
## References
Briggs, F.H. 1997, ApJ, 484, 618
Chen, H.-W., Lanzetta, K.M., Webb, J.K., & Barcons, X. 1998, ApJ, 498, 77
Corbelli, E. & Salpeter, E.E. 1993, ApJ, 419, 104
Côté, S., Broadhurst, T., Loveday, J., & Kolind, S. 1998, astro-ph/9810470
Dickey, J.M. & Lockman, F.J. 1990, ARA&A, 28, 215
Haehnelt, M.G., Steinmetz, M., & Rauch, M. 1998, ApJ, 495, 647
Hu, E.M., Kim, T.-S., Cowie, L.L., Songaila, A., & Rauch, M. 1995, AJ, 110, 1526
Kennicutt, R.C. 1998, ApJ, 498, 541
Khersonsky, V.K. & Turnshek, D.A. 1996, ApJ, 471, 657
Lanzetta, K.M., Wolfe, A.M., & Turnshek, D.A. 1995, ApJ, 440, 435
Linder, S.M. 1998, ApJ, 495, 637
Maloney, P. 1993, ApJ, 414, 41
Milgrom, M. 1988, A&A, 202, L9
Prochaska, J.X. & Wolfe, A.M. 1997, ApJ, 487, 73
Rao, S., & Briggs, F. H. 1993, ApJ, 419, 515
Rupen, M.P. 1991, AJ, 102, 48
Smette, A., Claeskens, J.F., & Surdej, J. 1997, NewA, 2, 53
Sprayberry, D. 1998, these proceedings
Storrie-Lombardi, L.J., Irwin, M.J., & McMahon, R.G. 1996, MNRAS, 282, 1330
Tully, R.B., Verheijen, M.A.W., Pierce, M.J., Huang, J.S., & Wainscoat, R.J. 1996, AJ, 112, 2471
Tully, R.B. & Verheijen, M.A.W. 1997, ApJ, 484, 145
Tytler, D. 1987, ApJ, 321, 49
Verheijen, M.A.W. 1997, Ph.D. thesis, Univ. Groningen
Wolfe, A.M. 1995, in QSO Absorption Lines, ed. G. Meylan
Zwaan, M.A., Briggs, F.H., Sprayberry, D., & Sorar, E. 1997, ApJ, 490, 173
# Correlated variability of Mkn 421 at X-ray and TeV wavelengths on timescales of hours.
## 1 Introduction
It is now widely recognized that the strong and variable non- thermal emission observed in BL Lac objects is due to a relativistic jet oriented at a small angle to the line of sight. The origin of such jets is not understood at present and observations of the broad band continuum offer a means to pin down radiation processes and derive the physical parameters of the jet including the bulk Lorentz factor of the flow.
Mkn 421 is the brightest BL Lac object at X-ray and UV wavelengths and the first extragalactic source discovered at TeV energies (Buckley, this workshop). The emission up to X-rays is thought to be due to synchrotron radiation from high energy electrons in the jet, while it is likely that gamma-rays from GeV to TeV energies derive from the same electrons via inverse Compton scattering of soft photons (e.g., Ulrich, Maraschi and Urry 1997, and refs therein).
If the above model is correct, a change in the density and/or spectrum of the high energy electrons is expected to produce simultaneous variations at the frequencies emitted by the same electrons through the two processes. In particular, the two peaks present in the broad band spectral energy distribution (SED) should “correspond” to electrons of the same energy. Hence, a correlation between X-ray and TeV emission is expected.
Simultaneous observations in the above two bands indeed provided significant evidence of correlation on relatively long timescales but, due to insufficient sampling, did not probe short timescales (Catanese et al. 1997, Buckley et al. 1996). A new campaign was organized in 1998 to obtain continuous coverage in X-rays for at least one week with ASCA, complemented by other space and ground based telescopes. Preliminary results are reported at this conference by Takahashi and Urry. Here we discuss observations obtained between April 21 – 24 with the BeppoSAX satellite and the Whipple Cherenkov telescope, preceding the start of the ASCA observations.
## 2 Observations
### 2.1 BeppoSAX observations
The scientific payload carried by BeppoSAX is fully described in Boella et al. (1997a). The data of interest here derive from three coaligned instruments, the Low Energy Concentrator Spectrometer (LECS, 0.1-10 keV, Parmar et al. 1997), the Medium Energy Concentrator Spectrometer (MECS, 2-10 keV, Boella et al. 1997b) and the Phoswich Detector System (PDS, 12-300 keV, Frontera et al. 1997).
Observations with BeppoSAX were scheduled prior to, but partially overlapping with, the ASCA ones in order to extend the interval of time covered and to allow intercalibration. The data reduction for the PDS was done using the XAS software (Chiappetti and Dal Fiume 1997), while for the LECS and MECS linearized cleaned event files generated at the BeppoSAX Science Data Centre (SDC) were used. No appreciable difference was found extracting the MECS data with the XAS software. Light curves were accumulated from each instrument with the usual choices for extraction radius and background subtraction, as described in Chiappetti et al. 1998.
### 2.2 Whipple observations
The observations of gamma-rays in the TeV energy bands were made with the Whipple Collaboration’s 10m Atmospheric Cherenkov Telescope (Cawley et al., 1990; see also Buckley this Volume).
The Whipple camera utilizes fast circular-face photomultiplier tubes arranged in a hexagonal pattern, with intertube spacing of 0.25 deg. In 1997 the camera was upgraded to have 331 pixels giving a total field of view of 4.8 deg. The trigger condition for this camera was that any two of the 331 phototubes register a signal $`>`$ 40 photoelectrons within the coincidence overlap time of 8 nsec. Light-cones, which minimize the dead-space between the phototubes and reduce the albedo effect, are normally used with this camera but were not in place for the observations reported here. The absence of the light-cones as well as the reduced reflectivity of the mirrors (due to exposure to the elements) resulted in a somewhat higher energy threshold than usual for these observations.
Observations were taken on the nights of April 21, 22, 23 and 24, 1998, prior to the major observing campaign with ASCA which is reported elsewhere (Takahashi this Volume). After the detection of a strong flux on April 21 during an initial ON/OFF run (the source is tracked continuously for 28 minutes and then an equivalent amount of time is spent observing a region offset by 30 minutes in Right Ascension over the same range of elevations and azimuth angles), the subsequent observations were mostly taken in the TRACKING mode to maximize coverage. In the latter mode the source is tracked continuously and the background is estimated from the off-axis events.
To cover an observation period of 4 hours, observations were made over a large range of zenith angles. Because the collection area and energy threshold increase with zenith angle, the observed gamma-ray rates are convolved with a changing collection area and energy threshold. In addition, the gamma-ray selection is a function of zenith angle. Hence, the rate variations are not linearly related to the observed flux variations of the source. Thus it was necessary to determine the collection areas so that a light curve with integral fluxes above a single energy could be derived. Unfortunately, contemporaneous observations of the Crab Nebula were not available with sufficient statistics over all the zenith angles, so this analysis relies entirely on Monte Carlo shower simulations.
The Monte Carlo code (ISU simulation package) was used to generate gamma-ray induced showers at zenith angles of 20 deg. (45000 events), 45 deg. (25000 events) and 55 deg. (25000 events). The common energy range of all three collection areas covers the range from 2 TeV up to tens of TeV. For comparing rate measurements at the three different zenith angle regimes, it is necessary to set a common energy threshold and to calculate the different collection areas. This is done here by normalizing the rate measurements at 45 and 55 deg. to the collection area at z = 20 deg. and by setting a threshold of 2 TeV, where the telescope has good sensitivity in all three zenith angle ranges. The results reported here are based on limited statistics and are therefore preliminary. The aim of this analysis is to derive a normalized flux as a function of time rather than absolute fluxes and energy spectra.
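Schematically, the normalization amounts to dividing each 28-minute rate by the zenith-dependent collection area above 2 TeV and rescaling to the z = 20 deg. value. The sketch below is ours; the relative areas are placeholders standing in for the actual ISU Monte-Carlo results, which are not tabulated here:

```python
import numpy as np

zen_mc = np.array([20.0, 45.0, 55.0])   # deg, simulated zenith angles
area_rel = np.array([1.0, 2.5, 4.0])    # relative A_eff(>2 TeV); placeholders

def flux_norm(rate, zenith):
    """28-min gamma-ray rate -> flux (>2 TeV) normalized to z = 20 deg."""
    return rate / np.interp(zenith, zen_mc, area_rel)

print(flux_norm(1.0, 20.0), flux_norm(2.5, 45.0))   # equal normalized fluxes
```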
## 3 Results
A pronounced, well-defined flare event is seen during the first day of observation with BeppoSAX. The light curves in 3 energy bands (0.1-2 keV, 4-10 keV, 12-26 keV), normalized to their respective means, are plotted in Fig. 1. The selected intervals were chosen so as to represent well-separated “effective” energies with reasonable statistics. Given the source average spectrum and the instrumental response we can estimate them as $`\sim 1`$ keV for the LECS light curve in the 0.1 to 2 keV range, $`\sim 4`$ keV for the MECS light curve in the 4-10 keV band and 15 keV for the PDS light curve (15-26 keV). Although the amplitude of the X-ray light curves increases with energy, the peak to average intensity ratio is close to 2 in all X-ray bands (see Fig. 1). Further analysis is in progress with a view to quantifying possible systematic differences in the shape of the light curves at different energies.
The event rates per 28 min interval from the Whipple Cherenkov Telescope above the 2 TeV threshold for the four nights are also shown in Fig. 1, normalized to the average over the four nights. A clear peak is present with an amplitude of a factor 4 and a halving time of about 1 hour. As discussed above, the results are preliminary due to the use of Monte Carlo simulations to estimate the telescope collecting area at different zenith angles.
The 0.1-2 keV, 4-10 keV and the 2 TeV peaks are simultaneous with each other within one hour, but the halving time of the TeV light curve seems definitely shorter than that of the LECS and MECS light curves. The 12-26 keV light curve seems to peak later, but this is uncertain due to the limited statistics. The significance of possible leads/lags needs further study.
## 4 Discussion and Conclusions
The strong correlation between the TeV and X-ray flares on short time-scales, demonstrated by these data for the first time, supports models in which the high energy radiation arises from the same population of high energy electrons that produce the X-ray flare via synchrotron radiation and in particular from the same spatial region. The most likely mechanism for the production of the TeV photons is inverse Compton scattering of soft photons. In particular, on the basis of the simultaneity of the peaks, those electrons producing the 4-10 keV light curve seem to be the best candidates for producing also the TeV flare.
The fact that the decay of the TeV flux is faster than that of the keV flux could imply that not only the high energy electron spectrum but also the energy density of the target photons varies during the flare, which could happen either in the synchrotron self-Compton or in the mirror scenario (Ghisellini & Maraschi 1996, Dermer, Sturner & Schlickeiser 1997, Ghisellini & Madau 1996). Alternatively, electrons radiating at even higher synchrotron frequencies, whose light curve could not be measured with the present instrumentation, could have faster decay timescales and be responsible for the TeV emission.
Although quantitative models are needed in order to verify whether any of the above scenarios can actually reproduce the observed intensities and timescales in the X-ray and TeV range, we anticipate that these data will represent a very effective probe of the physical conditions in the most active region of the Mrk 421 jet.
# The “Fodor”-FODOR fallacy bites back
## 1 Introduction
Fodor and Lepore (FL from here on) have saddled up recently and ridden again at the Windmills of Artificial Intelligence (AI): this time against Pustejovsky’s Generative Lexicon (Pustejovsky, 1995: FL call the work TGL), so as to make an example for the rest of us. I want to join in because FL claim he is part of a wider movement they call Informational Role Semantics (which I will call IRS as they do), and I count myself a long term member of that movement. But their weapons are rusty: they wave about as their sword of truth an old and much satirised fallacy, which Ryle (1957) called the “Fido”-Fido fallacy: that to every word corresponds a meaning, be it abstract denotation (as for FL), a concept, or a real world object. The special quality of the fallacy is the simple one-to-one mapping, and not the nature of what corresponds to a word.
In the first part of this paper I want to show that the fallacy cannot be pressed back into service: it is old and overexposed. It is important to do this (again) even though, as the paper progresses, FL relent a little about the need for the fallacy, and even seem to accept a part of the IRS position. But they do this as men convinced that, really and truly and after all concessions are made, the fallacy is still true. It is not, and this needs saying yet again. In the second part of the paper, I will briefly touch on issues specific to Pustejovsky’s (JP) claims; only briefly because he is quite capable of defending his own views. In the third and final part I will make some points to do with the general nature of the IRS position, within AI and computational natural language processing, and argue that the concession FL offer is unneeded: IRS is a perfectly reasonable doctrine in its own right and needs no defence from those who really believe in the original fallacy.
## 2 “Fido” and FIDO
Fodor and Lepore’s dissection of JP’s book is, and is intended to be, an attack on a whole AI approach to natural language processing based on symbolic representations, so it is open to any other member of that school to join in the defence. IRS has its faults but also some technological successes to show in the areas of machine translation and information extraction (e.g. Wilks et al., 1993), but is it well-founded and philosophically defensible?
Many within IRS would say that does not matter, in that the only defence lexical or other machine codings need in any information processing system is that the system containing them works to an acceptable degree; but I would agree with those who say it is defensible, or is at least as well founded as the philosophical foundation on which FL stand. That is, I believe, one of the shakiest and most satirised of this century, and loosely related to what Ryle (1957) called the “Fido”-Fido fallacy: the claim that to every word corresponds a concept and/or a denotation, a view that has crept into everyday philosophical chat as the joke that the meaning of “life” is life′ (life prime, the object denoted by “life”)<sup>2</sup><sup>2</sup>2‘Fido’ or Fido-prime are common notations for denotations corresponding to words. FL seem to prefer small caps FIDO, and I will use that form from their paper in the argument that follows.
It is a foundation of the utmost triviality, which comes from FL (op.cit., p.1) in the form:
(1) The meaning of “dog” is DOG.
They seem to embrace it wholeheartedly, and prefer it to any theory, like TGL, offering complex structured dictionary entries, or even any paper dictionary off a shelf, like Webster’s, that offers even more complex structures than TGL in a form of English. FL embrace an empty lexicon, willingly and with open eyes: one that lists just DOG as the entry for “dog”. The questions we must ask, though the answer is obviously no in each case, are:
* is (1) even a correct form of what FL want to say?
* could such a dictionary consisting of statements like (1) serve any purpose whatever, for humans or for machines?
* would one even need to write such a dictionary, supposing one believed in a role for such a compilation, as opposed to, say, saving space by storing one as a simple rule for capitalizing any word whose meaning was wanted?
The first of these points brings back an age of linguistic analysis contemporary with Ryle’s, in particular the work of writers like Lewy (1964); put briefly, the issue is whether or not (1) expresses anything determinate (and remember it is the mantra of the whole FL paper), or anything preferable to alternatives such as:
(2) The meaning of “dog” is a domestic canine animal.
or
(3) The meaning of “dog” is a dog.
or even
(4) The meaning of “dog” is “domestic canine animal”.
not to mention
(5) The meaning of “dog” is “dog”.
The two sentences (2) and (3) are perfectly sensible, depending on the circumstances: (2) is roughly what real, non-Fodorian, dictionaries tell you, which seems unnecessary for dogs, but would be more plausible if the sentence were about marmosets or wombats. (3) is unhelpful, as it stands, but perhaps that is accidental, for if we translate it into German we get something like:
(3a) Die Bedeutung von “dog” ist ein Hund.
which could be very helpful to a German with little or no knowledge of English, as would be
(2a) Die Bedeutung von “dog” ist ein huendliches Haustier.
To continue with this line of argument one needs all parties to accept the reality of translation and its role in argument: that there are translations, at least between close languages for simple sentences, and no amount of philosophical argument can shift that fact. For anyone who cannot accept this, there is probably no point in arguing about the role of lexicons at all.
Both (2) and (3), then, are sensible and, in the right circumstances, informative: they can be synonymous in some functional sense since both, when translated, could be equally informative to a normal fluent speaker of another language. But (4) and (5) are a little harder: their translations would be uninformative to a German, since translation does not translate quotations, and so we get forms like:
(5a) Die Bedeutung von “dog” ist “dog”.
and similarly for a German (4a) version of the English (4). These sentences therefore cannot be synonymous with (3) and (2) respectively. But (4) might be thought no more than an odd form of a lexical entry sentence like (3), spoken by an English speaker.
But what of (1); who could that inform about anything? Suppose we sharpen the issue by again asking who its translation could inform and about what:
(1a) Die Bedeutung von “dog” ist DOG.
(1a) tells the German speaker nothing, at which point we may be told that DOG stands for a denotation and its name is arbitrary. But that is just old nonsense on horseback: it implies that the English speaker cannot understand (1) either, since DOG might properly be replaced by G00971 if the final symbol in (1) is truly arbitrary. It is surely (3), not (1), that tells us what the denotation of “dog” is, in the way language is normally used to do such things.
DOG in (1) is simply a confidence trick: it is put to us as having the role of the last term in (3). When and only when it is in the same language as the final symbol of (3) (a fact we are confidently assured is arbitrary) it does appear to point to dogs. However, taken as the last term in the synonymous (1a) it cannot possibly be doing that for it is incomprehensible, and functioning as an (untranslated) English word, exactly as in the last term of (5). But, as we saw, (5) and (3) cannot be synonymous, and so DOG in (1) has two incompatible roles at once, which is the trick that gives (1) interpretations that flip uncontrollably between the (non-synonymous) (3) and (5). It is an optical illusion of a sentence.
In conclusion, then, (1) is a dangerous sentence, one whose upper-case inflation suggests it has a function but which, on careful examination, proves not to be there: it is either (case-deflated) a form of the commonsense (3), in which case it loses its capitals and all the role FL assign to it, since it is vacuous in English, or just a simple bilingual dictionary entry in German or some other language. Or it is a form of (5), uninformative in any language or lexicon but plainly a triviality, shorn of any philosophical import.
Those who still have worries about this issue, and wonder if capitalizing may not still have some merit, should ask themselves the following question: which dog is the real DOG? The word “dog” has 24 entries even in a basic English dictionary like Collins, so how do FL know which one is intended by (1)? If one is tempted to reply, well DOG will have to be subscripted then, as in DOG<sub>1</sub>, DOG<sub>2</sub> etc, then I shall reply that we will then be into another theory of meaning, and not one of simple denotations. My own suspicion is that all this can only be understood in terms of Fodor’s Language of Thought (1975) and that DOG for FL is a simple primitive in that language, rather than a denotation in the world or logical space. However, we have no access whatever to such a language, though Kay among others has given arguments that, if anything like an LOT exists, it will have to be subscripted (Kay, 1989), in which case the role of (1) will have to be rethought from scratch. All the discussion above will still remain relevant to such a development, and the issue of translation into LOT will then be the key one. However, until we can do that, and in the presence of a LOT native speaker, we may leave that situation aside and await developments.
The moral for the rest of the discussion, and the role of IRS and TGL, is simple: some of the sentences numbered above are like real, useful, lexical entries: (3) is a paradigm of an entry in a bilingual lexicon, where explanations are not crucial, while (2) is very like a monolingual lexical entry, where explanations are the stuff of giving meaning.
## 3 Issues concerning TGL
The standard of the examples used by FL to attack TGL is not at all challenging; they claim that JP’s:
(6) He baked a cake.
is in fact ambiguous between JP’s create and warm up aspects of “bake”, where baking a cake yields the first, but baking a potato the second. JP does not want to claim this is a sense ambiguity, but a systematic difference in interpretation given by inferences cued by features of the two objects, which could be labels such as ARTIFACT in the case of the cake but not the potato.
“But in fact, bake a cake is ambiguous. To be sure, you can make a cake by baking it; but also you can do to a (pre-existent) cake just what you can do to a (pre-existent) potato: viz. put it in the oven and (non creatively) bake it.” (op.cit. p.7)
From this, FL conclude, “bake” must be ambiguous, since “cake” is not. But all this is absurd and untrue to the simplest facts. Of course, warming up a (pre-existent) cake is not baking it; whoever could think it was? That activity would be referred to as warming a cake up, or through, never as baking it. You can no more bake a cake again, with the other interpretation, than you can bake a potato again and turn it into an artifact. FL like syntactically correlated evidence in semantics, and they should have noticed that a baked potato is fine but a baked cake is not, which correlates with just the difference JP requires (cf. baked fish/meat).
It gets worse: FL go on to argue that if ARTIFACTs are the distinguishing feature for JP then bake a trolley car should take the creative reading, since a trolley car is an artifact, completely ignoring the fact that the whole JP analysis is based on the (natural) assumption that potatoes and cakes both share some attribute like FOOD (as trolley cars do not), which is the only way the discussion can get started: being a FOOD is a necessary condition for this analysis of “bake” to get under way.
FL’s key argument against TGL is that it is not possible to have a rule, of the sort JP advocates, that expands the content or meaning of a word in virtue of (the meaning content of) a neighbouring word in a context, namely, a word in some functional relation to the first. So, JP, like many in the IRS tradition, argues that in:
(7) John wants a beer.
the meaning of “wants” in that context, which need not be taken to be any new or special or even existing sense of the word, is to be glossed as wants to drink a beer. This is done by a process that varies in detail from IRS researcher to researcher, but comes down to some form of:
(8) X wants Y $`\Rightarrow `$ X wants to do with Y whatever is normally done with Y.
An issue over which AI researchers have differed is whether this knowledge of normal or default use is stored in a lexical entry or in some other computational knowledge form, such as what was sometimes called a script (Schank and Abelson, 1977), thought of as indexed by words but as much more than a lexical entry. In earlier AI systems, such information about function might be stored as part of a lexical semantic formula attached to a primitive GOAL (Charniak and Wilks, 1976<sup>3</sup><sup>3</sup>3 In preference semantics (Wilks) “door” was coded as a formula (that could be displayed as a binary tree) such as:
((THIS((PLANT STUFF)SOUR)) ((((((THRU PART)OBJE) (NOTUSE \*ANI))GOAL) ((MAN USE) (OBJE THING))))
where the subformula:
((((THRU PART)OBJE) (NOTUSE \*ANI))GOAL)
was intended to convey that the function of a door is to prevent passage through an aperture by a human.), or (depending on its salience) within an associated knowledge structure<sup>4</sup><sup>4</sup>4Such larger knowledge structures were called pseudo-texts (Wilks) in preference semantics (to emphasize the continuity of language and world knowledge): one for “car” (written \[car\]) would contain a clause like \[car consume gasoline\] where each lexical item in the pseudo-text was an index to a semantic formula (in the sense of note 3) that explicated it.. Some made the explicit assumption that a system should be sufficiently robust that it would not matter if such functional information was stored in more than one place in a system, perhaps even in different formats.
For FL all this is unthinkable, and they produce a tortuous argument roughly as follows:
* “Fido”-FIDO may not be the only form for a lexicon, but an extension could only be one where an expansion of meaning for a term was independent of the control of all other terms, as it is plainly not in the case of JP’s (8).
* Any such extension would be to an underlying logical form, one that should also be syntactically motivated.
FL then produce a complex algorithm (op.cit. p.10) that expands “want” consistently with these assumptions, one which is hard to follow and comes down to no more than the universal applicability (i.e. if accepted it must be applied to all occurrences of “want” regardless of lexical context) of the rule:
(9) X wants Y $`\rightarrow `$ X wants to have Y.
This, of course, avoids, as it is intended to, any poking about in the lexical content of Y. But it is also an absurd rule, no matter what dubious chat is appended to it about the nature of “logical form”. Consider:
(10a) I want an orange.
(10b) I want a beer.
(10c) I want a rest.
(10d) I want love.
(10a) and (10b) seem intuitively to fit the IRS rule (8) and the FL rule (9). (10c) might conform to some appropriate IRS coding to produce (from (8)): X wants to experience a rest, but the apparently felicitous application of FL’s (9), yielding X wants to have a rest, is purely fortuitous, since have a rest is a lexicalised form having nothing at all to do with possession, which is the only natural interpretation of (9). This just serves to show the absurdity of FL’s “content-free” rule (9) since its application to (10c) cannot possibly be interpreted in the same way as it was when producing X wants to have a beer.
Only the IRS rule (8) distinguishes appropriate from inappropriate applications of rules to (10c). One could make the same case with (10d), where the FL rule (9) produces only ambiguous absurdity, and the applicability of the IRS rule (8) depends entirely on how the function of “love” has been coded, if at all. However, the purpose of this section has been not to defend an IRS view or rule (8) in particular, but to argue that there is no future at all in FL’s grudging, context free, rule (9), in part because it is context free.
JP’s specific claim is not that the use of rule (8) produces a new sense, or we would have a new sense corresponding to many or most of the possible objects of wanting, a more promiscuous expansion of the lexicon (were it augmented by every rule application) than would be the case for bake a potato/cake where JP resisted augmentation of the lexicon, though other researchers would probably accept it. Nor is this like the application of “normal function” to the transformation of
(11) My car drinks gasoline.
in (Wilks, 1980), where it was suggested that “drink” should be replaced by the structure for “consume” (as in note 4 above) in a context containing broken preferences (unlike (7)), and where augmentation of the lexicon would be appropriate if the “relaxation”, as some would call it, became statistically significant, as it has in the case of (11).
It is not easy to pin down quite why FL find the rule (8) so objectionable, since their rule (9), like (8), is not, as they seem to believe, distinguished by logical form considerations. The traditional (Quine, 1953) logical opacity of “want” is such that inferences like (8) and (9) can never be supported by strong claims about logical form, whose transformations must be deductive, and one can always want X without wanting Y, no matter what the logical relations of X and Y. Hence, neither (8) nor (9) is a deductive rule, and FL have no grounds, in terms of context-dependence, for swallowing the one but not the other.
Contrary to what FL seem to assume, an NLP algorithm to incorporate or graft part of the lexical entry for one word (e.g. “beer”) into another (e.g. “want”) is not practically difficult. The only issue for NLP research is whether and when such inferences should be drawn: at first encounter with such a collocation, or when needed in later inference, a distinction corresponding roughly to what is distinguished by the oppositions forward and backward inference, or data-driven and demand-driven inference. This issue is connected with whether a lexical entry should be adapted rather than a database of world knowledge and, again contrary to FL’s assumptions, no NLP researcher can accept a firm distinction between these, nor is there one, any more than a firm analytic-synthetic distinction has survived decades of scepticism.
One can always force such a distinction into one’s system at trivial cost, as Carnap (1947) did with his formal and material modes of sentences containing the same information:
(12f) “Edelweiss” has the property “flower”.
(12m) An Edelweiss is a flower.
but the distinction is wholly arbitrary<sup>5</sup><sup>5</sup>5Provided one always remembers that a form like:
“Edelweiss” has nine letters
is in the material mode even though it could look like the formal mode. The formal mode of what it expresses is:
“Edelweiss” has the property “nine letters”.
JP’s treatment of more structural intensional verbs like “believe” is far more ingenious than FL would have us believe, and an interesting advance on previous work: it is based on a richer notion of default than earlier IRS treatments. JP’s position, as I understand it, is that the default, or underlying, structure associated with “believe” is:
X believes p
where p is expanded by default by the rule:
(13) X believes p $`\rightarrow `$ X believes (Y communicates p).
FL of course object again to another expansion beyond their self-imposed limit of context-freeness for which, as we saw, there is no principled defence, while failing to notice that (13) is in fact context-free in their sense.
For me, the originality of (13) is not only that it can expand forms like:
(14) John believes Mary.
but can also be a general (context-free) default, overriding forms like:
(15) John believes pigs eat carrots.
in favour of the more complex:
(16) John believes (Y communicates (pigs eat carrots)).
which is an original expansion according to which all beliefs can be seen as the result of some communication, often from oneself (when Y = John in (16)). There certainly were default expansions of “believe” in IRS before JP but not of this boldness<sup>6</sup><sup>6</sup>6In preference semantics (Wilks, 1972) “believe” had a lexical entry showing a preference for a propositional object, so that “John believes Mary” was accepted as a preference-breaking form but with a stored expansion of the object in the lexical entry for “believe” of a simple propositional form (Mary DO Y), with what is really an empty verb variable DO, and not specifically a communication act as in TGL.
## 4 Some general IRS principles
Once (Wilks, 1971, 1972) I tried to lay out principles for something very like IRS, which still seem to underlie the position arrived at in this discussion; it would be helpful for FL to see IRS not simply as some form of undisciplined, opportunistic discipline neighbouring their own professional interests. Let me restate two of these principles that bear on this discussion:
* Meaning, in the end, is best thought of as other words, and that is the only position that can underpin a lexicon- and procedure-based approach to meaning, and one should accept this, whatever its drawbacks, because the alternative is untenable and not decisively supported by claims about ostension. Quine (1953) has argued a much more sophisticated version of this for many years, one in which representational structures are only compared against the world as wholes, and local comparisons are wholly symbolic. Meanings depend crucially upon explanations and these, formally or discursively, are what dictionaries offer. This solution to the problem is indeed circular, but not viciously so, since dictionaries rarely offer small dictionary circles (Sparck Jones, 1966) like the classic, and unsatisfying, case where “furze” is defined as “gorse” and vice versa. Meaning, in terms of other words, is thus a function of circle size: the furze/gorse circle is pathological, in the sense of unhelpful; yet, since a dictionary definition set is closed, and must be circular, not all such circles can be unhelpful, or dictionaries would be all and always vacuous. A minimal illustration of such circle-finding is sketched at the end of this section.
On the other hand, FL’s original position of section 2 above is not really renounced by the end of their paper, and is utterly untenable, not only for the analytic reasons we have given, but because it could not form the basis of any possible dictionary, for humans (seeking meaning explanations) or for NLP.
Indeed, as we pointed out earlier, no lexicon is needed for the “dog”-DOG theory, since a simple macro to produce upper-case forms will do, give or take a little morphological tweaking for the “boil”-BOILING cases.
* Semantic well-formedness is not a property that can decidably be assigned to utterances, in the way that truth can be to formulas in parts of the predicate calculus, and as it was hoped for many years that “syntactically well-formed” would be assignable to sentences.
This point was made in some detail in (Wilks, 1971) on the basis that no underlying intuition is available to support semantic well-formedness<sup>7</sup><sup>7</sup>7This property must intuitively underlie all decidability claims and procedures: Goedel’s proof that there are true but undecidable sentences in a class of calculi only makes sense on the assumption that we have some (intuitive) way of seeing that those sentences are true (outside of proof)., since our intuitions are dependent on the state of our (or our machine’s) lexicons when considering an utterance’s status, and we are capable of expanding our lexicons (in something like the ways discussed in this paper) so as to bring utterances iteratively within the boundary of semantic well-formedness, in a way that has no analogy in truth or syntax. Thus, no boundary drawing, of the sort required for a decidable property, can be done for the predicate Semantically-well-formed. Belief in the opposite seems one of the very few places where JP and FL agree, so further discussion may prove necessary.
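As promised above, the circle-size point is easy to make concrete: treat a dictionary as a directed graph from headwords to the words of their definitions, and look for pathologically small definitional circles. A minimal sketch (the toy dictionary is invented for illustration):

```python
# Detect two-word definitional circles (the pathological 'furze'/'gorse'
# case) in a dictionary given as headword -> set of definition words.
dictionary = {
    "furze": {"gorse"},
    "gorse": {"furze"},
    "dog":   {"domesticated", "canine", "mammal"},
}

def small_circles(d):
    """Yield unordered pairs (a, b) where a is defined by b and vice versa."""
    for a, defn in d.items():
        for b in defn:
            if a < b and a in d.get(b, set()):
                yield (a, b)

print(list(small_circles(dictionary)))  # [('furze', 'gorse')]
```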
# Quantum Decoherence Phenomena in the Thermal Assisted Tunneling
We propose a novel approach to the problem of the transition from quantum to classical behavior in mesoscopic spin systems. This paper is intended to demonstrate that the main cause of such transitions is quantum decoherence, which appears as a result of a thermal interaction between the spin system and its environment. We shall consider a semiclassical model in which the spin problem has been mapped onto a particle problem. In such a case, the particle is localized in one of two metastable potential wells, and transitions between these states can occur either via an over-barrier transition or via quantum tunneling. It is necessary to emphasize that thermal activation cannot appear in pure form as a classical phenomenon even in the semiclassical limit: escape is determined not only by tunneling but also by over-barrier reflection in the case when the particle obtains from the environment an energy larger than the height of the barrier. We shall use the following computational scheme. Consider an ensemble of particles localized in potential wells. Due to the interaction with the environment, each of the particles can obtain some energy $`E`$. It is clear that in this case, either due to tunneling or due to an over-barrier transition, the probability of escape $`P`$ will be a function of the corresponding energy $`E`$: $`P=P(E)`$. If the population for a given $`E`$ in the ensemble is $`n(E)`$, then for the total escape probability one obtains $`P_t=\frac{1}{N}\int n(E)P(E)\,dE`$, where $`N`$ is the number of particles in the ensemble. Since the population $`n(E)`$ is also a function of the temperature $`T`$, we can consider $`P_t`$ as a function of $`T`$ too. We want to emphasize again that in such a treatment, $`P_t(T)`$ will be absolutely smooth, without any sharp turns, as long as $`n(T)`$ and $`P(E)`$ are smooth functions.
But this picture changes strikingly when one takes into account the possibility of coherence destruction by the interaction with the environment. It is well known that the environment induces a dynamical localization of the quantum state into a generalized coherent state. In particular, in the double-well potential, for specific values of the external field’s strength and frequency, tunneling is coherently destroyed, i.e., a localized packet can be built as a superposition of two degenerate states, which remains localized ”forever” in one well. We study the behavior of a semiclassical system interacting with a classical environment represented by an infinite set of harmonic oscillators with frequencies $`\varpi `$. For each particle in our ensemble the escape probability will now be $`P^{\prime }=PJ_0(\frac{2V}{\omega })`$, where $`J_0(x)`$ is the zero-order Bessel function. $`V`$, the magnitude of the interaction with the bath, is defined as $`V=Q(\varpi )\varpi C`$, where $`Q(\varpi )`$ is the population for the frequency $`\varpi `$, and $`C`$ is a coupling constant, which is an adjustable parameter in the case considered.
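The scheme just described is simple enough to sketch numerically. In the toy calculation below, the Boltzmann form of $`n(E)`$, the WKB-like form of $`P(E)`$, and the identification of $`Q`$ with a Bose occupation factor are all placeholder assumptions introduced for illustration; they are not the actual $`CrNi_6`$ model:

```python
import numpy as np
from scipy.special import j0  # zero-order Bessel function J_0(x)

def total_escape(T, omega=0.5, C=2.0, U0=1.0, n_grid=2000):
    """Toy P_t(T): ensemble-averaged escape probability with decoherence.

    Assumed ingredients: Boltzmann weights n(E) ~ exp(-E/T) (k_B = 1),
    a smooth escape law P(E) rising to 1 at the barrier top U0, and a
    bath population Q taken to be the Bose factor, so that the coupling
    V = Q*omega*C grows with T and J0(2V/omega) oscillates -- the source
    of the low-temperature irregularities discussed in the text.
    """
    Q = 1.0 / np.expm1(omega / T)          # assumed bath population
    V = Q * omega * C
    E = np.linspace(0.0, 3.0 * U0, n_grid)
    n = np.exp(-E / T)                     # thermal population n(E)
    P = np.where(E >= U0, 1.0, np.exp(-10.0 * (U0 - E)))  # toy P(E)
    P = P * j0(2.0 * V / omega)            # decoherence factor from the text
    return np.trapz(n * P, E) / np.trapz(n, E)

for T in (0.2, 0.5, 1.0, 2.0):
    print(f"T = {T}: P_t = {total_escape(T):.4f}")
```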
Using the computational scheme described above, one can now calculate the total escape probability taking decoherence effects into account. In Fig. 1 we have plotted, as an example, the results of such calculations for the mesoscopic system $`CrNi_6`$ (curve 1). One can see that quantum decoherence essentially changes the behavior of $`P_t(T)`$ at low temperatures: in contrast to the usual thermally assisted tunneling (monotonic curve 2), decoherence gives rise to irregularities of $`P_t(T)`$ for $`T\lesssim 10`$ K. In Fig. 1 we also plotted for comparison the results of experiments (marked ”+”) and the results of computations for ”pure” classical thermal activation (curve 3). The similarity between curve 1 and the experimental data seems very encouraging.
# Compacting the Penn Treebank Grammar
## 1 Introduction
The Penn Treebank (PTB) \[Marcus et al., 1994\] has been used for a rather simple approach to deriving large grammars automatically: one where the grammar rules are simply ‘read off’ the parse trees in the corpus, with each local subtree providing the left and right hand sides of a rule. Charniak \[Charniak, 1996\] reports precision and recall figures of around 80% for a parser employing such a grammar. In this paper we show that the huge size of such a treebank grammar (see below) can be reduced in size without appreciable loss in performance, and, in fact, an improvement in recall can be achieved.
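The ‘reading off’ operation itself is trivial to implement. A minimal sketch, assuming trees have already been parsed from the PTB’s bracketed notation into (label, children) tuples with bare strings as leaves:

```python
def read_off_rules(tree, rules=None):
    """Collect CFG rules from a parse tree given as (label, [children]);
    leaves are plain strings (POS tags or words)."""
    if rules is None:
        rules = set()
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    rules.add((label, rhs))                 # one rule per local subtree
    for c in children:
        if not isinstance(c, str):
            read_off_rules(c, rules)
    return rules

# (NP (NP DT NN) CC (NP DT NN)) coordination example from this paper:
tree = ("NP", [("NP", ["DT", "NN"]), "CC", ("NP", ["DT", "NN"])])
for lhs, rhs in sorted(read_off_rules(tree)):
    print(lhs, "->", " ".join(rhs))
# NP -> DT NN
# NP -> NP CC NP
```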
Our approach can be generalised in terms of Data-Oriented Parsing (DOP) methods (see \[Bonnema et al., 1997\]) with a tree depth of 1. However, the number of trees produced with a general DOP method is so large that Bonnema \[Bonnema et al., 1997\] has to resort to restricting the tree depth, using a very domain-specific corpus such as ATIS or OVIS, and parsing very short sentences of average length 4.74 words. Our compaction algorithm can be extended for use within the DOP framework but, because of the huge size of the derived grammar (see below), we chose to use the simplest PCFG framework for our experiments.
We are concerned with the nature of the rule set extracted, and how it can be improved, with regard both to linguistic criteria and processing efficiency. In what follows, we report the worrying observation that the growth of the rule set continues at a square root rate throughout processing of the entire treebank (suggesting, perhaps, that the rule set is far from complete). Our results are similar to those reported in \[Krotov et al., 1994\]. <sup>1</sup><sup>1</sup>1For the complete investigation of the grammar extracted from the Penn Treebank II see \[Gaizauskas, 1995\]. We discuss an alternative possible source of this rule growth phenomenon, partial bracketting, and suggest that it can be alleviated by compaction, where rules that are redundant (in a sense to be defined) are eliminated from the grammar.
Our experiments on compacting a PTB treebank grammar resulted in two major findings: one, that the grammar can at the limit be compacted to about 7% of its original size, and the rule number growth of the compacted grammar stops at some point. The other is that a 58% reduction can be achieved with no loss in parsing performance, whereas a 69% reduction yields a gain in recall, but a small loss in precision.
This, we believe, gives further support to the utility of treebank grammars and to the compaction method. For example, compaction methods can be applied within the DOP framework to reduce the number of trees. Also, by partially lexicalising the rule extraction process (i.e., by using some more frequent words as well as the part-of-speech tags), we may be able to achieve parsing performance similar to the best results in the field obtained in \[Collins, 1996\].
## 2 Growth of the Rule Set
One could investigate whether there is a finite grammar that should account for any text within a class of related texts (i.e. a domain oriented sub-grammar of English). If there is, the number of extracted rules will approach a limit as more sentences are processed, i.e. as the rule number approaches the size of such an underlying and finite grammar.
We had hoped that some approach to a limit would be seen using PTB II \[Marcus et al., 1994\], which is larger and more consistently annotated than PTB I. As shown in Figure 1, however, the rule number growth continues unabated even after more than 1 million part-of-speech tokens have been processed.
## 3 Rule Growth and Partial Bracketting
Why should the set of rules continue to grow in this way? Putting aside the possibility that natural languages do not have finite rule sets, we can think of two possible answers. First, it may be that the full “underlying grammar” is much larger than the rule set that has so far been produced, requiring a much larger tree-banked corpus than is now available for its extraction. If this were true, then the outlook would be bleak for achieving near-complete grammars from treebanks, given the demands of producing such resources. However, the radical incompleteness of grammar that this alternative implies seems incompatible with the promising parsing results that Charniak reports \[Charniak, 1996\].
A second answer is suggested by the presence in the extracted grammar of rules such as (1).<sup>2</sup><sup>2</sup>2PTB POS tags are used here, i.e. DT for determiner, CC for coordinating conjunction (e.g. ‘and’), NN for noun. This rule is suspicious from a linguistic point of view, and we would expect that the text from which it has been extracted should more properly have been analysed using rules (2,3), i.e. as a coordination of two simpler NPs.
$$\mathrm{NP}\rightarrow \mathrm{DT}\ \mathrm{NN}\ \mathrm{CC}\ \mathrm{DT}\ \mathrm{NN}$$
(1)
$$\mathrm{NP}\rightarrow \mathrm{NP}\ \mathrm{CC}\ \mathrm{NP}$$
(2)
$$\mathrm{NP}\rightarrow \mathrm{DT}\ \mathrm{NN}$$
(3)
Our suspicion is that this example reflects a widespread phenomenon of partial bracketting within the PTB. Such partial bracketting will arise during the hand-parsing of texts, with (human) parsers adding brackets where they are confident that some string forms a given constituent, but leaving out many brackets where they are less confident of the constituent structure of the text. This will mean that many rules extracted from the corpus will be ‘flatter’ than they should be, corresponding properly to what should be the result of using several grammar rules, showing only the top node and leaf nodes of some unspecified tree structure (where the ‘leaf nodes’ here are category symbols, which may be nonterminal). For the example above, a tree structure that should properly have been given as (4) has instead received only the partial analysis (5), from the flatter ‘partial-structure’ rule (1).
i. (NP (NP DT NN) CC (NP DT NN)) (4)
ii. (NP DT NN CC DT NN) (5)
## 4 Grammar Compaction
The idea of partiality of structure in treebanks and their grammars suggests a route by which treebank grammars may be reduced in size, or compacted as we shall call it, by the elimination of partial-structure rules. A rule that may be eliminable as a partial-structure rule is one that can be ‘parsed’ (in the familiar sense of context-free parsing) using other rules of the grammar. For example, the rule (1) can be parsed using the rules (2,3), as the structure (4) demonstrates. Note that, although a partial-structure rule should be parsable using other rules, it does not follow that every rule which is so parsable is a partial-structure rule that should be eliminated. There may be linguistically correct rules which can be parsed. This is a topic to which we will return at the end of the paper (Sec. 6). For most of what follows, however, we take the simpler path of assuming that the parsability of a rule is not only necessary, but also sufficient, for its elimination.
Rules which can be parsed using other rules in the grammar are redundant in the sense that eliminating such a rule will never have the effect of making a sentence unparsable that could previously be parsed.<sup>3</sup><sup>3</sup>3Thus, wherever a sentence has a parse $`P`$ that employs the parsable rule $`R`$, it also has a further parse that is just like $`P`$ except that any use of $`R`$ is replaced by a more complex substructure, i.e. a parse of $`R`$.
The algorithm we use for compacting a grammar is straightforward. A loop is followed whereby each rule $`R`$ in the grammar is addressed in turn. If $`R`$ can be parsed using other rules (which have not already been eliminated) then $`R`$ is deleted (and the grammar without $`R`$ is used for parsing further rules). Otherwise $`R`$ is kept in the grammar. The rules that remain when all rules have been checked constitute the compacted grammar.
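In outline, the loop looks as follows. This is a minimal sketch with a self-contained (if naive) context-free parsability check over a rule’s right-hand side; a real implementation would use a proper chart parser:

```python
from functools import lru_cache

def parsable(rule, rules):
    """True if rule's RHS can be derived from its LHS using `rules` alone."""
    lhs, rhs = rule

    @lru_cache(maxsize=None)
    def derives(cat, i, j):              # can `cat` derive rhs[i:j]?
        if j - i == 1 and cat == rhs[i]:
            return True
        return any(a == cat and 2 <= len(alpha) <= j - i
                   and covers(alpha, i, j)
                   for a, alpha in rules)

    def covers(alpha, i, j):             # can alpha's symbols tile rhs[i:j]?
        if not alpha:
            return i == j
        head, rest = alpha[0], alpha[1:]
        return any(derives(head, i, k) and covers(rest, k, j)
                   for k in range(i + 1, j - len(rest) + 1))

    return derives(lhs, 0, len(rhs))

def compact(grammar):
    """Eliminate every rule whose RHS the remaining rules can already parse."""
    rules = list(grammar)
    for rule in list(rules):
        others = tuple(r for r in rules if r != rule)
        if parsable(rule, others):
            rules = list(others)         # parse later rules without it
    return rules

g = [("NP", ("DT", "NN", "CC", "DT", "NN")),   # flat, partially bracketted
     ("NP", ("NP", "CC", "NP")),
     ("NP", ("DT", "NN"))]
print(compact(g))   # the flat coordination rule is eliminated
```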
An interesting question is whether the result of compaction is independent of the order in which the rules are addressed. In general, this is not the case, as is shown by the following rules, of which (8) and (9) can each be used to parse the other, so that whichever is addressed first will be eliminated, whilst the other will remain.
$$B\rightarrow C$$
(6)
$$C\rightarrow B$$
(7)
$$A\rightarrow B\ B$$
(8)
$$A\rightarrow C\ C$$
(9)
Order-independence can be shown to hold for grammars that contain no unary or epsilon (‘empty’) rules, i.e. rules whose righthand sides have one or zero elements. The grammar that we have extracted from PTB II, and which is used in the compaction experiments reported in the next section, is one that excludes such rules. Unary and epsilon rules were collapsed with the sister nodes, e.g. the structure (S (NP -NULL-) (VP VB (NP (QP ...))) .) will produce the following rules: S -\> VP ., VP -\> VB QP and QP -\> ....<sup>4</sup><sup>4</sup>4See \[Gaizauskas, 1995\] for discussion. For further discussion, and for the proof of the order independence, see \[Krotov, 1998\].
## 5 Experiments
We conducted a number of compaction experiments: <sup>5</sup><sup>5</sup>5For these experiments, we used two parsers: Stolcke’s BOOGIE \[Stolcke, 1995\] and Sekine’s Apple Pie Parser \[Sekine and Grishman, 1995\]. first, the complete grammar was parsed as described in Section 4. Results exceeded our expectations: the set of 17,529 rules reduced to only 1,667 rules, a better than 90% reduction.
To investigate in more detail how the compacted grammar grows, we conducted a further experiment involving a staged compaction of the grammar. Firstly, the corpus was split into 10% chunks (by number of files) and the rule sets extracted from each. The staged compaction proceeded as follows: the rule set of the first 10% was compacted, then the rules for the next 10% were added and the resulting set again compacted, then the rules for the next 10% were added, and so on. Results of this experiment are shown in Figure 2.
At 50% of the corpus processed the compacted grammar size actually exceeds the level it reaches at 100%, and then the overall grammar size starts to go down as well as up. This reflects the fact that new rules are either redundant, or make “old” rules redundant, so that the compacted grammar size seems to approach a limit.
## 6 Retaining Linguistically Valid Rules
Even though parsable rules are redundant in the sense that has been defined above, it does not follow that they should always be removed. In particular, there are times where the flatter structure allowed by some rule may be more linguistically correct, rather than simply a case of partial bracketting. Consider, for example, the (linguistically plausible) rules (10,11,12). Rules (11) and (12) can be used to parse (10), but the latter should not be eliminated, as there are cases where the flatter structure it allows is more linguistically correct.
$$\mathrm{VP}\rightarrow \mathrm{VB}\ \mathrm{NP}\ \mathrm{PP}$$
(10)
$$\mathrm{VP}\rightarrow \mathrm{VB}\ \mathrm{NP}$$
(11)
$$\mathrm{NP}\rightarrow \mathrm{NP}\ \mathrm{PP}$$
(12)
i. (VP VB (NP NP PP)) ii. (VP VB NP PP) (13)
We believe that a solution to this problem can be found by exploiting the data provided by the corpus. Frequency-of-occurrence data for rules can be collected from the corpus and used to assign probabilities to rules, and hence to the structures they allow, so as to produce a probabilistic context-free grammar. Where a parsable rule is correct rather than merely partially bracketted, we expect this fact to be reflected in rule and parse probabilities (reflecting the occurrence data of the corpus), which can be used to decide when a rule that may be eliminated should be eliminated. In particular, a rule should be eliminated only when the more complex structure allowed by other rules is more probable than the simpler structure that the rule itself allows.
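The criterion just stated can be sketched as follows; this is an illustrative reconstruction rather than the authors’ actual algorithm, and `best_parse_prob`, standing in for a Viterbi/inside computation that returns 0.0 when a rule’s right-hand side is unparsable, is assumed:

```python
def linguistically_compact(pcfg, best_parse_prob):
    """pcfg: dict mapping rule -> probability (relative treebank frequency).
    A parsable rule is eliminated only when the best multi-rule derivation
    of its flat structure is more probable than the rule itself."""
    kept = dict(pcfg)
    for rule in list(pcfg):
        others = {r: p for r, p in kept.items() if r != rule}
        if best_parse_prob(rule, others) > kept[rule]:
            kept.pop(rule)               # merely partially bracketted: drop
    return kept
```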
We developed a linguistic compaction algorithm employing the ideas just described. However, we cannot present it here due to space limitations. The preliminary results of our experiments are presented in Table 1. Simple thresholding (removing rules that only occur once) was also used to achieve the maximum compaction ratio. For labelled as well as unlabelled evaluation of the resulting parse trees we used the evalb software by Satoshi Sekine. See \[Krotov, 1998\] for the complete presentation of our methodology and results.
As one can see, the fully compacted grammar yields poor recall and precision figures. This can be because collapsing of the rules often produces too much substructure (hence the lower precision figures) and also because many longer rules in fact encode valid linguistic information. However, linguistic compaction combined with simple thresholding achieves a 58% reduction without any loss in performance, and a 69% reduction even yields higher recall.
## 7 Conclusions
We see the principal results of our work to be the following:
* the result showing continued square-root growth in the rule set extracted from the PTB II;
* the analysis of the source of this continued growth in terms of partial bracketting and the justification this provides for compaction via rule-parsing;
* the result that the compacted rule set does approach a limit at some point during staged rule extraction and compaction, after a sufficient amount of input has been processed;
* that, though the fully compacted grammar produces lower parsing performance than the extracted grammar, a 58% reduction (without loss) can still be achieved by using linguistic compaction, and 69% reduction yields a gain in recall, but a loss in precision.
The latter result in particular provides further support for the possible future utility of the compaction algorithm. Our method is similar to that used by Shirai \[Shirai et al., 1995\], but the principal differences are as follows. First, their algorithm does not employ full context-free parsing in determining the redundancy of rules, considering instead only direct composition of the rules (so that only parses of depth 2 are addressed). We proved that the result of our compaction is independent of the order in which the rules in the grammar are parsed, whereas in cases involving ‘mutual parsability’ (discussed in Section 4) Shirai’s algorithm will eliminate both rules, so that coverage is lost. Secondly, it is not clear that compaction will work in the same way for English as it did for Japanese.
# New limits on Ω_Λ and Ω_𝑀 from old galaxies at high redshift
## Acknowledgments
The authors are grateful to R. Brandenberg, R. Opher, M. Roos and I. Waga for helpful discussions. This work was partially supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Pronex/FINEP (no. 41.96.0908.00).
## Table 1
# NUCLEAR GAMMA RAY LINE ASTRONOMY IN THE PERSPECTIVE OF THE INTEGRAL SATELLITE
## 1 INTRODUCTION
Atoms in the universe have three sources: nuclear fusion in two astrophysical contexts, and nuclear break-up or spallation. The primordial source is the big-bang with its associated nucleosynthesis. Operating at a temperature of about a billion degrees, it is responsible for the production of the lightest isotopes H, D, <sup>3</sup>He, <sup>4</sup>He, <sup>7</sup>Li and lasts only about 3 minutes. Stars pursue the nuclear complexification of matter through a series of fusions at high temperature, progressively building up the composition of matter observed today in galaxies. In hydrogen burning, H is transformed into He and, in more advanced stages, all nuclei between carbon and uranium are synthesized by thermonuclear fusion at temperatures ranging from 10<sup>7</sup> to 5×10<sup>9</sup> K. A distinct process, of non-thermal nature, puts the finishing touch to this nuclear evolution. Cosmic rays (through rapid p and alpha interactions with CNO in the interstellar medium, ISM) and fast nuclei (rapid alpha, C and O fragmenting on interstellar H and He) produce <sup>6</sup>Li, <sup>7</sup>Li, <sup>9</sup>Be, <sup>10</sup>B, <sup>11</sup>B. To summarize very briefly, the nuclear complexification of matter has been laborious and slow: at birth the galactic composition was close to that emerging from the big-bang (H = 0.76, He = 0.24, by mass) and presently, i.e. about 15 billion years later, it contains a small proportion of ”metals” (H = 0.70, He = 0.28, Z = 0.02, where Z is the ”metallicity” comprising all species heavier than <sup>4</sup>He). The abundances measured in various astrophysical objects, however, reflect the cumulated nucleosynthesis of the whole past. The great merit of gamma-ray line astronomy is to reveal the present nuclear activity in the Galaxy. However, the number of measurable gamma-ray emitters is restricted compared to the whole list of nuclei in the periodic table. Only a handful of isotopes is available: <sup>7</sup>Be, <sup>22</sup>Na, <sup>26</sup>Al, <sup>44</sup>Ti, <sup>56</sup>Co, <sup>57</sup>Co, <sup>60</sup>Fe. These have mean lifetimes between 0.3 and 10<sup>6</sup> years and are produced with significant abundances by stars (table 1). Gamma-ray line astronomy has a unique potential to provide information on nuclear processes and high-energy interactions going on presently in the Universe. Moreover, space is transparent to gamma-ray lines up to a redshift of 100 and the fluxes are almost unaffected by their travel through our Galaxy. The information carried by gamma-ray lines (intensity, line profile and width) is translated into physical terms through astrophysical models of the sources. The intensity of a given source at a given energy leads to the abundance of the emitting isotope and to the physical conditions prevailing during its formation (temperature, density, or flux of energetic particles if the formation process is non-thermal). The line shift compared to the laboratory position is used to derive, through the Doppler effect, the expansion velocity of the emitting material (wind velocity, supernova and nova envelope expansion) and/or the gravitational shift suffered by the radiation if it is released close to a compact object. The line width reflects the temperature of the medium, if thermal, or the energy spectrum of the fast particles, if non-thermal.
## 2 GAMMA-RAY LINE EMISSION REQUIREMENTS
How can excited levels of nuclei be populated? In principle, two means are available, implying respectively radioactive or stable nuclei. The first relies on gamma-ray emission in the course of radioactive decay. The emission is in this case delayed by the radioactive lifetime. This process is only efficient if some (rather) abundant nuclei are made in the form of their radioactive progenitors. The best example is <sup>56</sup>Ni decay (table 1). By its very nature, thermal nucleosynthesis produces proton-rich isotopes at high temperatures and densities in star interiors, which are opaque to gamma rays. Thus radioactive species have to be transported into the ISM before decaying to deliver their gamma signature. Various means are used by nature to sprinkle the ISM with live radioactive nuclei, including stellar winds and explosions; in short, dynamical events. The second way to produce gamma-ray lines is nuclear excitation of stable and abundant isotopes by fast (5-50 MeV/n) particles. Gamma emission is prompt in this case, due to the very short lifetimes of the excited nuclear levels. These non-thermal processes, in order to be observable, require high fluxes of low-energy particles. We do not consider the annihilation line emission at 511 keV, since this subject deserves a full development of its own.
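For orientation, the delayed emission from the first channel obeys simple decay bookkeeping (a standard relation, not specific to any source model discussed here): a freshly synthesized mass $`M`$ of an isotope with atomic mass $`m_{\mathrm{iso}}`$ and mean lifetime $`\tau `$, at a distance $`d`$, emitting a line with branching ratio $`p_\gamma `$ per decay, yields a photon flux

$$F_\gamma =\frac{p_\gamma }{4\pi d^2}\frac{M}{m_{\mathrm{iso}}}\frac{e^{-t/\tau }}{\tau }.$$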
## 3 ASTROPHYSICAL BACKGROUND
Thermal nucleosynthesis of radioactive isotopes is present under two forms; i) hydrostatic nucleosynthesis, proceeding in Wolf-Rayet stars (massive star supporting heavy mass loss through intense winds) and AGB stars (red giants developing thermal pulses); ii) explosive nucleosynthesis, occuring in core collapse supernovae (SNII, SNIb, c), thermonuclear supernovae in binary systems (SNIa) and novae. Non thermal processes take place in the ISM bombarded by fast projectiles and in solar flares.
### 3.1 Thermal nucleosynthesis
a. Hydrostatic nucleosynthesis
This nucleosynthesis is active in rather quiet stellar stages, where the temperature and the density are constant over a long period of time. As far as gamma-ray line astronomy is concerned, the principal process of interest is the production and ejection of <sup>26</sup>Al by WR stars, which seem to be the best candidates according to the recent analysis of the observations made by the COMPTEL experiment on board the CGRO satellite. During H burning, <sup>26</sup>Al is produced via radiative proton capture by <sup>25</sup>Mg. The intense wind removes the unprocessed envelope, exposing the convective core, and fresh <sup>26</sup>Al is carried away by the wind before decaying. The decay, finally, produces a line at 1.809 MeV in the transparent ISM. The mass of <sup>26</sup>Al generated is estimated at typically about 10<sup>-4</sup> Mo per WR. AGB stars could also eject <sup>26</sup>Al, but the astrophysical scenario is not precisely known.
b. Explosive nucleosynthesis
The time scale of explosive nucleosynthesis is so short that beta decay has no time to operate. Under these conditions, a host of isotopes are made in the form of their radioactive (proton-rich) progenitors. Among them, <sup>56</sup>Ni, <sup>57</sup>Ni and <sup>44</sup>Ti are of the highest interest for gamma-ray line astronomy. <sup>26</sup>Al is presumably also formed in core collapse supernovae, and additionally by neutrino spallation during the explosion. The mass of the radioactive isotopes generated in core collapse supernovae depends on the temperature and density reached behind the shock wave and on the mass cut (the frontier between the imploding core, which gives rise to a neutron star, and the ejected material). These parameters are uncertain, since they are sensitive to fine details of the modelization of the presupernova structure and evolution and to the detailed hydrodynamical treatment of the explosion itself. Moreover, the escape of gammas depends on internal mixing and instabilities. In thermonuclear supernovae (SNIa), the scenario is different: the exploding object is a white dwarf (WD) overloaded by the material accreted from a companion. When the critical (Chandrasekhar) mass of about 1.4 Mo is exceeded, the star loses its stability. Degenerate carbon burning becomes explosive and a large fraction of the WD is transformed into <sup>56</sup>Ni. Nova explosions, originating also from WDs in binary systems, are much more frequent than SN ones, but they are less spectacular, and their gamma-ray line emission is only observable in the Galaxy. Accretion proceeds at a different rate from that leading to SNIa explosions. The explosive eruption of these objects should release substantial amounts of <sup>7</sup>Be, <sup>22</sup>Na and <sup>26</sup>Al. Here the main uncertainties concern the amount of matter ejected and the treatment of convection, which is always a problem in the modelization of stellar objects.
### 3.2 Non thermal nuclear excitation
The second mode of production of gamma-ray lines is associated with flows of fast particles. The interaction of the energetic nuclei generated by shock-wave acceleration with the ambient target medium could, in principle, produce a wealth of gamma-ray lines. Observations of C and O lines at 4.4 and 6.1 MeV would be the clue to the irradiation of molecular clouds by nuclei of helium, carbon and oxygen produced and accelerated to moderate energy by massive stars (WR, SN).
## 4 OBSERVATIONAL HIGHLIGHTS
### 4.1 Galactic Disk
The 1.809 MeV emission due to <sup>26</sup>Al decay, deduced from the COMPTEL observations through a sophisticated procedure, has been correlated with the free-free emission of the galactic disk observed by COBE in the microwave regime. Extended sources such as Vela (SNR, WR) and Cygnus (massive stars) have been located. The distribution of <sup>26</sup>Al emission seems to indicate that massive stars are the main sources of <sup>26</sup>Al.
### 4.2 SN 1987A
This core collapse supernova exploded in the Large Magellanic Cloud and appeared in the southern sky in February 1987. It has been observed by many telescopes, including neutrino detectors. Its light curve has shown an exponential decay with a 77-day time constant, commensurate with the mean lifetime of <sup>56</sup>Co. The passage from opacity (inherent to stars) to interstellar transparency was induced by the thinning of the remnant under the effect of its expansion. Moreover, gamma-ray lines at 0.847 and 1.238 MeV from <sup>56</sup>Co decay have been directly detected, as has the 0.122 MeV line from <sup>57</sup>Co decay. Gamma-ray emission took place earlier than predicted, leading to speculation about internal mixing. The 57/56 isotopic ratio deduced is close to solar. The amount of iron synthesized is 0.07 Mo. This is the most precise determination of iron production by a supernova ever achieved. These observations can be considered a striking confirmation of the theory of explosive nucleosynthesis.
### 4.3 Cas A
Cas A is a young SN remnant ($`\sim `$300 yr) located at a distance of 2.8 kpc. The gamma-ray line from <sup>44</sup>Ti decay has been detected by COMPTEL. Thanks to a specific supernova model, the mass of <sup>56</sup>Ni produced and ejected has been deduced from the mass of <sup>44</sup>Ti observed. The <sup>56</sup>Ni mass is so high that this supernova should have been easily visible at the time of its maximum brightness, whereas no historical record bears its trace. The Cas A puzzle can be solved, however, if one imagines that the explosion of the central object has been hidden by the thick and dusty wind of the progenitor star.
### 4.4 Orion and Vela complexes
The COMPTEL instrument has perhaps detected a flux of gamma rays characteristic of the deexcitation of carbon and oxygen from the Orion molecular cloud complex, located at a distance of about 500 light-years from our Solar System. A recent spectrum taken from the Vela superbubble, at about the same distance, is similar to that of Orion. In these regions, marked by the explosion of several supernovae, massive stars abound. However, the detections are still marginal and debated. Future observations will clarify the situation.
## 5 INTEGRAL MISSION
The ESA (European Space Agency) scientific mission INTEGRAL (International Gamma-Ray Astrophysics Laboratory) is dedicated to fine spectroscopy, owing to the use of germanium detectors $`(E/\mathrm{\Delta }E=500)`$, and to imaging of celestial gamma-ray sources in the energy range 15 keV to 8 MeV. INTEGRAL will be launched in the beginning of the next decade by a Russian PROTON rocket into a highly eccentric 72-hour orbit. The nominal lifetime of the observatory will be 2 years, with possible extension to up to 5 years. Most of the observing time will be made available to the worldwide scientific community. The scientific goals of INTEGRAL are addressed through the simultaneous use of high-resolution spectroscopy with fine imaging and accurate positioning of celestial sources in the gamma-ray domain. Fine spectroscopy over the entire energy range will permit spectral features to be uniquely identified and line profiles to be determined for physical studies of the source region. The gamma-ray emission from the galactic plane will be mapped on a wide range of angular scales, from arc-minutes to degrees, in both discrete thermal and non-thermal nucleosynthesis lines, from <sup>26</sup>Al, C\* and O\*, together with the 511 keV line and the wide-band continuum. At the same time, source positioning at the arc-minute level within a wide field of view, of both continuum and discrete line emissions, is required to allow an extensive range of astrophysical investigations to be performed on a wide variety of sources, both targeted and serendipitous, with a good chance of identification at other wavelengths. Measurements with INTEGRAL of the shapes of the gamma-ray line profiles from supernovae, particularly SNIa in the Virgo cluster of galaxies, will provide information about the expansion velocity and density distribution inside the ejected envelope, whilst the relative intensities of the lines will provide direct insight into the physical conditions at the time of production.
## 6 CONCLUSION
A golden age of nuclear gamma-ray line astronomy is opening up in Europe, at the point of convergence of nuclear physics and astrophysics. The time is ripe to join our efforts. Explicit references can be found in the proceedings of the INTEGRAL symposia (Saint Malo, 1997 (1), and Taormina, 1999 (2)) published by the European Space Agency.
This work was supported in part by the PICS 319, CNRS at the Institut d’Astrophysique de Paris.
# A New Argument Against An Intervening Stellar Population Toward the Large Magellanic Cloud
## 1 Introduction
Zaritsky & Lin (1997) found a concentration of stars approximately 0.9 magnitudes above the red clump (RC) in a color magnitude diagram (CMD) of the Large Magellanic Cloud (LMC). They suggested that this concentration of stars might trace a foreground population of ordinary stars, and this foreground population might be responsible for a large fraction of the microlensing events seen toward the LMC by Alcock et al. (1997a) and Aubourg et al. (1993).
A number of workers raised a diverse set of objections to this hypothesis. Alcock et al. (1997b) showed that if the putative foreground population lay within 33 kpc (i.e., 0.9 mag for an assumed LMC distance of 50 kpc), then it contained no detectable population of RR Lyrae stars. Beaulieu & Sackett (1998) showed that a vertical red clump (VRC), that is a vertical extension to the usual red clump, is a typical feature of CMDs for populations of mixed age, and hence the presence of a VRC did not necessarily indicate a foreground population. Gallart (1998) showed that such features are present in the Fornax and Sextans A dwarf galaxies. I argued (Gould 1998) that if such a foreground structure were composed of tidal debris, then either it should have shown up in de Vaucouleurs’ (1957) map of the LMC or it must have an anomalously high mass-to-light ($`M/L`$) ratio to account for the microlensing events. Johnston (1998) showed that tidal debris from disrupted satellites would give rise to unacceptably high star counts away from the LMC if it were to account for the microlensing events seen toward the LMC. Bennett (1998) related the surface density of RC stars to the total surface mass density of their parent stellar distribution and showed that for typical stellar populations the density of the VRC reported by Zaritsky & Lin (1997) was too low by an order of magnitude to account for the microlensing.
Zaritsky et al. (1999) have addressed each of the objections in turn. They said that it was possible to construct an initial mass function with a much higher ratio of total mass to RC stars than for the “typical” parameters advocated by Bennett (1998). They argued that the foreground population could be at 40 kpc, rather than the 33 kpc originally proposed by Zaritsky & Lin (1997), thus evading the constraint of Alcock et al. (1997b). They pointed out that while certain star formation histories could well explain the VRC as a feature of the LMC CMD as advocated by Beaulieu & Sackett (1998), such histories were not demanded by the available data, and indeed an independently constructed history yields only a small fraction of the observed VRC. They argued that Johnston’s (1998) analysis does not apply to tidal material from an SMC-LMC interaction or from a denser than expected LMC halo. Finally they quoted from de Vaucouleurs (1957) to make it appear that he himself did not believe the outer isophotes of his map, thereby apparently dispensing with my argument (Gould 1998).
It is not my purpose here to critically examine all of these counter-arguments which would require a major investigation in its own right. Rather, I present a new argument against the hypothesis that the VRC traces a significant lensing population.
## 2 Transverse Speed of the Lenses
The speed of the lenses relative to the observer-source line of sight, $`v_{\perp }`$, is related to the observed timescale of the events $`t_\mathrm{E}`$ by
$$v_{\perp }=190\mathrm{km}\mathrm{s}^{-1}\left(\frac{t_\mathrm{E}}{40\mathrm{days}}\right)^{-1}\left(\frac{M}{0.25M_{\odot }}\right)^{1/2}\left(\frac{\widehat{D}}{10\mathrm{kpc}}\right)^{1/2},\qquad \widehat{D}\equiv \frac{d_{\mathrm{ol}}d_{\mathrm{ls}}}{d_{\mathrm{os}}},$$
(1)
where $`M`$ is the mass of the lens, and $`d_{\mathrm{ol}}`$, $`d_{\mathrm{ls}}`$, and $`d_{\mathrm{os}}`$ are the distances between the observer, lens, and source. This equation summarizes the major difficulty in explaining the lenses as halo objects: if the lenses were in the halo ($`\widehat{D}\sim 10\mathrm{kpc}`$) and were substellar objects ($`M<0.08M_{\odot }`$), then to produce events with the observed time scales ($`t_\mathrm{E}\sim 40`$ days), they must be moving with typical speeds $`v_{\perp }\sim 110\mathrm{km}\mathrm{s}^{-1}`$, which is more than a factor of two smaller than the speeds expected from halo dynamics. Hence, if they are in the halo, they are not made of hydrogen: substellar objects would be moving too slowly while stellar objects made of hydrogen would burn and be visible (e.g. Gould, Flynn, & Bahcall 1998 and references therein).
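For reference, equation (1) is just the definition of the Einstein timescale inverted: the event lasts one Einstein-radius crossing time, so that (standard microlensing relations, not material specific to this paper)

$$t_\mathrm{E}=\frac{r_\mathrm{E}}{v_{\perp }},\qquad r_\mathrm{E}=\left(\frac{4GM\widehat{D}}{c^2}\right)^{1/2},$$

and evaluating $`r_\mathrm{E}/t_\mathrm{E}`$ at the fiducial values $`M=0.25M_{\odot }`$, $`\widehat{D}=10\mathrm{kpc}`$, $`t_\mathrm{E}=40`$ days reproduces the $`190\mathrm{km}\mathrm{s}^{-1}`$ normalization.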
Equation (1) can also be used to draw significant conclusions about the putative foreground structure claimed by Zaritsky & Lin (1997). If this structure is composed of ordinary stars ($`M\sim 0.25M_{\odot }`$), and if it lies 0.9 mag (17 kpc) in front of the LMC ($`\widehat{D}=11\mathrm{kpc}`$), then it must be travelling at $`v_{\perp }\sim 200\mathrm{km}\mathrm{s}^{-1}`$ relative to the line of sight to the LMC. This is approximately what would be expected for a random object travelling through the Galactic halo such as a dwarf galaxy or tidal debris from a disrupted dwarf (Zhao 1998). However, it is substantially too high for material associated with the LMC.
Hence, if the claimed foreground structure is truly responsible for a substantial fraction of the microlensing events, then there are two possibilities: either the foreground structure is not associated with the LMC, or it is associated but is composed of objects that are substantially lighter than the mass of typical stars, $`M0.25M_{}`$. I examine these two possibilities in turn, beginning with the hypothesis of a chance alignment of a structure unassociated with the LMC.
The a priori probability of such an alignment is incredibly small. Recall from Gould (1998) that the surface mass density required to explain the observed microlensing optical depth, $`\tau \simeq 2.9\times 10^{-7}`$, is
$$\mathrm{\Sigma }=47\frac{\tau }{2.9\times 10^{-7}}\left(\frac{\widehat{D}}{10\mathrm{kpc}}\right)^{-1}M_{\odot }\mathrm{pc}^{-2}.$$
(2)
For a Milky-Way disk $`M/L=1.8`$, this corresponds to about $`V=22.8`$ mag arcsec<sup>-2</sup>. By comparison, the central surface brightnesses of the Sculptor and Sextans dwarf spheroidal galaxies are respectively 23.7 and 26.1 mag arcsec<sup>-2</sup> (Mateo et al. 1991). Moreover, the core radii of these galaxies are only 9 and 15 arcmin respectively (Mateo et al. 1991), much smaller than the several square degrees required to account for the microlensing events. The fraction of the high-latitude sky covered by dwarf galaxies of even these low surface brightnesses is well under $`10^{-4}`$. (Note that the a priori probability of a structure associated with the LMC would not be affected by this argument.)
## 3 Expected Radial-Velocity Difference
However, it is not entirely appropriate to apply a priori statistical arguments to the presence of a dwarf galaxy in front of the LMC. The fact is that microlensing events have been discovered toward the LMC and all the explanations offered so far are a priori unlikely. If evidence is produced for an intervening stellar population after the detection of the microlensing events (e.g. Zaritsky & Lin 1997), then the low a priori probability for such a population carries less weight.
Nevertheless, this putative detection brings with it the means for an additional, truly a priori test. If the intervening population is not associated with the LMC, then its radial motion relative to the LMC should be a random value drawn from a distribution characteristic of Galactic satellites. On the other hand, if the VRC is actually composed of LMC stars, the two radial velocities should be consistent.
I find that the root-mean-square galactocentric radial velocity of 16 satellite galaxies and distant globular clusters at high latitude (422-213, AM1, Carina, Draco, Fornax, LMC, LeoI, LeoII, NGC2419, Pal3, Pal4, Pal14, Sculptor, Sextans, SMC, and UMi) is $`\sigma _{\mathrm{sat}}=86\mathrm{km}\mathrm{s}^{-1}`$.
$$\mathrm{\Delta }\overline{v}=\overline{v}_{VRC}-\overline{v}_{RC}=5.8\pm 4.9\mathrm{km}\mathrm{s}^{-1},$$
(3)
where (in contrast to Zaritsky et al. 1999) I am quoting $`1\sigma `$ errors. This confirms the results of Ibata, Lewis, & Beaulieu (1998) based on a smaller sample and is just what one would expect if the VRC stars were part of the LMC. However, the a priori probability that the two populations would be this close if they were not associated is
$$p\simeq \frac{2\mathrm{\Delta }\overline{v}}{(2\pi )^{1/2}\sigma _{\mathrm{sat}}}\mathrm{exp}\left(-\frac{v_{\mathrm{LMC}}^2}{2\sigma _{\mathrm{sat}}^2}\right)\simeq 4\%,$$
(4)
where $`v_{\mathrm{LMC}}=73\mathrm{km}\mathrm{s}^{-1}`$ is the galactocentric radial velocity of the LMC.
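As a quick numerical check of equation (4), using the quoted values (a plain evaluation, nothing model-dependent):

```python
import math

dv, sigma, v_lmc = 5.8, 86.0, 73.0   # km/s, values quoted in the text
p = (2 * dv / (math.sqrt(2 * math.pi) * sigma)
     * math.exp(-v_lmc**2 / (2 * sigma**2)))
print(f"p = {p:.3f}")                # ~0.037, i.e. about 4%
```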
In brief, there are two distinct statistical arguments against the VRC being a foreground structure that is not associated with the LMC: first it is unlikely ($`<10^{-4}`$) that such a structure would happen to be aligned with the LMC, and second it is unlikely (4%) that it would have a radial velocity consistent with that of the LMC. Together, these two arguments effectively rule out this possibility.
## 4 Mass Scale of Foreground Population
I therefore turn to the second possibility discussed in § 2, that the masses of the foreground objects are substantially smaller than those of typical stars. To investigate this possibility, it is first necessary to estimate the typical transverse velocities of populations that are associated with the LMC. For a foreground population with the same space velocity as the LMC but lying a distance $`d_{\mathrm{ls}}`$ in front of it, the transverse speed of the lens population relative to the Earth-LMC line of sight is
$$v_{\perp ,\mathrm{bulk}}=\mu _{\mathrm{LMC}}d_{\mathrm{ls}}=109\mathrm{km}\mathrm{s}^{-1}\frac{d_{\mathrm{ls}}}{17\mathrm{kpc}}$$
(5)
where $`\mu _{\mathrm{LMC}}=1.35\mathrm{mas}\mathrm{yr}^{-1}`$ is the proper motion of the LMC (Jones, Klemola, & Lin 1994). To find $`v_{\perp }`$, this bulk motion must be added to any peculiar motion of the sources and lenses relative to the assumed common space motion. The internal dispersion of the VRC is small, $`18\mathrm{km}\mathrm{s}^{-1}`$ (Zaritsky et al. 1999), and therefore can be ignored. The bulk motion of the foreground structure relative to the LMC must also be small to evade the arguments given in the last two sections about unassociated structures, so this will also be ignored. However, the LMC sources are rotating at $`70\mathrm{km}\mathrm{s}^{-1}`$ and this must be included. It should first be multiplied by the projection factor $`d_{\mathrm{ol}}/d_{\mathrm{os}}`$ and then added in quadrature (because the microlensing observations cover a sufficiently large part of the LMC that all directions of motion are effectively covered). Hence, at $`d_{\mathrm{ls}}=17\mathrm{kpc}`$, the expected $`v_{\perp }=119\mathrm{km}\mathrm{s}^{-1}`$. Inserting this value into equation (1) yields a typical mass $`M\sim 0.09M_{\odot }`$ at $`d_{\mathrm{ls}}=17\mathrm{kpc}`$. At $`d_{\mathrm{ls}}=10\mathrm{kpc}`$, the same argument yields $`M\sim 0.06M_{\odot }`$. Another possibility for a foreground population at $`d_{\mathrm{ls}}=10\mathrm{kpc}`$ is that it is a bound satellite orbiting about the LMC at $`70\mathrm{km}\mathrm{s}^{-1}`$. However, this scenario leads to essentially the same mass, $`M\sim 0.07M_{\odot }`$. For distances $`d_{\mathrm{ls}}\lesssim 10\mathrm{kpc}`$, it is no longer plausible that the foreground population would give rise to the observed VRC which peaks 0.9 mag brighter than the RC. Thus, $`M\sim 0.08M_{\odot }`$ is a robust estimate of the characteristic mass of the putative foreground population.
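A short numerical check of the chain of estimates in this section, using the fiducial numbers from the text (the projection factor for the rotation speed assumes $`d_{\mathrm{os}}=50`$ kpc):

```python
import math

d_os, d_ls = 50.0, 17.0                 # kpc
d_ol = d_os - d_ls
D_hat = d_ol * d_ls / d_os              # ~11.2 kpc
mu_kms = 1.35 * 4.74 * d_ls             # 1 mas/yr at 1 kpc = 4.74 km/s
v_rot = 70.0 * d_ol / d_os              # projected LMC rotation speed
v_perp = math.hypot(mu_kms, v_rot)      # ~119 km/s
# invert eq. (1): M scales as (v_perp * t_E)^2 / D_hat
M = 0.25 * (v_perp / 190.0) ** 2 * (10.0 / D_hat)
print(f"v_perp = {v_perp:.0f} km/s, M = {M:.2f} Msun")  # ~0.09 Msun
```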
Since $`M\sim 0.08M_{\odot }`$ is approximately where hydrogen burning begins, this result implies that of order half the mass in the putative foreground structure lies below the hydrogen-burning limit. While this is possible in principle, it should be noted that in the solar neighborhood, substellar objects account for only about 1/6 of the total stellar and substellar mass (Holmberg & Flynn 1999 and references therein).
Acknowledgements: I thank B.S. Gaudi for a careful reading of the manuscript. This work was supported in part by grant AST 97-27520 from the NSF.
# Physical Conditions in the Emission-Line Gas in the Extremely Low-Luminosity Seyfert Nucleus of NGC 4395

Based in part on observations made with the NASA/ESA Hubble Space Telescope. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under the NASA contract NAS5-26555. Based in part on observations made with the Infrared Space Observatory, an ESA project with instruments funded by ESA Member States with the participation of ISAS and NASA.
## 1 Introduction
NGC 4395 is a nearby (d $`\sim `$ 2.6 Mpc; Rowan-Robinson 1985), late-type (Sd IV) dwarf galaxy which, in general characteristics, bears a resemblance to the Large Magellanic Cloud (LMC). Both show a weak bar-like structure, bright H II regions, and possess subsolar abundances of heavy elements (for the LMC, see Russell & Dopita 1992; for NGC 4395, see Roy et al. 1996). However, there is a bright point source very near the center of symmetry of NGC 4395. Optical and UV spectra of this source reveal a featureless continuum and strong emission lines from a wide range of ionization states and critical densities.
Although the bolometric luminosity of the central knot ($`\sim `$ 1.5 x 10<sup>40</sup> ergs s<sup>-1</sup>) could be produced by massive stars, there is ample evidence that NGC 4395 harbors a dwarf Seyfert 1 nucleus. This is discussed in detail in two previous papers (Filippenko & Sargent 1989, hereafter FS89; Filippenko, Ho, & Sargent 1993, hereafter FHS93), and here we will only summarize the evidence. First, the optical spectrum shows coronal lines, such as \[Ne V\] $`\lambda \lambda `$3346, 3426 and \[Fe VII\] $`\lambda `$6087, which indicate the presence of a significant flux of photons with energies above 97 eV. Although it is possible that such energetic radiation can be produced by stars (Terlevich & Melnick 1985), it is a straightforward consequence of the power-law continua associated with the active galactic nucleus (AGN) phenomenon. In fact, the optical to UV continuum radiation emitted by the central source is best characterized as a featureless power law, and the optical continuum flux has been observed to vary by as much as 20% on timescales $`\sim `$ 1 day (Lira, Lawrence, & Johnson 1999; but see Shields & Filippenko 1992). The active nucleus appears as an unresolved point source in the radio with a nonthermal spectrum (Sramek 1992), and has been detected in the soft X-ray band (0.1 - 2.4 keV) with the ROSAT/PSPC (Moran et al. 1999). All this suggests that there is a compact energy source, analogous to Seyfert 1 nuclei. Finally, the permitted lines have broad wings (FWHM $`>`$ 1000 km s<sup>-1</sup>), which are absent from any of the forbidden lines, indicating dense (n<sub>H</sub> $`>`$ 10<sup>9</sup> cm<sup>-3</sup>), fast-moving gas, which is best explained by material close to the central mass/ionization source, and is one of the defining characteristics of a Seyfert 1 nucleus.
It is fascinating that the nucleus of NGC 4395 appears to possess all of the basic properties of type 1 Seyfert galaxies and QSOs; the fundamental physics of the AGN phenomenon thus applies over at least eight orders of magnitude in luminosity! This object permits us to extend studies of luminosity-dependent effects (e.g., the Baldwin effect) to the lowest extreme. Furthermore, NGC 4395 is a very late-type galaxy, lacking a central bulge, unlike virtually every other known Seyfert galaxy. Finally, NGC 4395 is close to us, which allows us to examine the host galaxy with hitherto unprecedented spatial resolution.
In this paper we examine in detail the physical conditions of the Seyfert nucleus of NGC 4395. In addition to the Hubble Space Telescope/Faint Object Spectrograph (HST/FOS) spectra presented in FHS93, we include new ground-based optical spectra and IR emission-line spectra taken with the Infrared Space Observatory (ISO). The combination of UV, optical, and IR data provides the widest range of emission-line diagnostics, as well as an opportunity to deredden the lines based on the He II recombination ratios. We will use multi-component photoionization models to match the dereddened emission-line ratios and probe the physical conditions in both the broad-line region (BLR) and narrow-line region (NLR) in this AGN.
## 2 Observations and Data Analysis
### 2.1 UV and Optical Spectra
We obtained UV spectra of NGC 4395 with HST/FOS using the G130H, G190H, and G270H gratings. The UV spectra cover the wavelength range 1150 – 3300 Å at a spectral resolution of $`\lambda `$/$`\mathrm{\Delta }\lambda `$ $`\sim `$ 1000 (FWHM $`=`$ 1.2 – 3.3 Å over the FOS waveband), and are described in more detail in FHS93. We obtained high-dispersion optical spectra during the course of an extensive spectroscopic survey of nearby galaxies conducted with the CCD Double Spectrograph (Oke & Gunn 1982) on the Hale 5 m telescope at Palomar Observatory (see Filippenko & Sargent 1985, and Ho, Filippenko, & Sargent 1995, for details of the survey). The 1<sup>′′</sup> wide slit yielded spectral resolutions of FWHM $`\sim `$ 2.3 Å and 1.6 Å in the blue (4200–5100 Å) and red (6200–6800 Å) spectra, respectively. We also obtained spectra of moderate dispersion on several occasions at Lick Observatory using the Kast spectrograph (Miller & Stone 1993) mounted on the Shane 3 m reflector. These observations used a 2<sup>′′</sup> slit, and the wavelength coverage extended from near the atmospheric cutoff (3200 Å) to almost 1 $`\mu `$m with a typical FWHM resolution of 5–8 Å. In all cases, we oriented the slit along the parallactic angle to minimize light losses due to atmospheric dispersion, and we observed featureless spectrophotometric standard stars to calibrate the relative fluxes of the spectra. One-dimensional spectra were extracted using a 4<sup>′′</sup> window centered on the nucleus, with the sky background determined from regions along the slit far from the nucleus. The reduction and calibration of the data followed conventional procedures for long-slit spectroscopy, such as those described in Ho et al. (1995).
The absolute fluxes of the UV spectra are uncertain, as evidenced by $`\sim `$30% discrepancies in the continuum fluxes of individual FOS spectra in the regions of overlap (possibly due to aperture miscentering). We scaled the UV spectra to match each other and to match the optical spectra obtained on spectrophotometric nights. Based on a comparison of these optical spectra, we expect that the uncertainty in the absolute flux calibration is $`\sim `$20%, in addition to the measurement errors that we quote later.
Figure 1 shows the UV spectrum and a representative optical spectrum, and demonstrates the large number of emission lines available for photoionization modeling. The broad and narrow components of the permitted emission lines can most easily be seen in the C IV $`\lambda `$1550 line, as shown in Figure 2. The width of the broad C IV $`\lambda `$1550 is $`\sim `$ 5000 km s<sup>-1</sup>, while broad H$`\beta `$ has a FWHM $`\sim `$ 1500 km s<sup>-1</sup> (also see Figure 2), which is evidence for stratification in the BLR. However, as we will show in Section 3, the BLR gas is well represented by a single set of physical parameters. Although the forbidden lines are quite narrow (\[O III\] $`\lambda `$5007 has a FWHM $`\sim `$ 50 km s<sup>-1</sup>, FS89), high-resolution (FWHM $`=`$ 8 km s<sup>-1</sup>) optical spectra obtained with the Keck 10-m telescope (Filippenko & Ho 1999) show differences in the profiles of the low-ionization narrow lines, such as \[O I\] $`\lambda `$6300 and \[S II\] $`\lambda `$6731, compared to higher ionization lines, such as \[O III\] $`\lambda `$5007 and \[S III\] $`\lambda `$6312. This is evidence that the NLR of NGC 4395 is stratified as well. Unlike the BLR, the NLR cannot be represented by a single set of physical parameters, as we will discuss in Section 3.
### 2.2 Mid-Infrared Spectra from ISO
We acquired the IR spectra with the Short Wavelength Spectrometer (SWS) using the AOT 2 mode to cover selected wavelength intervals for measurement of specific lines, at an average spectral resolution of $`\lambda `$/$`\mathrm{\Delta }\lambda `$ $`\sim `$ 1400. Spectral regions centered at the redshifted ($`z=`$ 0.00101, Haynes et al. 1998) wavelengths of \[S IV\] 10.51 $`\mu `$m and \[S III\] 33.48 $`\mu `$m were observed simultaneously with a total integration time of 10,400 s, while regions appropriate for \[Si IX\] 3.94 $`\mu `$m and \[O IV\] 25.89 $`\mu `$m were observed simultaneously with a total integration time of 18,600 s. While observing the source, the grating was stepped between readouts to produce a scan across the detectors of 200 s duration, followed by measurement (with multiple readouts) of the dark current.
We reduced the spectra using the Interactive Analysis and ISO Spectral Analysis version 1.3a software packages. Standard Processed Data generated by the calibration pipeline (version 5.2) were employed. Dark current measurements were interactively trimmed of large noise excursions using a 3$`\sigma `$ clipping routine, and dark current estimates for each scan were generated from a mean of dark measurements immediately bracketing the scan in time. Individual scans with discontinuous jumps in dark current levels were interactively removed on a detector-by-detector basis, and the measured dark current levels were subtracted from the remaining data. Standard corrections were applied for detector response and flux calibration, as well as wavelength correction to the heliocentric frame. For each detector, fluxes were averaged across the entire set of observations for a given line; individual scans were then shifted by a constant additive offset to bring their flux level into agreement with this value, as a correction for residual errors in the dark current removal. An overall spectrum was then generated for each detector by taking a median of individual scan spectra with 3$`\sigma `$ clipping. After shifting again to a common mean flux level, the spectra for individual detectors were then used to construct a weighted average representing the final spectrum. This process was repeated with the data subdivided according to scan direction (up or down) as a consistency check on the results.
The ISO spectra are shown in Figure 3. The plots display the full wavelength coverage of the data, corresponding to the wavelength coverage per scan for each of the four lines. We have strong detections of the \[S IV\] and \[O IV\] lines, whereas the \[Si IX\] and \[S III\] lines are not visible.
### 2.3 Measurements
We measured the fluxes of the narrow lines that lack broad counterparts directly, whereas for severely blended lines like H$`\alpha `$ and \[N II\] $`\lambda \lambda `$6548, 6583, we used the \[O III\] $`\lambda `$5007 profile as a template to deblend the lines (see Crenshaw & Peterson 1986). To measure the broad component of the permitted lines, we deblended and removed the narrow components, and determined the remaining flux above a local continuum. For some lines, the broad component was expected to be present but was too weak for a reasonable flux measurement.
We determined the reddening of the narrow emission lines from the He II $`\lambda `$1640/$`\lambda `$4686 ratio and the Galactic reddening curve of Savage & Mathis (1979). The He II lines are due to recombination and are therefore essentially insensitive to temperature and density effects; we adopt an intrinsic value of 7.2 for this ratio, which is consistent with our model values (Section 4). The observed He II ratio is 6.0 $`\pm `$ 0.8, which yields a reddening of $`E_{B-V}`$ $`=`$ 0.05 $`\pm `$ 0.03 mag. The reddening from our own Galaxy is $`\sim `$ 0.03 mag (Murphy et al. 1996).
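Since this dereddening enters all subsequent line ratios, we note that the calculation is elementary; the sketch below reproduces it. The extinction-curve values of 8.1 at 1640 Å and 3.7 at 4686 Å (in units of A<sub>λ</sub>/E<sub>B-V</sub>) are approximate readings of the Savage & Mathis (1979) curve, assumed here purely for illustration.

```python
import math

def ebv_from_heii(r_obs, r_int=7.2, xi_1640=8.1, xi_4686=3.7):
    """E(B-V) from the He II 1640/4686 ratio: the observed ratio is
    r_int * 10**(-0.4 * E(B-V) * (xi_1640 - xi_4686))."""
    return 2.5 * math.log10(r_int / r_obs) / (xi_1640 - xi_4686)

print(round(ebv_from_heii(6.0), 3))  # ~0.045 mag, i.e., E(B-V) ~ 0.05
```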
Table 1 gives the observed and dereddened narrow-line ratios relative to narrow H$`\beta `$, and errors in the dereddened ratios. Table 2 gives the observed broad-line ratios, relative to broad H$`\beta `$, and dereddened broad-line ratios obtained by assuming the broad lines experience the same amount of reddening as the narrow lines. We determined errors in the dereddened ratios from the sum in quadrature of the errors from three sources: photon noise, different reasonable continuum placements, and reddening.
## 3 Photoionization Models
As in our previous studies (e.g., Kraemer et al. 1998a), we have attempted to keep the number of free parameters to a minimum by using the best available observational constraints and the simplest of assumptions. For example, the size of the emission-line region predicted by the models could not exceed the upper limits from ground-based measurements ($`\sim `$ 13 pc, FHS93). The spectral energy distribution (SED) of the ionizing continuum radiation was determined by simple fits to the observed fluxes in the UV and X-ray. Since it is apparent that there are distinct broad and narrow emission-line regions in the nucleus of NGC 4395, which cannot be modeled assuming a single density, we initially assumed a single component each for the BLR and NLR gas. Additional components were added to improve the fit to the observed emission-line ratios, but only if they met the observational constraints.
The details of the photoionization code are given in Kraemer (1985) and other papers (cf. Kraemer et al. 1994). Following convention, our photoionization models are parameterized in terms of the number of ionizing photons per hydrogen atom at the illuminated face of the cloud, referred to as the ionization parameter:
$$U=\frac{1}{4\pi D^2n_Hc}\int _{\nu _0}^{\infty }\frac{L_\nu }{h\nu }\,d\nu ,$$
(1)
where $`L_\nu `$ is the frequency-dependent luminosity of the ionizing continuum, $`n_H`$ is the number density of atomic hydrogen, $`D`$ is the distance between the cloud and the ionizing source, and $`h\nu _0`$ = 13.6 eV.
In the following subsections, we will discuss how values were assigned to the various input parameters.
### 3.1 Elemental Abundances
Several studies have addressed the elemental abundances in the H II regions within NGC 4395, and there is strong evidence that the O/H ratio is significantly subsolar (Vila-Costas & Edmunds 1993; Roy et al. 1996; van Zee, Salzer, & Haynes 1998). Roy et al. (1996) derived a value of log(O/H) $`=`$ $`-`$3.7 for an H II region within $`\sim `$ 0.1 kpc of the nucleus, based on the \[N II\]/\[O III\] ratio and the semi-empirical calibration of Edmunds & Pagel (1984). The accuracy of this estimate and its appropriateness for the nucleus are uncertain, however, since results reported by Roy et al. for other regions in this galaxy show a scatter spanning log(O/H) = $`-`$4.2 to $`-`$3.4, with no clear radial trend (see also Vila-Costas & Edmunds 1993). Our test calculations suggest that the \[O I\] $`\lambda `$6300/H$`\beta `$ ratio is best matched with an abundance of at least log(O/H) = $`-`$3.5 ($`\sim `$ 1/2 solar; Grevesse & Anders 1989), in plausible agreement with the extranuclear results, and therefore we adopt this value for our analysis.
The evidence is strong that the N/O abundance ratio in NGC 4395 is also subsolar. Estimates based on optical \[N II\]/\[O II\] line ratios for H II regions in this galaxy generally fall in the range of log(N/O) $`\sim `$ $`-`$1.5 to $`-`$1.2 (corresponding to 0.2 – 0.4 times solar; Vila-Costas & Edmunds 1993; van Zee, Salzer, & Haynes 1998). Abundances in this range are typical of H II regions in low-metallicity galaxies, and are interpreted in terms of the combined effects of primary and secondary nitrogen production (van Zee et al., and references therein).
For the nucleus, FS89 have previously commented on the weakness of the optical nitrogen lines relative to \[O I\] $`\lambda `$6300 and \[S II\] $`\lambda \lambda `$6716, 6731, which may be taken as evidence of an underabundance of nitrogen. With the current dataset it is possible to estimate directly the N/O ratio within the nucleus using the ratio of the O III\] $`\lambda `$1664/N III\] $`\lambda `$1750 lines, as discussed by Netzer (1997). The theoretical ratio is as follows:
$$\frac{I(\lambda 1664)}{I(\lambda 1750)}=0.41T_4^{0.04}\mathrm{exp}(-0.43/T_4)\frac{N(O^{+2})}{N(N^{+2})},$$
(2)
where $`T_4`$ is the temperature in units of 10,000 K. The N<sup>+2</sup> and O<sup>+2</sup> are expected to show strong overlap, such that the ratio of these ions closely reflects the total abundance ratio. N III\] $`\lambda `$1750 appears to be present (see Figure 1), although weak and near an artifact in the FOS G190H spectrum. Assuming an upper limit to the dereddened strength of this line of 0.1 times that of H$`\beta `$, and $`T=`$ 15,000 K, equation (2) yields N(O<sup>+2</sup>)/N(N<sup>+2</sup>) $`\gtrsim `$ 16, or log(N/O) $`\lesssim `$ $`-`$1.2, consistent with the extranuclear results. Based on these findings, we adopt a ratio of N/O equal to 1/3 solar (i.e., N/H $`=`$ 1/6 solar) for our numerical calculations.
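As an illustration of how equation (2) is applied, the sketch below evaluates it at $`T=`$ 15,000 K. The adopted O III\] $`\lambda `$1664 strength of 0.5 times H$`\beta `$ is a hypothetical value chosen here only so that, together with the 0.1 H$`\beta `$ limit on N III\] $`\lambda `$1750, it reproduces the quoted ionic ratio.

```python
import math

def o2_over_n2(i1664_over_i1750, t4):
    """Invert equation (2): N(O+2)/N(N+2) from the O III] 1664 /
    N III] 1750 intensity ratio at temperature t4 (units of 10^4 K)."""
    coeff = 0.41 * t4**0.04 * math.exp(-0.43 / t4)
    return i1664_over_i1750 / coeff

# hypothetical I(1664) = 0.5 Hbeta; upper limit I(1750) = 0.1 Hbeta
ratio = o2_over_n2(0.5 / 0.1, t4=1.5)
print(round(ratio), round(math.log10(1.0 / ratio), 2))  # 16, -1.2
```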
With the exception of N, the abundances of other elements heavier than He are scaled in proportion to O. The resulting abundances, by number, relative to H are thus He=0.1, C=1.7x10<sup>-4</sup>, O=3.4x10<sup>-4</sup>, N=2.0x10<sup>-5</sup>, Ne=6.0x10<sup>-5</sup>, S=8.0x10<sup>-6</sup>, Si=1.6x10<sup>-5</sup>, Mg=1.6x10<sup>-5</sup>, and Fe=2.0x10<sup>-5</sup>. As discussed below, gas-phase abundances are modified in some cases from these values to reflect depletion onto grains.
### 3.2 The Ionizing Continuum
In these simple models, the gas is photoionized by radiation from the central AGN, and therefore the results depend on what we assume for the SED and total luminosity of the central source. The simplest estimate of the shape of the SED is made by fitting the observed flux in the UV to that in the soft X-ray (from ROSAT/PSPC data) as a simple power law ($`f_\nu \propto \nu ^\alpha `$). However, as noted by Moran et al. (1999), the X-ray continuum from NGC 4395 may be absorbed, and therefore such a fit may not represent the intrinsic SED. Using the HST/FOS spectra, FHS93 fit the continuum below 2000 Å with a power law of index $`=`$ $`-`$1. If, for the sake of convention, we extend this to the Lyman limit, the continuum out to 0.1 keV (the low-energy end of the ROSAT band) can be fit using a power law with an index of $`-`$3. This is clearly too soft to reproduce the observed He II $`\lambda `$4686/H$`\beta `$ ratio. For example, a simple photon-counting calculation, which assumes all photons between 13.6 eV and 54.4 eV go into ionizing hydrogen while photons with energies above 54.4 eV go into ionizing He II (cf. Kraemer et al. 1994), yields a power-law index $`\sim `$ $`-`$1.5. Furthermore, Moran et al. fit the continuum between 0.1 and 2 keV with a power law with a positive spectral index. This is rare in Seyfert 1 spectra, and such occurrences are typically attributed to absorption by an intervening layer of “cold” (i.e., $`<`$ 10<sup>5</sup> K) gas intrinsic to the nucleus of the galaxy (cf. Feldmeier et al. 1999; Kraemer et al. 1998b). Therefore, we modeled the continuum as a power law extending from the Lyman limit to 1 keV, since an absorber would be relatively transparent above 1 keV, and Seyfert X-ray spectra often steepen below 1 keV (e.g., Arnaud et al. 1985; Turner & Pounds 1989). Above 1 keV, we assumed a relatively flat continuum, as is typical of Seyfert galaxies (cf. Nandra & Pounds 1994). The ionizing continuum is then expressed as $`F_\nu `$$`=`$K$`\nu ^\alpha `$, where
$$\alpha =-1.0,h\nu <13.6\mathrm{eV}$$
(3)
$$\alpha =-1.7,13.6\mathrm{eV}\le h\nu <1000\mathrm{eV}$$
(4)
$$\alpha =-0.7,h\nu \ge 1000\mathrm{eV}.$$
(5)
To be conservative, we set $`F_\nu `$ at log($`\nu `$) $`=`$ 15.4 to the observed value from the FOS data. We have estimated $`F_\nu `$ at log($`\nu `$) $`=`$ 17.4 based on a fit to the ROSAT/PSPC data in Moran et al. (1999), with the caveat that this is likely to be a lower limit if the spectrum is indeed absorbed at lower energies. To get the luminosity, we assumed a distance of 2.6 Mpc. Again, if there is absorption at either of these energies, the luminosity assumed here ($`L_{h\nu >13.6\mathrm{eV}}`$ $`\sim `$ 2.5 x 10<sup>39</sup> ergs s<sup>-1</sup>) is an underestimate, a point to which we will return in the Discussion section.
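As a consistency check on this SED, one can integrate it to obtain the ionizing photon rate and then evaluate the ionization parameter of equation (1). The sketch below does this for a cloud with $`n_H`$ $`=`$ 3 x 10<sup>10</sup> cm<sup>-3</sup> at $`D=`$ 3 x 10<sup>-4</sup> pc (the BROAD component of Section 3.3), neglecting the small photon contribution of the flat tail above 1 keV; the result lands within $`\sim `$10% of the adopted $`U=`$ 10<sup>-2.25</sup>.

```python
import math

H, EV, PC, C = 6.626e-27, 1.602e-12, 3.086e18, 3.0e10  # cgs constants

def q_ion(l_ion=2.5e39, alpha=-1.7, e0=13.6, e1=1000.0):
    """Photon rate of a power law L_nu ~ nu**alpha that carries
    luminosity l_ion (erg/s) between photon energies e0 and e1 (eV)."""
    nu0, nu1 = e0 * EV / H, e1 * EV / H
    a = alpha
    norm = l_ion * (a + 1.0) / (nu1**(a + 1.0) - nu0**(a + 1.0))
    return (norm / H) * (nu1**a - nu0**a) / a

q = q_ion()
u = q / (4.0 * math.pi * (3e-4 * PC)**2 * 3e10 * C)
print(f"Q = {q:.2e} s^-1, log U = {math.log10(u):.2f}")  # log U ~ -2.29
```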
### 3.3 Individual Components
As we have noted, the spectra (see Figure 1) of NGC 4395 show emission lines from a wide range of ionization states (e.g., oxygen lines from all ionization states, from O<sup>0</sup> to O<sup>+3</sup>). The optical and UV permitted lines possess broad (FWHM $`\gtrsim `$ 1000 km s<sup>-1</sup>) wings, which are apparently absent from even the highest excitation forbidden lines (e.g., \[Ne V\] $`\lambda `$3426, \[O III\] $`\lambda `$5007). Therefore, a high-density component is required to fit the broad emission lines. The broad H$`\alpha `$/H$`\beta `$ ratio is significantly greater than Case B (see Table 2), providing evidence for collisional excitation from the n $`=`$ 2 state of hydrogen, which is comparable to recombination at $`n_H`$ $`\sim `$ 10<sup>10</sup> cm<sup>-3</sup> (Krolik & McKee 1978; Ferland & Netzer 1979). There does not appear to be a strong broad component of Si III\] $`\lambda `$1892 compared to C III\] $`\lambda `$1909 (although there appears to be some contamination by Fe II, so it is difficult to determine this accurately at low resolution), which constrains the upper limit for the density to $`\sim `$ 5 x 10<sup>10</sup> cm<sup>-3</sup>, at which collisional de-excitation would strongly suppress C III\] $`\lambda `$1909 and, thus, the emission near 1900 Å would begin to be dominated by the Si III\] line, which has a significantly higher critical density. The final combination of ionization parameter and density was chosen to reproduce the observed C IV $`\lambda `$1550/H$`\beta `$ and C IV $`\lambda `$1550/C III\] $`\lambda `$1909 ratios. As such, for this component (BROAD) we choose $`n_H`$ $`=`$ 3 x 10<sup>10</sup> cm<sup>-3</sup> and $`U=`$ 10<sup>-2.25</sup>, at a distance $`D=`$ 3 x 10<sup>-4</sup> pc from the central source. For these conditions, dust temperatures would be greater than the sublimation temperatures of either silicate or graphite grains (i.e., in excess of 2000 K), so BROAD was assumed to be dust-free. The model was truncated at an effective column density (the column density of ionized and neutral hydrogen) $`N_{eff}`$ $`\sim `$ 7 x 10<sup>22</sup> cm<sup>-2</sup>, chosen so that the predicted broad He II $`\lambda `$4686/H$`\beta `$ did not fall too far below the observed value (there would be significant collisional excitation of H$`\beta `$ in a warm, neutral envelope, which would lower this ratio).
As noted in Section 2.1, the narrow-line spectrum shows evidence for ionization stratification in the NLR. Nevertheless, we initially attempted to model the NLR with a single component using an average set of initial conditions. As expected, we found it impossible to produce the relative strengths of high- and low-ionization lines with a single set of input parameters. Consequently, multiple component models were required for a satisfactory fit to the emission-line ratios.
The observed \[O III\] $`\lambda \lambda `$4959, 5007/\[O III\] $`\lambda `$4363 ratio is $`\sim `$ 40, which indicates either an electron temperature in excess of 20,000 K (Osterbrock 1974) or some modest modification of this ratio by collisional effects. Since it is unlikely that such a high electron temperature can occur in the O<sup>+2</sup> zone of photoionized gas, the latter is more plausible. Therefore, we have modeled this component of the NLR (INNER) assuming a density $`n_H`$ $`=`$ 10<sup>6</sup> cm<sup>-3</sup>, which is above the critical density for the \[O III\] $`\lambda `$5007 line. In order to keep from underpredicting the C IV $`\lambda `$1550/C III\] $`\lambda `$1909 ratio, we set $`U=`$ 10<sup>-1.5</sup>, which places it at a distance $`D=`$ 0.023 pc from the central source, and assumed it was dust-free. Integration of the model was truncated when the electron temperature fell below 5000 K and there was no longer any significant contribution to the line emission, making it effectively radiation-bounded.
The \[S II\] $`\lambda `$6716/$`\lambda `$6731 ratio is $`\sim `$ 0.9, which indicates an electron density $`\sim `$ 10<sup>3</sup> cm<sup>-3</sup>; the hydrogen density could be somewhat higher since the ionization potential of S<sup>+</sup> is below the Lyman limit and the \[S II\] lines often arise in warm, neutral gas. Collisional de-excitation suppresses \[O II\] $`\lambda `$3727 at densities $`n_H`$ $`\gtrsim `$ 10<sup>4</sup> cm<sup>-3</sup> (De Robertis & Osterbrock 1984), and, thus, there is a negligible contribution to the $`\lambda `$3727 line from INNER. For these reasons, we included a second NLR component (OUTER) with $`n_H`$ $`=`$ 10<sup>4</sup> cm<sup>-3</sup>, $`U=`$ 10<sup>-4</sup>, and $`D=`$ 3.76 pc. The weakness of narrow Mg II $`\lambda `$2800/H$`\beta `$, particularly compared with other low-ionization lines such as \[O I\] $`\lambda `$6300 and \[S II\] $`\lambda \lambda `$6716, 6731, indicates that magnesium is likely depleted onto dust grains in the lower ionization NLR gas. Therefore, we included cosmic dust in OUTER, with a dust-to-gas ratio scaled by the abundances we have chosen, i.e., approximately 50% of the Galactic value. The fractional depletions are similar to those calculated by Seab & Shull (1983). This component was also radiation-bounded.
We attempted to fit the narrow-line ratios with a 2-component model; however, several emission lines, such as \[Ne IV\] $`\lambda `$2423 and \[O IV\] 25.9 $`\mu `$m, were underpredicted by any combination of the low- and high-density/ionization components described above. Specifically, there appears to be a component of the NLR characterized by low density and high ionization parameter. If this component is the source of the \[O IV\] 25.9 $`\mu `$m emission, its density must not be much greater than the critical density for this transition, i.e., about 10<sup>4</sup> cm<sup>-3</sup>. Therefore, we added a third component (MIDDLE) with the same density and dust fraction as OUTER, but with $`U=`$ 10<sup>-1.7</sup>, at $`D=`$ 0.27 pc. Integration was truncated at $`N_{eff}`$ $`\sim `$ 10<sup>21</sup> cm<sup>-2</sup>, which, although arbitrary, gives MIDDLE the same physical depth as OUTER.
## 4 Model Results
The predicted line ratios for the three NLR models are listed in Table 3. As expected, INNER predicts strong C IV $`\lambda `$1550, C III\] $`\lambda `$1909, \[O III\] $`\lambda `$5007, and \[O III\] $`\lambda `$4363. Nearly all the \[O IV\] 25.9 $`\mu `$m and \[S IV\] 10.5 $`\mu `$m emission comes from MIDDLE, which also predicts strong \[O III\] $`\lambda `$5007, \[Ne IV\] $`\lambda `$2423, and a large He II $`\lambda `$4686/H$`\beta `$ ratio (due to the truncation of the partially neutral zone). Low-ionization lines, such as \[O II\] $`\lambda `$3727, \[S II\] $`\lambda \lambda `$6716, 6731, and \[N II\] $`\lambda \lambda `$6548, 6583, are strongest in OUTER, except for Mg II $`\lambda `$2800, which is affected by depletion onto dust.
In creating a composite model from the three narrow-line components we first fit the low-ionization component and then added the two higher ionization components in a ratio to obtain the best fit to the high-ionization lines (such as C IV $`\lambda `$1550, \[Ne IV\] $`\lambda `$2423, and \[Ne V\] $`\lambda `$3426). The following fractional contributions to the composite narrow-line model were used: OUTER, 50%; INNER, 30%; and MIDDLE, 20%. Although the fit could be marginally improved by fine-tuning the model parameters and the balance between these components, no significant additional insight would be obtained by such an exercise. This three-component model is, of course, a very simple approximation of what is likely to be a complex region consisting of a range of physical conditions. Nevertheless, we can derive some insight into the global physical parameters from these models.
The comparison of the composite model predictions and the dereddened narrow-line ratios is given in Table 4. Overall, the fit is quite good for lines from a variety of ionization states and critical densities, such as Si IV $`\lambda `$1398 $`+`$ O IV\] $`\lambda `$1402, C IV $`\lambda `$1550, \[Ne IV\] $`\lambda `$2423, \[O II\] $`\lambda `$3727, \[O I\] $`\lambda \lambda `$6300, 6364, \[S II\] $`\lambda \lambda `$6716, 6731, and many weaker lines. This indicates that the models provide a reasonable representation of the range in ionization parameter and density of the NLR gas. Also, the quality of the fit for the nitrogen lines is evidence that the chosen N/O ratio is approximately correct. The predicted strengths of the coronal lines, such as \[Fe VII\] $`\lambda `$6087 and \[Ne V\] $`\lambda \lambda `$3346, 3426, are a reasonable match to the observations; hence, there is no need to include the additional collisional heating (due to shocks) suggested by Contini (1997).
There are, as is typical of such simple models, a few discrepancies. First, the predicted He II lines are a bit strong. Although this may be due to the arbitrary truncation of MIDDLE, it may also be an indication that the SED we have assumed for the models is a bit too hard. As noted in Section 3.2, in fitting the UV to X-ray continuum, we have not corrected the UV flux to account for reddening. If the reddening is not negligible, and the derived X-ray flux at 1 keV is not biased by the strong absorption apparent at $`\sim `$ 0.1 keV, the intrinsic continuum may be somewhat softer. However, the spectral index derived from photon counting is close to the value we used ($`-`$1.5, as opposed to $`-`$1.7) and, therefore, we do not think that the intrinsic SED differs significantly from that which we have assumed. The \[O III\] $`\lambda `$5007/\[O III\] $`\lambda `$4363 ratio is a bit low, which could be remedied if the density for INNER were decreased by a factor $`\sim `$ 2. This would also help reduce the relative strength of the C III\] $`\lambda `$1909 line, which is enhanced at the higher density. Mg II $`\lambda `$2800 is a bit too strong, indicating that some dust may exist in the inner NLR, which is plausible, since dust temperatures calculated for INNER would not exceed 700 K.
Although these discrepancies could be eliminated with minor adjustments of free parameters, the underpredictions of the strengths of \[O IV\] 25.9 $`\mu `$m and \[S IV\] 10.5 $`\mu `$m are not as easily remedied. Since the strengths of lines from the same ionization states, such as O IV\] $`\lambda `$1402, are not similarly underpredicted, and the conditions in MIDDLE have been optimized for the high-ionization IR fine-structure lines, the simplest explanation is that these lines arise in gas that is obscured by a layer of dust, and therefore not detected in the UV or optical. If the conditions are identical to those in MIDDLE (\[O IV\] 25.9 $`\mu `$m/H$`\beta `$ $`=`$ 2.0), the contribution from the obscured component would have to be a factor of $`\sim `$ 4 greater than MIDDLE, and its contribution to the observed narrow H$`\beta `$ must be negligible (i.e., $`<`$ 10%). Therefore, the ratio of (\[O IV\] 25.9 $`\mu `$m/H$`\beta `$)<sub>observed</sub>/(\[O IV\] 25.9 $`\mu `$m/H$`\beta `$)<sub>emitted</sub> would be $`\sim `$ 10. From this ratio, we can derive an estimate of the column density of the obscuring layer. Assuming no extinction at 25.9 $`\mu `$m, and the reddening curve of Savage & Mathis (1979), the reddening derived from the line ratios is $`E_{B-V}`$ $`\sim `$ 0.68 mag. This yields a column density $`N_{eff}`$ $`\sim `$ 7 x 10<sup>21</sup> cm<sup>-2</sup>, using the relation given in Shull & Van Steenberg (1985), scaled by the abundances we have assumed for NGC 4395. Although this is somewhat larger than the $`N_{eff}`$ for either of the low-ionization models, the truncation of each was arbitrary, and a larger column for OUTER would not appreciably affect the predicted emission-line ratios. Thus, it is probable that the \[O IV\] emission arises in a region that is obscured by a layer of dusty emission-line gas. If so, there are consequences for the estimates of the covering factor of the ionized gas and the intrinsic luminosity of the central source, which we discuss in Section 5.
The comparison of the predictions from BROAD and the dereddened broad-line ratios is given in Table 5. Again, the overall agreement is quite good, in particular the ratio of C IV $`\lambda `$1550/C III\] $`\lambda `$1909, and the strengths of the Si IV $`\lambda `$1398 $`+`$ O IV\] $`\lambda `$1402 and the He II/H$`\beta `$ ratios. This is interesting, since we chose to represent the BLR with a single (i.e., average) set of physical conditions, while it is apparent that the region is extended, as evidenced by the fact that C IV $`\lambda `$1550 has broader wings than H$`\beta `$. The single discrepancy is the overprediction of the Mg II $`\lambda `$2800 line. It is unlikely that the weakness of the observed line is due to depletion of magnesium onto dust grains, since, as noted above, dust would probably be unable to exist in the BLR of NGC 4395. It is possible that the relative abundances of magnesium and other refractory elements, such as silicon and iron, are somewhat less than what we have assumed for these models, although there is no strong evidence for this, based on the narrow emission line spectrum. Since much of the Mg II $`\lambda `$2800 emission occurs at optical depths $`>`$ 5 at the Lyman limit, it is possible that the physical size of the broad-line clouds is smaller than what we have assumed. If so, a model characterized by a lower value for $`U`$, truncated at a smaller $`N_{eff}`$, might result in a slightly better fit.
## 5 Discussion
As noted above, these models provide a very good fit to the observed emission-line ratios. Also, the radial distance of OUTER, the component furthest from the central source, is 3.76 pc, which is well within the upper limit of 13 pc for the FWHM of the emission-line region (FHS93). Therefore, we feel confident that the model results can be used to estimate some of the global properties of the active nucleus of NGC 4395.
The emitted H$`\beta `$ flux predicted by each NLR model is given in the footnotes for Table 3. From these values, the luminosity assumed for the ionizing source, and the fractional contribution to the narrow H$`\beta `$ luminosity of NGC 4395, we may estimate the “covering factor” of each component, or more specifically, the fraction of ionizing photons intercepted by each component and converted into emission-line photons. If, for the sake of simplicity, we assume that each component can be approximated as a fraction of the surface of a sphere with a radius equal to the distance from the central source, we derive the following covering factors: INNER, 27%; MIDDLE, 61%; and OUTER, 42%. The emitted H$`\beta `$ flux for BROAD is given in Table 5. By comparison to the observed broad H$`\beta `$, we calculate a covering factor of 67% for BROAD. Thus, the total covering factor of the emission-line gas is $`\sim `$ 200%. This implies that the luminosity calculated from the observed continuum fluxes is an underestimate by at least a factor of 2. However, as discussed above, there is also evidence for a significant amount of obscured gas not included in our models. If we include this additional component, the total covering would increase by a factor greater than 2.
As noted in Section 3.2, there is evidence that the observed UV to X-ray continuum may be absorbed. Since the covering factor of the emission-line gas is large, it is possible that the continuum source is occulted by it. In Figure 4, we compare the incident and transmitted ionizing continua for model MIDDLE, the NLR component with the largest covering factor. Since MIDDLE is essentially opaque at the He II Lyman limit, a fit to the 0.1 - 2 keV band yields a positive index, as seen in the ROSAT/PSPC spectrum (Moran et al. 1999). There is also attenuation below the Lyman limit due to dust. The amount of reddening in MIDDLE is $`E_{B-V}`$ $`\sim `$ 0.10 mag, which is only slightly larger than the reddening determined from the emission lines, $`E_{B-V}`$ $`=`$ 0.05 $`\pm `$ 0.03. If the continuum is reddened by this amount, the intrinsic luminosity in the UV could easily be more than a factor of 2 greater than what we have assumed, based on the observed fluxes. In fact, depending on the strength of the 2200 Å bump intrinsic to NGC 4395, the continuum may indeed be somewhat more reddened than the emission lines (Moran et al. 1999). We are led to the following conclusion: the covering factor of the emission-line gas is near unity, and the total luminosity of the central source in the Lyman continuum may be more than 4 times the estimate based on the observed fluxes, or $`L_{h\nu >13.6\mathrm{eV}}`$ $`\sim `$ 1 x 10<sup>40</sup> ergs s<sup>-1</sup>. If so, the radial distances of each of the model components would increase by more than a factor of 2.
The observed equivalent width (EW) of the broad C IV $`\lambda `$1550, 76 Å (FHS93), is a factor of 10 lower than predicted by an extrapolation of the Baldwin relation, a negative correlation between the EW and source luminosity (see Figure 3(a) of Kinney, Rivolo, & Koratkar 1990). However, as the covering factor of the emission-line gas approaches unity, the EW must approach a maximum, since at this point the conversion of continuum photons into line photons must saturate. This must be the case for NGC 4395, since the covering factor of the BLR gas is large. If NGC 4395 were included in the plot in Kinney et al., a turn-over at the low-luminosity end would be apparent. Evidence that the Baldwin effect flattens in slope at low luminosity has been reported previously by several researchers (e.g., Véron-Cetty, Véron, & Tarenghi 1983; Wu, Boggess, & Gull 1983; Kinney et al. 1990; Osmer, Porter, & Green 1994). The high covering factor determined for NGC 4395 lends support to the idea that the Baldwin effect is at least partially driven by a luminosity dependence of the covering factor in AGNs; the curvature seen in the Baldwin relation can then be naturally explained by the coverage hitting a maximum value at low luminosities (Wampler et al. 1984).
We can also use the model results to determine the mass of the putative central black hole. As noted in Section 2.1, the FWHM of the broad H$`\beta `$ line is $`\sim `$ 1500 km s<sup>-1</sup>. From the radial distance assumed for BROAD, 3 x 10<sup>-4</sup> pc, we compute a virial mass $`M_{bh}`$ $`\sim `$ 1.5 x 10<sup>5</sup> $`M_{\odot }`$, which is similar to the mass derived from the stellar kinematics (Filippenko & Ho 1999), although if we scale the radial distance of BROAD by the ratio of the intrinsic to observed central source luminosity, $`M_{bh}`$ would be twice as large. Interestingly, the width of the broad C IV $`\lambda `$1550 line ($`\sim `$ 5000 km s<sup>-1</sup>) implies a mass an order of magnitude larger, although the BLR of NGC 4395 is stratified and it is likely that much of the C IV emission arises closer to the nucleus than BROAD. Also, non-gravitational effects, such as a wind, could result in a radial component of the gas motion that could bias the derivation of the central mass. As we also noted in Section 2.1, the observed widths of the narrow lines are quite small, with FWHM $`\sim `$ 50 km s<sup>-1</sup> for \[O III\] $`\lambda `$5007 (FS89). From our models, the average distance of the \[O III\] emitting clouds is 0.14 pc. Based on our mass estimate, the velocity width for clouds in virial motion would be $`\sim `$ 55 km s<sup>-1</sup>, in agreement with the observations.
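The virial estimate just quoted is a one-line computation, reproduced in the sketch below; taking the H$`\beta `$ FWHM directly as the virial velocity (i.e., a geometric factor of unity) is an assumption of this illustration.

```python
G, PC, MSUN = 6.674e-8, 3.086e18, 1.989e33  # cgs

def virial_mass(v_kms, r_pc):
    """M = v^2 R / G, with v taken as the broad-line FWHM."""
    v = v_kms * 1.0e5
    return v * v * (r_pc * PC) / G / MSUN

print(f"{virial_mass(1500.0, 3e-4):.2e}")  # ~1.5e5 solar masses, as quoted
```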
The model results support our assumptions regarding the elemental abundances in the nucleus of NGC 4395. First, the heavy element abundances are clearly subsolar, as previous studies of the H II regions have also indicated (Roy et al. 1996). Although most AGN appear to possess solar or supersolar abundances (cf. Ferland et al. 1996; Oliva 1996; Netzer & Turner 1997), NGC 4395 is an extremely low-mass, low-luminosity AGN, and therefore the underabundance of heavy elements is to be expected. The underabundance of nitrogen with respect to oxygen is also typical of low metallicity dwarf galaxies (cf. van Zee et al. 1998), and is likely to be the result of the combined effects of primary and secondary origins of nitrogen.
## 6 Conclusions
We have analyzed UV, optical, and IR spectra of the active nucleus in the Sd IV galaxy NGC 4395. The permitted lines in the UV and optical were deconvolved into broad (FWHM $`>`$ 1000 km s<sup>-1</sup>) and narrow components. We have constructed photoionization models of the broad-line and narrow-line gas, and have successfully matched the observed emission-line ratios, with the very few exceptions noted in Section 4. The model results predict an NLR size ($`<`$ 5 pc in radius) that is within the observed constraints. The models predict a covering factor for the emission-line gas greater than unity, but we have shown evidence that the observed continuum in the UV and X-ray is absorbed by an intervening layer of dusty gas which has properties similar to the low-density components used in our composite NLR model. Our analysis supports previous conclusions regarding the nature of NGC 4395 (FS89; FHS93), specifically that this object harbors a dwarf Seyfert 1 nucleus.
Our models predict that the NLR in NGC 4395 is extremely compact. In particular, the region in which most of the \[O III\] $`\lambda `$5007 emission arises is $`<`$ 1/2 pc from the central source. Also, the recombination time for INNER is roughly a few months. As such, if there were changes in the continuum flux on similar timescales (a few months to several years), we might expect to see corresponding variations in the narrow emission lines. Although the optical continuum flux from NGC 4395 may vary by as much as 20% on timescales $`\sim `$ 1 day (Lira, Lawrence, & Johnson 1999), the nature of the variability over longer timescales is not well known. As such, continued monitoring of NGC 4395 to look for both long-term continuum changes and possible narrow-line variability is warranted.
Perhaps the most important conclusion is, simply, that NGC 4395 is an example of the AGN phenomenon extended to a low-luminosity extreme ($`\sim `$ 10<sup>6</sup> times fainter than typical QSOs). It is also the only known example of an active nucleus within a bulge-less, extreme late-type galaxy. It is fascinating that the fundamental physics of an AGN, specifically a region powered by a massive black hole, is at work over such a huge range in mass and luminosity. In fact, it is worth noting that an AGN this weak could probably only be detected in a low-mass galaxy, since any other nuclear activity, such as a starburst, could easily overpower it. Being able to study the low-luminosity end of the AGN menagerie also gives us insight into properties that appear to be a function of luminosity, such as the Baldwin effect. These results help amplify the importance of the covering factor for this effect, and are further evidence that large BLR covering factors may be a property of low-luminosity AGNs.
S.B.K is grateful to Jane Turner for helpful discussions concerning the ROSAT/PSPC data. J.C.S. thanks Sarah Unger and the rest of the IPAC staff for their assistance in scheduling and analyzing observations with ISO. We acknowledge support from the following NASA grants: NAG 5-4103 (S.B.K. and D.M.C.), AR-07527 (L.C.H. and A.V.F.), NAG 5-3563 (J.C.S.), and NAG 5-3556 (A.V.F.).
# Electronic Structure of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> in the Vicinity of the Superconductor-Insulator Transition
## Abstract
We report on the results of an angle-resolved photoemission (ARPES) study of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) from an optimally doped superconductor ($`x=0.15`$) to an antiferromagnetic insulator ($`x=0`$). Near the superconductor-insulator transition (SIT) at $`x\sim 0.05`$, spectral weight is transferred with hole doping between two coexisting components, suggesting a microscopic inhomogeneity of the doped-hole distribution. For the underdoped LSCO ($`x\le 0.12`$), the dispersive band crossing the Fermi level becomes invisible in the $`(0,0)`$-$`(\pi ,\pi )`$ direction, unlike Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+y</sub>. These observations may be reconciled with the evolution of holes in the insulator into fluctuating stripes in the superconductor.
The key issue in clarifying the nature of high-temperature superconductivity in the cuprates is how the electronic structure evolves from the antiferromagnetic insulator (AFI) to the superconductor (SC) with hole doping. For the hole-doped CuO<sub>2</sub> planes in the superconductors, band dispersions and Fermi surfaces have been extensively studied by angle-resolved photoemission spectroscopy (ARPES), primarily on Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+y</sub> (BSCCO) . Also for the undoped AFI, band dispersions have been observed for Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> . However, the band structures of the AFI and the SC are distinctly different, and ARPES data have been lacking around the boundary between the AFI and the SC. In order to reveal the missing link, the present ARPES study has been performed on La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO), which continuously covers the range from the SC to the AFI in a single system.
In addition, the family of LSCO systems shows a suppression of $`T_c`$ at a hole concentration $`\delta \sim 1/8`$, while the BSCCO system does not. As the origin of the anomaly at $`\delta \sim 1/8`$, the instability towards spin-charge order in a stripe form has been extensively discussed on the basis of the incommensurate peaks in inelastic neutron scattering (INS) . Comparing the ARPES spectra of LSCO and BSCCO will help us to clarify the impact of the stripe fluctuations.
In the present paper, we discuss the novel observation of two spectral components coexisting around the SIT ($`x\sim 0.05`$), the unusual disappearance of the Fermi surface near $`(\pi /2,\pi /2)`$ in the underdoped LSCO ($`x\le 0.12`$) , and their relevance to the stripe fluctuations.
The ARPES measurements were carried out at beamline 5-3 of Stanford Synchrotron Radiation Laboratory (SSRL). Incident photons had an energy of 29 eV and were linearly polarized. The total energy resolution was approximately 45 meV and the angular resolution was $`\pm 1`$ degree. Single crystals of LSCO were grown by the traveling-solvent floating-zone method and then annealed so that the oxygen content became stoichiometric. The accuracy of the hole concentration $`\delta `$ was $`\pm 0.01`$. The $`x=0`$ samples were slightly hole doped by excess oxygen, so that $`\delta \sim 0.005`$ was deduced from its Néel temperature $`T_N=220`$ K . The spectrometer was kept in an ultrahigh vacuum better than $`5\times 10^{-11}`$ Torr during the measurements. The samples were cleaved in situ by hitting a post glued on the top of the samples. The measurements were done only at low temperatures ($`T\sim 11`$ K), because the surfaces degraded rapidly at higher temperatures. Throughout the paper, the spectral intensity for different angles is normalized to the intensity of the incident light. In the analysis, the spectrum at (0,0) is assumed to represent the angle-independent background, because emission from states of $`d_{x^2-y^2}`$ symmetry is not allowed in the direction normal to the CuO<sub>2</sub> plane due to a selection rule. Indeed, Figs. 1 and 3 show that spectra in the vicinity of the Fermi level ($`E_F`$) are angle-independent when there are no dispersive features.
ARPES spectra along $`(0,0)`$-$`(\pi ,0)`$-$`(\pi ,\pi )`$ clearly show angle dependence, as shown in Fig. 1. Even though the spectral features are broad for underdoped and heavily underdoped cuprates, the dispersive component emerging around $`\vec{k}=(\pi ,0)`$ is sufficiently strong compared to the angle-independent background. Figure 2 shows the doping dependence of the ARPES spectrum at $`(\pi ,0)`$. As reported previously , a relatively sharp peak is present just below $`E_F`$ for the optimally doped sample ($`x=0.15`$). For the underdoped samples ($`x=0.12,0.10`$ and 0.07), the peak is broadened and shifted downwards. When the hole concentration is further decreased to the vicinity of the SIT ($`x=0.05`$), the peak near $`E_F`$ rapidly loses its intensity and concomitantly another broad feature appears around $`-0.55`$ eV. In the AFI phase ($`x=0`$), the peak near $`E_F`$ disappears entirely while the structure around $`-0.55`$ eV becomes predominant. For LSCO, therefore, Fig. 2 does not follow the scenario in which a single peak shifts downwards and evolves continuously from the SC into the AFI, as seen in BSCCO and Ca<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> ; rather, it indicates that spectral weight is transferred between the two features originating from the SC and the AFI.
On the other hand, ARPES spectra in the (0,0)-($`\pi `$,$`\pi `$) direction show a different doping dependence. The spectra for representative doping levels are shown in Fig. 3. For the optimally doped sample ($`x=0.15`$), one can identify a band crossing $`E_F`$ around (0.4$`\pi `$,0.4$`\pi `$), although the dispersive feature is considerably weak. For the underdoped samples ($`x=0.12,0.10`$ and 0.07), the band crossing $`E_F`$ disappears, even though the system is still superconducting. The invisible band crossing along (0,0)-($`\pi `$,$`\pi `$) is reproduced for several samples with $`0.07\le x\le 0.12`$, excluding accidentally inferior surfaces as its origin. For the insulating samples ($`x=0.03`$ and 0), a broad feature appears around $`-0.45`$ eV near ($`\pi /2`$,$`\pi /2`$), correlated with the growth of the broad structure around $`-0.55`$ eV at ($`\pi `$,0).
Overall dispersions of the spectral features have been derived from the ARPES spectra by taking second derivatives, as shown in Fig. 4. The band near $`E_F`$ for $`x=0.05`$, 0.07, 0.10 and 0.12 has a dispersion similar to that for $`x=0.15`$ around $`(\pi ,0)`$: when one goes along $`(0,0)`$-$`(\pi ,0)`$-$`(\pi ,\pi )`$, the band approaches $`E_F`$ until $`\sim `$(0.8$`\pi `$,0), stays there until ($`\pi `$,0), then further approaches $`E_F`$ and goes above $`E_F`$ through the superconducting gap around $`\sim (\pi ,\pi /4)`$. The band seen near $`E_F`$ should be responsible for the superconductivity. On the other hand, the dispersions of the broad feature seen around $`-0.5`$ eV are almost the same among $`x=0`$, 0.03 and 0.05 and similar to the band dispersion of the undoped CuO<sub>2</sub> plane in Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub> and PrBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> . Along the $`(0,0)`$-$`(\pi ,\pi )`$ cut, the broad peak moves upwards, reaches a band maximum ($`-0.45`$ eV) around $`(\pi /2,\pi /2)`$ and then disappears. The broad peak emerges in going from $`(0,0)`$ to $`(\pi ,0)`$ and then disappears between $`(0,0)`$ and $`(\pi ,\pi )`$. Therefore, the band around $`-0.5`$ eV originates from the lower Hubbard band (LHB) of the AFI.
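For readers unfamiliar with this procedure, a minimal sketch of a second-derivative analysis applied to a single energy-distribution curve (EDC) is given below; the smoothing width and the toy line shape are arbitrary illustrative choices, and real data require more careful noise handling.

```python
import numpy as np

def band_position(energy, intensity, smooth_width=5):
    """Locate a dispersive feature in one EDC as the point of most
    negative second derivative of the smoothed intensity."""
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(intensity, kernel, mode="same")
    d2 = np.gradient(np.gradient(smoothed, energy), energy)
    pad = 2 * smooth_width  # skip filter edge artifacts
    return energy[pad + np.argmin(d2[pad:-pad])]

# toy EDC: a broad peak at -0.45 eV on a flat background
e = np.linspace(-1.0, 0.1, 220)
edc = np.exp(-0.5 * ((e + 0.45) / 0.10) ** 2) + 0.2
print(round(float(band_position(e, edc)), 2))  # ~ -0.45
```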
In Fig. 5(a), the dispersive components of the ARPES spectra are compared between ($`\pi `$,0) and $`\sim `$($`\pi /2`$,$`\pi /2`$). Around the SIT ($`x\sim 0.05`$), the two spectral features coexist at ($`\pi `$,0), while only one broad peak is observed at $`\sim `$($`\pi /2`$,$`\pi /2`$). This excludes extrinsic origins for the two structures, such as a partial charge-up of the sample. Figure 5(b) demonstrates that the spectral lineshape at ($`\pi `$,0) and the relative intensity of the two structures evolve quite systematically with hole doping, indicating that surfaces of good quality were consistently obtained for all the doping levels. The spectral weight transfer is reminiscent of earlier discussions based on angle-integrated data of LSCO and Nd<sub>2-x</sub>Ce<sub>x</sub>CuO<sub>4</sub>. A possible origin for the coexistence of the two spectral features is a phase separation into hole-poor antiferromagnetic (AF) domains and hole-rich superconducting domains. The spectra around the SIT may be regarded as a superposition of the spectra of the SC and the AFI, as illustrated in Fig. 6 (b). Indeed, when holes are doped into La<sub>2</sub>CuO<sub>4+y</sub> with excess oxygens, such a phase separation occurs macroscopically, as revealed by, e.g., neutron diffraction , but a corresponding observation has never been reported for the Sr-doped LSCO system. A more likely interpretation is a microscopic inhomogeneity of the hole density, in the sense that the doped holes are segregated in the boundaries of AF domains on the scale of several atomic distances. Indeed, $`\mu ^+`$SR and <sup>139</sup>La NQR experiments have shown the presence of a local magnetic field in the so-called spin-glass phase \[Fig. 6 (a)\]. Then, since the present spectra were taken above the spin-glass transition temperature, the splitting into the two structures would be due to dynamical fluctuations of such a microscopic phase separation. Furthermore, the microscopic phase separation may explain why the chemical potential is pinned against hole doping for $`x\lesssim 0.12`$ . However, for the underdoped BSCCO, a splitting of the two components by $`\sim `$ 0.5 eV has not been reported so far, and the peak at ($`\pi `$,0) seems to shift smoothly in going from the SC to the AFI . The difference might imply that the tendency toward hole segregation is stronger in LSCO than in BSCCO.
The ARPES spectra of LSCO are compared with those of BSCCO in Figs. 5 (a) and (c). Whereas the lineshapes at ($`\pi `$,0) are similar between LSCO and BSCCO irrespective of doping levels, the spectra near ($`\pi /2`$,$`\pi /2`$) are quite different: while the peak near $`E_F`$ is sharp for both the overdoped and underdoped BSCCO, one finds no well-defined feature around $`E_F`$ for underdoped LSCO. This difference is likely to be related to the stripe fluctuations, which have a more static tendency in LSCO than in BSCCO, judging from the suppression of $`T_c`$ at $`\delta \sim 1/8`$. Also for the BSCCO system, it has been reported that the sharp peak near $`E_F`$ is suppressed near $`(\pi /2,\pi /2)`$ upon Zn doping, which is considered to pin the dynamical stripe correlations . The absence of a band crossing $`E_F`$ near $`(\pi /2,\pi /2)`$ may be reconciled with the vertically and horizontally oriented stripes in LSCO . Intuitively, while the system may be metallic along the half-filled stripes, namely, in the $`(0,0)`$-$`(\pi ,0)`$ or $`(0,0)`$-$`(0,\pi )`$ direction, the low-energy excitations should be strongly suppressed in directions crossing all the stripes, such as the $`(0,0)`$-$`(\pi ,\pi )`$ direction. This conjecture was supported by a recent numerical study of small clusters with charge stripes , and is consistent with the suppression of the Hall coefficient in the stripe-ordered state of La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x<1/8`$ .
The doping dependence of the spectral intensity near $`E_F`$ ($`E>-0.2`$ eV) is shown in Figs. 5 (d) and (e) for two normalization methods. Note that the essential features are independent of normalization. Our picture of the evolution of the spectral weight is schematically drawn in Fig. 6. In the ARPES spectra, the intensity near $`E_F`$ appears at ($`\pi `$,0) with hole doping for $`x\gtrsim 0.05`$, where the incommensurability of the spin fluctuations also arises according to the INS study . On the other hand, the intensity near ($`\pi /2`$,$`\pi /2`$) remains suppressed with hole doping throughout the underdoped region ($`0.05\le x\le 0.12`$). Hence, one may think that the segregated holes for $`x\sim 0.05`$ already start to be arranged vertically and horizontally. Therefore we propose that the hole-rich boundaries of the AF domains around the SIT continuously evolve into the stripe correlations in the underdoped SC. In going from $`x=0.12`$ to 0.15, the Fermi-surface crossing appears in the (0,0)-($`\pi `$,$`\pi `$) direction, probably corresponding to the phenomenon that the incommensurability in INS saturates for $`x\gtrsim 0.15`$ . This may be understood as the doped holes in excess of $`x=1/8`$ overflowing the saturated stripes, so that the two-dimensional electronic structure recovers.
In conclusion, we have shown that the SC and AFI characters coexist in the ARPES spectra in the vicinity of the SIT for LSCO. The band crossing $`E_F`$ disappears near ($`\pi /2`$,$`\pi /2`$) for underdoped LSCO, associated with the formation of the dynamical stripes. The present observations provide a new perspective on how the holes doped into the AFI evolve into the fluctuating stripes in the underdoped SC. The mechanism by which the SC-to-AFI transition occurs is a subject of strong theoretical interest and should be addressed by further studies.
We would like to thank T. Tohyama and S. Maekawa for enlightening discussions. This work was supported by the New Energy and Industrial Technology Development Organization (NEDO), Special Coordination Fund for Promoting Science and Technology from the Science and Technology Agency of Japan, the U. S. DOE, Office of Basic Energy Science and Division of Material Science. Stanford Synchrotron Radiation Laboratory is operated by the U. S. DOE, Office of Basic Energy Sciences, Division of Chemical Sciences.
# Absence of simulation evidence for critical depletion in slit-pores
## I Introduction
In recent papers, one of us has reported grand canonical Monte Carlo simulation studies of a Lennard-Jones fluid confined between two structureless attractive walls arranged in a slit-pore geometry. The behaviour of the number density profile across the pore, $`\rho (z)`$, was studied for various values of the thermodynamic parameters, namely the chemical potential, $`\mu `$, and temperature, $`T`$. At certain values of $`\mu `$ and $`T`$ (apparently close to those of the bulk liquid-vapour critical point) it was observed that the average local density in the middle of the pore fell markedly below the value obtained in a fully periodic simulation performed at the same $`\mu `$ and $`T`$. These findings were used to argue in favour of the existence of a generic “critical depletion” phenomenon, namely the proposed tendency of a critical fluid to be expelled by a confining medium, even when the confining walls strongly attract the fluid particles . Such a scenario is supported by experimental findings for SF<sub>6</sub> adsorbed in mesoporous materials , for which a dramatic reduction in adsorption was observed as the bulk critical temperature was approached from above along the critical isochore.
In this note we point out that the apparent critical depletion reported in references is actually a simulation artifact arising from systematic errors associated with the corrections applied to the configurational energy to compensate for the truncation of the interparticle potential. Using new simulations, we show that if one chooses a sufficiently large truncation distance or alternatively avoids the use of truncation corrections altogether, then the depletion effect disappears.
## II Simulation details and results
The simulation arrangement and procedure employed in this work are the same as those described in refs. , and accordingly we merely summarise the principal features.
Grand canonical Monte Carlo simulations were performed for a Lennard-Jones fluid, having an interparticle potential of the form:
$$U_{LJ}(r)=4ϵ\left[\left(\frac{\sigma }{r}\right)^{12}-\left(\frac{\sigma }{r}\right)^6\right].$$
(1)
where $`ϵ`$ and $`\sigma `$ are respectively the Lennard-Jones well depth and scale parameters. Two distinct geometries were studied:
1. A fully periodic system.
2. A slit-pore geometry, in which the fluid is confined between two parallel structureless walls, having periodic boundary conditions in the directions parallel to the walls.
In the latter case, the walls were taken to exert a potential on the fluid particles of the form:
$$U_{FW}=4ϵf\left[\frac{2}{5}\left(\frac{\sigma }{z}\right)^{10}-\left(\frac{\sigma }{z}\right)^4\right],$$
(2)
where $`f`$ is a parameter that tunes the strength of the wall-fluid interactions relative to those of the fluid interparticle interactions.
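For concreteness, a minimal Python transcription of these two potentials (with the minus signs written out; `eps`, `sig` and `f` stand for $`ϵ`$, $`\sigma `$ and $`f`$, and the reduced-unit defaults are ours) might read:

```python
def u_lj(r, eps=1.0, sig=1.0):
    """Lennard-Jones pair potential of eq. (1), in reduced units."""
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def u_fw(z, f, eps=1.0, sig=1.0):
    """Structureless 10-4 wall potential of eq. (2); z is the distance
    from the wall and f tunes the wall-fluid attraction."""
    s = sig / z
    return 4.0 * eps * f * (0.4 * s**10 - s**4)
```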
As in the previous studies of this system , the reduced temperature was set to the value $`T=1.36`$, believed to be close to the bulk critical temperature. The chemical potential $`\mu `$ of the periodic system was then tuned until the equilibrium density reached the value $`\rho =0.365`$, believed to be close to the bulk critical density. The resulting value of $`\mu `$ was then fed into a simulation of the slit-pore system at the same temperature and with the choice $`f=0.9836`$. In both the periodic and slit-pore arrangements, the Lennard-Jones interparticle potential was truncated at some radius and a compensating correction applied to the configurational energy. For the periodic system, this correction was calculated in the standard fashion by assuming a spherical cutoff surface of radius $`r_c`$ centred on each particle, combined with a uniform density approximation (UDA) for $`r>r_c`$. This yields for the energy correction per particle:
$$u_{pbc}=\frac{1}{2}4\pi \rho \int _{r_c}^{\infty }dr\,r^2U_{LJ}(r)=\frac{8}{3}\pi \rho ϵ\sigma ^3\left[\frac{1}{3}\left(\frac{\sigma }{r_c}\right)^9-\left(\frac{\sigma }{r_c}\right)^3\right],$$
(3)
where $`\rho =N/V`$ is the average number density of the system.
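A quick numerical check of eq. (3), comparing the closed form against direct quadrature of the tail integral, could look like the following sketch (scipy is assumed to be available; the values of $`\rho `$ and $`r_c`$ are those quoted in the text):

```python
import numpy as np
from scipy.integrate import quad

def u_lj(r, eps=1.0, sig=1.0):
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def u_tail_pbc(rho, rc, eps=1.0, sig=1.0):
    """Uniform-density tail correction per particle, eq. (3)."""
    x3 = (sig / rc) ** 3
    return (8.0 / 3.0) * np.pi * rho * eps * sig**3 * (x3**3 / 3.0 - x3)

# Cross-check against the defining integral (1/2)*4*pi*rho * int r^2 U_LJ(r) dr:
rho, rc = 0.365, 3.5
integral, _ = quad(lambda r: r**2 * u_lj(r), rc, np.inf)
print(u_tail_pbc(rho, rc), 2.0 * np.pi * rho * integral)   # the two numbers agree
```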
In the case of the slit-pore system, the truncation correction used was that given by Schoen et al. , which assumes a cylindrical cutoff surface (on whose principal axis each particle lies) extending across the whole width $`D`$ of the pore, i.e. such that the cylinder ends coincide with the pore walls. This yields
$$u_{pore}=-\frac{\pi ϵ\sigma ^6}{s_c^3}\mathrm{tan}^{-1}\left(\frac{D}{s_c}\right)V\rho ^2,$$
(4)
where $`s_c`$ is the radius of the cutoff cylinder.
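In absolute terms the $`s_c`$ dependence of eq. (4) is weak, which one can see by tabulating it. A sketch (the values of $`D`$ and $`V`$ below are illustrative only, not those of the actual simulations, and we assume the same overall sign convention as in eq. (3)):

```python
import numpy as np

def u_tail_pore(rho, s_c, D, V, eps=1.0, sig=1.0):
    """Cylindrical-cutoff tail correction of eq. (4) (total, not per particle)."""
    return -(np.pi * eps * sig**6 / s_c**3) * np.arctan(D / s_c) * V * rho**2

rho, D, V = 0.365, 10.0, 1000.0        # illustrative reduced-unit values
for s_c in (3.5, 5.0, 7.0):
    print(s_c, u_tail_pore(rho, s_c, D, V))
```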
We have studied the effect of the cylindrical cutoff radius $`s_c`$ on the density profile $`\rho (z)`$ of the slit-pore system. For each choice of $`s_c`$, the $`\mu `$ value employed in the simulation was that yielding an average density $`\rho =0.365`$ in a periodic system of the same dimensions and with $`r_c=s_c`$. For small cylindrical cutoffs ($`s_c=3.5\sigma `$), figure 1 shows that the local density in the pore middle is depleted with respect to the density of the periodic system (dashed line) at the same $`T,\mu `$. This is the result reported in refs. . However, new results for a larger choice of the cylindrical cutoff (also included in figure 1) show that this depletion reduces as $`s_c`$ is increased, and in fact vanishes for $`s_c\gtrsim 5.0\sigma `$. This dependence of the depletion on the choice of $`s_c`$ was missed in the previous studies .
We have also performed simulations in which we dispense with the use of cutoff corrections altogether and simply simulate a system of particles interacting via a truncated Lennard-Jones potential. The results (figure 2), exhibit no sign of a density depletion in the pore middle with respect to the periodic system.
## III Discussion and conclusions
The dependence of the density depletion on the choice of the cylindrical cutoff $`s_c`$ (figure 1) points to a breakdown of the uniform density approximation invoked in the derivation of the truncation correction for the internal energy, eq. 4. This approximation assumes that the number density outwith the cutoff surface is uniform, having the average system density $`\rho =N/V`$. However, figures 1 and 2 show that for a slit system, $`\rho (z)`$ exhibits considerable structure across the pore, especially close to the walls. Accordingly, one must expect some measure of systematic error to be associated with eq. 4. Tests show that for the choice of cutoff $`s_c=3.5\sigma `$ employed in references , this error is very small, so that in most circumstances eq. 4 represents a good approximation.
It seems, however, that in the critical region, even a very small error in the truncation correction can lead to large effects on the local pore number density. The reason for this is the large near-critical compressibility, reflected in the fact that near $`T_c`$, isotherms of $`\mu (\rho )`$ become very flat (see e.g. figure $`3`$ of ). Since the error in the truncation correction acts rather like a shift in the bulk (chemical potential) field with respect to the periodic system, large alterations to the local pore density may result. This is in accord with the observation that the depletion is large close to the critical point, but diminishes as one moves to higher temperatures along the critical isochore.
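The mechanism can be made explicit at the level of the grand canonical acceptance rule. The following minimal sketch of an insertion move (reduced units, $`k_B=1`$; the `state` container and its methods are hypothetical placeholders, not the code used in refs. ) shows that a systematic offset in the energy correction is indistinguishable from a shift of $`\mu `$:

```python
import math
import random

def try_insertion(state, mu, T, V):
    """One grand canonical insertion attempt (Metropolis form).

    delta_u_insert() must return the total energy change of the trial
    insertion *including* the tail correction.  A systematic error
    `delta` in that correction turns exp(beta*(mu - dU_true)) into
    exp(beta*((mu - delta) - dU_true)): an effective chemical-potential
    shift, which near T_c translates into a large density change.
    """
    beta = 1.0 / T
    n = state.n_particles()
    r_new = state.random_point()              # uniform trial position
    dU = state.delta_u_insert(r_new)          # pair energy + correction change
    acc = (V / (n + 1)) * math.exp(beta * (mu - dU))
    if random.random() < min(1.0, acc):
        state.insert(r_new)
```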
To avoid similar problems arising in future studies of the effects of confinement on near critical fluids, it would seem wise to adopt one of the following strategies:
1. Employ a very large value for the truncation range and test for any systematic dependence of results on its value. This is clearly a very computationally intensive solution.
2. Employ a truncated potential and dispense with corrections altogether. Such a system is clearly well defined but causes complications if one wishes to model real substances.
3. Employ a cut and shifted potential which tends smoothly to zero at the cutoff.
In summary, we have demonstrated that the apparent critical depletion reported in was actually an artifact arising from systematic errors in the energy correction for the tail truncation in the slit-pore geometry. Although numerically small, these errors can (in the critical region) strongly influence the fluid local number density of the confined system compared to a periodic system at the same temperature and chemical potential. Thus our findings underline the care that must be taken when implementing any sort of truncation corrections for near critical fluid models.
Further simulation studies of the near-critical properties of a confined fluid are in progress and a detailed account of these and their implications for theory and experiment on critical depletion will be presented in a later paper.
###### Acknowledgements.
NBW is grateful to R. Evans and A. Maciolek for stimulating his interest in critical depletion, and for most useful correspondence. He also thanks the Royal Society (grant number 19076), the Royal Society of Edinburgh and the EPSRC (grant no. GR/L91412) for financial support. MS is grateful to G.H. Findenegg for discussions and acknowledges financial support from the Sonderforschungsbereich 448 “Mesoskopisch strukturierte Verbundsysteme”.
# EGRET Observations of the Diffuse Gamma-Ray Emission in Orion: Analysis Through Cycle 6
## 1 Introduction
This paper presents an analysis of the most extensive data set to date of the Orion region obtained by the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma Ray Observatory (CGRO). Study of the diffuse high energy ($`E>30`$ MeV) gamma-ray emission from nearby, massive interstellar clouds permits testing the mechanisms of gamma-ray production and measuring the local cosmic-ray (CR) density as well as properties of the interstellar medium (ISM). The goals of this work are to determine the high-energy CR density in Orion, the molecular mass calibrating factor $`X\equiv N(\mathrm{H}_2)/W_{\mathrm{CO}}`$, and to identify any point sources or resolved variations in CR density or $`X`$ within the Orion AB-Mon R2 complex of interstellar clouds. The Orion region was previously studied by Digel, Hunter, & Mukherjee (1995; hereafter DHM) using EGRET data obtained through 1993. Since the time of the earlier work by DHM, the overall exposure of EGRET toward the Orion region has increased by more than a factor of two, and has become much more uniform. As EGRET is now nearing the end of the life of its spark chamber gas, the currently-available observations represent essentially all the exposure that EGRET will obtain toward Orion.
Most of the motivations for studying the diffuse gamma-ray emission in Orion remain the same. The interstellar clouds in Orion, comprising the Orion A and B clouds and the more distant Mon R2 cloud, are the nearest giant molecular clouds ($`\sim 500`$ pc), with a mass of $`4\times 10^5`$ $`M_{\odot }`$ (Maddalena et al. 1986). The clouds are well-resolved by EGRET, and are far from the plane in the outer Galaxy, so their gamma-ray emission can be studied essentially in isolation from the general diffuse emission of the Milky Way.
Since the time of the previous work, more conservative cuts on zenith angle to reject earth albedo gamma rays were adopted for the standard EGRET data products. This decreases the number of photons, and the exposure, somewhat for each viewing period. We investigate here whether this change alters the findings of DHM.
To verify the production mechanisms of gamma rays in interstellar clouds, and to determine the CR densities for individual molecular clouds, independent measurements of the interstellar matter distribution are required. The atomic hydrogen phase of the ISM is observable via the characteristic 21 cm line radiation. However, molecular hydrogen $`\mathrm{H}_2`$ generally cannot be directly observed at interstellar conditions. The standard tracer of the large-scale distribution of $`\mathrm{H}_2`$ is the $`J=1\rightarrow 0`$ line of $`\mathrm{CO}`$ at 115 GHz. $`\mathrm{CO}`$ is the second most abundant interstellar molecule after $`\mathrm{H}_2`$, tends to form under the same conditions, and is excited to the $`J=1`$ rotational state by collisions with $`\mathrm{H}_2`$. The relation between $`N(\mathrm{H}_2)`$ and $`W_{\mathrm{CO}}`$, the integrated intensity of the CO line, is empirically known to be approximately a proportionality; the proportionality constant $`N(\mathrm{H}_2)/W_{\mathrm{CO}}`$ is denoted $`X`$. All determinations of $`X`$ require an indirect tracer of $`N(\mathrm{H}_2)`$. Helium and heavier elements are assumed to be distributed like the hydrogen, as is commonly done in studies of diffuse gamma-ray emission; the emissivities referred to in other sections of this paper are therefore the effective rates per hydrogen atom. Owing to the penetration of clouds by high-energy CRs, and the transparency of the ISM to gamma rays, gamma-ray intensity can be used as such a tracer. Here we use the gamma-ray emission from Orion to calibrate the $`X`$-ratio and thereby infer the CR densities in the Orion neighbourhood.
The recent reports of an extended region of carbon and oxygen nuclear line emission in Orion from the COMPTEL instrument on CGRO (Bloemen et al. 1997) are another motivation for an updated study of the gamma-ray observations by EGRET. To explain the observed flux of the <sup>12</sup>C and <sup>16</sup>O lines at 4.44 MeV and 6.13 MeV, respectively, a large enhancement of low-energy ($`<100`$ MeV/Nucleon) CRs is needed. Note, however, that a recent re-evaluation of the COMPTEL background has shown that the Orion result was largely spurious (Bloemen et al. 1999). The CR enhancement factor depends on the amount of interstellar gas and hence on the $`X`$-ratio determined from EGRET analysis.
## 2 Data
### 2.1 H I and CO
We use the same 21 cm H I and 2.6 mm CO maps as DHM. Briefly, the H I surveys of Weaver & Williams (1973) and Heiles & Habing (1974) were combined and column densities $`N(\mathrm{HI})`$ were derived on the assumption of a uniform spin temperature of 125 K. The CO surveys of Maddalena et al. (1986) and Huang et al. (1985), as combined in Dame et al. (1987), were used to derive the map of integrated intensity in the CO line, $`W_{\mathrm{CO}}`$. The region of interest for the present study is $`l=195^{\circ }`$ to $`220^{\circ }`$ and $`b=-25^{\circ }`$ to $`-10^{\circ }`$, although a $`15^{\circ }`$ wide border surrounding this area was also included in the CO and H I datasets to permit convolution with the broad PSFs (point spread functions) of EGRET in the central region. In directions where no CO data are available, principally $`b<-25^{\circ }`$, we assume $`W_{\mathrm{CO}}=0`$. For both H I and CO emissions, only one spectral line is evident along lines of sight in the region of interest; although the 21 cm line emission in particular has broad tails in velocity, all of the interstellar gas along the line of sight is assumed to be associated with Orion (i.e., have the same density of CRs) in the analysis below. The $`N(\mathrm{HI})`$ and $`W_{\mathrm{CO}}`$ maps were produced on the same grid used for binning the gamma-ray photons.
### 2.2 Gamma-Ray
We combine the data (photon counts and exposure maps) from all EGRET viewing periods with exposure within the region of interest ($`l=195^{\circ }`$ to $`220^{\circ }`$, $`b=-25^{\circ }`$ to $`-10^{\circ }`$; Table 1) and a $`15^{\circ }`$ border around this region. Only the area within $`30^{\circ }`$ of the pointing direction for any given viewing period was included in the combined datasets. The sensitivity of EGRET for inclination angles greater than $`30^{\circ }`$ is relatively poor, so few photons and little exposure are lost. The advantage of making this truncation is that the relatively broad PSF far off axis (Thompson et al. 1993) need not be considered in the analysis; for each energy range only a single effective PSF is needed for the entire dataset. The gamma-ray data are binned on a $`30^{\prime }`$ grid in Galactic coordinates for this analysis. Details of the instrument design are discussed in Hughes et al. (1980) and Kanbach et al. (1988), and the preflight and the postflight calibrations are described by Thompson et al. (1993) and Esposito et al. (1999).
In the analysis described below, the EGRET data are analyzed for six broad energy ranges spanning 30-10,000 MeV, and two integral ranges (energy $`E>100`$ MeV and $`E>300`$ MeV). For each range, the corresponding exposure maps were derived on the assumption of an $`E^{-2.1}`$ input spectrum. The intensity maps (photon counts divided by exposure) for viewing periods with large overlaps and good exposure in the region of interest were intercompared to check their relative intensity calibrations. The seven viewing periods (1.0, 2.1, 337.0, 413.0, 419.5, 420.0, and 616.1) were compared on just four broad energy ranges (30-100, 100-300, 300-1000, and 1000-10,000 MeV) to improve the statistics of the comparisons. The relative calibrations were in good agreement except for viewing periods 2.1 and 616.1, which were found to be significantly brighter and fainter than the average, respectively. The correction factors were largest for VP 616.1, ranging up to 1.9 in the 30-100 MeV range. For this late VP, the performance of the spark chamber had degraded significantly; EGRET was operated in the narrow field of view mode during this VP, and so the overall effect on the composite data set is small.
Table 1 lists the numbers of photons and mean exposure (scaled as described above) for the representative energy ranges $`E>100`$ MeV and $`E>300`$ MeV. The correction factors described above have been incorporated into Table 1. The viewing periods listed in the table are grouped by observation date to show the two subsets that were considered below to check consistency with the analysis of DHM and to search for flaring point-source emission that might have been more significant in the viewing periods obtained since that work. The total number of photons for $`E>100`$ MeV in the region of interest is 10,257, compared to 5266 photons in DHM. (With the more restrictive cuts used here for Earth albedo gamma rays, the previous total for the same viewing periods is 4879.) The overall mean exposure has increased from $`5.9\times 10^8`$ cm<sup>2</sup> s (before the more restrictive zenith angle cuts were adopted) to $`13.5\times 10^8`$ cm<sup>2</sup> s.
## 3 Analysis
We use the same model as DHM for the emission in Orion, one that has been applied in several studies of diffuse gamma rays dating from the work of Lebrun et al. (1982). Under the assumption that high-energy CR electrons and protons uniformly penetrate the atomic and molecular gas in Orion, with the same densities in both phases, the distribution of photon counts may be modeled as a linear combination of the $`N(\mathrm{HI})`$ and $`W_{\mathrm{CO}}`$ maps. In principle, allowance must also be made for inverse-Compton emission and gamma-ray production on ionized gas. If the CR density were uniform, the distribution of gamma-ray photons observed for some energy range could be written as
$$\mathrm{\Theta }(l,b)=AN(\mathrm{H}\mathrm{I})_c+BW(\mathrm{CO})_c+CN(\mathrm{H}\mathrm{II})_c+IC_c+\mathrm{\Sigma }(D_i\delta _{l_ib_i})+Fϵ_c.$$
(1)
Here we have taken the finite spatial resolution of EGRET into account by convolving the predicted distribution of gamma-ray photons with the effective PSF of EGRET for the corresponding energy range. The subscript $`c`$ indicates multiplication by the exposure map and convolution with the effective PSF, as explained in DHM. $`ϵ_c`$ is the exposure map itself convolved with the effective PSF for that energy range. In Eqn. (1), the coefficient $`A`$ is the emissivity (photons s<sup>-1</sup> sr<sup>-1</sup>) per hydrogen atom, $`B=2AX`$, where $`X=N(\mathrm{H}_2)/W_{\mathrm{CO}}`$, $`C`$ is the emissivity of the ionized hydrogen, $`D_i`$ are the numbers of photons from each point source for that energy range, and $`F`$ is the isotropic diffuse emission. For the Orion region, the IC emission and contributions from CR interactions with ionized hydrogen are expected to contribute at only the several percent level, and largely in a smooth way that can be subsumed with the isotropic emission (DHM). Under these assumptions and approximations Eqn. (1) may be re-written as
$$\mathrm{\Theta }(l,b)=AN(\mathrm{H}\mathrm{I})_c+BW(\mathrm{CO})_c+Cϵ_c+\mathrm{\Sigma }(D_i\delta _{l_ib_i})$$
(2)
We use the maximum likelihood method (Mattox et al. 1996) to fit the model (Eqn. 2) to the binned photon data for the various energy ranges and combinations of viewing periods considered. The likelihood value, $`L,`$ for a given set of parameter values is the product of the probabilities that the measured photon counts are consistent with the predicted counts in each pixel on the assumption of Poisson statistics. The probability of one model with likelihood $`L_1`$ better representing the data than another model with likelihood $`L_2`$ is determined from twice the difference of the logarithms of the likelihoods, $`2(\mathrm{ln}L_2-\mathrm{ln}L_1).`$ This difference, referred to as the test statistic $`TS`$, is distributed like $`\chi ^2`$ in the null hypothesis, with the number of degrees of freedom being the difference between the number of free parameters in the two models. To fit the model to the observations, the positions and fluxes of the point sources and the values of the other coefficients in Eqn. (2) are adjusted to maximize the likelihood. The strength of the dependence of the likelihood function on the various parameters of the model permits their uncertainties or significances to be estimated quantitatively. Values for the different coefficients in Eqn. (1) can be determined separately as long as their corresponding maps are distinguishable, i.e., are linearly independent. Figure 1 shows that the maps of $`N(\mathrm{HI})_c`$, $`W(\mathrm{CO})_c`$, and $`ϵ_c`$ for the Orion region are distinguishable, for the representative energy range $`E>100`$ MeV. The EGRET exposure to the region modeled is not uniform, having a gradient with longitude, so the sensitivity decreases at higher longitudes and lower latitudes. The exposure variations are accounted for in the maximum likelihood analysis, in the sense that regions with greater exposure effectively have greater weight.
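Schematically, the fitting procedure reduces to maximizing a pixel-wise Poisson log-likelihood over the coefficients of Eqn. (2). A minimal sketch (NumPy arrays for the convolved model maps; the function and variable names here are ours, not those of the EGRET analysis software):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, counts, nhi_c, wco_c, eps_c):
    """-ln L for the point-source-free linear model of Eqn. (2)."""
    A, B, C = params
    model = A * nhi_c + B * wco_c + C * eps_c      # predicted counts per pixel
    model = np.clip(model, 1e-12, None)            # guard against log(0)
    return np.sum(model - counts * np.log(model))  # Poisson -ln L, constants dropped

def fit_diffuse(counts, nhi_c, wco_c, eps_c, x0):
    res = minimize(neg_log_like, x0, args=(counts, nhi_c, wco_c, eps_c),
                   method="Nelder-Mead")
    return res.x, res.fun

# The test statistic between two nested fits is then
# TS = 2*(lnL_1 - lnL_0) = 2*(nll_0 - nll_1).
```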
We first used the maximum likelihood method to systematically search for gamma-ray point sources in the region of interest. The point source search entails determining the maximum likelihood values of the parameters in Eqn. (2) for an assumed point source at each position in the $`30^{}`$ grid, generating a map of $`TS`$. Owing to the strong energy dependence of the effective PSF of EGRET, in application this test is most sensitive if $`TS`$ maps for subranges of energy are generated separately, then added together (Mattox et al. 1996). Mattox et al. (1996) show that for the case of six combined maps, the values of $`TS`$ for the point source search are distributed as $`\chi ^2`$ with 8 degrees of freedom in the null hypothesis.
## 4 Results
A maximum likelihood search for point sources was made for the two groups of viewing periods identified in Table 1, as well as for the sum of all viewing periods. No significant source detections, greater than 4-$`\sigma `$ statistical significance, were found in any of the groups of viewing periods or for any of the energy ranges analyzed. Figure 2 shows the sum of the $`TS`$ maps for the 30-100, 100-150, 150-300, 300-500, 500-1000, and 1000-10,000 MeV ranges for the combined set of all viewing periods. The peak value of 31.2, near ($`195.0^{\circ }`$, $`-19.5^{\circ }`$), corresponds to a significance of $`3.8\sigma `$.
The extended feature associated with this peak (Fig. 2) largely originates in the 30-100 MeV TS map. In this low energy range, the effective PSF of EGRET is quite broad and the feature could represent the presence of a soft source or sources just outside the region of interest. In fact, the Third EGRET catalog (3EG) (Hartman et al. 1999) includes two point sources just below the lower longitude limit: 3EG J0459+0544 ($`193.99^{\circ }`$, $`-21.66^{\circ }`$) and 3EG J0530+1323 ($`191.50^{\circ }`$, $`-11.09^{\circ }`$).
The extended excess near ($`215^{\circ }`$,$`-19^{\circ }`$) in Figure 2 has a peak TS value of 24.0, corresponding to a statistical significance of 3.0 $`\sigma `$. However, the overall significance of the extended excess is less; the integrated residual intensity for $`E>100`$ MeV (Fig. 3$`c`$) corresponds to approximately only 25 photons in a region where 280 are expected. The 3EG catalog (Hartman et al. 1999) contains no sources consistent with this extended region, and a search of NED reveals no likely counterparts at other wavelengths. We note that the position of this excess is consistent with two sources in the Second EGRET catalog (Thompson et al. 1995), GRO J0545-1156 and GRO J0552-1028. However, both of these sources were below threshold in the 3EG catalog analysis.
In their earlier analysis of the same region that we model here, DHM reported the detection of three marginal sources at a statistical significance of $`3\sigma `$. Two of these sources were probably the same as the unidentified sources GRO 0605-09 ($`l=216.6^{\circ }`$, $`b=-14.4^{\circ }`$) and GRO 0546-02 ($`l=207.6^{\circ }`$, $`b=-15.6^{\circ }`$) listed in the First EGRET catalog (Fichtel et al. 1994). The third source at $`l=203.0^{\circ }`$, $`b=-17.5^{\circ }`$ had not been detected by EGRET previously. None of the above three sources were detected in the current analysis of the complete data set.
Since we are primarily interested in the diffuse gamma-ray emission here, we investigated how the maximum likelihood values of the parameters $`A,B,`$ and $`C`$ that describe the diffuse emission (Eqn. 2) depended on the number and positions of point sources in the model. The cases investigated included the following: no point sources, a hypothetical source at ($`215^{\circ }`$, $`-19^{\circ }`$), and the two 3EG sources mentioned above. For all of these cases, the maximum likelihood values of the diffuse parameters were the same within 1-$`\sigma `$, except in the 30-100 MeV range, where the inclusion of the two 3EG sources improved the fit and changed the best-fitting $`B`$ and $`C`$ terms substantially. Although these sources were not detected with strong significance, which is not unexpected as they are outside the region we model, we included them in the model that we adopted for all of the analysis described below.
Figure 3 shows the EGRET observations, the maximum likelihood best-fitting model, and the residual map for the $`E>100`$ MeV energy range using all of the EGRET data for Orion. The gamma-ray intensities were obtained by dividing the photon map used to fit the parameters by the exposure map. The good agreement between the observed intensities and the model for $`E>100`$ MeV, calculated using the parameters for the combined groups in Table 2, is demonstrated in the residual map shown in Figure 3$`c`$. The figure shows that the model intensity map clearly has no large-scale deviation from the observed intensity. Owing to the inclusion of the two low-longitude sources, the residual intensities near longitude 195 are small. The remaining extended residual was discussed above.
Figure 3 also shows that there is no significant deviation from the fit in the region of Mon R2. This indicates that although Mon R2 is 300 pc more distant than Orion (Maddalena et al. 1986), its properties can be considered to be the same as those of the clouds in Orion. Further, we note that the residual intensity shown in Figure 3$`c`$ is not correlated with $`W_{\mathrm{CO}}`$, $`N(\mathrm{H}\mathrm{I})`$, or the total column density of interstellar gas. The absence of the correlation with $`W_{\mathrm{CO}}`$ is consistent with the $`X`$-ratio being independent of $`W_{\mathrm{CO}}`$ and of the position in Orion. There is no statistically significant variation of the $`X`$-ratio and emissivity between Orion A, Orion B, and Mon R2 molecular clouds. The lack of correlation between the residual intensity and the interstellar gas is consistent with the assumption that the atomic and molecular gas is uniformly permeated by CRs, and that the CR density is uniform. The model (Eqn. 2) therefore provides an adequate description of the high-energy gamma-ray emission from the Orion region.
The parameter values for the model fits to the combined EGRET data and their uncertainties are listed in Table 2. No significant differences are seen from the results of DHM. The uncertainties in the model parameter values have decreased as expected owing to the greatly improved exposure. Interpretations of the values of the parameters are discussed in the following paragraphs.
Figure 4$`a`$ shows the differential $`\gamma `$-ray emissivity derived from the coefficient $`A`$ of the model fit to each of the six energy ranges in Table 2. As the figure illustrates, the general energy dependence of the emissivity is well described by the electron-Bremsstrahlung $`(eB)`$ (Koch & Motz 1959; Fichtel et al. 1991) and nucleon-nucleon $`(nn)`$ (Stecker 1989) production functions parameterized by Bertsch et al. (1993) for the solar vicinity. The deviation from the Bertsch et al. production function in the 1000-10,000 MeV range has been seen in other studies of Galactic diffuse emission with EGRET data (e.g., Hunter et al. 1994; DHM; Digel et al. 1996; Hunter et al. 1997). This deviation is not seen in studies of the isotropic emission at high latitudes (e.g., Sreekumar et al. 1998) and therefore is unlikely to represent a high-energy calibration error. The most plausible interpretation, that the calculation of gamma-ray production from $`nn`$ interactions somewhat underestimates the yield (Hunter et al. 1997), does not affect the results presented here.
The integral gamma-ray emissivity in Orion is found to be $`(1.65\pm 0.11)\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup> for $`E>100`$ MeV, confirming the value obtained in the earlier DHM analysis. It compares well with the values obtained for the Galactic plane in the solar vicinity in large-scale studies of diffuse emission (e.g., Lebrun & Paul 1985; Strong et al. 1988; Strong & Mattox 1996), which range from (1.54–1.8)$`\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup>. However, studies of individual clouds within $`\sim 500`$ pc with EGRET data yield a wider range of integral emissivities: $`(2.4\pm 0.2)\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup> in Ophiuchus (Hunter et al. 1994), $`(2.01\pm 0.15)\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup> in the local clouds toward Monoceros (Digel et al. 1999), and $`(1.84\pm 0.10)\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup> in the local clouds toward Cepheus (Digel et al. 1996). The range of emissivities, which we note decrease with increasing Galactocentric distance of the cloud, suggests a fairly steep gradient of CR density at the solar circle that is smoothed in the large-scale studies, which typically have resolutions of 2 kpc or more.
The $`X`$-ratios in Table 2 are derived from the values of $`A`$ and $`B`$ for each energy range and are shown in Figure 4$`b`$. As expected for an intrinsic property of the molecular clouds, the value of $`X`$ does not vary significantly with energy, except possibly for a decrease in the 1000–10,000 MeV range. The reason for the marginally-significant decrease at the highest energies is not clear, as the highest-energy CRs should not be excluded from the dense, molecular parts of the clouds.
The value of $`X`$ derived for the $`E>100`$ MeV range, $`(1.35\pm 0.15)\times 10^{20}`$ cm<sup>-2</sup> \[K km s<sup>-1</sup>\]<sup>-1</sup>, is adopted here as the best overall value, in terms of the numbers of photons and the resolution of the gamma-ray observations, from our analysis. The non-uniformity of the exposure across the field (Fig. 1$`c`$) means that this should be considered an exposure-weighted average, or more properly an exposure-and-total column density weighted average. As mentioned in §3, the likelihood function is most sensitive to the model in regions with the most photons, where the exposure and gas column density are greatest. The exposure difference between the Orion A and B clouds is only about 20% (Fig. 1$`c`$), however, and the residual map in Figure 3$`c`$ indicates that the same $`X`$-ratio applies to both clouds within the resolution and statistics of the data. The value of $`X`$ reported in DHM, $`(1.06\pm 0.14)\times 10^{20}`$ cm<sup>-2</sup> \[K km s<sup>-1</sup>\]<sup>-1</sup>, is marginally less than the value found here. Owing to the greatly-improved uniformity of exposures in the dataset analyzed here, we consider the new finding to be the more reliable.
The emissivities and $`X`$-ratios we find for the Orion region are compared in Table 3 with results from earlier studies. The studies of Bloemen et al. (1984) and Houston & Wolfendale (1985) were based on COS-B data, and the findings have been scaled here to the updated CO radiation temperature scale of Bronfman et al. (1988). DHM found a value of $`X`$ much lower than that reported by Bloemen et al. (1984), and the lower value is confirmed here. The instrumental background of COS-B was significant, and had structure on the same angular scale as the molecular clouds in Orion. The final background corrections were not available at the time of the analysis by Bloemen et al. (1984), and in any case small errors in the corrections for the different COS-B viewing periods would have had a large effect on the value of $`X`$ derived. The integral gamma-ray emissivities in Table 3 are approximately consistent across the various studies.
The differential spectrum of the isotropic intensity inferred from the coefficients $`C`$ in Table 2 is shown in Figure 5. The integrated intensity for $`E>100`$ MeV is $`(1.46\pm 0.23)\times 10^{-5}`$ cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. The overall average spectrum of the isotropic emission found by Sreekumar et al. (1998), also shown in Figure 5, has an integral intensity of $`(1.45\pm 0.05)\times 10^{-5}`$ cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> and a spectral index of $`2.10\pm 0.03`$. On consideration of the statistical uncertainties, the intensity found here is consistent with the expected intensity of the isotropic emission together with the intensity of inverse Compton emission and gamma-ray production on ionized hydrogen, which were neglected in the model (see §3).
## 5 Conclusions
This analysis of the EGRET data for the Orion region essentially confirms the findings of the earlier work by DHM based on much less data. No significant point sources are detected in any of four groups of viewing periods or in the combined dataset; the marginal sources reported by DHM are no longer even marginally significant. The emissivity and $`X`$-ratio derived for the diffuse emission are not significantly affected if a point source is included at the position of the greatest remaining residual intensity. A simple linear model for the gamma-ray emission, with adjustable parameters for the gamma-ray emissivity, the $`X`$-ratio, and the isotropic intensity, including two 3EG sources just outside the field, is found to fit the observations adequately across the EGRET energy range. The gamma-ray emissivity in Orion is consistent with that found for the solar circle in large-scale studies of diffuse emission, and its value relative to emissivities for other clouds in the solar vicinity suggests a fairly strong gradient of CR density with Galactocentric distance at the solar circle. The spectrum of emissivity is consistent with electron and proton CR spectra approximately the same as in the solar vicinity. The molecular mass-calibrating $`X`$-ratio is $`(1.35\pm 0.15)\times 10^{20}`$ cm<sup>-2</sup> (K km s<sup>-1</sup>)<sup>-1</sup>, and the gamma-ray emissivity for $`E>`$ 100 MeV is $`(1.65\pm 0.11)\times 10^{-26}`$ s<sup>-1</sup> sr<sup>-1</sup>.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, under contract with the National Aeronautics and Space Administration. The authors wish to thank Hans Bloemen for his comments on the manuscript. SWD acknowledges support from the CGRO Guest Investigator Program grant NAG5-2823. EA acknowledges support from the CGRO Guest Investigator Program grant NAG5-2872. RM acknowledges support from the CGRO Guest Investigator Program grant NAG5-3696.
## 1 Introduction
One of the main developments of recent years is the clarification of the idea that gauge theory and gravity are complementary descriptions of a single theory. Configurations in string/M theory have been very useful tools for studying supersymmetric gauge field theories in different dimensions and with different amounts of unbroken supersymmetry (see for a detailed review and a complete set of references up to February 1998; for more recent works, see ).
The usual construction of a four dimensional field theory contains D4 branes with one finite space direction, two NS branes (parallel for $`𝒩=2`$ supersymmetry or non-parallel for $`𝒩=1`$ supersymmetry) and possible D6 branes. The gauge gluons are given by strings between D4 branes. If there are $`N`$ D4 branes, the gauge group is $`SU(N)`$ ($`SO(N)`$ or $`Sp(N)`$ in the presence of an orientifold O4 or O6 plane). The matter content can be given either by strings between D4 branes and D6 branes or by strings between D4 branes and semi-infinite D4 branes ending on the NS branes . To describe the Higgs moduli space one can use both approaches. For massive matter the discussion was initiated in with semi-infinite D4 branes ending on one of the NS branes, and for massless matter the problem was solved in .
In this paper we discuss a brane configuration with semi-infinite D4 branes ending on both NS branes (similar configurations are considered in ). The Seiberg-Witten curve is derived, and we observe that it can be decomposed into irreducible curves; this determines a splitting of the M5 brane obtained after we lift the brane configuration to M theory.
In section 2 we consider the case of the group $`SU(N)`$ and explain in detail the M5 brane splitting. In section 3 we do the same for the orientifold case (when the gauge group is $`SO(N)`$ or $`Sp(N)`$).
## 2 $`SU(N_c)`$ Gauge Theories and M5 Brane Splitting
Consider a brane configuration for $`SU(N_c)`$ gauge group with $`N_f`$ hypermultiplets in the fundamental representation. In the type IIA theory on a flat space-time with time $`x^0`$ and space coordinates $`x^1,\mathrm{},x^9`$, the brane configuration consists of two NS5 branes with worldvolume coordinates $`x^0,x^1,x^2,x^3,x^4,x^5`$, $`N_c`$ D4 branes suspended between them in the $`x^6`$-direction, $`N_r`$ semi-infinite D4 branes on the right of the right NS brane (which we call the right semi-infinite D4 branes), and $`N_l`$ semi-infinite D4 branes on the left of the left NS brane (which we call the left semi-infinite D4 branes). This configuration preserves $`N=2`$ supersymmetry in four dimensions. In the models described before in the literature (see ), for simplicity, the semi-infinite D4 branes end on only one of the NS branes.
We are interested in the M theory interpretation of these brane configurations. As usual, we introduce the complex variables:
$`v=x^4+ix^5,w=x^8+ix^9,y=\mathrm{exp}((x^6+ix^{10})/R)`$ (1)
where $`x^{10}`$ is the eleventh coordinate of M theory which is compactified on a circle of radius $`R`$. Now we rotate the right NS5 brane towards the $`w`$ direction. The new location of the right NS5 brane becomes $`u:=w-\mu v=0`$ and the left NS5 brane is still located at $`w=0`$. Moreover we assume that $`M_r`$ of the right semi-infinite D4 branes are massless (in the sense that they are located at $`w=0`$) and $`M_l`$ of the left semi-infinite D4 branes are massless (in the sense that they are located at $`u=0`$). In order to be able to rotate the NS5 brane, the M-theory Seiberg-Witten curve must be rational. Since $`u`$ and $`w`$ are two rational functions on this rational curve, they are related by a linear fractional transformation. Thus, after suitable constant shifts, we have
$`uw=\zeta `$ (2)
where $`\zeta `$ is a constant. Now we project this curve to $`(y,u)`$-space to obtain:
$`u^{M_l}{\displaystyle \prod _{i=1}^{N_l-M_l}}(u-u_i)\,y-P(u)=0,`$ (3)
where
$`P(u)=u^{N_c}+p_1u^{N_c-1}+\cdots +p_{N_c}`$ (4)
is some polynomial of degree $`N_c`$ because there are $`N_c`$ finite D4 branes between the two NS branes and the number of finite D4 branes gives the degree of $`P`$.
Similarly if we project the curve to $`(y,w)`$-space, we get
$`Q(w)\,y-Aw^{M_r}{\displaystyle \prod _{i=1}^{N_r-M_r}}(w-w_i)=0`$ (5)
where
$`Q(w)=w^{N_c}+q_1w^{N_c-1}+\cdots +q_{N_c},`$ (6)
and $`A`$ is a normalization constant.
In order for the equations (2), (3) and (5) to hold simultaneously, it is required that
$`P(u)Q(\zeta /u)=Au^{M_l}{\displaystyle \prod _{i=1}^{N_l-M_l}}(u-u_i)\,(\zeta /u)^{M_r}{\displaystyle \prod _{i=1}^{N_r-M_r}}(\zeta /u-w_i)`$ (7)
for all $`u\in 𝐂`$.
The general solutions for P and Q are of the form
$`P(u)`$ $`=`$ $`u^{N_c+M_l-N_r}P^{\prime }(u)`$ (8)
$`Q(w)`$ $`=`$ $`w^{N_c+M_r-N_l}Q^{\prime }(w).`$ (9)
assuming $`N_c>N_r`$ and $`N_c>N_l`$. The possible solutions for $`P^{\prime }`$ and $`Q^{\prime }`$ are
$`P^{\prime }(u)={\displaystyle \prod _{i=1}^{M}}(u-u_{\alpha _i}){\displaystyle \prod _{i=1}^{N_r-M_l-M}}(u-\zeta /w_{\beta _i})`$ (10)
$`Q^{\prime }(w)={\displaystyle \prod _{j\ne \alpha _i}}(w-\zeta /u_j){\displaystyle \prod _{j\ne \beta _i}}(w-w_j).`$ (11)
If we plug (8) into (3), we obtain
$`u^{M_l}{\displaystyle \prod _{i=1}^{N_l-M_l}}(u-u_i)\,y-u^{N_c+M_l-N_r}{\displaystyle \prod _{i=1}^{M}}(u-u_{\alpha _i}){\displaystyle \prod _{i=1}^{N_r-M_l-M}}(u-\zeta /w_{\beta _i})=0,`$ (12)
$`w^{N_c+M_r-N_l}{\displaystyle \prod _{j\ne \alpha _i}}(w-\zeta /u_j){\displaystyle \prod _{j\ne \beta _i}}(w-w_j)\,y-Aw^{M_r}{\displaystyle \prod _{i=1}^{N_r-M_r}}(w-w_i)=0.`$ (13)
Now we can factorize these equations into
$`u^{M_l}{\displaystyle \prod _{i=1}^{M}}(u-u_{\alpha _i})\left({\displaystyle \prod _{j\ne \alpha _i}}(u-u_j)\,y-u^{N_c-N_r}{\displaystyle \prod _{i=1}^{N_r-M_l-M}}(u-\zeta /w_{\beta _i})\right)=0`$ (14)
$`w^{M_r}{\displaystyle \prod _{j\ne \beta _i}}(w-w_j)\left(w^{N_c-N_l}{\displaystyle \prod _{j\ne \alpha _i}}(w-\zeta /u_j)\,y-A{\displaystyle \prod _{i=1}^{N_r-M_l-M}}(w-w_{\beta _i})\right)=0.`$ (15)
These equations suggest that the Seiberg-Witten curve in the $`(u,w,y)`$-space may be decomposed into irreducible curves. In fact, this is possible when $`M_l=M_r`$ and $`M=M_l+M_r`$. In this case, we can see that the curve $`u^{M_l}\prod _{i=1}^M(u-u_{\alpha _i})=0`$ in $`(u,y)`$-space and the curve $`w^{M_r}\prod _{j\ne \beta _i}(w-w_j)=0`$ in $`(w,y)`$-space are the images of a curve which lies on a threefold $`uw=\zeta `$. They describe parts of the original M5 brane without the NS branes. This means that the NS branes plus a part of the D4 branes turn into an M5 brane as usual (which we call a transversal M5 brane) and the D4 branes which are lined up on the opposite sides of the NS branes turn into M5 branes that go through the transversal M5 brane (which we call cylindrical M5 branes). So the M5 brane is decomposed into a cylindrical M5 brane located at $`u=w=0`$, $`M`$ cylindrical M5 branes located at $`u=u_{\alpha _i},w=w_i`$ and the transversal M5 brane given by the last terms in $`(\text{14})`$ and $`(\text{15})`$. This is a phenomenon described in the discussion of the conifold compactification in . A very similar discussion appears in , where semi-infinite D4 branes ending on both sides of NS branes are also considered. In that paper there is another type of splitting, i.e. the M5 brane splits into a flat one (without D4 branes ending on it) and a non-trivial one. In field theory this is obtained at the root of the baryonic Higgs branch.
The third part, i.e. the transversal M5 brane, is a brane configuration for $`SU(N_c-3M_l)`$ with $`(N_l-M_l-M)`$ flavors on the left and $`(N_r-M_r-M)`$ flavors on the right. It is obvious that when there is no massless flavor (i.e. $`M_l=M_r=0`$), the first two curves do not exist and thus there is no factorization.
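As a mechanical cross-check of the factorization requirement (7) with the solutions (8)–(11), one can verify a small example symbolically. The following sympy sketch uses a toy configuration of our own choosing ($`N_c=5`$, $`N_l=N_r=4`$, $`M_l=M_r=1`$, $`M=2`$, $`\alpha =\{1,2\}`$, $`\beta =\{1\}`$) and confirms that the ratio of the two sides of (7) is $`u`$-independent, i.e. a constant normalization $`A`$:

```python
import sympy as sp

u, zeta = sp.symbols('u zeta', nonzero=True)
u1, u2, u3 = sp.symbols('u1 u2 u3', nonzero=True)   # nonzero left masses
w1, w2, w3 = sp.symbols('w1 w2 w3', nonzero=True)   # nonzero right masses

# P and Q built from eqs. (8)-(11) for this toy case
P = u**2 * (u - u1) * (u - u2) * (u - zeta / w1)
Q = lambda w: w**2 * (w - zeta / u3) * (w - w2) * (w - w3)

lhs = P * Q(zeta / u)                                # P(u) Q(zeta/u)
rhs = (u * (u - u1) * (u - u2) * (u - u3)            # u^{M_l} prod (u - u_i)
       * (zeta / u)                                  # (zeta/u)^{M_r}
       * (zeta / u - w1) * (zeta / u - w2) * (zeta / u - w3))

ratio = sp.simplify(lhs / rhs)
print(ratio)                     # zeta**2/(u3*w1): a u-independent constant A
assert sp.diff(ratio, u) == 0
```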
## 3 Orientifold O6 plane, SO (Sp) Groups and M5 Brane Splitting
$`\bullet `$ $`SO(2N_c)`$ Case
This part generalizes the case considered in , where the fundamental flavors were given by semi-infinite D4 branes and only the case with no massless flavor was treated. Here we go one step further and consider both massive and massless flavors. The discussion is the same as in the $`SU`$ case.
To obtain the $`SO`$ group, we are going to introduce an orientifold O6 plane described by:
$$xy=\mathrm{\Lambda }^{4N_c-4-2N_f}v^4.$$
(1)
Let us rotate the left NS5 brane towards $`w`$; this determines a rotation of the right brane in the mirror direction (towards $`-w`$). Thus the left NS5 brane is located at $`w_+=0`$ and the right NS5 brane is located at $`w_{-}=0`$ where
$`w_\pm =w\pm \mu v.`$ (2)
Let $`\mathrm{\Sigma }`$ be the corresponding M-theory Seiberg-Witten curve. On $`\mathrm{\Sigma }`$, the function $`w_+`$ will go to infinity only at one point and $`w_+`$ has only a single pole there, since there is only one NS5 brane, i.e. the right NS5 brane. Thus we can identify $`\mathrm{\Sigma }`$ with the punctured complex $`w_+`$-plane possibly after resolving the singularity at $`x=y=v=0`$. Similarly, we can argue that $`w_{-}`$ has a single pole on $`\mathrm{\Sigma }`$. Since these two functions are rational on a rational curve, they are related by a linear fractional transformation. After suitable constant shifts, the functions $`w_\pm `$ on $`\mathrm{\Sigma }`$ are related by
$`w_+w_{}=\zeta `$ (3)
where $`\zeta `$ is a constant. Now we project this curve to $`(y,w_+)`$-space. Then we obtain
$`w_+^{M_f}{\displaystyle \prod _{i=1}^{N_f-M_f}}(w_+-m_i)\,y-P(w_+)=0,`$ (4)
where
$`P(w_+)=w_+^{2N_c}+p_1w_+^{2N_c-1}+\cdots +p_{2N_c}`$ (5)
is some polynomial of degree $`2N_c`$. Similarly if we project the curve to $`(y,w_{-})`$-space, we get
$`Q(w_{-})\,y-Aw_{-}^{M_f}{\displaystyle \prod _{i=1}^{N_f-M_f}}(w_{-}+m_i)=0`$ (6)
where
$`Q(w_{-})=w_{-}^{2N_c}+q_1w_{-}^{2N_c-1}+\cdots +q_{2N_c},`$ (7)
and $`A`$ is a normalization constant. In order for the equations (3), (4) and (6) to hold simultaneously, it is required that
$`P(w_+)Q(\zeta /w_+)=Aw_+^{M_f}(\zeta /w_+)^{M_f}{\displaystyle \prod _{i=1}^{N_f-M_f}}(w_+-m_i)(\zeta /w_++m_i)`$ (8)
for all $`w_+\in 𝐂`$. The solutions will be of the form
$`P(w_+)`$ $`=`$ $`w_+^{2N_c-N_f+M_f}{\displaystyle \prod _{i=1}^{M}}(w_+-m_{\alpha _i}){\displaystyle \prod _{i=1}^{N_f-M_f-M}}(w_++\zeta /m_{\beta _i})`$ (9)
$`Q(w_{-})`$ $`=`$ $`w_{-}^{2N_c-N_f+M_f}{\displaystyle \prod _{j\ne \alpha _i}}(w_{-}-\zeta /m_j){\displaystyle \prod _{j\ne \beta _i}}(w_{-}+m_j)`$ (10)
if $`2N_c\ge N_f-M_f`$. With the choice of these $`P`$ and $`Q`$, the normalization constant $`A`$ will be
$`A=(-1)^{N_f-M_f-M}{\displaystyle \frac{(-\zeta )^{2N_c-M}}{\prod _{j\ne \alpha _i}m_j\prod _{i=1}^{N_f-M_f-M}m_{\beta _i}}}.`$ (11)
If we plug (9) into (4), we obtain
$`w_+^{M_f}{\displaystyle \prod _{i=1}^{N_f-M_f}}(w_+-m_i)\,y-w_+^{2N_c-N_f+M_f}{\displaystyle \prod _{i=1}^{M}}(w_+-m_{\alpha _i}){\displaystyle \prod _{i=1}^{N_f-M_f-M}}(w_++\zeta /m_{\beta _i})=0`$ (12)
$`w_{-}^{2N_c-N_f+M_f}{\displaystyle \prod _{j\ne \alpha _i}}(w_{-}-\zeta /m_j){\displaystyle \prod _{j\ne \beta _i}}(w_{-}+m_j)\,y-Aw_{-}^{M_f}{\displaystyle \prod _{i=1}^{N_f-M_f}}(w_{-}+m_i)=0.`$ (13)
After factoring out the terms $`w_+^{M_f}\prod _{i=1}^{M}(w_+-m_{\alpha _i})`$ and $`w_{-}^{M_f}\prod _{j\ne \beta _i}(w_{-}+m_j)`$, we obtain
$`{\displaystyle \prod _{j\ne \alpha _i}}(w_+-m_j)\,y-w_+^{2N_c-N_f}{\displaystyle \prod _{i=1}^{N_f-M_f-M}}(w_++\zeta /m_{\beta _i})=0`$ (14)
$`w_{-}^{2N_c-N_f}{\displaystyle \prod _{j\ne \alpha _i}}(w_{-}-\zeta /m_j)\,y-A{\displaystyle \prod _{i=1}^{N_f-M_f-M}}(w_{-}+m_{\beta _i})=0.`$ (15)
This is nothing but a brane configuration for the $`SO(2N_c-M)`$ theory with $`N_f-M_f-M`$ flavors. The gauge group $`SO(2N_c)`$ with $`N_f`$ flavors is broken to the gauge group $`SO(2N_c-M)`$ with $`N_f-M_f-M`$ flavors, which agrees with QCD. In the brane geometry, the $`M`$ semi-infinite D4 branes are matched together with finite D4 branes to move in the $`w_+=0`$ complex plane and in its mirror complex plane. The parts of (14) and (15) that decouple are again interpreted as infinite cylindrical M5 branes that go through the transversal M5 brane. For the case $`M=0`$ we obtain the result of , so our results are consistent.
The terms which are factored out before arriving at the formulas (14) and (15) represent the cylindrical M5 branes which pass through the transversal M5 brane.
The extension to $`SO(2N_c+1)`$ is trivial. In the odd case the equations (4) and (6) become:
$`{\displaystyle \prod _{i=1}^{N_f}}(w_+-m_i)\,y-P(w_+)=0`$ (16)
$`Q(w_{-})\,y-A{\displaystyle \prod _{i=1}^{N_f}}(w_{-}+m_i)=0`$ (17)
where
$`P(w_+)=w_+(w_+^{2N_c}+p_1w_+^{2N_c-1}+\cdots +p_{2N_c})`$ (18)
$`Q(w_{-})=w_{-}(w_{-}^{2N_c}+q_1w_{-}^{2N_c-1}+\cdots +q_{2N_c}).`$ (19)
and the solutions will be of the form
$`P(w_+)`$ $`=`$ $`w_+^{2N_c+1-N_f}{\displaystyle \prod _{i=1}^{M}}(w_+-m_{\alpha _i}){\displaystyle \prod _{i=1}^{N_f-M}}(w_++\zeta /m_{\beta _i})`$ (20)
$`Q(w_{-})`$ $`=`$ $`w_{-}^{2N_c+1-N_f}{\displaystyle \prod _{j\ne \alpha _i}}(w_{-}-\zeta /m_j){\displaystyle \prod _{j\ne \beta _i}}(w_{-}+m_j)`$ (21)
if $`2N_c\ge N_f`$. When $`M=0`$, we have a special solution
$`P(w_+)`$ $`=`$ $`w_+^{2N_c+1-N_f}{\displaystyle \prod _{i=1}^{N_f}}(w_++\zeta /m_i)`$ (22)
$`Q(w_{-})`$ $`=`$ $`w_{-}^{2N_c+1-N_f}{\displaystyle \prod _{i=1}^{N_f}}(w_{-}-\zeta /m_i)`$ (23)
which yields
$`\left({\displaystyle \prod _{i=1}^{N_f}}m_i\right)y{\displaystyle \prod _{i=1}^{N_f}}\left({\displaystyle \frac{w_{-}-m_i}{w_++m_i}}\right)=w_+^{2N_c+1}.`$ (24)
This agrees with , and thus shows the consistency of our results.
The discussion for $`M\ne 0`$ goes the same as in the $`SO(2N_c)`$ case and one obtains the infinite cylindrical M5 branes which go through the transversal M5 brane.
$`\bullet `$ $`Sp(2N_c)`$ Case
The situation is similar to the $`SO`$ case. Now the O6 plane is described by:
$$xy=\mathrm{\Lambda }^{2N_c-4-2N_f}v^4.$$
(25)
Again the general solution is of the form
$`P(w_+)`$ $`=`$ $`w_+^{2N_c-N_f}{\displaystyle \prod _{i=1}^{M}}(w_+-m_{\alpha _i}){\displaystyle \prod _{i=1}^{N_f-M}}(w_++\zeta /m_{\beta _i})`$ (26)
$`Q(w_{-})`$ $`=`$ $`w_{-}^{2N_c-N_f}{\displaystyle \prod _{j\ne \alpha _i}}(w_{-}-\zeta /m_j){\displaystyle \prod _{j\ne \beta _i}}(w_{-}+m_j)`$ (27)
if $`2N_c\ge N_f`$. For $`M=0`$, we get a special solution given by
$`xy`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Lambda }_{N=2}^{2N_c-4-2N_f}}{16\mu ^4}}(w_+-w_{-})^4`$ (28)
$`w_+w_{-}`$ $`=`$ $`\zeta `$ (29)
$`\left({\displaystyle \prod _{i=1}^{N_f}}m_i\right)y{\displaystyle \prod _{i=1}^{N_f}}\left({\displaystyle \frac{w_+-m_i}{w_{-}+m_i}}\right)`$ $`=`$ $`w_+^{2N_c}.`$ (30)
On a smooth surface
$`x^{\prime }y^{\prime }=\mathrm{\Lambda }_{N=2}^{2N_c-4-2N_f}`$ (31)
which maps onto the old surface via the map $`x=x^{\prime }v^2,y=y^{\prime }v^2`$, the special solution can be described by
$`x^{\prime }y^{\prime }`$ $`=`$ $`\mathrm{\Lambda }_{N=2}^{2N_c-4-2N_f}`$ (32)
$`w_+w_{-}`$ $`=`$ $`\zeta `$ (33)
$`\left({\displaystyle \prod _{i=1}^{N_f}}m_i\right)y^{\prime }{\displaystyle \prod _{i=1}^{N_f}}\left({\displaystyle \frac{w_+-m_i}{w_{-}+m_i}}\right)`$ $`=`$ $`w_+^{2N_c+2}.`$ (34)
If we map this curve to the old surface, then the last equation becomes:
$`\left({\displaystyle \prod _{i=1}^{N_f}}m_i\right)y{\displaystyle \prod _{i=1}^{N_f}}\left({\displaystyle \frac{w_+-m_i}{w_{-}+m_i}}\right)`$ $`=`$ $`v^2w_+^{2N_c+2}`$ (35)
which is the same as (5.27) of after rescaling of variables.
The discussion for $`M\ne 0`$ is the same as before.
## 4 Conclusion
In this paper we have considered field theories obtained on the worldvolume of D4 branes suspended between two NS branes. The matter content is given by semi-infinite D4 branes ending on both NS branes (as opposed to the previously considered case when they end only on one NS brane ). The Seiberg-Witten curve can be projected to $`(y,u)`$ and $`(y,w)`$ spaces, and the requirement that both projections hold simultaneously suggests that the Seiberg-Witten curve may be decomposed into irreducible curves. This implies that in M theory the M5 brane splits into several components, one being the transversal M5 brane and the rest being infinite cylindrical M5 branes. We have discussed the case with or without an orientifold O6 plane.
## 5 Acknowledgments
We thank N. Dorey for sending us a preliminary version of his work with D. Tong. We thank A. Hanany for discussions.
# Exploratory ASCA Observations of Broad Absorption Line Quasi-Stellar Objects
## 1 Introduction
While Broad Absorption Line Quasi-Stellar Objects (BALQSOs) allow us to view substantial and energetically important gas outflows that are probably present in most QSOs (e.g. Weymann 1997), their study in the X-ray regime has not yet matured. BALQSOs are known to be much weaker in the soft X-ray band ($`<2`$ keV) than QSOs that lack BALs (e.g. Kopko et al. 1994; Green et al. 1995; Green & Mathur 1996, hereafter GM96), and only a few BALQSOs have reliable X-ray detections. Their low soft-X-ray fluxes may arise as a result of photoelectric X-ray absorption by the same outflowing matter that makes the BALs in the ultraviolet. Hard X-ray ($`>2`$ keV) observations of BALQSOs can in principle test the absorption hypothesis as well as constrain the properties, in particular the column densities, of BALQSO outflows. Column density constraints for BALQSOs are difficult to obtain from ultraviolet data due to BAL saturation (e.g. Arav 1997; Hamann 1998), and the limited X-ray data now available suggest that previous column density estimates from ultraviolet lines were too small by a factor of $`\sim 100`$ or more. Because the photoelectric X-ray absorption cross section is a strongly decreasing function of energy, BALQSOs that are weak in the soft X-ray band (e.g. for ROSAT) could be much brighter at higher X-ray energies (e.g. for ASCA).<sup>1</sup><sup>1</sup>1The ROSAT band is from 0.1–2.4 keV and peaks from 0.9–1.4 keV, while the ASCA band is from 0.6–9.5 keV and has high sensitivity in the 3–7 keV range. To good approximation, the photoelectric X-ray absorption cross section is proportional to $`E^{-8/3}`$. Thus a 6 keV X-ray is $`\sim 120`$ times more penetrating than a 1 keV X-ray. In the hard X-ray band, only the BALQSOs PHL 5200 ($`z=1.98`$, $`B\approx 18.5`$) and Mrk 231 ($`z=0.042`$, $`B\approx 14.5`$) have been studied in detail. Mathur, Elvis & Singh (1995) argue for the presence of heavy X-ray absorption with $`N_\mathrm{H}\sim 1\times 10^{23}`$ cm<sup>-2</sup> in PHL 5200, although the observed flux is low making reliable spectral analysis difficult (see §3 for further discussion). The ASCA data for Mrk 231 also suggest a large intrinsic column density ($`2\times 10^{22}`$–$`10^{23}`$ cm<sup>-2</sup>; Iwasawa 1999; Turner 1999).
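The footnote's penetration estimate follows directly from the $`E^{-8/3}`$ scaling; a two-line check (the cross-section normalization `sigma1` below is indicative only, not a fitted value):

```python
def tau(E_keV, NH, sigma1=2.4e-22):
    """Approximate photoelectric optical depth, tau = sigma(E)*N_H, with
    sigma(E) ~ sigma1*(E/1 keV)**(-8/3); sigma1 in cm^2 per H atom at 1 keV."""
    return sigma1 * E_keV ** (-8.0 / 3.0) * NH

print(tau(1.0, 1e23) / tau(6.0, 1e23))   # ~120: 6 keV photons penetrate far more easily
```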
Since at least $`\sim 10`$% and perhaps up to 30–50% of all QSOs have BAL gas along our line of sight (e.g. Goodrich 1997; Krolik & Voit 1998), our lack of knowledge about the X-ray properties of BALQSOs represents a serious deficiency. In an attempt to remedy this situation, we performed ASCA observations of several of the BALQSOs that seemed most likely to be X-ray bright above 2 keV. Given the unknown hard X-ray properties (e.g. fluxes) of BALQSOs, we adopted a conservative strategy of making moderate-depth ‘exploratory’ observations of a fairly large number of objects. In this manner, we could learn about the 2–10 keV properties of as many BALQSOs as possible without being too heavily invested in the uncertain results from any one object. BALQSOs found to be reasonably X-ray bright based on exploratory observations could then be followed up with deeper X-ray spectroscopic observations. We chose as our targets some of the optically brightest (in the $`B`$ and $`R`$ bands) BALQSOs, since there is a general correlation between optical brightness and intrinsic X-ray brightness for QSOs (e.g. Zamorani et al. 1981). In this paper we present ASCA results for five of the optically brightest BALQSOs known: PG 0043+039, 0043+008, 0226–104, PG 1700+518 and LBQS 2111–4335. In the $`B`$ band, these objects are all brighter than the ASCA-detected BALQSO PHL 5200. In addition, we include the ASCA results from archived observations of IRAS 07598+6508, Mrk 231 and PHL 5200. Redshifts, $`B`$ and $`R`$ magnitudes, and Galactic column densities for the BALQSOs in our sample are given in Table 1. Many of our BALQSOs are comparably bright in the optical band to radio-quiet (RQ) QSOs which ASCA has studied in significant detail (e.g. PG 1634+706 and PG 1718+481, Nandra et al. 1995; IRAS 13349+2438, Brandt et al. 1997), and our sample is on average significantly brighter at $`B`$ than the high-redshift RQ QSOs studied with ASCA by Vignali et al. (1999). Aside from being some of the optically brightest BALQSOs in the sky, the objects in our sample span a range of other properties (e.g. absorption line strength, absorption line shape, redshift, optical continuum polarization, infrared luminosity). While they do not form a rigorously defined complete sample, they do appear to comprise a reasonably representative subsample of the BALQSO population as a whole.
As mentioned above, if BALQSOs suffer from heavy X-ray absorption then they will be much brighter above 2 keV than at lower energies. For example, a hypothetical BALQSO at $`z=2`$ with an absorption column density of $`5\times 10^{23}`$ cm<sup>-2</sup> is expected to be $`\sim 20`$ times brighter in the 2–10 keV band than in the 0.1–2.0 keV band (observed-frame with a photon index of $`\mathrm{\Gamma }=2.0`$ for the underlying power law). The 2–10 keV sensitivity of ASCA thus makes it a superior tool to ROSAT for studying BALQSOs with X-ray column densities in the wide range from $`\sim `$(1–500)$`\times 10^{22}`$ cm<sup>-2</sup> (column densities substantially larger than $`\sigma _\mathrm{T}^{-1}=1.50\times 10^{24}`$ cm<sup>-2</sup> are optically thick to Thomson scattering and are thus impenetrable to X-rays with energies below $`m_\mathrm{e}c^2`$). Furthermore, even moderate-length ASCA non-detections can set important constraints on the 2–10 keV fluxes and internal column densities of BALQSOs. These are important for planning AXAF/XMM spectroscopic observations, and in some cases ASCA non-detections can raise the current ROSAT lower limits on BALQSO X-ray column densities by roughly an order of magnitude (from $`5\times 10^{22}`$ cm<sup>-2</sup> to $`5\times 10^{23}`$ cm<sup>-2</sup>). If the outflowing BAL material causes the inferred X-ray absorption, this increases the implied BALQSO mass outflow rates and kinetic luminosities.
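The factor of $`\sim 20`$ quoted above for the hypothetical $`z=2`$, $`5\times 10^{23}`$ cm<sup>-2</sup> BALQSO can be reproduced with a simple integration of an absorbed power law. The sketch below again uses the approximate $`E^{-8/3}`$ cross-section with an indicative normalization; it is not the detailed pimms/xspec calculation used for the tables:

```python
import numpy as np
from scipy.integrate import quad

def band_flux(e_lo, e_hi, z, NH, gamma=2.0, sigma1=2.4e-22):
    """Observed-frame energy flux (arbitrary normalization) of a power law
    absorbed at the source; sigma(E) ~ sigma1*(E/1 keV)**(-8/3) approximates
    the photoelectric cross-section per H atom (sigma1 indicative only)."""
    def integrand(e_obs):
        tau = sigma1 * (e_obs * (1.0 + z)) ** (-8.0 / 3.0) * NH
        return e_obs ** (1.0 - gamma) * np.exp(-tau)
    return quad(integrand, e_lo, e_hi)[0]

z, NH = 2.0, 5.0e23
hard = band_flux(2.0, 10.0, z, NH)
soft = band_flux(0.1, 2.0, z, NH)
print(hard / soft)    # ~20, consistent with the value quoted in the text
```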
## 2 Observations and Data Analysis
### 2.1 Observation Details and Data Reduction
In Table 1 we list the relevant observation dates, exposure times and instrument modes for our objects. The data resulting from these observations were reduced and analyzed with ftools and xselect following the general procedures described in Brandt et al. (1997). We have used Revision 2 data (Pier 1997) and adopted the standard Revision 2 screening criteria.
### 2.2 Image Analyses and Count Rate Constraints
We used xselect to create full (0.6–9.5 keV for SIS and 0.9–9.5 keV for GIS) and hard (2–9.5 keV) band images for each of the four ASCA detectors. We also created summed SIS0+SIS1 and GIS2+GIS3 images in the full and hard bands. Image analysis was performed using ximage (Giommi, Angelini & White 1997). We first attempted to independently check the ASCA astrometry using serendipitous X-ray sources within the fields of view. We correlated ASCA serendipitous source positions with objects in coincident ROSAT/Einstein images as well as the ned/simbad databases. We were able to independently verify the astrometry for the observations of all BALQSOs but PG 0043+039, Mrk 231 and PHL 5200. We comment that the ASCA astrometry is generally quite reliable (see Gotthelf 1996), and our independent checking is only done for confirmation.
We then searched all the images described above for any significant X-ray sources that were positionally coincident with the precise optical positions of our BALQSOs. In order to determine the observed SIS count rate or upper limit for a given BALQSO, the 0.6–9.5 keV counts were extracted from a circular target region with a $`3.2^{}`$ radius centered on the optical BALQSO position. This provided counts for the target region, $`N_\mathrm{t}`$. An annular or circular source-free background region was chosen and similarly extracted to give background counts. The background counts were normalized to the area of the target region to obtain $`N_\mathrm{b}`$ which was then subtracted from $`N_\mathrm{t}`$. If the difference was $`<3\sigma `$ where $`\sigma =\sqrt{N_\mathrm{b}}`$, the observation was considered a non-detection. In this case an upper limit on the count rate was taken to be $`3\sqrt{N_\mathrm{t}}/t_\mathrm{e}`$ where $`t_\mathrm{e}`$ is the exposure time listed in Table 1. For observations with $`3\sigma `$ detections, the count rate was calculated as $`(N_\mathrm{t}N_\mathrm{b})/t_\mathrm{e}`$. A similar procedure was followed for the GIS detectors. However, for the GIS the extracted energy range was from 0.9–9.5 keV, and the radius of the target region was $`5^{}`$. The results of these analyses are presented in Table 2, and we comment on special cases in §3. Only four of these optically bright BALQSOs were detected with high statistical significance: 0226–104, IRAS 07598+6508, Mrk 231 and PHL 5200. We have chosen our target region sizes based upon §7.4 of The ASCA Data Reduction Guide, and we have found that varying the target region sizes within the recommended ranges does not materially change our basic results. When calculating $`N_\mathrm{b}`$ we have investigated at least two background regions for each SIS/GIS image, and we find that our results also do not materially depend upon our choice of background region. When our SIS observations were made using 2 CCD mode (see Table 1), we took all background photons from the chip that contained the target.
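The detection criterion and upper-limit prescription just described can be summarized in a short routine. The following is a minimal sketch in Python; the counts in the example are illustrative, not values from Table 2:

```python
import math

def count_rate_or_limit(n_target, n_bkg_raw, area_ratio, t_exp):
    """Background-subtracted count rate, or a 3-sigma upper limit.

    n_target   : raw counts in the target extraction region
    n_bkg_raw  : raw counts in the background region
    area_ratio : (target region area) / (background region area)
    t_exp      : exposure time in seconds
    """
    n_b = n_bkg_raw * area_ratio            # background normalized to target area
    sigma = math.sqrt(n_b)
    if n_target - n_b < 3.0 * sigma:        # non-detection
        return '<%.2e count/s' % (3.0 * math.sqrt(n_target) / t_exp)
    return '%.2e count/s' % ((n_target - n_b) / t_exp)

# Example: 60 target counts, 400 counts in a background region 10 times
# larger, 23 ks exposure (illustrative numbers only)
print(count_rate_or_limit(60, 400, 0.1, 23.0e3))
```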
We also give observed frame 2–10 keV fluxes or upper limits for our BALQSOs in Table 2. These were computed for SIS0 with pimms (Mukai 1997) using a power law with $`\mathrm{\Gamma }=2`$. The intrinsic column density was taken to be $`1\times 10^{23}`$ cm<sup>-2</sup> or the value from column three of Table 3 (whichever is larger). (In practice, pimms is only able to work with column densities at $`z=0`$; therefore, to include intrinsic column densities at $`z>0`$ in pimms we used xspec (Arnaud 1996) to find the column density at $`z=0`$ that produces equivalent absorption. Galactic absorption was also taken into account in this process.) Note that cosmological redshifting of the X-ray spectrum can in some cases greatly reduce the effect of the intrinsic absorption. This allows us to set more sensitive upper limits on the 2–10 keV fluxes for some high redshift objects due to the energy dependence of the ASCA spectral response.
### 2.3 Column Density Constraints
Since X-ray spectra cannot be modeled for most of the BALQSOs in our sample, we have followed the method adopted by GM96 to place lower limits on their intrinsic column densities. Briefly, we assumed the underlying optical-to-X-ray continuum shape of a typical RQ QSO and added an intrinsic absorbing column until the predicted count rate matched our observed count rate or upper limit. This type of analysis relies upon the plausible but unproven assumption that our targets have typical RQ QSO X-ray continua viewed through an intrinsic absorbing column of gas. Comparisons of BALQSO and non-BALQSO optical/ultraviolet emission lines and continua show that these two types of QSOs are remarkably similar, consistent with the view that these are not two inherently different classes of objects (e.g. Weymann et al. 1991). We also take the gas in the absorbing column to have solar abundances (Anders & Grevesse 1989), and this appears to be consistent with the currently available data (see Arav 1997 and Hamann 1998). Further justification for the assumptions that underlie this general method may be found in GM96. One additional issue not discussed by GM96 is that resonant absorption lines may be a significant source of X-ray opacity due to the large velocity dispersions of BALQSO outflows (compare with §3 of Kriss et al. 1996). We have neglected this effect for consistency with all previous work and because there is presently no proof that the BAL gas also causes the inferred X-ray absorption. A detailed treatment of resonant absorption line opacity in the X-ray band will be complex and dependent upon the details of the velocity dispersion in the BAL outflow, but this opacity is expected to modify the inferred column density by less than a factor of $`2`$ (J. Krolik, private communication). Theoretical calculations examining the importance of this effect would be a valuable addition to the literature.
The optical-to-X-ray spectral index, $`\alpha _{\mathrm{ox}}`$, is the slope of a nominal power law connecting the rest-frame flux density at 2500 Å to that at 2 keV (QSOs with large values of $`\alpha _{\mathrm{ox}}`$ are those that are X-ray faint). Typical RQ QSOs are observed to have $`\alpha _{\mathrm{ox}}=1.51\pm 0.10`$ (this value is from §4.3 of Laor et al. 1997 for an optical flux density at 2500 Å; A. Laor, private communication). The 2 keV flux density can be predicted given the rest-frame 2500 Å flux density and an expected value for $`\alpha _{\mathrm{ox}}`$. Flux densities at 2500 Å for PG 0043+039 and PG 1700+518 were obtained by interpolating the continuum flux density data of Neugebauer et al. (1987) with a power law. For our non-PG BALQSOs, we derived the flux density at 2500 Å from the observed $`B`$ magnitude (see Table 1) using the flux zero point of Marshall et al. (1983) and extrapolating along an assumed optical continuum power law with slope $`\alpha _\mathrm{o}=0.5`$. The Galactic extinction corrections used by GM96 and a $`K`$-correction for the power law were also included.
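The prediction of the underlying 2 keV flux density from the 2500 Å flux density is a one-line calculation; a sketch follows, assuming the standard sign convention in which $`\alpha _{\mathrm{ox}}`$ is the slope of a power law $`f_\nu \nu ^{\alpha _{\mathrm{ox}}}`$ between the two rest-frame frequencies:

```python
import math

NU_2500A = 2.998e18 / 2500.0        # Hz; c = 2.998e18 Angstrom/s
NU_2KEV = 2.0 * 2.418e17            # Hz; 1 keV corresponds to 2.418e17 Hz
LOG_NU_RATIO = math.log10(NU_2KEV / NU_2500A)   # ~2.605

def predicted_f2kev(f2500, alpha_ox=1.6):
    """Rest-frame 2 keV flux density (same units as f2500) implied by the
    rest-frame 2500 A flux density for a nominal power law with slope
    alpha_ox (f_nu ~ nu**-alpha_ox)."""
    return f2500 * 10.0 ** (-alpha_ox * LOG_NU_RATIO)
```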
An $`\alpha _{\mathrm{ox}}`$ value of 1.6 was used to predict an underlying rest-frame flux density at 2 keV for each BALQSO in our sample. We chose this value of $`\alpha _{\mathrm{ox}}`$ for consistency with GM96 and note that it is reasonably conservative (also see the discussion in §4.1 of GM96). We assumed that the underlying X-ray continuum shape was a power law with photon index $`\mathrm{\Gamma }`$, and we then calculated the expected power-law normalization at 1 keV in the observed frame. We entered this model into xspec (Arnaud 1996) allowing for the presence of absorption by a column of neutral gas intrinsic to the BALQSO. Galactic absorption was also included with the column densities given in Table 1. Using the simulation routine fakeit with the spectral response matrices for the ASCA SIS0 and GIS3 detectors \[described in §10.4.1 of the AO-7 ASCA Technical Description (AN 98-OSS-03)\], a predicted count rate was generated. The intrinsic column density was then increased until the predicted count rate from the simulation no longer exceeded the measured count rate or upper limit. In this manner, an upper limit on the count rate yielded a lower limit on the intrinsic column density. We have focused on the SIS0 and GIS3 detectors because for the standard pointing position our targets lie closer to the optical axes of these detectors. Combining like detectors (SIS0/SIS1 or GIS2/GIS3) would not substantially improve the column density constraints due to the fact that the vignetting for ASCA is highly dependent upon off-axis angle and is thus noticeably worse for SIS1 and GIS2. The results from this analysis are presented in Table 3. Since X-ray spectral shapes vary among the RQ QSO population, we have performed these calculations with both $`\mathrm{\Gamma }=1.7`$ and $`\mathrm{\Gamma }=2.0`$ in order to cover the typical range of photon index values (see Reeves et al. 1997). Using PG 0043+039 as an example, we have also illustrated the dependence of the inferred column density upon the intrinsic $`\alpha _{\mathrm{ox}}`$ value in Figure 1.
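Schematically, the constraint is obtained by the following loop. This is only a sketch: `predicted_rate` stands in for the actual xspec fakeit simulation with the SIS0/GIS3 responses, and the geometric step size is our own illustrative choice:

```python
def nh_lower_limit(predicted_rate, rate_limit, nh=1.0e20, step=1.1):
    """Raise the intrinsic column density until the predicted count rate
    drops to the measured count rate or upper limit (cf. Section 2.3).

    predicted_rate : callable N_H -> count rate; in the actual analysis
                     this is an XSPEC 'fakeit' simulation
    rate_limit     : measured count rate or 3-sigma upper limit
    """
    while predicted_rate(nh) > rate_limit:
        nh *= step               # geometric steps in N_H
        if nh > 1.5e24:          # Thomson-thick; stop searching
            break
    return nh                    # lower limit on the intrinsic column
```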
Recent evidence indicates that some, if not all, BALQSOs have significantly attenuated optical/ultraviolet continua compared to non-BAL RQ QSOs (e.g. Goodrich 1997; see this paper for the precise definition of ‘attenuation’). In this case, an underlying flux density at 2 keV that is predicted from the attenuated flux density at 2500 Å will be artificially low. While this issue is difficult to address with precision at present, correcting for it would tend to strengthen our above results. Even larger intrinsic column densities would be required to suppress the larger predicted X-ray fluxes. For two of our BALQSOs with near-infrared continuum flux densities in the literature, PG 0043+039 and PG 1700+518, we have examined this matter by attempting to predict the underlying 2 keV flux density from the 1.69 $`\mu `$m flux density (again using the data of Neugebauer et al. 1987). Laor et al. (1994) argue that the 1.69 $`\mu `$m flux density is a reasonably good predictor of the 0.3 keV flux density, and the Laor et al. (1997) data show that it is also a reasonably good predictor of the 2 keV flux density. We have computed $`\alpha _{\mathrm{ix}}`$, the spectral slope of a nominal power law between 1.69 $`\mu `$m and 2 keV, for the 20 QSOs from Laor et al. (1997) without intrinsic absorption. We find that $`\alpha _{\mathrm{ix}}=1.29\pm 0.09`$, and that $`\alpha _{\mathrm{ix}}`$ has a comparable dispersion to $`\alpha _{\mathrm{ox}}`$ for the same set of QSOs. We have repeated our column density estimates for PG 0043+039 and PG 1700+518 using $`\alpha _{\mathrm{ix}}`$ in place of $`\alpha _{\mathrm{ox}}`$, and we find that the inferred column density lower limits are within a factor of two of those presented in Table 3. We show our results for PG 0043+039 in Figure 2, and for PG 1700+518 we find an intrinsic column density $`>8.5\times 10^{23}`$ cm<sup>-2</sup> when $`\alpha _{\mathrm{ix}}=1.29`$ and $`\mathrm{\Gamma }=2.0`$.
## 3 Notes on Individual Observations
PG 0043+039: The optical and ultraviolet properties of this BALQSO were recently studied in detail by Turnshek et al. (1994), and the ASCA analysis for it was straightforward. Our large inferred X-ray column density combined with the intrinsic color excess of $`E(B-V)\simeq 0.11`$ may suggest that the absorber is dust poor (compare with §3.1.1 of Turnshek et al. 1994).
0043+008 (UM 275): Zamorani et al. (1981) claimed that this BALQSO was detected in a 1.6 ks Einstein observation, although our analysis of the Einstein data made us skeptical of this claim. Wilkes et al. (1994) also give only an Einstein upper limit. 0043+008 is undetected in our much deeper (23 ks) ASCA observation. Based on our analysis, it appears that the claimed Einstein detection was actually an unrelated source lying $`2.1^{}`$ from the precise optical position. In order to obtain the tightest possible constraints on the SIS and GIS count rates for 0043+008, we have excluded this source from the target regions before calculating the count rate upper limits. This source does not cause serious confusion, but there was a statistical photon excess in the target region for GIS3 that was not consistent with a point source or the optical position of 0043+008. We suspect this excess is due to imperfect removal of all the photons from the Einstein source.
0226–104: Despite the fact that this BALQSO has been intensively studied (e.g. Korista et al. 1992), the coordinates stated for it in the literature are inconsistent with each other. We have used the coordinates $`\alpha _{2000}=02^\mathrm{h}28^\mathrm{m}39^\mathrm{s}.2`$, $`\delta _{2000}=-10^{}11^{}10.0^{\prime \prime }`$ (T. Barlow, private communication). 0226–104 was detected in all but the SIS1 detector as a point source (the SIS1 non-detection can be understood as due to the larger off-axis angle of 0226–104 in this detector). We excluded counts from two nearby sources before calculating the count rates.
IRAS 07598+6508: This BALQSO was marginally detected by ROSAT with a count rate of $`1.8\times 10^{-3}`$ count s<sup>-1</sup>, but it was impossible to determine whether the observed X-ray emission was associated with accretion activity or a circumnuclear starburst (Lawrence et al. 1997). There was an X-ray source coincident with the optical position of IRAS 07598+6508 in the GIS3 detector, and the target region counts were $`8.6\sigma `$ above background for the full band image (the source was also seen in the hard band image). There were photon excesses in the target regions for SIS0 and SIS1, but the lack of distinct point sources at the optical position led us to conservatively treat these as upper limits. The sole detection in the GIS3 detector might be understood as due to the better high-energy response of the GIS detectors and the smaller off-axis angle of GIS3 compared to GIS2.
Our lower limit on the absorption column density is $`20`$ times higher than that of GM96, and this suggests that the dust-to-gas ratio in the absorbing material may be extremely small (compare with §5.2 of GM96).
Mrk 231: This object has the lowest redshift in our sample and has sometimes been classified as a Seyfert 1, but we feel its large luminosity ($`L_{\mathrm{Bol}}>10^{46}`$ erg s<sup>-1</sup>) and strong absorption lines justify its inclusion (e.g. it is also included in the sample of Boroson & Meyers 1992). Mrk 231 has recently been studied with ASCA by Iwasawa (1999) and Turner (1999). Our spectral analysis is consistent with the analyses in these papers and thus we do not detail it here. In Table 3 we give the column density discussed in §4.1.2 of Iwasawa (1999). However, even the fitted ASCA column density may be substantially smaller than the true column density to the black hole region due to possible electron scattering and partial covering effects in this extremely complicated system (see §4 for further discussion).
PG 1700+518: There were no ASCA sources close to the optical position for this BALQSO, and we were able to place tight upper limits on the count rate for all but the SIS1 detector. A statistical photon excess in the SIS1 target region was not from any obvious point source and thus we do not consider it a detection. A 40 ks RXTE observation of PG 1700+518 has also been performed with principal investigator P. Green. Given that our ASCA observation is $`\stackrel{>}{}15`$ times more sensitive than the RXTE observation, we expect an RXTE non-detection.
LBQS 2111–4335: There was clearly no point source at the optical position of the BALQSO. However, a ninth magnitude A star (HD202042) was coincident with an X-ray source detected $`2.2^{}`$ from LBQS 2111–4335 (the A star may have a late-type binary companion that creates most of the X-ray emission). We excluded the counts from this source before calculating the count rate upper limits. Even after excluding the source from the extraction region, photon counts above the noise level were still evident in the SIS0 detector. However, we do not consider this to be a detection.
PHL 5200: The ASCA data for this object were first presented by Mathur et al. (1995). This BALQSO was clearly detected in all but the GIS2 detector. We find that the results of spectral analysis are highly sensitive to the details of the background subtraction. Given this sensitivity, our analyses suggest that any column density from 0–$`5\times 10^{23}`$ cm<sup>-2</sup> is consistent with the ASCA data (even at 68% confidence). Thus, while the ASCA data certainly do not rule out the presence of a large column density absorber, they do not convincingly show one to be present either.
## 4 Conclusions and Future Prospects
We have presented the first moderate-sized sample of sensitive BALQSO observations in the 2–10 keV band. This band is potentially an important one for BALQSO spectroscopy due to the ability of $`>2`$ keV X-rays to penetrate large column densities, and the properties of our sample are likely to be more representative of the behavior of the BALQSO population as a whole than those derived from the two single-object ASCA studies to date (see §1). While we detect a significantly larger fraction of our objects than GM96, we find that in general even the optically brightest BALQSOs known are extremely weak in the 2–10 keV band. Assuming that our BALQSOs have typical RQ QSO X-ray continua and are weak due to intrinsic X-ray absorption, we find column densities $`\stackrel{>}{}5\times 10^{23}`$ cm<sup>-2</sup> in several cases. These are the largest X-ray column densities yet inferred for BALQSOs, being about an order of magnitude larger than the lower limits of GM96. If the same outflowing gas makes both the X-ray and ultraviolet absorption, our improved column density limits also raise the implied mass outflow rate and kinetic luminosity by about an order of magnitude.
Alternatively, it is possible that BALQSOs are intrinsically X-ray weak, perhaps due to an orientation dependence of the X-ray continuum flux. Krolik & Voit (1998) have discussed the possibility of optical/ultraviolet continuum anisotropy in BALQSOs (due to accretion disk limb-darkening which dims objects viewed equatorially), but they suggest that anisotropy of the X-ray emission is likely to be weaker. If the power-law X-ray emission of RQ QSOs indeed originates in an optically thin accretion disk corona near a Kerr black hole, the intrinsic equatorial emission may in fact be enhanced by relativistic effects (e.g. Cunningham 1975).
An important corollary of our work is that optical brightness (in the $`B`$ or $`R`$ bands) is not a good predictor of X-ray brightness for BALQSOs. Two of the four BALQSO X-ray detections in our sample (0226–104 and PHL 5200) are actually among our optically faintest objects. It appears that the prototype of the class, PHL 5200 (Lynds 1967), has an unusually high X-ray flux for a BALQSO given its optical flux, and thus its X-ray properties cannot be taken as representative of the BALQSO population. In fact, its X-ray brightness is typical of non-BALQSOs of its optical magnitude, and we do not find the evidence for a large intrinsic column density in this object to be compelling (see §3). Finally, all four of our detected BALQSOs have notably high optical continuum polarizations (2.3–5%; Schmidt, Hines & Smith 1997; 7 of our 8 BALQSOs have optical continuum polarization data in the literature), and it is worth investigating if high-polarization BALQSOs tend to be the X-ray-brighter members of the class. We refrain from making a formal claim about this issue at present due to our limited sample size, but another example of this phenomenon may be IRAS 13349+2438 (see §3.1.3 of Brandt et al. 1997). High optical continuum polarization and X-ray flux could occur together if the direct lines of sight into the X-ray nuclei of BALQSOs were usually blocked by extremely thick material (say $`10^{25}`$ cm<sup>-2</sup>). In this case, one could only hope to detect X-rays when there is a substantial amount of electron scattering in the nuclear environment by a ‘mirror’ of moderate Thomson depth. Then, any measured X-ray column density for a highly polarized BALQSO (e.g. Mrk 231 and PHL 5200) might not reflect the true column density to the black hole region but rather the column density along the electron-scattered path (compare with §5 of Goodrich 1997). We have also looked for other common properties of our four X-ray detected BALQSOs and none is apparent. For example, they are not all ‘PHL 5200-like’ BALQSOs (see Turnshek et al. 1988).
Aside from the physical constraints placed upon the X-ray column densities in BALQSOs, our results also have important practical implications for future X-ray spectroscopy of these objects. Consider the representative case of PG 1700+518. The ASCA SIS and GIS count rates for this BALQSO are $`<3.2\times 10^{-3}`$ count s<sup>-1</sup>, and its 2–10 keV observed flux is constrained to be $`<3\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Using pimms, this translates into an observed AXAF ACIS-I count rate of $`<9\times 10^{-3}`$ count s<sup>-1</sup>. While this BALQSO may be detectable with AXAF, high-quality spectroscopy of it will be difficult and perhaps impossible (assuming no strong variability). Even in the most optimistic case, where the flux is just below our upper limit, it will take $`110`$ ks to obtain the $`1000`$ counts needed for moderate-quality spectroscopy. (The $`1000`$ count and 10 000 count criteria are those given in §1.5 of the AXAF Proposers’ Guide, and we find that these criteria work well in practice; similar criteria are adopted in figures 29–33 of the XMM Users’ Handbook.) A high-quality ($`\mathrm{10\hspace{0.17em}000}`$ count) spectrum would require an immodest $`\stackrel{>}{}10^6`$ s of AXAF time. For the XMM EPIC-PN, the count rate will be $`<2.6\times 10^{-2}`$ count s<sup>-1</sup>. Spectroscopy may be more tractable in this case, but it will still prove inordinately expensive if the flux is a factor of a few times smaller than our ASCA upper limit. We also comment that some of our implied column density lower limits are close to becoming optically thick to Thomson scattering. If the X-ray column densities in many BALQSOs are optically thick to Thomson scattering, this will make it difficult to improve the above situation by observing at even higher energies.
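The exposure estimates quoted above follow from simple counting; a sketch:

```python
# Exposure needed for ~1000-count (moderate) and ~10000-count (high
# quality) spectra at the count-rate upper limits quoted in the text.
for instrument, rate in [('AXAF ACIS-I', 9.0e-3), ('XMM EPIC-PN', 2.6e-2)]:
    print('%s: %.0f ks for 1000 cts, %.0f ks for 10000 cts'
          % (instrument, 1.0 / rate, 10.0 / rate))
```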
## 5 Acknowledgments
We thank M. Elvis, E. Feigelson, E. Gotthelf, J. Krolik, A. Laor, K. Mukai, D. Schneider and an anonymous referee for helpful discussions. This paper is based upon work supported by NASA grant NAG5-4826 (SCG), the NASA LTSA program (WNB), NASA contract NAS-38252 (RMS), and NASA grants NAG5-3249 and NAG5-3390 (SM).
# Flat FRW Cosmologies with Adiabatic Matter Creation: Kinematic tests
## 1 Flat FRW Equations With “Adiabatic” Matter Creation
Let us now consider the flat FRW line element $`(c=1)`$
$$ds^2=dt^2-R^2(t)(dr^2+r^2d\theta ^2+r^2sin^2\theta d\varphi ^2),$$
(1)
where $`r`$, $`\theta `$, and $`\varphi `$ are dimensionless comoving coordinates and $`R`$ is the scale factor.
In that background, the nontrivial EFE for a fluid endowed with “adiabatic” matter creation and the balance equation for the particle number density can be written as (Prigogine et al. 1989; Calvão, Lima and Waga 1992)
$$8\pi G\rho =3\frac{\dot{R}^2}{R^2},$$
(2)
$$8\pi G(p+p_c)=-2\frac{\ddot{R}}{R}-\frac{\dot{R}^2}{R^2},$$
(3)
$$\frac{\dot{n}}{n}+3\frac{\dot{R}}{R}=\frac{\psi }{n},$$
(4)
where an overdot means time derivative and $`\rho `$, $`p`$, $`n`$ and $`\psi `$ are the energy density, thermostatic pressure, particle number density and matter creation rate, respectively. The creation pressure $`p_c`$ depends on the matter creation rate, and for “adiabatic” matter creation, it assumes the following form (Calvão, Lima and Waga 1992; Lima and Germano 1992)
$$p_c=-\frac{\rho +p}{3nH}\psi ,$$
(5)
where $`H=\dot{R}/R`$ is the Hubble parameter.
As usual in cosmology, the cosmic fluid obeys the “gamma-law” equation of state
$$p=(\gamma -1)\rho ,$$
(6)
where the constant $`\gamma `$ lies on the interval $`0\le \gamma \le 2`$.
Combining Eqs. (2) and (3) with (5) and (6) it is readily seen that the scale factor satisfies the generalized FRW differential equation
$$R\ddot{R}+(\frac{3\gamma _{\ast }-2}{2})\dot{R}^2=0,$$
(7)
where $`\gamma _{\ast }`$ is an effective “adiabatic index” given by
$$\gamma _{\ast }=\gamma (1-\frac{\psi }{3nH}).$$
(8)
To proceed further it is necessary to assume a physically reasonable expression for the matter creation rate $`\psi `$. As can be seen from (4), the dimensionless parameter $`\frac{\psi }{3nH}`$ is the ratio between the two relevant rates involved in the process. When this ratio is very small the creation process can be neglected, and if it is much bigger than unity, we see from (5) that $`p_c`$ becomes meaningless, because it will be much greater than the energy density. A reasonable upper limit of this ratio should be unity ($`\psi =3nH`$), since in this case $`\psi `$ has exactly the value that compensates for the dilution of particles due to expansion. In this work we confine our attention to the simple phenomenological expression (Lima, Germano and Abramo 1996)
$$\psi =3\beta nH,$$
(9)
where $`\beta `$ is smaller than unity, and presumably given by some particular quantum mechanical model for gravitational matter creation. In general, $`\beta `$ must be a function of the cosmic era, or equivalently, of the $`\gamma `$ parameter, which specifies if the universe is vacuum ($`\gamma =0`$), radiation ($`\gamma =\frac{4}{3}`$) or dust ($`\gamma =1`$) dominated. However, for the sake of brevity we denote all of them generically by $`\beta `$, assumed here to be a constant at each phase.
With this choice, the FRW equation for $`R(t)`$ given by (7) can be rewritten as
$$R\ddot{R}+\mathrm{\Delta }\dot{R}^2=0,$$
(10)
the first integral of which is
$$\dot{R}^2=\frac{A}{R^{2\mathrm{\Delta }}},$$
(11)
where $`\mathrm{\Delta }=\frac{3\gamma (1-\beta )-2}{2}`$, and from (2) $`A`$ is a positive constant, which must be determined in terms of the present day quantities. It is worth noticing that for $`\beta \ge 1-\frac{2}{3\gamma }`$, or equivalently, $`\mathrm{\Delta }\le 0`$, the above equations imply that $`\ddot{R}\ge 0`$, thereby leading to power law inflation. In particular, for $`\mathrm{\Delta }=0`$, these universes expand with constant velocity, and are new examples of coasting cosmologies whose dynamic behavior is driven by matter creation. The observational consequences of “coasting cosmologies” generated by exotic “K-matter”, like cosmic strings, have been studied in detail (Gott and Rees 1987; Kolb 1989). All of them are characterized by the fact that the energy density $`\rho \propto R^{-2}`$ and the total pressure $`P_t=-\frac{1}{3}\rho `$ (see equations (12) and (15)).
Using equation (11), it is straightforward to obtain the energy density, the pressures ($`p`$ and $`p_c`$) and the particle number density as functions solely of the scale factor $`R`$ and of the $`\beta `$ parameter. These quantities are given by:
$$\rho =\rho _o(\frac{R_o}{R})^{3\gamma (1-\beta )},$$
(12)
$$p_c=-\beta \gamma \rho _o(\frac{R_o}{R})^{3\gamma (1-\beta )},$$
(13)
$$n=n_o(\frac{R_o}{R})^{3(1-\beta )},$$
(14)
$$P_t=(\gamma _{\ast }-1)\rho =[\gamma (1-\beta )-1]\rho _o(\frac{R_o}{R})^{3\gamma (1-\beta )},$$
(15)
In the above expressions the subscript “o” refers to the present values of the parameters, and the total pressure is $`P_t=p+p_c`$. As expected, for $`\beta =0`$, equations (12)-(15) reduce to those of the standard FRW flat model for all values of the $`\gamma `$ parameter (Kolb and Turner 1990).
The solution of (11) for all values of $`\gamma `$ and $`\beta `$ can be written as
$$R=R_o[1+\frac{3\gamma (1-\beta )}{2}H_o(t-t_o)]^{\frac{2}{3\gamma (1-\beta )}}.$$
(16)
Note also that for $`\gamma >0`$, we can choose $`t_o=2H_o^{-1}/3\gamma (1-\beta )`$, with the above equation assuming a more familiar form, namely:
$$R(t)=R_o[\frac{3\gamma (1-\beta )}{2}H_ot]^{\frac{2}{3\gamma (1-\beta )}}.$$
(17)
In particular, for a “coasting cosmology” driven by matter creation one finds $`R\propto t`$ as it should be. Note also that in the limit $`\beta =0`$, equations (16) and (17) reduce to the well known expressions of the FRW flat model.
## 2 Expressions for the Observational Quantities
In what follows we assume that the present material content of the universe is dominated by a pressureless nonrelativistic gas (dust). Following standard lines we also define the physical parameters $`q_o=-\frac{R\ddot{R}}{\dot{R}^2}|_{t=t_o}`$ (deceleration parameter) and $`H_o=\frac{\dot{R}}{R}|_{t=t_o}`$ (Hubble parameter). From (10) it is readily seen that
$$q_o=\frac{1-3\beta }{2}.$$
(18)
Therefore, for a given value of $`\beta `$, the deceleration parameter $`q_o`$ with matter creation is always smaller than the corresponding one of the FRW flat model. The critical case ($`\beta =\frac{1}{3}`$, $`q_o=0`$) describes a “coasting cosmology”. Curiously, instead of being supported by “K-matter” (Kolb 1989), this kind of model is obtained in the present context for a dust filled universe. It is also interesting that even negative values of $`q_o`$ are allowed for a dust filled universe, since the constraint $`q_o<0`$ can always be satisfied provided $`\beta >1/3`$. These results are in line with recent measurements of the deceleration parameter $`q_o`$ using Type Ia supernovae (Perlmutter et al. 1998, Garnavich et al 1998, Riess et al 1998). Such observations indicate that the universe may be accelerating today ($`q_o<0`$), which corresponds dynamically to a negative pressure term in the EFE. This would also indicate that the universe is much older than a flat model with the usual deceleration parameter $`q_o=0.5`$, and reconcile other recent results (Freedman 1998), pointing to a Hubble parameter $`H_o`$ larger than $`50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> (see discussion below Eq.(21)). To date, only models with a cosmological constant, or the so-called “quintessence” (of which $`\mathrm{\Lambda }`$ is a special case), or still a second dark matter component with repulsive self-interaction have been invoked as being capable of explaining these results (Steinhardt et al. 1997, Cornish and Starkman 1998). In the present context, these prescriptions for alternative cosmologies are replaced by a flat model endowed with an “adiabatic” matter creation process. Before continuing, we need to express the constant A in terms of $`R_o`$ and $`H_o`$. From (11) one finds
$$A=H_o^2R_o^{3(1-\beta )}.$$
(19)
The kinematical relations for distances derived below must be confronted with the observations in order to put limits on the free parameter of the models.
a) Lookback Time-Redshift
For a given redshift $`z`$, the scale function $`R(t_z)`$ is related to $`R_o`$ by $`1+z=\frac{R_o}{R}`$. The lookback time is exactly the time interval required by the universe to evolve between these two values of the scale factor. Inserting the value of $`A`$ given above in the first integral (11), the lookback time-redshift relation can be easily derived and it is given by
$$t_o-t(z)=\frac{2H_o^{-1}}{3(1-\beta )}\left[1-\frac{1}{(1+z)^{\frac{3(1-\beta )}{2}}}\right],$$
(20)
which generalizes the standard FRW flat result (Sandage 1988). In figure 1 we have plotted the lookback time as a function of the redshift for some selected values of $`\beta `$.
Taking the limit $`z\to \infty `$ in (20) the present age of the universe (the extrapolated time back to the bang) is
$$t_o=\frac{2H_o^{-1}}{3(1-\beta )},$$
(21)
which reduces for $`\beta =0`$ to the same expression of the standard dust model (Kolb and Turner, 1990).
Estimates of the Hubble expansion parameter from a variety of methods are now converging to $`h\equiv (H_o/100\mathrm{k}\mathrm{m}/\mathrm{sec}/\mathrm{Mpc})=0.7\pm 0.1`$ (Freedman 1998). Assuming no matter creation ($`\beta =0`$), the lower and upper limits of this value imply that the expansion age of a dust-filled flat universe, which is theoretically favored by inflation, would be either $`10.8\times 10^9`$ years or $`8.2\times 10^9`$ years. These results are in direct contrast to the measured ages of some stars and stellar systems, believed to be at least $`(12-16)\times 10^9`$ years old or even older if one adds a realistic incubation time (Bolte and Hogan 1995, Pont et al. 1998). As can easily be seen from (21), the matter creation process helps because for a given Hubble parameter $`H_o`$ the expansion age $`t_o`$ is always larger than $`\frac{2}{3}H_o^{-1}`$, which is the age of the universe for the FRW flat model. It is exactly $`H_o^{-1}`$ for a coasting cosmology ($`\beta =\frac{1}{3}`$), and greater than $`H_o^{-1}`$ for $`\beta >\frac{1}{3}`$. In this way, one may conclude that the matter creation ansatz (9) changes the predictions of standard cosmology, thereby alleviating the problem of reconciling observations with the inflationary scenario. It is interesting that matter creation increases the dimensionless parameter $`H_ot_o`$ while preserving the overall expanding FRW behavior.
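As an illustration, the age (21) is easily evaluated; a minimal sketch in Python (only the unit-conversion constants are added here):

```python
KM_PER_MPC = 3.0857e19          # km in one megaparsec
SEC_PER_GYR = 3.156e16          # seconds in 10^9 yr

def age_gyr(h, beta):
    """t_o = (2/3) H_o**-1 / (1 - beta), eq. (21), in Gyr."""
    h0_inv_gyr = KM_PER_MPC / (100.0 * h) / SEC_PER_GYR
    return 2.0 * h0_inv_gyr / (3.0 * (1.0 - beta))

for beta in (0.0, 1.0 / 3.0, 0.5):
    print('beta = %.2f: t_o = %.1f Gyr (h = 0.7)' % (beta, age_gyr(0.7, beta)))
```

For $`h=0.7`$ this gives 9.3, 14.0 and 18.6 Gyr for $`\beta =0`$, $`\frac{1}{3}`$ and $`\frac{1}{2}`$ respectively.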
b) Luminosity Distance-Redshift
The luminosity distance of a light source is defined as $`d_l^2=\frac{L}{4\pi l}`$, where $`L`$ and $`l`$ are the absolute and apparent luminosities respectively. In the standard FRW metric (1) it takes the form (Sandage 1961; Weinberg 1972)
$$d_l=R_or(z)(1+z),$$
(22)
where $`r(z)`$ is the radial coordinate distance of the object at light emission. Starting from (1), this quantity can be easily derived as follows: since a light signal satisfies the geodesic equation of motion $`ds^2=0`$ and geodesics intersecting $`r_o=0`$ are lines of constant $`\theta `$ and $`\varphi `$, the geodesics equation can be written as
$$\int _0^rdr=\int _{R(t)}^{R_o}\frac{dR(t^{\prime })}{\dot{R}(t^{\prime })R(t^{\prime })}.$$
(23)
Now, substituting (11) with the value of $`A`$ given by (19) in the above equation, the radial coordinate distance as function of redshift is given by
$$r(z)=\frac{2}{(1-3\beta )R_oH_o}\left[1-(1+z)^{\frac{3\beta -1}{2}}\right],$$
(24)
and therefore, the luminosity distance-redshift relation is written as
$$H_od_l=\frac{2}{(1-3\beta )}\left[(1+z)-(1+z)^{\frac{1+3\beta }{2}}\right].$$
(25)
As one may check, taking $`\beta =0`$, the above expression reduces to
$$H_od_l=2\left[(1+z)-(1+z)^{\frac{1}{2}}\right],$$
(26)
which is the usual FRW result (Weinberg 1972). On the other hand, expanding (25) for small redshifts after some algebra one finds
$$H_od_l=z+\frac{1}{2}(1-\frac{1-3\beta }{2})z^2+\dots ,$$
(27)
which depends explicitly on the matter creation $`\beta `$ parameter. However, inserting (18) we recover the usual FRW expansion for small redshifts, which depends only on the effective deceleration parameter $`q_o`$ (Weinberg 1972; Kolb and Turner 1990). The luminosity distance as a function of the redshift is shown in figure $`2`$. As expected, in these diagrams different models have the same behavior at $`z<<1`$ (Hubble law), and the possible discrimination among different models comes from observations at large redshifts ($`z\ge 1`$). However, it is usually believed that on such scales evolutionary effects cannot be neglected. The range of possible data at the limiting $`z`$, for which evolutionary effects are not important, is indicated by the data point and error bar (Kristian, Sandage and Westphal 1978).
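For completeness, equation (25) is straightforward to evaluate numerically; the sketch below also handles the coasting limit $`\beta \frac{1}{3}`$, where (25) reduces to $`H_od_l=(1+z)\mathrm{ln}(1+z)`$:

```python
import math

def hod_l(z, beta):
    """Dimensionless luminosity distance H_o d_l of eq. (25)."""
    eps = 1.0 - 3.0 * beta
    if abs(eps) < 1.0e-8:                       # coasting limit, beta -> 1/3
        return (1.0 + z) * math.log(1.0 + z)
    return (2.0 / eps) * ((1.0 + z) - (1.0 + z) ** ((1.0 + 3.0 * beta) / 2.0))

for beta in (0.0, 1.0 / 3.0, 0.5):
    print('beta = %.2f: H_o d_l(z=1) = %.3f' % (beta, hod_l(1.0, beta)))
```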
c) Angular Diameter-Redshift
The angular size $`\theta `$ of an object is an extremely sensitive function of the cosmic dynamics. In particular, the apparent continuity of the $`\theta (z)`$ relation for galaxies and quasars is also believed to be a strong support to the cosmological nature of the redshifts (Kapahi 1987). Here we are interested in angular diameters of light sources described as rigid rods and not isophotal diameters. These quantities are naturally different, because in an expanding world the surface brightness varies with the distance (for more details see Sandage 1988).
Let $`D`$ be the intrinsic size of a source located at $`r(z)`$, assumed independent of the redshift and perpendicular to the line of sight. If it emits photons at time $`t_1`$ that at time $`t_o`$ arrive to an observer located at $`r=0`$, its angular size at the moment of reception is defined by (Sandage 1961)
$$\theta =\frac{D(1+z)}{R_or(z)}.$$
(28)
Inserting the expression (24) for $`r(z)`$ into (28) we find
$$\theta =\frac{DH_o(1-3\beta )(1+z)^{\frac{3(1-\beta )}{2}}}{2\left[(1+z)^{\frac{1-3\beta }{2}}-1\right]}.$$
(29)
A log-log plot of angular size versus redshift is shown in figure $`3`$ for selected values of $`\beta `$.
For all models, the angular size initially decreases with increasing $`z`$, reaches its minimum value at a given $`z_c`$, and eventually begins to increase for fainter magnitudes. This generic behavior for an expanding universe was predicted long ago in the context of the standard model (Hoyle 1959). It may qualitatively be understood in terms of an expanding space: the light observed today from a source at high $`z`$ was emitted when the object was closer. How does this effect depend on the $`\beta `$ parameter? As can be seen from (29), the minimum occurs at $`z_c(\beta )=[\frac{3(1-\beta )}{2}]^{\frac{2}{1-3\beta }}-1`$. Hence, the minimum persists in the presence of adiabatic matter creation, and is pushed in the right direction, that is, it is displaced to higher redshifts as the $`\beta `$ parameter is increased. As expected, for $`\beta =0`$ one finds $`z_c=\frac{5}{4}`$, which is the standard result for a dust filled FRW flat universe. It is also convenient to consider the limit of small redshifts in order to clarify the role played by $`\beta `$. Expanding (29) for small $`z`$ we have
$$\theta =\frac{DH_o}{z}\left[1+\frac{1}{2}(3+\frac{1-3\beta }{2})z+\dots \right].$$
(30)
Hence, “adiabatic” matter creation as modelled here also requires an angular size decreasing as the inverse of the redshift for small $`z`$. However, the second order term is a function of the $`\beta `$ parameter. Its overall effect is to make the angular size depart from the Euclidean behavior ($`\theta \propto z^{-1}`$) more slowly than in the corresponding FRW model (see fig.3). In terms of $`q_o`$, inserting (18) into (30) one readily obtains
$$\theta =\frac{DH_o}{z}\left[1+\frac{1}{2}(3+q_o)z+\dots \right],$$
(31)
which is formally the same as the FRW result for small redshifts (Sandage 1988). Note that even in this limit, constraints on the deceleration parameter from the data are equivalent to placing limits on the values of $`\beta `$ (see (18)).
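The location of the angular-size minimum is a convenient summary statistic; a sketch evaluating (29) and $`z_c(\beta )`$ (valid for $`\beta \frac{1}{3}`$, where the expressions are regular):

```python
def theta_over_dho(z, beta):
    """theta in units of D*H_o, from eq. (29)."""
    num = (1.0 - 3.0 * beta) * (1.0 + z) ** (1.5 * (1.0 - beta))
    den = 2.0 * ((1.0 + z) ** (0.5 * (1.0 - 3.0 * beta)) - 1.0)
    return num / den

def z_min(beta):
    """Redshift z_c of the angular-size minimum."""
    return (1.5 * (1.0 - beta)) ** (2.0 / (1.0 - 3.0 * beta)) - 1.0

for beta in (0.0, 0.1, 0.2):
    print('beta = %.1f: z_c = %.2f' % (beta, z_min(beta)))   # 1.25, 1.36, 1.49
```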
d) Number Counts
Let us now derive the galaxy number per redshift interval in the presence of adiabatic matter creation. We first remark that although matter creation modifies the evolution equation driving the amplification of small perturbations, and hence the usual adiabatic treatment of galaxy formation, the created matter is smeared out and does not change the total number of sources present in the nonlinear regime. In other words, the number density of galaxies already formed scales as $`R^{-3}`$.
Let $`n_g(z,L)dL`$ be the proper concentration of sources at redshift $`z`$ with absolute luminosity between $`L`$ and $`L+dL`$. The total number of galaxies $`N_g(z)`$ is proportional to the the comoving volume
$$dN_g(z)=n_gdLdV_c=4\pi n_gr^2drdL.$$
(32)
Now, by using that $`\frac{dt}{R(t)}=\frac{dR}{R\dot{R}}=dr`$, we find that
$$\frac{dN_g}{4\pi n_gdzdL}=\frac{(R_oH_o)^{-1}r(z)^2}{\left[(1+z)^{3(1-\beta )/2}\right]},$$
(33)
where $`n_g(z,L)=n_o(L)(1+z)^3`$.
On the other hand, since the radial coordinate r(z) is given by eq.(24) it follows that the expression for number-counts can be written as
$$\frac{(H_oR_o)^3dN_g}{4\pi n_oz^2dzdL}=\frac{\delta ^2\left[1-(1+z)^{-\frac{(1-3\beta )}{2}}\right]^2}{z^2(1+z)^{\frac{3(1-\beta )}{2}}},$$
(34)
where $`\delta =\frac{2}{(1-3\beta )}`$. For small redshifts, we have that
$$\frac{(H_oR_o)^3dN_g}{4\pi n_oz^2dzdL}=1-2\left[\frac{(1-3\beta )}{2}+1\right]z+\dots .$$
(35)
The number count-redshift diagram for a dust-filled model with “adiabatic” matter creation is shown in figure $`4`$, for the indicated values of $`\beta `$. Table 1 summarizes the limits on $`\beta `$ obtained from each kinematical test.
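A quick numerical check of (34) against the small-redshift expansion (35) (note that $`2[\frac{(1-3\beta )}{2}+1]=3(1-\beta )`$) can be done as follows:

```python
def counts_kernel(z, beta):
    """(H_o R_o)**3 dN_g / (4 pi n_o z**2 dz dL), eq. (34)."""
    delta = 2.0 / (1.0 - 3.0 * beta)
    term = 1.0 - (1.0 + z) ** (-0.5 * (1.0 - 3.0 * beta))
    return delta ** 2 * term ** 2 / (z ** 2 * (1.0 + z) ** (1.5 * (1.0 - beta)))

z, beta = 0.01, 0.2
print(counts_kernel(z, beta))            # 0.9763...
print(1.0 - 3.0 * (1.0 - beta) * z)      # 0.976, eq. (35) to first order
```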
## 3 Conclusion
The cosmological principle (homogeneity and isotropy of space) defines the shape of the line element up to a spatial scale function, which must be time dependent from the cosmological nature of the redshifts. As discussed here, the expanding “postulate” and its main consequences may also be made compatible with a cosmic fluid endowed with adiabatic matter creation. The similarities and differences among universe models with matter creation as described in the new thermodynamic approach and the conventional matter conserved FRW model have been analysed from both formal and observational viewpoints. The rather slight changes introduced by the matter creation process, which is quantified by the $`\beta `$ parameter, provide a reasonable fit to some cosmological data. Kinematic tests like luminosity distance, angular diameter and number-counts versus redshift relations perceptibly constrain the matter creation parameter (see table 1). For flat models with $`\beta \ne 0`$, the age of the universe is always greater than in the corresponding FRW model ($`\beta =0`$). More important still, the deceleration parameter $`q_o`$ may be negative as suggested by recent type Ia supernovae observations. In this respect, the models studied here are alternatives to universes dominated by a cosmological constant or “quintessence”.
The angular size versus redshift curves have the minimum displaced to higher values of $`z`$, thereby alleviating the problem of reconciling the angular size data from galaxies and quasars at intermediate and large redshifts. It is also interesting that all the theoretical and observational results previously obtained within a scenario driven by $`K`$-matter (Kolb 1989) are reproduced for a dust-filled universe with $`\beta =\frac{1}{3}`$.
We also stress that in spite of these important physical consequences, the present day matter creation rate, $`\psi _o=3n_oH_o\simeq 10^{-16}`$ nucleons $`cm^{-3}yr^{-1}`$, is nearly the same rate predicted by the steady-state universe (Hoyle, Burbidge and Narlikar 1993). This matter creation rate is presently far below detectable limits.
The constraints on the $`\beta `$ parameter should be compared with the corresponding ones using the predictions of light element abundances from primordial nucleosynthesis. In fact, the important observational quantity for nucleosynthesis is the baryon to entropy ratio. In these models the temperature scale-factor relationship and the entropy density are modified; therefore one may expect significant implications for the nucleosynthesis scenario.
Finally, we remark that it is not so difficult to widen the scope of the kinematic results derived here to include curvature effects as well as a non-zero cosmological constant. In particular, concerning the “age problem”, even closed universes seem to be compatible with the ages of the oldest globular clusters, when the value of the creation parameter is sufficiently high. Further details about kinematic tests in closed and open universes with matter creation will be published elsewhere (Alcaniz and Lima 1999).
* One of the authors (JASL) is grateful to Raul Abramo for helpful discussions. This work was partially supported by the project Pronex/FINEP (No. 41.96.0908.00) and Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Brazilian Research Agency).
# The structure of the central disk of NGC 1068: a clumpy disk model
Pawan Kumar
Institute for Advanced Study, Princeton, NJ 08540
Abstract
NGC 1068 is one of the best studied Seyfert II galaxies, for which the blackhole mass has been determined from the Doppler velocities of water maser. We show that the standard $`\alpha `$-disk model of NGC 1068 gives disk mass between the radii of 0.65 pc and 1.1 pc (the region from which water maser emission is detected) to be about 7x10<sup>7</sup> M<sub>⊙</sub> (for $`\alpha =0.1`$), more than four times the blackhole mass, and the Toomre Q-parameter for the disk is ∼0.001. This disk is therefore highly self-gravitating and is subject to large-amplitude density fluctuations. We conclude that the standard $`\alpha `$-viscosity description for the structure of the accretion disk is invalid for NGC 1068.
In this paper we develop a new model for the accretion disk. The disk is considered to be composed of gravitationally bound clumps; accretion in this clumped disk model arises because of gravitational interaction of clumps with each other and the dynamical frictional drag exerted on clumps from the stars in the central region of the galaxy. The clumped disk model provides a self-consistent description of the observations of NGC 1068. The computed temperature and density are within the allowed parameter range for water maser emission, and the rotational velocity in the disk falls off as $`r^{-0.35}`$.
Subject heading: accretion disks – galaxies: individual (NGC 1068)
Alfred P. Sloan Fellow
1. Introduction
The discovery of emission from water masers in the central region of Seyfert II galaxy NGC 1068 provides an opportunity to study the properties of the associated accretion disk. The mass of the central black hole is estimated to be 1.5x10<sup>7</sup> M<sub>⊙</sub> (Greenhill & Gwinn, 1997) from the measurement of the Doppler velocities of the masing spots.
The nucleus of NGC 1068 is heavily obscured, and the luminosity of the central source is determined from the observed flux by modeling the dust obscuration and the scattering of photons by ionized gas in the nuclear region. According to a careful analysis carried out by Pier et al. (1994) the bolometric luminosity of NGC 1068 is estimated to be about 8x10<sup>44</sup> erg s<sup>-1</sup>; the luminosity is perhaps uncertain by a factor of a few.
We use the blackhole mass and the bolometric luminosity to construct the standard viscous disk model for NGC 1068 (§2) and find that the disk is highly self-gravitating thereby rendering the $`\alpha `$-disk model inapplicable. The effect of the irradiation of the disk from the central source does not modify this conclusion (§2b).
In section 3 we present a new model for the disk in NGC 1068 composed of gas clumps. The accretion in this case arises as a result of gravitational interaction amongst clumps (§3.1); the structure of the clumped-disk model (CD model) of NGC 1068 is described in §3.2.
The velocity of the masing spots appears to be falling off with distance from the center as r<sup>-0.35</sup> (Greenhill & Gwinn, 1997), which is less rapid than the Keplerian power law of $`-0.5`$. One possible reason for this could be that the disk of NGC 1068 is sufficiently massive so as to modify the rotation curve. However, we show in §3 that this is not so. Another possibility is that the flattening of the rotation curve is due to a central star cluster. The influence of the star cluster on the accretion rate and the disk structure is discussed in section 3.2.
2. Standard thin disk model for NGC 1068
The standard viscous accretion disk model for NGC 1068, including the irradiation of the disk from the central source, is presented below. Throughout this paper we take the mass of the blackhole at the center of NGC 1068 to be 1.5x10<sup>7</sup> M<sub>⊙</sub> (Greenhill & Gwinn, 1997), and the bolometric luminosity to be 8x10<sup>44</sup> erg s<sup>-1</sup> (Pier et al., 1994).
The theory of thin accretion disk is well developed and is described in a number of review articles and monographs e.g. Frank, King & Raine (1992). When the flux intercepted by disk is not small compared to the local energy generation rate, such as that expected for the masing disk of NGC 1068, the incident flux must be included in determining the thermal structure of the disk.
The fractional luminosity intercepted by the disk depends on the disk geometry, the scattering of radiation by the coronal gas etc. For instance if the dominant source of radiation intercepted by the disk were the scattered radiation from an extended corona then we might expect the incident flux at the disk to be roughly uniform. On the other hand for a flaring or a warped disk the radiation intercepted from the source directly might dominate, and the incident radiation in this case depends on the inclination angle of the normal to the disk. We adopt the second model to analyze the disk structure; it is straightforward to consider other possibilities, but we do not pursue these since the main result of this section viz., the standard $`\alpha `$-disk model for NGC 1068 is highly unstable to gravitational perturbations turns out to be independent of whether we include irradiation and radiative forces in the calculation of disk structure.
Let the luminosity of the central source be $`L_c`$. The flux incident at the disk, $`F_{in}`$, a distance $`r`$ from the central source is taken to be
$$F_{in}(r)=\frac{L_c}{4\pi r^2}\frac{dH}{dr}=\frac{\beta HL_c}{4\pi r^3},$$
$`(1)`$
where $`H`$ is the vertical scale height, the constant factor $`\beta `$ is defined by $`dH/dr=\beta H/r`$, and determines the fraction of the flux that is intercepted by the disk. The upward energy flux due to mass accretion rate $`\dot{M}`$ is
$$F_{up}(r)=\frac{3}{8\pi }\mathrm{\Omega }^2\dot{M},$$
$`(2)`$
and the ratio of these fluxes $`F_{in}/F_{up}\sim ϵ\beta H/R_{sc}`$, where $`ϵ\simeq 0.1`$ is the efficiency of the conversion of the rest mass to energy by the central source, and $`R_{sc}`$ is the Schwarzschild radius of the blackhole, $`\mathrm{\Omega }=(GM_t/r^3)^{1/2}`$ is the angular rotation speed, and $`M_t`$ is the mass contained inside the radius $`r`$. The effective temperature of the disk at $`r`$ is given by
$$\sigma T_{eff}^4(r)=F_{up}+F_{in}F_t.$$
$`(3)`$
The temperature at the mid-plane of the disk ($`T_c`$) can be calculated by considering the first moment of the radiative transfer equation, and making use of the Eddington approximation, and is given by
$$T_c^4=\frac{3}{4}T_{eff}^4\left(\frac{F_{up}}{F_t}\right)\left[\tau _0+\frac{4}{3}\frac{F_t}{F_{up}}-\frac{2}{3}\right],$$
$`(4)`$
where $`\tau _0=\kappa \mathrm{\Sigma }`$ is the optical depth of the disk. The opacity $`\kappa `$, for masing disks, with $`H_2`$ density and temperature in the range 10<sup>8</sup>–10<sup>10</sup> cm<sup>-3</sup> and 100–1000 K respectively is dominated by metal grains and is given by (Bell & Lin 1994)
$$\kappa \simeq 0.1T^{1/2}\mathrm{cm}^2\mathrm{g}^{-1}.$$
$`(5)`$
The temperature in the disk mid-plane is not sensitive to the height where the energy is deposited so long as the optical depth of the disk to the incident radiation is much greater than one.
The solution to the hydrostatic equilibrium equation in the vertical direction, which includes the radiation pressure at the disk surface $`(2F_{in}+F_{up})/c`$, yields
$$\rho _cc_s^2\simeq \mathrm{\Omega }^2\mathrm{\Sigma }H+\frac{2F_t}{c}\left(1-\frac{\tau _0F_{up}}{2F_t}\right).$$
$`(6)`$
where $`\mathrm{\Sigma }`$ is the surface mass density, $`c_s`$ & $`\rho _c`$ are the sound speed and gas density in the mid plane of the disk. Taking the effective viscosity in the disk to be $`\alpha c_sH`$ the mass accretion rate $`\dot{M}`$ is given by
$$\dot{M}=\frac{2\pi \alpha \mathrm{\Sigma }c_s^2}{\mathrm{\Omega }}.$$
$`(7)`$
Equations (4), (6) and (7) are three equations in three unknowns viz. $`\mathrm{\Sigma }`$, $`H`$ and $`T_c`$, which are solved numerically and the results are shown in fig. 1. Also shown in fig. 1, for comparison, are the solutions for $`\beta =0`$ i.e. when the irradiance of the disk is neglected. The solution in the latter case can be obtained analytically and is given below
$$T_c=200\alpha ^{-2/9}\dot{m}^{4/9}M_7^{7/9}r^{-1}\mathrm{K},$$
$`(8)`$
$$n=10^{10}\alpha ^{-2/3}\dot{m}^{1/3}M_7^{5/6}r^{-3/2}\mathrm{cm}^{-3},$$
$`(9)`$
and the Toomre-Q parameter for the stability of the disk is
$$Q=\frac{\mathrm{\Omega }c_s}{\pi G\mathrm{\Sigma }}=2.4\times 10^{-3}\alpha ^{2/3}\dot{m}^{-1/3}M_7^{1/6}r^{-3/2},$$
$`(10)`$
where $`\dot{m}`$ is the accretion rate in terms of the Eddington rate, $`M_7=M/(10^7M_{\odot })`$, and $`r`$ is the radial distance from the center in parsecs. For the NGC 1068 system $`M_7\simeq 1.5`$ and $`\dot{m}\simeq 0.4`$. The inner and the outer radii of the masing disk are at 0.65pc and 1.1pc respectively.
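A direct evaluation of the scalings (8)–(10) over the masing annulus makes the instability explicit; a minimal sketch, using the NGC 1068 parameters quoted above:

```python
def alpha_disk(r_pc, alpha=0.1, mdot=0.4, m7=1.5):
    """Analytic alpha-disk scalings of eqs. (8)-(10) (no irradiation)."""
    tc = 200.0 * alpha ** (-2.0 / 9) * mdot ** (4.0 / 9) * m7 ** (7.0 / 9) / r_pc
    n = 1.0e10 * alpha ** (-2.0 / 3) * mdot ** (1.0 / 3) * m7 ** (5.0 / 6) * r_pc ** -1.5
    q = 2.4e-3 * alpha ** (2.0 / 3) * mdot ** (-1.0 / 3) * m7 ** (1.0 / 6) * r_pc ** -1.5
    return tc, n, q

for r in (0.65, 1.1):            # inner and outer masing radii in pc
    tc, n, q = alpha_disk(r)
    print('r = %.2f pc: T_c = %.0f K, n = %.1e cm^-3, Q = %.1e' % (r, tc, n, q))
```

Over the whole annulus this gives $`Q\sim 10^{-3}`$, confirming that the disk is deep in the self-gravitating regime.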
For the numerical results shown in fig. 1 we took the value of $`\beta `$ such that the disk intercepts about 50% of the flux from the central source. In this case the disk temperature is dominated by the incident flux, and the temperature in the disk midplane is close to $`T_{eff}\simeq 511`$K at $`r=1`$pc. The structure of the disk ($`\mathrm{\Sigma }`$, $`Q`$, $`H`$, $`T_c`$) in this case is almost independent of the opacity of the gas and consequently it is unaffected by any uncertainty in $`\kappa `$. The molecular mass of the masing disk, when irradiation is included, is $`7.0\times 10^7M_{\odot }`$ (for $`\alpha =0.1`$), and the $`Q`$ is about 10<sup>-3</sup> (see fig. 1). For the irradiation dominated disk, the disk mass decreases with $`\alpha `$ as $`\alpha ^{-0.9}`$, and the $`Q`$ increases as $`\alpha ^{0.9}`$. Since $`Q`$ should be greater than one for stability, we see that the masing disk of NGC 1068 is highly unstable to gravitational perturbation. A decrease in the irradiation flux makes the disk more unstable.
The main conclusion of this section is that the standard $`\alpha `$-disk model for the NGC 1068 masing disk is inconsistent. The disk according to this model is highly unstable, and should fragment into clumps. A consistent analysis in this case should include the effect of self-gravity, and clump interaction, which is described below.
3. A model for clumpy self-gravitating disks
A disk with a small value of ($`1-Q`$) is likely to develop spiral structure which can transport angular momentum outward. However, when $`Q\ll 1`$, as in the case of NGC 1068 (see §2), the disk is likely to break up into clumps, and the accretion rate is determined by the gravitational interaction among these clumps. Accretion in self gravitating disks was considered by Paczynski (1978), Lin & Pringle (1987), Shlosman & Begelman (1987), Shlosman et al. (1990), and has been investigated more recently by Kumar (1998) in some detail for clumpy disks. In §3.1 we derive the results that we need to construct a model for clumpy disks (CDs), and its application to NGC 1068 is discussed in §3.2.
§3.1 Velocity dispersion and accretion rate in a clumpy disk
Consider a disk consisting of clumps of size $`l_c`$, and take the mean separation between clumps to be $`d_c`$. The tidal radius of a cloud, the distance to which the gravity of the cloud dominates over the gravity of the central mass, is $`d_t\sim r(m_c/M_t)^{1/3}`$, where $`m_c\sim \sigma _cl_c^2`$ is the cloud mass, $`\sigma _c`$ is the surface mass density of the cloud and $`M_t`$ is the total axisymmetrically distributed mass contained inside the radius $`r`$.
The change to the displacement amplitude of the epicyclic oscillation of a clump as a result of gravitational interaction with another clump, with impact parameter $`d\stackrel{>}{}d_t`$, can be shown to be $`\sim m_cr^3/(M_td^2)`$. Gravitational encounters with $`d\stackrel{<}{}d_t`$ are almost adiabatic and these leave the epicyclic energy of clumps unchanged (two particles moving on circular orbits merely interchange their trajectories in such an encounter). Thus the dominant gravitational interactions for exciting epicyclic oscillation are those with impact parameter of $`\sim d_t`$, and the change to the epicyclic amplitude in such an encounter is $`\delta r\sim m_cr^3/(M_td_t^2)\sim d_t`$, or
$$\delta r\sim d_c^{2/3}r^{1/3}\left(\frac{M_r}{\pi M_t}\right)^{1/3},$$
$`(11)`$
where
$$M_r=\frac{\pi r^2\sigma _cl_c^2}{d_c^2}\equiv \pi r^2\overline{\sigma }(r),$$
$`(12)`$
and $`\overline{\sigma }(r)`$ is the mean surface mass density of the disk at radius $`r`$.
The mean time interval for a clump to undergo a strong gravitational interaction with another clump, i.e. with impact parameter $`\sim d_t`$, is $`t_{int}\sim \mathrm{\Omega }^{-1}(d_c/d_t)^2`$, and so long as the cloud size ($`l_c`$) is not much smaller than $`d_c`$ the timescale for physical collision between clumps is also of the same order, i.e. clouds undergo physical collision after having undergone one strong gravitational encounter on the average, and so the cloud velocity dispersion is $`\sim \mathrm{\Omega }\delta r`$ or
$$\overline{v}_r\sim (r\mathrm{\Omega })\left[\frac{M_r}{\pi M_t}\right]^{1/3}\left[\frac{d_c}{r}\right]^{2/3}.$$
$`(13)`$
The expression for $`\overline{v}_r`$ is similar to that given by Gammie et al. (1991) for the velocity dispersion of molecular clouds in the Galaxy.
We assume that the kinetic energy of the relative motion of colliding clouds is dissipated and their orbits are circularized. Colliding clumps might coalesce so long as the size of the resulting object does not exceed the maximum length for gravitational instability, $`l_{max}\sim rM_r/M_t`$; clouds of larger size are susceptible to fragmentation by the rotational shear of the disk, which limits their size.
The characteristic time for a clump to fall to the center can be estimated using equation (11) and is given by
$$t_r\sim \mathrm{\Omega }^{-1}\left(\frac{d_c}{r}\right)^2\left(\frac{M_t}{m_c}\right)^{4/3},$$
$`(14)`$
and the associated average mass accretion rate is
$$\dot{M}\sim \mathrm{\Omega }(r)M_r\left(\frac{M_r}{\pi M_t}\right)^{4/3}\left(\frac{d_c}{r}\right)^{2/3}.$$
$`(15)`$
This result is applicable so long as the density contrast between the clump and the inter-clump medium is about a factor of two or larger, and $`l_c`$ is of order $`d_c`$.
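To make these scalings concrete, a minimal numerical sketch is given below (Python, cgs units). The function names are illustrative, and all order-unity prefactors are set to one, an assumption consistent with the $`\sim `$ signs in equations (13)–(15).

```python
import numpy as np

G = 6.674e-8  # Newton's constant, cgs

def omega(M_t, r):
    """Orbital frequency for total mass M_t [g] inside radius r [cm]."""
    return np.sqrt(G * M_t / r**3)

def v_disp(M_r, M_t, d_c, r):
    """Clump velocity dispersion, eq. (13)."""
    return r * omega(M_t, r) * (M_r / (np.pi * M_t))**(1/3) * (d_c / r)**(2/3)

def t_infall(m_c, M_t, d_c, r):
    """Time for a clump to random-walk to the center, eq. (14)."""
    return (d_c / r)**2 * (M_t / m_c)**(4/3) / omega(M_t, r)

def mdot(M_r, M_t, d_c, r):
    """Accretion rate, eq. (15); valid for l_c ~ d_c."""
    return omega(M_t, r) * M_r * (M_r / (np.pi * M_t))**(4/3) * (d_c / r)**(2/3)
```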
For $`l_c\ll d_c`$, clouds undergo several gravitational encounters with other clouds before suffering a physical collision, and during this time the random velocity of the clouds continues to increase. The rate of increase of the velocity dispersion in two-body gravitational interactions is given by
$$\frac{dv^2}{dt}\sim \frac{G^2m_cM_r\mathrm{\Omega }}{r^2v^2}.$$
$`(16)`$
We assume that the scattering gives rise to an isotropic velocity dispersion, so the scale height for the vertical distribution of clumps is $`\sim v/\mathrm{\Omega }`$. The time for a clump to undergo a physical collision, assumed to be completely inelastic, is $`t_{col}\sim \mathrm{\Omega }^{-1}(d_c/l_c)^2`$, and therefore the velocity dispersion is
$$\overline{v}^2\sim \frac{Gm_c}{l_c}.$$
$`(17)`$
This corresponds to the Safronov number being equal to one. Since the cloud collision time is much greater than $`\mathrm{\Omega }^{-1}`$, the effective viscosity is suppressed by a factor $`(\mathrm{\Omega }t_{col})^2`$ compared to the case where the collision frequency is greater than $`\mathrm{\Omega }`$ (cf. Goldreich & Tremaine 1978), i.e. $`\nu _e\sim \overline{v}_r^2t_{col}^{-1}/\mathrm{\Omega }^2`$, and thus the mass accretion rate is given by
$$\dot{M}\sim M_r\mathrm{\Omega }\left(\frac{M_r}{M_t}\right)\left(\frac{l_c}{r}\right)\sim \mathrm{\Omega }M_r\left(\frac{M_r}{M_t}\right)^{4/3}\left(\frac{d_c}{r}\right)^{2/3}\left(\frac{l_c}{d_t}\right).$$
$`(18)`$
This accretion rate is smaller than that given by equation (15) by a factor of $`(d_t/l_c)`$.
We can parameterize the effect of the unknown clump size distribution and separation on the accretion rate and the velocity dispersion by a dimensionless parameter $`\eta `$, and rewrite the equations for $`\dot{M}`$ and $`\overline{v}_r`$ in the following form for future use:
$$\dot{M}=\eta \mathrm{\Omega }(r)M_r(r)\left(\frac{M_r}{\pi M_t}\right)^2,$$
$`(19)`$
and
$$\overline{v}_r=\eta r\mathrm{\Omega }(r)\left(\frac{M_r}{\pi M_t}\right).$$
$`(20)`$
For $`l_c\sim d_c\sim l_{max}`$, we see from equations (16) and (18) that $`\eta \sim 1`$. The effective kinematic viscosity in this case is approximately $`l_{max}^2\mathrm{\Omega }\sim Q^{-2}H^2\mathrm{\Omega }`$ (where $`Q\sim H/l_{max}`$ is the Toomre $`Q`$-parameter and $`H\sim \overline{v}_r/\mathrm{\Omega }`$), the same as in the ansatz suggested by Lin & Pringle (1987) for a self-gravitating disk. We note that the relative velocity of collision between clouds is smaller than their orbital speed by a factor of the ratio of the total mass to the molecular mass, and as long as the cooling time for the post-collision gas is less than the time between collisions, the clouds are not smeared out by heating and the disk remains clumpy.
§3.2 Application to NGC 1068
Equation (19) can be recast in the following form:
$$\frac{M_r^2}{M_r+M_c}=\psi ,$$
$`(21)`$
where
$$\psi \equiv \frac{\pi ^2}{\eta ^{2/3}}\frac{r\dot{M}^{2/3}}{G^{1/3}}.$$
$`(22)`$
The solution to this equation in the case where the central mass dominates is
$$M_r\approx \psi ^{1/2}\left(M_c^{1/2}+\frac{\psi ^{1/2}}{2}\right),$$
$`(23)`$
and from equation (20) we find the velocity dispersion of clouds to be
$$\overline{v}_r\approx \eta ^{2/3}[G\dot{M}]^{1/3}.$$
$`(24)`$
For NGC 1068, the luminosity is $`\approx 8\times 10^{44}\mathrm{erg}\mathrm{s}^{-1}`$, $`M_c\approx 1.5\times 10^7M_{\odot }`$, and $`\dot{M}\approx 8\times 10^{24}\mathrm{g}\mathrm{s}^{-1}`$. Substituting these numbers in the above equations we find
$$M_r\approx 1.3\times 10^6M_{\odot }r^{1/2}\eta ^{-1/3},$$
$`(25)`$
and
$$\overline{v}_r\approx 8\mathrm{km}\mathrm{s}^{-1}\eta ^{2/3},$$
$`(26)`$
where $`r`$ is measured in parsecs. Note that the disk mass in this model is more than an order of magnitude smaller than in the standard $`\alpha `$-disk model discussed in §2. The velocity dispersion of the clumps is larger than the sound speed, and is of order the observed scatter in the velocities of the masing spots in NGC 1068.
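As an illustrative numerical check (not part of the original analysis), the sketch below solves the quadratic of equation (21) exactly and evaluates equation (24) for the NGC 1068 parameters just quoted; it reproduces equations (25) and (26) to within the order-unity accuracy of the underlying scalings.

```python
import numpy as np

G, Msun, pc = 6.674e-8, 1.989e33, 3.086e18   # cgs

M_c  = 1.5e7 * Msun   # central black hole mass
Mdot = 8e24           # accretion rate, g/s
eta  = 1.0            # clump parameter of eqs (19)-(20)
r    = 1.0 * pc

# psi of eq. (22), with the paper's pi^2 prefactor
psi = np.pi**2 * r * Mdot**(2/3) / (eta**(2/3) * G**(1/3))

# exact positive root of M_r^2 = psi*(M_r + M_c), eq. (21)
M_r = 0.5 * (psi + np.sqrt(psi**2 + 4.0 * psi * M_c))
print("M_r = %.1e Msun" % (M_r / Msun))   # ~1.6e6: eq. (25) at order-unity level

# clump velocity dispersion, eq. (24)
v_r = eta**(2/3) * (G * Mdot)**(1/3)
print("v_r = %.1f km/s" % (v_r / 1e5))    # ~8 km/s, as in eq. (26)
```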
The number density of molecules and the gas temperature can be obtained by solving the hydrostatic and thermal equilibrium equations for blobs in the vertical direction in the presence of the incident flux (see §2). We find the thermal temperature of the blobs to be about 510 K at $`r=1`$ pc (see fig. 1), and the density scale height $`H\approx 1.5\times 10^{16}`$ cm. The mean number density of $`H_2`$ molecules is $`n\sim M_r/(4\pi r^2H\mu )\approx 5\times 10^8\mathrm{cm}^{-3}`$. Thus both the number density and the temperature are within the allowed range for water maser emission (the number density allowed for water maser emission lies between $`10^8`$ and $`10^{11}\mathrm{cm}^{-3}`$, and the temperature between 200 and 1000 K). The size of the clumps is $`\lesssim r(M_r/M_t)\approx 0.1`$ pc and their mass is $`\sim 10^3M_{\odot }`$. Since the Jeans mass is $`\sim 10M_{\odot }`$, the clumps could have some star formation activity. However, the efficiency of star formation is usually quite low, of order a few percent even in giant molecular clouds, so we do not expect the gas in the clumps to be converted into stars during the short clump lifetime of order $`10^6`$ years or less in the masing disk of NGC 1068.
Note that the velocity dispersion of the blobs is independent of $`r`$ (eq. 24), so the scale height for the vertical distribution of blobs increases as $`r^{3/2}`$, provided that the velocity distribution of the clouds is nearly isotropic; this increase of disk thickness with $`r`$ is more rapid than in the standard $`\alpha `$-disk model considered in the last section. At a distance of 2 pc from the center of NGC 1068 the scale height is about 0.3 pc (for $`\eta =5`$), so the flaring disk blocks a significant fraction of the radiation from the central source.
Clouds colliding at a relative speed of $`8\mathrm{km}\mathrm{s}^{-1}`$ heat the gas to approximately $`10^4`$ K, which is too hot for maser emission. However, the cooling time of the gas, at a density of $`10^9\mathrm{cm}^{-3}`$, is of order 100 years, which is short compared to both the orbital time and the collision time of order $`10^4`$ years (the density enhancement of the shocked gas depends on the strength of the magnetic field in the clumps and is of order unity for an equipartition field), ensuring that the disk remains cold and that a steady-state solution exists.
The slope of the rotation curve in the disk follows from equation (25):
$$\frac{d\mathrm{ln}V_{rot}}{d\mathrm{ln}r}=-0.5+\frac{1}{2}\frac{d\mathrm{ln}M_t}{d\mathrm{ln}r}=-0.5+\frac{\psi ^{1/2}}{4M_c^{1/2}}\approx -0.5+\frac{M_r}{4M_c}.$$
$`(27)`$
The disk mass of the NGC 1068 system is, in our model, small ($`M_r/M_t\approx 0.1`$), and so the slope of the rotation curve is very close to $`-0.5`$.
A slower fall-off of the rotation curve requires a more rapid increase of the total mass $`M_t`$ with $`r`$ than in the model discussed above. This could arise, for instance, if there is a star cluster at the center. The rotation curve falls off as $`r^{-0.35}`$ when the mass of the star cluster within a parsec of the center of NGC 1068 is about $`8\times 10^6M_{\odot }`$. Thatte et al. (1997) have carried out near-infrared speckle imaging of the central 1” of NGC 1068, and conclude that about 6% of the near-infrared light from this region is contributed by a star cluster. They estimate that the stellar mass within 1” ($`50`$ pc) of the nucleus of NGC 1068 is about $`6\times 10^8M_{\odot }`$; if the density in this cluster falls off as $`r^{-2}`$, as in a singular isothermal sphere, then the expected stellar mass inside 1 pc is $`1.2\times 10^7M_{\odot }`$. We note that the stellar mass within 100 pc of our own Galactic center is estimated to be in excess of $`5\times 10^8M_{\odot }`$ (Kormendy & Richstone 1995; Genzel et al. 1994), and the mass enclosed within radius $`r`$ is seen to increase almost linearly with $`r`$ for $`r\gtrsim 2`$ pc. Thus a star cluster of similar mass at the center of NGC 1068 would not be surprising.
We discuss below the effect a star cluster has on the structure of the accretion disk.
§3.2.1 Effect of a star cluster on the accretion disk
Let us suppose that the stellar mass within a radius $`r`$ of the center is $`M_s(r)`$. The disk mass, as before, is taken to be $`M_r(r)`$, and the central mass is $`M_c`$. Equation (23), which still applies with $`M_c`$ replaced by $`(M_c+M_s)`$, implies that the disk structure is not much affected at small radii, where $`M_s(r)\ll M_c`$. At larger radii, where the stellar mass becomes comparable to or exceeds $`M_c`$, the mass of the gas disk ($`M_r`$) increases with $`r`$ as $`r^{1/2}(M_c+M_s)^{1/2}`$, which is somewhat more rapid than in the case of $`M_s=0`$ considered in §3.1. However, for the masing disk of NGC 1068, $`M_s\lesssim M_c`$ for $`r\lesssim 1.6`$ pc, and the effect of the stellar cluster on the disk structure, in particular on the number density of molecules and the gas temperature, is small.
The velocity dispersion of the clouds due to gravitational encounters is proportional to the local surface mass density $`\overline{\sigma }`$ of the gas and is unaffected by the stellar cluster (see eq. 20). Therefore, the disk thickness ($`H`$) at first increases with radius as $`r^{3/2}`$ and then flattens out when $`M_s`$ starts to dominate the total mass:
$$H\sim \frac{\overline{v}_r}{\mathrm{\Omega }(r)}\sim \frac{\eta r^3\overline{\sigma }(r)}{M_t}.$$
$`(28)`$
For $`\overline{\sigma }\propto r^{-3/2}`$, as expected in the CD model with constant $`\dot{M}`$, $`H/r\propto r^{1/2}`$; a more rapid decrease of $`\overline{\sigma }`$ with $`r`$ leads to a corresponding decrease of $`H/r`$ (a fall-off of $`\overline{\sigma }`$ steeper than $`r^{-3/2}`$ also implies a drop in $`\dot{M}`$). However, the assumption of steady-state accretion breaks down beyond the radius where the accretion time, $`\sim \mathrm{\Omega }^{-1}(M_c+M_s)^2/M_r^2`$, becomes long compared with the evolution timescale of the disk.
The accretion rate calculated in §3.1 is modified by the dynamical friction that the star cluster exerts on clumps in the disk: the orbits of stars in the cluster are gravitationally perturbed by the clumps, so that on average the density of stars behind a clump is greater than the density in front of it. We estimate below the accretion rate that arises from this frictional drag. We assume that clumps are not stretched out by the tidal force of the stars, which is valid so long as the cloud mass density is greater than the mean mass density associated with the star cluster, i.e. $`n_{H_2}\gtrsim M_s/(m_{H_2}r^3)`$, or $`n_{H_2}\gtrsim 2\times 10^8\mathrm{cm}^{-3}(M_s/10^7M_{\odot })(r/1\mathrm{pc})^{-3}`$; this condition is satisfied for NGC 1068.
The dynamical friction timescale for a clump to fall to the center is (cf. Binney and Tremaine, 1987)
$$t_{df}\sim \frac{r^2M_tv_{rot}}{Gm_cM_s\mathrm{ln}\mathrm{\Lambda }},$$
$`(29)`$
where $`m_c`$ is the clump mass, $`\mathrm{\Lambda }\approx rv_{rot}^2/(Gm_c)`$, and $`v_{rot}^2=GM_t/r`$. Taking the clump size to be the largest length scale for gravitational instability in a shearing disk, i.e. $`l_c\approx \pi ^2G\overline{\sigma }/\mathrm{\Omega }^2`$, we find $`m_c\approx \pi M_r^3/M_t^2`$. Thus $`\mathrm{ln}\mathrm{\Lambda }\approx 3\mathrm{ln}(M_t/M_r)`$, which is about 7 for the NGC 1068 system. Substituting these in the above equation we find
$$t_{df}\sim \frac{\mathrm{\Omega }^{-1}}{10}\left(\frac{M_t^4}{M_sM_r^3}\right).$$
$`(30)`$
For $`M_s\approx M_t/3`$ and $`M_r/M_t\approx 0.1`$, values applicable to the NGC 1068 system, the dynamical friction time is of the same order as the timescale for clumps to fall to the center through gravitational encounters with other clumps (see eq. 14). Thus a star cluster of modest mass at the center of NGC 1068 can both give rise to a sub-Keplerian rotation curve, as perhaps observed in the water maser data, and, through its dynamical friction on the clumps, remove angular momentum from the gaseous disk at a rate of the same order as that needed to account for the observed luminosity (the total $`\dot{M}`$ is the sum of the accretion rate due to gravitational interactions between clumps and that due to the dynamical friction drag exerted by the stars).
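A rough dimensionless comparison of these two infall timescales, taking the quoted mass ratios at face value and keeping the order-of-magnitude prefactors of equations (14) and (30) as written (an assumption; only orders of magnitude are meaningful), can be sketched as:

```python
import numpy as np

x   = 0.1        # M_r / M_t, disk-to-total mass ratio
f_s = 1.0 / 3.0  # M_s / M_t, stellar-cluster fraction

m_c_over_Mt = np.pi * x**3           # clump mass, m_c ~ pi M_r^3 / M_t^2
lnL = 3.0 * np.log(1.0 / x)          # Coulomb logarithm

# both times in units of the orbital time 1/Omega
t_df = 0.1 / (f_s * x**3)                 # eq. (30)
t_r  = x**2 / m_c_over_Mt**(4/3)          # eq. (14) with d_c ~ r M_r/M_t

print("ln Lambda = %.1f" % lnL)           # ~6.9
print("t_df = %.0f /Omega" % t_df)        # ~300 (~1e6 yr at r ~ 1 pc)
print("t_r  = %.0f /Omega" % t_r)         # ~20: comparable, given the
                                          # order-unity prefactors dropped
```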
It was pointed out by Ostriker (1983) that even a smooth disk is subject to frictional drag from a star cluster. He showed that the characteristic drag time for removing angular momentum of a gaseous disk is of the same order as the relaxation time for the star cluster. The accretion rate due to this process, in a form applicable to the NGC 1068 system, is given by
$$\dot{M}\sim M_r\mathrm{\Omega }I_0\left(\frac{r_{\ast }}{r}\right)^2\left(\frac{M_s}{m_{\ast }}\right),$$
$`(31)`$
where $`r_{\ast }`$ is the radius and $`m_{\ast }`$ the mass of the stars in the cluster, and $`I_0\approx 10`$ is a dimensionless quantity that depends on the ratio of the escape velocity at the surface of a star to the stellar velocity dispersion. We see that the accretion rate resulting from the frictional drag of stars on a smooth disk is much smaller than the rate resulting from the frictional drag on clumps calculated earlier.
4. Conclusion
We find that the structure of the masing disk of NGC 1068, as determined using the standard $`\alpha `$-viscosity prescription, is both inconsistent with the observations and self-contradictory. In particular, the disk mass according to this model is $`7\times 10^7M_{\odot }`$, which is much greater than the black hole mass, and the Toomre $`Q`$ parameter has an extremely low value of $`\sim 10^{-3}`$ (for $`\alpha =0.1`$). This makes the disk highly unstable to gravitational perturbations, and suggests that it breaks up into clumps, a conclusion that is inconsistent with the smooth-disk assumption of standard $`\alpha `$-disks.
We have described in this paper a different model for the disk, in which the disk consists of gas clumps whose mutual gravitational interactions lead to an inward accretion of gas. The mass of the masing disk of NGC 1068 according to this model is about $`10^6M_{\odot }`$, and the velocity dispersion of the clumps is about $`10\mathrm{km}\mathrm{s}^{-1}`$, in rough agreement with the observed velocity dispersion of the masing spots. The temperature and density of the clumps in this model are approximately 510 K and $`5\times 10^8\mathrm{cm}^{-3}`$ respectively, which are within the allowed parameter range for maser emission.
However, the rotational velocity of the clumps in this model is close to the Keplerian value, whereas the observations of the masing spots indicate a slower fall-off of the rotational velocity (Greenhill & Gwinn 1997). One obvious way these results can be reconciled is if the mass in stars within 1 pc of the center of NGC 1068 is of order the black hole mass. Speckle observations of the central 1” region of NGC 1068 in the near infrared in fact suggest that the stellar mass contained within 50 pc of the center is about $`5\times 10^8M_{\odot }`$ (Thatte et al. 1997), which is sufficient to explain the flattening of the observed rotation curve in the masing disk. The dynamical friction exerted by this star cluster on the clumps in the disk removes angular momentum at a rate of the same order as that needed to supply the near-Eddington accretion rate of the system. Thus the clump-clump gravitational interaction and the dynamical friction drag on the clumps together determine the disk structure, which we find to be consistent with observations.
The clumpy-disk (CD) model considered in this paper for the disk of NGC 1068 should also apply to other AGNs at distances of about a parsec or more from the center (the exact radius depending on the accretion rate and the luminosity), where the disk becomes self-gravitating and the standard $`\alpha `$ model is too inefficient to account for the accretion rate (Shlosman & Begelman 1989).
Acknowledgment: I am indebted to Ramesh Narayan for sharing his work and insights on the accretion disk of NGC 1068, and for his detailed comments on this work. I am grateful to Scott Tremaine for very helpful discussion about disk dynamics. I thank John Bahcall, Mitch Begelman, Kathryn Johnston, Douglas Richstone, Phil Maloney and an anonymous referee for helpful comments.
REFERENCES
Bell, K.R., & Lin, D.N.C. 1994, ApJ, 427, 987
Binney, J.J., & Tremaine, S.D. 1987, Galactic Dynamics (Princeton: Princeton University Press)
Frank, J., King, A.R., & Raine, D. 1992, Accretion Power in Astrophysics (Cambridge: Cambridge University Press)
Gammie, C., Ostriker, J.P., & Jog, C. 1991, ApJ, 378, 565
Genzel, R., Hollenbach, D., & Townes, C.H. 1994, Rep. Prog. Phys., 57, 417
Goldreich, P., & Tremaine, S. 1978, Icarus, 34, 227
Greenhill, L.J., & Gwinn, C.R. 1997, CfA preprint No. 4508
Kormendy, J., & Richstone, D. 1995, ARA&A, 33, 581
Lin, D.N.C., & Pringle, J.E. 1987, MNRAS, 225, 607
Ostriker, J.P. 1983, ApJ, 273, 99
Paczynski, B. 1978, Acta Astron., 28, 91
Pier, E.A., Antonucci, R., Hurt, T., Kriss, G., & Krolik, J. 1994, ApJ, 428, 124
Shlosman, I., & Begelman, M. 1987, Nature, 329, 810
Shlosman, I., & Begelman, M. 1989, ApJ, 341, 685
Shlosman, I., Begelman, M., & Frank, J. 1990, Nature, 345, 679
Thatte, N., Quirrenbach, A., Genzel, R., Maiolino, R., & Tecza, M. 1997, ApJ, 490, 238
Figure Caption
FIG. 1.— The panel on the left shows the gas temperature ($`T_c`$) and the number density of $`H_2`$ molecules ($`n_c`$) in the midplane of the disk for the standard $`\alpha `$-disk model without any irradiation from the central source (thin continuous line and thin dashed line, respectively). The thick continuous and dashed lines are $`T_c`$ and $`n_c`$, respectively, for an $`\alpha `$-disk model that includes irradiation from the central source. The irradiation is taken to be proportional to $`H/r`$, where $`H`$ is the scale height of the disk, and the proportionality constant ($`\beta `$) is chosen so that the flux intercepted by the disk at distance $`r`$ is about 50% of the flux from the central source at $`r`$ (such a large intercepted flux corresponds to $`\beta =50`$, which might arise if the disk were extremely warped). The panel on the right shows the Toomre $`Q`$-parameter for the two disk models; the thin continuous curve is for the standard $`\alpha `$-disk with no irradiation, and the thick curve is for a disk that intercepts about 50% of the flux. Note that $`Q`$ is much less than one in both of these models, which corresponds to the disk being highly unstable to gravitational perturbations. In both models we chose $`\alpha =0.1`$ ($`Q`$ scales roughly as $`\alpha ^{0.9}`$ when irradiation dominates), the black hole mass $`M=1.5\times 10^7M_{\odot }`$, and a mass accretion rate ($`\dot{m}`$) of 0.4 in units of the Eddington rate.
# Hysteresis and Avalanches in Two Dimensional Foam Rheology Simulations
## I Introduction
In addition to their widespread industrial importance, foams provide significant clues to the rheology of other complex fluids, such as emulsions, colloids and polymer melts, because we can observe their structures directly. The topological structures and the dynamics studied here also occur in other cellular materials, such as biological tissues and polycrystalline alloys. One of the most remarkable and technologically relevant features of foams is the range of mechanical properties that arises from their structure. For sufficiently small stress, foams behave like a solid and are capable of supporting static shear stress. For large stress, foams flow and deform arbitrarily like a fluid. However, we do not yet fully understand the relationship between the macroscopic flow properties of foams and their microscopic details, e.g. liquid properties, topological rearrangements of individual bubbles and structural disorder. Constructing a full multiscale theory of foam rheology is challenging. Foams display multiple length scales with many competing time scales, memory effects (e.g. the hysteresis discussed in Sec. IV below), and slow aging punctuated by intermittent bursts of activity (e.g. the avalanches of T1 events discussed in Sec. V below), all of which severely limit their predictability and control. These problems are intriguing both from an applied and from a fundamental perspective: they provide beautiful concrete examples of multiscale materials, where structure and ordering at the microscale, accompanied by fast and slow time scales, can lead to a highly nonlinear macroscopic response. Here we study the relation between the microscopic topological events and the macroscopic response in two-dimensional non-coarsening foams using a driven extended large-Q Potts model.
In foams, a small volume fraction of fluid forms a continuous network separating gas bubbles . The bubble shapes can vary from spherical to polyhedral, forming a complex geometrical structure insensitive to details of the liquid composition or the average bubble size . Because of the complexity of describing the network of films and vertices in three-dimensional foams, most studies have been two-dimensional. In two-dimensional foams free of stress, all vertices are three-fold and the walls connecting them meet at $`120^{}`$ angles. Minimization of the total bubble wall length dictates that a pair of three-fold vertices is energetically more favorable than a four-fold vertex. Therefore, topology and dynamics are intimately related, with the dominance of three-fold vertices resulting from considerations of structural stability in the presence of surface tension. When shear stress is present, a pair of adjacent bubbles can be squeezed apart by another pair (Fig. 1), leading to a T1 switching event . This local but abrupt topological change results in bubble-complexes rearranging from one metastable configuration to another. The resulting macroscopic dynamics is highly nonlinear and complex, involving large local motions that depend on structures at the bubble scale. The spatio-temporal statistics of T1 events is fundamental to the plastic yielding of two-dimensional liquid foams.
The nonlinear and collective nature of bubble rearrangement dynamics has made analytical studies difficult, except under rather special assumptions. Computer simulations can therefore provide important insights into the full range of foam behavior. Previous studies in this field can be categorized through their use of constitutive, vertex, center or bubble models.
The constitutive models have evolved from the ideas of Prud’homme and Princen . They modeled foam as a two-dimensional periodic array of hexagonal bubbles where T1 events occur instantaneously and simultaneously for the entire foam. Khan and Armstrong further developed the model to calculate the detailed force balance at the films and vertices, and studied the stress-strain relationships as a function of hexagon orientation, liquid viscosity and liquid fraction. Reinelt and Kraynik extended the same model to study a polydisperse hexagonal foam and derived explicit relations between stress and strain tensors. While analytical calculations exist only for periodic structures or for linear response, foams are naturally disordered with an inherent nonlinear response. Treating the foam as a collection of interacting vertices, vertex models studied the effect of stress on structure and the propagation of defects in foams with zero liquid fraction (i.e. dry foam) . Okuzono and Kawasaki studied the effect of finite shear rate by including the force on each vertex, a term which depends on the local motion and is based on the work of Schwartz and Princen . They predicted avalanche-like rearrangements in a slowly driven foam, with a power law distribution of avalanche size vs. energy release, characteristic of self-organized criticality. Durian’s “bubble” model, treating bubbles as disks connected by elastic springs, measured foam’s linear rheological properties as a function of polydispersity and liquid fraction. He found similar distributions for the avalanche-like rearrangements with a high frequency cutoff. Weaire et al. , using a center model based on Voronoi construction from the bubble centers, applied extensional deformation and bulk shear to a two-dimensional foam. They concluded that avalanche-like rearrangements are possible only for wet foams, and that topological rearrangements can induce ordering in a disordered foam. A review by Weaire and Fortes includes some computer models of the mechanical and rheological properties of liquid and solid foams. However, few models have attempted to relate the structural disorder and configuration energy to foam rheology. Only recently, Sollich et al. , studying mechanisms for storing and dissipating energy, emphasized the role of both structural disorder and metastability in the rheology of soft glassy materials, including foams. Langer and Liu , using a bubble model similar to Durian’s, found that the randomness of foam packing has a strong effect on the linear shear response of a foam. One of the goals of our study is to quantify the extent of metastability by measuring hysteresis, and relate the macroscopic mechanical response to microscopic bubble structures.
Experiments have measured the macroscopic mechanical properties of three-dimensional foams, but due to the difficulty of direct visualization in three-dimensional foams, no detailed studies of rearrangements exist. Khan et al. applied bulk shear to a foam trapped between two parallel plates and measured the stress-strain response, as well as the yield strain as a function of liquid fraction. Princen and Kiss, applying shear in a concentric cylinder viscometer (i.e. boundary shear), determined the yield stress and shear viscosity of highly concentrated water/oil emulsions. Recently, with the help of diffusing wave spectroscopy (DWS), experiments by Gopal and Durian on three-dimensional shaving creams showed that the rate of rearrangements is proportional to the strain rate, and that the rearrangements are spatially and temporally uncorrelated; Höhler et al. found that under periodic boundary shear, foam rearrangements cross from a linear to a nonlinear regime; Hébraud et al., in a similar experiment on concentrated emulsions, found that some bubbles follow reversible trajectories while others follow irreversible chaotic trajectories. However, none of these experiments has directly observed changes in bubble topology. Dennin and Knobler performed a bulk shear experiment on a monolayer (2D) Langmuir foam and counted the number of bubble side-swapping events. Unfortunately, limited statistics rendered their results difficult to interpret.
In an attempt to reconcile the different predictions of different models and experiments, we use a Monte Carlo model, the extended large-Q Potts model, to study foam rheology. The large-Q Potts model has successfully modeled foam structure, coarsening and drainage , capturing the physics of foams more realistically than other models. Here we extend the model to include the application of shear to study the mechanical response of two-dimensional foams under stress.
This paper is organized as follows: Sec. II presents our large-Q Potts model; Sec. III contains a description of simulation details; Sec. IV presents results on hysteresis; Sec. V discusses the dynamics and statistics of T1 events; Sec. VI discusses structural disorder and Sec. VII contains the conclusions.
## II Model
The great advantage of our extended large-Q Potts model is its simplicity. The model is “realistic” in that the position and diffusion of the walls determine the dynamics, as they do in real foams and concentrated emulsions. Previous models were based on different special assumptions about the energy dissipation. Since the energy dissipation is poorly understood and also hard to measure in experiments, the exact ranges of validity for these models are not clear. Not surprisingly, these models lead to conflicting predictions, e.g. for the distribution of avalanche-like rearrangements (Sec. V). None of these models alone captures the full complexity of real foams.
The extended large-Q Potts model, where bubbles have geometric properties as well as surface properties, is not based on any a priori energy dissipation assumption. In addition, it has the advantage of simultaneously incorporating many interactions, including temperature effects, for foams with arbitrary disorder and liquid content .
Both the film surface properties and the geometry of bubbles are fundamental to understanding foam flow. The contact angle of walls between vertices indicates whether the structure is at equilibrium, corresponding to minimizing the surface energy. In a real evolving pattern, the equilibrium contact angle occurs only for slow movements during which the vertices remain adiabatically equilibrated. Whenever a topological rearrangement (a T1 event) of the pattern occurs, the contact angles can be far from their equilibrium values. The walls then adjust rapidly, at a relaxation rate depending on the effective foam viscosity, to re-establish equilibrium. The same holds true for the other possible topological change, the disappearance of a bubble, a T2 event . However, disappearance only occurs in foams that do not conserve bubble number and area, which we do not consider in this study. A difficulty in two-dimensional foams is that the effective viscosity depends primarily on the drag between the Plateau borders and the top/bottom surfaces of the container, not the liquid viscosity. Container chemistry, surfactant properties, and foam wetness all change the effective viscosity. Thus even in experiments, the effective viscosity is not equivalent to the liquid viscosity and not possible to derive from liquid viscosity. We define the equilibrium contact angle so that any infinitesimal displacement of the vertex causes a second-order variation of the surface energy, while during a T1 event the energy must vary macroscopically over a small but finite coherence length, typically the rigidity length of a bubble. In our simulations, a bubble under stress can be stretched or compressed up to $`60\%`$ of its original length, while conserving its area.
In a center model based on the Voronoi construction (see, e.g., ), the coherence length of a bubble is comparable to its diameter. Contact angles are given correctly at equilibrium but approach and remain near $`90^{}`$ during a T1 event, since the centers are essentially uninfluenced by topological details such as the difference between a four-fold vertex and a pair of three-fold vertices.
In a vertex model (see, e.g.), the walls connecting the vertices adiabatically follow an out-of-equilibrium, slowly-relaxing vertex. In such a model, the walls are constrained to be straight and vertices typically have arbitrary angles. In essence, the deviation of the vertex angles from the equilibrium value represents the integrated curvature of the bubble walls. Because of their unphysical representation of contact angles, pure vertex models with straight walls cannot handle T1 events correctly.
The extended large-Q Potts model avoids these limitations: walls are free to fluctuate, which is not true in vertex models; and the contact angles during a T1 event are correct, which is not true in center models. A further advantage of our extended large-Q Potts model is that it allows direct measurement of T1 events. The other models cannot directly count T1 events. Instead, they quantify rearrangement events by their associated decreases in energy. We will discuss later that this energy decrease is not always directly proportional to the number of T1 events. Our model therefore delivers accurate information about individual T1 events as well as averaged macroscopic measures such as the total bubble wall length, thereby allowing new insights into the connection between microscopic foam structure and macroscopic mechanical response.
Before describing the details of the Potts model, we should first mention its major limitations. Viscosity is one of the basic physical properties of foams, but it is not easily specified a priori in the Potts model. Although we can extract the effective viscosity and the viscoelasticity of foams from simulations, we lack a clear quantitative description of the foam viscosity in Potts model simulations and how it relates to the effective and liquid viscosities of a two-dimensional foam. However, our ignorance about simulation viscosity is equivalent to our ignorance about experimental two-dimensional foam viscosity. Quantitative experiments will help to separate the roles of the Plateau borders, fluid viscosity, and topological rearrangements in determining the effective foam viscosity. A second possible limitation is the size effect due to lattice discretization. We show in Sec. III that this problem does not invalidate our simulations. A third drawback is that the Monte Carlo algorithm results in uncertainties in the relative timing of events on the order of a few percent of a Monte Carlo step. While this uncertainty is insignificant for well separated events, it can change the measured interval between frequent events.
The extended large-Q Potts model treats foams as spins on a lattice. Each lattice site $`i=(x_i,y_i)`$ has an integer “spin” $`\sigma _i`$ chosen from $`\{1,\mathrm{},Q\}`$. Domains of like spins form bubbles, while links between different spins define the bubble walls (films). Thus each spin merely acts as a label for a particular bubble. The surface energy resides on the bubble walls only. Since the present study focuses on shear-driven topological rearrangements over many loading cycles, we prohibit foam coarsening by applying an area constraint on individual bubbles. In practical applications, foam deformation and rearrangement under stress is often much faster than gas diffusion through the walls, so neglecting coarsening is reasonable. The Potts Hamiltonian, the total energy of the foam, includes the surface energy and the elastic bulk energy:
$$ℋ=\sum _{ij}𝒥_{ij}(1-\delta _{\sigma _i\sigma _j})+\mathrm{\Gamma }\sum _n(a_n-A_n)^2,$$
(1)
where $`𝒥_{ij}`$ is the coupling strength between neighboring spins $`\sigma _i`$ and $`\sigma _j`$, summed over the entire lattice. The first term gives the total surface energy. The second term is the area constraint which prevents coarsening. The strength of the constraint ($`\mathrm{\Gamma }`$) is inversely proportional to the gas compressibility; $`a_n`$ is the area of the $`n`$th bubble and $`A_n`$ its corresponding area under zero applied stress. We can include coarsening by setting $`\mathrm{\Gamma }`$ to zero.
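A minimal sketch of how eq. (1) can be evaluated on a lattice is given below (Python). For brevity it assumes nearest-neighbor interactions and fully periodic boundaries, whereas the simulations described below use fourth-nearest-neighbor interactions and are periodic in $`x`$ only; both simplifications are assumptions of the sketch.

```python
import numpy as np

def potts_energy(spins, J, Gamma, target_area):
    """Total energy of eq. (1): surface term plus area-constraint term.

    spins       -- 2D integer array; spins[i, j] labels the bubble at site (i, j)
    target_area -- dict mapping bubble label q -> unstressed area A_q
    """
    # surface term: count each unlike nearest-neighbor pair once
    unlike = (spins != np.roll(spins, 1, axis=0)).sum() \
           + (spins != np.roll(spins, 1, axis=1)).sum()
    # elastic bulk term: quadratic penalty on deviations from the target areas
    labels, areas = np.unique(spins, return_counts=True)
    bulk = sum(Gamma * (a - target_area[q])**2 for q, a in zip(labels, areas))
    return J * unlike + bulk
```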
We extend the Hamiltonian to include shear:
$$ℋ^{\prime }=ℋ+\sum _i\gamma (y_i,t)x_i(1-\delta _{\sigma _i\sigma _j}).$$
(2)
The new term corresponds to applying a shear strain (a detailed explanation follows below) to the wall between neighboring bubbles $`\sigma _i`$ and $`\sigma _j`$, with $`\gamma `$ the strain field, $`(x_i,y_i)`$ the coordinates of spin $`\sigma _i`$, and $`(1,0)`$ the direction of the strain.
The system evolves using Monte Carlo dynamics. Our algorithm differs from the standard Metropolis algorithm: we choose a spin at random, but only reassign it if it is at a bubble wall and then only to one of its unlike neighbors. The probability of accepting the trial reassignment follows the Boltzmann distribution, namely:
$$P=\{\begin{array}{cc}1\hfill & \mathrm{\Delta }ℋ^{\prime }<0\hfill \\ \mathrm{exp}(-\mathrm{\Delta }ℋ^{\prime }/T)\hfill & \mathrm{\Delta }ℋ^{\prime }\ge 0\hfill \end{array},$$
(3)
where $`\mathrm{\Delta }ℋ^{\prime }`$ is the change in $`ℋ^{\prime }`$ due to a trial spin flip, and $`T`$ is the temperature. Time is measured in units of Monte Carlo steps (MCS), where one MCS consists of as many spin trials as there are lattice sites. This algorithm reproduces the same scaling as classic Monte Carlo methods in simulations of foam coarsening, but significantly reduces the simulation time.
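One spin trial of this modified algorithm can be sketched as follows; `neighbors` and `delta_H` (the local change in $`ℋ^{\prime }`$ for a proposed flip) are assumed helper functions, and the data layout is illustrative rather than a transcription of the code actually used.

```python
import math
import random

def mc_trial(spins, neighbors, delta_H, T):
    """One spin-reassignment trial, accepted per eq. (3)."""
    site = random.choice(list(spins))      # spins: dict site -> bubble label
    unlike = [spins[n] for n in neighbors(site) if spins[n] != spins[site]]
    if not unlike:                         # interior site: not on a bubble wall
        return False
    trial = random.choice(unlike)          # reassign only to an unlike neighbor
    dH = delta_H(site, trial)
    # dH <= 0 always accepted (exp(0) = 1); at T = 0 any dH > 0 is rejected
    if dH <= 0 or (T > 0 and random.random() < math.exp(-dH / T)):
        spins[site] = trial
        return True
    return False
```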
The second term in $`ℋ^{\prime }`$ biases the probability of spin reassignment in the direction of increasing $`x_i`$ (if $`\gamma <0`$) or decreasing $`x_i`$ (if $`\gamma >0`$). From dimensional analysis of $`ℋ^{\prime }`$, $`\gamma `$ has units of force, but we can interpret it as the strain field for the following reason: in the Potts model a bubble wall segment moves at a speed proportional to the reassignment probability $`P`$; in this case,
$$v\propto \sqrt{\gamma }P,$$
(4)
where the prefactor follows from dimensional analysis. This shear term effectively enforces a velocity $`v`$ at the bubble walls, and therefore imposes a strain rate on the foam. The strain $`ϵ(t)`$ is then proportional to the time integral of $`v`$,
$$\epsilon \propto \int _0^t\sqrt{\gamma (t^{\prime })}\,P\,dt^{\prime }.$$
(5)
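A hedged numerical sketch of the strain accumulation in eq. (5), with the proportionality constant set to one and the sign of $`\gamma `$ carried through so that reversing the drive reverses the motion:

```python
import numpy as np

def strain(gamma_of_t, P, t, n=2000):
    """Accumulated strain, eq. (5): integrate v ~ sqrt(|gamma|) * P in time."""
    ts = np.linspace(0.0, t, n)
    g = gamma_of_t(ts)
    v = np.sign(g) * np.sqrt(np.abs(g)) * P
    # trapezoidal rule
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(ts)))

# example: periodic drive gamma(t) = gamma_0 * sin(2*pi*t / period)
eps = strain(lambda ts: 0.05 * np.sin(2 * np.pi * ts / 4000.0), P=0.5, t=2000.0)
```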
If we limit the application of this term to the boundaries of the foam, we impose a boundary shear, equivalent to moving the foam boundary with no slip between the boundary and the bubbles touching it, i.e.,
$$\gamma =\{\begin{array}{cc}-\gamma _0G(t)\hfill & y_i=y_{\mathrm{min}}\hfill \\ \gamma _0G(t)\hfill & y_i=y_{\mathrm{max}}\hfill \\ 0\hfill & \mathrm{otherwise}\hfill \end{array},$$
(6)
where $`\gamma _0`$ is the amplitude of the strain field and $`G(t)`$ is a normalized function of time. On the other hand,
$$\gamma =\beta y_iG(t),$$
(7)
with $`y_i`$ between $`y_{\mathrm{min}}`$ and $`y_{\mathrm{max}}`$, corresponds to applying bulk shear, with the strain rate varying linearly as a function of position in the foam. The gradient of the strain rate is the shear rate, $`\beta `$. The corresponding experiment would be similar to Dennin and Knobler’s monolayer Langmuir foam experiment: a monolayer (two-dimensional) foam on the surface of a liquid is sheared in a concentric Couette cell, with no-slip conditions between the bubbles and the container surface. In all our studies we use $`G(t)=1`$ for steady shear, and $`G(t)=\mathrm{sin}(\omega t)`$ for periodic shear. Since for steady shear the strain is a constant times time, $`\epsilon \propto \sqrt{\gamma }Pt`$, plotting with respect to time is equivalent to plotting with respect to strain.
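In sketch form, the two driving geometries reduce to different choices of the strain field $`\gamma (y_i,t)`$; the lattice dimensions and the centering of the bulk profile on the midplane below are illustrative assumptions.

```python
import numpy as np

def gamma_boundary(y, t, gamma0, omega=0.0, y_min=0, y_max=99):
    """Boundary shear, eq. (6): +/- gamma0 * G(t) on the two boundary rows."""
    G = np.sin(omega * t) if omega else 1.0   # periodic or steady drive
    if y == y_min:
        return -gamma0 * G
    if y == y_max:
        return gamma0 * G
    return 0.0

def gamma_bulk(y, t, beta, omega=0.0, y_mid=49.5):
    """Bulk shear, eq. (7): strain field linear in y, with shear rate beta."""
    G = np.sin(omega * t) if omega else 1.0
    return beta * (y - y_mid) * G
```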
Note that our driving in the Potts model differs from that in driven spin systems, for which a large body of literature addresses the dynamic phase transition as a function of driving frequency and amplitude . Our driving term acts on the bubble walls (domain boundaries) only, while in driven spin systems e.g. the kinetic Ising model, all spins couple to the driving field. The resulting dynamics differ greatly.
## III Simulation details
Experimental observations show that the mechanical responses of a foam, including the yield strain, the elastic moduli, and the topological rearrangements, are sensitive to the liquid volume fraction . In particular, the simulations of both Durian and Weaire et al. showed a critical liquid fraction at which a foam undergoes a “melting transition.” Although different liquid content and drainage effects can be readily incorporated in the Potts model , we focus on the dynamics of topological rearrangements and do not consider liquid fraction dependence of flow behavior, i.e. we assume the dry foam limit in this study. Also, we ignore gas diffusion across the walls, assuming that bubble deformation and rearrangement are much faster than coarsening.
The definition of time (Monte Carlo steps or MCS) is not directly related to real time, but we have made choices to ensure that we do not under-resolve events. A shear cycle in the periodic shear case takes about 4000 MCS. In our simulations, a single deformed bubble recovers on a timescale of a few MCS while the relaxation of a cluster of deformed bubbles takes a much longer time, on the order of 10 to 100 MCS. A T1 event by definition takes one MCS (the short life of a four-fold vertex), but the viscous relaxation has to average over at least the four bubbles involved in the T1 event, and thus lasts much longer.
We used periodic boundary conditions in the $`x`$ direction, to mitigate finite-size effects. For ordered foams under boundary shear, we used a $`400\times 100`$ lattice with each bubble containing $`20\times 20`$ lattice sites; for ordered foams under bulk shear, we used a $`256\times 256`$ lattice with $`16\times 16`$ sites for each bubble. When unstressed, all the bubbles are hexagons, except for those truncated bubbles touching the top and bottom boundaries. In the case of disordered foams, we used a $`256\times 256`$ lattice with various area distributions. We have also performed simulations using a lattice of size $`1024\times 1024`$ with $`64\times 64`$-site bubbles and a lattice of size $`1024\times 1024`$ with $`16\times 16`$-site bubbles. The results did not appear to differ qualitatively. A $`16\times 16`$ bubble has a side length of around 10 lattice sites, so its smallest resolvable tilt angle is approximately $`\mathrm{arctan}(1/10)\approx 5.7^{\circ }`$. Had lattice effects been a problem, we would have expected a significant difference in the simulations with bubbles of size $`64`$, where the smallest angle is about four times smaller. But increasing the simulation size from $`16^2`$ to $`64^2`$ did not lead to significant changes in the quantities we measured. Thus, we used bubbles of size $`16^2`$ in all the simulations reported in this paper.
Lattice anisotropy can induce artificial energy barriers in lattice simulations. All our runs use a fourth-nearest neighbor interaction on a square lattice, which has a lattice anisotropy of $`1.03`$, very close to the isotropic situation (lattice anisotropy of 1).
Standard quantitative measures of cellular patterns are the topological distributions and correlations, the area distributions, and the wall lengths: all quantities that in principle can be measured in experiments. Since the areas are constrained, the evolution of the area distribution is not useful. We define the topological distribution $`\rho (n)`$ as the probability that a bubble has $`n`$ sides; its $`m`$-th moments are $`\mu _m\equiv \sum _n\rho (n)(n-\langle n\rangle )^m`$. The area distribution $`\rho (a)`$ and its second moment $`\mu _2(a)`$ are defined in a similar fashion for the bubble areas. We use a variety of disordered foams with different distributions, as characterized by their $`\mu _2(n)`$ and $`\mu _2(a)`$.
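These statistics are straightforward to accumulate from a labeled configuration; a sketch (assuming, for brevity, periodic boundaries in both directions):

```python
import numpy as np

def mu2(values):
    """Second moment about the mean, e.g. mu_2(n) or mu_2(a)."""
    v = np.asarray(values, dtype=float)
    return np.mean((v - v.mean())**2)

def side_counts(spins):
    """Sides per bubble = number of distinct neighboring bubbles."""
    nbrs = {}
    nx, ny = spins.shape
    for i in range(nx):
        for j in range(ny):
            q = spins[i, j]
            # inspect the right and down links once each
            for qn in (spins[(i + 1) % nx, j], spins[i, (j + 1) % ny]):
                if qn != q:
                    nbrs.setdefault(q, set()).add(qn)
                    nbrs.setdefault(qn, set()).add(q)
    return [len(s) for s in nbrs.values()]
```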
In practice, we generate the initial configuration by partitioning the lattice into equal-sized square domains, each containing $`16\times 16`$ lattice sites. The squares alternate offsets in every other row, so the pattern resembles a brick wall arranged in common bond. We then run the simulation with area constraints, but without strain, at finite temperature for a few Monte Carlo steps, and then decrease the temperature to zero and let the pattern relax. The minimization of total surface energy (and hence the total bubble wall length) results in a hexagonal pattern, the initial configuration for the ordered foam. For disordered initial configurations, we continue to evolve the hexagonal pattern without area constraints at finite temperature so that the bubbles coarsen. We monitor $`\mu _2(n)`$ of the evolving pattern, and stop the evolution at any desired distribution or degree of structural disorder. Then we relax the patterns at zero temperature with area constraints to guarantee that they have equilibrated, i.e. without added external strain or stress the bubbles would not deform or rearrange.
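The purely geometric first step of this recipe can be sketched as follows; the annealing and relaxation stages are omitted, and the dimensions follow the $`256\times 256`$ example.

```python
import numpy as np

def brick_wall(nx=256, ny=256, w=16):
    """Initial labeling: w x w squares, offset by w/2 in alternate rows,
    arranged like common-bond brickwork; annealing under the area constraint
    then relaxes this pattern toward a hexagonal foam."""
    spins = np.zeros((nx, ny), dtype=int)
    n_cols = nx // w
    for i in range(nx):
        for j in range(ny):
            row = j // w
            shift = (w // 2) * (row % 2)           # offset every other row
            col = ((i + shift) // w) % n_cols      # periodic in x
            spins[i, j] = row * n_cols + col + 1   # unique label per bubble
    return spins
```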
For all our simulations, $`\mathrm{\Gamma }=1`$ (which is sufficiently large to enforce air incompressibility in bubbles) and $`𝒥_{ij}=3`$ (except when we vary the coupling strength to change the effective viscosity of the foam). Most of the simulations shown in this paper are run at zero temperature except when we study temperature effects on hysteresis, because the data are less noisy and easier to interpret. A finite but low temperature speeds the simulations, but does not appear to change the results qualitatively.
The number of sides of a bubble is defined by its number of different neighbors. During each simulation, we keep a list of neighbors for each bubble. A change in the neighbor list indicates a topological change which, since bubbles do not disappear, has to be a T1 event.
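In sketch form, this bookkeeping amounts to diffing the per-bubble neighbor sets between successive steps; the data layout is an assumption.

```python
def detect_t1(prev_nbrs, curr_nbrs):
    """Return bubbles whose neighbor sets changed between two Monte Carlo
    steps; with bubble number and area conserved, any such change signals
    a T1 event."""
    events = []
    for q, now in curr_nbrs.items():
        before = prev_nbrs.get(q, set())
        gained, lost = now - before, before - now
        if gained or lost:
            events.append((q, gained, lost))
    return events
```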
## IV Hysteresis
We can view foam flow as a collective rearrangement of bubbles from one metastable configuration to another. We investigate the configurational metastability by studying hysteresis of the macroscopic response.
Hysteresis is the phenomenon in which the macroscopic state of a system does not reversibly follow changes in an external parameter, resulting in a memory effect. Hysteresis commonly appears in systems with many metastable states due to (but not limited to) interfacial phenomena or domain dynamics. The classic example of the former is that the contact angle between a liquid and a solid surface depends on whether the front is advancing or retreating. The classic example of the latter is ferromagnetic hysteresis, in which the magnetization lags behind the change in applied magnetic field. In cellular materials, including foams, hysteresis can have multiple microscopic origins, including stick-slip interfacial and vertex motion, local symmetry-breaking bubble rearrangement (T1 events), and the nucleation of new and annihilation of old cells. In all of these, noise and disorder play an intrinsic role in selecting among the many possible metastable states arising when the foam is driven away from equilibrium. By focusing on non-coarsening foams, we rule out nucleation and annihilation as sources for hysteresis. Our foam is therefore an ideal testing ground for improving our understanding of hysteresis as it arises from local rearrangements and interfacial dynamics.
In accordance with , we define the quantity:
$$\varphi \equiv \sum _{i,j}\theta (1-\delta _{\sigma _i,\sigma _j}),$$
(8)
as the total stored elastic energy. Here sites $`i,j`$ are neighbors, the summation is over the whole lattice, and $`\theta `$ is the wall thickness, which we choose to be $`1`$ in all our simulations (dry foam limit). Thus $`\varphi `$ gives essentially the total bubble wall length, which differs from the total surface energy by a constant factor, namely the surface tension. In zero-temperature simulations, the area constraint is almost always satisfied, so that small fluctuations in the areas contribute only $`\sim 10^{-3}`$ of the total energy. Thus we can neglect the elastic bulk energy of the bubbles, and assume that the total foam energy resides on the bubble walls only, i.e. all forces concentrate at the bubble walls. We can calculate values of the averaged stress by taking numerical derivatives of the total surface energy with respect to strain. However, the calculation via derivatives is not suitable for foams undergoing many topological changes, since the stored elastic energy changes discontinuously when topological rearrangements occur. The alternative is to calculate the stress directly, as given in , by the sum of forces acting on the bubble walls, which locally is proportional to the change in wall length of a bubble. Because forces on the bubble walls in Potts model foams are not well characterized, we limit our discussion to energy-strain relationships. A more rigorous definition of strain involves a mesoscopic length scale corresponding to a cluster of bubbles, over which the effects of bubble wall orientation and bubble deformation can be averaged. In , the average stress tensor, defined as $`\sigma =(1/A)\sum _{i,j}|r_{ij}|\widehat{r}_{ij}\widehat{r}_{ij}`$, with $`A`$ the total area of the foam and $`r_{ij}`$ the vector between two neighboring vertices, is directly related to $`\varphi `$ via $`\varphi \propto \mathrm{Tr}(\sigma )`$. Hereafter, we present our $`\varphi `$ data as $`\varphi (t)/\varphi (0)`$ to scale out differences due to initial configurations.
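On the lattice, this diagnostic is just a count of unlike-neighbor links; a sketch with $`\theta =1`$ and, for brevity, fully periodic boundaries:

```python
import numpy as np

def stored_energy(spins, theta=1.0):
    """phi of eq. (8): total bubble-wall length, counting each unlike
    nearest-neighbor link once (theta is the wall thickness)."""
    walls = (spins != np.roll(spins, 1, axis=0)).sum() \
          + (spins != np.roll(spins, 1, axis=1)).sum()
    return theta * walls
```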
### A Hysteresis in ordered foams
The simplest perturbation which induces topological rearrangements is boundary shear on an ordered foam. In this case we can confine the deformation to the bubbles touching the moving boundaries, and easily locate all the T1 events. As the applied boundary shear increases, the bubbles touching the boundaries distort, giving rise to a stored elastic energy. We show snapshots of the pattern in Fig. 2(a). When a pair of vertices comes together to form a four-fold vertex, the number of sides changes for the bubbles in the cluster involved. Different shades of gray in Fig. 2(a) reflect the topologies of the bubbles. Note that a five-sided (dark grey) and a seven-sided (light grey) bubble always appear in pairs except during the short lifetime of a four-fold vertex (when the number of sides is ambiguous because of the discrete lattice). Once the strain exceeds a critical value, the yield strain, all the bubbles touching the moving boundaries undergo almost simultaneous rearrangements, thereby releasing stress. The stored elastic energy, $`\varphi `$, increases with time when the bubbles deform, then decreases rapidly when the bubbles rearrange. Stress accumulates only in the two boundary layers of bubbles, and never propagates into the interior of the foam. The whole process repeats periodically, due to the periodic bubble structure, as shown in Fig. 2(b), the energy-strain plot (as mentioned at the end of Sec. II, for steady shear, plotting time is equivalent to plotting strain). This result corresponds to the mechanical response obtained in the model of Khan et al. with periodic hexagonal bubbles oriented at zero degrees with respect to the applied strain.
When applying periodic shear $`\gamma (t)=\gamma _0\mathrm{sin}(\omega t)`$, we keep the period $`2\pi /\omega `$ fixed and vary the amplitude, $`\gamma _0`$. Under sinusoidal periodic shear, we observe three types of behavior. When the strain amplitude is small, bubbles deform and recover their shapes elastically when stress is released. No topological rearrangement occurs and the energy-strain plot is linear, corresponding to an elastic response . This result agrees perfectly with the experimental result of DWS in . As the strain amplitude increases, the energy-strain curve begins to exhibit a small butterfly-shaped hysteresis loop before any topological rearrangements occur, indicating a macroscopic viscoelastic response. If we keep increasing the strain amplitude, the hysteresis loop increases in size. When the applied strain amplitude exceeds a critical value, T1 events start occurring, and the foam starts to flow, which leads to a further change in the shape of the hysteresis loop. Even larger strain amplitude introduces more T1 events per period, and adds small loops to the “wings” of the hysteresis loop. Figure 3(a) shows the smooth transition between the three types of hysteresis in the energy-strain curve.
We can adjust the viscosity of the bubble walls by changing the coupling strength $`𝒥_{ij}`$. Smaller coupling strength corresponds to lower viscosity. Similar transitions from elastic, to viscoelastic to fluid-like flow behavior occur for progressively lower values of coupling strength, shown in Fig. 3(b). The phase diagram in Fig. 3(c) summarizes $`44`$ different simulations and shows the elastic, viscoelastic and fluid-like behavior (as derived from the hysteretic response) as a function of the coupling strengths $`𝒥_{ij}`$ (i.e. viscosity) and strain amplitudes $`\gamma _0`$. A striking feature is that the boundaries between these regimes appear to be linear. Figure 3(d) shows the effect of finite temperature on the energy-strain curves. With progressively increasing temperature, noise becomes more dominant and eventually destroys the hysteresis loop. This result implies diminished metastability at finite temperature. However it does not seem to change the trend in mechanical response.
A more conventional experiment is the application of bulk shear, with the shear strain varying linearly as a function of the vertical coordinate, from $`\gamma _0`$ at the top of the foam to $`-\gamma _0`$ at the bottom. In our bulk shear simulations with an ordered foam, the energy-strain relationship has two distinct behaviors depending on the shear rate. At small shear rates, a “sliding plane” develops in the middle of the foam. As shown in Fig. 4(a), non-hexagonal bubbles appear only at the center plane. The energy-strain curve, shown in Fig. 4(b), therefore resembles that for boundary shear on an ordered foam. The energy curve in Fig. 4(b) also shows that the baseline of the energy is higher and that the decrease in the amplitude of the energy due to T1s is smaller than in Fig. 2(b), because bulk shear induces a more homogeneous distribution of distortion and thus of stored elastic energy. Again the periodic structure of the bubbles causes the periodicity of the curve, reminiscent of the shear planes observed in metallic glasses in the inhomogeneous flow regime, where stress-induced rearrangement causes plastic deformation.
At high shear rates, the ensemble of T1 events no longer localizes in space \[Fig. 5(a)\]. Non-hexagonal bubbles appear throughout the foam. The energy-strain curve, shown in Fig. 5(b), is not periodic but rather smooth; beyond the yield point, the bubbles constantly move without settling into a metastable configuration and correspondingly, the foam displays dynamically induced topological disorder. The transition between these two regimes, localized and nonlocalized T1 events, occurs when the shear rate is in the range $`1\times 10^{-2}<|\beta |<5\times 10^{-2}`$. This transition can be understood if we look at the relaxation timescale of the foam. Due to surface viscous drag and the geometric confinement by other bubbles, the relaxation time for a deformed bubble in a foam is on the order of $`10`$ MCS. For a shear rate $`\beta =5\times 10^{-2}`$, $`\beta ^{-1}`$ is of the same order as the relaxation time. Thus for shear rates faster than the inverse of the natural internal relaxation time, the macroscopic response changes from a jagged, piecewise elastic response to a smooth, viscous one, as observed in fingering experiments in foams.
### B Hysteresis in disordered foams
In a disordered foam, bubbles touching the moving foam boundary have different sizes. Boundary strain causes different bubbles to undergo T1 rearrangements at different times. Stress no longer localizes in (sliding) boundary layers, but propagates into the interior \[Fig. 6(a)\]. The yield strain is much smaller. When the size distribution of the foam is broad, the linear elastic regime disappears, since even a small strain may lead to topological rearrangements of small bubbles. In other words, with increasing degree of disorder, the yield strain decreases to zero and the foam changes from a viscoelastic solid to a viscoelastic fluid. We show an example of such viscoelastic fluid behavior in Fig. 6(b) for a random foam, which shows no energy accumulation, namely, its yield strain is zero. The foam deforms and yields like a fluid upon application of the smallest strain.
Under a periodic shear, the stored energy increases during an initial transient period but reaches a steady state after a few periods of loading. Energy-strain plots show hysteresis due to topological rearrangements similar to those in ordered foams, but as the degree of disorder increases, the corresponding elastic regime shrinks and eventually disappears.
Rearrangement events in a disordered foam under bulk shear at a low shear rate \[snapshots shown in Fig. 7(a)\] correspond to those in an ordered foam at a high shear rate. The rearrangements are discrete and avalanche-like, resembling a stick-slip process, or adding sand slowly to a sandpile. However, at sufficiently high shear rate all the avalanches overlap and the deformation and rearrangements are more homogeneous and continuous, as in a simple viscous liquid. Figure 7(b) shows the typical energy-strain curve of a disordered foam under steady bulk shear.
Note that in all our hysteresis plots, the energy-strain curves cross at zero strain, indicating no residual stored energy at zero strain. This crossing is an artifact of our definition of energy, which ignores angular measures of distortion, i.e. the total bubble wall length does not distinguish among the directions in which the bubbles tilt. A choice of stress definition which included angular information would show some residual stress at zero strain, but would not affect the results reported here.
## V T1 avalanches
In both experiments and our simulations, the contact angles of the vertices remain close to $`120^{}`$ until two vertices meet. The applied strain rate determines the rate at which vertices meet. The resulting four-fold vertex rapidly splits into a vertex pair, recovering $`120^{}`$ contact angles, at a rate determined by the viscosity. This temporal asymmetry in the T1 event contributes to the hysteresis.
In vertex model simulations, sudden releases of energy occur once the applied shear exceeds the yield strain. The event size $`n`$, i.e. the energy release per event in the dry foam limit, follows a power-law distribution: $`\rho (n)\sim n^{-3/2}`$. Durian found a similar power-law distribution in his bubble model, with an additional exponential cutoff for large events. Simulations of Weaire et al., however, suggested that power-law behavior only appeared in the wet foam limit. Experiments, on the other hand, have never found system-wide events or long range correlations among events. One of our goals is to reconcile these different predictions.
These differences may result from the use of energy release as a proxy for topological changes, rather than enumerating actual events, as well as from the assumption of a linear relation between jumps in the stored elastic energy and the number of T1 events, namely, $`d\varphi /dt=-cN`$, where $`N`$ is the number of T1 events and $`c`$ is a constant. A drastic drop in the total bubble wall length indicates a large number of T1 events. However, in a disordered foam, all T1 events are not equal, since they do not all release the same amount of stored elastic energy. The energy released during a T1 event scales as the bubble perimeter, i.e. smaller bubbles release less energy. Hence, for a given energy release, smaller bubbles must undergo more T1 events. Moreover, a T1 event is not strictly local, but deforms its neighborhood over a certain finite range, as demonstrated by T1 manipulations in magnetic fluid foam experiments. Therefore, the number of T1 events is not always directly proportional to the decrease in total bubble wall length. Thus we cannot compare the energy dissipation and T1 events directly. Furthermore, the mechanisms of energy dissipation differ in these models. Kawasaki et al. included the dissipation due to the flow of liquid out of the Plateau borders; Durian considered only the viscous drag of the liquid, while Weaire et al. modeled an equilibrium calculation involving quasistatic steps in the strain that do not involve any dissipation. In our model, the evolution minimizes the total free energy naturally. To avoid ambiguities, we directly count T1 events in addition to tracking energy.
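In practice, such direct counting can be done by comparing bubble adjacencies between successive snapshots of the configuration. The following Python sketch illustrates the idea for a two-dimensional array of bubble labels; the function names and snapshot format are illustrative choices, not part of the Potts model code itself, and the count ignores complications such as bubble disappearance:

```python
import numpy as np

def adjacency(labels):
    """Return the set of neighboring bubble pairs in a 2D label array."""
    pairs = set()
    for u, v in [(labels[:, :-1], labels[:, 1:]),   # horizontal neighbors
                 (labels[:-1, :], labels[1:, :])]:  # vertical neighbors
        mask = u != v
        for p, q in zip(u[mask], v[mask]):
            pairs.add((min(p, q), max(p, q)))
    return pairs

def count_t1(labels_old, labels_new):
    """A T1 event destroys one bubble-bubble contact (and creates another),
    so the number of contacts lost between snapshots counts the events."""
    return len(adjacency(labels_old) - adjacency(labels_new))
```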
The avalanche-like nature of rearrangements appears in the sudden decreases of the total elastic energy as a function of time. Figure 4(b) shows the relation between energy and the number of T1 events in an ordered foam under steady bulk shear at a low shear rate. The stored energy increases almost linearly until the yield strain is reached. The avalanches are well separated. Every cluster of T1 events corresponds to a drastic decrease in the stress, and the periodicity is due to the ordered structure of the foam. At a higher shear rate \[Fig. 5(b)\], the yield strain remains almost the same, but the avalanches start to overlap and the energy curve becomes smoother. In the sandpile analogy, instead of adding sand grains one at a time and waiting until one avalanche is over before dropping another grain, the grains accumulate at a constant rate and the avalanches, large and small, overlap one another. A sufficiently disordered foam may not have a yield strain \[Fig. 6(b)\]; T1 events occur at the smallest strain. The foam flows as a fluid without going through an intermediate elastic regime.
To study the correlation between T1 events, we consider the power spectrum of $`N(t)`$, the number of T1 events at each time step,
$$p_N(f)=\int dt\int d\tau \,e^{if\tau }N(t)N(t+\tau ),$$
(9)
where $`f`$ is the frequency in units of $`\mathrm{MCS}^{-1}`$. Figure 8(a) shows typical power spectra of the time series of T1 events in an ordered foam under bulk shear. At a shear rate $`\beta =0.01`$, the T1 events show no power law. The peak at $`10^{-3}`$ is due to the periodicity of the bubble structure in an ordered foam when a “sliding plane” develops. At shear rate $`\beta =0.02`$, the spectrum resembles that of white noise. As the shear rate increases to $`\beta =0.05`$, the power spectrum develops a power law tail at the low frequency end, with an exponent very close to 1. In a disordered foam, with increasing shear rate, the spectra for the T1 events gradually change from completely uncorrelated white noise to $`1/f`$ at higher shear rates. By $`1/f`$, we mean any noise with power spectrum $`S(f)\sim f^{-\alpha }`$ with $`0<\alpha <2`$, typically near 1, i.e. intermediate between Brownian noise ($`\alpha =2`$) and white noise ($`\alpha =0`$).
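Numerically, $`p_N(f)`$ can be estimated from the recorded time series via the Wiener–Khinchin relation, which replaces the autocorrelation integral in Eq. (9) by an averaged squared Fourier modulus. A minimal Python sketch (the segment averaging and function name are illustrative choices, not the analysis code itself):

```python
import numpy as np

def t1_power_spectrum(N, n_segments=8):
    """Estimate p_N(f) of a T1-event time series N(t), cf. Eq. (9)."""
    N = np.asarray(N, dtype=float)
    N -= N.mean()                         # remove the zero-frequency offset
    seg_len = len(N) // n_segments
    spectra = [np.abs(np.fft.rfft(N[i * seg_len:(i + 1) * seg_len])) ** 2
               / seg_len for i in range(n_segments)]
    p = np.mean(spectra, axis=0)          # segment averaging reduces variance
    f = np.fft.rfftfreq(seg_len, d=1.0)   # frequency in MCS^-1
    return f[1:], p[1:]                   # drop the DC bin

# A log-log fit of p against f over the low-frequency tail then gives the
# exponent alpha used to classify the noise.
```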
These power spectra suggest that the experimental results for T1 events correspond to a low shear rate, with no long-range correlation among T1 events. Structural disorder introduces correlations among the events. Power-law avalanches do not occur in ordered hexagonal cells at low shear rate, where rearrangements occur simultaneously. At a high shear rate, when the value of $`\dot{\gamma }^{-1}`$ is comparable to the duration of rearrangement events, the bubbles move constantly. The foam behaves viscously, since rearrangements are continuously induced before bubbles can relax into metastable configurations which can support stress elastically. At these rates, even an initially ordered structure behaves like a disordered one, as shear destroys its symmetry and periodicity.
In a disordered foam, whenever one T1 event happens, the deformed bubbles release energy by viscous dissipation and also transfer stress to their neighboring bubbles, which in turn are more likely to undergo a T1 switch. Thus, T1 events become more correlated. As shown in Fig. 8(b), the power spectra change from that of white noise toward $`1/f`$ noise. When the first sufficiently large region to accumulate stress undergoes T1 events, it releases stress and pushes most of the rest of the bubbles over the brink, causing an “infinite avalanche”: some bubbles switch neighbors, triggering their neighbors to rearrange (and so on), until a finite fraction of the foam has changed configuration, causing a decrease in the total stored energy, mimicking the cooperative dynamic events in a random field Ising model. We never observe system-wide avalanches as claimed in the vertex model simulations, agreeing with Durian’s simulations and Dennin et al.’s experiments. For even greater disorder, the bubbles essentially rearrange independently, provided spatial correlations for area and topology are weak. Pairs of bubbles switch as the strain exceeds their local yield points. Although more frequent, the avalanches are small, without long correlation lengths. Figure 8(c) shows the power spectra for T1 events for a highly disordered structure with $`\mu _2(n)=1.65`$. We observe no power law behavior, even at high shear rates. Thus a highly disordered foam resembles a homogeneous but nonlinear viscous fluid.
Over a range of structural disorder the topological rearrangement events are strongly correlated. The question naturally arises whether the transition between these correlated and uncorrelated regimes is sharp or smooth, and what determines the transition points. We are currently carrying out detailed simulations involving different structural disorder to study this transition.
Previous simulations and experiments measured $`\overline{N}`$, the average number of T1 events per bubble per unit shear, and concluded that $`\overline{N}`$ was independent of the shear rate. Our simulation results in three different foams with shear rates covering two orders of magnitude, however, disagree. As shown in Fig. 9, our data indicate that $`\overline{N}`$ depends sensitively on both the polydispersity of the foam and the shear rate. Only at large shear rates does $`\overline{N}`$ seem to be independent of the shear rate, which might correspond to the above-mentioned experiments.
The avalanches and $`1/f`$ power spectra resemble a number of systems with many degrees of freedom and dissipative dynamics which organize into marginally stable states . Simple examples include stick-slip models, driven chains of nonlinear oscillators, and sandpile models. In sandpile models, both the energy dissipation rate (total number of transport events at each time step) and the output current (the number of sand grains leaving the pile) show power-law scaling in their distributions. In particular, if the avalanches do not overlap, then the power spectrum of the output current follows a power law with a finite size cutoff . The $`1/f`$-type power spectra result from random superposition of individual avalanches .
If the analogy with sandpiles holds, we should expect the power spectra of the time derivative $`d\varphi /dt`$ of the stored energy, i.e. the energy change at every time step, to be $`1/f`$-like, and thus the power spectra of $`\varphi `$ to be $`f^{-2}`$. However, in our simulations $`d\varphi /dt`$ does not show $`1/f`$-type broadband noise. Figures 10(a-c) show the corresponding power spectra for $`\varphi `$ from the same simulations as Fig. 8, which are obviously not $`f^{-2}`$, i.e. the topological rearrangements are not in the same universality class as sandpiles. In particular, Fig. 10(c) shows a complicated trend: the power spectrum changes from a small slope at shear rate $`\beta =0.001`$ to $`f^{-0.8}`$ spanning over 4 decades at $`\beta =0.005`$. But as the shear rate increases, the power law disappears. Instead, a flat tail develops at high frequencies due to Gaussian noise. Different slopes appear over frequency regimes of different widths, indicating the existence of multiple time scales and length scales. We will further explore the implications of these spectra for $`\varphi `$ elsewhere.
## VI Effects of structural disorder
As structural disorder plays an important role in the mechanical response, we study the effect of disorder on the yield strain and the evolution of disorder in foams under shear. We define the yield strain, at which the first T1 avalanches occur, as the displacement of the top boundary of the foam divided by half the height of the foam (since the strain is zero in the middle of the foam), rescaled by the average bubble width.
Figure 11(a) shows the yield strain as a function of shear rate $`\beta `$ for different foam disorders. We find that for an ordered foam at low shear rates, when a sliding plane occurs in the middle of the foam, the yield strain is independent of shear rate. We expect this independence because T1 events occur almost simultaneously in the sliding plane, and the bubble size determines the yield strain. At high shear rates, T1 events distribute more homogeneously throughout the foam, which lowers the yield strain. The upper limit for the yield strain in an ordered foam is $`2/\sqrt{3}`$, when all the vertices in a hexagonal bubble array simultaneously become four-fold under shear. The nucleation of topological defects (5- and 7-sided bubble pairs) and their propagation in foams lower the yield strain. But the yield strain does not reach zero even at a very high shear rate of $`\beta =0.05`$. An ordered foam remains a solid with finite yield strain. For a disordered foam, the yield strain is lower for higher shear rates, and at the same shear rate the yield strain decreases drastically to zero as disorder increases: the foam changes from a viscoelastic solid to a viscoelastic fluid.
The most commonly used measure for topological disorder is the second moment of the topological distribution, $`\mu _2(n)`$. During diffusional foam coarsening, the topological distribution tends to a stationary scaling form and $`\mu _2(n)`$ assumes a roughly constant value. Experiments on soap foams with up to 10000 bubbles in the initial state and early smaller simulations gave a value of $`\mu _2(n)=1.4`$ in the scaling regime. Other simulations showed a slightly lower value of $`\mu _2=1.2`$ . Weaire et al. reported shear-induced ordering, i.e. reduction of $`\mu _2(n)`$ with shearing. However, in our simulations, foams with initial $`\mu _2(n)`$ ranging from $`0.81`$ to $`2.02`$ show no shear-induced ordering. Instead, $`\mu _2(n)`$ increases and never decreases back to its initial unstrained value. Figure 11(b) shows the evolution of $`\mu _2(n)`$ for a variety of initial topological distributions. The difference between the simulations of Weaire et al. and ours is not surprising. Weaire et al. applied step strain and observed the resulting equilibrated pattern. In our simulations, bubbles are constantly under shear, i.e. the foam is not in equilibrium. The topological disorder, as measured by $`\mu _2(n)`$, therefore increases as the energy accumulates and decreases as the energy releases, as does the number of topological events, and does not necessarily settle to an equilibrium value.
In an ordered foam at shear rate $`\beta =0.01`$ \[Fig. 4\] with separated T1 avalanches, $`\mu _2(n)`$ fluctuates in synchrony with the total energy, shown in Fig. 11(c). When the avalanches overlap, $`\mu _2(n)`$ fluctuates more smoothly, but almost always has a positive correlation with the total stored energy.
Notice that in the energy-strain plots \[Fig. 2(b), Fig.4(b)\], stored energy slowly increases over long times, because we continuously apply shear and the foam is always out of equilibrium. Bubbles do not fully recover their original shapes. This deformation slowly accumulates at long times. In disordered foams, topological rearrangement may enhance the spatial correlation of bubbles, i.e. small bubbles cluster over time, as predicted by Langer and Liu’s bubble model . We will report results on spatial correlations for bubble topology $`n`$, and area $`a`$ elsewhere .
## VII Conclusions
We have included a driving term in the large-Q Potts model to apply shear to foams of different disorder. When the driving rate is too fast for the foam to relax, the system falls out of equilibrium. The mechanical response then lags behind the driving shear, resulting in hysteresis. Our model differs from most well studied driven spin models: our spins do not couple to an external field the way Ising spins couple to an oscillating magnetic field, and all action occurs only at the domain boundaries.
Because of the difficulty in characterizing local stress and strain in Potts model foams, we have chosen to use the rescaled total bubble wall length, $`\varphi `$, as the order parameter for hysteresis. While the hysteresis loops reflect the nonlinearity and metastability of bubble configurations, it is still an open question whether we can find more appropriate order parameter(s) that will provide more insight into the dynamics of T1 events. Another consequence of this difficulty is the lack of a clear quantitative description of the viscosity in our simulated foams in terms of the model parameters, a difficulty mirrored in the lack of understanding of effective foam viscosity in experiments. As mentioned above, a fundamental problem is the lack of experimental data on the viscosity of two-dimensional foams. We hope that these simulations will motivate new experiments in this direction.
The local cellular patterns characteristic of T1 events in foams are strikingly similar to the low temperature defects and the hexatic-square Voronoi patterns observed in two-dimensional (particle) systems, e.g. two-dimensional liquid crystals and colloidal suspensions, where studies have focused on the melting phase transitions. This similarity led us to try the defect description used in melting studies, namely the nearest-neighbor bond-orientation order parameter $`\sum _n\mathrm{exp}(6i\theta _n)`$, where $`\theta _n`$ is the angle between two neighboring bonds. However, we found it insensitive to the orientation change of bubble walls during T1 events and thus not a useful order parameter. How the nucleation and propagation of topological defects in a sheared foam relate to the nucleation and role of topological defects in the two-dimensional melting studies remains an interesting question.
We have demonstrated three different hysteresis regimes in an ordered foam under oscillating shear. At small strain amplitudes, bubbles deform and recover their shapes elastically after stress release. The macroscopic response is that of a linear elastic solid. For larger strain, the energy-strain curve starts to exhibit hysteresis before any topological rearrangements occur, indicating a macroscopic viscoelastic response. Increasing the strain amplitude increases the area of the hysteresis loop. When the applied strain amplitude exceeds a critical value, the yield strain, T1 events occur and the foam starts to flow, and we observe macroscopic irreversibility.
We are currently testing this observation in a similar experiment, applying periodic boundary shear to a homogeneous foam and measuring the total bubble wall length directly (instead of using diffusing-wave spectroscopy) to obtain $`\varphi `$. We can directly compare these data with the three predicted distinct behaviors. The viscoelasticity of foams is better characterized using the complex modulus $`G(\omega )`$, which we plan to use in future investigations.
The comparison between the mechanical responses of ordered and disordered foams provides some insight into the relation between local structure and macroscopic response. An ordered foam has a finite yield strain. Structural disorder decreases the yield strain; sufficiently high disorder changes the macroscopic response of a foam from a viscoelastic solid to a viscoelastic fluid. A random foam with broad topology and area distributions lacks the linear elastic and viscoelastic solid regimes. Any finite stress can lead to topological rearrangements of small bubbles and thus to plastic yielding of the foam. More detailed simulations and experiments are needed to determine the dependence of the yield strain on the area and topological distributions of the foam, and on the shear rates. High shear rates effectively introduce more topological defects into the foam, as manifested in ordered foams driven at high shear rates. Local topological rearrangements (the appearance of non-hexagonal bubbles) occur throughout the foam, resulting in more homogeneous flow behavior, as in disordered foams.
Our simulations show that $`\overline{N}`$, the average number of T1 events per bubble per unit shear, is sensitive to the area distribution of the foam and the shear rate. Only for a small range of shear rates do foams having similar distributions show similar values of $`\overline{N}`$, which may explain why previous studies found $`\overline{N}`$ to be rate independent. Our results emphasize the importance of both structural disorder and configurational metastability to the behavior of soft cellular materials.
In disordered foams, the number of T1 events is not directly proportional to the elastic energy release, because each T1 event is non-local and different T1 events can release different amounts of energy. Therefore, we count T1 events and energy release separately.
Avalanche-like topological rearrangements play a key role in foam rheology. Our simulations show that T1 events have no long-range correlations for ordered structures or at low shear rates, consistent with experimental observations. As the shear rate or structural disorder increases, the topological events become more correlated. Over a range of disorders, the power spectra are $`1/f`$. As Hwa and Kardar pointed out, $`1/f`$ noise may arise from a random superposition of avalanches. These $`1/f`$ spectra suggest that avalanches of different sizes, although they overlap, are independent of each other. Both greater structural disorder and higher shear rate introduce a flat tail at the high frequency end, a signature of Gaussian noise, but do not change the exponent in the power law region.
However, unlike the sandpile model, the power spectra of the total energy, rather than of the energy dissipation, show a similar trend toward $`1/f`$. One major difference between T1 avalanches and sand avalanches is that each sand grain carries the same energy, while each T1 event can have a different energy. A better analogy may be a “disordered sandpile” model, where the sand grains have different sizes or densities, and avalanches overlap.
Avalanches of T1 events decrease the stored elastic energy, leading to foam flow. How do single T1 events contribute to the global response? Magnetic fluid foam experiments offer a unique opportunity to locally drive a vertex and force a single T1 event (or a T1 avalanche) by a well-controlled local magnetic field. We are investigating the effects of single T1 events using magnetic fluid foam experiments and the corresponding Potts model simulations .
## Acknowledgment
We would like to thank F. Graner, M. Sano, S. Boettcher and I. Mitkov for fruitful discussions. This work was supported in part by NSF DMR-92-57001-006, ACS/PRF and NSF INT 96-03035-0C, and in part by the US Department of Energy.
## Figure Captions
Figure 1. Schematic diagram of a T1 event, where bubbles $`a,b,c`$ and $`d`$ swap neighbors. Notice that as the edge between the pair of vertices shrinks, the contact angles not in contact with this edge remain $`120^{}`$.
Figure 2. An ordered foam under boundary shear: (a) Snapshots; different shades of grey encode bubble topologies (lattice size $`256\times 256`$). (b) Energy-strain curve and the number of T1s presented in 50 MCS bins.
Figure 3. Energy-strain curves for ordered foam under periodic boundary shear: (a) Numbers above the figures are $`\gamma _0`$. Progressively increasing shear amplitude at $`𝒥_{ij}=3`$ leads to a transition between three types of hysteresis: $`\gamma _0=1.0`$ corresponds to an elastic response, $`\gamma _0=3.5`$ shows viscoelastic response (before any T1 event occurs), and $`\gamma _0=7.0`$ is a typical response when only one T1 event occurs during one cycle of strain loading. The intermediate steps show that the transition between these three types is smooth. (b) Numbers above the figures are $`𝒥_{ij}`$. Progressively decreasing liquid viscosity (increasing $`𝒥_{ij}`$ at $`\gamma _0=7`$) shows a similar transition between elastic ($`𝒥_{ij}=10`$) and viscoelastic ($`𝒥_{ij}=5`$) regimes, and flow due to T1 events ($`𝒥_{ij}=3`$ for one T1 event and $`𝒥_{ij}=1`$ for three T1 events during one strain cycle). (c) Phase diagram of hysteresis in the parameter space $`\gamma _0`$ vs. $`𝒥_{ij}`$. (d) Effect of progressively increasing temperature $`T`$ ($`𝒥_{ij}=3`$, $`\gamma _0=4`$). All data shown here are averaged over 10 periods.
Figure 4. An ordered foam under bulk shear with shear rate $`\beta =0.01`$: (a) Snapshots; shades of grey encode bubble topologies as in Fig. 2. (b) Energy-strain curve and the number of T1s. The magnified view in the box shows the correlation between stress releases and overlapping avalanches of T1 events.
Figure 5. An ordered foam under bulk shear with shear rate $`\beta =0.05`$: (a) Snapshots; (b) Energy-strain curve and the number of T1s. The magnified view in the box shows the correlation between stress releases and overlapping avalanches of T1 events.
Figure 6. A disordered foam under boundary shear: (a) Snapshots; shades of grey encode bubble topologies (lattice size $`256\times 256`$). (b) Energy-strain curve and number of T1 events presented in 100 MCS bins.
Figure 7. A disordered foam under bulk shear at shear rate $`\beta =0.01`$: (a) Snapshots; shades of grey encode bubble topologies (lattice size $`256\times 256`$). (b) Energy-strain curve and number of T1 events.
Figure 8. Power spectra of the number of T1 events: (a) An ordered foam for three shear rates $`\beta =0.01`$, $`0.02`$ and $`0.05`$ respectively; (b) A disordered foam \[$`\mu _2(n)=0.81,\mu _2(a)=7.25`$\] for five shear rates from $`0.001`$ to $`0.05`$; (c) A very disordered foam \[$`\mu _2(n)=1.65,\mu _2(a)=21.33`$\] for three shear rates.
Figure 9. Number of T1 events per unit shear per bubble as a function of shear rate for four foams: squares correspond to a foam of 180 bubbles, with $`\mu _2(n)=1.65`$, $`\mu _2(a)=21.33`$; stars correspond to a foam of 246 bubbles with $`\mu _2(n)=1.72`$, $`\mu _2(a)=15.1`$; triangles correspond to a foam of 377 bubbles, with $`\mu _2(n)=1.07`$, $`\mu _2(a)=2.50`$; and circles correspond to a foam of 380 bubbles, with $`\mu _2(n)=0.95`$, $`\mu _2(a)=2.35`$. The inset shows on a log-log scale that $`\overline{N}`$ varies by several orders of magnitude.
Figure 10. Power spectra of the energy: (a) An ordered foam for three shear rates. (b) A disordered foam \[$`\mu _2(n)=0.81,\mu _2(a)=7.25`$\] for five shear rates. (c) A very disordered foam \[$`\mu _2(n)=1.65,\mu _2(a)=21.33`$\] for five shear rates.
Figure 11. (a) Yield strain as a function of shear rate. (b) Evolution of $`\mu _2(n)`$ under constant bulk shear; the legend denotes the initial $`\mu _2(n)`$. (c) Evolution of $`\mu _2(n)`$ under steady bulk shear for an ordered foam, showing the correlation between the stress decreases and $`\mu _2(n)`$.
## 1 Introduction
Since any kind of Bose particles shares the same statistical properties as photons, it is natural to ask whether coherent emission of other Bose particles could be realized, by analogy with the laser radiation of photons. This question is now intensively discussed in connection with the possibility of realizing atom lasers \[1–7\]. Experiments show that Bose condensed atoms in a trap are in a coherent state. Therefore a condensate released from the trap propagates according to a single–mode wave equation represented by the nonlinear Schrödinger equation \[10–12\]. Output couplers for Bose condensed atoms are realized by means of short radiofrequency pulses transferring atoms from a trapped state to an untrapped state. With the help of additional external fields one could create Bose condensates in non–ground states or in vortex states, thus forming various modes of atom lasers.
Another possibility is related to the creation of a large number of pions in hadronic, nuclear, and heavy–ion collisions. In such collisions, up to hundreds of pions can be created simultaneously. When the density of pions produced in the course of these collisions is such that their mean particle separation approaches the thermal wavelength, multi–particle interference becomes important. Strong correlations between pions can result in the formation of a coherent state and in the feasibility of getting a pion laser.
Coherent states are usually associated with Bose condensed states. Therefore those particles that could exhibit Bose condensation under extreme conditions characteristic of fireballs produced in heavy–ion collisions could be also considered as candidates for lasing. For example, such candidates could be dibaryons that, as was shown \[20–23\], can form a Bose–Einstein condensate.
One of the main prerequisites for the creation of coherent states is, as mentioned above, a sufficient density of generated Bose particles. It is, hence, necessary to understand the optimal conditions providing the maximal possible density of bosons. The aim of this paper is to analyse the behaviour of dense and hot nuclear matter in order to answer the question of what kinds of bosons can be generated in large quantities in such matter, and under what conditions.
## 2 Multichannel Model
To consider dense and hot nuclear matter, in which various kinds of particles can be generated, we use the multichannel approach to clustering matter. The idea of this approach goes back to the methods of describing composite particles \[24–28\]. The most complete basis for this problem was formulated by Weinberg \[29–33\]. According to this approach, it is possible to introduce into any theory fictitious elementary particles, or quasiparticles, without changing any physical predictions. To accomplish this, the interaction among the original, truly elementary, particles must be modified in the appropriate way. By “composite particles” one can mean bound states or resonances. If fictitious elementary particles, quasiparticles, are introduced to take the place of all composite particles, then perturbation theory can always be used. The modification of the Hamiltonian weakens the original interaction enough to remove divergencies. If such quasiparticles are introduced for each resonance or bound state, then two–body scattering problems can always be solved by perturbation theory. A nice account of the quasiparticle approach was given by Weinberg. A résumé of this approach can be formulated as follows: One introduces fictitious elementary particles into the theory, in rough correspondence with the bound states of the theory. In order not to change the physics, one must at the same time change the potential. Since the bound states of the original theory are now introduced as elementary particles, the modified potential must not produce them also as bound states. Hence, the modified potential is weaker, and can in fact be weak enough to allow the use of perturbation theory.
Composite particles are, in other words, called clusters. Following the multichannel approach to describing clustering matter, let us consider an ensemble of particles that can form different bound states interpreted as composite particles or clusters. The space of quantum states associated with a cluster of $`z_i`$ particles is termed an $`i`$–channel. The number $`z_i`$ of particles forming a bound cluster is called the compositeness number. The average density of matter is a sum
$$\rho =\sum _iz_i\rho _i,$$
(1)
in which
$$\rho _i=\frac{\zeta _i}{(2\pi )^3}\int n_i(\vec{k})\,d\vec{k}$$
(2)
is an average density of $`i`$–channel clusters, $`\zeta _i`$ being a degeneracy factor, and
$$n_i(\vec{k})=\langle a_i^{\dagger }(\vec{k})a_i(\vec{k})\rangle $$
is a momentum distribution of the $`i`$–channel clusters. The statistical weight of each channel is characterized by the channel probability
$$w_i\equiv z_i\frac{\rho _i}{\rho }.$$
(3)
The Hamiltonian of clustering matter reads
$$H=\sum _iH_i+CV,$$
(4)
where $`H_i`$ is an $`i`$–channel Hamiltonian and $`CV`$ is a nonoperator term providing the validity of the principle of statistical correctness , $`V`$ being the system volume. Since strong short–range interactions between original particles are included into the definition of bound clusters, the left long–range interactions can be treated as week \[29–34\]. These long–range interactions permit us to apply the mean–field approximation resulting in an $`i`$–channel Hamiltonian
$$H_i=\sum _k\omega _i(\vec{k})a_i^{\dagger }(\vec{k})a_i(\vec{k})$$
(5)
with an effective spectrum
$$\omega _i(\vec{k})=\sqrt{k^2+m_i^2}+U_i-\mu _i,$$
(6)
where $`m_i`$ is an $`i`$–cluster mass; $`U_i`$, a mean field; and $`\mu _i`$ the chemical potential of $`i`$–clusters. Then the momentum distribution of $`i`$–clusters, in the Hartree approximation, takes the form
$$n_i(\vec{k})=\frac{1}{\mathrm{exp}\{\beta \omega _i(\vec{k})\}\mp 1},$$
(7)
in which $`\beta `$ is inverse temperature; the upper or lower signs in (7) stand for Bose– or Fermi clusters, respectively. When the average baryon density
$$n_B=\sum _i\rho _iB_i,$$
(8)
where $`B_i`$ is the baryon number of an $`i`$–cluster, is fixed, then the chemical potentials of $`i`$–clusters,
$$\mu _i=\mu _BB_i\qquad (n_B=\mathrm{const}),$$
(9)
are expressed through the baryon potential $`\mu _B`$ defined from (8).
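To make the closure of Eqs. (2) and (7)–(9) concrete: at given temperature and baryon density, $`\mu _B`$ is the root of Eq. (8). The following Python sketch solves this for a toy two-channel system of nucleons and pions, with the mean fields $`U_i`$ set to zero and antibaryon channels omitted for brevity; the masses, degeneracies, temperature, and function names are illustrative inputs rather than the parameter set used below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm, converts MeV^3 to fm^-3

def density(T, mu, m, zeta, bose):
    """Channel density, Eqs. (2) and (7), with U_i = 0 for simplicity.
    For bosons the chemical potential must satisfy mu < m."""
    eta = -1.0 if bose else 1.0
    def integrand(k):                       # k in MeV
        x = np.exp(-(np.sqrt(k * k + m * m) - mu) / T)
        return k * k * x / (1.0 + eta * x)  # overflow-safe form of Eq. (7)
    n, _ = quad(integrand, 0.0, 30.0 * T + 10.0 * m)
    return zeta * n / (2.0 * np.pi ** 2) / HBARC ** 3

# toy channels: (mass [MeV], degeneracy, baryon number, Bose statistics?)
channels = [(939.0, 4, 1, False),           # nucleons (spin x isospin)
            (140.0, 3, 0, True)]            # pions

def baryon_density(mu_B, T):                # left-hand side of Eq. (8)
    return sum(B * density(T, B * mu_B, m, z, bose)
               for (m, z, B, bose) in channels if B != 0)

T, n_B = 50.0, 0.167                        # MeV, fm^-3
mu_B = brentq(lambda mu: baryon_density(mu, T) - n_B, 0.0, 938.0)
```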
The mean density of matter (1) may be written as the sum
$$\rho =\rho _1+\rho _z,\qquad \rho _1\equiv \sum _{\{i\}_1}\rho _i,\qquad \rho _z\equiv \sum _{\{i\}_z}z_i\rho _i$$
(10)
of the density of unbound particles, $`\rho _1`$, and the density of particles bound in clusters, $`\rho _z`$. Then the conditions of statistical correctness are
$$\frac{\delta H}{\delta \rho }=0,\frac{\delta H}{\delta \rho _z}=0.$$
(11)
The original unbound particles in nuclear matter are quarks and gluons. Their collection is named quark–gluon plasma. The mean–field potential of the quark–gluon plasma can be written as
$$U_1\equiv U(\rho )=J^{1+\nu }\rho ^{\nu /3},$$
(12)
where $`J`$ is an effective intensity of interactions and $`\nu `$ is an exponent of a confining potential, $`0<\nu \le 2`$. In what follows we take $`\nu =2`$. The mean field for $`i`$–channel clusters reads
$$U_i=z_i\left[\mathrm{\Phi }\rho _z+U(\rho )-U(\rho _z)\right],$$
(13)
where $`\mathrm{\Phi }`$ is a reference interaction parameter. With the potential (12), we have
$$U_i=z_i\mathrm{\Phi }\rho _z+z_iJ^{1+\nu }\left(\rho ^{\nu /3}-\rho _z^{\nu /3}\right).$$
From here and the condition of statistical correctness (11), we find the correcting term
$$C=\frac{\nu }{3-\nu }J^{1+\nu }\left(\rho ^{1-\nu /3}-\rho _z^{1-\nu /3}\right)-\frac{1}{2}\mathrm{\Phi }\rho _z^2.$$
(14)
We still have two undefined parameters, $`J`$ and $`\mathrm{\Phi }`$. The first of them is an effective intensity of interactions in the quark–gluon plasma, which we take as $`J=225MeV`$. The second, that is the reference parameter $`\mathrm{\Phi }`$, may be scaled by means of nucleon–nucleon interactions $`V_{33}(r)`$ as follows:
$$\mathrm{\Phi }=\frac{1}{9}\int V_{33}(r)\,d\vec{r}.$$
(15)
Accepting for $`V_{33}(r)`$ the Bonn potential, we get $`\mathrm{\Phi }=35MeVfm^3`$. For nuclear matter of the normal baryon density $`n_{0B}=0.167fm^{-3}`$, this gives an average interaction energy $`\mathrm{\Phi }n_{0B}=5.845MeV`$.
In this way, the model is completely defined and we can calculate all its thermodynamic characteristics. For the pressure we have
$$p=\sum _ip_i-C,\qquad p_i=\pm T\frac{\zeta _i}{(2\pi )^3}\int \mathrm{ln}\left[1\pm n_i(\vec{k})\right]d\vec{k}.$$
(16)
The energy density is
$$\epsilon =\sum _i\epsilon _i+C,\qquad \epsilon _i=\frac{\zeta _i}{(2\pi )^3}\int \sqrt{k^2+m_i^2}\,n_i(\vec{k})\,d\vec{k}+\rho _iU_i.$$
(17)
From here, we may find the specific heat and the reduced specific heat,
$$C_V=\frac{\partial \epsilon }{\partial T},\qquad \sigma _V=\frac{T}{\epsilon }C_V,$$
(18)
respectively, and the compression modulus
$$\kappa _T^{-1}=n_B\frac{\partial p}{\partial n_B}.$$
(19)
One may also define an effective sound velocity, $`c_{eff}`$, by the ratio
$$c_{eff}^2=\frac{p}{\epsilon }.$$
(20)
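Numerically, these quantities follow from $`p`$ and $`\epsilon `$ tabulated on a temperature grid at fixed baryon density; a minimal Python sketch (the finite-difference scheme is an illustrative choice):

```python
import numpy as np

def response_functions(T, eps, p):
    """Eqs. (18) and (20) from tabulated eps(T) and p(T) at fixed n_B."""
    C_V = np.gradient(eps, T)   # specific heat, Eq. (18)
    sigma_V = T * C_V / eps     # reduced specific heat
    c_eff2 = p / eps            # effective sound velocity squared, Eq. (20)
    return C_V, sigma_V, c_eff2

# Eq. (19) is analogous but needs p tabulated on a grid of n_B at fixed T:
# kappa_T_inv = n_B * np.gradient(p, n_B)
```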
Statistical weights of the corresponding channels are given by the channel probabilities defined in (3). For the following analysis, it is convenient to introduce also the plasma–channel probability
$$w_{pl}=\frac{1}{\rho }\left(\rho _g+\rho _u+\rho _{\overline{u}}+\rho _d+\rho _{\overline{d}}\right),$$
(21)
where $`\rho _g`$ is the density of gluons, while other terms are the densities of $`u`$– and $`d`$–quarks and antiquarks, respectively. The pion–channel probability
$$w_\pi =\frac{2}{\rho }\left(\rho _{\pi ^+}+\rho _{\pi ^{-}}+\rho _{\pi ^0}\right)$$
(22)
is expressed through the densities of $`\pi ^+,\pi ^{}`$, and $`\pi ^0`$ mesons. The probability of other meson channels, except pions, is
$$w_{\eta \rho \omega }=\frac{2}{\rho }\left(\rho _\eta +\rho _{\rho ^+}+\rho _{\rho ^{-}}+\rho _{\rho ^0}+\rho _\omega \right).$$
(23)
The nucleon–channel probability reads
$$w_3=\frac{3}{\rho }\left(\rho _n+\rho _{\overline{n}}+\rho _p+\rho _{\overline{p}}\right)$$
(24)
containing the densities of neutrons, antineutrons, protons, and antiprotons. We calculate also the probabilities of multiquark channels, such as the dibaryon–channel probability
$$w_6=\frac{6}{\rho }\left(\rho _{6q}+\rho _{6\overline{q}}\right),$$
(25)
and, analogously, the $`9`$–quark and $`12`$–quark channel probabilities.
Now we can analyse the thermodynamic behaviour of the described model in order to define what kinds of Bose particles, and under what conditions, can be generated in large quantities, that is, when the corresponding Bose–channel probabilities are maximal. The choice of parameters is as given above.
## 3 Analysis
The multichannel model of nuclear matter described in the previous section has been solved numerically. The pressure (16) is shown in Fig. 1 as a function of temperature $`\mathrm{\Theta }=k_BT`$ in $`MeV`$ and of relative baryon density $`n_B/n_{0B}`$. The pressure is a monotonic function of its variables, as is the energy density (17) in Fig. 2. But it is interesting that their ratio (20) in Fig. 3 is a nonmonotonic function displaying a maximum at a temperature around $`T_d=160MeV`$. The latter, as will become clear from the following, can be associated with the temperature of the deconfinement crossover. The specific heat and the reduced specific heat given in (18) are presented in Figs. 4 and 5, respectively. The compression modulus (19) is shown in Fig. 6. Again, the maxima of the reduced specific heat and the compression modulus can be associated with the deconfinement crossover. The following Figs. 7 to 11 present the behaviour of the channel probabilities for the quark–gluon plasma (21), pions (22), other mesons (23), nucleons (24), and dibaryons (25). Since the possibility of the appearance of the dibaryon Bose condensate is of special interest, we show in Fig. 12 the corresponding channel probability $`w`$. The Bose condensates of heavier multiquark clusters do not arise. The channel probabilities of such heavier clusters are negligibly small, being, for instance, less than $`10^{-3}`$ and $`10^{-5}`$ for $`9`$– and $`12`$–quark clusters, respectively. We also show in Figs. 13 to 15 the channel probabilities, as functions of the relative baryon density $`n_B/n_{0B}`$ at zero temperature, for the quark–gluon plasma (21), nucleons (24), and dibaryons (25).
The analysis demonstrates that the maximal density of pions can be generated around the temperature $`T\approx 160MeV`$ of the deconfinement crossover at low baryon density $`n_B<n_{0B}`$. The corresponding channel probability of pion production can reach $`w_\pi \approx 0.6`$. The total probability of other meson channels reaches only $`w_{\eta \rho \omega }\approx 0.16`$ at $`T\approx 200MeV`$ and $`n_B<n_{0B}`$. However, the generation of these mesons is more noticeable than that of pions at high temperatures and baryon densities, although it is never intensive, with the related probability not exceeding the order of $`10^{-1}`$.
The optimal region for the creation of dibaryons, where their channel probability reaches $`w_6\approx 0.7`$, is the region of low temperatures $`T<20MeV`$ and the range of baryon densities $`n_B/n_{0B}`$ from 5 to 20. At zero temperature, their probability diminishes rather slowly with increasing baryon density, so that at $`n_B\approx 100n_{0B}`$ we still have $`w_6\approx 0.4`$. At low temperatures dibaryons form a Bose–condensed state.
Above the deconfinement crossover temperature $`T_d\approx 160MeV`$, there is an intensive generation of gluons in the quark–gluon plasma. At sufficiently high temperatures, gluon radiation can, in principle, become so intense as to acquire a noticeable coherent component.
Thus, the most probable candidates for realizing laser generation are pions, dibaryons, and gluons. Each kind of these Bose particles has its own region where the corresponding channel probability is maximal. For pions it is $`T\approx 160MeV`$ and $`n_B<n_{0B}`$; for dibaryons, $`T<20MeV`$ and $`n_B`$ from $`5n_{0B}`$ to $`20n_{0B}`$; and for gluons, this is the high–temperature region $`T>160MeV`$. If it is feasible to realize the corresponding conditions, one could get a pion laser, a dibaryon laser, or a gluon laser, respectively. Note that to realize such lasing in practice one has to satisfy several other requirements, of which we consider here only one necessary condition.
It is also worth noting that if one tries to achieve the desired conditions of lasing in the process of hadronic or heavy–ion collisions then one can get only a pulsed radiation of Bose particles. If the lifetime of a fireball formed during a collision is longer than the local–equilibrium time then the quasiequilibrium picture of the process is permissible. In such a case, it is possible to use the multichannel model, as is described here, with temperature and baryon density given as functions of time, the time dependence being in accordance with the related fireball expansion.
## Acknowledgement
I am grateful to E.P. Yukalova for useful discussions. A grant from the University of Western Ontario, London, Canada, is appreciated.
## Figure captions
Fig.1. The pressure (in units of $`J^4`$) of the multichannel model.
Fig.2. The energy density (in units of $`J^4`$) on the temperature–baryon density plane.
Fig.3. The pressure–to–energy density ratio related to an effective sound velocity squared, $`c_{eff}^2`$.
Fig.4. The specific heat (in units of $`J^3`$) for the multichannel model.
Fig.5. The reduced specific heat displays a maximum that can be associated with the deconfinement crossover.
Fig.6. The compression modulus (in units of $`J^4`$) for the multichannel model.
Fig.7. The channel probability of the quark–gluon plasma.
Fig.8. The pion channel probability.
Fig.9. The total probability of other, except pion, meson channels.
Fig.10. The nucleon channel probability.
Fig.11. The dibaryon channel probability.
Fig.12. The channel probability of Bose–condensed dibaryons.
Fig.13. The plasma channel probability at zero temperature as a function of the relative baryon density.
Fig.14. The nucleon channel probability at zero temperature.
Fig.15. The dibaryon channel probability at zero temperature.
# Characterization of transport and magnetic properties in thin film La0.67(CaxSr1-x)0.33MnO3 mixtures
## I Introduction
The effect of dopants on the ABO<sub>3</sub>-type manganese oxides has been an area of intense research activity, primarily in an attempt to understand the physics behind the colossal magnetoresistance (CMR) behavior seen in these materials. Typically, studies have been carried out by replacing either the trivalent ion or the Mn ion. Recently a study of polycrystalline La<sub>0.75</sub>Ca<sub>0.25-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> was carried out in order to look at changes in magnetic entropy, where the doping variation is on the divalent site. In the work by Hwang et al. the system La<sub>0.7</sub>(Ca<sub>x</sub>Sr<sub>1-x</sub>)<sub>0.3</sub>MnO<sub>3</sub> was looked at, and exhibited a change in the tolerance factor t, defined as $`t=d_{AO}/(\sqrt{2}d_{MnO})`$, from $`\sim `$ 1.2 to 1.24. We have undertaken a study of the changes in electronic, structural, and magnetic properties of thin films of La<sub>0.67</sub>(Ca<sub>x</sub>Sr<sub>1-x</sub>)<sub>0.33</sub>MnO<sub>3</sub> (LCSMO) as the Ca/Sr ratio is varied.
## II Sample preparation and characterization
Our samples were grown by off-axis sputtering using composite targets of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> (LCMO) and La<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub> (LSMO) material mounted in copper cups. The substrates were (100) oriented neodymium gallate (NdGaO<sub>3</sub>), silver-pasted onto a stainless steel substrate holder that was radiatively heated from behind by quartz lamps. Although there was no direct measurement of the holder temperature for the runs used in this study, previous runs (under nominally the same conditions) using a thermocouple clamped onto the front surface of the holder indicated a temperature of 670 C. The LCMO target was radio frequency (rf) sputtered and the LSMO target was direct current (dc) sputtered in a sputter gas composed of 80% Ar and 20% O<sub>2</sub> (as measured by flow meters) at a total pressure of 13.3 Pa. These conditions gave deposition rates of $`\sim `$ 17-50 nm/hr, with film thicknesses typically being 100 nm. After deposition, the samples were cooled in 13.3 kPa of oxygen. We find that our system can produce films of LCMO and LSMO that have low resistivities and high peak temperatures without the use of an ex-situ anneal in oxygen.
The samples were characterized by standard and high resolution $`\theta 2\theta `$ x-ray diffraction scans, atomic force microscopy, electrical resistivity measurements (using the van der Pauw method) in an applied field perpendicular to the film plane, and magnetization measurements at low fields parallel to the film plane using a Quantum Design SQUID Magnetometer. All magnetization data had the large paramagnetism of the NdGaO<sub>3</sub> substrates subtracted out.
## III Structure, Transport and Magnetic Properties
On (100) NdGaO<sub>3</sub> we find surface roughness values of $`\sim `$ 1.5 nm for pure films of LSMO and LCMO, while for the mixtures the surface roughness increases to $`\sim `$ 2.8 nm, as measured by atomic force microscopy. The grain sizes for the pure LSMO and LCMO films are typically 100 nm, while for the mixtures they are reduced to $`\sim `$ 50 nm. High resolution X-ray diffraction along the growth direction shows only the presence of peaks from NdGaO<sub>3</sub> for the LCMO samples. This would be expected, since the lattice match of pseudo-cubic (100) LCMO ($`a_o`$ $`\sim `$ 0.387 nm) to pseudo-cubic (100) NdGaO<sub>3</sub> ($`a_o`$ $`\sim `$ 0.385 nm) should be excellent. From this we take the orientation of the LCSMO films to be (100). Films of LSMO on NdGaO<sub>3</sub>, however, as shown in Fig. 1, do exhibit a peak corresponding to a pseudo-cubic length of 0.388 nm. The rocking curve width for this line is 337 arc-seconds, with an instrumental width of 12 arc-seconds, and phi scans show excellent in-plane registry of the film with the NdGaO<sub>3</sub> substrate. From our work on LSMO and LCMO grown on both (100) and (110) MgO, we know that LSMO films typically grow with a slightly larger value of the pseudo-cubic cell length, $`a_o`$, than LCMO. On (100) MgO, for example, we find that $`a_o`$ is 0.387 and 0.388 nm for LCMO and LSMO, respectively, while for (110) MgO we find 0.388 and 0.390 nm for the two materials. Obviously the lattice match is not as good for the case of LSMO, and this will introduce strain into the LSMO film. We also see from Fig. 1 that as the calcium fraction is increased, the well defined peak seen for the LSMO film moves to smaller d spacings, consistent with the trend towards LCMO, appearing as a shoulder on the low angle side of the NdGaO<sub>3</sub> (200) peak. As the calcium fraction increases further, the shoulder diminishes, and for pure LCMO (not shown) it is indistinguishable from the substrate peak. From this we surmise that the films will be strained, with the strain decreasing as the calcium fraction increases.
In Fig. 2 we present the resistivity data in zero applied magnetic field for the LCSMO films for the various Ca/Sr ratios, along with a plot of the peak temperatures (T<sub>p</sub>) and Curie temperatures (T<sub>C</sub>) determined from magnetization data for the samples (taken at 400 Oe). For the case of pure LCMO, we see the usual $`\rho `$(T) behavior, with a thermally activated resistivity (activation energy of $`\sim `$ 52 meV) and a peak temperature of 260 K, which is the same as the Curie temperature. For pure LSMO (x=0), we find that the resistivity has a peak temperature (410 K) much higher than the Curie temperature (330 K). This discrepancy between the peak and Curie temperatures is often seen for LSMO. The Curie temperature we see for our LSMO is lower than that seen in bulk LSMO with 1/3 doping, which we feel is due to disorder in the sample. The difference in disorder or strain between the LSMO and LCMO samples was also seen in the measurements of the coercive field for the two samples. At 10 K, the coercive field for the LCMO film was 20 Oe, which is quite low. However, for the LSMO film the coercive field was 170 Oe. Now as the concentration of Ca is varied from either extreme, we see sudden changes in the sample properties. For the x=0.91 sample, we see a large decrease in the sample resistivity, with a concurrent rise in T<sub>p</sub> and T<sub>C</sub>. The increase in the Curie temperature is likely due to the change in the tolerance factor as the Ca atoms are replaced with Sr atoms. On the other end, at x=0.25, the increase in Ca fraction causes a large increase in the resistivity, along with a large drop in T<sub>p</sub> and T<sub>C</sub>. The large increase in resistivity is likely the reason for the low value of T<sub>C</sub> observed. The values for T<sub>p</sub> and T<sub>C</sub> are similar for the x=0.91 sample, but diverge for the samples with lower x. We see that as the Ca fraction decreases, T<sub>p</sub> and T<sub>C</sub> tend to increase, as one would expect for the change in tolerance factor being introduced. However, the variation we see in T<sub>C</sub> is not as gradual as was seen in either of the previous bulk studies on divalent doping. In the work by Hwang et al. an apparently smooth variation in T<sub>C</sub> from 250 to 365 K is seen as the Ca/Sr ratio is varied. In the work by Guo et al. there was a jump in T<sub>C</sub> at a Ca fraction, x, of $`\sim `$ 0.45 when the system went from orthorhombic to rhombohedral. No such jump is seen in our data, but with the strong lattice match to the NdGaO<sub>3</sub> substrate, we would not expect a structural change. We see instead that the value of T<sub>C</sub> changes very slowly as the value of x is changed, but with rather sudden changes near x=0 and x=1. We feel that part of the explanation for the non-monotonic behavior of T<sub>C</sub>, as well as T<sub>p</sub>, is due to disorder in the samples, as we discuss below.
In previous studies of the low temperature (T $`<`$ 200 K) resistivity in the manganites, several different equations have been used to characterize the behavior. Schiffer et al. used the equation $`\rho (T)=\rho _0+\rho _1T^{2.5}`$ for polycrystalline LCMO material and found good fits to the data. Similar results for LCMO films have also been seen. Urushibara et al. found for LSMO material a T<sup>2</sup> dependence, which was interpreted as being due to electron-electron scattering. We found that we could also get reasonable fits using the approach of Schiffer et al. if we limited the data selection to T $`<`$ 150 K. However, if we look at the data for T $`<`$ 200 K, we find that we get better agreement if we use
$$\rho (T)=\rho _0+\rho _2T^2+\rho _5T^5$$
(1)
as seen in Figure 3. A similar result was seen in well annealed LSMO and LCMO films by Snyder et al., but there a $`T^{4.5}`$ term was used instead of $`T^5`$, in light of the prediction of spin-wave scattering by Kubo and Ohata. However, from the work on Pb-doped LCMO single crystals, the contribution from the $`T^{9/2}`$ term is expected to be much smaller ($`\sim `$ 0.5 $`\mu \mathrm{\Omega }`$-cm at 100 K) than that seen in our results, which is $`\sim `$ 10 $`\mu \mathrm{\Omega }`$-cm. We also observe the reduction in the contribution of the $`T^2`$ term at low temperatures, which is interpreted by Jaime et al. as an indication that the $`T^2`$ term arises from single-magnon scattering, and not electron-electron scattering.
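Extracting $`\rho _0`$, $`\rho _2`$, and $`\rho _5`$ from Eq. (1) amounts to a three-parameter least-squares fit. A minimal Python sketch (the synthetic arrays stand in for the measured data, and the initial guesses are set at the magnitudes quoted below; this is an illustration, not our analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_model(T, rho0, rho2, rho5):
    """Low-temperature resistivity, Eq. (1); T in K, rho in Ohm-cm."""
    return rho0 + rho2 * T**2 + rho5 * T**5

# stand-ins for the measured arrays (replace with experimental data)
T_data = np.linspace(5.0, 250.0, 50)
rho_data = rho_model(T_data, 1.2e-4, 7e-9, 1e-15)

mask = T_data < 200.0                       # restrict to T < 200 K
popt, pcov = curve_fit(rho_model, T_data[mask], rho_data[mask],
                       p0=(1e-4, 1e-8, 1e-15))
rho0, rho2, rho5 = popt
perr = np.sqrt(np.diag(pcov))               # one-sigma uncertainties
```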
Our derived values for $`\rho _0`$ and $`\rho _2`$ determined from fitting Eq. 1 are shown in the inset to Fig. 3. The values of $`\rho _5`$ are typically 1 f$`\mathrm{\Omega }`$-cm $`K^{-5}`$. We see that the temperature-independent term, $`\rho _0`$, is lowest for the pure LCMO films, with values similar to those seen in Snyder et al. As the Ca fraction decreases, we see an increase in $`\rho _0`$, which indicates an increase in the disorder in the films. This increase in disorder might initially be thought to be due to the random location of Sr on Ca sites, but since the Ca sites are already located at random in LCMO, it is difficult to see how replacing Ca with Sr has increased the randomness in the system. The trend continues until pure LSMO is reached, when we see a drop in the static term. We notice, however, that the low temperature resistivity is higher for our pure LSMO films than for pure LCMO, which reflects the increased disorder for the LSMO film as seen in the coercive field measurements. A similar result was also seen in Ref.. For the temperature dependent term, we see a similar non-monotonic trend, with a peak in the value of $`\rho _2`$ as the Ca fraction decreases, and a large drop when pure LSMO is reached. The values of $`\rho _2`$ that we observe for pure LSMO and LCMO are larger than those seen in Snyder et al.; however, both our values and those of Snyder show a similar correlation, as seen in Fig. 4. Clearly there appears to be a connection between the values of $`\rho _2`$ and $`\rho _0`$, with the ratio of the two being approximately 60-70 x 10<sup>-6</sup> K<sup>-2</sup>. If the $`\rho _2`$ term is due to electron-electron scattering, it is very hard to see what correlation would exist between the static disorder in the sample and the terms in e-e scattering, such as the Fermi energy. The model of Jaime et al. also would give no correlation between the two terms. If there were a coincidental correlation between the two terms, due say to changes in E<sub>F</sub> (which affects $`\rho _2`$) and changes in strain (which affects $`\rho _0`$) with x, we would not expect to see the same correlation for the films in the work by Snyder et al., since the points with the lowest and highest values of $`\rho _0`$ are for pure LCMO films.
In Figure 5 we show the magnetoresistance at 6 Tesla applied field as a function of temperature for the films, defined as
$$MR=\frac{R(H=0T)-R(H=6T)}{R(H=0T)}.$$
(2)
For the range of temperatures studied, we see a maximum in the room-temperature magnetoresistance for the x=0.91 sample (since T<sub>p</sub> is close to room temperature); however, the largest magnetoresistance occurs for the pure LCMO sample. The magnetoresistance at 77 K for all the samples is linear in applied field, going from $`\sim `$ 0.5 % for the LCMO sample to 3 % for the LSMO sample at 6 Tesla of field. The field dependence of the magnetoresistance at room temperature for the samples undergoes a change, as would be expected for T<sub>p</sub> moving from above to below room temperature, as seen in Fig. 6. Near zero field, the curves for the pure LCMO and the x=0.91 sample exhibit positive concavity, which is seen for samples with T $`>`$ $`T_p`$; however, for higher fields we see that the concavity for the x=0.91 sample switches to negative, which is that seen for the other samples. As seen in Ref., we can fit the change in resistance for the case of T $`<`$ T<sub>C</sub> to the equation
$$\rho (H)=\rho _{\infty }+\frac{\mathrm{\Delta }}{1+H/\gamma }.$$
(3)
The values of $`\gamma `$ at room temperature for samples with x $`<`$ 1 are shown in the inset in Fig. 6. We see that the values of $`\gamma `$ decrease as the Ca fraction increases. In Ref. a value of $`\gamma `$ = 2.7 Tesla was found for pure LCMO at 0.9 $`T_C`$, which would fit reasonably into our values, assuming of course that $`\gamma `$ is not strongly temperature dependent. If $`\gamma `$ depended on the relative difference between T and T<sub>C</sub> or T<sub>p</sub>, we would not get the smooth variation seen in the inset of Figure 6, since T<sub>C</sub> and T<sub>p</sub> are not monotonic functions of x, as seen in Figure 2.
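The $`\gamma `$ values in the inset follow from fits of Eq. (3) to the measured $`\rho (H)`$ curves. A short Python sketch of such a fit (again with stand-in data and illustrative parameter values; the bound on $`\rho _{\infty }`$ encodes the physical requirement, discussed below, that the saturation resistivity not be negative):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_H(H, rho_inf, delta, gamma):
    """Field-dependent resistivity, Eq. (3); H in tesla."""
    return rho_inf + delta / (1.0 + H / gamma)

# stand-in data generated from illustrative parameter values
H_data = np.linspace(0.0, 6.0, 30)
rho_data = rho_H(H_data, 2.0e-3, 1.0e-3, 3.0)

popt, _ = curve_fit(rho_H, H_data, rho_data, p0=(1e-3, 1e-3, 2.0),
                    bounds=([0.0, 0.0, 1e-3], np.inf))
rho_inf, delta, gamma = popt
```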
For the pure LCMO sample, we could fit the data equally well to the equation proposed in Ref., $`\rho (H)=\rho _{\infty }+\mathrm{\Delta }/(1+(H/\beta )^2)`$, or the form $`\rho (H)=\rho _0+aH^2+bH^4`$. However, the use of the first equation resulted in values of $`\rho _{\infty }<0`$, which is unphysical. The value of $`\beta `$ is $`\sim `$ 8.5 T, which is larger than that seen in Ref., 5.7 T. The data for the x=0.91 sample cannot be fit over the entire range with any of these formulations, since they exhibit a concavity change with field. However, for high fields (above 2 Tesla), they can be fit by Eq. 3, giving the value of $`\gamma `$ as seen in Fig. 6.
## IV Conclusions
We have observed that LCSMO films grow with (100) pseudo-cubic orientation on NdGaO<sub>3</sub> substrates, with somewhat rougher surfaces and smaller grain size than either pure LCMO or LSMO films. As the Ca fraction decreases, the lattice constant for LCSMO increases towards the value for LSMO, resulting in an increase in strain in the system. This strain is manifested by a reduction in the Curie temperature, and increases in the coercive fields and low temperature resistivity. We have also observed the T<sup>2</sup> dependence of the resistivity, and have observed a correlation between this term and the static term. The field dependence of the magnetoresistance for LCSMO films is predicted well by the equations in Ref., with the value of $`\gamma `$ increasing as the Ca fraction is reduced.
## V Acknowledgments
We would like to gratefully acknowledge the assistance of Michael Miller for the AFM measurements and Andrew Patton in the production of the films.
# Slave fermion theory of confinement in strongly anisotropic systems
## Abstract
We present a mean field treatment of a strongly correlated model of electrons in a three–dimensional anisotropic system. The mass of the bare electrons is larger in one spatial direction (the $`c`$–axis direction) than in the other two (the $`ab`$–planes). We use a slave fermion decomposition of the electronic degrees of freedom and show that there is a transition from a deconfined to a confined phase in which there is no coherent band formation along the $`c`$–axis.
One of the most controversial, and hard to understand, problems related to high-$`T_c`$ cuprates is the anomalous charge transport observed experimentally. The charge dynamics reflects the anisotropy in the crystal structure of these compounds, which consist of weakly coupled planes. In the usual notation, we will refer to the “$`c`$–axis” and “$`ab`$–planes” as the directions transverse and parallel to the planes, respectively. The in–plane conductivity $`\sigma _{ab}`$ shows a behavior characteristic of the metallic state. On the other hand, close to the insulating state, in the so-called underdoped regime, the $`c`$–axis conductivity $`\sigma _c`$ is “incoherent”: the values of $`\sigma _c`$ are below the minimum metallic conductivity, the temperature dependence is anomalous, and the frequency dependence does not show signatures of Drude–like behavior.
Band structure calculations indicate an anisotropy which, within the framework of Boltzmann transport, implies metallic behavior with an anisotropy $`\sigma _c/\sigma _{ab}`$ well above the experimental observation. Perturbative treatments within Fermi–liquid theory indicate that the anisotropy is not renormalized by interactions. Perhaps the main objection to the “conventional” theories of $`c`$–axis transport is the observed value of the anisotropy of the conductivity. In the superconducting phase coherence is reestablished in all directions. This led Anderson and others to attribute the anomalies in transport in the normal state to the effect of strong electronic correlations, and to conclude that in order to describe the incoherent $`c`$–axis conductivity the Fermi–liquid picture should be abandoned. The starting point used as a paradigm is the one-dimensional correlated problem, where it is rigorously known that the Fermi–liquid picture fails. Considerable work has been done on weakly coupled chains, suggesting that a state can be formed in which the coherence is confined to the motion along the chains, the motion transverse to the chains being incoherent.
A complete theory for the charge dynamics in anisotropic strongly correlated systems is not yet available. Due to the complexity of the problem, much work remains to be done in order to develop a fully consistent and controlled calculation scheme that could account for the phenomenology indicated by the experiments. In the meantime, the analysis of simple models is useful as a starting point towards the final answer. Here we present a mean field treatment of a system of coupled planes that includes the strong anisotropy and incorporates the strong correlations responsible for the non–Fermi liquid behavior. We show that, within that mean field, a transition from a deconfined to a confined phase takes place. The parameter signaling the transition is the gain in kinetic energy due to band formation in the $`c`$–axis direction.
We consider the Hubbard model in the limit of infinite on–site repulsion, described by the following Hamiltonian
$$H=-\sum_{i,j}t_{i,j}\sum_{\sigma }(1-n_{i,\overline{\sigma }})c_{i,\sigma }^{\dagger }c_{j,\sigma }(1-n_{j,\overline{\sigma }}),$$
(1)
where $`i,j`$ refers to nearest neighbors on a cubic lattice, and the anisotropy is incorporated in the values of the hopping matrix elements: $`t_{i,j}=t_{\parallel }`$ for in–plane hopping and $`t_{i,j}=t_{\perp }`$ for the motion along the $`c`$–axis. The fermion operators $`c_{i,\sigma }^{\dagger }`$ create an electron at site $`i`$ only if the site is empty.
A well known mean field description of Hamiltonian 1 is the slave–boson approach, in which each local configuration has associated with it a fermionic or bosonic degree of freedom, such that $`c_{i,\sigma }^{\dagger }=a_{i,\sigma }^{\dagger }e_i`$, where $`a_{i,\sigma }^{\dagger }`$ creates a fermion with spin $`\sigma `$ at the $`i`$–th site representing a singly occupied configuration, and $`e_i`$ destroys a boson representing the empty state at the same site. A standard mean field calculation decouples fermions and bosons and relaxes the exact constraint of one “particle” (fermion + boson) per site. The resulting problem is that of non–interacting bosons and fermions self–consistently coupled. As a result the ideal bosons condense into a $`k=0`$ state, the overall effect being a renormalization of the masses of the fermions. It is important to note that, even for an anisotropic system, the $`k=0`$ bosonic ground state wave function does not know about the anisotropy, and the mass renormalization is the same in all spatial directions. Consequently, such an approach preserves the anisotropy and the Fermi liquid character of the ground state. At least formally, one can conceive corrections to this state that improve the treatment of the constraint to avoid multi–occupancy of the particles at the same site. There are, however, other alternative treatments that, still at the mean field level, take into account the hard core constraint for the bosons exactly. In the present work we present a mean field treatment along these lines. In what follows we show that an alternative description in terms of slave fermions for the infinite–$`U`$ case breaks the Fermi liquid description and produces a coherent state confined to the $`ab`$ planes.
We introduce a description in which the original projected fermions are represented by three fermions:
$$\overline{c}_{i,\sigma }\equiv c_{i,\sigma }(1-n_{i,\overline{\sigma }})=a_{i,\sigma }f_{i,\downarrow }^{\dagger }f_{i,\uparrow }$$
(2)
The above representation respects the anticommutation relation between the projected operators $`\overline{c}_{i,\sigma }`$ and $`\overline{c}_{i,\sigma }^{\dagger }`$ provided one stays within the physical Hilbert space. A related fermion linearization was presented in Ref. .
The product $`f_{i,\downarrow }^{\dagger }f_{i,\uparrow }`$ is a spin flip operator corresponding to a pseudo–spin degree of freedom not related to $`\sigma `$. When this fictitious spin is $`\uparrow `$ at site $`i`$ the site is occupied, and the site is empty if the spin is $`\downarrow `$: there are as many $`f_{\uparrow }`$’s as there are electrons and as many $`f_{\downarrow }`$’s as there are holes. The $`f`$ fermions therefore satisfy
$$f_{i,\uparrow }^{\dagger }f_{i,\uparrow }+f_{i,\downarrow }^{\dagger }f_{i,\downarrow }=1,\text{ }\sum_{\sigma }a_{i,\sigma }^{\dagger }a_{i,\sigma }+f_{i,\downarrow }^{\dagger }f_{i,\downarrow }=1,$$
(3)
and, in turn
$$\sum_{\sigma }\langle a_{i,\sigma }^{\dagger }a_{i,\sigma }\rangle =1-\delta ,$$
(4)
with $`\delta `$ representing the fractional deviation in occupation number with respect to the half–filling case of one electron per site.
At the mean field level the ground state wave function consists of a direct product of three Fermi seas, one for each of the fermionic degrees of freedom. The total energy in this approximation is given by
$$E_0=-\sum_{i,j}t_{i,j}A_{i,j}\chi _{i,j}^2,$$
(5)
with
$$A_{i,j}=\sum_{\sigma }\langle a_{i,\sigma }^{\dagger }a_{j,\sigma }\rangle ,$$
(6)
$$\chi _{i,j}=\langle f_{i,\uparrow }^{\dagger }f_{j,\uparrow }\rangle =\langle f_{i,\downarrow }^{\dagger }f_{j,\downarrow }\rangle ,$$
(7)
where the last equality holds because we are dealing with a bipartite lattice with particle–hole symmetry. The three species of fermions are free, with their hopping amplitudes renormalized by the factors $`A_{i,j}`$ and $`\chi _{i,j}`$. These factors are responsible for renormalizing the anisotropy and can be better visualized in the mean field Hamiltonian
$$H_{\mathrm{MF}}=-\sum_{i,j}t_{i,j}\sum_{\sigma }\left[\chi _{i,j}^2a_{i,\sigma }^{\dagger }a_{j,\sigma }+A_{i,j}\chi _{i,j}f_{i,\sigma }^{\dagger }f_{j,\sigma }\right]+C,$$
(8)
with $`C`$ a constant.
Note that, for small deviations from half filling, the $`f_{\downarrow }`$ ($`f_{\uparrow }`$) fermions are moving close to the bottom (top) of their band, whereas the $`a`$ fermions are close to the center of the band. This makes their respective Fermi surfaces different.
Our mean field can be understood in two steps. First the $`a`$ fermions are decoupled from the $`f`$ fermions. At that level, the $`a`$ fermions are free, but the $`f_{\uparrow }`$ fermions and the $`f_{\downarrow }`$ fermions are strongly correlated. The dynamics of the system of $`f`$ fermions at this level is identical to that of an $`xy`$–model, and can be mapped onto a hard–core boson problem. In the second step the $`f`$ fermions of different spin are decoupled and treated as free fermions (with a self consistent constraint on the dynamics). Note that, at the level of step one, the above mentioned system of hard–core bosons will in principle have expectation values of the kinetic energy terms that depend on direction. This is due to the quantum fluctuations introduced by the hard–core constraint.
At half filling ($`\delta =0`$), the kinetic energy of the $`f`$ fermions is zero and the renormalization factor $`\chi _{ij}=0`$, giving the localized limit of the $`a`$ fermions, which we identify as the Mott insulating state. On the other hand, far from half filling, for $`\delta \to 1`$, the density of $`a`$–fermions is so low that they should behave as non–interacting, but our mean field fails to recover this limit. Decoupling the $`f`$–fermions from the $`a`$–fermions is not a good approximation in the limit of high doping $`\delta `$ because the probability of finding an $`f_{\uparrow }`$ fermion at a site occupied by an $`a`$ fermion is very low \[$`(1-\delta )^2`$\], while the exact dynamics requires this probability to be one. Therefore our results will be valid close to half filling, i.e. for $`\delta \to 0`$.
Due to translational invariance, the ground state energy will be a function of the four quantities $`A_{\parallel }`$, $`A_{\perp }`$, $`\chi _{\parallel }`$ and $`\chi _{\perp }`$:
$$E_0=-4t_{\parallel }A_{\parallel }\chi _{\parallel }^2-2t_{\perp }A_{\perp }\chi _{\perp }^2.$$
(9)
The one particle energies of the $`f`$ and $`a`$ fermions are respectively
$$E_𝐤^f=-t_{\parallel }A_{\parallel }\chi _{\parallel }\epsilon _{k_{\parallel }}-t_{\perp }A_{\perp }\chi _{\perp }2\mathrm{cos}k_z,$$
(10)
$$E_𝐤^a=-t_{\parallel }\chi _{\parallel }^2\epsilon _{k_{\parallel }}-t_{\perp }\chi _{\perp }^22\mathrm{cos}k_z,$$
(11)
with
$$\epsilon _{k_{\parallel }}=2(\mathrm{cos}k_x+\mathrm{cos}k_y).$$
(12)
Effective chemical potentials $`\lambda `$ and $`\mu `$ have to be determined for each of the two types of fermions through the equations
$$\frac{1}{N}\sum_𝐤f(E_𝐤^f-\lambda )=\delta ,\qquad \frac{1}{N}\sum_𝐤f(E_𝐤^a-\mu )=\frac{1-\delta }{2}.$$
(13)
We approximate the reduced density of states corresponding to the motion within the plane by a constant: $`\sum_{k_{\parallel }}\delta (\epsilon -\epsilon _{k_{\parallel }})=\mathrm{\Theta }(4-|\epsilon |)/4`$, and find that the mean field equations can be written in terms of the parameters $`\alpha `$ and $`\beta `$ defined as
$$\alpha =\frac{t_{\perp }}{t_{\parallel }}\frac{A_{\perp }\chi _{\perp }}{A_{\parallel }\chi _{\parallel }},\qquad \beta =\frac{t_{\perp }}{t_{\parallel }}\left(\frac{\chi _{\perp }}{\chi _{\parallel }}\right)^2.$$
(14)
After straightforward integrations, and using the fact that close to half filling the Fermi surface of the $`a`$ fermions is open, the mean field equations are
$$A_{\parallel }=\frac{1}{2}(1-\delta ^2)-\left(\frac{\beta }{2}\right)^2,\qquad A_{\perp }=\frac{\beta }{4},$$
(15)
$$\chi _{\parallel }=\frac{1}{8\pi }\left\{\left[1-\left(\frac{\stackrel{~}{\lambda }}{4}\right)^2\right]2k_0-\frac{\alpha ^2}{4}\left(k_0+\frac{1}{2}\mathrm{sin}2k_0\right)-\frac{\stackrel{~}{\lambda }\alpha }{2}\mathrm{sin}k_0\right\},$$
(16)
$$\chi _{\perp }=\frac{1}{4\pi }\left\{\left(1+\frac{\stackrel{~}{\lambda }}{4}\right)2\mathrm{sin}k_0+\frac{\alpha }{2}\left(k_0+\frac{1}{2}\mathrm{sin}2k_0\right)\right\},$$
(17)
with $`\stackrel{~}{\lambda }=\lambda /(t_{\parallel }A_{\parallel }\chi _{\parallel })`$ determined from the equation
$$\delta =\frac{1}{8\pi }\left\{(\stackrel{~}{\lambda }+4)k_0+2\alpha \mathrm{sin}k_0\right\},$$
(18)
and $`k_0=\mathrm{cos}^{-1}[-(\stackrel{~}{\lambda }+4)/2\alpha ]`$ for $`|(\stackrel{~}{\lambda }+4)/2\alpha |<1`$ and $`k_0=\pi `$ otherwise. Note that $`\alpha `$ plays the role of an effective anisotropy of the $`f`$–fermions. For a given $`\delta `$, if we fix $`\alpha `$, the renormalization factors $`\chi `$ and $`A`$ are determined by Equations (15) through (18). This means that $`\alpha `$ plays the role of a variational parameter with respect to which we have to minimize the energy $`E_0`$. As an example, in Figure 2 we show some curves of $`E_0`$ vs. $`\alpha `$ for different values of doping, using the bare anisotropy $`t_{\perp }/t_{\parallel }`$ as a parameter.
The curves indicate that for fixed $`t_{\perp }/t_{\parallel }`$ there is a discontinuous jump in the position of the minimum of $`E_0`$ as $`\delta `$ is varied. The curve shown in Figure 2 for $`\delta =0.002`$ corresponds to the confined phase, for which $`\alpha =0`$ and the renormalization factor $`\chi _{\perp }=0`$. The curve for $`\delta =0.0018`$ has its minimum at finite $`\alpha `$ and hence corresponds to a three dimensional metal with a renormalized anisotropy.
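The minimization over $`\alpha `$ can be organized numerically as in the following rough sketch, which assumes the sign conventions of Eqs. (15)–(18) as written above and works in units $`t_{\parallel }=1`$; it is an illustration of the procedure, not a reproduction of the actual computation behind Figure 2.

```python
import numpy as np
from scipy.optimize import brentq

def k0(lam, alpha):
    x = -(lam + 4.0) / (2.0 * alpha)
    return np.arccos(x) if abs(x) < 1.0 else np.pi

def delta_of(lam, alpha):                 # Eq. (18)
    k = k0(lam, alpha)
    return ((lam + 4.0) * k + 2.0 * alpha * np.sin(k)) / (8.0 * np.pi)

def energy(alpha, delta, t_perp):
    # Reduced chemical potential lambda~ from Eq. (18).
    lam = brentq(lambda l: delta_of(l, alpha) - delta,
                 -4.0 - 2.0 * alpha + 1e-10, 4.0)
    k = k0(lam, alpha)
    chi_par = ((1 - (lam / 4) ** 2) * 2 * k
               - alpha**2 / 4 * (k + np.sin(2 * k) / 2)
               - lam * alpha / 2 * np.sin(k)) / (8 * np.pi)           # Eq. (16)
    chi_perp = ((1 + lam / 4) * 2 * np.sin(k)
                + alpha / 2 * (k + np.sin(2 * k) / 2)) / (4 * np.pi)  # Eq. (17)
    beta = t_perp * (chi_perp / chi_par) ** 2                         # Eq. (14)
    A_par = 0.5 * (1 - delta**2) - (beta / 2) ** 2                    # Eq. (15)
    A_perp = beta / 4
    return -4 * A_par * chi_par**2 - 2 * t_perp * A_perp * chi_perp**2  # Eq. (9)

# Scan the variational parameter and locate the minimum of E_0.
alphas = np.linspace(1e-3, 0.5, 200)
E = [energy(a, delta=0.02, t_perp=0.05) for a in alphas]
print("alpha at the minimum:", alphas[int(np.argmin(E))])
```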
The phase diagram that results from our calculation is shown in Figure 3.
A very important point is to establish that the particle motion does not correspond to a Fermi liquid. We show this by computing the form of the occupation number of the original fermions in the confined phase within our mean–field scheme:
$`n_{k,\sigma }\equiv \langle c_{k,\sigma }^{\dagger }c_{k,\sigma }\rangle =\frac{n}{2}+\frac{1}{N}\sum_{ij}e^{ik(R_i-R_j)}\langle c_{i,\sigma }^{\dagger }c_{j,\sigma }\rangle `$ with $`n`$ the particle density. The term $`\langle c_{i,\sigma }^{\dagger }c_{j,\sigma }\rangle `$ is evaluated using the representation of Equation 2. In mean field the result is a convolution of the occupation numbers of the three fermions ($`f`$’s and $`a`$). Using the constraints of Equations 3 and 4 one obtains
$`n_{k,\sigma }=\frac{1-\delta }{2}\left[1-\delta (1-\delta )\right]+\frac{1}{N^2}\sum_{pq}n_{p,\sigma }^{(a)}n_{q,\uparrow }^{(f)}n_{p+q-k,\downarrow }^{(f)}.`$
The occupation numbers above correspond to three Fermi surfaces. For small $`\delta `$ the Fermi surfaces corresponding to the $`f`$ fermions are two circles centered respectively at $`𝐤=(0,0)`$ and $`𝐤=(\pi ,\pi )`$. On the other hand, the Fermi surface of the $`a`$ fermions is close to a diamond. The result of the convolution above (see Figure 4) is that $`n_{k,\sigma }`$ does not have a discontinuity, implying a non–Fermi liquid state.
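The double convolution can be evaluated efficiently with FFTs. The following sketch does this on a 2D grid (taking the confined phase, where the $`k_z`$ dependence drops out), with step-function Fermi seas standing in for the three occupation numbers; the fillings and grid size are illustrative choices of ours.

```python
import numpy as np

L, delta = 128, 0.05
k = np.fft.fftfreq(L) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = 2 * (np.cos(kx) + np.cos(ky))       # Eq. (12)

def fermi_sea(band, filling):
    # Occupy the fraction `filling` of states with the lowest energy.
    return (band <= np.quantile(band, filling)).astype(float)

n_a  = fermi_sea(-eps, (1 - delta) / 2)   # a fermions: diamond-like surface
n_fu = fermi_sea(-eps, 1 - delta)         # f_up: hole pocket at (pi, pi)
n_fd = fermi_sea(-eps, delta)             # f_down: small circle at (0, 0)

# (1/N^2) sum_{p,q} n_a(p) n_fu(q) n_fd(p+q-k): a circular convolution of
# n_a with n_fu, followed by a correlation with n_fd.
D = np.fft.ifft2(np.fft.fft2(n_a) * np.fft.fft2(n_fu))
conv = np.fft.ifft2(np.fft.fft2(D) * np.conj(np.fft.fft2(n_fd))).real
n_k = (1 - delta) / 2 * (1 - delta * (1 - delta)) + conv / L**4
print(n_k.min(), n_k.max())               # smooth: no Fermi discontinuity
```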
A few points related to the calculation deserve a comment:
i) Due to the approximation made in the density of states we cannot recover the isotropic case. The approximation used is aimed at describing anisotropic systems.
ii) In our calculation the confined regime is identified by the vanishing of the expectation value of the interplane hopping, indicating that there is no band formation along this direction. We interpret this result as an indication of incoherence, even though one expects some interplane coupling to remain in the exact incoherent regime. The picture is analogous to the slave boson description of the Mott insulator. There, the insulating state is characterized by a vanishing of the inter–site hopping, while we know that in the exact ground state this magnitude is small but finite.
In summary, we have presented a mean field calculation and derived a phase diagram of a strongly interacting anisotropic system. We have shown that, as the anisotropy increases, for small deviations from half filling a transition takes place from a deconfined phase to a confined phase, in which the motion in the $`c`$–axis direction is completely incoherent while the motion in the $`ab`$–planes corresponds to a coherent, non–Fermi liquid state.
# Partition function based analysis of CMB maps
## 1 Introduction
In a few years the forthcoming CMB data sets from the missions MAP (NASA) and PLANCK (ESA) will offer us a much better image of the young universe than ever before. The CMB represents a view of the universe when it was about $`0.002\%`$ of its present age. CMB anisotropies provide a link between theoretical predictions and observational data. Undoubtedly these data will constrain more accurately the fundamental cosmological parameters. In recent years several groups have been very active in the study of the CMB anisotropies. Many statistical methods have been adapted to the analysis of the future CMB maps, and others are being developed.
There are many methods which can give relatively accurate values for the parameters of the cosmological models. For example, the power spectrum is considered to be the best discriminator between different models (Bond et al. 1997, Hinshaw et al. 1996a, Wright et al. 1996, Tegmark 1996). Related through a Legendre expansion to the power spectrum, the two-point correlation function is also a useful discriminator (Cayón et al 1996, Hinshaw et al 1996b). These analyses, based on the power spectrum, are considered as classical but there are many other methods that do not make use of the power spectrum. The quality of the CMB maps demands other statistics to supplement the power spectrum, looking for instance at morphological or topological characteristics of the data. For example the one-dimensional analysis is a geometrical method useful for one-dimensional scans of CMB data and is based on the study of regions above or below a certain level (Gutiérrez et al. 1994). The peak analyses, similar to the previous one but for two-dimensional data, deal with the number of spots above a given threshold (Fabbri & Torres 1995) or with other geometrical properties like the Gaussian curvature or eccentricity of the maxima (Barreiro et al. 1997). Other approaches are the genus (Smoot et al. 1994, Torres et al. 1995), Minkowski functionals, which relate several geometrical aspects at the same time (Schmalzing & Górski 1997, Winitzki & Kosowsky 1997), wavelet based techniques (Pando et al. 1998, Hobson et al. 1998), which have been shown to be very effective at detecting non-Gaussianity in the CMB, and fractality (Pompilio et al. 1995, De Gouveia dal Pino et al. 1995, Mollerach et al. 1998).
In this paper we present an alternative method to analyze CMB maps based on the partition function. This function contains useful information about the temperature anisotropies at the different scales and moments. The method presented here is related to the one used by Smoot et al. (1994) based on moments at different smoothing angles. However, our method is more general and powerful because it works with any moment, not only with positive and integer ones.
The structure of the paper is as follows. In Section 2 we present the partition function and discuss its main characteristics. In the same section three different analyses based on that function are introduced: a likelihood analysis, a multifractal analysis and a test of Gaussianity. The likelihood analysis uses the partition function to search for the parameters ($`Q_{\mathrm{rms-PS}}`$ and $`n`$) that best fit a given data set. The multifractal analysis searches for scaling laws and fractal behavior of the data. There are theoretical reasons (Sachs-Wolfe effect) to expect scaling in the data. Here we present the generalized fractal dimensions and the scaling exponents and also comment on the possible multifractality of the CMB sky. In a recent paper, Ferreira et al. (1998) found evidence of non-Gaussianity in COBE–DMR data at the $`99\%`$ confidence level. We show that the partition function can be used to study this property. There is a clear relation between the partition function and the cumulant function, the latter having a specific form for a Gaussian signal. In section 3 we apply the results of the previous sections to the COBE–DMR 4 years data set and we compare with other results. We conclude in section 4.
## 2 The partition function
Let us start directly with the definition,
$$Z(q,\delta )=\sum_{i=1}^{N_{\mathrm{boxes}}(\delta )}\mu _i(\delta )^q$$
(1)
where $`Z(q,\delta )`$ is the partition function. The quantity $`\mu _i(\delta )`$ is called the measure; it is a function of $`\delta `$, the size or scale of the boxes used to cover the sample. The boxes are labeled by $`i`$ and $`N_{\mathrm{boxes}}(\delta )`$ is the number of boxes (or cells) needed to cover the map when the grid with resolution $`\delta `$ is used. The exponent $`q`$ is a continuous real parameter that plays the role of the order of the moment of the measure.
Let us consider a map of $`N`$ pixels. The map is divided into boxes of size $`\delta \times \delta `$ pixels and the measure $`\mu _i(\delta )`$ is computed in each one of the resulting boxes. Changing both $`q`$ and $`\delta `$, one calculates the function $`Z(q,\delta )`$. We would like to emphasize that the calculation of $`Z(q,\delta )`$ is $`O(N)`$.
One is free to make any choice of the measure $`\mu (\delta )`$ provided that several conditions are satisfied, the most restrictive being $`\mu _i(\delta )\geq 0`$. There are no general rules to decide which is the best choice. For CMB maps, we use the most natural measure defined as follows :
$$\mu _i(\delta )=\frac{1}{T_{\mathrm{tot}}}\sum_{\mathrm{pix}_j\in \mathrm{box}_i}T_{\mathrm{pix}_j}.$$
(2)
Thus the measure in box $`i`$ is the sum of the absolute temperatures $`T_{\mathrm{pix}}`$ of the pixels inside the box, in units of Kelvin. This is a very natural measure, comparable to the measure used in the study of the galaxy or cluster distributions (Martínez et al. 1990; see Borgani 1995 for a review), where the measure is taken as the total mass (or the total number of galaxies/clusters) contained in the box. The constant $`T_{\mathrm{tot}}`$ is a normalization constant. The measures are interpreted as probabilities and they have to be normalized, i.e. $`\sum_i\mu _i=1`$. Thus $`T_{\mathrm{tot}}`$ is simply the sum of the absolute temperatures over all pixels and is therefore the same constant for all boxes and scales.
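A minimal sketch of this computation (Eqs. 1 and 2) for a square map follows; it works with $`\mathrm{ln}Z`$, which stays finite even for the very large $`|q|`$ discussed below, and the input map here is just a toy with fluctuations of order $`10^{-5}`$.

```python
import numpy as np

def log_partition_function(T_map, q, delta):
    n = (T_map.shape[0] // delta) * delta      # trim to a multiple of delta
    sub = T_map[:n, :n]
    boxes = sub.reshape(n // delta, delta, n // delta, delta)
    mu = boxes.sum(axis=(1, 3)) / sub.sum()    # normalized box measures, Eq. (2)
    # ln Z = ln sum_i exp(q ln mu_i), evaluated stably (log-sum-exp).
    x = q * np.log(mu.ravel())
    return x.max() + np.log(np.exp(x - x.max()).sum())

rng = np.random.default_rng(0)
T_map = 2.73 * (1.0 + 1e-5 * rng.standard_normal((256, 256)))
for q in (-1.2e5, 4.0e4, 1.52e5):
    print(q, [log_partition_function(T_map, q, d) for d in (4, 8, 16)])
```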
The temperature in the pixels is almost the same everywhere because of the homogeneity of the signal, and one expects that different models will behave in a very similar way, making the task of distinguishing them difficult. We shall show how the partition function overcomes this problem.
Alternatively, Pompilio et al. (1995), in a multifractal analysis of string induced CMB anisotropies (one dimensional scans) used as measure:
$$\mu _i(\delta )=\sum_{j=i-M\delta /2}^{j=i+M\delta /2}[\mathrm{\Delta }_j-\mathrm{\Delta }_{j+1}]^2,$$
(3)
where $`\mathrm{\Delta }_j`$ denotes the fluctuation of the temperature in pixel $`j`$ with respect to the mean. Here M is the total number of points in the data set, and $`(i-M\delta /2)`$ and $`(i+M\delta /2)`$ are the lower and upper edges of the $`i`$th segment with $`M\times \delta `$ points, centered on the $`i`$th point of the scan. The scale $`\delta `$ runs between $`1/M`$ for the smallest segment and 1 for the whole scan. However, this measure is not sensitive to the sign of the temperature fluctuations because of the square in its definition. Due to this fact the full information of the fluctuations is not conveniently considered. In addition, the generalization of this measure to 2D maps is not unique.
Using the measure proposed in this paper, the differences between two temperature data sets appear when high values of the exponent $`q`$ are considered. The method is able to differentiate between two very close models with $`q`$ ranging between $`[-2.5\times 10^5,+2.5\times 10^5]`$. This range for $`q`$ is in agreement with the level of inhomogeneity. We are using absolute temperatures, that is, we have inhomogeneities of order $`10^{-5}`$ with respect to the mean value and the signal is almost flat. One can consider $`q`$ as a powerful microscope, able to enhance the smallest differences between two very similar maps. Furthermore, $`q`$ is a selective parameter. Choosing large values of $`q`$ in the partition function favors contributions from cells with relatively high values of $`\mu _i(\delta )`$, since $`\mu _i^q\gg \mu _j^q`$ for $`\mu _i>\mu _j`$ if $`q\gg 0`$. Conversely, $`q\ll 0`$ favors the cells with relatively low values of the measure. This is the role played by the moments: changing $`q`$ one explores the different parts of the measure probability distribution. The other parameter, $`\delta `$, acts like a filter. Choosing big values of $`\delta `$ is similar to applying a large scale filter to the map. One looks at different scales when the parameter $`\delta `$ is changed.
To summarize, $`Z(q,\delta )`$ contains information at different scales and moments. The multi-scale information gives an idea of the correlations in the map, while the moments are sensitive to possible asymmetries in the data, as well as some deviations from Gaussianity. In what follows we show the power of the partition function to extract useful information from CMB data. Three different analyses are used for this purpose.
### 2.1 Likelihood analysis
We shall use the partition function to encode the information of a given map. We compute it both for the experimental data and for simulated ones corresponding to different models. In this process we are comparing the data and the models at several scales and using different moments. If there are differences at some scale or moment, then the partition function should make them evident. The likelihood function will have a maximum for the best-fitting model to the data. For the CMB map analyses, we consider models corresponding to different values of the spectral index $`n`$ and the normalization $`Q_{\mathrm{rms-PS}}`$.
The likelihood is defined in the usual way (assuming a Gaussian distribution for $`\mathrm{ln}Z(q,\delta )`$). We work with $`𝒵=\mathrm{ln}Z(q,\delta )`$ instead of $`Z(q,\delta )`$ because the large values of $`q`$ make it impossible to compute $`Z(q,\delta )`$ directly,
$$L(Q_{\mathrm{rms-PS}},n)=\frac{1}{(2\pi )^{n/2}(\mathrm{det}M)^{1/2}}\mathrm{exp}(-\frac{1}{2}\chi ^2),$$
(4)
where,
$$\chi ^2=\sum_{i=1}^{N_p}\sum_{j=1}^{N_p}(𝒵(i)-𝒵_D(i))M_{ij}^{-1}(𝒵(j)-𝒵_D(j)),$$
(5)
and $`𝒵(i)`$ is the average of $`𝒵`$ over the $`N_{\mathrm{rea}}`$ realizations of the model at bin $`i`$. The index $`i`$ labels pairs of values ($`q`$,$`\delta `$) and runs from 1 to $`N_q\times N_\delta `$. That is, $`i`$ runs from 1 to the total number of points $`N_p`$ where $`Z(q,\delta )`$ is defined.
$`𝒵_D(i)`$ is the value of $`𝒵`$ for the experimental data at bin $`i`$. $`M_{ij}`$ is the covariance matrix calculated with Monte Carlo realizations:
$$M_{ij}=\frac{1}{N_{\mathrm{rea}}}\sum_{k=1}^{N_{\mathrm{rea}}}(𝒵_k(i)-𝒵(i))(𝒵_k(j)-𝒵(j)).$$
(6)
$`𝒵_k(i)`$ denotes the value of $`𝒵`$ at bin $`i`$ for the $`k`$ realization.
We tried different numbers of realizations $`N_{\mathrm{rea}}`$, but the results appear to be stable for $`N_{\mathrm{rea}}>2000`$ per value of $`Q_{\mathrm{rms-PS}}`$ and $`n`$.
We have two possibilities for performing a best fit to the data. The first one is to minimize $`\chi ^2`$ and take the values of $`Q_{\mathrm{rms-PS}}`$ and $`n`$ at the minimum of the $`\chi ^2`$ surface as the best-fitting values. The second possibility is to work with the likelihood $`L`$, looking for the maximum. We tested the two possibilities using simulated CMB maps derived from a given pair of parameters ($`Q_{\mathrm{rms-PS}}`$, $`n`$) and then using these maps as the data maps. Due to cosmic variance we obtain a set of maxima in the likelihood and of minima in the $`\chi ^2`$. The conclusion is that the likelihood is somewhat better than the $`\chi ^2`$, as expected. For instance, with 2000 input realizations with $`Q_{\mathrm{rms-PS}}=14\mu `$K and $`n=1.3`$, the distribution of maximum likelihood values has a maximum at $`Q_{\mathrm{rms-PS}}=13_{-4.25}^{+3}\mu `$K and $`n=1.2_{-0.15}^{+0.8}`$, while the $`\chi ^2`$ renders a minimum at $`Q_{\mathrm{rms-PS}}=17_{-4}^{+5}\mu `$K and $`n=1.0_{-0.35}^{+0.45}`$. The errors are marginalised at the 68% confidence level and are similar to those obtained with the standard methods based on the power spectrum (see for instance Wright et al. 1996).
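In outline, the $`\chi ^2`$ and likelihood of Eqs. (4)–(6) can be assembled as in the following sketch, where `Z_sims` holds $`\mathrm{ln}Z`$ for the $`N_{\mathrm{rea}}`$ realizations of one model (shape $`N_{\mathrm{rea}}\times N_p`$) and `Z_data` is the corresponding vector for the observed map; the function names are ours.

```python
import numpy as np

def chi2(Z_sims, Z_data):
    Z_mean = Z_sims.mean(axis=0)
    dZ = Z_sims - Z_mean
    M = dZ.T @ dZ / Z_sims.shape[0]        # covariance matrix, Eq. (6)
    r = Z_mean - Z_data
    return r @ np.linalg.solve(M, r)       # Eq. (5), without forming M^-1

def log_likelihood(Z_sims, Z_data):
    M = np.cov(Z_sims.T, bias=True)
    _, logdet = np.linalg.slogdet(M)
    n_p = Z_data.size
    return -0.5 * (chi2(Z_sims, Z_data) + logdet + n_p * np.log(2 * np.pi))

# One such set of realizations is produced for every (Q_rms-PS, n) pair on
# the model grid; the best-fitting model maximizes log_likelihood.
```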
### 2.2 Multifractal analysis
The notion of multifractal measure was first introduced by Mandelbrot (Mandelbrot 1974) in order to study different aspects of the intermittency of turbulence (see also Sreenivasan and Meneveau 1988). The multifractal formalism was further developed by many other authors, and today it is a standard tool applied in almost all fields of science: molecular physics, biology, geology, astronomy, etc. In the context of the description of the large scale structure of the Universe it was first introduced by Jones et al. (1988).
Some authors (Pietronero & Sylos Labini 1997) suggest that the distribution of matter in the universe is fractal with dimensionality $`D_2\simeq 2`$. They argue that the scaling persists up to the largest scales probed by the presently available redshift catalogues. Many other authors, however, have found sufficient evidence of homogeneity at large scales (Davis 1997, Guzzo 1997, Scaramella et al. 1998) in the analysis of the same data sets. One of the basic tenets of standard cosmology is that at very large scales the distribution of matter is homogeneous. The homogeneity and isotropy of the CMB support this overwhelming evidence, indicating that there exists a continuous transition between scale invariant clustering at small scales and homogeneity at large scales (Martínez et al. 1998; Wu, Lahav & Rees 1998).
At large angular scales, the CMB anisotropies $`\mathrm{\Delta }T/T`$ generated from a scale-free density perturbation power spectrum in a flat $`\mathrm{\Omega }=1`$ universe can be described by a fractional Brownian fractal (as shown in Mollerach et al. 1998). In particular, both inflationary and defect models predict an approximately scale invariant Harrison-Zel’dovich spectrum on large angular scales, showing the scaling predicted by the Sachs-Wolfe effect. At small angular scales $`(0.2^{\circ }\lesssim \theta \lesssim 1^{\circ })`$ the predictions of inflation and topological defect models are different (Durrer et al 1997), allowing one to differentiate between them. It is then interesting to study the possible fractality of the CMB anisotropies, since the seeds or fluctuations that are supposed to be the precursors of the largest structures observed today are as yet unperturbed by evolutionary phenomena. Several works follow this kind of analysis. In the paper by De Gouveia dal Pino et al. (1995), the authors based their analysis on the study of the perimeter-area relation of the isocontours of temperature at a given threshold. They used the COBE–DMR 1 year data set and only the 53 GHz channel. They found evidence for a fractal structure in the COBE–DMR data, with dimension $`D=1.43`$, suggesting that the CMB could not be homogeneous. Apart from the fact that these data have a low signal to noise ratio, this does not necessarily mean that the CMB is not homogeneous. This dimension corresponds to the temperature isocontours, and not to the temperature itself. Other works apply multifractal analysis to CMB data. Pompilio et al. (1995) apply the multifractal analysis to simulated string-induced CMB scans, searching for the non-Gaussian behavior induced by cosmic strings. More recently, Mollerach et al. (1998) have applied a fractal analysis in order to study the roughness of the last scattering surface and used this technique to search for the model that best fits the COBE–DMR 4yr data. These authors show the capabilities of this method for the analysis of future data, in particular for those experiments with high signal to noise ratio.
In this section we will use the partition function to study the possible multifractality of the CMB sky, using as measure the absolute temperature (see eq. 2). The multifractal analysis has been presented in several versions but the most popular is due to Frisch and Parisi (1985), Jensen et al. (1985) and Halsey et al. (1996), where the spectrum of singularities $`f(\alpha )`$ was introduced. We will give here a brief description of the multifractal approach. A presentation of the method can be found in Feder 1988, Schuster 1989, Vicsek 1989 and more formally in Falconer 1990.
The multifractal formalism has as starting point the partition function. The generalized or Renyi dimensions are defined by the asymptotic behavior (as the scale $`\delta `$ tends to zero) of the ratio between $`\mathrm{ln}Z(q,\delta )`$ and $`\mathrm{ln}\delta `$,
$$D(q)=\lim_{\delta \to 0}\frac{1}{q-1}\frac{\mathrm{ln}Z(q,\delta )}{\mathrm{ln}\delta }.$$
(7)
It is easy to see that for $`q=0`$ we obtain the box-counting or capacity dimension,
$$D(0)=\lim_{\delta \to 0}\frac{\mathrm{ln}N_{\mathrm{boxes}}(\delta )}{\mathrm{ln}(1/\delta )}.$$
(8)
For $`q=1`$, $`D(1)`$ is the information dimension, which is obtained from Eq. (7) by applying L’Hôpital’s rule. For $`q=2`$, $`D(2)`$ is the correlation dimension (see Schuster 1989 for alternative definitions and the relations between them). A simple fractal or monofractal is defined by a constant $`D(q)`$; dependence of $`D`$ on $`q`$ defines a multifractal. In most practical applications of the multifractal analysis, the limit in Eq. (7) cannot be calculated, either because we do not have information at small distances (as happens in this case) or because below a minimum physical length no scaling can exist at all (for example, below the size of a galaxy in the case of the galaxy distribution). This problem is usually overcome by finding a scaling range $`[\delta _1,\delta _2]`$ where a power–law can be fitted to the behavior of the partition function
$$Z(q,\delta )\propto \delta ^{\tau (q)}\text{ for }\delta _1\leq \delta \leq \delta _2.$$
(9)
The scaling exponents $`\tau (q)`$ are related with the generalized dimensions by
$$\tau (q)=(q-1)D(q).$$
(10)
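Operationally, the scaling exponents are the slopes of $`\mathrm{ln}Z`$ versus $`\mathrm{ln}\delta `$ over the chosen scaling range. A sketch of this estimate (reusing `log_partition_function` and `T_map` from the sketch in Sect. 2) might read:

```python
import numpy as np

def tau_and_D(T_map, q_values, deltas):
    ln_d = np.log(deltas)
    tau = np.empty(q_values.size)
    for i, q in enumerate(q_values):
        ln_Z = [log_partition_function(T_map, q, d) for d in deltas]
        tau[i] = np.polyfit(ln_d, ln_Z, 1)[0]   # slope over the scaling range
    return tau, tau / (q_values - 1.0)          # Eq. (10)

q_values = np.linspace(-5.0, 5.0, 40)           # grid chosen to avoid q = 1
tau, D = tau_and_D(T_map, q_values, np.array([2, 3, 4, 6, 8, 12, 24]))
# The f(alpha) spectrum then follows by a numerical Legendre transform
# (Eqs. (13)-(14) below), e.g.:
alpha = np.gradient(tau, q_values)
f_alpha = q_values * alpha - tau
```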
Other quantity, commonly used in the characterization of multifractals, is the so–called $`f(\alpha )`$ spectrum. If for a given box (labeled by $`j`$) the measure scales as
$$\mu _j(\delta )\propto \delta ^{\alpha _j},$$
(11)
then the exponent $`\alpha `$, which in principle depends on the position, is known as the crowding index or Hölder exponent. If all the points have the same scaling, then all the exponents $`\alpha `$ will be the same and this corresponds to a monofractal. Otherwise, if we have boxes with different scalings, what we have is a mixture of monofractals. This set is known as a multifractal (each monofractal being formed by the points with the same scaling and therefore with the same exponent $`\alpha `$). The exponent $`\alpha `$ is used to label the boxes covering the set supporting a measure, thereby allowing a separate counting for each value of $`\alpha `$. In a multifractal set $`\alpha `$ can take different values within a certain range, corresponding to the different strengths of the measure (Halsey et al. 1996). The subset formed by the boxes with the same $`\alpha `$ will be denoted $`S_\alpha `$. This subset has $`N_\alpha (\delta )`$ elements (boxes) and in general, for a multifractal set, this number varies with the scale $`\delta `$ as
$$N_\alpha (\delta )\propto \delta ^{-f(\alpha )}.$$
(12)
Comparing this expression with the definition of the box-counting dimension, Eq. (8), the quantity $`f(\alpha )`$ can be interpreted as the fractal dimension of the subset $`S_\alpha `$. However, this physical interpretation of the function $`f(\alpha )`$ is not always valid (Grassberger, Badii & Politi 1988; Falconer 1990).
It can be shown (Halsey et al. 1996; Martínez et al. 1990) that the quantities $`q`$ and $`\tau (q)`$ can be related through a Legendre transformation with $`\alpha `$ and $`f(\alpha )`$. These relations are:
$$\alpha (q)=\frac{d\tau (q)}{dq},$$
(13)
$$f(\alpha )=q\alpha (q)-\tau (q).$$
(14)
To illustrate this section we use the well known multiplicative multifractal cascade (Meakin 1987; Martínez et al. 1990). The construction of this multifractal is as follows: a square is divided into four equal square pieces and a probability $`p_i`$, $`(i=1,\mathrm{},4)`$, such that $`\sum_{i=1}^4p_i=1`$, is assigned to each one. Each piece is again subdivided into four small squares, allocating again a randomly permuted value $`p_i`$ to each one. The measure assigned to each of the new subsquares is the product of this value of $`p_i`$ and the corresponding value of its parent square. The subdivision process is continued recursively; a code sketch of this construction is given below. In Fig. 1 we show a realization of this multifractal on a grid of $`256\times 256`$ pixels for the values of the probabilities $`p_1=0.18`$, $`p_2=0.23`$, $`p_3=0.28`$ and $`p_4=0.31`$. We can easily calculate the theoretical values of the multifractal functions $`D(q)`$ and $`f(\alpha )`$ for this illustrative example (Martínez et al. 1990). With the multiplicative multifractal we tested the power of the method to recover the true dimensions. In Fig. 2 we show the generalized dimensions $`D(q)`$ and the corresponding spectrum of fractal dimensions $`f(\alpha )`$. These curves match the theoretically expected ones perfectly. Note that a single monofractal should render a constant $`D(q)`$ and a single point for $`f(\alpha )`$.
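The construction just described translates directly into code; in the following sketch the probabilities and grid size match the example of Fig. 1.

```python
import numpy as np

def multiplicative_cascade(levels, probs, rng):
    # Each square is split into four quadrants whose measures are the
    # parent measure times a random permutation of (p1, ..., p4).
    mu = np.array([[1.0]])
    for _ in range(levels):
        n = mu.shape[0]
        new = np.empty((2 * n, 2 * n))
        for i in range(n):
            for j in range(n):
                p = rng.permutation(probs)
                new[2*i, 2*j], new[2*i, 2*j+1] = p[0] * mu[i, j], p[1] * mu[i, j]
                new[2*i+1, 2*j], new[2*i+1, 2*j+1] = p[2] * mu[i, j], p[3] * mu[i, j]
        mu = new
    return mu

rng = np.random.default_rng(1)
mu = multiplicative_cascade(8, [0.18, 0.23, 0.28, 0.31], rng)  # 256 x 256
print(mu.shape, mu.sum())   # the total measure stays normalized to 1
```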
### 2.3 Testing Gaussianity
A Gaussian distribution of CMB temperature fluctuations is a generic prediction of inflation. Forthcoming high-resolution maps of the CMB will allow detailed tests of Gaussianity down to small angular scales, providing a crucial test of inflation. Most of the works that analyse CMB maps assume Gaussian initial fluctuations. Kogut et al. (1996) find that the genus, three-point correlation function, and two-point correlation function of temperature maxima and minima are all in good agreement with the hypothesis that the CMB anisotropy on angular scales larger than $`7^{\circ }`$ represents a random-phase Gaussian field. Other alternative methods have been proposed, such as the angular-Fourier transform (Lewin et al. 1998), Minkowski functionals (Schmalzing & Górski 1998), the correlation of excursion sets (Barreiro et al. 1998), and the bispectrum (Heavens 1998). In an analysis of the 4 years COBE–DMR data based on the bispectrum, Ferreira et al. (1998) have found that Gaussianity is ruled out at a confidence level in excess of $`99\%`$ near the multipole of order $`l=16`$.
In this section we will test the Gaussianity of the CMB data using an alternative method. The idea is to use the relation between the partition function and the generating function, the last one defined as,
$$G_x(t)=\langle e^{tx}\rangle .$$
(15)
If we know that $`x`$ is Gaussian distributed then, solving the integral corresponding to the mean value in the previous definition, we obtain:
$$G_x^{\mathrm{Gauss}}(t)=e^{t\langle x\rangle +\frac{t^2\sigma _x^2}{2}}.$$
(16)
It follows from the last expression that the cumulant function $`F`$ is:
$$F(t)=\mathrm{ln}G(t)=t\langle x\rangle +\frac{t^2\sigma _x^2}{2}.$$
(17)
Finally, the function
$$H(t)=F(t)-t\langle x\rangle -\frac{t^2\sigma _x^2}{2}$$
(18)
should be zero for all $`t`$ for a Gaussian field.
Let us consider the definition of $`Z(q,\delta )`$. If the measure is defined as
$$\mu _i^{\prime }(\delta )=e^{\mu _i(\delta )},$$
(19)
with $`\mu _i(\delta )`$ the same as in section 2. Then the partition function is
$$Z(q,\delta )=\sum_{i=1}^{N_{\mathrm{boxes}}(\delta )}e^{q\mu _i(\delta )},$$
(20)
or equivalently,
$$Z(q,\delta )=N_{\mathrm{boxes}}(\delta )\langle e^{q\mu }\rangle =N_{\mathrm{boxes}}(\delta )G_\mu (q).$$
(21)
This relation between $`Z(q,\delta )`$ and $`G_\mu (q)`$ allows us to construct the function $`H(q)`$ which, for a Gaussian measure $`\mu `$, should be zero for all $`q`$ at each scale $`\delta `$. This is a simple way to find non-Gaussian signals. The function $`H(q)`$ represents the contribution of all the moments higher than the second, a contribution that should be zero only for a Gaussian field. A plot of this function directly indicates the deviations from Gaussianity.
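As a minimal illustration, the test can be sketched as follows, with an artificial Gaussian sample standing in for the rescaled box measures; the helper name is ours.

```python
import numpy as np

def H_of_q(mu, q_values):
    # mu: box measures at one scale, rescaled to unit dispersion.
    m, s2 = mu.mean(), mu.var()
    F = np.array([np.log(np.mean(np.exp(q * mu))) for q in q_values])
    return F - q_values * m - 0.5 * q_values**2 * s2     # Eq. (18)

q_values = np.linspace(-3.0, 3.0, 61)
mu = np.random.default_rng(2).standard_normal(4096)      # Gaussian toy measure
H = H_of_q(mu / mu.std(), q_values)
print(np.abs(H).max())   # small but nonzero: a finite-sample (cosmic
                         # variance) effect, discussed in the results section
```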
## 3 Results: Application to COBE–DMR data
As a practical application of the methods presented, we apply them to the 4 years COBE–DMR data.
### 3.1 Description of the data
We use the COBE–DMR 4 years $`53+90`$ GHz map combination, which is the choice with the largest signal to noise ratio (Bennet et al. 1996). These data are in the Quad-Cube pixelization with a pixel size of $`2.6^{\circ }`$, and the resulting number of pixels is 6144. The value in each pixel represents $`\mathrm{\Delta }T`$ in $`mK`$ units. The dipole has already been subtracted. Associated with each pixel there is additional information: the number of times that this part of the sky was observed by the antenna. This information is relevant for the estimation of the instrumental noise.
Part of the data is contaminated by the galactic emission. There is a strip between $`\pm 20^{\circ }`$ (in galactic coordinates) in which the galactic emission dominates the CMB signal. This strip should not be included in the analysis in order to avoid spurious signals. In addition to this strip there are two patches in the sky (one near Orion and the other in Ophiuchus) that show strong galactic emission at mm wavelengths (Cayón et al. 1995), and should therefore also be removed from the analysis. When this mask is applied, the number of surviving pixels is reduced from the original 6144 to 3881.
### 3.2 Likelihood analysis
In order to determine the values of the quadrupole normalization $`Q_{\mathrm{rms-PS}}`$ and the spectral index $`n`$ that best fit the COBE data, we perform Monte Carlo simulations of the CMB maps for a scale-free model with a power spectrum given by $`P(k)\propto k^n`$, which has variance in the $`a_{lm}`$ multipoles given by (Bond and Efstathiou 1987):
$$C_l=\frac{4\pi }{5}Q_{\mathrm{rms-PS}}^2\frac{\mathrm{\Gamma }[l+(n-1)/2]\mathrm{\Gamma }[(9-n)/2]}{\mathrm{\Gamma }[l+(5-n)/2]\mathrm{\Gamma }[(3+n)/2]}.$$
(22)
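Numerically, Eq. (22) is conveniently evaluated through logarithms of Gamma functions; a sketch (the function name and units are ours):

```python
import numpy as np
from scipy.special import gammaln

def cl_scale_free(lmax, Q_rms_ps, n):
    # Eq. (22): a_lm variance for a scale-free spectrum P(k) ~ k^n,
    # normalized by Q_rms-PS (same temperature units as Q_rms_ps).
    l = np.arange(2, lmax + 1)
    log_ratio = (gammaln(l + (n - 1) / 2) + gammaln((9 - n) / 2)
                 - gammaln(l + (5 - n) / 2) - gammaln((3 + n) / 2))
    return l, 4 * np.pi / 5 * Q_rms_ps**2 * np.exp(log_ratio)

l, Cl = cl_scale_free(40, 18e-6, 1.0)   # Q = 18 microK (in K), n = 1
# For n = 1, l*(l+1)*C_l is constant: a quick sanity check.
print(l[:3], (l * (l + 1) * Cl)[:3])
```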
We consider different values for $`Q_{\mathrm{rms-PS}}`$ and $`n`$, ranging from $`Q_{\mathrm{rms-PS}}=4\mu `$K to $`Q_{\mathrm{rms-PS}}=35\mu `$K and from $`n=0.3`$ to $`n=2.3`$. We add instrumental noise based on the number of data collected by COBE–DMR at each pixel. Furthermore, there is another effect that must be taken into account: the cosmic variance. To treat this effect conveniently we perform a large number of simulations ($`\sim `$ 2000) for each pair of values ($`Q_{\mathrm{rms-PS}}`$, $`n`$) and then compare the average $`𝒵=\mathrm{ln}Z(q,\delta )`$ values of these simulations with the $`𝒵`$ corresponding to the COBE–DMR data (the values used were $`q=-120000,40000,72000,152000`$ and $`\delta =3,4,8,16`$ pixels). The choice of the $`q`$ and $`\delta `$ values is based on the test described in the last part of section 2.1. The size of the $`Z(q,\delta )`$ grid, $`N_q\times N_\delta `$, is not critical; what is relevant is the $`q`$ values considered. In particular, high order moments (i.e. large $`q`$) are very sensitive to the tail of the distribution, and therefore the parameter estimates obtained with those high values are not stable. The combination of $`q`$ and $`\delta `$ values used was one of the combinations for which the recovered parameters $`Q_{\mathrm{rms}}`$ and $`n`$ were closest to the input parameters and had the smallest error bars. As mentioned in section 2, $`q`$ should take values of order $`10^5`$ in order to distinguish between models with temperature fluctuations of order $`10^{-5}`$. The values of $`q`$ were chosen to be asymmetric in an attempt to account for possible asymmetries that could exist between the negative and positive temperature fluctuations. The range for $`\delta `$ runs from 2 pixels (approximately the antenna size) to 24 pixels, which is the largest box size for which the map still contains at least 8 boxes. Using a maximum likelihood method one can then determine the best-fitting parameter values of the simulations (signal + noise) to the COBE–DMR data.
In Fig. 3 we show a contour plot of the likelihood obtained for the COBE–DMR data. The maximum is at $`Q_{\mathrm{rms-PS}}=10_{-2.5}^{+3}\mu `$K and $`n=1.8_{-0.65}^{+0.35}`$ (95% marginalised errors), and the contour level at 68% is compatible with the assumed standard value $`Q_{\mathrm{rms-PS}}=18\pm 3\mu `$K for the Einstein-de Sitter model with a scale invariant primordial spectrum of density perturbations, $`n=1`$. The various analyses of the 4 years COBE data combined give as the best-fitting parameters $`Q_{\mathrm{rms-PS}}=15.3_{-2.8}^{+3.8}\mu `$K and $`n=1.2\pm 0.3`$. The result presented here predicts larger values of $`n`$ and smaller values of $`Q_{\mathrm{rms-PS}}`$ than the result indicated above (although always inside the anticorrelation law for the two parameters). This result is in agreement with the one found by Smoot et al. (1994), using a similar approach. Smoot et al. (1994) found for the best fit $`Q_{\mathrm{rms-PS}}=13.2\pm 2.5\mu `$K and $`n=1.7_{-0.6}^{+0.3}`$. A possible explanation for the discrepancy between our results and those obtained with the standard methods could be a bias present in the likelihood estimator. In the tests of our algorithm we found a systematic bias in the marginalized likelihood functions both for $`Q_{\mathrm{rms}}`$ and $`n`$, with typical values of $`\delta n\sim +0.2`$ and $`\delta Q_{\mathrm{rms}}\sim -2`$, which could explain part of our discrepancy. The reason for this bias may be the difference between the assumed Gaussian form for the likelihood of the partition function in eq. (4) and the real non-Gaussian distribution. The probability distribution of $`𝒵`$ at each $`(q,\delta )`$ obtained from simulations is similar to a Gaussian probability distribution but with a longer tail towards high values. We also think that the noise may contribute to this bias. The high order moments (large $`q`$) of the partition function are very sensitive to the tails of the distribution of the temperature fluctuations. A low signal to noise ratio (as is the case for the COBE-DMR data) could raise the parameter $`n`$ that best fits the COBE-DMR data. We did some tests in this direction, and apparently the noise can indeed increase the value of $`n`$ (and consequently produce a lower value of $`Q_{\mathrm{rms}}`$).
### 3.3 Multifractal analysis
We apply the formalism of section 2.2 to the simulations and to the COBE–DMR data. In Fig. 4 we plot $`D(q)`$ and $`f(\alpha )`$ for the COBE–DMR data and for one model ($`Q_{\mathrm{rms-PS}}=15\mu `$K, $`n=1.2`$) inside the $`Q_{\mathrm{rms-PS}}`$–$`n`$ degeneracy, with its error bars. The $`D(q)`$ curve has been obtained by fitting a power–law to the partition function in the range of scales $`2\leq \delta \leq 24`$ pixels, following Eqs. (9) and (10). Note that the value of $`D(0)`$ is not 2, as would be expected for a continuous bidimensional surface. The mask slightly lowers this value.
By means of a Legendre transform (Eqs. (13) and (14)) we have obtained the corresponding $`f(\alpha )`$ curve. A narrow $`f(\alpha )`$ curve corresponds to very homogeneous data. If the measure associated with the data were multifractal in nature, these curves should be the same for all the scale ranges. We have found that this is not the case for the COBE–DMR data: the multifractal curves corresponding to different scale ranges do not match each other. CMB simulations without noise show the same behavior. The reason for this lies in the fact that a scaling like that in Eq. (9) is not present. This can be illustrated by looking at the behavior of the local slopes of $`\mathrm{ln}Z(q,\delta )`$ vs. $`\mathrm{ln}\delta `$. In Fig. 5 we show the change in the reduced slopes ($`\tau (q,\delta )/(q-1)`$) as a function of the scale for a fixed value of the parameter $`q`$ for the COBE–DMR data. For this plot the analysis was performed only on the top and bottom faces of the Quad-Cube, which are not affected by the mask. For a multifractal measure these curves should be horizontal straight lines. As we can appreciate in the left panel, this is not the case for the COBE–DMR data. The result for a simulation without noise is shown in the right panel. In both cases, we do not see a neat plateau for large absolute values of $`q`$. However, it is not clear whether the fluctuations of the local reduced slopes are just due to numerical noise related to the resolution of the maps (i.e. the limited number of pixels) or whether, on the contrary, these fluctuations are intrinsic to the measure and therefore prove that the measure is not a multifractal. Although our results neither support nor contradict this interpretation, it seems more natural to expect fractal behaviour in the case that one uses the absolute value of the relative temperature fluctuations $`\mathrm{\Delta }T/T`$ as the measure (Mollerach et al. 1998). As shown in that paper, $`\mathrm{\Delta }T/T`$ fluctuations generated by the Sachs–Wolfe effect behave like a fractional Brownian fractal.
### 3.4 Gaussianity
To test whether the COBE–DMR data are Gaussian distributed, we compare the $`H(q)`$ curve for COBE–DMR with the curves arising from the best-fitting CMB Gaussian models obtained in section 3.2. In Fig. 6 we show the plots of $`H(q)`$ for different grid scales. For each realization, the measure is rescaled in order to have dispersion equal to one. This allows a small, common range of $`q`$ values for all scales. We would like to point out the deviation of the mean value from zero when $`q`$ moves away from zero. This is due to the fact that we have a finite number of pixels (i.e. a cosmic variance effect). The predicted behavior of equation (15) is only true when we compute the mean over infinitely many values (or, equivalently, solve the integral between $`-\mathrm{\infty }`$ and $`+\mathrm{\infty }`$). Otherwise $`H(q)`$ is not zero at large values (positive and negative) of $`q`$. Fig. 7 shows the likelihood distribution of the 1000 Gaussian realizations (with noise), with the dotted line corresponding to the COBE value. It is clear that the COBE–DMR data are perfectly compatible with the Gaussian hypothesis.
## 4 Discussion and Conclusions
We have shown in this work the power of the partition function to describe CMB maps taking into account the information given at different scales and by different moments. We have also shown the flexibility of such a function to be used in various analyses: standard likelihood, multifractal and Gaussian. We applied these analyses to the 4 years COBE–DMR data.
Based on the likelihood function we find the best-fitting parameters $`Q_{\mathrm{rms-PS}}=10_{-2.5}^{+3}\mu `$K and $`n=1.8_{-0.65}^{+0.35}`$. The agreement between our work and that of Smoot et al. (1994) is remarkable.
The COBE–DMR data (and the simulations of scale invariant power spectrum) do not show a fractal behaviour with regard to the absolute temperature map. On the other hand, recent galaxy surveys covering large scales ($`>100`$ Mpc) do not show a fractal behaviour either (Wu, Lahav & Rees 1998). Both results allow one to conclude that neither the mass distribution (assuming a linear bias) nor the intensity of the CMB shows a fractal behaviour on large scales.
The partition function analysis performed shows no evidence for non-Gaussianity in the COBE–DMR data. This is in agreement with all the previous analyses of the COBE–DMR data except the one by Ferreira et al. (1998). Simulations done at higher resolution have shown the power of this method to discriminate between Gaussian and non-Gaussian signals. That analysis will be presented in a future paper.
Finally, we would like to remark that the likelihood analysis based on the partition function is computationally intensive. A non-optimized code applied to the COBE-DMR data takes a few days (CPU time) to run on an Alpha server 2100 5/250. Moreover, the computation of the partition function increases with the number of pixels $`N`$ as $`O(N)`$. This rate should be compared with that of the most widely applied method used to compress data and to estimate cosmological parameters: the power spectrum of the fluctuations. The direct computation of the power spectrum goes like $`O(N\mathrm{log}N)`$ (this behaviour is due to the FFT). Standard brute-force approaches used to estimate the power spectrum go like $`O(N^3)`$. The reason for this $`O(N^3)`$ rate is the matrix inversion and determinant calculation, whose dimension grows with the number of pixels. On the contrary, in the partition function likelihood analysis the number of bins (or matrix dimension) of the likelihood is $`N_q\times N_\delta `$, this number usually being well below one thousand (even for high resolution maps). The number of moments $`q`$ is an arbitrary parameter independent of $`N`$, and the number of scales $`\delta `$ increases as $`O(N^{1/2})`$. The process of inverting the correlation matrix is clearly much cheaper in the case of the partition function. This point makes the method useful for forthcoming large data-sets. One can therefore consider the partition function as an alternative way to compress large data sets. Furthermore, for the general situation of non-Gaussian data sets, the partition function is clearly preferable to the power spectrum, since the former contains information on several moments of the data.
## Acknowledgments
We would like to thank R. B. Barreiro for kindly providing her program for the simulations, and L. Cayón for help dealing with the COBE–DMR maps and interesting discussions. SM acknowledges CONICET and the Vicerrectorado de Investigación de la Universidad de Valencia for financial support. JMD thanks the DGES for a fellowship. This work has been financially supported by the Spanish DGES, project n. PB95-1132-C02-02 and project n. PB96-0707, and by the Spanish CICYT, project n. ESP96-2798-E. The COBE data sets were developed by the NASA Goddard Space Flight Center under the guidance of the COBE Science Working Group and were provided by the NSSDC.
# One and two dimensional Bose-Einstein condensation of atoms in dark magneto-optical lattices
## I Introduction
After the outstanding experiments on Bose-Einstein condensation (BEC) of an atomic gas in a magnetic trap, studies of the phenomena originating from the quantum statistics of particles have become one of the main subjects in contemporary atomic physics. Achieving BEC by optical means is of special interest . On the one hand, this allows a deeper understanding of the nature and physical properties of condensates under various conditions. On the other hand, work in this direction may lead to new effective methods of super-deep cooling. It is worth noting that in refs. laser fields play an auxiliary role: they are used for precooling down to sufficiently low temperatures, with subsequent evaporative cooling in a magnetic trap, so that in the final stage of the experiments optical fields are absent and most of the atoms escape from the trap during evaporative cooling. One of the main components of the optical methods is the formation of a non-dissipative optical or magneto-optical potential, in which atoms scatter photons at very low rates. There exist two principal ways of solving this problem. The first one consists in the use of far-off-resonance light fields with high intensity . In this case a potential depth of order $`10^3\epsilon _r`$ ($`\epsilon _r=(\mathrm{\hbar }k)^2/2M`$ is the single-photon recoil energy) and a rate of spontaneous photon scattering of about $`1\mathrm{s}^{-1}`$ are achieved . Another way is connected with the use of coherent population trapping (CPT) phenomena in near-resonance light fields. As is known , under the resonant interaction of polarized radiation with atoms having optical transitions $`F_g=F\to F_e=F`$ with $`F`$ an integer, or $`F_g=F\to F_e=F-1`$ ($`g`$ and $`e`$ denote the ground and excited states respectively), there exist dark states. These states are coherent superpositions of the ground-level Zeeman substates which are fully decoupled from the light, $`(\widehat{𝐝}\cdot 𝐄)|\psi _{nc}\rangle =0`$. Due to optical pumping, atoms are trapped in these states and do not scatter light. The use of fields with a polarization gradient allows one to create a potential in the dark states (a dark potential). Although the ac Stark shift vanishes for dark states, dark potentials can be created by the atomic translational motion (so-called gauge or geometric potentials) , or by applying a static (magnetic or electric) field. In the latter case the atomic multipole moments are spatially non-uniform, which leads to a coordinate dependence of the interaction energy with the static field. As a result, a periodic potential is formed. On the other hand, static fields induce precession of the multipole moments and destroy the dark states. However, this effect can be suppressed to negligible values due to two factors:
1. We can choose a specific geometry of fields in which atoms are localized near the points where dark states are not destroyed by the static field.
2. The use of a high-intensity laser field allows one to lock atoms in the dark states. As a result, as is shown in ref., the rate of spontaneous scattering is inversely proportional to the light intensity.
A quantitative treatment of dark magneto-optical lattices in the non-dissipative regime at high laser intensity was developed in ref. for the one-dimensional case. It was shown that the potential depth is determined by the ground-state Zeeman splitting $`\mathrm{}\mathrm{\Omega }`$, while the period of the lattice is of order of the light wavelength $`\lambda `$. Thus, we can obtain a very deep potential with a large spatial gradient. Both these features lead to a large energy separation between vibrational levels, $`\sqrt{\mathrm{}\mathrm{\Omega }\epsilon _r}`$, which can exceed the laser cooling temperature. Under these conditions, atoms in the lower vibrational levels are localized within a very small distance $`\lambda \sqrt{\epsilon _r/\mathrm{}\mathrm{\Omega }}`$. We stress that tunneling between wells is exponentially small (by the factor $`\mathrm{exp}(\sqrt{\mathrm{}\mathrm{\Omega }/\epsilon _r})`$), which allows one to consider the atoms in each well as independent systems.
In the present paper, with the $`F_g=1\to F_e=1`$ transition as an example, one- and two-dimensional non-dissipative magneto-optical lattices are considered, with special attention to the formation of atomic structures of lower dimensionality ($`2D`$ – planes, $`1D`$ – lines). In such a lattice the spontaneous scattering of photons is strongly reduced, and the main dissipative mechanism is elastic interatomic collisions. In the framework of an ideal Bose-gas model it is shown that, upon applying an additional weak confining potential, condensation is possible at the temperatures and densities typical of current laser cooling experiments.
It is worth noting that in the low-saturation limit, which corresponds to the dissipative regime, dark magneto-optical lattices have been studied both theoretically and experimentally . Sub-Doppler cooling down to $`20\epsilon _r`$ was predicted and observed, and experimental evidence for the quantization of the translational motion has been presented.
## II Non-dissipative dark magneto-optical lattice
Let us consider a gas of Bose atoms with total angular momenta $`F_g=1\to F_e=1`$ in a resonant, spatially nonuniform, monochromatic laser field
$$𝐄(𝐫,t)=𝐄(𝐫)e^{i\omega t}+c.c.$$
(1)
in the presence of a static magnetic field $`𝐁`$. As is known , for all $`F\to F`$ ($`F`$ an integer) transitions there exist dark CPT states uncoupled from the laser field (1):
$$\left(\widehat{𝐝}𝐄(𝐫)\right)|\psi _{nc}(𝐫)=0,$$
(2)
where $`\widehat{𝐝}`$ is the dipole moment operator. In the case under consideration, $`F=1`$, this state has the form :
$$|\psi _{nc}(𝐫)=\frac{1}{|𝐄(𝐫)|}\sum _{q=0,\pm 1}E^q(𝐫)|g,\mu =q,$$
(3)
where $`E^q(𝐫)`$ are the field components in the spherical basis $`\{𝐞_0=𝐞_z,𝐞_{\pm 1}=(𝐞_x\pm i𝐞_y)/\sqrt{2}\}`$. The state (3) is a superposition of the ground-state Zeeman wave functions $`|g,\mu `$. We note that in the general case this state is neither an eigenvector of the Hamiltonian of the interaction with the magnetic field $`\widehat{H}_B=(\widehat{\mu }𝐁)`$ nor of the kinetic energy Hamiltonian $`\widehat{H}_K=\widehat{p}^2/2M`$. Thus, the state $`|\psi _{nc}(𝐫)`$ is not strictly stationary. However, the corrections to the wave function resulting from the translational motion and the magnetic field can be treated as small perturbations with respect to the atom-light interaction under the conditions:
$$V(𝐫)\sqrt{G}\gg k\overline{v},\mathrm{\Omega },$$
(4)
where $`V(𝐫)=|<\widehat{d}>𝐄(𝐫)|/\mathrm{}`$ is the Rabi frequency, $`G=V^2(𝐫)/[\gamma ^2/4+\delta ^2+V^2(𝐫)]`$ is the effective saturation parameter, $`\delta `$ is the detuning, $`\gamma `$ is the inverse lifetime of the excited state, and $`\overline{v}`$ is the average atomic velocity. In this case most of the atoms are pumped into the dark state $`|\psi _{nc}(𝐫)`$. Under the conditions (4) the relative populations of the CPT state, $`n_{nc}`$, and of the excited state, $`n_e`$, obey the relation:
$$(1-n_{nc})\sim n_e\sim \left(\frac{\mathrm{max}\{k\overline{v},\mathrm{\Omega }\}}{V(𝐫)\sqrt{G}}\right)^2\ll 1.$$
Then the evolution of a single atom can be described, with the same accuracy, by the effective Hamiltonian:
$$\widehat{H}_{eff}^{(1)}=\psi _{nc}(𝐫)|(\widehat{H}_K+\widehat{H}_B)|\psi _{nc}(𝐫).$$
(5)
Using the explicit form of the CPT-state (3), we write the one-particle Hamiltonian (5) as a sum of four terms:
$$\widehat{H}_{eff}^{(1)}=\frac{\widehat{p}^2}{2M}+U(𝐫)+\frac{1}{2M}\left\{(𝐀(𝐫)\widehat{𝐩})+(\widehat{𝐩}𝐀(𝐫))\right\}+W(𝐫).$$
(6)
The first term is the kinetic Hamiltonian. The second one is the magneto-optical potential:
$$U(𝐫)=\mathrm{}\mathrm{\Omega }\frac{i(𝐁[𝐄(𝐫)\times 𝐄^{}(𝐫)])}{|𝐁||𝐄(𝐫)|^2},$$
(7)
which is independent of the amplitude and phase of the light field. The last two terms in Eq. (6) are caused by the translational motion of the atom. The first of them is of the order of $`kv`$ and can be interpreted as the interaction with the effective vector potential:
$$A_j(𝐫)=i\mathrm{}\left(\frac{𝐄^{}}{|𝐄|}\frac{\partial }{\partial x_j}\frac{𝐄}{|𝐄|}\right).$$
(8)
The second correction is of the order of the recoil energy $`\mathrm{}\omega _r`$ and contributes to the atomic potential energy:
$$W(𝐫)=\frac{\mathrm{}^2}{2M}\sum _j\left|\frac{\partial }{\partial x_j}\frac{𝐄}{|𝐄|}\right|^2.$$
(9)
If the Zeeman splitting obeys the conditions
$$\mathrm{\Omega }\gg k\overline{v},\epsilon _r/\mathrm{},$$
(10)
then the last two terms in Eq.(6) are negligible, i.e.
$$\widehat{H}_{eff}^{(1)}\simeq \frac{\widehat{p}^2}{2M}+U(𝐫).$$
In this case the problem reduces to the motion of a particle in the magneto-optical potential (7) alone. The depth of this potential is determined by the ground-state Zeeman splitting $`\mathrm{}\mathrm{\Omega }`$ (below we assume $`\mathrm{\Omega }>0`$), and its period is of order of the light wavelength $`\lambda `$.
As is well known, in a periodic potential the energy spectrum has a band structure. However, due to condition (10), tunneling is negligible for the lower bands, i.e., strong binding of the particle in a single well is realized. It can be shown that the widths of the lower bands are exponentially small, by the factor $`\mathrm{exp}\left(\sqrt{\mathrm{}\mathrm{\Omega }/\epsilon _r}\right)`$, with respect to the energy separation between bands. The positions of these bands are determined (with good accuracy) by a harmonic expansion of the potential in the vicinity of the well bottom:
$$U(𝐫)\simeq \mathrm{}\mathrm{\Omega }k^2\sum _{i,j=1,2,3}C_{ij}x_ix_j.$$
(11)
As is seen, the separation between the lower levels is of the order of $`\sqrt{\mathrm{}\mathrm{\Omega }\epsilon _r}`$ for eigenvalues of $`\widehat{C}`$ of order unity.
As shown in ref., atoms in the lower vibrational levels scatter photons at extremely low rates:
$$\tau ^{-1}\sim \gamma \left(\frac{\mathrm{\Omega }}{V}\right)^2\sqrt{\frac{\epsilon _r}{\mathrm{}\mathrm{\Omega }}}\ll \gamma .$$
Here the factor $`(\mathrm{\Omega }/V)^2\ll 1`$ is directly connected with the CPT effect: in a strong light field the probability of leaving the dark state is inversely proportional to the light intensity. The additional factor $`\sqrt{\epsilon _r/\mathrm{}\mathrm{\Omega }}\ll 1`$ arises from the localization of atoms in the vicinity of the points where the dark state is not destroyed by the magnetic field.
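To get a feel for the numbers, the short sketch below evaluates this suppressed scattering rate for a few ratios $`V/\mathrm{\Omega }`$. The decay rate and recoil frequency are illustrative values for the $`{}_{}{}^{87}Rb`$ D1 line, and are assumptions of the sketch rather than quantities fixed by the text.

```python
import numpy as np

gamma   = 3.6e7   # 1/s, excited-state decay rate (87Rb D1; assumed value)
Omega   = 1.8e7   # rad/s, ground-state Zeeman splitting for B ~ 4 G (assumed)
omega_r = 2.3e4   # rad/s, recoil frequency eps_r/hbar for lambda = 787 nm

for ratio in (10.0, 30.0, 100.0):   # Rabi frequency V in units of Omega
    # rate ~ gamma (Omega/V)^2 sqrt(eps_r / hbar Omega)
    rate = gamma * ratio**-2 * np.sqrt(omega_r / Omega)
    print("V/Omega = %5.0f  ->  scattering rate ~ %.0e photons/s" % (ratio, rate))
```

For these assumed parameters the rate drops to $`10^210^4`$ photons/s, orders of magnitude below the rates typical of ordinary dissipative lattices.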
## III Ideal Bose-gas in dark magneto-optical lattices
As indicated in the Introduction, the spontaneous photon scattering in dark magneto-optical lattices can be strongly suppressed. Hence the main dissipative mechanism leading to thermal equilibrium is elastic interatomic collisions. This allows one to apply the methods of statistical physics to the study of the stationary state of the atoms, which corresponds to thermodynamic equilibrium. In the present paper we restrict our treatment to the ideal gas model, in which the contribution of atom-atom interactions to the system energy is negligible. At the same time, collisions are implicitly taken into account as the mechanism establishing thermal equilibrium.
BEC is one of the most interesting phenomena arising in a Bose gas at sufficiently low temperatures. As recent studies have shown, the character and parameters of the phase transition depend essentially on the dimensionality of the system and on the presence of an external confining potential. It is well known that for a free gas BEC is absent in the 1D and 2D cases. However, as has recently been shown in refs., in both the 1D and 2D cases BEC becomes possible upon applying a confining potential. Moreover, the conditions for achieving BEC are appreciably less stringent than in the 3D case.
The non-dissipative dark magneto-optical lattices considered in the previous section are promising tools for studies of BEC in systems of lower dimensionality. Let us consider concrete examples.
### A 1D lattice – 2D condensation
The simplest realization of a 1D dark magneto-optical lattice is the $`lin\perp lin`$ light field configuration plus a magnetic field directed along the wave propagation direction (see fig. 2). Here the dark magneto-optical potential has the form :
$$U=\mathrm{}\mathrm{\Omega }\mathrm{cos}(2kz).$$
(12)
Atoms are localized in the planes $`z_n=\lambda n/2`$, where the field has $`\sigma _{-}`$ polarization and the dark state coincides with the Zeeman substate $`|F_g=1,\mu =-1`$. The lower energy levels of the potential (12) correspond to localization of atoms in the vicinity of a single well bottom, with negligible tunneling between wells. Based on the harmonic approximation, one finds the energy separation between the lower levels, $`\mathrm{\Delta }\epsilon =\sqrt{8\mathrm{}\mathrm{\Omega }\epsilon _r}`$. Then at temperatures
$$k_BT<\sqrt{8\mathrm{}\mathrm{\Omega }\epsilon _r}$$
atoms occupy the vibrational ground state. In other words, the translational motion of atoms along $`z`$ is frozen. For instance, for the D1-line of $`{}_{}{}^{87}Rb`$ ($`\lambda =787nm`$) at a magnetic field amplitude $`B\sim 4G`$, freezing is achieved at $`T<10^{-5}K`$. Due to the absence of tunneling, each of the localization planes can be considered as an independent thermodynamic and mechanical system, in which particles move freely along the $`x`$ and $`y`$ axes.
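As a minimal numerical check of this estimate, the sketch below evaluates $`\mathrm{\Delta }\epsilon =\sqrt{8\mathrm{}\mathrm{\Omega }\epsilon _r}`$ for the parameters quoted above; the value $`|g_F|=1/2`$ assumed for the $`F_g=1`$ ground state is an input of the illustration, not of the text.

```python
import numpy as np

hbar, kB, muB = 1.0546e-34, 1.3807e-23, 9.274e-24   # SI units
M   = 1.443e-25        # kg, mass of 87Rb
lam = 787e-9           # m, wavelength used in the text
B   = 4e-4             # T (4 G), field amplitude from the text
gF  = 0.5              # assumed |g_F| of the F_g = 1 ground state

eps_r  = (hbar * 2 * np.pi / lam)**2 / (2 * M)   # recoil energy (hbar k)^2 / 2M
hOmega = gF * muB * B                            # Zeeman splitting, as an energy

print("recoil energy / kB    = %.0f nK" % (eps_r / kB * 1e9))
print("freezing temperature  = %.1e K" % (np.sqrt(8 * hOmega * eps_r) / kB))
# -> ~1e-5 K, in line with the estimate in the text
```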
If an additional confining (in the $`xy`$-plane) potential is applied (for example, a far-off-resonance optical shift), then BEC can be reached in a single plane. As was shown in ref., for a 2D harmonic potential with frequency $`f`$ the transition temperature is given by:
$$N=1.6\left(\frac{k_BT_c}{\mathrm{}f}\right)^2,$$
(13)
where $`N`$ is the number of atoms in a single plane. For an atomic sample with density $`n\sim 10^{11}-10^{12}cm^{-3}`$ and size $`L\sim 10^{-1}cm`$ we have $`N\sim 10^5-10^6`$ for each plane at a periodicity of about $`10^{-4}cm`$. Then for a confining potential with $`f\sim 10^3Hz`$ the transition temperature $`T_c\sim 10^{-5}K`$ is a few orders of magnitude higher than the transition temperature in 3D magnetic traps .
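The quoted transition temperature can be reproduced by inverting Eq. (13), as in the sketch below; interpreting $`f`$ as an angular frequency $`2\pi \times 10^3`$ rad/s is an assumption of the illustration.

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23
f = 2 * np.pi * 1e3            # confining trap frequency, taken as angular (assumption)

for N in (1e5, 1e6):           # atoms per plane, the range quoted above
    Tc = hbar * f / kB * np.sqrt(N / 1.6)   # invert Eq. (13)
    print("N = %.0e  ->  Tc = %.1e K" % (N, Tc))
# -> Tc ~ 1e-5 K across the quoted range of N
```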
In the case under consideration, BEC can be observed, for instance, through time-of-flight measurements of the atomic momentum distribution after turning off the confining potential. It should be noted that the phases of the condensates in different planes are independent. Thus, if the lattice is considered as a whole, we have only quasicondensation.
### B 2D lattice – 1D condensation
An example of a three-beam field configuration forming a 2D dark magneto-optical potential is shown in fig. 3. Here the three wave vectors lie in the $`xy`$-plane and make an angle $`2\pi /3`$ with one another. The linear polarizations of the beams make the same angle $`\varphi \ne 0,\pi /2`$ with the $`z`$-axis, and this angle can be varied over a wide range. The main reason for the requirement $`\varphi \ne \pi /2`$ is that for $`\varphi =\pi /2`$ there exist lines where the field amplitude vanishes due to interference. In the vicinity of these lines the CPT conditions (4) are violated. The case $`\varphi =0`$ is not suitable due to the absence of the magneto-optical potential (7).
Atoms are localized in the vicinity of straight lines where the field has $`\sigma _{-}`$ circular polarization. In the same manner as in the previous subsection, one can show that at sufficiently low temperatures, $`k_BT<\sqrt{\mathrm{}\mathrm{\Omega }\epsilon _r}`$, the translational degrees of freedom along $`x`$ and $`y`$ are frozen out. Each of the localization lines can be considered as an independent 1D system. Upon applying a confining (along $`z`$) harmonic potential, condensation occurs at the temperature :
$$N=\frac{k_BT_c}{\mathrm{}f}\mathrm{log}\left(\frac{2k_BT_c}{\mathrm{}f}\right),$$
(14)
where $`N`$ is the number of atoms in a line and $`f`$ is the oscillation frequency.
For an atomic sample with density $`n\sim 10^{12}cm^{-3}`$ and size $`L\sim 10^{-1}cm`$ we have $`N\sim 250`$ for each line at a periodicity of about $`10^{-4}cm`$. Then for $`f\sim 10^3Hz`$ we find $`T_c\sim 10^{-6}K`$, which is typical of laser cooling experiments.
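Equation (14) is transcendental in $`T_c`$; a simple bisection suffices to invert it, as in the following sketch (again interpreting $`f`$ as an angular frequency, which is an assumption of the illustration).

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23
N, f = 250.0, 2 * np.pi * 1e3   # atoms per line; ~1 kHz trap, taken as angular (assumption)

# Solve N = x*log(2x) for x = kB*Tc/(hbar*f) by bisection (monotonic for x > 1).
lo, hi = 1.0, 1e4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid * np.log(2 * mid) < N else (lo, mid)

print("Tc = %.1e K" % (lo * hbar * f / kB))   # -> ~1e-6 K
```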
Note that, as in the case of the 1D lattice, only quasicondensation is possible if the whole gas volume is considered.
## IV Conclusion
We have considered dark magneto-optical lattices in the regime where the Rabi frequency of the laser field is much greater than the Zeeman splitting of the ground state. In this regime the lattice is essentially non-dissipative. We have shown that $`2D`$ and $`1D`$ atomic structures can be formed in these lattices. We have then studied Bose-Einstein condensation in these lower-dimensional systems in the framework of the ideal gas model. It is predicted that BEC is possible in both the $`2D`$ and $`1D`$ cases at temperatures $`10^{-6}-10^{-5}K`$ and densities $`10^{11}-10^{12}cm^{-3}`$. Concluding, we note that the approach developed above can be applied to any non-dissipative optical lattice of sufficiently large depth; an example is a far-off-resonance optical lattice .
###### Acknowledgements.
The authors thank Leo Hollberg and all members of his scientific group at NIST, Boulder, for helpful discussions. AVT and VIYu acknowledge the hospitality of NIST, Boulder. This work was supported by the Russian Foundation for Basic Research (grant no. 98-02-17794).
|
no-problem/9902/hep-ph9902405.html
|
ar5iv
|
text
|
# References
DESY 99-019
IISc-CTS-3/99
hep-ph/9902405
February 1999
Infra-red stable fixed points of R-parity violating Yukawa couplings in supersymmetric models
B. Ananthanarayan,
Centre for Theoretical Studies, Indian Institute of Science,
Bangalore 560 012, India
P. N. Pandita,
Theory Group, Deutsches Elektronen-Synchrotron DESY,
Notkestrasse 85, D 22603 Hamburg, Germany
and
Department of Physics, North-Eastern Hill University,
Shillong 793 022, India<sup>1</sup><sup>1</sup>1Permanent Address
## Abstract
We investigate the infra-red stable fixed points of the Yukawa couplings in the minimal version of the supersymmetric standard model with R-parity violation. Retaining only the R-parity violating couplings of higher generations, we analytically study the solutions of the renormalization group equations of these couplings together with the top- and b-quark Yukawa couplings. We show that only the B-violating coupling $`\lambda _{233}^{\prime \prime }`$ approaches a non-trivial infra-red stable fixed point, whereas all other non-trivial fixed point solutions are either unphysical or unstable in the infra-red region. However, this fixed point solution predicts a top-quark Yukawa coupling which is incompatible with the top quark mass for any value of $`\mathrm{tan}\beta `$.
PACS No.: 11.10.Hi, 11.30.Fs, 12.60.Jv
Keywords: Supersymmetry, R-parity violation, Infra-red fixed points
There is considerable interest in the study of infra-red (IR) stable fixed points of the standard model (SM) and its extensions, especially those of the minimal supersymmetric standard model (MSSM). This interest follows from the fact that in the SM (and in the MSSM) there is a large number of unknown dimensionless Yukawa couplings, as a consequence of which the fermion masses cannot be predicted. One may attempt to relate the Yukawa couplings to the gauge couplings via the Pendleton-Ross infra-red stable fixed point (IRSFP) for the top-quark Yukawa coupling , or via the quasi-fixed point behaviour . The predictive power of the SM and its supersymmetric extensions may, thus, be enhanced if the renormalization group (RG) running of the parameters is dominated by IRSFPs. Typically, these fixed points are for ratios like Yukawa coupling to the gauge coupling, or, in the context of supersymmetric models, the supersymmetry breaking tri-linear A-parameter to the gaugino mass, etc. These ratios do not always attain their fixed point values at the weak scale, the range between the GUT (or Planck) scale and the weak scale being too small for the ratios to closely approach the fixed point. Nevertheless, the couplings may be determined by quasi-fixed point behaviour , where the value of the Yukawa coupling at the weak scale is independent of its value at the GUT scale, provided the Yukawa couplings at the unification scale are large. For the fixed point or quasi-fixed point scenarios to be successful, it is necessary that these fixed points be stable .
Since supersymmetry necessitates the introduction of superpartners for all known particles in the SM (in addition to the introduction of two Higgs doublets), which transform in an identical manner under the gauge group, we have additional Yukawa couplings in supersymmetric models which violate baryon number (B) or lepton number (L). In the MSSM a discrete symmetry called R-parity ($`R_p`$) is invoked to eliminate these B and L violating Yukawa couplings . However, the assumption of $`R_p`$ conservation at the level of the MSSM appears to be ad hoc, since it is not required for the internal consistency of the model. Therefore, the study of the MSSM including R-parity violation deserves serious consideration.
Recently attention has been focussed on the study of renormalization group evolution of $`R_p`$ violating Yukawa couplings of the MSSM , and their quasi-fixed points. This has led to certain insights and constraints on the quasi-fixed point behavior of some of the $`R_p`$ violating Yukawa couplings, involving higher generation indices. We recall that the usefulness of the fixed point and quasi-fixed point scenarios is the existence of stable infra-red fixed points. The purpose of this paper is to address the important question of the infra-red fixed points of supersymmetric models with $`R_p`$ violation, and their stability. Our interest is in the structure of the infra-red stable fixed points, rather than the actual values of the fixed points.
To this end we shall consider the supersymmetric standard model with the minimal particle content and with $`R_P`$ violation, and refer to it as MSSM with R-parity violation. We begin by recalling some of the basic features of the model. The superpotential of the MSSM is given by
$$W=\mu H_1H_2+(h_U)_{ab}Q_L^a\overline{U}_R^bH_2+(h_D)_{ab}Q_L^a\overline{D}_R^bH_1+(h_E)_{ab}L_L^a\overline{E}_R^bH_1,$$
(1)
to which we add the L and B violating terms
$`W_L=\mu _iL_iH_2+{\displaystyle \frac{1}{2}}\lambda _{abc}L_L^aL_L^b\overline{E}_R^c+\lambda _{abc}^{}L_L^aQ_L^b\overline{D}_R^c,`$ (2)
$`W_B={\displaystyle \frac{1}{2}}\lambda _{abc}^{\prime \prime }\overline{D}_R^a\overline{D}_R^b\overline{U}_R^c,`$ (3)
respectively, as allowed by gauge invariance and supersymmetry. In Eq. (1), $`(h_U)_{ab}`$, $`(h_D)_{ab}`$ and $`(h_E)_{ab}`$ are the Yukawa coupling matrices, with $`a,b,c`$ as the generation indices. The Yukawa couplings $`\lambda _{abc}`$ and $`\lambda _{abc}^{\prime \prime }`$ are antisymmetric in their first two indices due to $`SU(2)_L`$ and $`SU(3)_C`$ group structure. Phenomenological studies of supersymmetric models of this type have placed constraints on the various couplings $`\lambda _{abc}`$, $`\lambda _{abc}^{}`$ and $`\lambda _{abc}^{\prime \prime }`$, but there is still considerable room left. We note that the simultaneous presence of the terms in Eq. (2) and Eq. (3) is essentially ruled out by the stringent constraints implied by the lack of observation of nucleon decay.
In addition to the dominant third generation Yukawa couplings $`h_t(h_U)_{33}`$, $`h_b(h_D)_{33}`$ and $`h_\tau (h_E)_{33}`$ in the superpotential (1), there are 36 independent $`R_p`$ violating couplings $`\lambda _{abc}`$ and $`\lambda _{abc}^{}`$ in Eq. (2), and 9 independent $`\lambda _{abc}^{\prime \prime }`$ in Eq. (3). Thus, one would have to solve 39 coupled non-linear evolution equations in the L-violating case, and 12 in the B-violating case, in order to study the evolution of the Yukawa couplings in the minimal model with $`R_p`$ violation. In order to render the Yukawa coupling evolution tractable, we need to make certain plausible simplifications. Motivated by the generational hierarchy of the conventional Higgs couplings, we shall assume that an analogous hierarchy exists amongst the different generations of $`R_p`$ violating couplings. Thus we shall retain only the couplings $`\lambda _{233}`$, $`\lambda _{333}^{}`$ and $`\lambda _{233}^{\prime \prime }`$, and neglect the rest. We note that the $`R_p`$ violating couplings to higher generations evolve more strongly because of larger Higgs couplings in their evolution equations, and hence could take larger values than the corresponding couplings to the lighter generations. Furthermore, the experimental upper limits are stronger for the $`R_p`$ violating Yukawa couplings corresponding to the lighter generations.
We shall first consider the evolution of Yukawa couplings arising from superpotentials (1) and (3), which involve baryon number violation. The one-loop renormalization group equations for $`h_t,h_b,h_\tau `$ and $`\lambda _{233}^{\prime \prime }`$ (all others set to zero) are:
$`16\pi ^2{\displaystyle \frac{dh_t}{d(\mathrm{ln}\mu )}}=h_t\left(6h_t^2+h_b^2+2\lambda _{233}^{\prime \prime 2}-{\displaystyle \frac{16}{3}}g_3^2-3g_2^2-{\displaystyle \frac{13}{15}}g_1^2\right),`$
$`16\pi ^2{\displaystyle \frac{dh_b}{d(\mathrm{ln}\mu )}}=h_b\left(h_t^2+6h_b^2+h_\tau ^2+2\lambda _{233}^{\prime \prime 2}-{\displaystyle \frac{16}{3}}g_3^2-3g_2^2-{\displaystyle \frac{7}{15}}g_1^2\right),`$
$`16\pi ^2{\displaystyle \frac{dh_\tau }{d(\mathrm{ln}\mu )}}=h_\tau \left(3h_b^2+4h_\tau ^2-3g_2^2-{\displaystyle \frac{9}{5}}g_1^2\right),`$ (4)
$`16\pi ^2{\displaystyle \frac{d\lambda _{233}^{\prime \prime }}{d(\mathrm{ln}\mu )}}=\lambda _{233}^{\prime \prime }\left(2h_t^2+2h_b^2+6\lambda _{233}^{\prime \prime 2}-8g_3^2-{\displaystyle \frac{4}{5}}g_1^2\right).`$
For completeness we list the well-known evolution equations for the gauge couplings, which at one-loop order are identical to those in the MSSM, since the additional Yukawa coupling(s) do not play a role at this order:
$`16\pi ^2{\displaystyle \frac{dg_i}{d(\mathrm{ln}\mu )}}=b_ig_i^3,i=1,2,3,`$ (5)
with $`b_1=33/5,b_2=1,b_3=-3`$. With the definitions
$$R_t=\frac{h_t^2}{g_3^2},R_b=\frac{h_b^2}{g_3^2},R_\tau =\frac{h_\tau ^2}{g_3^2},R^{\prime \prime }=\frac{\lambda _{233}^{\prime \prime 2}}{g_3^2},$$
(6)
and retaining only the $`SU(3)_C`$ gauge coupling constant, we can rewrite the renormalization group equations as ($`\stackrel{~}{\alpha }_3=g_3^2/(16\pi ^2)`$):
$`{\displaystyle \frac{dR_t}{dt}}=\stackrel{~}{\alpha _3}R_t\left[\left({\displaystyle \frac{16}{3}}+b_3\right)-6R_t-R_b-2R^{\prime \prime }\right],`$ (7)
$`{\displaystyle \frac{dR_b}{dt}}=\stackrel{~}{\alpha _3}R_b\left[\left({\displaystyle \frac{16}{3}}+b_3\right)-R_t-6R_b-R_\tau -2R^{\prime \prime }\right],`$ (8)
$`{\displaystyle \frac{dR_\tau }{dt}}=\stackrel{~}{\alpha _3}R_\tau \left[b_3-3R_b-4R_\tau \right],`$ (9)
$`{\displaystyle \frac{dR^{\prime \prime }}{dt}}=\stackrel{~}{\alpha _3}R^{\prime \prime }\left[\left(8+b_3\right)-2R_t-2R_b-6R^{\prime \prime }\right],`$ (10)
where $`b_3=-3`$ is the one-loop beta function coefficient for $`g_3`$ in the MSSM, and $`t=\mathrm{ln}\mu ^2`$. Ordering the ratios as $`R_i=(R^{\prime \prime },R_\tau ,R_b,R_t)`$, we rewrite the RG equations (7) - (10) in the form
$$\frac{dR_i}{dt}=\stackrel{~}{\alpha }_3R_i\left[(r_i+b_3)-\sum _jS_{ij}R_j\right],$$
(11)
where $`r_i=\sum _R2C_R`$, $`C_R`$ is the QCD Casimir for the various fields ($`C_Q=C_{\overline{U}}=C_{\overline{D}}=4/3`$), the sum runs over the representations of the three fields associated with the trilinear coupling that enters $`R_i`$, and $`S`$ is a matrix fully specified by the wavefunction anomalous dimensions. A fixed point is then reached when the right-hand side of Eq. (11) vanishes for all $`i`$. If we write the fixed-point solutions as $`R_i^{*}`$, then there are two fixed point values for each coupling: $`R_i^{*}=0`$, or
$$\left[\left(r_i+b_3\right)-\sum _jS_{ij}R_j^{*}\right]=0.$$
(12)
It follows that the non-trivial fixed point solution is
$$R_i^{*}=\sum _j(S^{-1})_{ij}(r_j+b_3).$$
(13)
Since we shall consider the fixed points of the couplings $`h_t`$, $`h_b`$ and $`\lambda _{233}^{\prime \prime }`$ only, we shall ignore the evolution equation (9). The coupling $`h_\tau `$ does, however, enter the evolution Eq. (8) of $`h_b`$, but it can be related to $`h_b`$ at the weak scale (which we take to be the top-quark mass), since
$$h_\tau (m_t)=\frac{\sqrt{2}m_\tau (m_t)}{\eta _\tau v\mathrm{cos}\beta },$$
(14)
and
$$h_\tau (m_t)=\frac{m_\tau (m_\tau )}{m_b(m_b)}\frac{\eta _b}{\eta _\tau }h_b(m_t)\simeq 0.6h_b(m_t),$$
(15)
where $`\eta _b`$ gives the QCD and QED running of the b-quark mass $`m_b(\mu )`$ between $`\mu =m_b`$ and $`\mu =m_t`$ (and similarly for $`\eta _\tau `$), and $`\mathrm{tan}\beta =v_2/v_1`$ is the usual ratio of the Higgs vacuum expectation values in the MSSM, with $`v=(\sqrt{2}G_F)^{-1/2}=246`$ GeV. The anomalous dimension matrix $`S`$ can then be written as
$$S=\left(\begin{array}{ccc}6& 2& 2\\ 2& 6+\eta & 1\\ 2& 1& 6\end{array}\right),$$
(16)
where $`\eta =h_\tau ^2(m_t)/h_b^2(m_t)\simeq 0.36`$ is the factor coming from Eq. (15). We therefore get the following fixed point solution for the ratios:
$`R_1^{*}\equiv R^{\prime \prime *}={\displaystyle \frac{385+76\eta }{3(170+32\eta )}}\simeq 0.76,`$
$`R_2^{*}\equiv R_b^{*}={\displaystyle \frac{20}{170+32\eta }}\simeq 0.11,`$ (17)
$`R_3^{*}\equiv R_t^{*}={\displaystyle \frac{20+4\eta }{170+32\eta }}\simeq 0.12.`$
Since each of the $`R_i^{*}`$’s is positive, this is a theoretically acceptable fixed point solution.
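As a quick numerical check, the sketch below inverts the matrix (16) as prescribed by Eq. (13) and reproduces the values quoted in Eq. (17).

```python
import numpy as np

eta = 0.36                                # h_tau^2/h_b^2 at m_t, from Eq. (15)
b3  = -3.0                                # one-loop QCD beta-function coefficient
S   = np.array([[6.0, 2.0,       2.0],    # anomalous-dimension matrix, Eq. (16)
                [2.0, 6.0 + eta, 1.0],
                [2.0, 1.0,       6.0]])
r   = np.array([8.0, 16 / 3.0, 16 / 3.0]) # r_i for (R'', R_b, R_t)

Rstar = np.linalg.solve(S, r + b3)        # Eq. (13): R* = S^{-1} (r + b3)
print("R''* = %.2f, R_b* = %.2f, R_t* = %.2f" % tuple(Rstar))
# -> 0.76, 0.11, 0.12, reproducing Eq. (17)
```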
We next try to find a fixed point solution with $`R^{\prime \prime *}=0`$, with $`R_b`$ and $`R_t`$ given by their non-zero solutions. We need consider only the lower right $`2\times 2`$ sub-matrix of the matrix $`S`$ in Eq. (16) to obtain the fixed point solutions for $`R_b`$ and $`R_t`$ in this case. We then have
$`R_1^{*}\equiv R^{\prime \prime *}=0,`$
$`R_2^{*}\equiv R_b^{*}={\displaystyle \frac{35}{3(35+6\eta )}}\simeq 0.36,`$ (18)
$`R_3^{*}\equiv R_t^{*}={\displaystyle \frac{7(5+\eta )}{3(35+6\eta )}}\simeq 0.34.`$
This is also a theoretically acceptable solution, as all the fixed point values are non-negative. We must also consider the fixed point with $`R_b^{*}=0`$, which is relevant for low values of the parameter $`\mathrm{tan}\beta `$. In this case, we have to reorder the couplings as $`R_i=(R_b,R^{\prime \prime },R_t)`$, so that we have the anomalous dimension matrix (here denoted $`\stackrel{~}{S}`$)
$$\stackrel{~}{S}=\left(\begin{array}{ccc}6+\eta & 2& 1\\ 2& 6& 2\\ 1& 2& 6\end{array}\right).$$
(19)
Since $`R_b^{*}=0`$, we have to determine the non-zero fixed point values for $`R^{\prime \prime }`$ and $`R_t`$ only. For this we consider the lower right $`2\times 2`$ submatrix of the matrix in (19), obtaining
$`R_1^{*}\equiv R_b^{*}=0,`$
$`R_2^{*}\equiv R^{\prime \prime *}={\displaystyle \frac{19}{24}}\simeq 0.79,`$ (20)
$`R_3^{*}\equiv R_t^{*}={\displaystyle \frac{1}{8}}\simeq 0.12.`$
which is an acceptable fixed point solution as well. Since there is more than one theoretically acceptable IRSFP in this case, it is important to determine which, if any, is more likely to be realized in nature. To this end, we must examine the stability of each of the fixed point solutions.
The infra-red stability of a fixed point solution is determined by the sign of the eigenvalues of the matrix $`A`$ whose entries are ($`i`$ not summed over)
$$A_{ij}=\frac{1}{b_3}R_i^{*}S_{ij},$$
(21)
where $`R_i^{}`$ is the set of the fixed point solutions of the Yukawa couplings under consideration, and $`S_{ij}`$ is the matrix appearing in the corresponding RG equations (11) for the ratios $`R_i`$. For stability, we require all the eigenvalues of the matrix Eq. (21) to have negative real parts (note that the QCD $`\beta `$-function $`b_3`$ is negative). Considering the fixed point solution (17), the matrix $`A`$ can be written as
$$A=-\frac{1}{3}\left(\begin{array}{ccc}6R_1^{*}& 2R_1^{*}& 2R_1^{*}\\ 2R_2^{*}& (6+\eta )R_2^{*}& R_2^{*}\\ 2R_3^{*}& R_3^{*}& 6R_3^{*}\end{array}\right),$$
(22)
where $`R_i^{*}`$ are given in Eq. (17). The eigenvalues of the matrix in Eq. (22) are calculated to be
$$\lambda _1=-1.6,\lambda _2=-0.2,\lambda _3=-0.2,$$
(23)
which shows that the fixed point (17) is an infra-red stable fixed point. We note that the eigenvalue $`\lambda _1`$ is larger in magnitude than the other eigenvalues in (23), indicating that the fixed point for $`\lambda _{233}^{\prime \prime }`$ is the most attractive, and hence the most relevant.
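The stability analysis can likewise be automated; the sketch below builds the matrix of Eq. (21) from the fixed point (17) and confirms that all three eigenvalues are negative.

```python
import numpy as np

eta, b3 = 0.36, -3.0
S = np.array([[6.0, 2.0,       2.0],
              [2.0, 6.0 + eta, 1.0],
              [2.0, 1.0,       6.0]])
Rstar = np.linalg.solve(S, np.array([8.0, 16 / 3.0, 16 / 3.0]) + b3)

A = Rstar[:, None] * S / b3          # A_ij = R_i* S_ij / b3, Eq. (21)
print(np.round(np.sort(np.linalg.eigvals(A).real), 2))
# -> approximately [-1.6, -0.2, -0.2]: all negative, hence infra-red stable
```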
Next, we consider the stability of the fixed point solution (18). Since in this case the fixed point of the coupling is $`R^{\prime \prime *}=0`$, we have to determine the behaviour of this coupling around the origin. This behaviour is governed by the eigenvalue
$$\lambda _1=\frac{1}{b_3}\left[\sum _{j=2}^{3}S_{1j}R_j^{*}-(r_1+b_3)\right],$$
(24)
where $`r_1=2(2C_{\overline{D}}+C_{\overline{U}})=8`$, the $`C`$’s are the quadratic Casimirs of the fields occurring in the B-violating term in the superpotential (3), and $`S_{ij}`$ is the matrix (16), with the fixed points $`R_i^{*},i=1,2,3`$ given by Eq. (18). Inserting these values in Eq. (24), we find
$$\lambda _1=\frac{385+76\eta }{9(35+6\eta )}>0,$$
(25)
thereby indicating that the fixed point is unstable in the infra-red. The behaviour of the couplings $`R_b`$ and $`R_t`$ around their respective fixed points is governed by the eigenvalues of the $`2\times 2`$ lower submatrix of the matrix $`A`$ in Eq. (22),
$$-\frac{1}{3}\left(\begin{array}{cc}(6+\eta )R_2^{*}& R_2^{*}\\ R_3^{*}& 6R_3^{*}\end{array}\right),$$
(26)
which we find to be
$$\lambda _2=-0.78,\lambda _3=-0.56.$$
(27)
Although $`\lambda _2`$ and $`\lambda _3`$ are negative, because of the result (25), the fixed-point solution (18) is unstable in the infra-red. In other words, the $`R_p`$ conserving fixed point solution (18) will never be achieved at low energies and must be rejected.
Finally we come to the question of the stability of the fixed point solution (20). The behaviour of the coupling $`R_b`$ around the origin is determined by the eigenvalue
$$\lambda _1=\frac{1}{b_3}\left[\sum _{j=2}^{3}\stackrel{~}{S}_{1j}R_j^{*}-(r_1+b_3)\right],$$
(28)
where $`r_1=2(C_Q+C_{\overline{D}})=16/3`$, and $`\stackrel{~}{S}`$ is the matrix (19). Inserting these numbers, we find
$$\lambda _1=\frac{5}{24}\simeq 0.21>0,$$
(29)
with the other two eigenvalues relevant for stability given by the eigenvalues of the matrix obtained from the lower $`2\times 2`$ submatrix of the matrix $`\stackrel{~}{S}`$ in (19). This matrix can be written as
$$\stackrel{~}{A}=-\frac{1}{3}\left(\begin{array}{cc}6R_2^{*}& 2R_2^{*}\\ 2R_3^{*}& 6R_3^{*}\end{array}\right),$$
(30)
where $`R_2^{*}`$ and $`R_3^{*}`$ are given by Eq. (20). The eigenvalues are
$$\lambda _2=-1.61,\lambda _3=-0.22.$$
(31)
It follows, once again, that the fixed point solution given in (20) is not stable in the infra-red and is, therefore, never reached at low-energies.
One may also consider the case where the couplings $`\lambda _{233}^{\prime \prime }`$ and $`h_b`$ attain trivial fixed point values, whereas $`h_t`$ attains a non-trivial fixed point value. In this case we have $`R_3^{*}\equiv R_t^{*}=7/18`$, the well-known Pendleton-Ross top-quark fixed point of the MSSM. To study the stability of this solution in the present context, we must consider the eigenvalues
$`\lambda _i={\displaystyle \frac{1}{b_3}}\left(S_{i3}R_3^{*}-(r_i+b_3)\right),i=1,2,`$
where $`S_{i3}`$ are read off from the matrix (16), which yields
$`\lambda _1={\displaystyle \frac{38}{27}},\lambda _2={\displaystyle \frac{35}{54}}.`$
Since each of $`\lambda _1`$ and $`\lambda _2`$ is positive, this solution is also unstable in the infra-red region. However, from our discussion of the infra-red fixed point solution (17), it is clear that the Pendleton-Ross fixed point would be stable if $`h_b`$ and $`\lambda _{233}^{\prime \prime }`$ were small, though non-zero, at the GUT scale. In that case these couplings would, of course, evolve away from zero towards the weak scale, though realistically they would still be small (but not zero) there. Thus, the only true infra-red stable fixed point solution is the baryon number, and $`R_p`$, violating solution (17). This is one of the main conclusions of this paper. We note that the value of $`R_t^{*}`$ in (17) is lower than the corresponding value of $`7/18`$ in the MSSM with $`R_p`$ conservation.
It is appropriate to examine the implications of the value of $`h_t(m_t)`$ predicted by our fixed point analysis for the top-quark mass. From (17), and $`\alpha _3(m_t)\simeq 0.1`$, the fixed point value of the top-quark Yukawa coupling is predicted to be $`h_t(m_t)\simeq 0.4`$. This translates into a top-quark (pole) mass of about $`m_t\simeq 70\mathrm{sin}\beta `$ GeV, which is incompatible with the measured value of the top mass, $`m_t\simeq 174`$ GeV, for any value of $`\mathrm{tan}\beta `$. It follows that the true fixed point obtained here provides only a qualitative understanding of the top quark mass in the MSSM with $`R_p`$ violation.
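The following sketch makes the numerical incompatibility explicit, using $`\alpha _3(m_t)\simeq 0.1`$ and $`v=246`$ GeV as in the text.

```python
import numpy as np

alpha3, v = 0.1, 246.0                 # alpha_s(m_t) and the electroweak vev (GeV)
Rt_star   = 0.12                       # from the stable fixed point, Eq. (17)

ht = np.sqrt(4 * np.pi * alpha3 * Rt_star)   # h_t = g_3 sqrt(R_t*)
print("h_t(m_t) = %.2f" % ht)                # ~0.4
print("m_t at sin(beta) = 1: %.0f GeV" % (ht * v / np.sqrt(2)))
# even the maximal sin(beta) = 1 gives only ~70 GeV, far below 174 GeV
```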
We now turn to the study of the renormalization group evolution of the lepton number, and $`R_p`$, violating couplings in the superpotential (2). Here we shall consider only the dimensionless couplings $`\lambda _{233}`$ and $`\lambda _{333}^{\prime }`$. The relevant one-loop renormalization group equations are:
$`16\pi ^2{\displaystyle \frac{dh_t}{d(\mathrm{ln}\mu )}}=h_t\left(6h_t^2+h_b^2+\lambda _{333}^{\prime 2}-{\displaystyle \frac{16}{3}}g_3^2\right),`$
$`16\pi ^2{\displaystyle \frac{dh_b}{d(\mathrm{ln}\mu )}}=h_b\left(h_t^2+6h_b^2+h_\tau ^2+6\lambda _{333}^{\prime 2}-{\displaystyle \frac{16}{3}}g_3^2\right),`$
$`16\pi ^2{\displaystyle \frac{dh_\tau }{d(\mathrm{ln}\mu )}}=h_\tau \left(3h_b^2+4h_\tau ^2+4\lambda _{233}^2+3\lambda _{333}^{\prime 2}\right),`$ (32)
$`16\pi ^2{\displaystyle \frac{d\lambda _{233}}{d(\mathrm{ln}\mu )}}=\lambda _{233}\left(4h_\tau ^2+4\lambda _{233}^2+3\lambda _{333}^{\prime 2}\right),`$
$`16\pi ^2{\displaystyle \frac{d\lambda _{333}^{\prime }}{d(\mathrm{ln}\mu )}}=\lambda _{333}^{\prime }\left(h_t^2+6h_b^2+h_\tau ^2+\lambda _{233}^2+6\lambda _{333}^{\prime 2}-{\displaystyle \frac{16}{3}}g_3^2\right).`$
Defining the new ratios
$$R=\frac{\lambda _{233}^2}{g_3^2},R^{\prime }=\frac{\lambda _{333}^{\prime 2}}{g_3^2},$$
(33)
we may now rewrite the equations (32) as
$`{\displaystyle \frac{dR}{dt}}=\stackrel{~}{\alpha _3}R\left[b_3-4R-3R^{\prime }-4R_\tau \right],`$ (34)
$`{\displaystyle \frac{dR^{\prime }}{dt}}=\stackrel{~}{\alpha _3}R^{\prime }\left[\left({\displaystyle \frac{16}{3}}+b_3\right)-R-6R^{\prime }-R_\tau -6R_b-R_t\right],`$ (35)
$`{\displaystyle \frac{dR_\tau }{dt}}=\stackrel{~}{\alpha _3}R_\tau \left[b_3-4R-3R^{\prime }-3R_b-4R_\tau \right],`$ (36)
$`{\displaystyle \frac{dR_b}{dt}}=\stackrel{~}{\alpha _3}R_b\left[\left({\displaystyle \frac{16}{3}}+b_3\right)-6R^{\prime }-R_\tau -6R_b-R_t\right],`$ (37)
$`{\displaystyle \frac{dR_t}{dt}}=\stackrel{~}{\alpha _3}R_t\left[\left({\displaystyle \frac{16}{3}}+b_3\right)-R^{\prime }-R_b-6R_t\right].`$ (38)
Ordering the ratios as $`R_i=(R,R^{\prime },R_\tau ,R_b,R_t)`$, we can write these RG equations as:
$$\frac{dR_i}{dt}=\stackrel{~}{\alpha }_3R_i\left[(r_i+b_3)-\sum _jS_{ij}R_j\right],$$
(39)
where $`r_i=\sum _R2C_R`$, with $`C_R`$ denoting the quadratic Casimir of each of the fields, the sum being over the representations of the fields that enter $`R_i`$, and $`S`$ fully specified by the respective wavefunction anomalous dimensions. It follows that there are two fixed point values for each coupling: $`R_i^{*}=0`$, or the non-trivial fixed point solution
$$R_i^{*}=\sum _j(S^{-1})_{ij}(r_j+b_3).$$
(40)
We shall be interested in the fixed-point solutions of the couplings $`\lambda _{233},\lambda _{333}^{\prime },h_b,h_t`$ only, and shall not consider the $`h_\tau `$ coupling separately. Therefore, we replace it, as we did earlier, by $`h_\tau (m_t)=0.6h_b(m_t)`$ at the weak scale in the determination of the fixed point solutions (40). The anomalous dimension matrix can then be written as:
$$S=\left(\begin{array}{cccc}4& 3& 4\eta & 0\\ 1& 6& 6+\eta & 1\\ 0& 6& 6+\eta & 1\\ 0& 1& 1& 6\end{array}\right)$$
(41)
This leads to the fixed point values for the ratios:
$`R_1^{*}\equiv R^{*}=0,`$
$`R_2^{*}\equiv R^{\prime *}={\displaystyle \frac{315+194\eta }{366\eta -315}},`$
$`R_3^{*}\equiv R_b^{*}={\displaystyle \frac{140}{122\eta -105}},`$ (42)
$`R_4^{*}\equiv R_t^{*}={\displaystyle \frac{110\eta -105}{366\eta -315}}.`$
We note that $`R_2^{*}\equiv R^{\prime *}<0`$, and therefore this fixed point solution is not acceptable. We thus see that a simultaneous fixed point for the lepton number violating couplings $`\lambda _{233},\lambda _{333}^{\prime }`$ and $`h_b,h_t`$ does not exist.
We now consider the two L-violating couplings separately, i.e., we shall take either $`\lambda _{233}\ll \lambda _{333}^{\prime }`$, or $`\lambda _{333}^{\prime }\ll \lambda _{233}`$, respectively. In the case when $`\lambda _{333}^{\prime }`$ is the dominant coupling, we order the couplings as $`R_i=(R^{\prime },R_b,R_t)`$, so that the matrix $`S`$ entering Eq. (40) for this case can be written as
$$S=\left(\begin{array}{ccc}6& 6+\eta & 1\\ 6& 6+\eta & 1\\ 1& 1& 6\end{array}\right).$$
(43)
Since the determinant of this matrix vanishes, there are no isolated non-trivial fixed points in this case. We thus conclude that a simultaneous non-zero fixed point for the couplings $`\lambda _{333}^{\prime },h_b,h_t`$ does not exist. We note that the vanishing of the determinant corresponds to a solution with a fixed line or surface.
If $`h_b`$ is small (e.g., for the case of small $`\mathrm{tan}\beta `$), we may reorder the couplings as $`R_i=(R_b,R^{\prime },R_t)`$, and the matrix $`S`$ accordingly, to find the fixed point solution
$`R_1^{*}\equiv R_b^{*}=0,`$
$`R_2^{*}\equiv R^{\prime *}={\displaystyle \frac{1}{3}},`$ (44)
$`R_3^{*}\equiv R_t^{*}={\displaystyle \frac{1}{3}}.`$
In order to study the stability of this solution, we must determine the behaviour of the coupling $`R_b`$ around the origin from the eigenvalue
$$\lambda _1=\frac{1}{b_3}\left[\sum _{j=2}^{3}S_{1j}R_j^{*}-(r_1+b_3)\right],$$
(45)
where $`r_1=16/3`$. Inserting the relevant $`R_i^{*}`$’s into (45), we get
$$\lambda _1=0,$$
(46)
from which we conclude that the fixed point of Eq. (44) will never be reached in the infra-red region; it is either a saddle point or an ultra-violet fixed point. We conclude that there are no non-trivial stable fixed points in the infra-red region for the lepton number violating coupling $`\lambda _{333}^{\prime }`$.
Finally, we consider the case when $`\lambda _{333}^{\prime }\ll \lambda _{233}`$. We find the fixed point solution
$`R_1^{*}\equiv R^{*}=-{\displaystyle \frac{315+194\eta }{12(35+6\eta )}},`$
$`R_2^{*}\equiv R_b^{*}={\displaystyle \frac{35}{3(35+6\eta )}},`$ (47)
$`R_3^{*}\equiv R_t^{*}={\displaystyle \frac{7(5+\eta )}{3(35+6\eta )}},`$
which is unphysical. We therefore try a fixed point with $`R_b^{*}=0`$. We find
$`R_1^{*}\equiv R_b^{*}=0,`$
$`R_2^{*}\equiv R^{*}=-3/4,`$ (48)
$`R_3^{*}\equiv R_t^{*}=7/18,`$
which, again, is unphysical. We have also checked that (1) trivial fixed points for $`\lambda _{233}`$ and $`h_b`$ together with the Pendleton-Ross type fixed point for the top-quark Yukawa coupling, and (2) trivial fixed points for $`\lambda _{333}^{\prime }`$ and $`h_b`$ together with the Pendleton-Ross fixed point for the top-quark Yukawa coupling, are both unstable in the infra-red region. We therefore conclude that there are no stable fixed point solutions involving the lepton number violating coupling $`\lambda _{233}`$.
To summarize, we have analyzed the one-loop renormalization group equations for the evolution of the Yukawa couplings in the MSSM with $`R_p`$ violating couplings to the heaviest generation, taking into account B and L violating couplings one at a time. The analysis of the system with the $`R_p`$, and baryon number, violating coupling $`\lambda _{233}^{\prime \prime }`$ yields the surprising and important result that only the simultaneous non-trivial fixed point for this coupling and the top-quark and b-quark Yukawa couplings $`h_t`$ and $`h_b`$ is stable in the infra-red region. However, the fixed point value for the top-quark coupling is lower here than its corresponding value in the MSSM, and is incompatible with the measured value of the top-quark mass. The $`R_p`$ conserving solution with $`\lambda _{233}^{\prime \prime }`$ attaining its trivial fixed point, and $`h_t`$ and $`h_b`$ attaining non-trivial fixed points, is infra-red unstable, as is the case of trivial fixed points for $`\lambda _{233}^{\prime \prime }`$ and $`h_b`$ with a non-trivial fixed point for $`h_t`$. Our analysis shows that the usual Pendleton-Ross type infra-red fixed point of the MSSM is unstable in the presence of $`R_p`$ violation, though for small, but non-zero, values of $`h_b`$ and $`\lambda _{233}^{\prime \prime }`$ it could be stable. The system with $`L`$, and $`R_p`$, violating couplings does not possess a set of non-trivial fixed points that are infra-red stable. Our results are the first to place strong theoretical constraints on the nature of $`R_p`$ violating couplings from fixed-point and stability considerations: fixed points that are unstable, or that are saddle points, cannot be realized in the infra-red region. The fixed points obtained in this work are the true fixed points, in contrast to the quasi-fixed points of , and serve as a lower bound on the relevant $`R_p`$ violating Yukawa couplings. In particular, from our analysis of the simultaneous (stable) fixed point for the baryon number violating coupling $`\lambda _{233}^{\prime \prime }`$ and the top and bottom Yukawa couplings, we infer the lower bound $`\lambda _{233}^{\prime \prime }\gtrsim 0.98`$.
Note added: After this paper was submitted for publication, another paper which considers the question of infra-red fixed points in the supersymmetric standard model with $`R_p`$ violation has appeared . There, the fixed points for $`\lambda _{333}^{\prime }`$, $`\lambda _{233}^{\prime \prime }`$ and $`h_t`$, neglecting all other Yukawa couplings, are considered. Their results, where there is an overlap, agree with ours. However, unlike in the present work, the stability of the fixed points has not been considered in .
Acknowledgements: One of the authors (PNP) thanks the Theory Group at DESY for hospitality while this work was done. The work of PNP is supported by the University Grants Commission, India under the project No. 10-26/98(SR-I).
|
no-problem/9902/astro-ph9902180.html
|
ar5iv
|
text
|
# Scattered Ly𝛼 Radiation Around Sources Before Cosmological Reionization
## 1 Introduction
Following cosmological recombination at $`z\sim 10^3`$, the Universe became predominantly neutral and hence optically-thick to Ly$`\alpha `$ photons. The first galaxies that lit up at lower redshifts were surrounded by a neutral intergalactic medium. Their observed spectrum should therefore show a deep trough shortward of their rest-frame Ly$`\alpha `$ wavelength due to absorption by neutral hydrogen (HI) along the line-of-sight, the so-called Gunn-Peterson effect (Gunn & Peterson 1965). For typical cosmological parameters, the optical depth at the Ly$`\alpha `$ resonance is so large that one might naively expect the damping wing of the Ly$`\alpha `$ trough to eliminate any trace of a Ly$`\alpha `$ emission line in the observed source spectrum (Miralda-Escudé & Rees 1998; Miralda-Escudé 1998). However, here we point out that the Ly$`\alpha `$ line photons absorbed by intergalactic HI are subsequently re-emitted and hence do not get destroyed<sup>1</sup><sup>1</sup>1At the redshifts of interest, $`z_\mathrm{s}\gtrsim 10`$, the low densities and lack of chemical enrichment of the IGM make the destruction of Ly$`\alpha `$ photons by two-photon decay or dust absorption unimportant.. Rather, these photons scatter and diffuse in frequency to the red of the Ly$`\alpha `$ resonance due to the Hubble expansion of the surrounding HI. Eventually, when their net frequency redshift is sufficiently large, they escape and travel freely towards the observer. In this paper we calculate the resulting brightness distribution and spectral line profile of the diffuse Ly$`\alpha `$ radiation around high-redshift sources<sup>2</sup><sup>2</sup>2The photons absorbed in the Gunn-Peterson trough are also re-emitted by the IGM around the source. However, since these photons originate on the blue side of the Ly$`\alpha `$ resonance, they travel a longer distance from the source than the Ly$`\alpha `$ line photons do, before they escape to the observer. The Gunn-Peterson photons are therefore scattered from a larger and hence dimmer halo around the source. The Gunn-Peterson halo is made even dimmer relative to the Ly$`\alpha `$ line halo by the fact that the luminosity of the source per unit frequency is often much lower in the continuum than in the Ly$`\alpha `$ line. We therefore focus on the Ly$`\alpha `$ line halo in this paper..
The lack of a Gunn-Peterson trough (i.e., the detection of transmitted flux shortward of the Ly$`\alpha `$ wavelength at the source redshift) in the observed spectra of galaxies at $`z\simeq 5.6`$ (Dey et al. 1998; Hu, Cowie, & McMahon 1998; Spinrad et al. 1998; Weymann et al. 1998) implies that most of the intergalactic hydrogen in the Universe was reionized before then. Indeed, popular cosmological models predict that reionization took place around a redshift $`z\sim 10`$ (e.g., Haiman & Loeb 1998a,b; Gnedin & Ostriker 1997). At earlier times, the abundance of ionizing sources was small and each of these sources produced an expanding HII region in the surrounding intergalactic medium (IGM). The volume filling factor of HII increased dramatically as the number of ionizing sources grew and their associated HII regions expanded. Eventually, these HII regions overlapped and hydrogen throughout most of the cosmic volume became ionized<sup>3</sup><sup>3</sup>3Note, however, that even after much of the IGM was ionized, the optical depth at the Ly$`\alpha `$ resonance was still substantial due to the small residual abundance of HI. Estimates indicate that the reionized Universe provided a measurable transmission of Ly$`\alpha `$ photons only around $`z\sim 7`$ (Haiman & Loeb 1998; Miralda-Escudé et al. 1998). (Loeb 1997). Subsequently, the ionizing background radiation penetrated the denser HI condensations around collapsed objects (Miralda-Escudé et al. 1998; Barkana & Loeb 1999).
The physics of reionization involves complicated gas dynamics and radiative transfer. Numerical simulations have only recently started to incorporate the relevant equations of radiative transfer rigorously (Abel, Norman, & Madau 1998). The detection of diffuse Ly$`\alpha `$ photons around high-redshift sources offers an important empirical tool for probing the neutral IGM before and during the reionization epoch, and can be used to test related theoretical calculations. In this context, one is using the Ly$`\alpha `$ source as a light bulb which illuminates the HI fog in the surrounding IGM. Fortunately, the highest redshift galaxies are found to be strong Ly$`\alpha `$ emitters (see, e.g. Dey et al. 1998), probably due to their low dust content.
In this paper we calculate the intensity distribution of the diffuse Ly$`\alpha `$ line in the simplest setting of a pure Hubble flow in a neutral IGM around a steady point source. We leave more complicated configurations for future work. In §2 we describe the formalism and derive analytical solutions in the diffusion regime. In §3 we present a complete numerical solution to this problem using a Monte-Carlo simulation. The detectability of Ly$`\alpha `$ halos is discussed in §4. Finally, §5 summarizes the implications of our results.
## 2 Analytical Formalism
Although the radiative transfer of line radiation was treated in the past for different geometries of stationary (Harrington 1973) or moving (Neufeld & McKee 1988) atmospheres, no particular attention was given to the cosmological context. As a first treatment of this involved problem, we consider here the transfer of the Ly$`\alpha `$ line in a uniform, fully neutral IGM, which follows a pure Hubble expansion around a steady point source. As shown later, the diffuse Ly$`\alpha `$ emission around a high-redshift source could extend over sufficiently large radii where these idealized assumptions are indeed valid.
For newly created Ly$`\alpha `$ photons the IGM is very opaque, and during scattering events the photon frequencies are redistributed symmetrically away from the line center due to the isotropic distribution of thermal velocities of the hydrogen atoms. However, as the photon frequencies drift away from resonance and the medium becomes less opaque, the asymmetric redshift bias imposed by the Hubble expansion of the surrounding IGM becomes dominant. Since the IGM is still highly opaque when the redshift effect starts to dominate<sup>4</sup><sup>4</sup>4The neutral IGM is even colder than the microwave background prior to reionization. The corresponding thermal velocities of hydrogen atoms at $`z\sim 30`$ are smaller by $`3`$ orders of magnitude than the Doppler velocity shift required for the cosmological escape of resonant Ly$`\alpha `$ photons ($`\sim 10^3\mathrm{km}\mathrm{s}^{-1}`$)., we focus in the following only on the Hubble expansion when calculating the observed intensity distribution around the source. In terms of the comoving frequencies used here, this implies that the scattering may be regarded as coherent (elastic).
We consider a source surrounded by a spherically-symmetric atmosphere of neutral hydrogen with a linear radial velocity: $`v(r)=H_\mathrm{s}r`$, where $`H_\mathrm{s}`$ is the Hubble expansion rate at the source redshift, $`z_\mathrm{s}`$. Let $`I=I(\nu ,\mu ,r)`$ be the specific intensity (in $`\mathrm{photons}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\mathrm{Hz}^{-1}`$) at comoving frequency $`\nu `$ in a direction $`\mu =\mathrm{cos}\theta `$ relative to the radius vector at radius $`r`$, as seen by an observer comoving with the gas (i.e. in the cosmic rest frame). Assuming isotropic coherent scattering, the comoving transfer equation for a line at a resonant frequency $`\nu _0`$ is then given by,
$$\mu \frac{\partial I}{\partial r}+\frac{(1-\mu ^2)}{r}\frac{\partial I}{\partial \mu }+\alpha \frac{\partial I}{\partial \nu }=\chi _\nu \left(J-I\right)+S,$$
(1)
where $`\nu `$ is the frequency redshift, namely the resonant frequency $`\nu _0`$ minus the photon frequency; $`\chi _\nu `$ is the scattering opacity at $`\nu `$; $`J=\frac{1}{2}\int _{-1}^1𝑑\mu I`$ is the mean intensity; $`S`$ is the emission function for newly created photons (in $`\mathrm{photons}\mathrm{cm}^{-3}\mathrm{s}^{-1}\mathrm{sr}^{-1}\mathrm{Hz}^{-1}`$); and
$$\alpha \equiv \frac{H_\mathrm{s}\nu _0}{c}=2.7\times 10^{-13}h_0\left[\mathrm{\Omega }_M(1+z_\mathrm{s})^3+(1-\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda })(1+z_\mathrm{s})^2+\mathrm{\Omega }_\mathrm{\Lambda }\right]^{1/2}\mathrm{Hz}\mathrm{cm}^{-1},$$
(2)
with $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ being the density parameters of matter and the cosmological constant, and $`h_0`$ being the current Hubble constant, $`H_0`$, in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Note that at sufficiently high redshifts the value of $`\alpha `$ does not depend on $`\mathrm{\Omega }_\mathrm{\Lambda }`$. The source function on the right-hand-side of equation (1) can be written as
$$S=\frac{\dot{N}_\alpha }{(4\pi )^2r^2}\delta (\nu )\delta (r),$$
(3)
where $`\dot{N}_\alpha =\mathrm{const}`$ is the steady emission rate of Ly$`\alpha `$ photons by the source (in $`\mathrm{photons}\mathrm{s}^{-1}`$).
In terms of our frequency variable, the Ly$`\alpha `$ opacity is given by (Peebles 1993, p. 573),
$$\chi _\nu =\left(1-\frac{\nu }{\nu _0}\right)^4\frac{\beta }{\nu ^2+\mathrm{\Lambda }^2[1-\nu /\nu _0]^6/16\pi ^2},$$
(4)
where $`\nu _0=2.47\times 10^{15}\mathrm{Hz}`$ is the Ly$`\alpha `$ frequency, $`\mathrm{\Lambda }=6.25\times 10^8\mathrm{s}^{-1}`$ is the rate of spontaneous decay from the $`2p`$ to the $`1s`$ energy level of hydrogen, and
$$\beta \equiv \left(\frac{3c^2\mathrm{\Lambda }^2}{32\pi ^3\nu _0^2}\right)n_{\mathrm{HI}}=1.5\mathrm{\Omega }_bh_0^2(1+z_\mathrm{s})^3\mathrm{cm}^{-1}\mathrm{Hz}^2.$$
(5)
Here $`n_{\mathrm{HI}}`$ is the mean hydrogen density in the (neutral) IGM at a redshift $`z_\mathrm{s}`$, expressed in terms of the current baryonic density parameter, $`\mathrm{\Omega }_b`$, and the normalized Hubble constant, $`h_0`$. For typical cosmological parameters, the mean-free-path at the line center is negligible compared to the size of the system. Hence, a distant observer would see only those photons which scatter to the wings of the line, where to a good approximation the opacity scales as,
$$\chi _\nu \simeq \frac{\beta }{\nu ^2},$$
(6)
with $`\beta `$ being a constant due to the assumed uniformity of the IGM. This approximate relation holds as long as the frequency shift is small, $`\nu /\nu _0\ll 1`$, and is adequate throughout the discussion that follows.
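To illustrate how opaque the line center is, the sketch below evaluates the mean free path from Eq. (4) at $`\nu =0`$ and compares it with the characteristic scale $`r_{*}`$ introduced below (Eq. (9)); the cosmological parameters are the fiducial values quoted later in this section.

```python
import numpy as np

Ob, Om, OL, h0, zs = 0.05, 0.4, 0.6, 0.65, 10.0   # fiducial parameters (see below)
Lam  = 6.25e8                                     # 1/s, 2p -> 1s decay rate
beta = 1.5 * Ob * h0**2 * (1 + zs)**3             # cm^-1 Hz^2, Eq. (5)

chi0  = 16 * np.pi**2 * beta / Lam**2             # Eq. (4) evaluated at nu = 0
E     = np.sqrt(Om * (1 + zs)**3 + (1 - Om - OL) * (1 + zs)**2 + OL)
alpha = 2.7e-13 * h0 * E                          # Hz/cm, Eq. (2)

print("mean free path at line center = %.1e cm" % (1.0 / chi0))
print("system scale r* = beta/alpha^2 = %.1e cm" % (beta / alpha**2))
# the line-center mean free path is more than ten orders of magnitude
# below r*: photons must scatter into the wings before traveling far
```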
Before proceeding, we should point out that the transfer problem described by equation (1) is completely analogous to that of time-dependent monochromatic isotropic scattering, where the frequency $`\nu `$ here plays the role of “time.” This is easily understood, since a photon is subjected to a constant redshifting due to the Hubble expansion, and the scattering is coherent, so its frequency is a perfect surrogate for time. This implies that the problem can be viewed as an initial value problem in frequency, so the solution for a fixed frequency determines the behavior of the solution at all other frequencies. One should note, however, when using this analogy that the opacity law here is a decreasing function of “time.”
We would now like to normalize the frequency shift and radius in the problem by convenient scales. An appropriate frequency scale, $`\nu _{*}`$, is introduced by equating the Ly$`\alpha `$ optical depth from the source to the observer to unity,
$$\tau _\nu =\int _0^{\infty }\frac{\beta }{\left(\nu _{*}+\alpha r\right)^2}𝑑r=\frac{\beta }{\alpha \nu _{*}}=1,$$
(7)
yielding
$$\nu _{*}=\frac{\beta }{\alpha }=5.6\times 10^{12}\mathrm{\Omega }_bh_0\left[\mathrm{\Omega }_M(1+z_\mathrm{s})^{-3}+(1-\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda })(1+z_\mathrm{s})^{-4}+\mathrm{\Omega }_\mathrm{\Lambda }(1+z_\mathrm{s})^{-6}\right]^{-1/2}\mathrm{Hz}.$$
(8)
Note that for the popular values $`\mathrm{\Omega }_b\simeq 0.05`$, $`\mathrm{\Omega }_M=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$ and $`h_0=0.65`$ (e.g., Turner 1999, and references therein), the frequency shift of the photons which escape to infinity is $`(\nu _{*}/\nu _0)\simeq 4\times 10^{-3}`$ at $`z_\mathrm{s}=10`$. This corresponds to a spectroscopic velocity shift of $`(\nu _{*}/\nu _0)c\simeq 10^3\mathrm{km}\mathrm{s}^{-1}`$ at the source. The proper radius at which the Doppler shift due to the Hubble expansion produces the above frequency shift is given by $`r_{*}=\nu _{*}/\alpha `$, or
$$r_{*}=\frac{\beta }{\alpha ^2}=\frac{2.1\times 10^{25}\left(\mathrm{\Omega }_b/\mathrm{\Omega }_M\right)\mathrm{cm}}{\left[1+(1-\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda })\mathrm{\Omega }_M^{-1}(1+z_\mathrm{s})^{-1}+(\mathrm{\Omega }_\mathrm{\Lambda }/\mathrm{\Omega }_M)(1+z_\mathrm{s})^{-3}\right]}\simeq 6.7\left(\frac{\mathrm{\Omega }_b}{\mathrm{\Omega }_M}\right)\mathrm{Mpc},$$
(9)
The last equality was obtained for high redshifts, $`z_\mathrm{s}\gg 1`$. We thus find that for an $`\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology, or at high redshifts for any other cosmology, the physical distance $`r_{*}`$ is independent of the source redshift and depends only on the baryonic mass fraction, $`F_b\equiv \mathrm{\Omega }_b/\mathrm{\Omega }_M`$. This fraction can be calibrated empirically from X-ray data on clusters of galaxies (Carlberg et al. 1998; Ettori & Fabian 1999), which yield a value of $`F_b\simeq 0.1`$ for $`h_0=0.65`$. Note that $`\nu _{*}`$ was derived in the source rest frame; its value is lower by a factor of $`(1+z_\mathrm{s})`$ in the observer’s frame.
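For reference, the scalings $`\nu _{*}`$ and $`r_{*}`$ are easily evaluated numerically; the sketch below uses the fiducial parameters quoted above.

```python
import numpy as np

Ob, Om, OL, h0, zs = 0.05, 0.4, 0.6, 0.65, 10.0
nu0, Mpc = 2.47e15, 3.086e24                      # Hz; cm per Mpc

E     = np.sqrt(Om * (1 + zs)**3 + (1 - Om - OL) * (1 + zs)**2 + OL)
alpha = 2.7e-13 * h0 * E                          # Hz/cm, Eq. (2)
beta  = 1.5 * Ob * h0**2 * (1 + zs)**3            # cm^-1 Hz^2, Eq. (5)

nu_star, r_star = beta / alpha, beta / alpha**2
print("nu*/nu0 = %.1e  (c nu*/nu0 = %.0f km/s)" % (nu_star / nu0, 3e5 * nu_star / nu0))
print("r* = %.2f Mpc   [6.7 Ob/Om = %.2f Mpc]" % (r_star / Mpc, 6.7 * Ob / Om))
# -> ~4e-3, ~1e3 km/s, and r* ~ 0.8 Mpc, matching Eqs. (8)-(9)
```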
It is convenient for further developments to rescale the variables in the problem and make them dimensionless. Radius and frequency are rescaled using the characteristic quantities $`r_{*}`$ and $`\nu _{*}`$, that is, $`\stackrel{~}{r}\equiv (r/r_{*})`$ and $`\stackrel{~}{\nu }\equiv (\nu /\nu _{*})`$. We rescale all radiation quantities using the characteristic quantity $`I_{*}=\dot{N}_\alpha /(r_{*}^2\nu _{*})`$, that is, $`\stackrel{~}{I}=I/I_{*}`$, $`\stackrel{~}{J}=J/I_{*}`$, etc. We also define the rescaled source term,
$$\tilde{S}=\frac{Sr_{\star }}{I_{\star }}=\frac{1}{(4\pi )^2\tilde{r}^2}\,\delta (\tilde{\nu })\,\delta (\tilde{r}).$$
(10)
Equation (1) can then be written in dimensionless coordinates,
$$\mu \frac{\partial \tilde{I}}{\partial \tilde{r}}+\frac{(1-\mu ^2)}{\tilde{r}}\frac{\partial \tilde{I}}{\partial \mu }+\frac{\partial \tilde{I}}{\partial \tilde{\nu }}=\frac{1}{\tilde{\nu }^2}\left(\tilde{J}-\tilde{I}\right)+\tilde{S},$$
(11)
Next, we take the first two angular moments of equation (11). In addition to $`\tilde{J}`$, these will involve the two moments $`\tilde{K}=\frac{1}{2}\int _{-1}^1d\mu \,\mu ^2\tilde{I}`$ and $`\tilde{H}=\frac{1}{2}\int _{-1}^1d\mu \,\mu \tilde{I}`$. In order to close the above system of equations, one needs a relation between $`\tilde{J}`$ and $`\tilde{K}`$. The closure relation is often parameterized through the Eddington factor $`f=f(\nu ,r)\equiv \tilde{K}/\tilde{J}`$ (Hummer & Rybicki 1971; Mihalas 1978). This parametrization yields for the comoving transfer equation (Mihalas, Kunasz, & Hummer 1976),
$$\frac{1}{\tilde{r}^2}\frac{\partial (\tilde{r}^2\tilde{H})}{\partial \tilde{r}}+\frac{\partial \tilde{J}}{\partial \tilde{\nu }}=\tilde{S}$$
(12)
and
$$\frac{\partial (f\tilde{J})}{\partial \tilde{r}}+\frac{(3f-1)}{\tilde{r}}\tilde{J}+\frac{\partial \tilde{H}}{\partial \tilde{\nu }}=-\frac{\tilde{H}}{\tilde{\nu }^2}.$$
(13)
### 2.1 The Extended Eddington Approximation
We can close the above set of equations by specifying the radial and frequency dependence of the Eddington factor $`f`$. The simplest choice is to use the Eddington approximation $`f=1/3`$, which is expected to be valid at large optical depths in the medium. However, for the moment we assume $`f`$ to be a constant, but not necessarily equal to $`1/3`$.
For convenience, we define the rescaled variables $`h\equiv \tilde{r}^2\tilde{H}`$, $`j\equiv \tilde{r}^2\tilde{J}`$, and $`s=\tilde{r}^2\tilde{S}`$. This yields,
$$\frac{\partial h}{\partial \tilde{r}}=-\frac{\partial j}{\partial \tilde{\nu }}+s,$$
(14)
and
$$f\frac{\partial j}{\partial \tilde{r}}-(1-f)\frac{j}{\tilde{r}}=-\frac{\partial h}{\partial \tilde{\nu }}-\frac{h}{\tilde{\nu }^2}.$$
(15)
By taking the $`\stackrel{~}{r}`$–derivative of equation (15) and substituting the $`\stackrel{~}{\nu }`$–derivative of equation (14) into it, we get
$$\frac{\partial }{\partial \tilde{r}}\left(f\frac{\partial j}{\partial \tilde{r}}-[1-f]\frac{j}{\tilde{r}}\right)=\frac{\partial ^2j}{\partial \tilde{\nu }^2}+\frac{1}{\tilde{\nu }^2}\frac{\partial j}{\partial \tilde{\nu }}-\frac{\partial s}{\partial \tilde{\nu }}-\frac{s}{\tilde{\nu }^2}.$$
(16)
If we now define the function
$$g=f\tilde{r}^{-[1-f]/f}\,j\equiv f\tilde{r}^{[3f-1]/f}\,\tilde{J},$$
(17)
then equation (16) can be written as,
$$\frac{\partial g}{\partial \tilde{\nu }}-f\tilde{\nu }^2\left(\frac{1}{\tilde{r}^{[1-f]/f}}\frac{\partial }{\partial \tilde{r}}\tilde{r}^{[1-f]/f}\frac{\partial g}{\partial \tilde{r}}-\frac{1}{f}\frac{\partial ^2g}{\partial \tilde{\nu }^2}\right)=\tilde{\nu }^2\frac{\partial \tilde{s}}{\partial \tilde{\nu }}+\tilde{s},$$
(18)
where $`\tilde{s}=f\tilde{r}^{[3f-1]/f}\tilde{S}`$.
Equation (18) resembles the causal diffusion equation (Narayan, Loeb, & Kumar 1994, and references therein), with the substitution of frequency shift for time, and a “diffusion” coefficient $`D=f\tilde{\nu }^2`$. The Ly$`\alpha `$ photons are expected to redshift away from the line center at a rate which is dictated by this diffusion coefficient. Inspection of the second-order terms provides the condition for “causality” in this problem, namely $`\tilde{\nu }\gtrsim \tilde{r}/\sqrt{f}`$. This condition has a simple interpretation: the frequency shift at any given radius is greater than the Doppler shift due to the Hubble velocity there, since the photon could have scattered multiple times before arriving at that radius. Unfortunately, the line opacity is not constant and the above diffusion equation is complicated by the frequency dependence of the diffusion coefficient.
Equation (18) requires a particular choice for the value of $`f`$. There are two physical regimes where the assumption of $`f=const`$ holds: (i) the diffusion regime, where the optical depth is high and $`f\approx 1/3`$, and (ii) the free-streaming regime, where the optical depth is low and $`f\approx 1`$. Note that $`g=\frac{1}{3}\tilde{J}`$ in the first regime and $`g=\tilde{r}^2\tilde{J}`$ in the second. In the diffusion regime, the number of scatterings is large and so $`\nu \gg \alpha r`$ or $`\tilde{\nu }\gg \tilde{r}`$. This implies that the $`\partial _{\tilde{\nu }}^2g`$ term can be neglected relative to the $`\partial _{\tilde{r}}^2g`$ term in equation (18) and the problem is simplified considerably. Such a simplification is not possible in the free-streaming case. We therefore focus next on the diffusion regime, where $`\tilde{r}\ll \tilde{\nu }\ll 1`$ and $`f\approx 1/3`$.
### 2.2 The Diffusion Solution
In the diffusion regime, equation (18) reads
$$\frac{\partial \tilde{J}}{\partial \tilde{\nu }}-\frac{\tilde{\nu }^2}{3}\frac{1}{\tilde{r}^2}\frac{\partial }{\partial \tilde{r}}\tilde{r}^2\frac{\partial \tilde{J}}{\partial \tilde{r}}=\delta (\tilde{\nu })\frac{\delta (\tilde{r})}{(4\pi )^2\tilde{r}^2}.$$
(19)
With the definition of a new variable, $`\sigma =(\stackrel{~}{\nu }^3/9)`$, we get the standard diffusion equation in spherical geometry,
$$\frac{\partial \tilde{J}}{\partial \sigma }-\nabla ^2\tilde{J}=\frac{1}{4\pi }\delta (\sigma )\delta (\tilde{\mathbf{r}}),$$
(20)
where $`𝐫`$ is the radius vector, and where $`\sigma `$ now plays the role of the usual time variable. The solution to this equation for $`\nu >0`$ (redshifted photons) is
$$\tilde{J}=\frac{1}{(4\pi )^{5/2}\sigma ^{3/2}}e^{-\tilde{r}^2/4\sigma }=\frac{1}{4\pi }\left(\frac{9}{4\pi \tilde{\nu }^3}\right)^{3/2}\mathrm{exp}\left\{-\frac{9\tilde{r}^2}{4\tilde{\nu }^3}\right\}.$$
(21)
This solution satisfies the integral relation,
$$(4\pi )^2\int _0^{\mathrm{\infty }}\tilde{J}\tilde{r}^2\,d\tilde{r}=1,$$
(22)
which follows by performing the Gaussian integral, or directly from the diffusion equation (19). In a way this may be viewed as a result of photon conservation, since frequency here is analogous to time in the usual diffusion equation.
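The normalization (22) is straightforward to verify numerically. A minimal sketch (plain Python; the step size and the cutoff radius are arbitrary illustrative choices):

```python
import math

def J_diff(r, nu):
    """Diffusion solution, equation (21), for the scaled mean intensity."""
    sigma = nu**3 / 9.0
    return math.exp(-r * r / (4.0 * sigma)) / ((4.0 * math.pi)**2.5 * sigma**1.5)

# Check the integral relation (22) at nu = 0.3 by midpoint summation
nu, dr = 0.3, 1e-4
total = sum(J_diff((i + 0.5) * dr, nu) * ((i + 0.5) * dr)**2 * dr
            for i in range(20000))   # integrate out to r = 2 (Gaussian has died off)
print((4.0 * math.pi)**2 * total)    # -> 1.0, photon conservation
```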
Eventually the diffusion solution becomes invalid when the opacity is small enough to allow photons to escape to the observer. Most of the Ly$`\alpha `$ photons escape to infinity from a radius smaller than $`r_{\star }`$, as they scatter many times and hence redshift in frequency by more than $`\alpha r`$ at any given radius $`r`$. Even though $`\nu _{\star }`$ and $`r_{\star }`$ may not represent the true parameters of the escaping photons, equations (8) and (9) provide the scaling of the exact numerical solution with model parameters, since we have already shown that these constants can be used to normalize the basic transfer equation (1) and bring it to a scale-free form. Next we proceed to derive the full numerical solution for the escaping radiation in the dimensionless variables $`\tilde{\nu }`$ and $`\tilde{r}`$.
## 3 Results from a Monte-Carlo Simulation
The lack of an analytical solution applicable to both the diffusion and free-streaming regimes motivated us to find solutions by numerical means. The Monte Carlo method suggested itself by its generality and ease of formulation. The usual disadvantage of this method of having to treat large numbers of photons is largely irrelevant in the present case, because all the physical parameters have been scaled out of the equations, and only one Monte Carlo run is needed to solve the problem completely.
The Monte Carlo method follows the histories of a large number of photons in accordance with the following rules:
Step 1: A newly created photon is chosen from the distribution given by the form of the emission function $`S`$. Ideally we would like this to be a photon with $`\tilde{r}=0`$ and $`\tilde{\nu }=0`$. However, because of the singularity of the opacity, this cannot be done, so in practice we choose our new photons with a small, but nonzero, frequency (e.g., $`\tilde{\nu }=10^{-2}`$) and use the diffusion solution (21) to determine a random starting value of $`\tilde{r}`$.
Step 2: Suppose a photon has been scattered (or has been newly created) at a radius $`\tilde{r}`$ and with a frequency $`\tilde{\nu }`$. A random direction is then chosen from an isotropic distribution; this is most easily done by taking $`\mu =2R-1`$ where $`R`$ is a random deviate uniform on the interval $`(0,1)`$.
To choose a step length for the photon flight, we use the fact that the optical depth $`\tau `$ along the ray to the next scattering follows the exponential distribution law, $`\mathrm{exp}(-\tau )`$. This can be simulated by choosing $`\tau =-\mathrm{ln}R^{\prime }`$, where $`R^{\prime }`$ is another random deviate uniform on the interval $`(0,1)`$. To convert this to distance, we compute $`\tau `$ as,
$$\tau =\int _0^{\ell }\chi _\nu \,d\ell ^{\prime }=\int _0^{\tilde{\ell }}\frac{d\tilde{\ell }^{\prime }}{(\tilde{\nu }+\tilde{\ell }^{\prime })^2}=\frac{1}{\tilde{\nu }}-\frac{1}{\tilde{\nu }+\tilde{\ell }},$$
(23)
where $`\ell `$ is the pathlength along the ray, and $`\tilde{\ell }=\ell /r_{\star }`$. Taking $`\tilde{\ell }\to \mathrm{\infty }`$, we see that there is a maximum optical depth to infinity,
$$\tau _{\mathrm{max}}=\frac{1}{\stackrel{~}{\nu }}.$$
(24)
If $`\tau `$ is greater than $`\tau _{\mathrm{max}}`$, the photon escapes and we proceed to Step 3. If $`\tau `$ is less than $`\tau _{\mathrm{max}}`$, a scattering event will occur. We may then solve equation (23) for the pathlength,
$$\tilde{\ell }=\frac{\tilde{\nu }\tau }{\tau _{\mathrm{max}}-\tau }.$$
(25)
At the new scattering site, the photon’s new radius $`\stackrel{~}{r}^{}`$ and new comoving frequency $`\stackrel{~}{\nu }^{}`$ are
$`\tilde{r}^{\prime }`$ $`=`$ $`(\tilde{r}^2+\tilde{\ell }^2+2\tilde{r}\tilde{\ell }\mu )^{1/2},`$ (26)
$`\tilde{\nu }^{\prime }`$ $`=`$ $`\tilde{\nu }+\tilde{\ell }.`$ (27)
With the substitutions $`\tilde{r}^{\prime }\to \tilde{r}`$ and $`\tilde{\nu }^{\prime }\to \tilde{\nu }`$, we now repeat Step 2 to find the positions and frequencies of the photon at each of its successive scatterings. This loop is repeated until escape occurs.
Step 3: Escape ends the random walk for this photon. We characterize an escaped photon by its scaled impact parameter and its “observed” scaled frequency, relative to the source center,
$`\tilde{p}`$ $`=`$ $`\tilde{r}\sqrt{1-\mu ^2},`$ (28)
$`\tilde{\nu }_{\mathrm{obs}}`$ $`=`$ $`\tilde{\nu }-\tilde{r}\mu .`$ (29)
Here the frequency is still defined at the source rest frame and should be divided by $`(1+z_\mathrm{s})`$ for conversion to the observer’s frame.
Steps 1, 2 and 3 are repeated for as many photons as are necessary to get good statistical estimates for the physical quantities of interest.
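In code, Steps 1-3 reduce to a short loop. The sketch below (Python) is a minimal illustration, not the code used for the paper: the starting frequency $`\tilde{\nu }_0=10^{-2}`$ follows Step 1, the starting radius is drawn exactly from the Gaussian diffusion solution (21), and the photon count is kept small for speed.

```python
import math, random

def run_photon(nu0=1e-2):
    """Follow one photon through Steps 1-3; return (p, nu_obs) at escape."""
    # Step 1: the diffusion solution (21) is Gaussian in each Cartesian
    # coordinate with variance 2*sigma, where sigma = nu0**3 / 9.
    std = math.sqrt(2.0 * nu0**3 / 9.0)
    r = math.sqrt(sum(random.gauss(0.0, std)**2 for _ in range(3)))
    nu = nu0
    while True:
        # Step 2: isotropic direction cosine and exponential optical depth
        mu = 2.0 * random.random() - 1.0
        tau = -math.log(1.0 - random.random())   # avoids log(0)
        tau_max = 1.0 / nu                       # equation (24)
        if tau >= tau_max:
            # Step 3: the photon escapes; record its impact parameter
            # and "observed" frequency, equations (28) and (29)
            return r * math.sqrt(1.0 - mu * mu), nu - r * mu
        ell = nu * tau / (tau_max - tau)         # equation (25)
        r = math.sqrt(r * r + ell * ell + 2.0 * r * ell * mu)  # equation (26)
        nu += ell                                # equation (27)

escaped = [run_photon() for _ in range(10**4)]   # increase for better statistics
```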
In order to relate the results of the Monte Carlo simulations to observable quantities, histograms were constructed. Introducing discrete sets of impact parameters $`\stackrel{~}{p}_i`$ and “observed” frequencies $`\stackrel{~}{\nu }_j`$, the final values for the escaping photons can be binned into a two-dimensional histogram. The “observed” intensity field $`\stackrel{~}{I}(\stackrel{~}{p}_i,\stackrel{~}{\nu }_j)`$ is then estimated as proportional to
$$\frac{N_{ij}}{(2\pi \stackrel{~}{p}_i\mathrm{\Delta }\stackrel{~}{p}_i)(\mathrm{\Delta }\stackrel{~}{\nu }_j)},$$
(30)
where $`N_{ij}`$ is the number of photons falling into the $`i`$,$`j`$ bin of widths $`\mathrm{\Delta }\stackrel{~}{p}_i`$, $`\mathrm{\Delta }\stackrel{~}{\nu }_j`$. Similar estimates can be made for the intensity $`\stackrel{~}{I}(\stackrel{~}{p})`$, integrated over frequencies, and $`\stackrel{~}{I}(\stackrel{~}{\nu })`$, integrated over the area of the plane of the sky. Again, we define these intensities at the source frame. For conversion to the observer’s frame, $`\stackrel{~}{I}(\stackrel{~}{p}_i,\stackrel{~}{\nu }_j)`$ should be divided by $`(1+z_\mathrm{s})^2`$, and $`\stackrel{~}{I}(\stackrel{~}{p})`$ should be divided by $`(1+z_\mathrm{s})^3`$ (since the photon phase space density $`I(p,\nu )/\nu ^2`$ is conserved during the Hubble expansion).
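A corresponding binning step for the escaped photons of the Monte Carlo sketch above might look as follows (logarithmic $`\tilde{p}`$ bins and a linear $`\tilde{\nu }`$ grid are illustrative choices):

```python
import math

n_p, n_nu = 30, 30
p_edges = [10**(-2.5 + 3.0 * i / n_p) for i in range(n_p + 1)]
nu_edges = [3.0 * j / n_nu for j in range(n_nu + 1)]
N = [[0] * n_nu for _ in range(n_p)]             # counts N_ij

for p, nu in escaped:                            # photons outside the grid are skipped
    i = next((k for k in range(n_p) if p_edges[k] <= p < p_edges[k + 1]), None)
    j = next((k for k in range(n_nu) if nu_edges[k] <= nu < nu_edges[k + 1]), None)
    if i is not None and j is not None:
        N[i][j] += 1

def I_est(i, j):
    """Scaled intensity estimate, proportional to equation (30)."""
    p_mid = 0.5 * (p_edges[i] + p_edges[i + 1])
    dp = p_edges[i + 1] - p_edges[i]
    dnu = nu_edges[j + 1] - nu_edges[j]
    return N[i][j] / (2.0 * math.pi * p_mid * dp * dnu)
```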
Although the quantities in equations (26) and (27) are not directly observed, it is of theoretical interest to bin them also into a histogram in order to estimate the value of mean intensity at interior points, based on the fact that the number of scatterings per unit volume is proportional to the mean intensity.
A Monte Carlo run using $`10^8`$ photons seemed to give satisfactory results for most of the histogram bins. The exceptions are those at small values of the impact parameter or radius, where the corresponding bin areas or volumes become quite small, making the statistical errors large.
The Monte Carlo results are given in Figures 1–5. Figure 1 shows the intensity in the Ly$`\alpha `$ line integrated over frequency,
$$\tilde{I}(\tilde{p})=\int _0^{\mathrm{\infty }}\tilde{I}(\tilde{p},\tilde{\nu })\,d\tilde{\nu },$$
(31)
as a function of impact parameter. The intensity has a fairly compact central core; the characteristic impact parameter at which the intensity has fallen to half its central value is only about $`0.1r_{\star }`$, an order of magnitude less than the characteristic length $`r_{\star }`$. Because the intensity scales inversely as the square of the characteristic impact parameter, this means that the central intensities are two orders of magnitude larger than a simple estimate might have indicated. The half-light radius $`p_{1/2}`$ (i.e., the radius of a circular aperture containing half of the Ly$`\alpha `$ emission) is further out, at $`0.63r_{\star }`$, but at that impact parameter the intensity is already down by over an order of magnitude from its central value. Since the sky brightness limits the observational sensitivity, it may be difficult to infer $`p_{1/2}`$ observationally. However, since the scattered light is expected to be highly polarized (Rybicki & Loeb 1999), its contrast relative to an unpolarized background could be enhanced by using a polarization filter.
The angular radius on the sky corresponding to the physical radius of $`p=0.1r_{\star }`$, over which the scattered Ly$`\alpha `$ halo maintains a roughly uniform surface brightness, is given by
$$\theta =\frac{p}{d_\mathrm{A}},$$
(32)
where $`d_\mathrm{A}`$ is the angular diameter distance. In an $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology, $`d_\mathrm{A}=2cH_0^{-1}[\mathrm{\Omega }_Mz_\mathrm{s}+(\mathrm{\Omega }_M-2)(\sqrt{1+\mathrm{\Omega }_Mz_\mathrm{s}}-1)]/[\mathrm{\Omega }_M^2(1+z_\mathrm{s})^2]`$ (e.g., Padmanabhan 1993), while in an $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ cosmology the expression is more involved (Edwards 1972; Eisenstein 1997; see also Fig. 13.5 in Peebles 1993). We then find for the example of $`\mathrm{\Omega }_M=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`h_0=0.65`$, and $`z_\mathrm{s}=10`$, that $`\theta =15^{\prime \prime }\times (p/70\,\mathrm{kpc})`$, with a slightly larger value for a flat $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$ cosmology. Thus, the Ly$`\alpha `$ luminosity of a typical source at $`z_\mathrm{s}\approx 10`$ is expected to spread over a characteristic angular radius of $`15^{\prime \prime }`$ on the sky.
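The quoted angular scale follows directly from the Mattig relation given above; a short sketch for the $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ case:

```python
import math

c_over_H0 = 9.25e27 / 0.65        # c/H_0 in cm for h_0 = 0.65

def d_A(z, Om_M):
    """Angular diameter distance for Omega_Lambda = 0 (Mattig relation)."""
    return (2.0 * c_over_H0
            * (Om_M * z + (Om_M - 2.0) * (math.sqrt(1.0 + Om_M * z) - 1.0))
            / (Om_M**2 * (1.0 + z)**2))

dA = d_A(10.0, 0.4)               # ~3.0e27 cm
theta = 70.0 * 3.086e21 / dA      # p = 70 kpc, in radians
print(theta * 206265.0)           # ~15 arcseconds
```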
Figure 2 shows the total photon emission rate (luminosity) per unit frequency,
$$\tilde{L}(\tilde{\nu })=8\pi ^2\int _0^{\mathrm{\infty }}\tilde{I}(\tilde{p},\tilde{\nu })\,\tilde{p}\,d\tilde{p},$$
(33)
which can be used to get the observed spectral flux of photons $`F(\nu )`$ (in $`\mathrm{photons\,cm^{-2}\,s^{-1}\,Hz^{-1}}`$) from the entire Ly$`\alpha `$ halo,
$$F(\nu )=\frac{\tilde{L}(\tilde{\nu })}{4\pi d_\mathrm{L}^2}\frac{\dot{N}_\alpha }{\nu _{\star }}(1+z_\mathrm{s})^2,$$
(34)
where $`\nu =\tilde{\nu }\nu _{\star }/(1+z_\mathrm{s})`$, and $`d_\mathrm{L}=d_\mathrm{A}(1+z_\mathrm{s})^2`$ is the standard luminosity distance to the source. This flux is expressed in the frame of a local observer. The extra factor of $`(1+z_\mathrm{s})^2`$ is due to the fact that we evaluate the photon number flux per unit frequency rather than the energy flux, which is used in the usual definition of $`d_\mathrm{L}`$. As mentioned before, the observed intensity is
$$I(p,\nu )=\frac{\tilde{I}(\tilde{p},\tilde{\nu })}{(1+z_\mathrm{s})^2}\frac{\dot{N}_\alpha }{r_{\star }^2\nu _{\star }},$$
(35)
and similarly the observed integral of the intensity over frequency,
$$I(p)=\frac{\tilde{I}(\tilde{p})}{(1+z_\mathrm{s})^3}\frac{\dot{N}_\alpha }{r_{\star }^2}.$$
(36)
The peak of the function $`\stackrel{~}{L}(\stackrel{~}{\nu })`$ is seen to be redshifted by roughly $`0.44`$ (recall that we define $`\stackrel{~}{\nu }`$ to be the negative deviation from line center). The full width at half maximum is $`\mathrm{\Delta }\stackrel{~}{\nu }=1.75`$. A linear plot of $`\stackrel{~}{L}(\stackrel{~}{\nu })`$ is presented in Figure 3. Because the shifts are small, the abscissa can be equally interpreted as proportional to wavelength shift from line center, so Figure 3 also shows the observed lineshape as a function of wavelength. One notes the very extended red wing of this line, which goes approximately as $`1/\stackrel{~}{\nu }^2`$. Because of this behavior, the centroid of the line is not well defined.
Figure 4 shows the observed intensities as a function of frequency for five different impact parameters ($`\mathrm{log}\tilde{p}=`$ $`0.0`$, $`-0.5`$, $`-1.0`$, $`-1.5`$, and $`-2.0`$). The curves for $`\mathrm{log}\tilde{p}=-1.5`$ and $`-2.0`$ have been smoothed, but still show some fluctuations due to limited Monte Carlo sampling, especially at the smallest frequencies. All curves share roughly the same lineshape, and (apart from vertical scaling) differ only in the redshift of the peak. The peak redshift is at a minimum, $`\tilde{\nu }=0.19`$, for the central rays, and increases to $`0.33`$ for $`\tilde{p}=0.1`$ and to $`1.4`$ for $`\tilde{p}=1`$. Using appropriate integration, these curves could be used to compute the expected lineshape from observations in specific circular apertures or slits (e.g., Dey et al. 1998).
Figure 5 is mostly of theoretical interest and shows the internal mean intensity as a function of radius for a variety of frequencies. At the smallest frequencies the curves are reasonably well approximated by the diffusion solution (21), shown as the dotted curves. As $`\tilde{\nu }`$ increases into the free-streaming regime ($`\tilde{\nu }\gtrsim 1`$), the curves deviate from the diffusion solution, and eventually become almost shock-like, with photons piling up near the “causality” surface $`\tilde{r}=\tilde{\nu }`$, consistent with the “causal” diffusion equation (18).
One caveat concerning Figure 5 should be noted. Many of these curves exhibited considerable statistical error at small values of $`\stackrel{~}{r}`$, especially for the largest values of $`\stackrel{~}{\nu }`$. In these uncertain regions the plots were completed by assuming that all curves approach a constant value at small $`\stackrel{~}{r}`$, determined by the solution at larger $`\stackrel{~}{r}`$. This extrapolation procedure is certainly justified in the diffusion limit, but its validity in the general case needs to be investigated further.
## 4 Observational Considerations
In this section we consider the detectability of the Ly$`\alpha `$ halos. Many of the basic properties of the early sources are unknown, and so our estimates are meant for illustrative purposes only.
The cosmological parameters affect our estimates, but not greatly. In this section, we assume $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_M=0.4`$, $`\mathrm{\Omega }_b=0.05`$, and $`h_0=0.65`$. Of greater uncertainty is the Ly$`\alpha `$ photon luminosity, $`\dot{N}_\alpha `$, of high-redshift sources. For the sake of definiteness, we adopt the measured luminosity for a known Ly$`\alpha `$ galaxy at $`z_\mathrm{s}=5.34`$, which was discovered by Dey et al. (1998). To estimate the integral under the observed spectral line profile in their Figure 3, we assume an intensity amplitude of $`5`$ $`\mu `$Jy and a rest-frame line width of $`2`$ Å. In our cosmological model, this source has a luminosity distance of $`d_\mathrm{L}=1.64\times 10^{29}`$ cm, yielding a value of
$$\dot{N}_\alpha =6.4\times 10^{53}\,\mathrm{s^{-1}}.$$
(37)
Let us now assume that a candidate high-redshift source is identified and its redshift determined by some other, unscattered lines, such as H$`\alpha `$. As a specific example, suppose the source is at $`z_\mathrm{s}=10`$, so that its observed Ly$`\alpha `$ line falls in the infrared at $`1.34\,\mu \mathrm{m}`$. From equations (8) and (9) we find,
$`\nu _{\star }`$ $`=`$ $`9.8\times 10^{12}\text{ Hz},`$
$`r_{\star }`$ $`=`$ $`2.3\times 10^{24}\text{ cm}=745\text{ kpc}.`$ (38)
The angular diameter distance is found to be $`d_\mathrm{A}=3.0\times 10^{27}`$ cm, implying a typical angular size of the halo of 0.1 $`r_{\star }/d_\mathrm{A}=7.7\times 10^{-5}=16^{\prime \prime }`$.
The observed integrated brightness near the center of the halo is given by equation (36). Figure 1 shows that $`\stackrel{~}{I}(0)=0.2`$, and so
$$I_{\mathrm{halo}}(0)=18\,\mathrm{photons\,cm^{-2}\,s^{-1}\,sr^{-1}}.$$
(39)
We suppose that a narrow filter is used to observe only in a narrow wavelength band containing the Ly$`\alpha `$ line, in order to reduce the sky background as much as possible. From Figure 3 we infer a full width at half maximum of the line of $`\mathrm{\Delta }\tilde{\nu }\sim 1`$, implying an observing filter width of 50 Å for a source at $`z_\mathrm{s}=10`$. However, one could use narrower filters to take advantage of the fact that the monochromatic intensity depends jointly on frequency and impact parameter in a known way, as shown in Figure 4. If one wanted to concentrate only on the innermost core of emission (out to $`\tilde{p}=0.1`$), then a width $`\mathrm{\Delta }\tilde{\nu }=0.5`$ would be sufficient, implying a filter width of only 25 Å. We adopt this narrower filter width in our estimate of the sky background. In frequency this corresponds to $`\mathrm{\Delta }\nu =4.2\times 10^{11}`$ Hz.
The brightness of the halo is to be compared to the sky background at 1.34 $`\mu `$m, which is obviously minimized in observations from space. Using sufficiently high resolution to eliminate contaminating point sources, the sky brightness from space is dominated by interplanetary dust (IPD) emission. The local IPD contamination might be reduced by having a spacecraft well above the ecliptic plane, but for the moment we shall proceed conservatively and assume that the IPD is the primary sky background. Hauser et al. (1998) quote a value for the IPD brightness at 1.25 $`\mu `$m of $`375\,\mathrm{nW\,m^{-2}\,sr^{-1}}`$. This implies a sky brightness within our filter band of
$$I_{\mathrm{sky}}=4.7\times 10^5\,\mathrm{photons\,cm^{-2}\,s^{-1}\,sr^{-1}}.$$
(40)
Thus, the halo brightness is only $`4\times 10^{-5}`$ of the sky background. Given this low value, Ly$`\alpha `$ halos would be difficult to observe. However, there are a number of grounds for optimism about their detection in the long run.
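The chain of numbers leading to this contrast can be reproduced directly from equations (36)-(40); a sketch with the adopted parameters (the Planck constant and the photon frequency at $`1.34\,\mu `$m are the only extra inputs):

```python
N_dot = 6.4e53                    # equation (37), photons/s
z_s, r_star = 10.0, 2.3e24        # equation (38), r_star in cm
I_tilde0 = 0.2                    # central scaled intensity from Figure 1

# Equation (36): observed central surface brightness of the halo
I_halo = I_tilde0 / (1.0 + z_s)**3 * N_dot / r_star**2
print(f"I_halo(0) = {I_halo:.0f} photons/cm2/s/sr")       # ~18

# Sky: nu*I_nu = 375 nW/m2/sr near 1.25 um, filter width 4.2e11 Hz
nu_obs = 3e10 / 1.34e-4           # Hz at 1.34 um
E_ph = 6.626e-27 * nu_obs         # photon energy in erg
I_nu_sky = 375e-9 * 1e7 / 1e4 / nu_obs    # erg/cm2/s/Hz/sr
I_sky = I_nu_sky * 4.2e11 / E_ph
print(f"I_sky = {I_sky:.1e} photons/cm2/s/sr")            # ~4.7e5
print(f"contrast = {I_halo / I_sky:.1e}")                 # ~4e-5
```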
The choice of Ly$`\alpha `$ luminosity in equation (37), based on Dey et al. (1998), was made purely for definiteness. The observed radiation intensity scales linearly with the value of $`\dot{N}_\alpha `$, and so our estimates may be easily modified for any other value. If the values of $`\dot{N}_\alpha `$ for some high redshift sources were much larger (e.g. due to higher star formation rates or lower dust content), then the difficulty in observing the Ly$`\alpha `$ halos would be eased considerably. In this regard we note that even if the value given in equation (37) is typical for high-redshift galaxies, it is likely that there will be a power-law distribution of observed fluxes (see Figure 2 in Haiman & Loeb 1998b), and some sources could have values of $`\dot{N}_\alpha `$ larger by factors of ten or more.
In addition, the interplanetary dust emission could be much lower if the orbit of a spacecraft such as the Next Generation Space Telescope (NGST) is chosen so that the telescope spends some of its time outside the orbit of the Earth or well above the ecliptic plane. The limiting noise level in this case is provided by the cosmic infrared background. This background is poorly known, but Hauser et al. (1998) give an upper limit at 1.25 $`\mu `$m of $`\nu I_\nu \lesssim 75\,\mathrm{nW\,m^{-2}\,sr^{-1}}`$, a factor of five less than the value adopted in equation (40). Given this sky background, we can calculate the minimum exposure time necessary for the detection of the Ly$`\alpha `$ halo signal. Ignoring instrumental noise and adopting a detector quantum efficiency close to unity (see http://augusta.stsci.edu/ngst-etc/ngstetc.html for more realistic estimates), we can find the best signal-to-noise ratio, $`\mathrm{S}/\mathrm{N}`$, that is attainable after an exposure time $`t`$ on an NGST telescope of 8 meter diameter,
$$\frac{\mathrm{S}}{\mathrm{N}}=10\left(\frac{\dot{N}_\alpha }{6\times 10^{54}\,\mathrm{s^{-1}}}\right)\left(\frac{t}{10\,\mathrm{hours}}\right)^{1/2}.$$
(41)
Hence, the halo of a source at $`z_\mathrm{s}\approx 10`$ which is an order of magnitude brighter in Ly$`\alpha `$ than the Dey et al. (1998) galaxy might be detectable.
Another circumstance favoring such observations is that the Ly$`\alpha `$ halos are highly polarized (see Rybicki & Loeb 1999). This polarization is highest at the outermost radii, but is $`\sim 14`$% even at the core radius of $`0.1r_{\star }`$. Differencing two maps in orthogonal linear polarizations is potentially a way to improve the signal-to-noise ratio. Perhaps more importantly, the presence of polarization would be a clear signal that the measured halo is a Ly$`\alpha `$ halo of the type described here, rather than some other, possibly instrumental, effect.
## 5 Conclusions
We have shown that Ly$`\alpha `$ sources before the reionization redshift should be surrounded by an intergalactic halo of scattered Ly$`\alpha `$ photons. These sources are expected to appear more spatially extended in the Ly$`\alpha `$ line than they are in the continuum to the red of the line. Our numerical solution implies that the Ly$`\alpha `$ halo has a roughly uniform surface brightness out to an impact parameter $`p\approx 0.1r_{\star }\approx 70\,\mathrm{kpc}`$ (see Fig. 1). At this impact parameter, the line is broadened and redshifted by of order $`10^3\,\mathrm{km\,s^{-1}}`$ relative to the source (Fig. 3). These substantial broadening and redshift signatures cannot easily be caused by galactic kinematics, and hence signal the intergalactic origin of the scattered line.
The detection of intergalactic Ly$`\alpha `$ halos with the above characteristics around sources, down to a limiting redshift at which the neutral IGM ceases to exist, can be used as a direct method for inferring the redshift of reionization. Alternative methods are more ambiguous, as they rely on the detection of the Gunn-Peterson damping wing, which might be confused with damped Ly$`\alpha `$ absorbers along the line-of-sight (Miralda-Escudé 1998; see also Haiman & Loeb 1999 for $`z_\mathrm{s}\sim 7`$), the detection of the weak damping factor of small-scale microwave anisotropies, which is an integral quantity depending also on other cosmological parameters (e.g., Hu & White 1997; Haiman & Loeb 1998a), or the detection of very faint (and somewhat uncertain) spectral features in the cosmic background, which is highly challenging technologically (Haiman, Rees, & Loeb 1997; Gnedin & Ostriker 1997; Baltz, Gnedin, & Silk 1998; Shaver et al. 1999).
Our calculation assumed the simplest configuration of a uniform, neutral IGM with a pure Hubble flow around a steady Ly$`\alpha `$ source. In popular Cold Dark Matter cosmologies, the characteristic nonlinear mass scale of collapsed objects at $`z_\mathrm{s}\approx 10`$ is $`\sim 10^8M_{\odot }`$ (e.g., Haiman & Loeb 1998a). A galaxy of total mass $`10^8M_8M_{\odot }`$ is assembled from a radius of $`4.4(M_8/\mathrm{\Omega }_Mh_0^2)^{1/3}[(1+z_\mathrm{s})/10]^{-1}\,\mathrm{kpc}`$ in the IGM, which is more than an order of magnitude smaller than our inferred Ly$`\alpha `$ halo radius. Similarly, the Hubble velocity at the Ly$`\alpha `$ halo radius is larger by more than an order of magnitude than the characteristic velocity scale of nonlinear objects at these redshifts. Hence, our simplifying assumptions of a smooth IGM immersed in a Hubble flow are likely to be satisfied on the Ly$`\alpha `$ halo scale. Modest corrections due to the density enhancements and peculiar velocities in the infall regions around sources might, however, be necessary. More importantly, the ionizing effect of a bright quasar on its surrounding IGM might extend out to the scale of interest. In such a case, the intensity distribution of the Ly$`\alpha `$ halo will depend on the spectrum and luminosity history of the ionizing radiation emitted by the source, which determine the neutral fraction as a function of radius around it. This “proximity effect” might be important for quasars, but less so for galaxies, whose ultraviolet emission is typically strongly suppressed beyond the Lyman limit due to absorption in stellar atmospheres and in the interstellar medium. Other changes to the halo intensity and polarization profiles might result from short-term variability (on $`\lesssim 10^5`$ years) or anisotropic Ly$`\alpha `$ emission by the source. These complications could easily be incorporated into our Monte Carlo approach for particular source parameters.
Detection of the predicted Ly$`\alpha `$ halo might become feasible over the next decade, either with larger ground-based telescopes or with the Next Generation Space Telescope (NGST), the successor to the Hubble Space Telescope planned for launch over the next decade (for more details, see http://ngst.gsfc.nasa.gov/). For $`z_\mathrm{s}\approx 10`$, the entire Ly$`\alpha `$ luminosity of a source is typically scattered over a characteristic angular radius of $`\sim 15^{\prime \prime }`$. The Ly$`\alpha `$ halo is therefore sufficiently extended to be resolved, along with its tangential polarization, as long as its brightness exceeds the fluctuation noise of the infrared background.
Recently, five sources have been photometrically identified as having possible redshifts of $`z\sim 10`$ in the Hubble Deep Field South observed by NICMOS (Chen et al. 1998). We emphasize that even just a narrow-band photometric detection of the scattered Ly$`\alpha `$ halos (see Fig. 1) around sources at different redshifts would provide invaluable information about the neutral IGM before and during the reionization epoch. On sufficiently large scales where the Hubble flow is smooth and the IGM is neutral, the Ly$`\alpha `$ brightness distribution can also be used to determine the values of the cosmological mass densities of baryons and matter through equations (8), (9) and the angular diameter distance relation. Thus, in addition to studying the development of reionization, such observations could constrain fundamental cosmological parameters in a redshift interval that was never probed before.
Scattering of resonance line radiation gives rise to polarization at a level depending on the atomic physics of the transition and on the geometry of the scattering events. Using an extension of the Monte Carlo technique used here, Rybicki & Loeb (1999) showed that most photons with the largest impact parameters suffer a nearly right-angle last scattering, and the degree of tangential polarization is large for such photons, of order tens of percent (asymptotically 60%). Measurement of this polarization could be used as an independent check on the model parameters, and might even provide a powerful way of identifying early objects.
We thank Chris Kochanek and Chuck Steidel for useful discussions. This work was supported in part by the NASA grants NAG5-7768 and NAG5-7039 (for AL).
# SLOWING DOWN AND SPEEDING UP PSR’S PERIODS: A SHAPIRO TELESCOPE TRACING DARK MATTER
## 1 Introduction
Matter bends space-time: this is why Newton's apple falls and why planets and stars follow Keplerian orbits. Gravity is the deformation of both space and time, and this double nature is the reason why massless photons are bent twice as much in Einstein's General Relativity as in the simplest Newtonian model supplemented by Special Relativity. That factor of two was the basis of the 1919 light-deflection test and of the triumph of General Relativity.
Light bending is at the root of the microlensing techniques able to reveal unseen dark MACHOs moving through our Galaxy. But gravity also slows down time itself: the famous twin paradox could be staged just as well near and far from a neutron star surface (neglecting tidal Riemann forces), since clocks beat more slowly near compact stars (neutron stars, white dwarfs, the Sun, ...) than in flat space-time far away. Indeed, gravitational redshift is an additional cornerstone of General Relativity. The variable position of a wave source with respect to a gravitational field is “recorded” as a phase shift in the observed wave: a Shapiro phase delay. This effect was predicted in 1964 and observed by Shapiro himself with radar echoes from Mercury and Venus grazing the Sun. Because PSR periods are on average very stable ($`\langle \dot{P}_{psr}\rangle \sim 10^{-14}\,\mathrm{s\,s^{-1}}`$), we have proposed to use them as timing candles to scan the whole Galaxy, hunting for dark matter crossing the line of sight.
## 2 The first evidence of Shapiro Phase Delay on PSRs
The microlensing technique seems a uniquely powerful test because it inspects millions of stars for achromatic luminosity magnifications: how could a sample of a few hundred pulsars compete with such a huge number? The answer lies in the extreme sensitivity of the Shapiro phase shift compared with the microlensing one. While the microlensing magnification is proportional to the inverse of the MACHO's impact parameter, $`1/b_{min}=1/(u_{min}R_E)`$ (where $`u_{min}`$ is adimensional and $`R_E`$ is the Einstein radius), the Shapiro phase delay is made up of two main terms. The first is a geometrical one, $`\mathrm{\Delta }t_{geo}=\frac{r_S}{4c}\left(\sqrt{u^2+4}-u\right)^2`$, due to the difference between the deflected and undeflected paths, which vanishes rapidly at large impact values: $`\mathrm{\Delta }t_{geo}|_{u\gg 1}=\frac{r_S}{cu^2}`$. The second and main phase delay is the gravitational redshift (Shapiro phase delay) due to the MACHO field, $`\int \frac{GM}{c^2r}\,dr`$, accumulated all along the wave trajectory; its behaviour is logarithmic in the impact parameter $`u`$:
$$\mathrm{\Delta }t_{grav}=\frac{r_S}{c}\mathrm{ln}\left(\frac{8D_s}{r_S}\right)-\frac{2r_S}{c}\mathrm{ln}\left(\sqrt{u^2+4}+u\right)$$
(1)
where $`u=b/R_E`$ is the adimensional impact parameter of any MACHO crossing the line of sight, whose Schwarzschild and Einstein radii are:
$$r_S=\frac{2GM}{c^2},$$
$$R_E=\sqrt{2r_S\frac{D_dD_{ds}}{D_s}}=4.8\times 10^{13}\sqrt{\left[\frac{M}{M_{\odot }}\right]\left[\frac{D_s}{4\,\mathrm{kpc}}\right]\left[\frac{x_d}{1/2}\right]\left[2-\frac{x_d}{1/2}\right]}\;\mathrm{cm}$$
(2)
where $`x_d=\frac{D_d}{D_s}`$, $`1-x_d=\frac{D_{ds}}{D_s}`$; $`D_d`$ is the deflector-observer distance, $`D_{ds}`$ the source-deflector distance and $`D_s`$ the source-observer distance.
The resulting characteristic time of the Shapiro delay, as the MACHO approaches or recedes from the line of sight, is:
$$t_c=\frac{R_E}{v_{\perp }}u_{min}\simeq 4.55\,\frac{\left[\frac{u_{min}}{10^2}\right]}{\left[\frac{\beta _{\perp }}{10^{-3}}\right]}\sqrt{\left[\frac{M}{M_{\odot }}\right]\left[\frac{D_s}{4\,\mathrm{kpc}}\right]\left[\frac{x_d}{1/2}\right]\left[2-\frac{x_d}{1/2}\right]}\;\mathrm{yr}$$
(3)
and the consequent PSR’s period derivative is:
$$\dot{P}=1.3810^{13}\frac{\left[\frac{\beta _{}}{10^3}\right]\sqrt{\frac{M}{M_{}}}}{\left[\frac{u_{min}}{10^2}\right]\sqrt{\left[\frac{D_s}{4Kpc}\right]\left[\frac{x_d}{0.5}\right]\left[2\frac{x_d}{0.5}\right]}}ss^1$$
(4)
First, we notice that the above period derivative is an order of magnitude larger than the average PSR one ($`\dot{P}\sim 10^{-14}\,\mathrm{s\,s^{-1}}`$) and is therefore well detectable. Secondly, because of this sensitivity we can parametrize the adimensional impact parameter not just as $`u_{min}\lesssim 1`$, as needed in the microlensing technique, but at $`u_{min}\sim 10^2`$ scales. This offers a corresponding quadratic geometrical amplification ($`\pi u^2`$) of the probability of revealing a MACHO, nearly four orders of magnitude larger than in the microlensing case. For this reason, present Shapiro delays on seven hundred PSRs are comparable with a microlensing sample of nearly seven million stars. However, a positive period derivative ($`\dot{P}>0`$) might well be due to a large intrinsic angular momentum loss. Therefore we first looked for negative PSR period derivatives ($`\dot{P}<0`$), whose interpretation cannot be attributed to any (rare) corotating accreting mass (single stars). Statistically we expect a few such events a year, for a MACHO density comparable with that of the observed microlenses. We have observed (up to date) 6 candidates, and at least one of them, the very isolated PSR B1813-26, we definitively interpret as a Shapiro phase delay in action. We also find a group of PSRs in globular clusters possibly suffering a collective Shapiro phase delay due to the cluster's own gravitational field: B0021-72C, B0021-72D in 47 Tucanae and B2127+11D, B2127+11A in M15.
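The fiducial numbers in equations (2)-(4) can be checked with a few lines of Python (a sketch; here we take $`r_S=2GM/c^2`$ and estimate $`\dot{P}`$ as the rate of change of the logarithmic delay term of equation (1), $`\dot{P}\simeq (2r_S/c)\,v_{\perp }/(u_{min}R_E)`$):

```python
import math

G, c, M_sun, kpc = 6.67e-8, 3e10, 1.99e33, 3.086e21   # cgs units

M, D_s, x_d = 1.0 * M_sun, 4.0 * kpc, 0.5
beta_perp, u_min = 1e-3, 1e2                # v_perp/c and impact parameter

r_S = 2.0 * G * M / c**2                    # ~3.0e5 cm for one solar mass
R_E = math.sqrt(2.0 * r_S * D_s * x_d * (1.0 - x_d))  # equation (2), ~4e13 cm

t_c = R_E * u_min / (beta_perp * c)         # equation (3)
print(f"t_c   = {t_c / 3.15e7:.2f} yr")     # ~4.5 yr

P_dot = (2.0 * r_S / c) * (beta_perp * c) / (u_min * R_E)   # cf. equation (4)
print(f"P_dot = {P_dot:.2e} s/s")           # ~1.4e-13
```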
## 3 Shapiro Delay in Dark Planet search
The technique above could prove fruitful, and could be firmly validated, by testing and calibrating the PSR delay induced when the line of sight crosses the Sun's field or the trajectories of Jupiter or Mars. For nearby deflectors the gravitational phase delay takes a more complicated 3D vectorial form:
$$\mathrm{\Delta }t_{Grav}=\frac{r_S}{c}\mathrm{ln}\left[\frac{D_s-D_d\mathrm{cos}\psi +D_{ds}}{D_d\left(1-\mathrm{cos}\psi \right)}\right]$$
(5)
where $`D_d=|\stackrel{}{D_d}|`$, $`D_s=|\stackrel{}{D_s}|`$, $`D_{ds}=|\stackrel{}{D_s}\stackrel{}{D_d}|=|\stackrel{}{D_{ds}}|`$ and $`\psi =\widehat{\stackrel{}{D_s}\stackrel{}{D_d}}`$. For $`\psi 0`$ after simple expansion one recovers from equation 5 the previous Shapiro well known equation 1. For the Sun the effect even few degrees far away its position is huge ($`|\dot{P}|10^{10}ss^1`$) and must be observed within $`t_cday`$. In order to avoid any additional and confusing refractive index delay (due to solar plasma even at large impact parameters) PRSs might be better observed near Sun at higher radio PSR frequencies ($`\nu GHz`$) where plasma refractive index is smaller than the gravitational Shapiro one. Solar Shapiro delay has, naturally, a strong annual modulation. Planets like Jupiter, at $`5.2AU`$, will produce well detectable period derivative ($`|\dot{P}|10^{12}ss^1`$) at $`u_{min}100`$ and even at $`10^{}`$ angular impact parameter distance the effect might be observable ($`|\dot{P}|610^{15}ss^1`$) (see figure 2). In particular to test the technique we chose a sub sample of all known PRSs laying along the ecliptic trajectory ($`\pm 6^{}`$). Most PRSs are localized toward Galactic Center and Anticenter and are described in figure 3 . The planetary (Juppiter, Mars..) Shapiro Delay is strongly modulated not only by the Terrestrial yearly trajectory, but also by the combined Earth-Juppiter, Earth-Mars.. mutual distances, which play a key role in defining the Einstein radius and the Shapiro Phase Delay. These combined effects must introduce a very characteristic imprint in their own phase delay as we predicted in figure 3.
## 4 Conclusions
The Shapiro phase delay might be able not only to discover dark matter but, once calibrated, may even lead to the discovery of new minor planets near the ecliptic plane. Their presence might be of relevance (through tidal disturbances during rare encounters) to the Earth's past (and future) history, and to the evolution of life. Finally, the recent EGRET data on the diffuse GeV emission of the galactic halo might find an answer, among others, in ($`\sim `$ pc) molecular hydrogen clouds interacting with (GeV) cosmic-ray protons. Microlenses at those large radii are inefficient (by the Kirchhoff theorem). Therefore the Shapiro phase delay might be the unique probe able to verify such an evanescent baryonic dark matter candidate.
# Young massive star clusters in nearby galaxies
Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias, and with the Danish 1.5-m telescope at ESO, La Silla, Chile.
## 1 Introduction
During the last decade many investigations have revealed the presence of “young massive star clusters” (YMCs) or “super star clusters” in mergers and starburst galaxies, and it has been speculated that these objects could be young analogues of the globular clusters seen today in the Milky Way. It is an intriguing idea that globular clusters could still be formed today in some environments, because the study of such objects would be expected to provide a direct insight into the conditions that were present in the early days of our own and other galaxies when the globular clusters we see today in the halos were formed.
Probably the most famous example of a merger galaxy hosting “young massive clusters” is the “Antennae”, NGC 4038/39, where Whitmore & Schweizer (1995) discovered more than 700 blue point-like sources with absolute visual magnitudes up to $`M_V=-15`$. Other well-known examples are NGC 7252 (Whitmore et al. 1993), NGC 3921 (Schweizer et al. 1996) and NGC 1275 (Holtzman et al. 1992). All of these galaxies are peculiar systems and obvious mergers. In fact, in all cases investigated so far where star formation is associated with a merger, YMCs have been identified (Ashman & Zepf 1998).
But YMCs exist not only in mergers. They have been located also in starburst galaxies such as NGC 1569 and NGC 1705 (O’Connell et al. 1994), NGC 253 (Watson et al. 1996) and M82 (O’Connell et al. 1995), in the nuclear rings of NGC 1097 and NGC 6951 (Barth et al. 1995), and in the blue compact galaxy ESO-338-IG04 (Östlin et al. 1998). The magnitudes of YMCs reported in all these galaxies range from $`M_V\approx -10`$ to $`-15`$, and the effective radii $`R_e`$ ($`R_e`$ = the radius within which half of the light is contained) have been estimated to be of the order of a few parsec to about 20 pc, compatible with the objects deserving the designation “young globular clusters”.
All of the systems mentioned above are relatively distant, but in fact one does not have to go farther away than the Local Group in order to find galaxies containing rather similar star clusters. The Magellanic Clouds have long been known to host star clusters of a type not seen in the Milky Way, i.e. compact clusters that are much more massive than Galactic open clusters (van den Bergh 1991, Richtler 1993), and in many respects resemble globular clusters more than open clusters. Some of the most conspicuous examples are the $`10^7`$ years old cluster in the centre of the 30 Doradus nebula in the LMC (Brandl et al. 1996), shining at an absolute visual magnitude of about $`-11`$, and the somewhat older object NGC 1866 (about $`10^8`$ years), also in the LMC (Fischer et al. 1992), which has an absolute visual magnitude of $`M_V\approx -9.0`$. Even if these clusters are not quite as spectacular as those found in genuine starburst galaxies, they are still more massive than any of the open clusters seen in the Milky Way today. YMCs have been reported also in M33 (Christian & Schommer 1988), and in the giant Sc spiral M101 (Bresolin et al. 1996).
Taking into account the spread in the ages of the YMCs in the Antennae, Fritze-v. Alvensleben (1998) recovered a luminosity function (LF) resembling that of old globular clusters (GCs) to a very high degree when evolving the present LF to an age of 12 Gyr. Elmegreen & Efremov (1997) point out the interesting fact that the upper end of the LF of old GC systems is very similar to that observed for YMCs, open clusters in the Milky Way, and even for HII regions in the Milky Way, and this is one of their arguments in favour of the hypothesis that the basic mechanism behind the formation of all these objects is the same. They argue that massive clusters are formed whenever there is a high pressure in the interstellar medium, due to starbursts or other reasons such as a high virial density (as in nuclear rings and dwarf galaxies). However, this doesn’t seem to explain the presence of YMCs in apparently undisturbed disk galaxies like M33 and M101.
So it remains a puzzling problem to understand why YMCs exist in certain galaxies, but not in others. In this paper we describe some first results from an investigation aiming at addressing this question. It seems that YMC’s can exist in a wide variety of host galaxy environments, and there are no clear systematics in the properties of the galaxies in which YMC’s have been identified. And just like it is not clear how YMCs and old globular clusters are related to each other, one can also ask if the very luminous YMCs in mergers and starburst galaxies are basically the same type of objects as those in the Magellanic Clouds, M33 and M101.
We therefore decided to observe a number of nearby galaxies and look for populations of YMCs. The galaxies were mainly selected from the Carnegie Atlas (Sandage & Bedke 1994), and in order to minimise the problems that could arise from extinction internally in the galaxies we selected galaxies that were more or less face-on. We tried to cover as wide a range in morphological properties as possible, although the requirement that the galaxies had to be nearby (because we would rely on ground-based observations) restricted the available selection substantially. The final sample consists of 21 galaxies out to a distance modulus of $`m-M\approx 30`$, for which basic data can be seen in Table 1.
In this paper we give an overview of our observations, and we discuss the main properties of the populations of YMCs in the galaxies in Table 1. In a subsequent paper (Larsen et al. 1999) we will discuss the correlations between the number of YMCs in a galaxy and various properties of the host galaxies in more detail, and compare our data with data for starburst galaxies and mergers published in the literature.
## 2 Observations and reductions
The observations were carried out partly with the Danish 1.54 m. telescope and DFOSC (Danish Faint Object Spectrograph and Camera) at the European Southern Observatory (ESO) at La Silla, Chile, and partly with the 2.56 m. Nordic Optical Telescope (NOT) and ALFOSC (a DFOSC twin instrument), situated at La Palma, Canary Islands. The data consists of CCD images in the filters U,B,V,R,I and H$`\alpha `$. In the filters BVRI and H$`\alpha `$ we typically made 3 exposures of 5 minutes each, and 3 exposures of 20 minutes each in the U band. Both the ALFOSC and DFOSC were equipped with thinned, backside-illuminated 2 K<sup>2</sup> Loral-Lesser CCDs. The pixel scale in the ALFOSC is 0.189″/pixel and the scale in the DFOSC is 0.40″/pixel, and the fields covered by these two instruments are $`6.5^{\prime }\times 6.5^{\prime }`$ and $`13.7^{\prime }\times 13.7^{\prime }`$, respectively. All observations used in this paper were conducted under photometric conditions, with typical seeing values (measured on the CCD images) being 1.5″ and 0.8″ for the La Silla and La Palma data, respectively.
During each observing run, photometric standard stars in the Landolt (1992) fields were observed for calibration of the photometry. Some of the Landolt fields were observed several times during the night at different airmass in order to measure the atmospheric extinction coefficients. For the flatfielding we used skyflats exposed to about half the dynamic range of the CCD, and in general each flatfield used in the reductions was constructed as an average of about 5 individual integrations.
After bias subtraction and flatfielding, the three exposures in each filter were combined to a single image, and star clusters were identified using the daofind task in DAOPHOT (Stetson 1987) on a background-subtracted $`V`$-band frame. Aperture photometry was then carried out with the DAOPHOT phot task, using a small aperture radius (4 pixels for colours and 8 pixels for the $`V`$-band magnitudes) in order to minimise errors arising from the greatly varying background. Aperture corrections from the standard star photometry (aperture radius = 20 pixels) to the science data were derived from a few isolated, bright stars in each frame. Because the star clusters are not true point sources, no PSF photometry was attempted. A more detailed description of the data reduction procedure will be given in Larsen (1999).
The photometry was corrected for Galactic foreground extinction using the $`A_B`$ values given in the Third Reference Catalogue of Bright Galaxies (de Vaucouleurs et. al. (1991), hereafter RC3).
### 2.1 Photometric errors
The largest formal photometric errors as estimated by phot are those in the $`U`$ band, amounting to around 0.05 mag. for the faintest clusters. However, these error estimates are based on pure photon statistics and are not very realistic in a case like ours. Other contributions to the errors come from the standard transformation procedure, from a varying background, and from the fact that the clusters are not perfect point sources so that the aperture corrections become uncertain.
The r.m.s. residuals of the standard transformations were between 0.01 - 0.03 mags. in V, B-V and V-I, and between 0.04 and 0.06 mags. in U-B.
The errors in aperture corrections arising from the finite cluster sizes were estimated by carrying out photometry on artificially generated clusters with effective radii in the range $`R_e=0-4`$ pixels (0″ - 1.6″ on the DFOSC frames). 1″ corresponds to a linear distance of about 20 pc at the distance of typical galaxies in our sample, such as NGC 1313 and NGC 5236. The artificial clusters were modeled by convolving the point-spread function (PSF) with MOFFAT15 profiles.
The upper panel in Fig. 1 shows the errors in the aperture corrections for $`V`$-band photometry through aperture radii $`R_{ap}=4`$ pixels and $`R_{ap}=8`$ pixels as a function of $`R_e`$, while the lower panel shows the errors in the colour indices for $`R_{ap}=4`$ pixels. At $`R_e\approx 1^{\prime \prime }`$, the error in $`V`$-band magnitudes using $`R_{ap}=8`$ pixels amounts to about 0.15 magnitudes. For a given $`R_e`$, the errors in the colours are much smaller than the errors in the individual bandpasses, so that accurate colours can be derived through the small $`R_e=4`$ pixels aperture without problems. This convenient fact has also been demonstrated by e.g. Holtzman et al. (1996).
The random errors, primarily arising due to background fluctuations, should in principle be evaluated individually for each cluster, since they depend on the local environment of the cluster. Fig. 2 shows the random errors for clusters in NGC 5236, estimated by adding artificial objects of similar brightness and colour near each cluster and remeasuring them using the same photometric procedure as for the cluster photometry. Again it is found that the errors in two different filters tend to cancel out when colour indices are formed. The $`V`$-band errors are quite substantial, but we have chosen to accept the large random errors associated with the use of an $`R_{ap}=8`$ pixels aperture in order to keep the effect of systematic errors at a low level.
## 3 Identification of star clusters
After the photometry had been obtained, the first step in the analysis was to identify star cluster candidates, and to make sure that they were really star clusters and not some other type of objects. Possible sources of confusion could be compact HII regions, foreground stars, and individual luminous stars in the observed galaxies. However, each of these objects can be eliminated by applying the following selection criteria:
* HII regions: These can be easily identified due to their $`H\alpha `$ emission.
* Foreground stars: Because our galaxies are located at rather high galactic latitudes, practically all foreground stars are redder than $`B-V\approx 0.45`$, whereas young massive star clusters will be bluer than this limit. Hence, by applying a $`B-V`$ limit of 0.45 we sort away the foreground stars while retaining the young massive cluster candidates. Remaining foreground stars could in many cases be distinguished by their position in two-colour diagrams, by their lack of angular extent, and by being positioned outside the galaxies.
* Individual luminous stars in the galaxies: We apply a brightness limit of $`M_V=-8.5`$ for cluster candidates with $`U-B>-0.4`$ and $`M_V=-9.5`$ for candidates with $`U-B<-0.4`$. The bluer objects are often found inside or near star forming regions, but the magnitude limit of $`M_V=-9.5`$ should prevent confusion with even very massive stars.
In addition to these selection criteria it was found very useful to generate colour-composite images using the I, U and H$`\alpha `$ exposures and identify all the cluster candidates visually on these images. For the “red” channel we used the H$`\alpha `$ exposures, for the “green” channel we used the I-band frames, and for the “blue” channel the U-band frames. In images constructed like this, YMCs stand out very clearly as compact blue objects, in contrast to HII regions which are distinctly red, and foreground stars and background galaxies which appear green.
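In code, the photometric part of these cuts reduces to a simple filter. The sketch below is purely illustrative: the field names are invented, and the H$`\alpha `$ criterion is represented by a precomputed boolean flag.

```python
def is_ymc_candidate(obj, dist_mod):
    """Apply the photometric selection criteria to one detected object.
    obj: dict with apparent V, B-V, U-B colours and an H-alpha flag."""
    if obj["halpha_emitter"]:              # compact HII regions
        return False
    if obj["B-V"] >= 0.45:                 # foreground stars
        return False
    M_V = obj["V"] - dist_mod              # absolute visual magnitude
    limit = -9.5 if obj["U-B"] < -0.4 else -8.5
    return M_V < limit                     # reject single luminous stars

# Example: V=18.1, B-V=0.21, U-B=-0.62 in a galaxy with m-M = 28.2
cand = {"V": 18.1, "B-V": 0.21, "U-B": -0.62, "halpha_emitter": False}
print(is_ymc_candidate(cand, 28.2))        # True (M_V = -10.1)
```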
Following the procedure outlined above, we ended up with a list of star cluster candidates in each galaxy. The cluster nature of the detected objects was further verified by examining their positions in two-colour diagrams (U-B,B-V and U-V,V-I), and by comparing with model predictions for the colours of star clusters and individual stars. In addition, we have been able to obtain spectra of a few of the brightest star cluster candidates. These will be discussed in a subsequent paper.
The cluster samples may suffer from incompleteness effects. In particular, we have deliberately excluded the youngest clusters which are still embedded in giant HII regions (corresponding to an age of less than about $`10^7`$ years). Clusters which have intrinsic $`B-V<0.45`$ will also slip out of the sample if their actual observed $`B-V`$ index is larger than 0.45 due to reddening internally in the host galaxy.
### 3.1 Counting clusters
The specific frequency for old globular cluster systems has traditionally been defined as (Harris & van den Bergh 1981):
$$S_N=N_{\text{GC}}\times 10^{0.4\times (M_V+15)}$$
(1)
where $`N_{\text{GC}}`$ is the total number of globular clusters belonging to a galaxy of absolute visual magnitude $`M_V`$. Such a definition is a reasonable way to characterise old globular cluster systems because $`N_{\text{GC}}`$ is a well-defined quantity, which can be estimated with good accuracy due to the gaussian-like luminosity function (LF) even if the faintest clusters are not directly observable. In the case of young clusters it is more complicated to define a useful measure of the richness of the cluster systems, because the LF is no longer gaussian and the number of young clusters that one finds in a galaxy depends critically on the magnitude limit applied in the survey. Nevertheless, we have defined a quantity equivalent to $`S_N`$ for the young cluster systems:
$$T_N=N_{\text{YMC}}\times 10^{0.4\times (M_B+15)}$$
(2)
$`N_{\text{YMC}}`$ is the number of clusters $`N_B+N_R`$ satisfying the criteria described in Sec. 3. We have chosen to normalise $`T_N`$ to the $`B`$-band luminosity of the host galaxy because it can be looked up directly in the RC3 catalogue.
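Computed directly, with the host galaxy's absolute blue magnitude as input (a trivial sketch; the example numbers are invented):

```python
def specific_frequency(n_ymc, M_B):
    """Young-cluster specific frequency T_N, equation (2)."""
    return n_ymc * 10.0**(0.4 * (M_B + 15.0))

# Example: 50 clusters in a galaxy with M_B = -20 give T_N = 0.5
print(specific_frequency(50, -20.0))
```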
## 4 Results
### 4.1 Specific frequencies
In Table 2 we give the number of clusters identified in each of the observed galaxies. The columns labeled $`N_B`$ and $`N_R`$ refer to the number of “blue” and “red” clusters respectively, according to the definition that “blue” clusters are clusters with $`U-B<-0.4`$ (and hence $`M_V<-9.5`$) whereas the “red” clusters have $`U-B\ge -0.4`$ (and $`M_V<-8.5`$). See also Sect. 3. The data for the LMC are from Bica et al. (1996) and those for M33 are from Christian & Schommer (1988).
The “specific frequencies” $`T_N`$ for the galaxies in our sample are given in the fourth column of Table 2. The number $`N_{\text{YMC}}=N_B+N_R`$ used to derive $`T_N`$ is the total number of clusters, “red” and “blue”, detected in each galaxy.
The errors on $`T_N`$ were estimated taking into consideration only the uncertainties of the absolute magnitudes of the host galaxies resulting from the distance errors as given in Table 1 and Poisson statistics of the cluster counts. However, it is clear that this is not a realistic estimate of the total uncertainties of the $`T_N`$ values. Another source of uncertainty arises from incompleteness effects, particularly for the more distant galaxies. For all galaxies with more than 20 clusters we estimated the incompleteness by adding artificial clusters with magnitudes of 18.0, 18.5, …, 21.0, and testing how many of the artificially added clusters were detected by DAOFIND in all of the filters $`U`$, $`B`$ and $`V`$. Because the completeness depends critically on the size of the objects, we carried out completeness tests for artificial clusters with $`R_e=0`$ pc and $`R_e=20`$ pc in each galaxy. The numbers of clusters actually detected in each of the magnitude bins \[18.25 - 18.75\], \[18.75 - 19.25\], …, \[20.75 - 21.25\] were then corrected by the fraction of artificial clusters recovered in the corresponding bin, and finally the “corrected” $`T_N`$ values were derived. These are given in the last column of Table 2, labeled $`T_{N,C}`$, for point sources (first line) and objects with $`R_e=20`$ pc (second line). See Larsen (1999) for more details on the completeness corrections.
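Schematically, the bin-wise correction amounts to dividing the observed counts by the recovery fractions; the counts and fractions below are placeholders, the real values coming from the artificial-cluster tests just described:

```python
import numpy as np

# Observed clusters per 0.5 mag bin, [18.25-18.75] ... [20.75-21.25].
n_obs   = np.array([4, 6, 9, 11, 8, 5])
# Fraction of artificial clusters recovered per bin (placeholder values).
f_recov = np.array([1.00, 0.95, 0.85, 0.70, 0.50, 0.30])

n_corr = n_obs / f_recov                   # completeness-corrected counts
N_ymc  = n_corr.sum()

M_B  = -19.0                               # host galaxy magnitude (example)
T_NC = N_ymc * 10 ** (0.4 * (M_B + 15.0))  # "corrected" specific frequency
```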
One additional source of error affecting $`T_N`$ which remains uncorrected is the fact that an uncertainty in the distance also affects the magnitude limit for the detection of star clusters. If a galaxy is more distant (or nearer) than the value we have adopted, our limit corresponds to a “too bright” (too faint) absolute magnitude, and we have underestimated (overestimated) the number of clusters. Hence, the true $`T_N`$ errors are somewhat larger than those given in Table 2, but they depend on the cluster luminosity function. If the clusters follow a luminosity function of the form $`\varphi (L)\,dL\propto L^{-1.78}\,dL`$ (Whitmore & Schweizer 1995) then a difference in the magnitude limit of $`\mathrm{\Delta }M_V=0.1`$ would lead to a difference in the cluster counts of about 7%.
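The 7% figure follows directly from the power-law form of the LF: the number of clusters brighter than a limit $`L_{\mathrm{min}}`$ scales as

$$N(>L_{\mathrm{min}})\propto \int_{L_{\mathrm{min}}}^{\infty }L^{-1.78}\,dL\propto L_{\mathrm{min}}^{-0.78},$$

so a shift of the limit by $`\mathrm{\Delta }M_V=0.1`$, i.e. $`\mathrm{\Delta }\mathrm{log}L_{\mathrm{min}}=0.04`$, changes the counts by a factor of $`10^{0.78\times 0.04}\approx 1.07`$.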
A histogram of the uncorrected $`T_N`$ values (Fig. 3) shows that a wide range of $`T_N`$ values is present within our sample. Many of the galaxies in the lowest bins contain only a few massive clusters or none at all, but a few galaxies have much higher $`T_N`$ values than the average. The most extreme $`T_N`$ values are found in NGC 1156, NGC 1313, NGC 3621, NGC 5204 and NGC 5236. The galaxy NGC 2997 also hosts a very rich cluster system, but its $`T_N`$ value is probably severely underestimated because the large distance of NGC 2997 introduces significant incompleteness problems. A similar remark applies to two other distant galaxies observed at the Danish 1.54-m telescope, NGC 1493 and NGC 7424, while all the remaining galaxies in Table 1 are either nearer, or have been observed at the NOT in better seeing conditions, and hence their $`T_N`$ values are believed to be more realistic.
### 4.2 Two-colour diagrams
In Fig. 4 we show the $`(B-V,U-B)`$ diagrams for six cluster-rich galaxies. These plots also include the so-called “S” sequence defined by Girardi et al. (1995, see also Elson & Fall 1985), represented as a dashed line. The “S” sequence is essentially an age sequence, derived as a fit to the average colours of bright LMC clusters in the $`(U-B,B-V)`$ diagram. The age increases as one moves along the S-sequence from blue to red colours. The colours of our cluster candidates are very much compatible with those of the S-sequence, especially if one considers that there is considerable scatter around the S-sequence also for Magellanic Cloud clusters (Girardi et al. 1995). Also included in the diagrams are stellar models by Bertelli et al. (1994) (dots), in order to demonstrate that the position of clusters within such a diagram is distinctly different from that of single stars. Already from Fig. 4 one can see that there is a considerable age spread among the clusters in each galaxy, with the red cut-off being due to our selection criteria. The reddening vector corresponding to a reddening of $`E(B-V)=0.30`$ is shown in each plot as an arrow, and it is quite clear that the spread along the S-sequence cannot be entirely due to reddening effects.
Bresolin et al. (1996) used HST data to carry out photometry for star clusters in the giant Sc-type spiral M101. A comparison between their data and photometry for clusters in one of our galaxies (NGC 1313) is shown in Fig. 5. It is evident that the colours of clusters in the two galaxies are very similar. NGC 1313 was chosen as an illustrative example because it contains a relatively rich cluster system, although not so rich that the diagram becomes too crowded.
In Fig. 5 we have also included a curve showing the colours of star clusters according to the population synthesis models of Bruzual & Charlot (1993, hereafter BC93). The agreement between the synthetic and observed colours is very good for $`U-B>-0.3`$, but for $`U-B<-0.3`$ the $`B-V`$ colours of the BC93 models are systematically too blue compared to our data and the S-sequence. The “red loop” that extends out to $`B-V\sim 0.3`$ and $`U-B\sim -0.5`$ is due to the appearance of red supergiants at an age of about $`10^7`$ years (Girardi & Bica 1993) and is strongly metallicity dependent. Girardi et al. (1995) constructed population synthesis models based on a set of isochrones by Bertelli et al. (1994) and found very good agreement between the S-sequence and their synthetic colours. These models (solar metallicity) are included in Fig. 5 as a solid line. In these models the “red loop” is not as pronounced as in the BC93 models, and the youngest models are in general not as blue as those of BC93, resulting in a much better fit to the observed cluster colours.
### 4.3 Ages and masses
A direct determination of the mass of an unresolved star cluster requires a knowledge of the M/L ratio, which in turn depends on many other quantities, in particular the age and the IMF of the cluster. However, if one assumes that the IMF does not vary too much from one star cluster to another, then the luminosities alone should facilitate a comparison of star clusters with similar ages.
Applying the S-sequence age calibration to the clusters in our sample, we can directly compare the luminosity of each cluster to those of Milky Way clusters of similar age, as shown in Fig. 6. Ages and absolute visual magnitudes for Milky Way open clusters are from the Lyngå (1982) catalogue, and are represented in each plot as small crosses. In the diagrams in Fig. 6 we have also indicated the effect of a reddening of E(B-V) = 0.30. In these plots the “reddening vector” depends in principle on the original position of the cluster within the (U-B,B-V) diagram from which the age was derived, but we have included two typical reddening vectors, corresponding to two different ages.
In all of the galaxies in Fig. 6 except NGC 2403, the absolute visual magnitudes of the brightest clusters are 2 - 3 magnitudes brighter than the upper limit of Milky Way open clusters of similar ages. Accordingly, they should be nearly 10 times more massive. In the case of NGC 2403, the most massive clusters are not significantly more massive than open clusters found in the Milky Way. Fig. 6 also confirms the suspicion that the cluster data in NGC 2997 are incomplete, particularly for $`M_V>-10`$.
We have included population synthesis models for the luminosity evolution of single-burst stellar populations of solar metallicity by BC93 in Fig. 6, scaled to a total mass of $`10^5M_{\odot}`$. Models for three different IMFs are plotted: Salpeter (1955), Miller-Scalo (1979) and Scalo (1986), all covering a mass range from 0.1 - 65 $`M_{\odot}`$. The different assumptions about the shape of the IMF obviously affect the evolution of the $`M_V`$ magnitude per unit mass quite strongly, and unfortunately the effect is most severe just in the age interval we are interested in. The difference between the Miller-Scalo and the Scalo IMF amounts to almost 2 magnitudes, but in any case the most massive clusters appear to have masses around $`10^5M_{\odot}`$.
In Fig. 6 we have also indicated the location of a “typical” old globular cluster system with an error bar centered on the coordinates 15 Gyr, $`M_V=-7.4`$ and with $`\sigma _V=1.2`$ mags. Although the comparison of masses at high and low age based on population synthesis models is extremely sensitive to the exact shape of the IMF, it seems that the masses of the young massive star clusters are at least within the range of “true” globular clusters.
Reddening effects alone are unlikely to affect the derived ages significantly, as a scatter along the “reddening vectors” in Fig. 6 would then be expected. In essence, one would then expect a much steeper rate of decrease in $`M_V`$ vs. the derived age, while the observed relation between age and the upper luminosity limit is in fact remarkably compatible with that predicted by the models. The comparison with model calculations implies that the upper mass limit for clusters must have remained relatively unchanged over the entire period during which clusters have been formed in each galaxy.
## 5 Notes on individual galaxies
### 5.1 NGC 1156
This is a Magellanic-type irregular galaxy, currently undergoing an episode of intense star formation. Ho et al. (1995) noted that the spectrum of NGC 1156 resembles that of the “W-R galaxy” NGC 4214. NGC 1156 is a completely isolated galaxy, so the starburst cannot have been triggered by interaction with other galaxies. We have found a number of massive star clusters in NGC 1156.
### 5.2 NGC 1313
This is an SB(s)d galaxy of absolute $`B`$ magnitude $`M_B=-18.9`$. de Vaucouleurs (1963) found a distance modulus of $`m-M=28.2`$, which we adopt. The morphology of NGC 1313 is peculiar in the sense that many detached sections of star formation are found, particularly in the south-western part of the galaxy. There is also a “loop” extending about 1.5 kpc (projected) to the east of the bar with a number of HII regions and massive star clusters. Another interesting feature is that one can see an extended, elongated diffuse envelope of optical light, with the major axis rotated $`45^{\circ}`$ relative to the central bar of NGC 1313, embedding the whole galaxy. It has been suggested by Ryder et al. (1995) that the diffuse envelope surrounding NGC 1313 is associated with galactic cirrus known to exist in this part of the sky (Wang & Yu 1995), but this explanation does not seem likely since it would require a very precise alignment of the centre of NGC 1313 with the diffuse light. Also, the outer boundary of the active star-forming parts of the galaxy coincides quite well with the borders of the more luminous parts of the envelope. In our opinion the most likely explanation is that the diffuse envelope is indeed physically associated with NGC 1313 itself.
Walsh & Roy (1997) determined O/H abundances for 33 HII regions in NGC 1313, and found no radial gradient. This makes NGC 1313 the most massive known barred spiral without any radial abundance gradient.
NGC 1313 hosts a rich population of massive star clusters. When looking at the plot in Fig. 6 it seems that there is a concentration of clusters at $`\mathrm{log}(\mathrm{Age})\sim 8.3`$, or roughly 200 Myr. We emphasize that this should be confirmed by a more thorough study of the cluster population in this galaxy; in particular it would be very useful to detect fainter clusters in order to improve the statistics. If the concentration is real it could imply that some kind of event stimulated the formation of massive star clusters in NGC 1313 a few hundred Myr ago, perhaps the accretion of a companion galaxy. A second “burst” of cluster formation seems to have taken place very recently, and may be going on even today.
### 5.3 NGC 2403
NGC 2403 is a nearby spiral, morphologically very similar to M33 apart from the fact that NGC 2403 lacks a distinct nucleus. It is a textbook example of an Sc-type spiral, and it is very well resolved on our NOT images. A photographic survey of star clusters in NGC 2403 was already carried out by Battistini et al. (1984), who succeeded in finding a few YMC candidates. NGC 2403 spans more than 20 $`\times `$ 20 arcminutes on the sky, so we have been able to cover only the central parts using the ALFOSC. Within the central $`6.5^{\prime}\times 6.5^{\prime}`$ (about 6 $`\times `$ 6 kpc) we have located 14 clusters altogether, but the real number of clusters in NGC 2403 should be significantly higher, taking into account the large fraction of the galaxy that we have not covered, and considering the fact that in the other galaxies we have studied, many clusters are located at considerable distances from the centre.
### 5.4 NGC 2997
NGC 2997 is an example of a “hot spot” galaxy (Meaburn et al. 1982) with a number of UV-luminous knots near the centre. Walsh et al. (1986) studied the knots and concluded that they are in fact very massive star clusters, and Maoz et al. (1996) further investigated the central region of NGC 2997 using the HST. On an image taken with the repaired HST through the F606W filter they identified 155 compact sources, all with diameters of a few pc. Of 24 clusters detected in the F606W filter as well as in an earlier F220W image, all have colours implying ages less than 100 Myr and masses $`\sim 10^4M_{\odot}`$. Maoz et al. (1996) conclude that the clusters in the centre of NGC 2997 will eventually evolve into objects resembling globular clusters as we know them in the Milky Way today.
In our study we have found a number of massive star clusters also outside the centre of NGC 2997. Taking the numbers at face value, the cluster system does not appear to be as rich as that of NGC 5236, but with better and more complete data we would expect the number of YMCs in NGC 2997 to rival that in NGC 5236.
### 5.5 NGC 3621
This galaxy is at first sight a quite ordinary late-type spiral, and has not received much attention. It was observed with the HST by Rawson et al. (1997) as part of the Extragalactic Distance Scale Key Project, and Cepheids were discovered and used to derive a distance modulus of 29.1.
Our data show that NGC 3621 contains a surprisingly high number of massive star clusters. The galaxy is rather inclined ($`i=51^{\circ}`$, Rawson et al. 1997), and nearly all the clusters are seen projected on the near side of the galaxy, so a number of clusters on the far side may be hidden from our view. Ryder & Dopita (1993) noted a lack of HII regions on the far side of the galaxy, and pointed out that there is also a quite prominent spiral arm on the near side that does not appear to have a counterpart on the far side. So it remains possible that the excess of young clusters and HII regions on the near side is real.
### 5.6 NGC 5204
NGC 5204 is a companion to the giant Sc spiral M101. The structure of the HI in this galaxy is that of a strongly warped disk (Sicotte & Carignan 1997), and one could speculate that this is related to tidal interaction effects with M101. Sicotte & Carignan (1997) also find that the dark matter halo of NGC 5204 contributes significantly to the mass even in the inner parts.
The high $`T_N`$ value of this galaxy is a consequence of its very low $`M_B`$ rather than a high absolute number of clusters - we found only 7 clusters in this galaxy. Curiously, all of the 7 clusters belong to the “red” class, suggesting that no new clusters are being formed in NGC 5204 at the moment.
### 5.7 NGC 5236
NGC 5236 (M83) is a grand-design barred spiral of type SBc, striking for its regularity and its very high surface brightness - the highest among the galaxies in our sample. The absolute visual magnitude is $`M_V=-20.0`$ (de Vaucouleurs et al. 1983). NGC 5236 is currently undergoing a burst of star formation in the nucleus as well as in the spiral arms.
A study in the rocket UV (Bohlin et al. 1990) has already revealed the presence of a number of very young massive star clusters inside the HII regions of NGC 5236, and HST observations of the nucleus (Heap et al. 1993) showed an arc of numerous OB clusters near the centre of the galaxy. These clusters were found to have absolute visual magnitudes in the range from $`M_V=-10.4`$ to $`M_V=-13.4`$, and typical radii of the order of 4 pc. Masses were estimated to be between $`10^4`$ and $`10^5`$ $`M_{\odot}`$.
Our investigation adds a large number of massive star clusters in NGC 5236 outside the centre and the HII regions as well. In terms of absolute numbers the cluster system of NGC 5236 is by far the richest in our sample, and in particular there is a large number of clusters in the “red” group. This may be partly due to reddening effects, although Fig. 6 shows that there is in fact a large intrinsic age spread among the clusters in NGC 5236.
### 5.8 NGC 6946
The study of NGC 6946 is complicated by the fact that it is located at low galactic latitude ($`b=12^{\circ}`$), with an interstellar absorption of $`A_B=1.6`$ magnitudes and a large number of field stars towards this galaxy. NGC 6946 is nevertheless a well-studied galaxy, and we chose to include it in our sample, reasoning that star clusters should be recognizable as extended objects on the NOT data.
The chemical abundances of HII regions in NGC 6946 were studied by Ferguson et al. (1998), who concluded that their data were consistent with a single log-linear dependence on the radius. At 1.5-2 optical radii (defined by the B-band 25th magnitude isophote) they measured abundances of O/H of about 10%-15% of the solar value, and N/O of about 20%-25% of the solar value.
Among the approximately 100 clusters we have identified in NGC 6946, one stands out as particularly striking (Fig. 7). This cluster is apparently a very young object, located in one of the spiral arms at a distance of 4.4 kpc from the centre, and with an impressive visual luminosity of $`M_V=-13`$. Using a deconvolution-like algorithm (Larsen 1999), the effective radius was estimated to be about 15 pc. The cluster is located within a bubble-like structure with a diameter of about 550 pc, containing numerous bright stars and perhaps some less massive clusters. On optical images this structure is very conspicuous, but it is not visible on the mid-IR ISOCAM maps by Malhotra et al. (1996). There are no traces of $`H\alpha `$ emission either, except for a small patch at the very centre of the structure.
### 5.9 LMC and M33
For these galaxies, we have adopted data from the literature.
As mentioned in the introduction, both the LMC and M33 contain young star clusters that are more massive than the open clusters seen in the Milky Way. However, as is evident from Table 2, only one cluster in M33 is a YMC according to our criteria. The LMC, on the other hand, contains a relatively rich cluster population, with 7 clusters in the “red” group and 1 cluster in the “blue” group. The cluster R136 in the 30 Doradus nebula of the LMC has not been included in the data for Table 2 because of its location within a giant HII region. Compared to the other galaxies in our sample, the LMC ranks among the relatively cluster-rich ones, but it is also clear that a cluster population like that of the LMC is by no means unusual.
Because the LMC is so nearby, the limiting magnitude for detection of clusters is obviously much fainter than in the other galaxies in our sample, and the Bica et al. (1996) catalogue should certainly be complete down to our limit of $`M_V=-8.5`$, corresponding to $`V=10.25`$ (taking into account an absorption of about 0.25 mag towards the LMC). If the LMC were located at the distance of most of the galaxies in our sample we would probably not have detected 8 clusters, but a somewhat smaller number, and the $`T_N`$ value would have been correspondingly lower. This should be kept in mind when comparing the data for the LMC with data for the rest of the galaxies in the sample.
## 6 Radial density profiles of cluster systems
In an attempt to investigate how cluster formation correlates with the general characteristics of galaxies, we have compared the surface densities of YMCs (number of clusters per unit area) as a function of galactocentric radius with the surface brightness in $`U`$, $`V`$, $`I`$ and H$`\alpha `$. Obviously, such a comparison only makes sense for relatively rich cluster systems; it is shown in Fig. 8 for four of the most cluster-rich galaxies in our sample. We did not include data for the apparently quite cluster-rich galaxy NGC 6946 in Fig. 8 because of the numerous Galactic foreground stars in the field of this galaxy, which make the cluster identifications less certain.
The surface brightnesses were measured directly on our CCD images using the phot task in DAOPHOT. In the case of $`H\alpha `$ we used continuum-subtracted images, obtained by scaling an R-band frame so that the flux for stellar sources was the same in the R-band and $`H\alpha `$ images, and subtracting the scaled R-band image from the $`H\alpha `$ image. The flux was measured through a number of apertures with radii of 50, 100, 150, … pixels, centered on the galaxies, and the background was measured in an annulus with an inner radius of 850 pixels and a width of 100 pixels. The flux through the i-th annular ring was then calculated as the flux through the i-th aperture minus the flux through the (i-1)-th aperture, and the surface brightness was finally derived by dividing by the area of the i-th annular ring. No attempt was made to calibrate the surface brightnesses to a standard system, so the $`y`$-axis units in Fig. 8 are arbitrary. The cluster “surface densities” were obtained by normalising the number of clusters within each annular ring to the area of the respective rings. Finally, all profiles were normalised to the V-band surface brightness profile.
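In code, the ring bookkeeping reduces to differencing cumulative aperture fluxes; a minimal sketch (with the aperture photometry itself abstracted into an input array, and all names our own) could read:

```python
import numpy as np

radii = np.arange(50.0, 851.0, 50.0)   # aperture radii in pixels

def ring_surface_brightness(aper_flux, radii):
    """aper_flux[i]: background-subtracted flux within radii[i]."""
    ring_flux = np.diff(aper_flux)                 # flux in each ring
    ring_area = np.pi * np.diff(radii ** 2.0)      # area of each ring
    return ring_flux / ring_area                   # arbitrary units

def cluster_surface_density(r_clusters, radii):
    """r_clusters: galactocentric radii of the detected clusters (pixels)."""
    counts, _ = np.histogram(r_clusters, bins=radii)
    return counts / (np.pi * np.diff(radii ** 2.0))

# Both profiles would finally be normalised to the V-band profile.
```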
For all the galaxies in Fig. 8 the similarity between the surface brightness profiles and the cluster surface densities is quite striking. In the cases of NGC 2997 and NGC 5236, where the $`H\alpha `$ profiles are markedly different from the broad-band profiles, the cluster surface densities seem to follow the $`H\alpha `$ profiles rather than the broad-band profiles. Accordingly the presence of massive clusters must be closely linked with the process of star formation in general in those galaxies where YMCs are present. In order to get a complete picture one should include the clusters in the central starbursts of NGC 2997 and NGC 5236, but this would, in any case, affect the conclusions only for the innermost bin.
## 7 Discussion
Perhaps the most striking fact about the cluster-rich galaxies in our sample is that they do not appear to have many other properties in common. Fig. 9 shows the specific frequency $`T_N`$ as a function of the “T”-type (Table 1), and does not support the suggestion by Kennicutt & Chu (1988) that the presence of YMCs in galaxies increases along the Hubble sequence. Instead, a wide range of $`T_N`$ values is seen independently of Hubble type, so even if YMCs might be absent in galaxies of even earlier types than we have studied here, the phenomenon cannot be entirely related to morphology.
However, what characterises all these cluster systems is that they do not seem to have been formed during one intense burst of star formation. Instead, their age distributions as inferred from the “S” sequence are quite smooth (possibly with the exception of NGC 1313), so in contrast to starburst galaxies like the Antennae or M82, the rather “normal” galaxies in our sample have been able to maintain a “production” of clusters over a longer timescale, at least several hundred Myr, in a more quiescent mode than that of the starburst galaxies. The most luminous clusters we have found have absolute visual magnitudes of about $`M_V=-12`$, about three magnitudes brighter than the brightest open clusters in the Milky Way, but still somewhat fainter than the $`M_V=-13`$ to $`M_V=-15`$ clusters in the Antennae and certain starburst galaxies.
One notable exception is NGC 6946, which is forming such a “super star cluster” just before our eyes. That cluster is located far away from the centre of the galaxy, something which is not unusual at all. Also in NGC 1313 the most massive cluster is located far from the centre of the host galaxy, at a projected galactocentric distance of about 3.7 kpc, and in the Milky Way a number of high-mass (old) open clusters are found in the anticentre direction, e.g. M67. It cannot, of course, be excluded that a massive cluster like the one in NGC 6946 could be located in a region of the Galactic disk hidden from our view, but in any case the Milky Way does not seem to contain any large number of young massive clusters as seen e.g. in NGC 5236 or NGC 1313.
In general we find, however, that the distribution of YMCs follows the H$`\alpha `$ surface brightness profile, at least for those galaxies where the statistics allow such a comparison. Taking H$`\alpha `$ as an indicator of star formation, it then appears that in certain galaxies the formation of YMCs occurs whenever stars are formed. This raises the question whether the presence of massive cluster formation is correlated with global star formation indicators, such as $`H\alpha `$ luminosity or other parameters. These questions will be addressed in more detail in a subsequent paper (Larsen et al. 1999).
Two of the galaxies in our sample, NGC 5236 and NGC 2997, have many properties in common. Both galaxies are grand-design, high surface-brightness spirals, although NGC 2997 lacks the impressive bar of NGC 5236, and both were known to contain massive star clusters near their centres also before this study. We have identified rich cluster systems throughout the disks of these two galaxies.
In our opinion it is becoming increasingly clear that a whole continuum of cluster properties (age, mass, size) must exist; one just has to look in the right places. For some reason the Milky Way and many other galaxies were only able to form very massive, compact star clusters during the early phases of their evolution; these clusters are today seen as globular clusters in the halos of these galaxies. Other galaxies such as the Magellanic Clouds, M33 and NGC 2403 are able to form substantially larger numbers of massive clusters than the Milky Way even today, and in our sample of galaxies we have at least 5 galaxies that are able to form clusters whose masses reach well into the interval defined by the globular clusters of the Milky Way. Still more massive clusters are being formed today in genuine starburst and merger galaxies such as the Antennae, NGC 7252, M82 and others, and it seems that the masses of these clusters can easily compete with those of “high-end” globular clusters in the Milky Way. Whether YMCs will survive long enough to one day be regarded as “true” globular clusters is still a somewhat controversial question, whose definitive answer requires a detailed knowledge of the internal structure of the individual clusters and a better theoretical understanding of the dynamical evolution of star clusters in general.
One could also ask if the LF of star clusters really has an upper cut-off that varies from galaxy to galaxy, or if the presence of massive clusters is merely a statistical effect that follows from a generally rich cluster system. In order to investigate this question it is necessary to obtain data with a sufficiently high resolution that the search for star clusters can be extended to much fainter magnitudes than we have been able to do in our study.
## 8 Conclusions
The data presented in this paper demonstrate that massive star clusters are formed not only in starburst galaxies, but also in rather normal galaxies. None of the galaxies in our sample show obvious signs of having been involved in interaction processes, yet we find that there is a large variation in the specific frequency $`T_N`$ of massive clusters from one galaxy to another. Some of the galaxies in our sample (notably NGC 1313 and NGC 5236) have considerably higher $`T_N`$ than the LMC, while other galaxies which at first glance could seem in many ways morphologically similar to the LMC (e.g. NGC 300 and NGC 4395) turn out to contain no rich cluster systems. In general there is no correlation between the morphological type of the galaxies in our sample and their $`T_N`$ values. Whether a galaxy contains massive star clusters or not is therefore not only a question of its morphology (as suggested by Kennicutt & Chu 1988), so one has to search for correlations between other parameters and the $`T_N`$ values. Within each of the galaxies that contain populations of YMCs, the number of clusters as a function of radius follows the H$`\alpha `$ surface brightness more closely than the broad-band surface brightness, which implies that the formation of massive clusters in a given galaxy is closely linked to star formation in general.
###### Acknowledgements.
This research was supported by the Danish Natural Science Research Council through its Centre for Ground-Based Observational Astronomy. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We are grateful to J.V. Clausen for having read several versions of this manuscript, and we thank the DFG Graduiertenkolleg “Das Magellansche System und andere Zwerggalaxien” for covering travel costs for S.S. Larsen.
# Combinatorial Optimization by Iterative Partial Transcription
## I Introduction
Combinatorial optimization problems occur in many fields of physics, engineering and economics. They are closely related to statistical physics, see e.g. and references therein. Many of the combinatorial optimization problems are difficult to solve since they are NP-hard, i.e., there is no algorithm known which finds the exact solution with an effort proportional to any power of the problem size. One of the most popular such tasks is the traveling salesman problem (TSP): how to find the shortest roundtrip through a given set of cities. For recent surveys on various approaches to the TSP see .
Many combinatorial optimization problems are of considerable practical importance. Thus, algorithms are needed which yield good approximations of the exact solution within a reasonable computing time, and which require only a modest effort in programming. Various deterministic and probabilistic approaches, so-called search heuristics, have been proposed to construct such approximation algorithms. A considerable part of them borrows ideas from physics and biology.
The conceptually simplest approximation algorithms are local search procedures. They are best understood by interpreting the approximate solutions as discrete points (states) in a high-dimensional hilly landscape, and the quantity to be optimized as the corresponding potential energy. These algorithms proceed iteratively, improving the solution by small modifications (moves) step by step: the neighborhood of the current state, defined by the set of permitted modifications of the solution (move class), is searched for states of lower energy. If such a state is found, it is substituted for the current state, and a new search is started. Otherwise, the process stops because a local minimum has been reached.
Usually, the chances of finding the global minimum in this way – or, in the case of multimodality (degeneracy), one of the global minima – vanish exponentially as the problem size rises. They can be increased by taking moves of higher complexity into account. Physically speaking, by means of the local search we create states which are only metastable; the degree of metastability is defined by the move class considered. Thus, according to increasing complexity of the moves, one can define hierarchies of classes of metastable states. Considering more complex moves corresponds to waiting for longer relaxation times, so that, on average, one ends up in lower local minima.
The local search concept is simple. However, in sophisticated algorithms, the moves considered can be fairly complicated, i.e., they may concern a rather large number of degrees of freedom as in the Lin-Kernighan algorithm for the TSP . The art of developing such algorithms is to select from the set of all possible modifications, related to a given number of degrees of freedom, an appropriate small part to be included into the move class.
In order to overcome barriers between local minima, simulated annealing (SA) assumes the “sample” (current approximate solution) to be in contact with a heat bath with a time-dependent temperature. Thus, moves increasing the energy are also taken into account, where the acceptance probability decreases exponentially with increasing energy change. Slow cooling permits the “sample” to reach a particularly deep local minimum.
Several proposals have been made to improve this basic concept, in particular to optimize the temperature schedule of the annealing process, see e.g. and references therein, or to adapt SA to parallel computer architectures. Moreover, substituting the random decision of accepting energy increasing moves by a deterministic decision according to whether or not the energy change exceeds a certain upper bound, one gets threshold accepting, a closely related concept. Finally, replacing the slow cooling of SA by thermal cycling, i.e., by cyclically heating and quenching with decreasing amplitude, can considerably improve the performance; for an early approach based on cyclically heating (with the temperature chosen at random) and rapid cooling see .
Genetic algorithms offer another possibility to escape from local minima. They simulate a biological evolution process by operating on a population of individuals (approximate solutions), where new generations are produced through the repeated application of genetic operators such as selection, crossover and mutation. Particularly effective seem to be algorithms in which the individuals are local minima, see and references therein. However, a guarantee to find the global optimum within any finite computing time cannot be given by this approach, nor by any other of the heuristic methods mentioned, though, for infinite computing time, the convergence of SA (with a logarithmic temperature schedule) and of a broad class of evolutionary algorithms has been proved .
At the same time, exact solution methods have been developed further. They are mainly based on branch-and-bound and branch-and-cut ideas . Thus, a specific TSP instance including 7397 cities was solved . However, for a fixed size, the effort necessary to find the exact solution can vary enormously from problem to problem. For example, the TSPLIB95 includes an instance of 1577 cities which could only very recently be solved by Applegate and co-workers; they needed approximately 280 hours on a DEC Alphastation 4100 5/400 .
In this paper, we present iterative partial transcription (IPT), an approach to improve the performance of heuristic algorithms for combinatorial optimization problems: IPT compares pairs of states, represented by vectors of coordinates. In an iterative procedure, it systematically searches for the subsets of the components of these vectors, the copying of which from one vector to the other yields new approximate solutions with decreased energy. IPT is particularly useful when applied to local minima. We illustrate its efficiency for the TSP, demonstrating that the incorporation of IPT into local search based heuristic algorithms can considerably increase their performance.
The paper is organized as follows: In Section II, we present the IPT procedure in a general manner, as well as applied to the TSP. Section III is devoted to embedding IPT in multi-start local search and in thermal cycling. Section IV reports on the results obtained for several instances of the traveling salesman problem. Finally, Section V summarizes the paper.
## II Iterative partial transcription
### A General formulation
The design of the optimization method proposed here is motivated by the use of local search algorithms common to three highly effective Monte Carlo optimization procedures: the iterated Lin-Kernighan method for the TSP , the thermal-cycling approach , and the genetic-local-search strategy . The efficiency of these approaches in finding states of particularly low energy rests on the consideration of local minima rather than of arbitrary states, and on modifying the local minima by sophisticated operations. These operations typically involve elaborate manipulation steps, and in some cases they make use of other local minima. Compared to SA, a single such modification concerns a rather large number of degrees of freedom. Of course, it demands far more CPU time than a single Metropolis step in SA. The idea of our proposal is to increase the average “gain” of the individual local searches by a fast postprocessing phase, and consequently to reduce the average number of local search steps required to reach a certain energy. This is achieved by making good use of the information inherent in the transformation mapping one to the other local minimum.
In detail, consider two states $`𝐯_1`$ and $`𝐯_2`$ (possible approximate solutions of the given optimization problem), encoded as two vectors, which differ in $`k`$ components. We look for decompositions of the transformation $`𝐌`$ mapping $`𝐯_1`$ to $`𝐯_2`$ into a product of two commuting transformations $`𝐌_\alpha `$ and $`𝐌_\beta `$,
$$𝐯_2=𝐌(𝐯_1)=𝐌_\beta (𝐌_\alpha (𝐯_1))=𝐌_\alpha (𝐌_\beta (𝐯_1))$$
(1)
such that $`𝐌_\alpha (𝐯_1)`$ and $`𝐌_\beta (𝐯_1)`$ are possible approximate solutions of the optimization problem too, and that $`𝐌_\alpha `$ and $`𝐌_\beta `$ modify disjunct sets of $`k_\alpha `$ and $`k_\beta `$ components of $`𝐯_1`$. Thus, $`k_\alpha +k_\beta =k`$, so that both $`𝐌_\alpha `$ and $`𝐌_\beta `$ “transcribe” part of the components of $`𝐯_1`$ by the values of these components in $`𝐯_2`$.
The procedure proposed here is an iterative search for appropriate transformations of this kind. It merges two states $`𝐯_1`$ and $`𝐯_2`$: According to increasing $`k_\alpha `$, where $`1k_\alpha k`$, it systematically searches for pairs of $`𝐌_\alpha `$ and $`𝐌_\beta `$ satisfying Eq. (1). If such a pair is found, it checks whether or not $`𝐌_\alpha `$ improves $`𝐯_1`$. If yes, $`𝐯_1`$ is substituted by $`𝐌_\alpha (𝐯_1)`$, otherwise $`𝐯_2`$ by $`𝐌_\alpha ^1(𝐯_2)=𝐌_\beta (𝐯_1)`$, and then the search is restarted. The iteration stops if $`𝐯_1=𝐯_2`$. This procedure, which we refer to as iterative partial transcription (IPT), has as its output the current $`𝐯_1`$.
Below, we apply IPT to local minima with respect to some move class. However, the IPT output state will in general not be such a local minimum. Therefore, provided the IPT output state differs from both the input states, it is additionally exposed to a local search with respect to this move class. We refer to this combination of IPT and local search as IPTLS.
The proposed IPT procedure decomposes the rather complex transformation of one state to another into several parts, analyzing with respect to which features these states differ. Disregarding the disadvantageous features, it effectively makes use of the favorable ones for a specific improvement. This approach can easily be understood when it is interpreted in physical terms: We consider low-energy states as differing from the ground state by several non-interacting “elementary” excitations, which, however, may involve rather complex modifications. Comparing two low-energy states, we identify the excitations which are present in one of these states, but not in the other, and generate a new low-energy state by relaxation of all the excitations found. In this sense, IPT is a generalization of the basic idea of the approach to finding the ground state of a spin glass proposed by Kawashima and Suzuki . These authors relax excitations formed by clusters of neighboring spins, which they identify by the comparison of different replicas.
There are some links between this method and other heuristic search algorithms: IPT can be considered as a local search in the subspace spanned by the differing components of both states. The related move class is given by the possibilities of simply inheriting a “part” of the other state, which corresponds to a shift to the alternative point in a particular subspace of the configuration space. As the Lin-Kernighan procedure for the TSP , IPT takes rather complex moves into account while diminishing the effort needed for exploring the search space by largely reducing its dimension. Alternatively, in biological terms, IPT can be interpreted as the deterministic transcription of (groups of) genes.
IPT is applicable to several problems. For example, for the TSP, $`𝐌_\alpha `$ would correspond to the transcription of a part of the tour; for a short-range Ising spin glass, $`𝐌_\alpha `$ would describe the flipping of a cluster of neighboring spins, cf. Ref. . However, IPT is clearly not applicable to problems with long-range interaction such as the Coulomb glass (an Ising spin glass with Coulomb interaction).
### B Realization for the TSP
We illustrate IPT by applying it to the traveling salesman problem. The states (possible solutions) are permutations of the $`N`$ given cities. The length of the roundtrip corresponds to the potential energy to be minimized. We use the following notions: tour and subtour denote closed roundtrips through all cities and part of the cities, respectively, whereas chains and subchains stand for tours and subtours with one connection eliminated, respectively. The number of cities in a subchain is referred to as its size. Thus, to identify pairs of transformations $`𝐌_\alpha `$ and $`𝐌_\beta `$ in the sense of the general description of IPT means to search for subchains which include the same cities in a different order, and have the same initial and final cities.
Starting from two tours A and B, IPT proceeds according to the following scheme:
* (1) Formation of a reduced representation: For each city, check whether or not it has the same neighbors in both tours / subtours. If yes, create a new pair of subtours by omitting this city and connecting its neighbors. Let the number of cities in the reduced problem be $`N_\mathrm{r}`$. The “next” cities of $`i`$ in $`A`$, i.e., the cities following the city $`i`$ in tour $`A`$ of the reduced problem, are denoted by $`n_{i,1}^A`$, $`n_{i,2}^A`$, $`n_{i,3}^A`$, and so on; the “previous” cities of $`i`$ are named $`p_{i,1}^A`$, $`p_{i,2}^A`$, $`p_{i,3}^A`$, and so on. The cities of tour $`B`$ are referred to analogously.
* (2) Comparison of subchains of the reduced tours $`A`$ and $`B`$, where their size $`s`$ increases from 4 to $`N_\mathrm{r}/2+1`$: Check for all $`i`$ whether the final cities are the same, that is, whether $`n_{i,s-1}^A=n_{i,s-1}^B`$, or alternatively $`p_{i,s-1}^A=n_{i,s-1}^B`$. Provided one of these conditions is fulfilled, investigate whether or not the corresponding subchains include the same cities. If yes, substitute in the original tours the worse of the corresponding subchains by the better one (in the case of equality, substitute the corresponding subchain in $`B`$), and go to (1).
* (3) Choose the better of the current original tours $`A`$ and $`B`$ to be the IPT output.
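To make the scheme concrete, here is a minimal Python sketch of IPT for the TSP. It is deliberately simplified: it operates on the full tours and omits the reduced representation of step (1) as well as the neighbor bookkeeping, so it is far slower than our FORTRAN77 implementation; all function and variable names are our own illustrative choices.

```python
# Simplified IPT for the TSP; dist is an N x N distance matrix,
# tours are lists of city indices. A sketch, not the production code.
def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def chain_length(chain, dist):
    return sum(dist[a][b] for a, b in zip(chain, chain[1:]))

def subchain(tour, start, size, step):
    n = len(tour)
    return [tour[(start + k * step) % n] for k in range(size)]

def splice(tour, start, size, step, new_chain):
    n = len(tour)
    for k in range(size):
        tour[(start + k * step) % n] = new_chain[k]

def ipt_pass(tour_a, tour_b, dist):
    """One scan over subchain sizes; performs the first transcription found."""
    n = len(tour_a)
    pos_b = {city: i for i, city in enumerate(tour_b)}
    for size in range(4, n // 2 + 2):           # sizes 4 ... N/2 + 1
        for ia in range(n):
            ca = subchain(tour_a, ia, size, +1)
            ib = pos_b[ca[0]]
            for step in (+1, -1):               # B subchain forward/backward
                cb = subchain(tour_b, ib, size, step)
                if cb[-1] == ca[-1] and cb != ca and set(cb) == set(ca):
                    # Same endpoints and city set, different order:
                    # transcribe the better subchain into the worse tour.
                    if chain_length(ca, dist) <= chain_length(cb, dist):
                        splice(tour_b, ib, size, step, ca)
                    else:
                        splice(tour_a, ia, size, +1, cb)
                    return True
    return False

def ipt(tour_a, tour_b, dist):
    """Merge two tours; the result is at least as short as the better input."""
    while tour_a != tour_b and ipt_pass(tour_a, tour_b, dist):
        pass
    return min(tour_a, tour_b, key=lambda t: tour_length(t, dist))
```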
Our IPT algorithm for the TSP has some resemblance to the subroute transcription procedure originally proposed by Brady , later adopted by Yamamura et al. in the “subtour exchange crossover” operator of a genetic TSP algorithm. However, these two methods do not impose the restriction that the two subchains must have the same initial and final cities. This condition is substantial in our approach: it guarantees that each transcription of a subchain diminishes the tour length. Moreover, it largely reduces the number of pairs of subchains to be compared in detail (whether or not they include the same cities), and thus also the CPU time.
## III Main algorithm
The effectiveness of the IPT procedure can only be judged in the context of the main algorithm in which it is embedded. As such, we consider the multi-start-local-search algorithm and the thermal-cycling algorithm. In both cases, IPT acts on local minima only. Thus, we always use it in combination with an additional local search on output states differing from both the input states, that is in the IPTLS version.
### A Multi-start local search
The simplest manner of using a multi-start-local-search algorithm for the solution of an optimization problem is to perform $`K`$ times a local search starting from a random state, and to take the lowest of the resulting states as the final state. This algorithm is primitive, but it has the advantage of having only a single adjustable parameter, $`K`$.
Incorporating IPTLS into this multi-start local search permits the information obtained by the individual trials to be combined more efficiently. For that, the first approximation of the solution is obtained by a local search starting from a random state. Then, for $`j=2`$ to $`K`$, IPTLS is performed between the $`(j-1)`$-th approximation and the state obtained by the $`j`$-th local search starting from a random state. The output state is considered as the $`j`$-th approximation.
The performance of this extended multi-start-local-search approach is likely to improve when “searching in parallel”, cf. . In order to do so, we utilize an archive of $`N_\mathrm{a}`$ states ($`N_\mathrm{a}<K`$), where the state of lowest energy is considered as the current approximation. The archive is initialized by $`N_\mathrm{a}`$ local searches starting from states chosen at random. After this, $`(KN_\mathrm{a})`$ times the following steps are performed: A new state is generated by a local search starting from a random state. Then, a series of IPTLSs is performed between this new state and the archive states. As soon as the resulting state has a shorter tour length than the currently selected archive state, it is substituted for this archive state, and the series of IPTLSs is terminated. Finally, after finishing these $`K`$ local searches extended by IPTLS, we try to improve the archive by applying IPTLS to all pairs of states contained in it.
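A sketch of this archive-based variant follows; it reuses `ipt` and `tour_length` from the sketch in Section II.B, while `local_search` and `random_tour` stand for the relaxation and initialization routines, which are assumed here rather than shown.

```python
# Multi-start local search with IPTLS and an archive (schematic).
def iptls(t1, t2, dist, local_search):
    merged = ipt(list(t1), list(t2), dist)      # work on copies
    if merged != t1 and merged != t2:           # re-relax only new tours
        merged = local_search(merged)
    return merged

def multi_start_iptls(n, dist, K, Na, local_search, random_tour):
    archive = [local_search(random_tour(n)) for _ in range(Na)]
    for _ in range(K - Na):
        cand = local_search(random_tour(n))     # expensive step
        for j in range(Na):                     # series of IPTLSs
            cand = iptls(cand, archive[j], dist, local_search)
            if tour_length(cand, dist) < tour_length(archive[j], dist):
                archive[j] = cand               # first success ends the series
                break
    for i in range(Na):                         # final pairwise polishing
        for j in range(i + 1, Na):
            merged = iptls(archive[i], archive[j], dist, local_search)
            if tour_length(merged, dist) < tour_length(archive[i], dist):
                archive[i] = merged
    return min(archive, key=lambda t: tour_length(t, dist))
```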
The “searching in parallel” approach is promising for three reasons: This method is, in effect, a partition of the computational effort into several search processes in order to minimize the failure risk. More importantly, the low-energy states, created during the expensive local searches starting from random states, are used multiple times by means of the series of IPTLSs. Finally, the local search step following a “successful” IPT has to be performed at most once within each series.
### B Thermal cycling
Thermal cycling has been shown to be far more efficient than multi-start local search. It consists of cyclic heatings and quenchings by Metropolis and local search procedures, respectively, where the amount of energy deposited into the sample during the individual heatings decreases in the course of the optimization process. This algorithm works particularly well when applied to an archive of $`N_\mathrm{a}`$ samples rather than to a single sample.
The embedding of IPTLS in thermal cycling is achieved in the following three ways:
* The multi-start local search creating the initial archive is enhanced by additional IPTLS as described in the previous subsection.
* Each temperature step starts with trying to improve the archive by applying IPTLS to all pairs of archive states, where the output state always replaces the better of the two input states – substituting the worse of the two would cause too early a loss of variety in the archive, cf. .
* After each thermal cycle, a series of IPTLS between the final state and all archive states with energies smaller or equal to that of the initial state is performed. This series is terminated as soon as one of the archive states is improved by the corresponding IPTLS step. In this sense, each thermal cycle does not act on its initial state only, but on (a part of) the whole archive.
Moreover, the inclusion of IPTLS between the final state of each cycle and the archive states suggests a change in the heating process. In Ref. , a constant number of modifications is performed for heating, independent of the problem size. Now, this number is chosen to be proportional to the problem size. The reason for this change is the following. For very large problems, the total modification of the state within one heating-quenching cycle should frequently be a superposition of independent, “local” variations. Most of these variations cause an increase of the energy. Thus, in Ref. , their number must be small to have a realistic chance of a net improvement. However, when IPTLS is included for postprocessing, the undesirable “local” variations are filtered out to a large extent, so that the above restriction can be abandoned.
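Schematically, the thermal-cycling loop with these IPTLS hooks might look as follows. The `heat` routine (a Metropolis walk at temperature `T`), the factor-of-0.9 cooling, and the choice of which archive state starts a cycle are simplifying assumptions of this sketch, not a full description of the algorithm; `iptls` and `tour_length` are taken from the earlier sketches.

```python
import random

# Thermal cycling with IPTLS hooks (schematic).
def thermal_cycling(archive, dist, T0, T_min, local_search, heat, cool=0.9):
    Na, n = len(archive), len(archive[0])
    T = T0
    while T > T_min:
        # Each temperature step starts with IPTLS over all archive pairs;
        # the output replaces the BETTER of the two inputs.
        for i in range(Na):
            for j in range(i + 1, Na):
                merged = iptls(archive[i], archive[j], dist, local_search)
                k = i if (tour_length(archive[i], dist)
                          <= tour_length(archive[j], dist)) else j
                archive[k] = merged
        improved = False
        for _ in range(2 * Na):                     # 2*Na cycles per T step
            start = random.choice(archive)
            hot = heat(list(start), T, n // 10)     # ~N/10 Metropolis moves
            quenched = local_search(hot)
            e0 = tour_length(start, dist)
            for j in range(Na):                     # series of IPTLSs against
                if tour_length(archive[j], dist) <= e0:  # not-worse states
                    quenched = iptls(quenched, archive[j], dist, local_search)
                    if tour_length(quenched, dist) < tour_length(archive[j], dist):
                        archive[j] = quenched       # success ends the series
                        improved = True
                        break
        if not improved:
            T *= cool                               # lower the temperature
    return min(archive, key=lambda t: tour_length(t, dist))
```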
## IV Application tests
### A Implementation details
We now demonstrate the efficiency of the two algorithms described in Sections III.A and III.B, respectively, for the TSP. These algorithms rely on an adequate local search procedure. Here, we use a slightly improved version of the local search implementation of Ref. . Thus, we have the choice between four alternative possibilities concerning the kind of metastability to be reached:
* (a) stable with respect to reversal of a subchain, as well as to the shift of a city;
* (b) same as (a), and stable with respect to cutting three connections of the tour and concatenating the three subchains in a new manner;
* (c) same as (b), and stable with respect to rearrangements obtained by first cutting the tour twice, forming two separate subtours, and then reconnecting these subtours after cutting two other connections;
* (d) same as (c), and stable with respect to a restricted Lin-Kernighan search, which consists of cutting the tour once, then several times alternately cutting the chain and concatenating the subchains, and finally connecting the ends of the chain again, where the number of trials to modify the chain is restricted to 1000.
In the present study, we have performed numerical experiments considering move class (a) or (d) mainly.
The efficiency of our local search approach rests on three principles: (i) New connections are tried according to increasing length, where appropriate bounds are utilized to terminate the search as soon as it becomes useless. (ii) In stage (c), we first tabulate all rearrangements, which decompose the original tour into two subtours with a shorter total length. Then, we search for those decompositions of the original tour into two subtours, starting from which one of the tabulated rearrangements produces a new, shorter tour. (iii) Limiting the number of trials in (d) improves the efficiency considerably if the cities are clustered, i.e., if a few of the distances between neighboring cities in the optimal tour are much larger than the others.
The IPT part is the same in both of the presented algorithms. It requires a computational effort which is roughly proportional to $`N^2`$. However, due to the use of a reduced representation, the proportionality constant seems to be small in practice: we performed multi-start-local-search-with-IPTLS runs ($`N_\mathrm{a}=1`$) for sets of cities, randomly distributed in a square, with Euclidean metric. We observed that, for up to several thousand cities, even when only move class (a) is taken into account, the CPU time for the IPT is roughly one order of magnitude smaller than the CPU time for the local search.
Our thermal-cycling code was adapted to using IPTLS in three points: (i) Since IPTLS ensures a high quality of the primary archive, the corresponding effort could be diminished; we now perform $`30N_\mathrm{a}`$ rather than $`50N_\mathrm{a}`$ searches starting from random states in initializing the archive. (ii) The heating part of thermal cycling, see Section III.B, has been changed in comparison to Ref. , according to the last paragraph of the previous section; each heating is terminated after $`N/10`$ rather than after 50 modifications of the tour. (iii) Due to the efficiency enhancement of the individual thermal cycles by IPTLS, we now perform $`2N_\mathrm{a}`$ rather than $`5N_\mathrm{a}`$ cycles before deciding whether or not the temperature can be decreased. All other adjustable parameters of thermal cycling have the same values as in Ref. .
For comparison, we have also performed a series of runs of a carefully tuned SA code, where the adjustable parameters were optimized for the instance considered. In one way or another, we took into account all the essential points discussed in the simulated annealing section of the TSP review . Our program uses an adaptive temperature schedule, and automatically shrinks the move class utilized in the course of the cooling process.
More specifically, as starting temperature of SA, we choose $`1/10`$ of the length reduction when quenching a random tour, divided by the number of cities. At each temperature, we perform a given number of sweeps. Then, if during this series of sweeps the best state found so far could not be improved, we decrease the temperature by a factor 0.9; otherwise we perform the same number of sweeps with unchanged temperature again, and so on. Finally, after 10 temperature steps without improvement of the best state so far, we terminate the cooling, and, for this best state, we perform a local search considering the complete move class (a). This adaptive exponential schedule is robust against moderate changes of the initial temperature. In optimizing our implementation, we have also tried logarithmic and $`1/k`$ schedules. But none of them led to a clear acceleration compared to the schedule described.
In our implementation, we construct the SA move class starting from the local-search move class (a), and restricting it by neighborhood pruning. This means that the number of neighbors considered in selecting the first of the new connections of a move is temperature-dependent: we choose the upper bound of the corresponding neighbor identification number (1 for nearest neighbor, 2 for next-nearest neighbor, and so on) as 2.5 times its mean value for the tour modifications performed within the previous series of sweeps. This neighborhood pruning is very effective; without it, the program would be slower by roughly a factor of 40 (for 1 $`\%`$ accuracy).
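The schedule and pruning rule just described can be summarized in the following sketch; `metropolis_sweeps` is an assumed helper that runs the given number of sweeps at temperature `T` with the neighbor rank bounded by `rank_bound`, returning the current tour, the best tour seen in the series, and the mean neighbor rank of the accepted modifications. The floor of 4 on the rank bound is our own safeguard, not part of the published description.

```python
# Adaptive exponential SA schedule with neighborhood pruning (schematic).
def simulated_annealing(n, dist, sweeps, local_search, random_tour,
                        metropolis_sweeps):
    tour = random_tour(n)
    quenched = local_search(list(tour))
    # Start T: 1/10 of the quench length reduction, divided by N.
    T = (tour_length(tour, dist) - tour_length(quenched, dist)) / (10.0 * n)
    best = quenched
    rank_bound = n                       # no pruning before the first series
    steps_without_gain = 0
    while steps_without_gain < 10:       # stop after 10 coolings without gain
        tour, series_best, mean_rank = metropolis_sweeps(
            tour, T, sweeps, rank_bound)
        rank_bound = max(4, int(2.5 * mean_rank))    # neighborhood pruning
        if tour_length(series_best, dist) < tour_length(best, dist):
            best = series_best           # improved: repeat at the same T
            steps_without_gain = 0
        else:
            T *= 0.9                     # no improvement: cool
            steps_without_gain += 1
    return local_search(best)            # final quench, full move class (a)
```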
The numerical experiments reported in this paper were performed using an HP K460 with 180 MHz PA8000 processors, running under HP-UX 10.20. (All CPU times given relate to one processor.) Our code was written in FORTRAN77.
### B Multi-start-local-search results
Since heuristic procedures yield only approximate solutions, the truly important property is the relation between the mean quality of the solution, that is, the deviation of the mean tour length from the global optimum, and the required computing time, $`\tau _{\mathrm{CPU}}`$. Thus, in order to illustrate the performance of IPT, we have investigated the influence of the adjustable parameters on this relation for the 532 North American cities problem (att532), a standard example from the TSPLIB95 . These results are presented in Figs. 1 to 3. Moreover, to check for robustness and size dependence, we have additionally studied five other instances from the TSPLIB95, i.e., pcb442, rat783, fl1577, pr2392, and fl3795, considering a smaller number of parameter sets, see Tables I and II. (Except for pr2392, the instances chosen are the same as in Ref. .)
The performance of multi-start local searches with move classes (a) and (d), respectively, is shown in Fig. 1 for att532. This graph contrasts results obtained for $`N_\mathrm{a}=1`$ with and without IPTLS, and includes SA data (cf. previous subsection) for comparison. In particular, Fig. 1 shows the following:
* For large $`\tau _{\mathrm{CPU}}`$, i.e., for a large number of local searches $`K`$, considerable performance gains are reached when the multi-start local search is extended by IPTLS. This is observed for move class (a), as well as for move class (d). The speed gains are small when $`K`$ is close to 1, but they rapidly increase with $`K`$. For the highest $`K`$ considered, they amount to factors of roughly 100 and 30 for multi-start local searches concerning (a) and (d), respectively. In further experiments, we obtained analogous results for move classes (b) and (c).
* Even without IPTLS, multi-start local search using a sufficiently complex move class can be clearly advantageous in comparison to SA: compare the multi-start-local-search data for move class (d) with the SA results.
* For att532, if an accuracy between 1 and 3 $`\%`$ is required, even multi-start local search according to move class (a) extended by IPTLS can compete with SA: it is somewhat better for $`\tau _{\mathrm{CPU}}<4\mathrm{sec}`$ and slightly worse for larger $`\tau _{\mathrm{CPU}}`$. However, if higher accuracies are desired, our SA program outperforms the multi-start-local-search-with-IPTLS code utilizing only move class (a). A minor result of this comparison, not obvious from the figure since each point represents an average over 100 runs, concerns the variance of the final tour length: it is considerably smaller for the multi-start local search according to (a) extended by IPTLS than for SA.
The advantage of “searching in parallel” is demonstrated by Fig. 2, where we compare the multi-start-local-search-with-IPTLS results of Fig. 1 to data obtained with archives of 3 and 10 states. For small numbers of local searches $`K`$, there is almost no influence of “parallelizing”, i.e., of using $`N_\mathrm{a}>1`$. However, as $`K`$ increases, corresponding to increasing $`\tau _{\mathrm{CPU}}`$, the “searching in parallel” strategy performs better and better. Moreover, up to some optimum archive size, the advantage also increases with $`N_\mathrm{a}`$. Beyond the optimum size, the performance decreases slightly with increasing $`N_\mathrm{a}`$: additional runs for move class (d) showed that, over the whole accuracy range presented in Fig. 2, the performance drops a bit when the archive size grows from 10 to 30. The optimum archive size seems to rise slowly with $`K`$.
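A plausible skeleton of this archive-based search is sketched below in Python. Here `local_search` and `ipt` are placeholders (the latter could be a transcription routine in the spirit of the sketch given in the Conclusions); the exact bookkeeping of the archive in our FORTRAN77 code may differ in detail.

```python
import random

def tour_length(tour, dist):
    """Length of the closed tour (as in the earlier sketch)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def multistart_iptls(cities, dist, local_search, ipt, K, archive_size):
    """Multi-start local search "in parallel", extended by IPTLS (sketch).

    Each of the K local minima obtained from random starts is combined
    with every archive member via IPT (followed by local search), and
    the archive retains the `archive_size` best tours found so far.
    """
    archive = []
    for _ in range(K):
        tour = local_search(random.sample(cities, len(cities)))
        for other in archive:
            tour = local_search(ipt(tour, other, dist))
        archive.append(tour)
        archive.sort(key=lambda t: tour_length(t, dist))
        del archive[archive_size:]        # keep only the N_a best states
    return archive[0]
```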
The SA data given in Fig. 1 are also included in Fig. 2. It is remarkable that, for att532, multi-start local search with IPTLS performed in parallel ($`N_\mathrm{a}=10`$) reaches roughly the same performance as our tuned SA program. The former method, however, has the considerable advantage of possessing only a single, rather “uncritical” tuning parameter.
To ensure a fair comparison, we have also implemented the searching-in-parallel idea in our SA program: four runs, each taking one fourth of the available computing time, are performed, and the best tour found in these runs is taken as the final result, cf. . The corresponding performance curve is presented in Fig. 2. There is a clear efficiency increase arising from this parallelism if an accuracy better than 1 $`\%`$ is desired. However, for att532, as Fig. 2 shows, even this sophisticated SA algorithm is still far slower than the multi-start local search with IPTLS using move class (d).
For a broader test of our code, we considered six symmetric TSP instances taken from the TSPLIB95 , comprising between 442 and 3795 cities. Table I presents results for three different parameter sets, i.e., numbers of trials, $`K`$, and archive sizes, $`N_\mathrm{a}`$. These data confirm the above interpretations concerning the performance of our algorithm:
* For all instances considered but pr2392, the best known tour lengths were reproduced. For pcb442, att532, rat783, and fl1577, these values are the exact optima; fl3795 has not been solved exactly yet. For pr2392, our best (mean) result exceeds the known exact optimum tour length by 0.5 $`\%`$ (1 $`\%`$).
* There is a considerable benefit of “searching in parallel” as illustrated by the results for $`K=500`$ with archive sizes 1 and 10, respectively.
Finally, a comparison of the data in Table I with those given in Table I of is instructive. This comparison is complicated, however, by the use of a slightly improved local-search code in the present work, which typically yields a speed gain of roughly a factor of 1.5. The comparison shows that multi-start local search extended by IPTLS and performed “in parallel” reaches roughly the same efficiency as thermal cycling without IPTLS. In more detail, the former program is clearly faster for att532, fl1577, and fl3795, but slower for rat783. For pcb442, both codes have roughly the same performance. That there is no clear size dependence in this comparison is not surprising, given the large variety of features (occurrence of clusters of cities, degeneracies, …) among the examples considered.
### C Thermal-cycling results
In order to study to what extent IPTLS improves thermal cycling, we have considered the same six symmetric TSP instances as above. Additionally, for comparison, we have performed thermal-cycling runs without IPTLS for att532 using the same local-search implementation as in the thermal-cycling-with-IPTLS code. The results are presented in Fig. 3 and in Table II.
Fig. 3 illustrates the high efficiency of the thermal-cycling approach: for att532, the performance of the original algorithm (without IPTLS) is clearly better than that of multi-start local search with IPTLS for $`N_\mathrm{a}=1`$. It is comparable to that of multi-start local search with IPTLS applied to archives of 3 states, cf. Fig. 2. In detail, original thermal cycling is slower if only low accuracy is desired, and better if high accuracy has to be achieved. However, this ranking is certainly instance-dependent.
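To fix the terminology, a single heating-quenching cycle of thermal cycling might be sketched as follows. The Metropolis-type heating with a fixed number of moves is a simplifying assumption of this Python sketch, not the exact procedure of our code; `propose_move` is a placeholder returning a modified tour together with its length change.

```python
import math
import random

def tour_length(tour, dist):
    """Length of the closed tour (as in the earlier sketch)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def thermal_cycle(tour, dist, propose_move, quench, T):
    """One heating-quenching cycle of thermal cycling (sketch)."""
    heated = list(tour)
    for _ in range(len(tour)):            # heating: one sweep of random moves
        candidate, delta = propose_move(heated)
        if delta < 0 or random.random() < math.exp(-delta / T):
            heated = candidate            # Metropolis acceptance at temperature T
    cycled = quench(heated)               # quenching: local search
    if tour_length(cycled, dist) < tour_length(tour, dist):
        return cycled                     # keep the cycle only if it improves
    return list(tour)                     # otherwise discard the excursion
```

In thermal cycling with IPTLS, the quenched state would additionally be combined with the archive members via IPT before this comparison.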
The efficiency is further improved by embedding IPTLS in thermal cycling, see Fig. 3. For att532, there is a gain by a factor of 2 to 3, which slightly increases with the accuracy demanded. The comparison to “parallelized” multi-start local search with IPTLS yields a surprising result: for att532, when move class (d) is considered, the thermal-cycling-with-IPTLS code is only slightly better than that program, although the latter is considerably simpler from the conceptual point of view (only two adjustable parameters).
For other TSP instances, however, thermal cycling with IPTLS can be clearly more efficient than “parallelized” multi-start local search with IPTLS; compare Table II to Table I with respect to rat783, pr2392, and fl3795. It is remarkable that thermal cycling with IPTLS reproduced the best known tour lengths for all the problems considered within “reasonable” computing times. Moreover, comparing Table II with Table I from shows that, for the instances pcb442, att532, and rat783, the thermal-cycling-with-IPTLS program is typically faster by a factor of 2 to 3 than the code used in . For fl1577, the acceleration amounts to a factor of 5, and for fl3795 it is even larger, roughly a factor of 10.
Comparisons based on results obtained on different computing platforms (hardware, operating system, and programming language) are inherently problematic. Nevertheless, we now compare the performance of our thermal-cycling-with-IPTLS procedure with that of four other approaches; the results should be considered with care.
It seems that, for all instances but pcb442, our code is more efficient than the genetic-local-search algorithm presented in , a significantly improved version of the winning algorithm of the First International Contest on Evolutionary Optimization . In detail, our code is slightly slower for pcb442 and slightly faster for rat783; it has clear advantages for att532, in particular when high accuracies have to be achieved, and it is considerably faster for fl1577 and fl3795. However, the approach presented in has been optimized for solving large TSP instances by minimizing memory requirements: the distances between cities are computed on the fly rather than looked up in a distance table stored in main memory. For example, the genetic-local-search algorithm of Ref. needs 10 MBytes of main memory for solving fl3795, compared to the 256 MBytes used by the program presented here.
In comparison to the iterated Lin-Kernighan approach proposed by Johnson and McGeoch, the performance of which is illustrated by Table 16 of , our program is slower by roughly a factor of 4 for pcb442 and att532 if the accuracy of our results for $`N_\mathrm{a}=3`$ is required. The performance gap seems to shrink with increasing accuracy demand . However, for fl3795, our code performs considerably better. According to further calculations, this advantage arises primarily from the greater robustness of our code rather than from better scalability .
Clearly, one should also attempt a comparison with state-of-the-art exact algorithms. For the Padberg-Rinaldi 532 cities problem, the branch-and-cut program by Thienel and Nadeff, one of the presently fastest exact solution codes, needs 16.5 minutes on a SPARC10 , which corresponds to roughly 4 minutes of computing time on our CPU. Utilizing an archive of 12 states and cyclically quenching according to stage (d), we performed 100 runs. Our Monte Carlo approach, i.e., thermal cycling extended by IPTLS, reproduced the optimum tour length 27686 in 97 of the 100 runs, requiring on average 246 CPU seconds per run. In the other three cases, we obtained tours of length 27693 (once) or 27698 (twice). When comparing with exact algorithms for fl1577, however, the usefulness of the proposed approach is even more obvious: here, using an archive of 8 states, we obtained the exact optimum in 19 of 20 runs, and in one case a tour of length 22253. Our Monte Carlo optimization requires 1390 CPU seconds on average, whereas $`10^6`$ CPU seconds were needed for the only recently obtained exact solution of this problem on a DEC Alphastation 4100 5/400 .
Again, these comparisons should be interpreted with care: on the one hand, computing provably optimal solutions requires far more effort than merely seeking high-quality solutions without any guarantee of optimality. On the other hand, the required effort depends not only on the size of the TSP instance but also on its “character”. Thus, fl3795, for which our code yields solutions of the best known tour length with high probability within “reasonable” $`\tau _{\mathrm{CPU}}`$, has, to the best of our knowledge, not been solved exactly yet.
## V Conclusions
We have presented an algorithm by means of which the effectiveness of local-search-based heuristic combinatorial optimization procedures can be increased considerably. This algorithm, iterative partial transcription, is physically motivated: for a spin glass, it corresponds to searching for non-interacting clusters of spins with respect to which two states differ, and to relaxing these excitations. Mathematically speaking, the algorithm can be understood as a search in the subspace spanned by the differing components of two approximate solutions of the optimization problem. It transcribes subsets of the components of the vector representing one approximate solution with the related components of the other approximate solution whenever the quality of the former solution is increased in this way. This process continues iteratively, taking into account an increasing number of components.
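In the language of tours, the transcription step can be sketched as follows. This Python sketch is deliberately simplified: both tours are aligned to start at the same city, maximal runs of differing positions serve as the candidate subsets, and a run is transcribed whenever both tours visit the same set of cities there and the exchange shortens the tour. Our actual implementation is more general; for instance, it also has to account for the reversed orientation of the second tour.

```python
def tour_length(tour, dist):
    """Length of the closed tour, including the edge back to the start."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ipt(tour_a, tour_b, dist, start=0):
    """Simplified iterative partial transcription for tours (sketch)."""
    a = tour_a[tour_a.index(start):] + tour_a[:tour_a.index(start)]
    b = tour_b[tour_b.index(start):] + tour_b[:tour_b.index(start)]
    n = len(a)
    improved = True
    while improved:                        # iterate until no transcription helps
        improved = False
        i = 0
        while i < n:
            if a[i] == b[i]:
                i += 1
                continue
            j = i                          # maximal run of differing components
            while j < n and a[j] != b[j]:
                j += 1
            # the exchange is admissible iff both runs visit the same cities
            if set(a[i:j]) == set(b[i:j]):
                candidate = a[:i] + b[i:j] + a[j:]
                if tour_length(candidate, dist) < tour_length(a, dist):
                    a, improved = candidate, True
            i = j
    return a
```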
For the traveling salesman problem, we have demonstrated the feasibility of this approach by embedding it in the multi-start-local-search algorithm starting from random states and, alternatively, in the thermal-cycling method. In both cases, a considerable acceleration of the computation of high-quality approximate solutions was achieved. For the TSP instances considered, these algorithms are far more efficient than SA.
There are several areas for future research, such as (i) evaluating the performance of iterative partial transcription for very large TSP instances, (ii) investigating its usefulness in other combinatorial optimization problems, and (iii) incorporating it in other heuristic combinatorial optimization procedures.
## Acknowledgements
This work was supported by the SMWK and DFG (SFB 393). We are particularly indebted to M. Pollak and U. Rößler for a series of critical remarks, and to R. Möbius for her help in finding an appropriate name for the procedure presented. Moreover, discussions with A. Díaz-Sánchez, D.S. Johnson, J. Talamantes, and S. Thienel were very useful.