no-problem/9901/quant-ph9901065.html
# Untitled Document On the importance of the Bohmian approach for interpreting CP-violation experiments Dipankar Home<sup>1</sup><sup>1</sup>1e-mail:dhom@boseinst.ernet.in Bose Institute, Calcutta 700009, India A.S.Majumdar<sup>2</sup><sup>2</sup>2e-mail:archan@boson.bose.res.in S.N.Bose National Centre for Basic Sciences Block-JD, Sector III, Salt Lake, Calcutta 700091, India. ## Abstract We argue that the inference of CP violation in experiments involving the $`K^0\overline{K^0}`$ system in weak interactions of particle physics is facilitated by the assumption of particle trajectories for the decaying particles and the decay products. A consistent explanation in terms of such trajectories is naturally incorporated within the Bohmian interpretation of quantum mechanics. I. Introduction The Bohm model is able to provide a causal interpretation of quantum mechanics in a consistent manner . At the same time, the predictions of Bohmian mechanics are in exact agreement with the standard quantum mechanical predictions for observable probabilities in all usual experimental situations. In this paper we shall be concerned with examining the possible importance of the Bohmian approach in interpreting certain experiments whose understanding in terms of the standard interpretation is rather ambiguous. For the purpose of reinterpreting the standard quantum formalism using the Bohmian scheme, a wave function $`\psi `$ is not taken to provide a complete specification of the state of an individual system; an additional ontological “position” coordinate (an objectively real “position” existing irrespective of any external observation) is ascribed to an individual particle. The “position” coordinate of the particle evolves with time obeying an equation which can be derived from the Schrodinger equation (considering the one dimensional case) $`i\mathrm{}{\displaystyle \frac{\psi }{t}}=H\psi {\displaystyle \frac{\mathrm{}^2}{2m}}{\displaystyle \frac{^2\psi }{x^2}}+V(x)\psi `$ (1) by writing $`\psi =Re^{iS/\mathrm{}}`$ (2) and using the continuity equation $`{\displaystyle \frac{}{x}}(\rho v)+{\displaystyle \frac{\rho }{t}}=0`$ (3) for the probability distribution $`\rho (x,t)`$ given by $`\rho =|\psi |^2.`$ (4) It is important to note that $`\rho `$ is ascribed an ontological significance by regarding it as representing the probability density of “particles” occupying actual positions. In contrast, in the standard formulation $`\rho `$ is interpreted as the probability density of finding a particle around a certain position. Setting ($`\rho v`$) equal to the quantum probability current leads naturally to the Bohmian interpretation whrere the particle velocity $`v(x,t)`$ is given by $`v{\displaystyle \frac{dx}{dt}}={\displaystyle \frac{1}{m}}{\displaystyle \frac{S}{x}}`$ (5) The particle “trajectory” is completely deterministic and is obtained by integrating (5) with the appropriate initial conditions. The essential significance of Bohm’s model lies in providing an elegant solution to the measurement problem (which has been described by Weinberg as “the most important puzzle in the interpretation of quantum mechanics”) without requiring wave function collapse, since according to the Bohmian interpretation, in any measurement a definite outcome is singled out by the relevant ontological position coordinate. 
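As an illustration of how a "trajectory" is actually obtained from (5), the following minimal sketch (ours, not part of the original analysis) integrates dx/dt = v(x,t) for a free Gaussian wave packet, whose wave function is known in closed form; the units, packet parameters and the simple Euler stepping are all illustrative assumptions.

```python
import numpy as np

# Natural units and packet parameters chosen purely for illustration.
hbar, m, sigma0, k0 = 1.0, 1.0, 1.0, 2.0

def psi(x, t):
    """Free Gaussian packet, a standard closed-form solution of eq. (1) with V = 0."""
    st = sigma0 * (1 + 1j * hbar * t / (2 * m * sigma0**2))
    return (2 * np.pi * st**2) ** -0.25 * np.exp(
        -(x - hbar * k0 * t / m) ** 2 / (4 * sigma0 * st)
        + 1j * k0 * (x - hbar * k0 * t / (2 * m)))

def v(x, t, dx=1e-6):
    """Bohmian velocity of eq. (5): v = (1/m) dS/dx = (hbar/m) Im[(d_x psi)/psi]."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (hbar / m) * np.imag(dpsi / psi(x, t))

# Euler integration of dx/dt = v(x,t): the ontological "trajectory" of eq. (5)
x, t, dt = 0.5, 0.0, 1e-3
for _ in range(2000):
    x += v(x, t) * dt
    t += dt
print(f"Bohmian position at t = {t:.1f}: x = {x:.3f}")
```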
In view of the importance of the Bohm model in providing not only an internally consistent alternative interpretation of the standard quantum formalism, but also perhaps the neatest solution to the measurement problem , it should be worthwhile to look for specific situations where the conceptual superiority of Bohm’s model over the standard interpretation may become easily transparent. To this end, we now proceed to examine the analysis of a fundamentally important experiment of particle physics, namely, the discovery of CP-violation . II. The CP-violation experiment C(charge conjugation) and P(parity) are two of the fundamental discrete symmetries of nature, the violations of which have not been empirically detected in phenomena other than weak interactions. If a third discrete symmetry T(time reversal) is taken into account, there exists a fundamental theorem of quantum field theory, viz., the CPT theorem which states that all physical processes are invariant under the combined operation of CPT. Nevertheless, there is no theorem forbidding the violation of CP symmetry, and indeed, there have been several experiments to date , starting from the pioneering observation of Christenson, Cronin, Fitch and Turlay , that have revealed the occurrence of CP violation through weak interactions of particle physics involving the particles $`K^0`$ and $`\overline{K^0}`$. The eigenstates of strangeness $`K^0`$ $`(s=+1)`$ and its CP conjugate $`\overline{K^0}`$ $`(s=1)`$ are produced in strong interactions, for example, the decay of $`\mathrm{\Phi }`$ particles. Weak interactions do not conserve strangeness, whereby $`K^0`$ and $`\overline{K^0}`$ can mix through intermediate states like $`2\pi ,3\pi ,\pi \mu \nu ,\pi e\nu `$, etc. The observable particles, which are the long lived $`K`$-meson $`K_L`$, and the short lived one $`K_S`$, are linear superpositions of $`K^0`$ and $`\overline{K^0}`$, i.e., $`|K_L`$ $`=`$ $`(p|K^0q|\overline{K^0})/\sqrt{|p|^2+|q|^2}`$ (6) $`|K_S`$ $`=`$ $`(p|K^0+q|\overline{K^0})/\sqrt{|p|^2+|q|^2}`$ (7) which obey the exponential decay law $`|K_L|K_Lexp(\mathrm{\Gamma }_Lt/2)exp(im_Lt)`$ and analogously for $`|K_S`$, where $`\mathrm{\Gamma }_L`$ and $`m_L`$ are the decay width and mass respectively of the $`K_L`$ particle. It follows from (6) and (7) that $`K_L|K_S={\displaystyle \frac{|p|^2|q|^2}{|p|^2+|q|^2}}`$ (8) CP violation takes place if the states $`|K_L`$ and $`|K_S`$ are not orthogonal. Through weak interactions the $`K_S`$ particle decays rapidly into channels such as $`K_S\pi ^+\pi ^{}`$ and $`K_S2\pi ^0`$ with a mean lifetime of $`10^{10}s`$, whereas, the predominant decay modes of $`K_L`$ are $`K_L\pi ^\pm e^\pm \nu `$ (with branching ratio $`39\%`$), $`K_L\pi ^\pm \mu ^\pm \nu (27\%)`$, and $`K_L3\pi (33\%)`$ . The CP violating decay mode $`K_L2\pi `$ is extremely rare (with branching ratio $`10^3`$) in the background of the other large decay modes. Considering the Schrodinger evolution, if the analysis of the term corresponding to $`K_s`$ in the relevant initial wave function shows that it cannot contribute significantly to the emission of two pions with suitable momenta and locations, then one can infer the occurrence of CP violation in this particular situation. In other words such $`2\pi `$ can only arise through the $`K_L`$ decay mode. The momenta and locations of the emitted pions are important since the key experimental issue is to detect the $`2\pi `$ particles coming from the decay of $`K_L`$ and identify them as coming from $`K_L`$ and not $`K_S`$. 
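To make the role of the two lifetimes concrete, the following back-of-the-envelope sketch (ours; illustrative numbers, interference between the K_S and K_L amplitudes neglected) compares the K_S and CP-violating K_L contributions to the 2-pion rate as a function of proper time. Beyond roughly fifteen to twenty K_S lifetimes, any detected 2-pion pair is overwhelmingly of K_L origin.

```python
import numpy as np

# Illustrative orders of magnitude only (lifetimes as quoted in the text,
# |eps| ~ 1e-3 for the CP-violating K_L -> 2 pi amplitude); interference
# between the K_S and K_L amplitudes is neglected in this crude comparison.
tau_S, tau_L, eps = 0.9e-10, 5.1e-8, 2e-3

for n in (5, 10, 20, 30):                       # proper time in units of tau_S
    t = n * tau_S
    ks_piece = np.exp(-t / tau_S)               # K_S -> 2 pi survival factor
    kl_piece = eps**2 * np.exp(-t / tau_L)      # CP-violating K_L -> 2 pi piece
    print(f"t = {n:2d} tau_S : K_S {ks_piece:.1e}   K_L->2pi {kl_piece:.1e}")
```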
In a typical experiment to detect CP violation, an initial state of the type $`|\psi _i\rangle =(a|K_L\rangle +b|K_S\rangle )`$ (9) is used which is a coherent superposition of the $`K_L`$ and $`K_S`$ states. Such a state has been produced by the technique of ‘regeneration’ which has been used in a large number of experiments. The common feature of all these experiments is the measurement of the vector momenta $`\stackrel{}{p_i}`$ of the charged decay products $`\pi ^+\pi ^{-}`$ or $`2\pi ^0`$ from the decaying kaons. It is only the type of instrument used for actually measuring the momenta that varies from experiment to experiment.

III. Bohmian trajectories

To see how the Bohmian interpretation helps in drawing the relevant inference from this experiment, we concentrate on the analysis of a single event in which the two emitted pions from a decaying kaon are detected by two detectors respectively along two different directions. From the measured momenta $`\stackrel{}{p_1}`$ and $`\stackrel{}{p_2}`$, the “trajectories” followed by the individual pions are retrodictively inferred assuming that they have followed linear “trajectories”. The point of intersection of these retrodicted “trajectories” is inferred to be the point from which the decay products have emanated from the decaying system; in other words, what is technically known as the “decay vertex” is determined in this way. The value of the momentum of the decaying kaon is obtained by $`\stackrel{}{p_k}=\stackrel{}{p_1}+\stackrel{}{p_2}`$. Once the decay vertex and the kaon momentum are known, one estimates the time taken by the kaon to reach the decay vertex from the source, again using at this stage the idea of a linear “trajectory”. If this time turns out to be much larger than the $`K_S`$ mean lifetime ($`10^{-10}s`$), one infers that the detected $`2\pi `$ pair must have come from $`K_L`$, which, as already mentioned, is the signature of CP violation. It is thus evident from the above discussion that the assumption of a linear “trajectory” of a freely evolving particle (kaon or pion) provides a consistent explanation in support of CP violation in such an experiment. Within the standard interpretation of quantum mechanics, there is no way one can justify assigning a “trajectory” to a freely evolving particle. Moreover, assuming such a “trajectory” to be linear is an additional ad hoc input. One possible argument could be to assign localized wave packets to emitted pions and kaons, and to use the fact that their peaks follow classical trajectories in the case of a free evolution. However, in the standard quantum mechanical description of decay processes, the decay products are regarded as asymptotically free, and hence should be represented by plane wave states. Moreover, even if they are approximated in some sense by localized wave packets, there would be inevitable spreading of the wave packets. Even if this spreading is regarded as negligible within the time interval concerned, a ‘literal identification’ of the wave packet with the particle is conceptually impermissible without an additional input at the fundamental level in the form of the notion of a “particle” with a definite position even when unobserved (“particle” ontology). On the other hand, the assumption of linear “trajectories” followed by the decaying particles and the decay products is amenable to a natural explanation within the Bohmian framework.
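The vertex reconstruction described above can be sketched numerically. The event below is entirely hypothetical (hit positions, momenta, and the assumption that the production target sits at the origin are invented for illustration): it retrodicts straight tracks from the measured pion momenta, intersects them to obtain the decay vertex, and converts the kaon flight path into a proper time to be compared with the K_S lifetime.

```python
import numpy as np

def closest_approach(r1, p1, r2, p2):
    """Point of closest approach of the two lines r_i + s_i * unit(p_i):
    the retrodicted 'decay vertex' (midpoint of the shortest segment)."""
    d1, d2 = p1 / np.linalg.norm(p1), p2 / np.linalg.norm(p2)
    w0 = r1 - r2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    s1 = (b * e - c * d) / (a * c - b * b)
    s2 = (a * e - b * d) / (a * c - b * b)
    return 0.5 * ((r1 + s1 * d1) + (r2 + s2 * d2))

# Hypothetical event: detector hit positions (m) and pion momenta (GeV/c)
r1, p1 = np.array([10.0, 0.5, 0.0]), np.array([3.0, 0.35, 0.0])
r2, p2 = np.array([10.0, -0.4, 0.0]), np.array([3.1, -0.30, 0.0])

vertex = closest_approach(r1, p1, r2, p2)
p_K = p1 + p2                              # kaon momentum, p_k = p_1 + p_2
m_K, c = 0.4977, 3.0e8                     # kaon mass (GeV/c^2), speed of light (m/s)
L = np.linalg.norm(vertex)                 # flight path, assuming production at the origin
tau = L * m_K / (np.linalg.norm(p_K) * c)  # proper time = L * m / (p * c)
print(f"vertex = {vertex},  proper time = {tau:.2e} s  (compare with tau_S ~ 1e-10 s)")
```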
The decaying kaons as well as the asymptotically free decay products are represented by plane waves $`\psi e^{ikx}.`$ (10) Hence it follows that in the Bohmian scheme the velocity equation (5) is in this case given by $`{\displaystyle \frac{dx}{dt}}={\displaystyle \frac{\mathrm{}k}{m}}`$ (11) which when integrated provides the linear “trajectories” of the particles. These trajectories are ontological and deterministic. Therefore, in this interpretation, the exact position coordinates of the “decay vertex” can be assigned in a natural way by retrodicting the pion “trajectories” without any inconsistencies of the type inherent in the standard interpretation. Hence, it seems necessary that the standard formalism of quantum mechanics needs to be supplemented with the Bohmian interpretation of ontological particle “trajectory” (in the sense that the particle has traversed a well defined path even when unobserved) to enable for the consistent inference of the observation of CP violation in the actual experiments involving kaon decays. IV. Concluding remarks The main reasons for choosing, in particular, the CP violation experiment for this purpose are the following. First, unlike other common high energy experiments this particular experiment involves not merely the measurement of some physical quantities but inferring from the measured quantities the violation of a fundamental symmetry property of the pertinent physical interactions. Secondly, again unlike other common high energy experiments, the effects of particle creation and annihilation are not relevant for the important part of the experiment involved with the prediction of CP violation, and no second quantized treatment is required for the theoretical framework. The crucial phenomena of particle decays which this experiment is concerned with, is appropriately described in terms of the Schrodinger equation (see and references therein) for which there exists a consistent Bohmian interpretation. Note that ignoring interpretational nuances, if one tries to follow a very pragmatic approach and approximates the plane wave states of the decay products by wave packets whose peaks follow classical trajectories with finite speeds, careful estimates need to be done to quantify the resulting errors or fluctuations due to spreading of wave packets by taking into account the actual distances involved in the performed experiments. (Of course, the estimates of these distances related to the particle trajectories are fundamental from the Bohmian perspective.) This is important because the CP violation effect is exceedingly small; the branching ratio of the CP violating decay mode $`K_L2\pi `$ is $`10^3`$. In none of the CP violation experiments performed to date has this point been considered in the relevant analysis. We conclude by noting that this analysis suggests that it should be worthwhile to look for more such appropriate examples where the inadequacy or ambiguity of the standard formalism in comprehending the results of the concerned experiments can be avoided by using the Bohmian interpretation. It should be appreciated that since there is no measurement problem in the Bohmian interpretation, a Bohmian analysis is useful for all experiments in quantum mechanics, and in particular scattering experiments where it is required to know why particles are detected where they are at the end of the experiment. The answer to this is clear from the Bohmian perspective—the particles are detected where they actually are. 
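The wave-packet-spreading estimate called for in the concluding remarks above can be sketched as follows: a crude, non-relativistic rest-frame calculation with an assumed initial width, meant only to indicate the orders of magnitude involved.

```python
import numpy as np

hbar = 1.055e-34                  # J s
m_K = 0.4977 * 1.783e-27          # kaon mass in kg (0.4977 GeV/c^2)
dx0 = 1e-6                        # hypothetical initial packet width (m)
tau = 1.5e-9                      # proper time of flight (s), order of a typical K_L event

# Non-relativistic spreading law for a free Gaussian packet (rest-frame estimate):
# dx(t) = dx0 * sqrt(1 + (hbar * t / (2 * m * dx0^2))^2)
spread = dx0 * np.sqrt(1 + (hbar * tau / (2 * m_K * dx0**2))**2)
print(f"width grows from {dx0:.1e} m to {spread:.6e} m (relative change {spread/dx0 - 1:.1e})")
```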
However, from the viewpoint of the standard interpretation the explanation is rather obscure, as long as the Schrodinger wave function is regarded as the complete description of the physical system. In this context it has been recently argued that the concept of quantum probability current, a full understanding of which is provided by Bohmian mechanics, is fundamental for a genuine understanding of scattering phenomena. Apart from this, it has been claimed that a special significance of Bohmian mechanics lies in experiments related to the measurement of time of flight of particles, and tunnelling time in particular, for which it is difficult to find a consistent or unambiguous definition within the standard framework of quantum mechanics. All this is of course different from empirically verifying a new consequence, if any exists, of the Bohmian interpretation which is not obtainable from the standard interpretation. Nevertheless, investigations like the one reported in this paper could be helpful in understanding more clearly the relative merits of the standard and Bohmian interpretations. This work was supported by the Department of Science and Technology, India.

REFERENCES

[1] P.R.Holland, “The Quantum Theory of Motion”, (Cambridge University Press, London, 1993); D.Bohm, Phys. Rev. 85 (1952) 166; D.Bohm and B.J.Hiley, “The Undivided Universe”, (Routledge, London, 1993); J.T.Cushing, “Quantum Mechanics - Historical Contingency and the Copenhagen Hegemony”, (University of Chicago Press, Chicago, 1994).
[2] S.Weinberg, “Dreams of a final theory”, (Vintage, London, 1993) p. 64.
[3] J.H.Christenson, J.W.Cronin, V.L.Fitch and R.Turlay, Phys. Rev. Lett. 13 (1964) 138.
[4] For a review, see for instance K.Kleinknecht, in “CP violation”, edited by C.Jarlskog, (World Scientific, Singapore, 1989) pp. 41-104.
[5] A.Pais and O.Piccioni, Phys. Rev. 100 (1955) 1487.
[6] For example, see C.Geweniger et al., Phys. Lett. B 48 (1974) 487; V.Chaloupka et al., Phys. Lett. B 50 (1974) 1; W.C.Carithers et al., Phys. Rev. Lett. 34 (1975) 1244; N.Grossmann et al., Phys. Rev. Lett. 59 (1987) 18.
[7] M.Daumer, D.Duerr, S.Goldstein and N.Zanghi, J. Stat. Phys. 88 (1997) 967.
[8] C.R.Leavens, Phys. Lett. A 197 (1995) 88; in “Bohmian Mechanics and Quantum Theory: An Appraisal”, eds. J.T.Cushing, A.Fine and S.Goldstein (Kluwer, Dordrecht, 1996).
no-problem/9901/astro-ph9901097.html
# Gravitational Lensing as a Probe of Quintessence ## 1. Introduction There is now strong evidence that the matter density of the Universe, both baryonic and dark, is smaller than the critical value predicted by inflation. Some of the same cosmological probes that suggest the ratio of matter density to critical density is less than one prefer the presence of an additional component with energy density such that the total energy density of the Universe is in fact the critical value (e.g., Lineweaver 1998; Perlmutter et al. 1998; Riess et al. 1998). Type Ia supernovae and other observations have recently provided evidence that the equation of state, $`w`$, of this component is $`1wP/\rho 1/3`$ (Garnavich et al. 1998; Waga & Miceli 1998; Cooray 1999). The oldest known candidate for this additional energy is the cosmological constant, characterized by $`w=1`$. However, other possibilities have now been considered, including a slowly-varying, spatially inhomogeneous scalar field component better known as quintessence (e.g., Huey et al. 1998; Zlatev et al. 1998; Wang & Steinhardt 1998). Certain quintessence models, called tracker models (Ferreira & Joyce 1997; Zlatev et al. 1998; see Steinhardt et al. 1998 for a comprehensive review), have the feature that the energy density of the scalar field, $`\mathrm{\Omega }_Q`$, is a fixed fraction of the energy density of the dominant component. Therefore such models may explain the coincidence problem — why $`\mathrm{\Omega }_Q`$ is of the same order of magnitude as $`\mathrm{\Omega }_M`$ today. In these tracker models $`w`$ is a function of redshift, and typically varies from $`w1/3`$ during the radiation dominated era to $`w0.2`$ during the matter-dominated era and finally to a value $`w0.8`$ during late epochs (today). Time variation of the equation of state is even more prominent in scalar field models involving pseudo-Nambu-Goldstone bosons (PNGB models) (Frieman et al. 1995). In such models the field is frozen to its initial value due to the large expansion rate, but becomes dynamical at some later stage at redshift $`z_x`$. Likely values for $`z_x`$ are roughly between 3 and 0 (Coble et al. 1997), which means that interesting dynamics — and hence the variation in the equation of state — happen at redshift of a few. Huterer & Turner (1999) point out that distance measurements to Type Ia supernovae offer a possibility to reconstruct the quintessence potential, while Starobinsky (1998) and Wang & Steinhardt (1998) suggested the possibility of using cluster abundances as a function of redshift as well. In § 2, we study constraints on $`w(z)`$ based on current Type Ia supernovae distances, gravitational lensing statistics, and globular cluster ages. In § 3, we consider the possibility of imposing reliable constraints on $`w(z)`$ based on cosmological probes at high redshift, in particular gravitational lensing statistics. We follow the conventions that the Hubble constant, $`H_0`$, is 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, the present matter energy density in units of the closure density is $`\mathrm{\Omega }_M`$, and the normalized present day energy density in the unknown component is $`\mathrm{\Omega }_Q`$. ## 2. Current Constraints on $`w(z)`$ Since a generic quintessence model has a time-varying equation of state, one should be able to distinguish it from models where the equation of state is time independent (e.g, Turner & White 1997). 
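The sketch below (ours, not the authors' code) shows how such a time-dependent equation of state enters the background observables used in this section — H(z), the luminosity distance and the age H0 t0 — assuming a flat universe and the linear parameterization w(z) = w0 + z (dw/dz)0 written down in Equation (1) just below; the dark-energy density then scales as rho_Q proportional to (1+z)^{3(1+w0-(dw/dz)0)} exp[3 (dw/dz)0 z].

```python
import numpy as np
from scipy.integrate import quad

Om, w0, w1 = 1/3, -1.0, 0.0        # flat universe; w(z) = w0 + w1*z, w1 = (dw/dz)_0

def E(z):
    """H(z)/H0 for matter plus a quintessence component with w(z) = w0 + w1*z.
    rho_Q(z)/rho_Q(0) = (1+z)^{3(1+w0-w1)} * exp(3*w1*z) follows from
    d ln(rho_Q) = 3*(1+w) d ln(1+z)."""
    rhoQ = (1 + z)**(3 * (1 + w0 - w1)) * np.exp(3 * w1 * z)
    return np.sqrt(Om * (1 + z)**3 + (1 - Om) * rhoQ)

def lum_dist(z):                   # luminosity distance in units of c/H0 (flat universe)
    return (1 + z) * quad(lambda zp: 1 / E(zp), 0, z)[0]

H0t0 = quad(lambda z: 1 / ((1 + z) * E(z)), 0, np.inf)[0]
print(f"H0*t0 = {H0t0:.3f},  D_L(z=1) = {lum_dist(1.0):.3f} c/H0")
```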
In this Letter, we write the equation of state of the unknown component as $$w(z)w_0+z(dw/dz)_0.$$ (1) This relation should be a good approximation for most quintessence models out to redshift of a few and, of course, exact for models where $`w(z)`$ is a constant or changing slowly. Note that negative $`(dw/dz)_0`$ corresponds to an equation of state which is larger today compared to early epochs. Models where the scalar field is initially frozen typically exhibit such behavior, while for tracker field models $`(dw/dz)_0>0`$. In order to constrain $`w(z)`$, we extend current published analyses which have so far considered the existence of a redshift-independent equation of state (e.g., Cooray 1999; Garnavich et al. 1998; Waga & Miceli 1998). Since the only difference between this study and previous ones is that we now allow $`w`$ to vary with $`z`$, formalisms presented in previous papers should also hold except for the fact that $`w`$ is now redshift dependent. We refer the reader to previous work for detailed formulae and calculational methods. Fig. 1. Current constraints on $`w(z)w_0+z(dw/dz)_0`$. The dot-dashed lines are the current type Ia supernovae (Riess et al. 1998; Perlmutter et al. 1998) constraints at the 95% confidence level, while the upper limits from gravitational lensing statistics (Cooray 1999) is shown with solid lines. The age of the Universe as a function of $`H_0t_0`$ is show by dotted lines. A conservative lower limit on this parameter is 0.8 (dashed line) when age of the Universe is $``$ 14 to 15 Gyr and $`H_065`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. The shaded region defines the parameter region allowed by current data. In Fig. 1 we summarize current constraints on $`w(z)`$ as given in Equation 1. Here we have assumed a flat Universe with $`\mathrm{\Omega }_M=1/3`$. Evidence for such low matter density, independent of the nature of the additional energy component, comes primarily from the mass density within galaxy clusters (e.g., Evrard 1997). As shown in Fig 1, there is a wide range of possibilities for $`w(z)`$, and the allowed parameter space is consistent with the degeneracies discussed in literature (e.g., Zlatev et al. 1998). Even though the $`w1/3`$ model has been ruled out by combined type Ia supernovae, gravitational lensing and globular cluster ages, we now note that a model in which $`w_01/3`$ but $`(dw/dz)_00.9`$ is fully consistent with the current observational data. In order to test whether one can constrain $`w(z)`$ better than the current data, we increased the Type Ia supernova sample between redshifts of 0.1 to 1; however, the degeneracy between $`w_0`$ and $`(dw/dz)_0`$ did not change appreciably. Increasing the upper redshift of supernova samples decreased the degeneracies; thus, cosmological probes at high redshifts are needed to properly distinguish redshift-dependent $`w(z)`$ from a constant $`w`$ model. A probe to a much higher redshift is provided by the CMB anisotropy data; however, as pointed out in Huterer & Turner (1998) and Huey et al. (1998), CMB anisotropy is not a strong probe of $`w(z)`$. This is due to the fact that $`w(z)`$ affects mostly the lower multipoles, which cannot be measured precisely due to cosmic variance. Also, supernovae and galaxy clusters are unlikely to constrain $`w(z)`$ in the near future given that current observational programs are not likely to recover them at high redshifts ($`z2`$). Fig. 2. Expected number of multiply imaged quasars with image separations between 1 and 6 arcsecs in the SDSS data. 
We have only counted lensed quasars with at least two images greater than a magnitude limit of 21. The expected number is a constant along the $`\mathrm{\Omega }_M\mathrm{\Omega }_\mathrm{\Lambda }`$ lines. For $`\mathrm{\Omega }_M=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, we expect $``$ 2000 lensed sources to be detected within the SDSS. Thus, an alternative probe to high redshifts is needed. In the next section we consider the possibility of using gravitational lensing statistics, in particular the redshift distribution of strongly lensed sources, as a probe of $`w(z)`$. Such statistics in principle probe the volume of the Universe out to a redshift of $``$ 5. In the past, lensing statistics were hampered by the lack of large samples of lensed sources with a well known selection function and their redshift distributions, which are all needed to constrain $`w(z)`$. An exciting possibility is now provided by the upcoming high-quality data from the Sloan Digital Sky Survey<sup>1</sup><sup>1</sup>1http://www.sdss.org (SDSS; Gunn & Knapp 1993) which is going to image $`\pi `$ steradians of the sky down to a 1-$`\sigma `$ magnitude limit of $``$ 23. Since no high redshift ($`z2`$) cosmological probes yet exist, gravitational lensing statistics may be the prime candidate to study $`w(z)`$. ## 3. Gravitational Lensing Statistics In order to calculate the expected number of lensed quasars, in particular considering the SDSS, we extend previous calculations in Wallington & Narayan (1993; also, Dodelson & MacMinn 1997) and Cheng & Krauss (1998). Our calculation follows that of Cooray et al. (1999) in which we calculated the number of lensed galaxies in the Hubble Deep Field. We follow the magnification bias (e.g., Kochanek 1991) calculation in Cheng & Krauss (1998). We calculate the expected number of lensed sources as a function of the magnitude limit of the SDSS and select sources with image separations between 1 and 6 arcsecs. This range is selected based on image resolution and source confusion limits. Our prediction, shown in Fig. 2 as a function of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, is based on current determination of the quasar luminosity function, which is likely to be updated once adequate quasars statistics are available from the SDSS. As shown, there are about 2000 lensed quasars down to a limiting magnitude of 21 that could in principle be detected from the SDSS data if $`\mathrm{\Omega }_M=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. This number drops to about $``$ 600 when $`\mathrm{\Omega }_M=1.0`$; however, such a cosmological model is already ruled out by current data. In this calculation, we have ignored effects due to extinction and dust; this is an issue with no consensus among various studies (Malhotra et al. 1997; Falco et al. 1998). It is likely that a combined analysis of statistics from ongoing lensed radio sources and optical searches may increase knowledge on such systematic effects. The SDSS lensed quasars lie in redshifts out to $`z`$ 5. Detection of such high redshift lensed sources is likely to be aided by the 5 color imaging data, including the $`z`$-filter. This possibility has already been demonstrated by the detection of some of the highest redshift quasars known today using the SDSS first year test images (Fan et al. 1998). ## 4. 
Lensing Constraints on $`w(z)`$: Prospects In order to estimate the accuracy to which one can constrain $`w(z)`$ based on gravitational lensing statistics, we take a Fisher matrix approach with parameters $`w_0`$ and $`(dw/dz)_0`$, and assume a flat universe. As stated in literature (e.g., Tegmark et al. 1997), Fisher matrix analysis allows one to estimate the best statistical errors on parameters calculated from a given data set. The Fisher matrix $`F`$ is given by: $$F_{ij}=\frac{^2\mathrm{ln}L}{p_ip_j}_𝐱,$$ (2) where $`L`$ is the likelihood of observing data set $`𝐱`$ given the parameters $`p_1\mathrm{}p_n`$. We bin the observations (number of lensed quasars) in redshift bins of width $`\mathrm{\Delta }z`$ out to a maximum redshift $`z_{\mathrm{max}}`$. Since the selection function for quasar discovery process is still unknown, we adopt a Poisson likelihood function at each redshift bin $`\mathrm{\Delta }z`$ centered at redshift $`z`$. The expected number of lensed sources, $`N_{\mathrm{exp}}`$, in each redshift bin takes into account the magnitude limit, range of allowed image separations as well as the magnification bias. With these approximations, we can now write the Fisher matrix as: $$F_{ij}=\underset{\mathrm{\Delta }z}{}\frac{1}{N_{\mathrm{exp}}(z,\mathrm{\Delta }z)}\frac{N_{\mathrm{exp}}(z,\mathrm{\Delta }z)}{p_i}\frac{N_{\mathrm{exp}}(z,\mathrm{\Delta }z)}{p_j}.$$ (3) This form for the Fisher matrix is identical to the one derived from a Gaussian distribution when the uncertainty ($`\sigma `$) of the distribution is taken to be equal to the shot-noise term $`\sqrt{N_{\mathrm{exp}}(z,\mathrm{\Delta }z)}`$. Fig. 3. Constraints (2-$`\sigma `$) on $`w_0`$ and $`(dw/dz)_0`$ using lensed source redshift distribution expected from the SDSS for a fiducial cosmological model of $`w_0=1`$, $`(dw/dz)_0=0`$, and marginalised over $`\mathrm{\Omega }_M`$ ($`\mathrm{\Omega }_M=0.3\pm 0.1`$ and $`\mathrm{\Omega }_Q=1\mathrm{\Omega }_M`$). We show the expected 2$`\sigma `$ errors when the redshift distribution of lensed sources is known with an accuracy of 0.1 and 0.3 while the maximum redshift probed by lensing statistics is 3 and 5. In Fig. 3 we show the expected (2-$`\sigma `$) uncertainties in $`w_0`$ and $`(dw/dz)_0`$. Here we considered a flat universe with three parameters, $`\mathrm{\Omega }_M`$, $`w_0`$, and $`(dw/dz)_0`$. We marginalised over $`\mathrm{\Omega }_M`$ allowing $`\mathrm{\Omega }_M=0.3\pm 0.1`$ (1-$`\sigma `$) and considered a fiducial model with $`w_0=1`$ and $`(dw/dz)_0=0`$. The three curves in Fig. 3 show variation of the constraint region with the redshift bin width $`\mathrm{\Delta }z`$ (which is roughly equal to the uncertainty in the redshift determination) and with the maximum redshift of detected quasars $`z_{\mathrm{max}}`$. Photometric redshifts now allow redshift determinations with an accuracy of order 0.1 68% of the time and of order 0.3 100% of the time (e.g., Hogg et al. 1998). It is apparent that the size of the constraint region decreases significantly with increasing $`z_{\mathrm{max}}`$ and decreasing $`\mathrm{\Delta }z`$. Note that one can quite safely assume that $`z_{\mathrm{max}}5`$ for quasars in a survey such as the SDSS. Additionally, these calculations assume a limiting magnitude of 21, much lower than the expected 1-$`\sigma `$ limiting magnitude of 23 for the SDSS data. Fig. 4. 
Constraints (2-$`\sigma `$) on $`w_0`$ and $`(dw/dz)_0`$ using lensed source redshift distribution expected from the SDSS for four fiducial cosmological models when redshift of lensed sources is known with an accuracy of 0.3 out to a redshift of 5. In all cases we assumed a flat universe and marginalised over $`\mathrm{\Omega }_M`$ ($`\mathrm{\Omega }_M=0.3\pm 0.1`$). Fig. 4 shows constraints in the $`w_0`$$`(dw/dz)_0`$ plane for four fiducial models. Models shown are the cosmological constant ($`w(z)=1`$), non-Abelian cosmic strings ($`w(z)=1/3`$; Spergel & Pen 1997) and two quintessence models exhibiting a variation in $`w`$ at small $`z`$ ($`w(z)=0.5+0.1z`$ and $`w(z)=0.50.05z`$). In all cases we assumed a flat universe and marginalised over $`\mathrm{\Omega }_M`$ ($`\mathrm{\Omega }_M=0.3\pm 0.1`$). We also assumed that redshifts of lensed objects are determined with an accuracy $`\mathrm{\Delta }z=0.3`$ and that we have data out to redshift of $`z_{\mathrm{max}}=5`$. This figure shows that the strength of the constraints depends strongly on the fiducial model. In fact, we found that fiducial models for which $`w_0+z(dw/dz)_01`$ (for $`z`$ of order unity) give weaker constraints. This result is not surprising and can be understood by simply using the fact that number of lensed objects out to a redshift of $`z`$ is roughly proportional to the volume of the universe $`V(z)`$. Since $`dV/dp_i`$ (where $`p_i`$ is either $`w_0`$ or $`(dw/dz)_0`$) is proportional to $`(1+z)^{1+w_0+z(dw/dz)_0}`$, we see that the expected number of lensed quasars varies slowly with $`p_i`$ if $`w_0+z(dw/dz)_01`$. In that case, our constraint region will be relatively large. ## 5. Summary & Conclusions In this Letter, we considered the possibility of constraining quintessence models that have been suggested to explain the missing energy density of the Universe. We suggested gravitational lensing statistics, which can be used as a probe of the equation of state of the missing component, $`w(z)`$. An exciting possibility to obtain an adequate sample of lensed quasars and their redshifts comes from the Sloan Digital Sky Survey. Writing $`w(z)w_0+z(dw/dz)_0`$, we studied the expected accuracy to which equation of state today $`w_0`$ and its rate of change $`(dw/dz)_0`$ can simultaneously be constrained. Adopting some conservative assumptions about the quality of the data from SDSS and assuming a flat universe with $`\mathrm{\Omega }_M=0.3\pm 0.1`$, we conclude that tight constraints on these two parameters can indeed be obtained. The strength of the constraints depends not only on the quality of the lensing data from the SDSS, but also on the fiducial model (true values of $`w_0`$ and $`(dw/dz)_0`$). In particular, fiducial models for which $`w_0+z(dw/dz)_01`$ (for $`z`$ of order unity) give weaker constraints on $`w_0`$ and $`(dw/dz)_0`$. We would like to thank Scott Dodelson, Josh Frieman, Lloyd Knox, Cole Miller, Jean Quashnock and Michael Turner for useful discussions. ARC acknowledges support from McCormick Fellowship at the University of Chicago. We also thank the anonymous referee for his/her prompt refereeing of the paper.
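To make the Fisher-matrix forecast of eq. (3) concrete, here is a toy sketch (ours). The count model n_model is a placeholder invented only so that the code runs — the real N_exp requires the full lensing-statistics calculation of § 3 — but the binning out to z_max = 5 with Δz = 0.5, the Gaussian prior Ω_M = 0.3 ± 0.1, and the marginalised 2σ errors on w0 and (dw/dz)0 mirror the assumptions stated above.

```python
import numpy as np

z_bins = np.arange(0.25, 5.0, 0.5)            # bin centres, dz = 0.5 out to z_max = 5

def n_model(z, w0, w1, Om):
    """Placeholder expected counts per bin (NOT the paper's lensing model)."""
    return 200 * np.exp(-z) * (1 + 0.5*(w0 + 1)*z + 0.2*w1*z**2 - 0.8*(Om - 0.3)*z)

def fisher(params, steps, prior_sigmas):
    """Poisson Fisher matrix of eq. (3), with optional Gaussian priors added."""
    p0 = np.array(params, float)
    counts = lambda p: n_model(z_bins, *p)
    grads = []
    for i, h in enumerate(steps):              # numerical derivatives dN/dp_i
        dp = np.zeros_like(p0); dp[i] = h
        grads.append((counts(p0 + dp) - counts(p0 - dp)) / (2 * h))
    N0 = counts(p0)
    F = np.array([[np.sum(gi * gj / N0) for gj in grads] for gi in grads])
    F += np.diag([0 if s is None else 1 / s**2 for s in prior_sigmas])
    return F

F = fisher(params=(-1.0, 0.0, 0.3), steps=(0.01, 0.01, 0.01),
           prior_sigmas=(None, None, 0.1))     # Om = 0.3 +/- 0.1 prior, then marginalised
cov = np.linalg.inv(F)
print("2-sigma errors:  w0 =", 2*np.sqrt(cov[0, 0]), "  (dw/dz)_0 =", 2*np.sqrt(cov[1, 1]))
```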
no-problem/9901/quant-ph9901054.html
# Untitled Document Controlled quantum evolutions and stochastic mechanicsPaper presented at the 7<sup>th</sup> UK Conference on Mathematical and Conceptual Foundations of Modern Physics; Nottingham (UK) 7-11 September, 1998. Nicola Cufaro Petroni INFN Sezione di Bari and Dipartimento di Fisica dell’Università di Bari, via Amendola 173, 70126 Bari (Italy) CUFARO@BARI.INFN.IT Salvatore De Martino, Silvio De Siena and Fabrizio Illuminati INFN Sezione di Napoli - Gruppo collegato di Salerno and Dipartimento di Fisica dell’Università di Salerno via S.Allende, 84081 Baronissi, Salerno (Italy) DEMARTINO@PHYSICS.UNISA.IT, DESIENA@PHYSICS.UNISA.IT ILLUMINATI@PHYSICS.UNISA.IT ABSTRACT: We perform a detailed analysis of the non stationary solutions of the evolution (Fokker-Planck) equations associated to either stationary or non stationary quantum states by the stochastic mechanics. For the excited stationary states of quantum systems with singular velocity fields we explicitely discuss the exact solutions for the HO case. Moreover the possibility of modifying the original potentials in order to implement arbitrary evolutions ruled by these equations is discussed with respect to both possible models for quantum measurements and applications to the control of particle beams in accelerators. 1. Introduction In a few papers the analogy between diffusive classical systems and quantum systems has been reconsidered from the standpoint of the stochastic mechanics (SM) , , and particular attention was devoted there to the evolution of the classical systems associated to a quantum wave function when the conditions imposed by the stochastic variational principle are not satisfied (non extremal processes). The hypothesis that the evolving distribution converges in time toward the quantum distribution, constituted several years ago an important point in the answer by Bohm and Vigier to some criticisms to the assumptions of the Causal Interpretation of the Quantum Mechanics (CIQM) . In the quoted papers it was pointed out that, while the right convergence was in fact achieved for a few quantum examples, these results could not be considered general as shown in some counterexamples: in fact not only for particular non stationary wave functions (as for a minimal uncertainty packet), but also for stationary states with nodes (namely with zeros) we do not seem to get the right asymptotic behaviour. For stationary states with nodes the problem is that the corresponding velocity field to consider in the Fokker-Planck equation shows singularities in the locations of the nodes of the wave function. These singularities effectively separate the available interval of the space variables into (probabilistically) non communicating sections which trap any amount of probability initially attributed and make the system non ergodic. In a more recent paper it has been shown first of all that for transitive systems with stationary velocity fields (as, for example, a stationary state without nodes) we always have an exponential convergence to the right quantum probability distribution associated to the extremal process, even if we initially start from an arbitrary non extremal process. These results can also be extended to an arbitrary stationary state if we separately consider the process as confined in every configuration space region between two subsequent nodes. 
Moreover it has been remarked there that while the non extremal processes should be considered virtual, as trajectories in the classical Lagrangian mechanics, they can also be turned real if we modify the potential in a suitable way. The interest of this remark lies not only in the fact that non extremal processes are exactly what is lacking in quantum mechanics in order to interpret it as a totally classical stochastic process theory (for example in order to have a classical picture of a double slit experiment ), but also in the possibility of engineering some new controlled real evolutions of quantum states. In fact this could be useful to study (a) transitions between stationary states (b) possible models for measure theory and (c) control of the particle beam dynamics in accelerators . In a sense the SM is also a theory, independent from quantum mechanics, with applications in several physical fields, in particular for systems not perfectly described by the quantum formalism, but whose evolution is correctly controlled by quantum fluctuation: the so called mesoscopic or quantum-like systems. This behaviour characterizes, for example, the beam dynamics in particle accelerators and there is evidence that it could be described by the stochastic formalism of Nelson diffusions , . Of course in this model trajectories and transition probabilities always are perfectly meaningful and, to study in detail the evolution of the probability distributions, and in particular to try to understand if and how it is possible to realize controlled evolutions, it is necessary to determine the fundamental solutions (transition probability densities) associated by SM to every quantum state in consideration: a problem dealt with in the following sections. 2. Fokker-Planck equations for stochastic mechanics SM is a generalization of classical mechanics based on the theory of classical stochastic processes . The variational principles of Lagrangian type provide a foundation for it, as for the classical mechanics or the field theory . In this scheme the deterministic trajectories of classical mechanics are replaced by the random trajectories of diffusion processes in the configuration space. The surprisig feature is that programming equations derived from the stochastic version of the lagrangian principle are formally identical to the equations of a Madelung fluid , the hydrodynamical equivalent of the Schrödinger equation in the Stochastic Interpretation of the Quantum Mechanics (SIQM) . On this basis, it is possible to develop an interpretative scheme where the phenomenological predictions of SM coincide with that of quantum mechanics for all the experimentally measurable quantities. Within this interpretative code the SM is nothing but a quantization procedure, different from the ordinary ones only formally, but completely equivalent from the point of view of the physical consequences. Hence we consider here the SM as a probabilistic simulation of quantum mechanics, providing a bridge between this fundamental section of physics and the stochastic differential calculus. 
However it is well known that the most peculiar features of the involved stochastic processes, namely the transition probability densities, do not always seem to enter into this code scheme: in fact, if we want to check experimentally if the transition probabilities are the right ones for a given quantum state, we are obliged to perform repeated position measurements on the quantum system; but, according to quantum theory, the quantum state changes at every measurement (wave packet reduction), and since our transition probabilities are associated to a well defined wave function it will be in general practically impossible to experimentally observe a well defined transition probability. Several ways out of these difficulties have been explored: for example the stochastic mechanics scheme could be modified by means of non constant diffusion coefficients; or alternatively it would be possible to modify the stochastic evolution during the measurement. Here we will rather assume that the processes which do not satisfy the stochastic variational principle still keep a physical meaning and that they will rapidly converge (in time) toward the processes associated to quantum states. Indeed on the one hand any departure from the distributions of quantum mechanics will quickly be reabsorbed in the time evolution, at least in many meaningful cases; and on the other hand the non standard evolving distributions could be realized by suitable quantum systems for modified, time dependent potentials which may asymptotically in time rejoin the usual potentials.

SM is a model intended to achieve a connection between quantum mechanics and classical random phenomena: here we will recall a few notions in order to fix the notation. The position of a classical particle is promoted to a vector Markov process $`\xi (t)`$ defined on some probabilistic space $`(\mathrm{\Omega },,𝐏)`$ and taking values in $`𝐑^3`$. We suppose that this process is characterized by a pdf $`f(𝐫,t)`$ and a transition pdf $`p(𝐫,t|𝐫^{},t^{})`$ and satisfies an Itô stochastic differential equation of the form $$d\xi _j(t)=v_j(\xi (t),t)dt+d\eta _j(t)$$ $`(2.1)`$ where $`v_j`$ are the components of the forward velocity field. However here $`v_j`$ are not given a priori, but play the role of dynamical variables and are subsequently determined on the basis of a variational principle, namely on the basis of a dynamics. On the other hand $`\eta (t)`$ is a Brownian process independent of $`\xi (t)`$ and such that $$𝐄_t\left(d\eta _j(t)\right)=0,𝐄_t\left(d\eta _j(t)d\eta _k(t)\right)=2D\delta _{jk}dt$$ $`(2.2)`$ where $`d\eta (t)=\eta (t+dt)-\eta (t)`$ (for $`dt>0`$), $`D`$ is a diffusion coefficient, and $`𝐄_t`$ are the conditional expectations with respect to $`\xi (t)`$. In what follows we will limit ourselves to the case of one dimensional trajectories, so that the Markov processes $`\xi (t)`$ considered will always take values in $`𝐑`$. Moreover we will suppose for the time being that the forces acting on the particle will be defined by means of a time-independent potential $`V(x)`$. A suitable definition of the Lagrangian and of the stochastic action functional for the system described by the dynamical variables $`f`$ and $`v`$ allows us to select, by means of the principle of stationarity of the action, the processes which reproduce quantum mechanics.
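A minimal simulation sketch (ours, not part of the paper) of the Itô equation (2.1)-(2.2) by the Euler-Maruyama method; the linear drift used in the demonstration is the one that section 5 will associate with the harmonic-oscillator ground state, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(v, D, x0, dt, n_steps, n_paths):
    """Sample paths of the Ito equation (2.1): dx = v(x,t) dt + d(eta),
    with E[d(eta)] = 0 and E[d(eta)^2] = 2*D*dt as in (2.2)."""
    x = np.full(n_paths, x0, float)
    for k in range(n_steps):
        t = k * dt
        x += v(x, t) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)
    return x

# Illustrative drift v(x) = -omega*x (the forward velocity of the HO ground state
# in section 5), with D = hbar/(2m) set to 0.5 in these units.
omega, D = 1.0, 0.5
xT = euler_maruyama(lambda x, t: -omega * x, D, x0=2.0, dt=1e-3, n_steps=5000, n_paths=20000)
print("sample mean and variance at T = 5:", xT.mean(), xT.var())   # variance -> D/omega = sigma0^2
```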
In fact, while the pdf $`f(x,t)`$ of the process satisfies, as usual, the Forward Fokker-Planck (FP) equation associated to (2.1) $$_tf=D_x^2f_x(vf)=_x(D_xfvf),$$ $`(2.3)`$ the following choice for the Lagrangian field $$(x,t)=\frac{m}{2}v^2(x,t)+mD_xv(x,t)V(x)$$ $`(2.4)`$ enables us to define a stochastic action funcional $$𝒜=_{t_0}^{t_1}𝐄(\xi (t),t)𝑑t$$ $`(2.5)`$ which leads, through the stationarity condition $`\delta 𝒜=0`$, to the equation $$_tS+\frac{(_xS)^2}{2m}+V2mD^2\frac{_x^2\sqrt{f}}{\sqrt{f}}=0$$ $`(2.6)`$ involving a field $`S(x,t)`$ defined as $$S(x,t)=_t^{t_1}𝐄\left((\xi (s),s)|\xi (t)=x\right)𝑑s+𝐄\left(S_1\left(\xi (t_1)\right)|\xi (t)=x\right)$$ $`(2.7)`$ where $`S_1()=S(,t_1)`$ is an arbitrary final condition. Now the relevant remark is that if $`R(x,t)=\sqrt{f(x,t)}`$, and if we define $$\psi (x,t)=R(x,t)\mathrm{e}^{iS(x,t)/\mathrm{}}$$ $`(2.8)`$ the equation (2.6) takes the form $$_tS+\frac{(_xS)^2}{2m}+V\frac{\mathrm{}^2}{2m}\frac{_x^2R}{R}=0,$$ $`(2.9)`$ and the complex wave function $`\psi `$ will satisfy the Schrödinger equation $$i\mathrm{}_t\psi =\widehat{H}\psi =\frac{\mathrm{}^2}{2m}_x^2\psi +V\psi ,$$ $`(2.10)`$ provided that the diffusione coefficient be connected to the Planck constant by the relation $$D=\frac{\mathrm{}}{2m}.$$ $`(2.11)`$ This trail leading from classical stochastic processes (plus a dynamics) to quantum mechanics can also be trod in the reverse way following the line of reasoning of the SIQM which, as it is well known, is formally ruled by the same differential equations as the SM. If we start from the (one dimensional) Schrödinger equation (2.10) with the Ansatz (2.8), and if we separate the real and the imaginary parts as usual in SIQM , the function $`f=R^2=|\psi |^2`$ comes out to be a particular solution of a FP equation of the form (2.3) with constant diffusion coefficient (2.11) and forward velocity field $$v(x,t)=\frac{1}{m}_xS+\frac{\mathrm{}}{2m}_x(\mathrm{ln}R^2).$$ $`(2.12)`$ On the other hand the explicit dependence of $`v`$ on the form of $`R`$ clearly indicates that to have a solution of (2.3) which makes quantum sense we must pick-up just one, suitable, particular solution. In fact the system is ruled not only by the FP equation (2.3), but also by the second, dynamical equation (2.9), the so-called Hamilton-Jacobi-Madelung (HJM) equation, deduced by separating the real and imaginary parts of (2.10) (see ). The analogy between (2.3) and a FP equation, which looks rather accidental in a purely SIQM context, is more than formal since, as we have briefly recalled, the SM shows how to recover both the equations (2.3) and (2.9) (and hence the Schrödinger equation (2.10)) in a purely classical, dynamical stochastic context. 3. The eigenvalue problem for the FP equation Let us recall here (see for example ) a few generalities about the pdf’s (probability density functions) $`f(x,t)`$ solutions of a one-dimensional FP equation of the form $$_tf=_x^2(Df)_x(vf)=_x\left[_x(Df)vf\right]$$ $`(3.1)`$ defined for $`x[a,b]`$ and $`tt_0`$, when $`D(x)`$ and $`v(x)`$ are two time independent functions such that $`D(x)>0`$, $`v(x)`$ has no singularities in $`(a,b)`$, and both are continuous and differentiable functions. 
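Since what follows leans on this correspondence, it may help to verify the splitting explicitly. The short symbolic check below (ours) confirms that the residual of the Schrödinger equation (2.10) for ψ = R e^{iS/ℏ} decomposes exactly into the HJM equation (2.9) and the continuity equation for f = R², the latter being equivalent to the FP equation (2.3) with the forward velocity (2.12).

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
R = sp.Function('R')(x, t)          # amplitude, R = sqrt(f)
S = sp.Function('S')(x, t)          # phase
V = sp.Function('V')(x)
psi = R * sp.exp(sp.I * S / hbar)

# residual of the Schrodinger equation (2.10)
schr = sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2) - V * psi

# eq. (2.9) (Hamilton-Jacobi-Madelung) and the continuity equation for f = R^2
# (equivalent to the FP equation (2.3) with the forward velocity (2.12))
hjm = sp.diff(S, t) + sp.diff(S, x)**2 / (2 * m) + V - hbar**2 / (2 * m) * sp.diff(R, x, 2) / R
cont = sp.diff(R, t) + sp.diff(R, x) * sp.diff(S, x) / m + R * sp.diff(S, x, 2) / (2 * m)

# The Schrodinger residual is exactly  e^{iS/hbar} * ( -R*hjm + i*hbar*cont ):
identity = schr - sp.exp(sp.I * S / hbar) * (-R * hjm + sp.I * hbar * cont)
print(sp.simplify(sp.expand(identity)))     # -> 0
```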
The conditions imposed on the probabilistic solutions are of course $$\begin{array}{cc}\hfill f(x,t)0,& a<x<b,t_0t,\hfill \\ \hfill _a^bf(x,t)𝑑x=1,& t_0t,\hfill \end{array}$$ $`(3.2)`$ and from the form of (3.1) the second condition also takes the form $$\left[_x(Df)vf\right]_{a,b}=0,t_0t.$$ $`(3.3)`$ Suitable initial conditions will be added to produce the required evolution: for example the transition pdf $`p(x,t|x_0,t_0)`$ will be selected by the initial condition $$\underset{tt_0^+}{lim}f(x,t)=f(x,t_0^+)=\delta (xx_0).$$ $`(3.4)`$ It is also possible to show by direct calculation that $$h(x)=N^1\mathrm{e}^{{\scriptscriptstyle [D^{}(x)v(x)]/D(x)𝑑x}},N=_a^b\mathrm{e}^{{\scriptscriptstyle [D^{}(x)v(x)]/D(x)𝑑x}}𝑑x$$ $`(3.5)`$ is an invariant (time independent) solution of (3.1) satisfying the conditions (3.2). Remark however that (3.1) is not in the standard self-adjoint form ; but if we define the function $`g(x,t)`$ by means of $$f(x,t)=\sqrt{h(x)}g(x,t)$$ $`(3.6)`$ it would be easy to show that $`g(x,t)`$ obeys now an equation of the form $$_tg=g$$ $`(3.7)`$ where the operator $``$ defined by $$\phi =\frac{d}{dx}\left[p(x)\frac{d\phi (x)}{dx}\right]q(x)\phi (x),$$ $`(3.8)`$ with $$\begin{array}{cc}\hfill p(x)& =D(x)>0,\hfill \\ \hfill q(x)& =\frac{\left[D^{}(x)v(x)\right]^2}{4D(x)}\frac{\left[D^{}(x)v(x)\right]^{}}{2},\hfill \end{array}$$ $`(3.9)`$ is now self-adjoint. Then, by separating the variables by means of $`g(x,t)=\gamma (t)G(x)`$ we have $`\gamma (t)=\mathrm{e}^{\lambda t}`$ while $`G`$ must be solution of a typical Sturm-Liouville problem associated to the equation $$G(x)+\lambda G(x)=0$$ $`(3.10)`$ with the boundary conditions $$\begin{array}{cc}& \left[D^{}(a)v(a)\right]G(a)+2D(a)G^{}(a)=0,\hfill \\ & \left[D^{}(b)v(b)\right]G(b)+2D(b)G^{}(b)=0.\hfill \end{array}$$ $`(3.11)`$ It easy to see that $`\lambda =0`$ is always an eigenvalue for the problem (3.10) with (3.11), and that the corresponding eigenfunction is $`\sqrt{h(x)}`$ as defined from (3.5). For the differential problem (3.10) with (3.11) we have that the simple eigenvalues $`\lambda _n`$ will constitute an infinite, increasing sequence and the corresponding eigenfunction $`G_n(x)`$ will have $`n`$ simple zeros in $`(a,b)`$. For us this means that $`\lambda _0=0`$, corresponding to the eigenfunction $`G_0(x)=\sqrt{h(x)}`$ which never vanishes in $`(a,b)`$, is the lowest eigenvalue and that all other eigenvalues are strictly positive. Moreover the eigenfunctions will constitute a complete orthonormal set of functions in $`L^2\left([a,b]\right)`$ . As a consequence the general solution of (3.1) with (3.2) will have the form $$f(x,t)=\underset{n=0}{\overset{\mathrm{}}{}}c_n\mathrm{e}^{\lambda _nt}\sqrt{h(x)}G_n(x)$$ $`(3.12)`$ with $`c_0=1`$ for normalization (remember that $`\lambda _0=0`$). 
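The spectral structure just described (lambda_0 = 0, all other eigenvalues strictly positive, relaxation to the invariant solution) can be checked numerically in the simplest case. The finite-difference sketch below (ours, not part of the paper) discretises the FP operator of (3.1) with zero-flux ends for the drift v(x) = -omega*x, which section 5 identifies with the HO ground state; the known Ornstein-Uhlenbeck spectrum is lambda_n = n*omega.

```python
import numpy as np

# Drift v(x) = -omega*x with D = omega*sigma0^2 (omega = sigma0 = 1 here).
D, omega, L, n = 1.0, 1.0, 8.0, 600
x, dx = np.linspace(-L, L, n, retstep=True)

A = np.zeros((n, n))                      # discretised generator of eq. (3.1)
for i in range(n - 1):                    # probability flux through face i+1/2
    vf = -omega * 0.5 * (x[i] + x[i + 1])     # drift evaluated at the face
    # F = D*(f[i+1]-f[i])/dx - vf*(f[i]+f[i+1])/2 ; zero flux at the two ends
    A[i, i]         += (-D / dx - vf / 2) / dx
    A[i, i + 1]     += ( D / dx - vf / 2) / dx
    A[i + 1, i]     -= (-D / dx - vf / 2) / dx
    A[i + 1, i + 1] -= ( D / dx - vf / 2) / dx

evals = np.sort(np.linalg.eigvals(A).real)[::-1]
print("leading eigenvalues (-lambda_n), approximately 0, -1, -2, -3:", np.round(evals[:4], 3))
```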
The coefficients $`c_n`$ for a particular solution are selected by an initial condition $$f(x,t_0^+)=f_0(x)$$ $`(3.13)`$ and are then calculated from the orthonormality relations as $$c_n=_a^bf_0(x)\frac{G_n(x)}{\sqrt{h(x)}}𝑑x.$$ $`(3.14)`$ In particular for the transition pdf we have from (3.4) that $$c_n=\frac{G_n(x_0)}{\sqrt{h(x_0)}}.$$ $`(3.15)`$ Since $`\lambda _0=0`$ and $`\lambda _n>0`$ for $`n1`$, the general solution (3.12) of (3.1) has a precise time evolution: all the exponential factors in (3.12) vanish with $`t+\mathrm{}`$ with the only exception of the term $`n=0`$ which is constant, so that exponentially fast we will always have $$\underset{t+\mathrm{}}{lim}f(x,t)=c_0\sqrt{h(x)}G_0(x)=h(x),$$ $`(3.16)`$ namely the general solution will always relax in time toward the invariant solution $`h(x)`$. 4. Stationary quantum states Let us consider now a Schrödinger equation (2.10) with a time-independent potential $`V(x)`$ which gives rise to a purely discrete spectrum and bound, normalizable states, and let us use the following notations for stationary states, eigenvalues and eigenfunctions: $$\begin{array}{cc}\hfill \psi _n(x,t)& =\varphi _n(x)\mathrm{e}^{iE_nt/\mathrm{}}\hfill \\ \hfill \widehat{H}\varphi _n& =\frac{\mathrm{}^2}{2m}\varphi _n^{\prime \prime }+V\varphi _n=E_n\varphi _n.\hfill \end{array}$$ $`(4.1)`$ Taking into account the relation (2.11) the previous eigenvalue equation can also be recast in the following form $$D\varphi _n^{\prime \prime }=\frac{VE_n}{\mathrm{}}\varphi _n.$$ $`(4.2)`$ For these stationary states the pdf is the time independent, real function $$f_n(x)=|\psi _n(x,t)|^2=\varphi _n^2(x),$$ $`(4.3)`$ and $$S(x,t)=E_nt,R(x,t)=\varphi _n(x),$$ $`(4.4)`$ so that for our state the velocity field is $$v_n(x)=2D\frac{\varphi _n^{}(x)}{\varphi _n(x)}.$$ $`(4.5)`$ This means that now $`v_n`$ is time-independent and it presents singularities in the zeros (nodes) of the eigenfunction. Since the $`n`$-th eigenfunction of a quantum system with bound states has exactly $`n`$ simple nodes that we will indicate with $`x_1,\mathrm{},x_n`$, the coefficients of the FP equation (2.3) are not defined in these $`n`$ points and we will be obliged to solve it in separate intervals by imposing the right boundary conditions connecting the different sections. In fact these singularities effectively separate the real axis in $`n+1`$ sub-intervals with walls impenetrable to the probability current. Hence the process will not have an unique invariant measure and will never cross the boundaries fixed by the singularities of $`v(x)`$: if we start in one of the intervals in which the axis is so divided we will always remain there . As a consequence we must think the normalization integral (3.2) (with $`a=\mathrm{}`$ and $`b=+\mathrm{}`$) as the sum of $`n+1`$ integrals over the sub-intervals $`[x_k,x_{k+1}]`$ with $`k=0,1,\mathrm{},n`$ (where we understand, to unificate the notation, that $`x_0=\mathrm{}`$ and $`x_{n+1}=+\mathrm{}`$). Hence for $`n1`$ we will be obliged to solve the equation (2.3) in every interval $`[x_k,x_{k+1}]`$ by requiring that the integrals $$_{x_k}^{x_{k+1}}f(x,t)𝑑x$$ $`(4.6)`$ be kept at a constant value for $`tt_0`$: this value is not, in general, equal to one (only the sum of these $`n+1`$ integrals amounts to one) and, since the separate intervals can not communicate, it will be fixed by the choice of the initial conditions. 
Hence the boundary conditions associated to (2.3) require the conservation of the probability in $`[x_k,x_{k+1}]`$, namely the vanishing of the probability current at the end points of the interval: $$\left[D_xfvf\right]_{x_k,x_{k+1}}=0,tt_0.$$ $`(4.7)`$ To have a particular solution we must moreover specify the initial conditions: in particular we will be interested in the transition pdf $`p(x,t|x_0,t_0)`$, which is singled out by the initial condition (3.4), since the asymptotic approximation in $`L^1`$ among solutions of (2.3) is ruled by the asymptotic behavior of $`p(x,t|x_0,t_0)`$ through the Chapman-Kolmogorov equation $$f(x,t)=_{\mathrm{}}^+\mathrm{}p(x,t|y,t_0)f(y,t_0^+)𝑑y.$$ $`(4.8)`$ It is clear at this point that in every interval $`[x_k,x_{k+1}]`$ (both finite or infinite) we can solve the equation (2.3) along the guidelines sketched in the section 3 by keeping in mind that in $`[x_k,x_{k+1}]`$ we already know the invariant, time-independent solution $`\varphi _n^2(x)`$ (or, more precisely, its restriction to the said interval) which is never zero in this interval with the exception of the extremes $`x_k`$ and $`x_{k+1}`$. Hence, as we have seen in the general case, with the position $$f(x,t)=\varphi _n(x)g(x,t)$$ $`(4.9)`$ we can reduce (2.3) to the form $$_tg=_ng$$ $`(4.10)`$ where $`_n`$ is now the self-adjoint operator defined on $`[x_k,x_{k+1}]`$ by $$_n\phi (x)=\frac{\mathrm{d}}{\mathrm{d}x}\left[p(x)\frac{\mathrm{d}\phi (x)}{\mathrm{d}x}\right]q_n(x)\phi (x)$$ $`(4.11)`$ where we have now $$p(x)=D>0;q_n(x)=\frac{v_n^2(x)}{4D}+\frac{v_n^{}(x)}{2}.$$ $`(4.12)`$ To solve (4.10) it is in general advisable to separate the variables, so that we immediately have $`\gamma (t)=\mathrm{e}^{\lambda t}`$ while $`G`$ must be solution of the Sturm-Liouville problem associated to the equation $$_nG(x)+\lambda G(x)=0$$ $`(4.13)`$ with the boundary conditions $$\left[2DG^{}(x)v_n(x)G(x)\right]_{x_k,x_{k+1}}=0.$$ $`(4.14)`$ The general behaviour of the solutions obtained as expansions in the system of the eigenfunctions of (4.13) has already been discussed in section 3. In particular we deduce from (3.12) that for the stationary quantum states (more precisely, in every subinterval defined by two subsequent nodes) all the solutions of (2.3) always converge in time toward the right quantum solution $`|\varphi _n|^2`$: a general result not contained in the previous papers . As a further consequence a quantum solution $`\varphi _n^2`$ defined on the entire interval $`(\mathrm{},+\mathrm{})`$ will be stable under deviations from its initial condition. 5. Harmonic oscillator To see in an explicit way how the pdf’s of SM evolve, let us consider now in detail the particular example of a quantum harmonic oscillator (HO) characterized by the potential $$V(x)=\frac{m}{2}\omega ^2x^2.$$ $`(5.1)`$ It is well-known that its eigenvalues are $$E_n=\mathrm{}\omega \left(n+\frac{1}{2}\right);n=0,1,2\mathrm{}$$ $`(5.2)`$ while, with the notation $$\sigma _0^2=\frac{\mathrm{}}{2m\omega },$$ $`(5.3)`$ the eigenfuncions are $$\varphi _n(x)=\frac{1}{\sqrt{\sigma _0\sqrt{2\pi }2^nn!}}\mathrm{e}^{x^2/4\sigma _0^2}H_n\left(\frac{x}{\sigma _0\sqrt{2}}\right)$$ $`(5.4)`$ where $`H_n`$ are the Hermite polynomials. 
The corresponding velocity fields are easily calculated and are for example $$\begin{array}{cc}\hfill v_0(x)& =-\omega x,\hfill \\ \hfill v_1(x)& =2\frac{\omega \sigma _0^2}{x}-\omega x,\hfill \\ \hfill v_2(x)& =4\omega \sigma _0^2\frac{x}{x^2-\sigma _0^2}-\omega x,\hfill \end{array}$$ $`(5.6)`$ with singularities in the zeros $`x_k`$ of the Hermite polynomials. If we now keep the form of the velocity fields fixed we can consider (2.3) as an ordinary FP equation for a diffusion process and solve it to see the approach to equilibrium of the general solutions. When $`n=0`$ the equation (2.3) takes the form $$\partial _tf=\omega \sigma _0^2\partial _x^2f+\omega x\partial _xf+\omega f$$ $`(5.7)`$ and the fundamental solution comes out to be the Ornstein-Uhlenbeck transition pdf $$p(x,t|x_0,t_0)=\frac{1}{\sigma (t)\sqrt{2\pi }}\mathrm{e}^{-[x-\alpha (t)]^2/2\sigma ^2(t)},(t\ge t_0)$$ $`(5.8)`$ where we used the notation $$\alpha (t)=x_0\mathrm{e}^{-\omega (t-t_0)},\sigma ^2(t)=\sigma _0^2\left[1-\mathrm{e}^{-2\omega (t-t_0)}\right],(t\ge t_0).$$ $`(5.9)`$ The stationary Markov process associated to the transition pdf (5.8) is selected by the initial, invariant pdf $$f(x)=\frac{1}{\sigma _0\sqrt{2\pi }}\mathrm{e}^{-x^2/2\sigma _0^2}$$ $`(5.10)`$ which is also the asymptotic pdf for every other initial condition when the evolution is ruled by (5.7) (see ) so that the invariant distribution also plays the role of the limit distribution. Since this invariant pdf also coincides with the quantum stationary pdf $`\varphi _0^2=|\psi _0|^2`$ the process associated by the SM to the ground state of a quantum HO is nothing but the stationary Ornstein-Uhlenbeck process. For $`n\ge 1`$ the solutions of (2.3) are no longer so easy to find and, as discussed in the previous section, we will have to solve the eigenvalue problem (4.13) which, with $`ϵ=\hbar \lambda `$, can be written as $$-\frac{\hbar ^2}{2m}G^{\prime \prime }(x)+\left(\frac{m}{2}\omega ^2x^2-\hbar \omega \frac{2n+1}{2}\right)G(x)=ϵG(x),$$ $`(5.11)`$ in every interval $`[x_k,x_{k+1}]`$, with $`k=0,1,\dots ,n`$, between two subsequent singularities of the $`v_n`$ field. The boundary conditions at the endpoints of these intervals, deduced from (4.7) through (4.9), are $$[\varphi _nG^{\prime }-\varphi _n^{\prime }G]_{x_k,x_{k+1}}=0$$ $`(5.12)`$ and since $`\varphi _n`$ (but not $`\varphi _n^{\prime }`$) vanishes in $`x_k,x_{k+1}`$, the conditions to impose are $$G(x_k)=G(x_{k+1})=0$$ $`(5.13)`$ where it is understood that for $`x_0`$ and $`x_{n+1}`$ we respectively mean $$\underset{x\to -\mathrm{\infty }}{lim}G(x)=0,\underset{x\to +\mathrm{\infty }}{lim}G(x)=0.$$ $`(5.14)`$ It is also useful at this point to give the eigenvalue problem in an adimensional form by using the new adimensional variable $`x/\sigma _0`$ (which will still be called $`x`$) and the eigenvalue $`\mu =\lambda /\omega =ϵ/\hbar \omega `$. In this way the equation (5.11) with the conditions (5.13) becomes $$\begin{array}{cc}\hfill y^{\prime \prime }(x)-\left(\frac{x^2}{4}-\frac{2n+1}{2}-\mu \right)y(x)& =0\hfill \\ \hfill y(x_k)=y(x_{k+1})& =0\hfill \end{array}$$ $`(5.15)`$ where $`x,x_k,x_{k+1}`$ are now adimensional variables.
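As a consistency check (ours), one can verify symbolically that the transition pdf (5.8)-(5.9) indeed solves the Fokker-Planck equation (5.7):

```python
import sympy as sp

x, x0, t, t0 = sp.symbols('x x_0 t t_0', real=True)
omega, sigma0 = sp.symbols('omega sigma_0', positive=True)

alpha = x0 * sp.exp(-omega * (t - t0))                       # eq. (5.9)
sigma2 = sigma0**2 * (1 - sp.exp(-2 * omega * (t - t0)))
p = sp.exp(-(x - alpha)**2 / (2 * sigma2)) / sp.sqrt(2 * sp.pi * sigma2)   # eq. (5.8)

# residual of eq. (5.7):  d_t p - [ omega*sigma0^2 d_x^2 p + omega*x d_x p + omega*p ]
residual = sp.diff(p, t) - (omega * sigma0**2 * sp.diff(p, x, 2)
                            + omega * x * sp.diff(p, x) + omega * p)
print(sp.simplify(residual / p))      # -> 0
```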
If $`\mu_m`$ and $`y_m(x)`$ are the eigenvalues and eigenfunctions of (5.15), the general solution of the corresponding FP equation (2.3) will be
$$f(x,t)=\sum_{m=0}^{\infty}c_m\,\mathrm{e}^{-\mu_m\omega t}\,\varphi_n(x)\,y_m\left(\frac{x}{\sigma_0}\right).$$ $`(5.16)`$
Of course the values of the coefficients $`c_m`$ will be fixed by the initial conditions and by the obvious requirements that $`f(x,t)`$ must be non-negative and normalized (on the whole $`x`$ axis) along all its evolution. Two linearly independent solutions of (5.15) are
$$y^{(1)}=\mathrm{e}^{-x^2/4}M\left(-\frac{\mu+n}{2},\frac{1}{2};\frac{x^2}{2}\right),\qquad y^{(2)}=x\,\mathrm{e}^{-x^2/4}M\left(-\frac{\mu+n-1}{2},\frac{3}{2};\frac{x^2}{2}\right),$$ $`(5.17)`$
where $`M(a,b;z)`$ are the confluent hypergeometric functions. We consider first the case $`n=1`$ ($`x_0=-\infty`$, $`x_1=0`$ and $`x_2=+\infty`$) so that (5.15) will have to be solved separately for $`x\le 0`$ and for $`x\ge 0`$ with the boundary conditions $`y(0)=0`$ and
$$\lim_{x\to-\infty}y(x)=\lim_{x\to+\infty}y(x)=0.$$ $`(5.18)`$
A long calculation shows that the transition pdf is now
$$p(x,t|x_0,t_0)=\frac{x}{\alpha(t)}\,\frac{\mathrm{e}^{-[x-\alpha(t)]^2/2\sigma^2(t)}-\mathrm{e}^{-[x+\alpha(t)]^2/2\sigma^2(t)}}{\sigma(t)\sqrt{2\pi}}$$ $`(5.19)`$
where $`\alpha(t)`$ and $`\sigma^2(t)`$ are defined in (5.9). It must be remarked, however, that (5.19) must be considered as restricted to $`x\ge 0`$ when $`x_0>0`$ and to $`x\le 0`$ when $`x_0<0`$, and that only on these intervals is it suitably normalized. In order to take both these possibilities into account at once we can also introduce the Heaviside function $`\mathrm{\Theta}(x)`$ so that for every $`x_0\ne 0`$ we will have
$$p(x,t|x_0,t_0)=\mathrm{\Theta}(xx_0)\,\frac{x}{\alpha(t)}\,\frac{\mathrm{e}^{-[x-\alpha(t)]^2/2\sigma^2(t)}-\mathrm{e}^{-[x+\alpha(t)]^2/2\sigma^2(t)}}{\sigma(t)\sqrt{2\pi}}.$$ $`(5.20)`$
This completely solves the problem for $`n=1`$ since from (4.8) we can now deduce also the evolution of every other initial pdf. In particular it can be shown that
$$\lim_{t\to+\infty}p(x,t|x_0,t_0)=2\mathrm{\Theta}(xx_0)\,\frac{x^2\mathrm{e}^{-x^2/2\sigma_0^2}}{\sigma_0^3\sqrt{2\pi}}=2\mathrm{\Theta}(xx_0)\,\varphi_1^2(x),$$ $`(5.21)`$
and hence, if $`f(x,t_0^+)=f_0(x)`$ is the initial pdf, we have for $`t>t_0`$
$$\begin{array}{rl}\lim_{t\to+\infty}f(x,t)&=\displaystyle\lim_{t\to+\infty}\int_{-\infty}^{+\infty}p(x,t|y,t_0)\,f_0(y)\,dy\\ &=\displaystyle 2\varphi_1^2(x)\int_{-\infty}^{+\infty}\mathrm{\Theta}(xy)\,f_0(y)\,dy=\mathrm{\Gamma}(q;x)\,\varphi_1^2(x),\end{array}$$ $`(5.22)`$
where we have defined the function
$$\mathrm{\Gamma}(q;x)=q\,\mathrm{\Theta}(x)+(2-q)\,\mathrm{\Theta}(-x);\qquad q=2\int_0^{+\infty}f_0(y)\,dy.$$ $`(5.23)`$
Remark that when $`q=1`$ (namely when the initial probability is equally shared between the two real semi-axes) we have $`\mathrm{\Gamma}(1;x)=1`$ and the asymptotic pdf coincides with the quantum stationary pdf $`\varphi_1^2(x)`$; if on the other hand $`q\ne 1`$ the asymptotic pdf has the same shape as $`\varphi_1^2(x)`$ but with different weights on the two semi-axes. If instead $`n=2`$ we have $`x_0=-\infty`$, $`x_1=-1`$, $`x_2=1`$ and $`x_3=+\infty`$, and the equation (5.15) must be solved in the three intervals $`(-\infty,-1]`$, $`[-1,1]`$ and $`[1,+\infty)`$, but the eigenvalues and eigenfunctions are now not easy to find, so that a complete analysis of this case (and of every other case with $`n>2`$) has still to be worked out.
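The normalization of (5.20) on a single semi-axis and the limit (5.21) can be verified numerically. The sketch below (an added illustration in arbitrary units) does so for an initial point $`x_0>0`$.

```python
import numpy as np

omega, sigma0, x0, t0 = 1.0, 1.0, 0.7, 0.0    # illustrative values, x0 > 0
x = np.linspace(1e-6, 12 * sigma0, 20_000)    # positive semi-axis only, since x0 > 0
dx = x[1] - x[0]

def p_n1(x, t, x0):
    """Transition pdf (5.19)/(5.20) for the n = 1 velocity field, on the x*x0 > 0 branch."""
    alpha = x0 * np.exp(-omega * (t - t0))
    var = sigma0**2 * (1 - np.exp(-2 * omega * (t - t0)))
    gplus = np.exp(-(x - alpha)**2 / (2 * var))
    gminus = np.exp(-(x + alpha)**2 / (2 * var))
    return (x / alpha) * (gplus - gminus) / np.sqrt(2 * np.pi * var)

phi1_sq = x**2 * np.exp(-x**2 / (2 * sigma0**2)) / (sigma0**3 * np.sqrt(2 * np.pi))

for t in (0.5, 2.0, 10.0):
    print(t, (p_n1(x, t, x0) * dx).sum())            # stays ~1: normalized on x > 0
print(np.max(np.abs(p_n1(x, 10.0, x0) - 2 * phi1_sq)))  # very small: limit (5.21) on x > 0
```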
At present only a few indications can be obtained numerically: for example it can be shown that, beyond $`\mu_0=0`$, the first eigenvalues in the interval $`[-1,1]`$ can be calculated as the first values such that
$$M\left(-\frac{\mu+1}{2},\frac{3}{2};\frac{1}{2}\right)=0$$ $`(5.24)`$
and are $`\mu_1\simeq 7.44`$, $`\mu_2\simeq 37.06`$, $`\mu_3\simeq 86.41`$. Also for the unbounded interval $`[1,+\infty)`$ (the analysis is similar for $`(-\infty,-1]`$) the eigenvalues can be obtained only numerically.
6. Controlled evolutions
It is important to remark now that solutions of the type (5.8) and (5.20), and any other solution different from $`|\varphi_n|^2`$, are not associated with quantum mechanical states, namely with solutions of (2.10); in other words, they define processes that satisfy neither the stochastic variational principle nor the Nelson dynamical equation. Notwithstanding this, these processes still bear an interesting relation to quantum mechanics. In fact to every solution $`f(x,t)`$ of a FP equation (3.1), with a given $`v(x,t)`$ and the constant diffusion coefficient (2.11), we can always associate the wave function of a quantum system, provided we take a suitable time-dependent potential. This means in practice that even the virtual (non-optimal) processes discussed in this paper can be associated with proper quantum states, namely can be made optimal, provided that the potential $`V(x)`$ of (2.10) is modified into a new $`V(x,t)`$ in order to control the evolution. Let us take a solution $`f(x,t)`$ of the FP equation (3.1), with a given $`v(x,t)`$ and a constant diffusion coefficient (3.3): if we define the functions $`R(x,t)`$ and $`W(x,t)`$ from
$$f(x,t)=R^2(x,t),\qquad v(x,t)=\partial_xW(x,t),$$ $`(6.1)`$
if we remember from (2.12) that the following relation must hold
$$mv=\partial_xS+\hbar\frac{\partial_xR}{R}=\partial_xS+\frac{\hbar}{2}\frac{\partial_xf}{f}=\partial_x\left(S+\frac{\hbar}{2}\mathrm{ln}\tilde{f}\right)$$ $`(6.2)`$
where $`\tilde{f}`$ is a dimensionless pdf (it is the argument of a logarithm) obtained by means of a suitable and arbitrary multiplicative constant, and if $`S(x,t)`$ is supposed to be the phase of a wave function as in (2.8), we immediately get the equation
$$S(x,t)=mW(x,t)-\frac{\hbar}{2}\mathrm{ln}\tilde{f}(x,t)-\theta(t)$$ $`(6.3)`$
which allows us to determine $`S`$ from $`f`$ and $`v`$ (namely $`W`$) up to an additive arbitrary function of the time $`\theta(t)`$. However, in order that the wave function (2.8) with the said $`R`$ and $`S`$ be a solution of a Schrödinger equation, we must also be sure that the HJM equation (2.9) is satisfied. Since $`S`$ and $`R`$ are now fixed, the equation (2.9) must be considered as a relation defining a potential which, after a short calculation, becomes
$$V(x,t)=\frac{\hbar^2}{4m}\partial_x^2\mathrm{ln}\tilde{f}+\frac{\hbar}{2}\left(\partial_t\mathrm{ln}\tilde{f}+v\,\partial_x\mathrm{ln}\tilde{f}\right)-\frac{mv^2}{2}-m\,\partial_tW+\dot{\theta}.$$ $`(6.4)`$
Of course if we start with a quantum wave function for a given potential and if we pick as a solution of (2.3) exactly $`f=R^2`$, the formula (6.4) will correctly give back the initial potential, as can be seen for both the ground state and the first excited state of the HO which (by choosing respectively $`\theta(t)=\hbar\omega t/2`$ and $`\theta(t)=3\hbar\omega t/2`$, which amounts to suitably fixing the zero of the potential energy) give as a result the usual harmonic potential (5.1).
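The statement that (6.4) returns the original potential for a stationary state can be checked directly. The sketch below (an added illustration with arbitrarily chosen units) builds $`f=\varphi_0^2`$ and $`v_0(x)=-\omega x`$ on a grid, evaluates (6.4) by finite differences with $`\partial_t\mathrm{ln}\tilde{f}=0`$, $`\partial_tW=0`$ and $`\dot{\theta}=\hbar\omega/2`$, and recovers the harmonic potential.

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0               # illustrative units
sigma0 = np.sqrt(hbar / (2 * m * omega))
x = np.linspace(-5 * sigma0, 5 * sigma0, 4001)

# stationary ground-state data: pdf ~ phi_0^2, forward velocity v_0 = -omega*x
f = np.exp(-x**2 / (2 * sigma0**2))          # normalization constant is irrelevant here
lnf = np.log(f)
dlnf = np.gradient(lnf, x)
d2lnf = np.gradient(dlnf, x)
v = -omega * x
theta_dot = hbar * omega / 2

# controlling potential from (6.4), with the time-derivative terms equal to zero
V = hbar**2 / (4 * m) * d2lnf + hbar / 2 * v * dlnf - m * v**2 / 2 + theta_dot

V_ho = m * omega**2 * x**2 / 2
print(np.max(np.abs(V[2:-2] - V_ho[2:-2])))  # ~0: (6.4) gives back the harmonic potential
```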
If on the other hand we consider, for example, the (non-stationary) fundamental solution (5.8) associated with the velocity field $`v_0(x)`$ of (5.6) for the case $`n=0`$ of the HO (we put $`t_0=0`$ to simplify the notation), we have already remarked that it does not correspond to any quantum wave function. However a short calculation shows that, by choosing
$$\dot{\theta}(t)=\frac{\hbar\omega}{2}\left(\frac{2\sigma_0^2}{\sigma^2(t)}-1\right)=\frac{\hbar\omega}{2}\,\frac{1}{\mathrm{tanh}\,\omega t}\;\to\;\frac{\hbar\omega}{2},\qquad(t\to+\infty),$$ $`(6.5)`$
and the time-dependent controlling potential
$$V(x,t)=\frac{\hbar\omega}{2}\left[\frac{x-\alpha(t)}{\sigma(t)}\right]^2\frac{\sigma_0^2}{\sigma^2(t)}-\frac{m\omega^2x^2}{2}\;\to\;\frac{m\omega^2x^2}{2},\qquad(t\to+\infty)$$ $`(6.6)`$
we can define a quantum state (a wave function solution of a Schrödinger equation) which realizes the required evolution (5.8). Of course the fact that for $`t\to+\infty`$ we recover the harmonic potential is associated with the fact, already remarked, that the usual quantum pdf $`\varphi_0^2(x)`$ is also the limit distribution for every initial condition, and in particular also for the pdf (5.8). In the case $`n=1`$, with $`v_1(x)`$ from (5.6) and the transition probability (5.20) as the given non-stationary solution, the calculations are lengthier. However if we define
$$F(x,t)=\frac{\mathrm{e}^{-[x-\alpha(t)]^2/2\sigma^2(t)}}{\sigma(t)\sqrt{2\pi}},\qquad G(x,t)=\frac{\mathrm{e}^{-[x+\alpha(t)]^2/2\sigma^2(t)}}{\sigma(t)\sqrt{2\pi}},$$ $`(6.7)`$
$$T\left[\frac{x\alpha(t)}{\sigma^2(t)}\right]=\frac{x\alpha(t)}{\sigma^2(t)}\,\frac{F(x,t)+G(x,t)}{F(x,t)-G(x,t)},\qquad T(x)=\frac{x}{\mathrm{tanh}\,x},$$ $`(6.8)`$
and if we choose
$$\dot{\theta}(t)=\frac{\hbar\omega}{2}\left(\frac{4\sigma_0^2}{\sigma^2(t)}-\frac{2\sigma_0^2\alpha^2(t)}{\sigma^4(t)}-1\right)\;\to\;\frac{3}{2}\hbar\omega,\qquad(t\to+\infty)$$ $`(6.9)`$
we have as time-dependent potential, for every $`x\ne 0`$,
$$\begin{array}{rl}V(x,t)&=\dfrac{m\omega^2x^2}{2}\left(\dfrac{2\sigma_0^4}{\sigma^4}-1\right)+\hbar\omega\left[1-\dfrac{\sigma_0^2}{\sigma^2}T\left(\dfrac{x\alpha}{\sigma^2}\right)\right]-\dfrac{\hbar^2}{4mx^2}\left[1-T\left(\dfrac{x\alpha}{\sigma^2}\right)\right]\\ &\;\to\;\dfrac{m\omega^2x^2}{2},\qquad(t\to+\infty).\end{array}$$ $`(6.10)`$
In this case the asymptotic potential is the usual harmonic potential, but we must consider it separately on the positive and negative $`x`$ semi-axes, since at the point $`x=0`$ a singular behaviour would show up. This means that, even if asymptotically we recover the right potential, it will be associated with new boundary conditions at $`x=0`$, since we will be obliged to keep the system confined to the positive (for example) semi-axis.
7. Modelling transitions
The explicit knowledge of transition pdf's of the types (5.8) and (5.20), and the possibility of turning any suitable $`(f,v)`$ state into an optimal one by a right choice of $`V(x,t)`$, also enable us to explore the possibility of modelling evolutions leading, for example, from the pdf of a given stationary state to another (decays and excitations). In fact a spontaneous generalization of this idea hints at the possibility of modelling evolutions from a given, arbitrary pdf to the pdf of an eigenfunction of some observable: something which could become an element of very simple models of quantum measurements in which we try to describe the wave packet collapse dynamically.
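Before constructing explicit transitions, the simplest controlled evolution of the previous section can be checked numerically. The following sketch (an illustration added here, not part of the original text; the parameters are arbitrary) evaluates the closed-form controlling potential (6.6) and verifies that it relaxes to the harmonic potential at large times.

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0            # illustrative units
sigma0 = np.sqrt(hbar / (2 * m * omega))
x0 = 2.0 * sigma0                         # arbitrary starting point of (5.8)
x = np.linspace(-5 * sigma0, 5 * sigma0, 1001)

def V_control(x, t):
    """Controlling potential (6.6) for the n = 0 non-stationary solution (5.8), t0 = 0."""
    alpha = x0 * np.exp(-omega * t)
    var = sigma0**2 * (1 - np.exp(-2 * omega * t))
    return 0.5 * hbar * omega * ((x - alpha)**2 / var) * sigma0**2 / var \
           - 0.5 * m * omega**2 * x**2

V_ho = 0.5 * m * omega**2 * x**2
for t in (0.5, 2.0, 5.0, 10.0):
    print(t, np.max(np.abs(V_control(x, t) - V_ho)))   # -> 0 as t grows
```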
As a first example let us consider the transition between the invariant pdf's
$$\begin{array}{rl}f_0(x)&=\varphi_0^2(x)=\dfrac{1}{\sigma_0\sqrt{2\pi}}\,\mathrm{e}^{-x^2/2\sigma_0^2},\\ f_1(x)&=\varphi_1^2(x)=\dfrac{x^2}{\sigma_0^3\sqrt{2\pi}}\,\mathrm{e}^{-x^2/2\sigma_0^2}.\end{array}$$ $`(7.1)`$
If for instance we choose to describe the decay $`1\to 0`$, we should just use the Chapman-Kolmogorov equation (4.8) with (5.8) as transition pdf and $`f_1(x)`$ as initial pdf ($`t_0=0`$). An elementary integration shows in this case that the resulting evolution takes the form
$$f_{10}(x,t)=\beta^2(t)f_0(x)+\gamma^2(t)f_1(x)$$ $`(7.2)`$
where we used the notation
$$\beta^2(t)=1-\mathrm{e}^{-2\omega t},\qquad\gamma(t)=\mathrm{e}^{-\omega t}.$$ $`(7.3)`$
Taking now $`v_0(x)`$ from (5.6) and the evolving pdf from (7.2) and putting them in (6.4) (remark that, since $`v_0`$ is stationary, $`\partial_tW=0`$), we get the following form of the controlling potential:
$$V(x,t)=\frac{m\omega^2x^2}{2}-2\hbar\omega\,U(x/\sigma_0;\beta/\gamma)$$ $`(7.4)`$
where
$$U(x;b)=\frac{x^4+b^2x^2-b^2}{(b^2+x^2)^2}.$$ $`(7.5)`$
In our example the parameter
$$b^2(t)=\frac{\beta^2(t)}{\gamma^2(t)}=\mathrm{e}^{2\omega t}-1$$ $`(7.6)`$
is such that $`b^2(0^+)=0`$ and $`b^2(+\infty)=+\infty`$, and hence $`U`$ goes everywhere to zero for $`t\to+\infty`$, but is everywhere $`\simeq 1`$ with a negative singularity at $`x=0`$ for $`t\to 0^+`$. As a consequence, while for $`t\to+\infty`$ the controlling potential (7.4) behaves like the HO potential (5.1), for $`t\to 0^+`$ it presents an unessential shift of $`2\hbar\omega`$ in the zero level, but also shows a deep singularity at $`x=0`$. Apart from this singular behaviour of the controlling potential, a problem arises from the form of the phase function $`S`$. In fact from (6.3) we easily have for our decay
$$S(x,t)=-\frac{\hbar}{2}\mathrm{ln}\left[\beta^2(t)+\frac{x^2}{\sigma_0^2}\gamma^2(t)\right]-\frac{\hbar\omega}{2}t$$ $`(7.7)`$
so that in particular we have
$$S(x,0^+)=-\frac{\hbar}{2}\mathrm{ln}\frac{x^2}{\sigma_0^2},$$ $`(7.8)`$
while we would have expected our phase function to be initially independent of $`x`$, as for every stationary wave function: this means that in our supposed evolution the phase function presents a discontinuous behaviour for $`t\to 0^+`$. The problem arises here from the fact that in our simple model we initially have a stationary state characterized by a pdf $`f_1(x)`$ and a velocity field $`v_1(x)`$, and then suddenly, in order to start the decay, we suppose the same $`f_1`$ embedded in a different velocity field $`v_0(x)`$ which drags it toward the new stationary $`f_0(x)`$. This discontinuous change from $`v_1`$ to $`v_0`$ is of course responsible for the noted discontinuous change in the phase of the wave function. Hence a more realistic model for a controlled transition must take into account a continuous and smooth (albeit widely arbitrary) modification of the initial velocity field into the final one, a requirement which compels us to consider a new class of FP equations with time-dependent velocity field $`v(x,t)`$.
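For the decay $`1\to 0`$ just described, the behaviour of the correction term in (7.4) is easy to visualize numerically. The following sketch (added for illustration, in arbitrary units) evaluates $`U`$ of (7.5) and the controlling potential (7.4) at several times.

```python
import numpy as np

hbar, m, omega, sigma0 = 1.0, 1.0, 1.0, np.sqrt(0.5)   # illustrative units
x = np.linspace(-4 * sigma0, 4 * sigma0, 2001)

def U(xi, b):
    """Function (7.5); xi = x/sigma0, b = beta/gamma."""
    return (xi**4 + b**2 * xi**2 - b**2) / (b**2 + xi**2)**2

def V_control(x, t):
    """Controlling potential (7.4) for the modelled decay 1 -> 0."""
    b2 = np.exp(2 * omega * t) - 1.0                     # (7.6)
    return 0.5 * m * omega**2 * x**2 - 2 * hbar * omega * U(x / sigma0, np.sqrt(b2))

V_ho = 0.5 * m * omega**2 * x**2
for t in (0.1, 1.0, 5.0, 10.0):
    dV = V_control(x, t) - V_ho
    print(t, np.max(np.abs(dV)))   # the correction -2*hbar*omega*U dies out as t grows

# near x = 0 the correction is largest at small t, since U(0; b) = -1/b^2
print(U(0.0, 0.1), -1 / 0.1**2)
```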
In particular, to achieve the proposed controlled decay between two stationary states we should solve an evolution equation with a velocity field $`v(x,t)`$ continuously, and possibly smoothly, going from $`v_1(x)`$ to $`v_0(x)`$; but this seems at present beyond our reach, since every reasonable such $`v(x,t)`$ field has proven intractable from the point of view of the solution of the FP equation (2.3). However we can show the results for another meaningful example which does not present the same technical difficulties as the decay between two stationary states: namely the controlled evolution from a coherent oscillating packet of a HO to the ground state of the same HO. To do this we will recall a simple result which indicates how to find the solutions of a particular class of evolution equations (2.3), a class which contains the situation of our proposed example. If the velocity field of the evolution equation (2.3) has the linear form
$$v(x,t)=A(t)+B(t)x$$ $`(7.9)`$
with $`A(t)`$ and $`B(t)`$ continuous functions of time, then there are always solutions of the form $`𝒩(\mu(t),\nu(t))`$, where $`\mu(t)`$ and $`\nu(t)`$ are calculated from the differential equations
$$\mu^{\prime}(t)-B(t)\mu(t)=A(t);\qquad\nu^{\prime}(t)-2B(t)\nu(t)=2D$$ $`(7.10)`$
with suitable initial conditions. On the other hand the (non-stationary) wave function of the oscillating coherent wave packet with initial displacement $`a`$ is
$$\psi_c(x,t)=\left(\frac{1}{2\pi\sigma_0^2}\right)^{1/4}\mathrm{exp}\left[-\frac{(x-a\,\mathrm{cos}\,\omega t)^2}{4\sigma_0^2}-i\left(\frac{4ax\,\mathrm{sin}\,\omega t-a^2\,\mathrm{sin}\,2\omega t}{8\sigma_0^2}+\frac{\omega t}{2}\right)\right]$$ $`(7.11)`$
so that the corresponding forward velocity field will be
$$v_c(x,t)=a\omega(\mathrm{cos}\,\omega t-\mathrm{sin}\,\omega t)-\omega x,$$ $`(7.12)`$
namely it will have the required form (7.9) with $`A(t)=a\omega(\mathrm{cos}\,\omega t-\mathrm{sin}\,\omega t)`$ and $`B(t)=-\omega`$, while the position pdf will be
$$f_c(x,t)=|\psi_c(x,t)|^2=f_0(x-a\,\mathrm{cos}\,\omega t).$$ $`(7.13)`$
Now it is very easy to show that when $`B(t)=-\omega`$, as in the case of our wave packet, there are stable, coherent (non-dispersive) solutions with $`\nu(t)=\sigma_0^2`$, namely of the form $`𝒩(\mu(t),\sigma_0^2)`$, that is
$$f(x,t)=f_0\left(x-\mu(t)\right).$$ $`(7.14)`$
Of course the time evolution of such coherent solutions can be determined in one step, without implementing the two-step procedure of first calculating the transition pdf and then, through the Chapman-Kolmogorov equation, the evolution of an arbitrary initial pdf. On the other hand, if we compare (5.6) and (7.12) we see that the difference between $`v_0`$ and $`v_c`$ consists in the first, time-dependent term of the latter; hence it is natural to consider the problem of solving the evolution equation (2.3) with a velocity field of the type
$$\begin{array}{rl}v(x,t)&=A(t)-\omega x\\ A(t)&=a\omega(\mathrm{cos}\,\omega t-\mathrm{sin}\,\omega t)F(t)\end{array}$$ $`(7.15)`$
where $`F(t)`$ is an arbitrary function varying smoothly between 1 and 0, or vice versa. In this case the evolution equation (2.3) still has stable, coherent (non-dispersive) solutions of the form (7.14), with a $`\mu(t)`$ depending on our choice of $`F(t)`$ through (7.10).
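The coherent, non-dispersive character of these solutions can be verified by integrating (7.10) directly. In the sketch below (an added illustration) the switching function $`F(t)`$ is a generic smooth $`1\to 0`$ step chosen only for demonstration purposes (the specific choice adopted in the paper is introduced below); the variance $`\nu(t)`$ is seen to remain frozen at $`\sigma_0^2`$ while the packet centre relaxes to the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, sigma0, a = 1.0, 1.0, 2.0              # illustrative units
D = omega * sigma0**2
tau = 3.0
F = lambda t: 1.0 / (1.0 + np.exp(4 * (t - tau)))   # smooth 1 -> 0 switch (illustrative)

def A(t):
    return a * omega * (np.cos(omega * t) - np.sin(omega * t)) * F(t)

def rhs(t, y):
    mu, nu = y
    # equations (7.10) with B(t) = -omega
    return [A(t) - omega * mu, -2 * omega * nu + 2 * D]

sol = solve_ivp(rhs, (0.0, 20.0), [a, sigma0**2], dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0, 20, 9)
mu, nu = sol.sol(t)
print(nu)        # stays equal to sigma0^2: the packet does not spread
print(mu[-1])    # -> 0: the packet centre comes to rest at the origin
```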
A completely smooth transition from the coherent, oscillating wave function (7.11) to the ground state $`\varphi_0`$ (5.4) of the HO can now be achieved, for example, by means of the following choice of the function $`F(t)`$:
$$F(t)=1-\left(1-\mathrm{e}^{-\mathrm{\Omega}t}\right)^N=\sum_{k=1}^{N}(-1)^{k+1}\binom{N}{k}\mathrm{e}^{-\omega_kt}$$ $`(7.16)`$
where
$$\mathrm{\Omega}=\frac{\mathrm{ln}N}{\tau},\qquad\omega_k=k\mathrm{\Omega};\qquad\tau>0,\quad N\ge 2.$$ $`(7.17)`$
In fact this $`F(t)`$ goes monotonically from $`F(0)=1`$ to $`F(+\infty)=0`$ with an inflection point at $`\tau`$ (which can be considered as the arbitrary instant of the transition), where its derivative $`F^{\prime}(\tau)`$ is negative and grows, in absolute value, logarithmically with $`N`$. The condition $`N\ge 2`$ also guarantees that $`F^{\prime}(0)=0`$, and hence that the controlling potential $`V(x,t)`$ of (6.4) will continuously start at $`t=0`$ from the HO potential (5.1), and eventually come back to it for $`t\to+\infty`$. Finally the phase function $`S(x,t)`$ too will change continuously from that of $`\psi_c`$ to that of the HO ground state. A long but simple calculation now shows that the explicit form of the controlling potential is
$$V(x,t)=m\omega^2\frac{x^2}{2}-m\omega ax\sum_{k=1}^{N}(-1)^{k+1}\binom{N}{k}\left[U_k(t)\,\omega_k\,\mathrm{e}^{-\omega_kt}-W_k\,\omega\,\mathrm{e}^{-\omega t}\right]$$ $`(7.18)`$
where
$$\begin{array}{rl}U_k(t)&=\mathrm{sin}\,\omega t+\dfrac{2\omega^2\,\mathrm{sin}\,\omega t-\omega_k^2\,\mathrm{cos}\,\omega t}{(\omega_k-\omega)^2+\omega^2},\\ W_k&=1+\dfrac{2\omega^2-\omega_k^2}{(\omega_k-\omega)^2+\omega^2}=\sqrt{2}\,U_k\left(\dfrac{\pi}{4\omega}\right).\end{array}$$ $`(7.19)`$
The parameters $`\tau`$ and $`N`$, with the limitations (7.17), are free and connected to the particular form of the transition that we want to implement. We conclude this section by remarking that, in a HO, the transition between a coherent, oscillating wave packet and the ground state is a transition from a (Poisson) superposition of all the energy eigenstates to just one energy eigenstate: an outcome which is similar to that of an energy measurement, except for the important fact that here the result (the energy eigenstate) is deterministically controlled by a time-dependent potential. In fact our controlled transition does not produce mixtures, but pure states (eigenstates), and in some way it realizes a dynamical model for one of the branches of a measurement leading to an eigenvalue and an eigenstate.
8. Beam dynamics in particle accelerators
As a model which tries to bring out the classical aspects of quantum physics, the SM seems especially suited to the description of systems whose nature in some sense lies between classical and quantum: the so-called mesoscopic or quantum-like systems. We now propose a few preliminary remarks about the possibility of making use of this characteristic in a particular physical domain. The dynamical evolution of beams in particle accelerators is a typical example of mesoscopic behaviour.
Since they are governed by external electromagnetic forces and by the interactions of the beam particles among themselves and with the environment, charged beams are highly nonlinear dynamical systems, and most of the studies on colliding beams rely either on classical phenomena such as nonlinear resonances, or on isolated sources of unstable behaviour as building blocks of more complicated chaotic instabilities. This line of inquiry has produced a general qualitative picture of dynamical processes in particle accelerators at the classical level. However, in order to be explained, the coherent oscillations of the beam density and profile require some mechanism of local correlation and loss of statistical independence. This fundamental observation points towards the need to take into account all the interactions as a whole. Moreover, the overall interactions between charged particles and machine elements are really nonclassical in the sense that, of the many sources of noise that are present, almost all are mediated by fundamental quantum processes of emission and absorption of photons. Therefore the equations describing these processes must, in principle, be quantum. Starting from the above considerations, two different approaches to the classical collective dynamics of charged beams have been developed, one relying on the FP equation for the beam density, the other based on a mathematical coarse graining of the Vlasov equation leading to a quantum-like Schrödinger equation, with a thermal unit of emittance playing the role of the Planck constant. The study of statistical effects on the dynamics of electron (positron) colliding beams by the FP equation has led to several interesting results, and has become an established reference in treating the sources of noise and dissipation in particle accelerators by standard classical probabilistic techniques. Concerning the relevance of the quantum-like approach, at this stage we only want to point out that some recent experiments on confined classical systems subject to particular phase-space boundary conditions seem to be well explained by a quantum-like (Schrödinger equation) formalism. In this approach the (one-dimensional) transverse density profile of the beam is described in terms of a complex function, called the beam wave function, whose squared modulus gives the transverse density profile of the beam. This beam wave function satisfies a Schrödinger-like equation where $`\hbar`$ is replaced by the transverse beam emittance $`ϵ`$:
$$iϵ\frac{\partial\psi(x,z)}{\partial z}=-\frac{ϵ^2}{2}\frac{\partial^2\psi(x,z)}{\partial x^2}+U(x,z)\psi(x,z).$$ $`(8.1)`$
On the other hand, a recently proposed model for the description of collective beam dynamics in the semiclassical regime relies on the idea of simulating semiclassical corrections to classical dynamics by suitable classical stochastic fluctuations with long-range coherent correlations, whose scale is ruled by the Planck constant. This elaborates a hypothesis first proposed by Calogero in his attempt to prove that quantum mechanics might be interpreted as a tiny chaotic component of the individual particles' motion in a gravitationally interacting universe. The virtue of the proposed semiclassical model is twofold: on the one hand it can be formulated both in a probabilistic FP fashion and in a quantum-like (Schrödinger) setting, thus bridging the formal gap between the two approaches. On the other hand it goes further by describing collective effects beyond the classical regime due to the semiclassical quantum corrections.
Since we are interested in the description of the stability regime, when thermal dissipative effects are balanced on average by the RF energy pumping and the overall dynamics is conservative and time-reversal invariant in the mean, the choice to model the random kinematics with the Nelson diffusions, which are non-dissipative and time-reversal invariant, is particularly natural. The diffusion process describes the effective motion at the mesoscopic level (interplay of thermal equilibrium, classical mechanical stability, and fundamental quantum noise), and therefore the diffusion coefficient is set to be the semiclassical unit of emittance provided by qualitative dimensional analysis. In other words, we simulate the quantum corrections to classical deterministic motion (at leading order in the Planck constant) with a suitably defined random kinematics replacing the classical deterministic trajectories. Therefore, apart from the different objects involved (beam spatial density versus Born probability density; Planck constant versus emittance), the dynamical equations of our model formally reproduce the equations of the Madelung fluid (hydrodynamic) representation of quantum mechanics. In this sense, the present scheme allows for a quantum-like formulation equivalent to the probabilistic one. With a few changes in the notation we can now reproduce, for the beam dynamics, the SM approach sketched in section 2. Let $`q(t)`$ be the process representing some collective degree of freedom of the beam with a pdf $`\rho(x,t)`$. Then, in suitable units, the basic stochastic kinematical relation is an Itô stochastic differential equation of the type (2.1), where the emittance $`ϵ`$ of the beam plays the role of a diffusion coefficient. Since we are interested in the stability regime of the bunch oscillations, the bunch itself can be considered in a quasi-stationary state, during which the energy lost by dissipation is regained in the RF cavities. In such a quasi-stationary regime the dynamics is, on average, invariant under time reversal and we can define a classical effective Lagrangian $`L(q,\dot{q})`$ of the system, where the classical deterministic kinematics is replaced by the random diffusive kinematics (2.1). The equations of the dynamics can then be obtained from the classical Lagrangian by means of the stochastic variational principles. Introducing now the time-like coordinate $`s=ct`$, we get the analogues of the equations (2.3) and (2.6) in the form of a HJM equation
$$\partial_sS+\frac{v^2}{2}-2ϵ^2\frac{\partial_x^2\sqrt{\rho}}{\sqrt{\rho}}+V(x,s)=0,$$ $`(8.2)`$
and of a continuity equation
$$\partial_s\rho=-\partial_x(\rho v).$$ $`(8.3)`$
Remark that now the symbol $`v`$ no longer represents the forward velocity field, but rather the drift velocity connected to the forward and backward velocities by the relation $`2v=v_{(+)}+v_{(-)}`$, and to the phase function by the relation $`v=\partial_xS`$. The observable structure is now quite clear: $`E(v)`$ is the average velocity of the bunch centre oscillating along the transverse direction; $`E(q)`$ gives the average coordinate of the bunch centre; finally the second moment $`(\mathrm{\Delta}q)^2=E\left(q-E(q)\right)^2`$ determines the dispersion (spreading) of the bunch. The coupled equations of the dynamics may now be used to achieve a controlled coherence: given a desired state $`(\rho,v)`$, the equations of motion (8.2) and (8.3) can be solved to calculate the external controlling potential $`V(x,s)`$ that realizes this state.
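As an illustration of this inverse use of (8.2) and (8.3) (a sketch added here, not part of the original treatment), assume a stationary Gaussian transverse profile with vanishing drift velocity, so that (8.3) is trivially satisfied and, choosing $`\partial_sS`$ constant, (8.2) reduces to $`V=2ϵ^2\partial_x^2\sqrt{\rho}/\sqrt{\rho}`$ up to an additive constant. The numerical values of the emittance and beam size below are arbitrary.

```python
import numpy as np

eps = 1.0e-6          # transverse emittance (illustrative value)
s_b = 1.0e-3          # rms transverse beam size (illustrative value)
x = np.linspace(-6 * s_b, 6 * s_b, 4001)

# desired stationary transverse profile: a Gaussian bunch, with drift velocity v = 0
rho = np.exp(-x**2 / (2 * s_b**2)) / (s_b * np.sqrt(2 * np.pi))
sq = np.sqrt(rho)
d2sq = np.gradient(np.gradient(sq, x), x)     # second derivative by central differences

# with v = 0 and d_s S constant, (8.2) gives the controlling potential up to a constant
V = 2 * eps**2 * d2sq / sq
V -= V.min()

# for a Gaussian profile this is a harmonic (quadratic) focusing potential
coeff = np.polyfit(x[1000:3001], V[1000:3001], 2)
print(coeff[0], eps**2 / (2 * s_b**4))        # quadratic coefficient ~ eps^2 / (2*s_b^4)
```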
General techniques to obtain localized quantum wavepackets as dynamically controlled systems in SM have already been introduced. In this way one can construct, for general systems, either coherent packets following the classical trajectories with constant dispersion, or coherent packets following the classical trajectories with time-dependent, but at any time bounded, dispersion. These results can now be extended also to the quantum-like description of the transverse dynamics of a particle beam; hence it will be possible to select a current velocity by fixing the characteristics of the motion of the packet centre, to determine the corresponding solutions of the FP (continuity) equation, and finally to use the HJM equation as a constraint giving us the controlling device. The formal details of this program will be developed in a subsequent paper.
9. Concluding remarks
It has been observed that the inverse problem of determining a controlling potential for a given quantum evolution does not, in fact, need to be formulated in terms of SM. Given two quantum wave functions $`\psi_1`$ and $`\psi_2`$, we could indeed design a new wave function $`\psi(x,t)`$ evolving from $`\psi_1`$ to $`\psi_2`$, plug it, as the required evolution, directly into the Schrödinger equation (2.10), and eventually deduce from that the form of the controlling potential. At first glance this seems to completely circumvent the need for a model like the SM: given an arbitrary evolving state we can always calculate the potential producing it. However, two remarks are in order about this. First of all, from a purely technical point of view, the simplification introduced by this procedure turns out to be elusive. In fact we must remember that a quantum wave function takes complex values and hence, if we simply take an arbitrary evolution, the resulting potential calculated from the Schrödinger equation (2.10) will also be complex. This means that, to have a real-valued potential, we must impose some conditions on the supposed evolution. These conditions of course depend on the hypothesized form of $`\psi`$. For example, if we fix the evolution of its modulus, the said conditions materialize in a partial differential equation for the phase function $`S`$ of the wave function. On the other hand, the use of the HJM equation (2.9) as the tool to solve the inverse problem always gives a real-valued potential as a result. However, both proposed procedures are possible and, for identically posed questions, they will give identical answers. Given this obvious equivalence, the second remark is that the choice of procedure will be made on the basis of convenience. In both cases the result will be influenced by the starting hypothesis on the supposed evolution of the state $`\psi`$ modelling the transition from $`\psi_1`$ to $`\psi_2`$. But, since the observable part of the wave function is its square modulus, namely the position pdf, the relevant hypothesis will be on its evolution. The phase function, or, equivalently, the velocity fields, are not directly observable, and hence are at first sight of secondary concern. Their importance becomes apparent only when we require that the potential be real or that the transitions show a realistic, smooth behaviour. Hence, depending on the specific problem we are dealing with, it could be more suitable to approach it in terms of a state given through a wave function $`\psi`$, or in terms of a state given through the couple $`(f,v)`$.
The two approaches are certainly equivalent, but one may prove to be more suggestive. In particular, the one based on the SM equations seems to be better suited for the treatment of systems, such as the mesoscopic, quantum-like ones, which are well described by classical probabilistic models in terms of real space-time trajectories.
REFERENCES
1. N.Cufaro Petroni and F.Guerra: Found.Phys. 25 (1995) 297; N.Cufaro Petroni: Asymptotic behaviour of densities for Nelson processes, in Quantum communications and measurement, V.P.Belavkin et al. Eds., Plenum Press, New York, 1995, p. 43; N.Cufaro Petroni, S.De Martino and S.De Siena: Non equilibrium densities of Nelson processes, in New perspectives in the physics of mesoscopic systems, S.De Martino et al. Eds., World Scientific, Singapore, 1997, p. 59.
2. E.Nelson: Phys.Rev. 150 (1966) 1079; E.Nelson: Dynamical Theories of Brownian Motion (Princeton U.P.; Princeton, 1967); E.Nelson: Quantum Fluctuations (Princeton U.P.; Princeton, 1985); F.Guerra: Phys.Rep. 77 (1981) 263.
3. F.Guerra and L.Morato: Phys.Rev. D 27 (1983) 1774; F.Guerra and R.Marra: Phys.Rev. D 28 (1983) 1916; F.Guerra and R.Marra: Phys.Rev. D 29 (1984) 1647.
4. D.Bohm and J.P.Vigier: Phys.Rev. 96 (1954) 208.
5. N.Cufaro Petroni, S.De Martino and S.De Siena: Exact solutions of Fokker-Planck equations associated to quantum wave functions; in press on Phys.Lett. A.
6. N.Cufaro Petroni: Phys.Lett. A141 (1989) 370; N.Cufaro Petroni: Phys.Lett. A160 (1991) 107; N.Cufaro Petroni and J.P.Vigier: Found.Phys. 22 (1992) 1.
7. N.Cufaro Petroni, S.De Martino, S.De Siena and F.Illuminati: A stochastic model for the semiclassical collective dynamics of charged beams in particle accelerators; contribution to the 15th ICFA Advanced Beam Dynamics Workshop, Monterey (California, US), Jan 98. N.Cufaro Petroni, S.De Martino, S.De Siena, R. Fedele, F.Illuminati and S. Tzenov: Stochastic control of beam dynamics; contribution to the EPAC'98 Conference, Stockholm (Sweden), Jun 98.
8. E.Madelung: Z.Physik 40 (1926) 332; L.de Broglie: C.R.Acad.Sci.Paris 183 (1926) 447; L.de Broglie: C.R.Acad.Sci.Paris 184 (1927) 273; L.de Broglie: C.R.Acad.Sci.Paris 185 (1927) 380; D.Bohm: Phys.Rev. 85 (1952) 166, 180.
9. L.de la Peña and A.M.Cetto: Found.Phys. 5 (1975) 355; N.Cufaro Petroni and J.P.Vigier: Phys.Lett. A73 (1979) 289; N.Cufaro Petroni and J.P.Vigier: Phys.Lett. A81 (1981) 12; N.Cufaro Petroni and J.P.Vigier: Phys.Lett. A101 (1984) 4; N.Cufaro Petroni, C.Dewdney, P.Holland, T.Kyprianidis and J.P.Vigier: Phys.Lett. A106 (1984) 368; N.Cufaro Petroni, C.Dewdney, P.Holland, T.Kyprianidis and J.P.Vigier: Phys.Rev. D32 (1985) 1375.
10. F.Guerra: The problem of the physical interpretation of Nelson stochastic mechanics as a model for quantum mechanics, in New perspectives in the physics of mesoscopic systems, S.De Martino et al. Eds., World Scientific, Singapore, 1997, p. 133.
11. H.Risken: The Fokker-Planck equation (Springer, Berlin, 1989).
12. F.Tricomi: Equazioni differenziali (Einaudi, Torino, 1948).
13. F.Tricomi: Integral equations (Dover, New York, 1985).
14. S.Albeverio and R.Høgh-Krohn: J.Math.Phys. 15 (1974) 1745.
15. New perspectives in the physics of mesoscopic systems, S.De Martino et al. Eds., World Scientific, Singapore, 1997.
16. F. Ruggiero, Ann.Phys. (N.Y.) 153, (1984) 122; J. F. Schonfeld, Ann.Phys. (N.Y.) 160, (1985) 149.
17. R. Fedele, G. Miele and L. Palumbo, Phys.Lett. A194, (1994) 113.
18. S. Chattopadhyay, AIP Conf. Proc. 127, 444 (1983); F. Ruggiero, E. Picasso and L. A. Radicati, Ann.Phys. (N. Y.)
197, (1990) 396.
19. R. K. Varma, in Quantum-like Models and Coherence Effects, R. Fedele et al. Eds., World Scientific, Singapore, 1996.
20. S.De Nicola, R.Fedele, G.Miele and V.Man'ko, in New perspectives in the physics of mesoscopic systems, S.De Martino et al. Eds., World Scientific, Singapore, 1997, p. 89.
21. S. De Martino, S. De Siena and F. Illuminati, Mod.Phys.Lett. B12 (1998), in press.
22. F. Calogero, Phys.Lett. A228, (1997) 335.
23. S.De Martino, S. De Siena and F. Illuminati, J.Phys. A30 (1997) 4117.
# Radio Continuum Evidence for Outflow and Absorption in the Seyfert 1 Galaxy Markarian 231 ## 1 Introduction Classified optically as a Seyfert 1 galaxy, Markarian 231 is also the most luminous infrared galaxy in the local ($`z<0.1`$) universe (Surace et al. (1998)). Ultraluminous infrared galaxies like Mrk 231 have total infrared luminosities well above $`10^{11}L_{}`$, as measured by IRAS. Such galaxies are thought to be stages along a sequence in the evolution of merging spiral galaxies; the mergers generate enormous bursts of star formation and the merged galaxies eventually turn into quasars (e.g., Sanders et al. (1988)). Both the enormous infrared luminosity of $`3\times 10^{12}L_{}`$ (Soifer et al. (1989); Bonatto & Pastoriza (1997)) and the Seyfert/quasar properties of Mrk 231 argue that it is well along in the merger sequence. Using a velocity relative to the 3 K background of 12,447 km s<sup>-1</sup> (de Vaucouleurs et al. (1991)) and $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, the distance to Mrk 231 is 166 Mpc, and 800 pc subtend 1″. On a galactic scale, Mrk 231 shows tidal tails indicative of a recent interaction (e.g., Hutchings & Neff (1987); Lipari, Colina, & Macchetto (1994); Surace et al. (1998)). Optical emission lines from an apparent star-forming region are seen 10″–15″ ($`10`$ kpc) south of the nucleus (Hamilton & Keel (1987); Hutchings & Neff (1987)). An extended radio continuum source roughly 60″ (48 kpc) in diameter is also present, predominantly to the south of the nucleus (de Bruyn & Wilson (1976); Hutchings & Neff (1987); Condon & Broderick (1988)). Armus et al. (1994) reported a second nucleus about 3.5″ south of the main nucleus, but Hubble Space Telescope (HST) imaging shows only a series of star-forming knots at this location (Surace et al. (1998)). There is a general consensus that many of the observed properties of the galaxy result from a merger occurring $`10^8`$$`10^9`$ years ago (e.g., Hutchings & Neff (1987); Armus et al. (1994); Surace et al. (1998)). Baum et al. (1993) imaged a continuum source at 1.4 GHz that extends for 150″ (120 kpc) perpendicular to the galaxy disk, and interpreted that source as a superwind driven by the intense star formation triggered by the merger. The inner region of Mrk 231 contains a CO disk aligned almost east-west, with an inner diameter of about 1″ (800 pc), a lower density region extending to 3″, and a total gas mass exceeding $`10^9M_{}`$ (Bryant & Scoville (1996); Downes & Solomon (1998)). A compact 10-$`\mu `$m source has a maximum diameter of 0.6″ (Miles et al. (1996)) and is apparently associated with the active galactic nucleus (AGN). That nucleus shows broad Balmer emission lines, characteristic of a Seyfert 1 nucleus, rather weak forbidden lines (e.g., Boksenberg et al. (1977)), and variable, low-ionization, and broad absorption lines at redshifts up to 7800 km s<sup>-1</sup> relative to the systemic velocity (Boroson et al. (1991); Forster, Rich, & McCarthy (1995)). The nucleus shows strong Fe II emission, although its X-ray and low-energy $`\gamma `$-ray emission are anomalously weak for an AGN (Rigopoulou, Lawrence, & Rowan-Robinson (1996); Dermer et al. (1997); Lawrence et al. (1997)). ASCA observations imply a ratio $`L_x/L_{\mathrm{FIR}}=7\times 10^4`$, more consistent with a starburst galaxy than with an AGN, although there clearly is a hard power-law component associated with the active nucleus (Nakagawa et al. (1997)). 
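The distance and linear scale adopted above (Section 1) follow from elementary arithmetic; the short snippet below, added purely for illustration, reproduces them.

```python
import numpy as np

cz = 12447.0        # km/s, velocity relative to the 3 K background
H0 = 75.0           # km/s/Mpc
D = cz / H0         # distance in Mpc
arcsec_in_rad = np.pi / (180.0 * 3600.0)
pc_per_arcsec = D * 1.0e6 * arcsec_in_rad   # linear scale subtended by 1 arcsecond

print(D)               # ~166 Mpc
print(pc_per_arcsec)   # ~800 pc per arcsecond
```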
The ASCA data show an X-ray absorbing column of $`N_H=6\times 10^{22}`$ cm<sup>-2</sup>, which may be associated with the gas in the broad-line region. OH maser emission with an isotropic luminosity of $`700L_{}`$ also is associated with the galaxy (Baan (1985)) but no H<sub>2</sub>O megamaser emission has been detected from the nucleus, with an upper limit of $`18L_{}`$ (Braatz, Wilson, & Henkel (1996)). The OH emission is spread over a scale of a few hundred parsecs, rather than being confined to a very compact region as in most other active galaxies (Lonsdale et al. (1998)). Numerous Very Large Array (VLA) images of Mrk 231 are available at $``$1″ or better resolution (e.g., Ulvestad, Wilson, & Sramek (1981); Neff & Ulvestad (1988); Condon et al. (1991); Patnaik et al. (1992); Kukula et al. (1995); Papadopoulous et al. (1995)). These images are dominated by an unresolved VLA core with a typical flux density of 100–300 mJy between 1 and 22 GHz; based on the available published data, this core appears to vary at gigahertz frequencies by tens of percent on time scales of years. There also is very weak extended emission on the sub-arcsecond scale (Taylor et al. (1998)). The Hi absorption against the VLA core (Dickey (1982)) occurs against a radio halo or disk of emission 440 mas (350 pc) in extent, with an elongation PA in the east-west direction (Carilli, Wrobel, & Ulvestad 1998, hereafter CWU), similar to the elongation PA of the 800-mas CO emission (Bryant & Scoville (1996)). The radio continuum from this 350-pc disk is strongest at low frequencies and is apparently responsible for much of the extra $`100`$ mJy of flux density missed between measurements on VLBI scales and those on VLA scales (e.g., Lonsdale, Smith, & Lonsdale (1993); Taylor et al. (1994); Taylor et al. (1998)). The properties of the 350-pc disk imply a massive star formation rate of $`60M_{}`$ yr<sup>-1</sup> (CWU ). On a smaller scale of 50 mas (40 pc), the core resolves into a north-south triple at 1.7 GHz, as imaged with the European VLBI Network in the mid-1980s (Neff & Ulvestad (1988)). Optical and ultraviolet spectropolarimetry give an electric-vector position angle (PA) of $`95`$°, suggesting scattering of radiation by dust clouds flowing outward along this north-south VLBI axis (Goodrich & Miller (1994); Smith et al. (1995)). This paper presents and interprets new continuum observations, with the Very Long Baseline Array (VLBA) and the VLA, designed to probe structures in Mrk 231 on scales ranging from parsecs to kiloparsecs. Preliminary results were reported by Ulvestad, Wrobel, & Carilli (1998). This radio continuum study of Mrk 231 provides evidence for outflow, for synchrotron self-absorption, and for free-free absorption. These new radio results are related, wherever possible, to published results from the aforementioned studies of radio spectral lines and in shorter wavelength bands. ## 2 Observations and Calibration ### 2.1 VLBA The continuum emission from Mrk 231 was observed with the 10-element VLBA (Napier et al. (1994)) at frequencies ranging from 1.4 to 22.2 GHz, during the three separate observing sessions summarized in Table 1. During 1995 November, 11-minute scans on Mrk 231 typically were preceded by 2-minute scans on the delay-rate check source J1219+4829, with a total time of about 39 minutes to cycle through 1.4, 2.3, and 5.0 GHz. 
Occasional scans on DA 193 (J0555+3948), 3C 345 (J1642+3949), and J1740+5212 were obtained for ancillary calibration (fringe-finding, manual pulse calibration, and amplitude calibration checks). On 1996 December 8, scans of 12 minutes duration on Mrk 231 were interleaved with 2–4-minute scans on J1219+4829, and the total cycle time for 5.0, 8.4, and 15 GHz was 44 minutes. A single 25-m VLA antenna was included along with the VLBA to provide improved short-spacing coverage. Ancillary calibrators were 4C 39.25 (J0927+3902), J1310+3220, and 3C 345. On 1996 December 27, both Hi line and 22-GHz continuum data were acquired using the VLBA and the phased VLA. The line observations were reported by CWU and will not be described further here. At 22 GHz, 8-minute scans of Mrk 231 were interleaved with 3-minute scans of J1219+4829. Short scans of 4C 39.25, J1310+3220, and 3C 345 were included for ancillary calibration. All VLBA observing and correlation of Mrk 231 adopted the J2000 position for source J1256+5652 from Patnaik et al. (1992). At the VLA, additional scans of 3C 286 (J1331+3030) were obtained on 1996 December 27 to set the flux density scale, and scans of Mrk 231 at 5, 8.4, and 15 GHz were also acquired to provide a contemporaneous spectrum of the galaxy. Initial calibration of the VLBA data was carried out using the standard gain values of the VLBA antennas together with system temperatures measured every 1–2 minutes during the observing sessions. Autocorrelations were used to correct for imperfect adjustment of the sampler levels for these 2-bit (4-level) data. For the 1995 data, an additional amplitude adjustment was made using the VLBA image of DA 193, whose total flux density was constrained to be equal to the value measured at the VLA. Baseline gain corrections were less than 5% at 5 GHz, 5–10% at 2.3 GHz, and 5–20% at 1.4 GHz. The substantial corrections at 1.4 GHz were expected because the observing frequency differed by nearly 300 MHz (roughly 20% of the observing frequency) from the frequency where standard gains are measured. Since the VLBA gains are quite stable, and accurate system temperatures were regularly measured, the estimated uncertainty in the VLBA flux density scale is less than $`10`$% . The NRAO Astronomical Image Processing System (AIPS) (van Moorsel, Kemball, & Greisen (1996)) was used for all VLBA data calibration. ### 2.2 VLA The continuum emission from Mrk 231 was observed in dual circular polarizations with the VLA (Thompson et al. (1980)) during the five separate observing sessions summarized in Table 2. In 1988 and 1989, scaled-array observations were made at 1.5, 4.8, and 15.0 GHz, using the B, C, and D configurations, respectively. These observations yielded intrinsic resolution near 4″, useful for studying the emission on kiloparsec scales. Scans of $`30`$ minutes were interleaved with scans of J1219+4829, the local phase calibrator, and the amplitude scale was set to that of Baars et al. (1977) using short observations of 3C 286. These scaled-array observations were not full syntheses, as only 1.5–3 hr were spent on Mrk 231 in each case. During 1995 November, a short observation in the B configuration was made to determine the total flux and spectrum of Mrk 231 contemporaneously with the 1995 VLBA observations. Five frequencies between 1.4 and 22 GHz were used at the VLA, with 11 minutes spent on Mrk 231 at each frequency. Resolutions ranged from $`4`$″ at 1.5 GHz to $`0.3`$″ at 22 GHz. 
J1219+4829 was observed for several minutes at each frequency as a phase calibrator. Also, short scans of 3C 286 were obtained to set the VLA flux density scale and short scans of DA 193 were acquired to check the VLBA flux density scale. During 1996 December, observations of Mrk 231 at 5, 8.4, and 15 GHz were only single 2-minute snapshots, while the 1.4- and 22-GHz observations made together with the VLBA were much longer. Phase and amplitude calibration using J1219+4829 and 3C 286, respectively, was similar to that carried out for earlier VLA epochs, with the added complication that additional editing was required for calibrator scans acquired in phased-array mode. VLA flux densities typically have errors of $`5`$% but residual phase noise in 1996 led to larger errors at the higher frequencies (7.5% at 8.4 GHz, 10% at 15 GHz, and 15% at 22 GHz). At 1.4 GHz, a polarization calibration was performed using J1219+4829 to deduce the antenna polarizations and using 3C 286 to fix the absolute polarization PA, and the 1.4-GHz data were corrected for these effects. AIPS was used for all VLA data calibration. ## 3 VLBA Imaging Following the initial calibration, all VLBA data were imaged in AIPS. After initial images were made, an iterative self-calibration procedure was used to correct the complex gains of the individual antennas for atmospheric and instrumental effects. This process halted when the image quality and the r.m.s. noise stabilized. A major aim of this study was to derive the spectra of different components in Mrk 231 to search for absorption. To accomplish this goal, full-resolution images were made at each VLBA frequency from 1.4 to 15 GHz. Then, the data at each frequency were re-imaged using only the range of projected baseline lengths sampled at each of the lower frequencies. In this process, the weighting of the data was tapered in the aperture plane to give approximately the same resolution as that available at each lower frequency, with a restoring beam fixed to have the same parameters as the full resolution beam at the lower frequency. For example, the 8.4-GHz data were re-imaged at three different resolutions, equivalent to the full resolution at 1.4, 2.3, and 5.0 GHz. The subsections below present a subset of images that are the most important for the scientific analysis. The interpretation of the results is deferred to later sections. ### 3.1 North-South Triple The new VLBA image at 1.4 GHz, presented in Figure 1, shows the same (nearly) north-south triple structure known from previous 1.7-GHz observations with the European VLBI Network (Neff & Ulvestad (1988)). The triple consists of an unresolved core, together with two resolved lobes. The approximate size of the triple is 50 mas (40 pc). The emission 20 mas to the east of the core is not apparent at other frequencies, and is likely to be an artifact. The other images in Figure 1 are the 2.3, 5.0, and 8.4-GHz images of the same triple at a resolution matching that of the 1.4-GHz image. Note that as the frequency increases, progressively less diffuse emission is seen; only the outer portion of the VLBA lobes is detected at 8.4 GHz. The total flux densities in the north (N), central (C), and south (S) components were measured by integrating over the entire area of each component. In addition, the position of the peak intensity in each component, relative to component C, was determined by fitting a parabola to a few pixels in each image surrounding the peak. Results of these measurements are given in Table 3. 
Also included in Table 3 are the results for component C from the tapered data at 15 GHz; components N and S are too weak to be detected at that frequency. The estimated errors in the flux densities are 10% for component C and for the low-frequency data on component S, but rise to as much as 50% for components N and S at 8.4 GHz due to the increasing uncertainty in the lobe strengths at frequencies where they are largely resolved out. Spectra of components N, C, and S are shown in Figure 2. Although the data at all frequencies were not taken simultaneously, the well-resolved components, N and S, are unlikely to have varied over the 13 months between sessions. Component C appears to have been roughly constant from 1.4 to 5 GHz, based on (1) the VLA core results (see Section 4.2); and (2) the small differences of only 5–10 mJy between 1995 and 1996 in the VLBA strengths at 1.4 and 5 GHz (Table 3; CWU ). The data between 5 and 22 GHz were taken within 20 days, so Figure 2 should be a good snapshot of the spectrum of the VLBA components at a single epoch, 1996 December. VLBA images, at full resolution, of the north-south triple at 2.3 and 5.0 GHz are shown in Figures 3a and 3b. (The full-resolution image at 8.4 GHz, discussed below, shows no emission from components N or S.) Both of these images show the structure at the ends of the VLBA lobes more clearly, with substantial resolution perpendicular to the direction to the central source. Neither the total component flux densities nor the peak positions differ significantly from those values derived from the images at matched resolution. ### 3.2 Central Component of Triple The first installment of the Caltech-Jodrell VLBI survey presented a 5.0-GHz image of Mrk 231, under the alias 1254+571 (Taylor et al. (1994)). That survey included intercontinental baselines, and indicated that Mrk 231 was slightly resolved on a scale near 1 mas, in a nearly east-west direction, markedly different from the PA of the larger, north-south triple described above. New VLBA images, at full resolution, of component C at 8.4 and 15.4 GHz are shown in Figures 3c and 3d. The 8.4-GHz image shows clear resolution in a PA between 60° and 65°, somewhat north of the PA of 92° quoted by Taylor et al. (1994) for the resolved core at 5 GHz, although their image does show an extension slightly north of east, in PA $`80`$°. The 15-GHz image shows that the central source appears to break up into three separate components at sub-parsec resolution. Single- and multi-component Gaussian fits were made to component C at all frequencies between 1.4 and 15 GHz. At 1.4 and 2.3 GHz, the source is not resolved, and the total flux densities are indistinguishable from the values given in Table 3. However, at 5, 8.4, and 15.4 GHz, component C is significantly resolved with a size of 0.8–1.0 mas (0.6–0.8 pc) in a PA near 63°. At the 8.4-GHz resolution, component C in both the 8.4 and 15-GHz images is much better fitted by a two-component model. Finally, at the full 15-GHz resolution, there may be a third component to the southwest. All the fits to component C at the three different resolutions are summarized in Table 4. It is possible that the southwestern component is an imaging artifact, but since its location and that of the northeastern component are not symmetric with respect to the central component, the southwestern component is most likely real. In the multiple component fits, only the strongest source appears significantly resolved. 
Since the resolution is predominantly along the direction of the structure, this indicates the possible presence of more components that would be separated from the strongest source at yet higher spatial resolution. Although Mrk 231 also was observed with the VLBA and the phased VLA at 22.2 GHz, the steepening high-frequency spectrum of the core at higher frequencies meant that fringes were detected only on the relatively short baselines in the southwestern United States. Imaging of these limited data yields a total flux density of 30 mJy for the core, and that datum is included in the spectral plot given in Figure 2. Phase-referencing observations are required to image component C at 22 GHz with the resolution and sensitivity needed to identify subcomponents. ## 4 VLA Imaging ### 4.1 Scaled Arrays During 1988 and 1989, Mrk 231 was observed at 1.5, 4.9, and 15 GHz using scaled arrays of the VLA, as described in Section 2.2. Images were restored with a common (circular) Gaussian beam of size 4″ (full width at half maximum), to enable spectral comparisons. Figure 4 shows these images at 1.5 GHz and 4.9 GHz. The 15-GHz emission is completely unresolved, so that image is not shown. Flux densities in the unresolved VLA core were derived by fitting a Gaussian constrained to the beam size to the central pixels of each image. In addition, the total flux density in each VLA image was determined by integrating over the region in which significant emission is detected; the extended flux density, then, is taken to be the difference between the total flux density and that in the unresolved VLA core. At 15 GHz, there is no detection of extended emission in a single beam area, so this difference is taken to be an upper limit to the total extended flux density. Peak intensities in the extended emission were measured at 1.5 and 4.9 GHz, while an upper limit of 3 times the r.m.s. noise per beam area was assumed at 15 GHz. Results of all these flux density measurements appear in Table 5. The spectral index of the extended emission is steep, both in total flux density and in a point-by-point comparison. The two-point spectral index, $`\alpha `$ of the total emission is $`1.05\pm 0.23`$ between 1.5 and 4.9 GHz ($`S_\nu \nu ^{+\alpha }`$, where $`S\nu `$ is the flux density at frequency $`\nu `$). A spectral-index image between these frequencies indicates that all regions with significant emission at both frequencies have spectral indices ranging between $`0.4`$ and $`1.0`$, with the total spectrum being somewhat steeper because of the areas detected at 1.5 GHz that are below the detection threshold at 4.9 GHz. The total spectral index of the extended emission between 4.9 and 15 GHz is $`\alpha <0.8\pm 0.4`$. ### 4.2 Flux Density Monitoring VLA observations of Mrk 231 were made in 1995 November and 1996 December, at frequencies ranging from 1.4 GHz to 22 GHz, as described previously. The data were calibrated, imaged, and self-calibrated in the usual way, in order to measure the flux density of the unresolved VLA core. This flux density was taken to be the peak in the final self-calibrated images. Results are presented in Table 6, with errors quoted as discussed in Section 2.2. The measurements show that the flux density of the core appears to have been relatively constant at frequencies up to 5 GHz, but decreased significantly at 15 and 22 GHz from 1995 to 1996. Therefore, the spectrum of the central component of the VLBA triple source (see Figure 2) should be valid as of the 1996 December epoch. 
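The two-point spectral indices quoted in this section follow from the definition $`S_\nu \propto \nu^{+\alpha}`$. The snippet below (added for illustration) encodes that definition with simple error propagation; the flux densities used in the example call are placeholders, not the Table 5 values.

```python
import numpy as np

def spectral_index(S1, S2, nu1, nu2, dS1=0.0, dS2=0.0):
    """Two-point spectral index alpha (S_nu ~ nu**alpha) and its propagated error."""
    alpha = np.log(S2 / S1) / np.log(nu2 / nu1)
    dalpha = np.sqrt((dS1 / S1)**2 + (dS2 / S2)**2) / abs(np.log(nu2 / nu1))
    return alpha, dalpha

# hypothetical extended-emission flux densities in mJy (illustrative placeholders)
alpha, dalpha = spectral_index(S1=42.0, S2=12.1, nu1=1.5, nu2=4.9, dS1=4.0, dS2=2.0)
print(alpha, dalpha)   # a spectral index near -1, as found for the diffuse emission
```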
Weak extended emission in the vicinity of the core, within the central arcsecond, is present in the highest resolution VLA data, as discussed by Taylor et al. (1998). This emission has no significant impact on the flux-density measurements for the unresolved core. ### 4.3 Deep VLA Polarimetry A long observation of Mrk 231 was made at 1.4 GHz with the phased VLA in 1996 December, as part of a VLBA observation of the Hi (CWU). The phased-array data essentially undergo a real-time calibration of the VLA phases, and then can be calibrated further in AIPS, as described previously. Mrk 231 was imaged in Stokes I, Q, and U; the Q and U images were further combined in the usual way to obtain images of the linearly polarized intensity, P, and electric-field PA, $`\chi `$. Figure 5 shows a composite image, with lines representing P and $`\chi `$ superposed on contours of Stokes I emission. The Stokes I emission in Figure 5 strongly resembles that evident in Fig. 4. However, a major new discovery from Figure 5 is that some regions of the diffuse Stokes I emission to the south of the VLA core are significantly linearly polarized, reaching a peak polarized intensity of 185 $`\mu `$Jy beam<sup>-1</sup> about 26″ (21 kpc) south of the VLA core. At this polarization peak, the percentage polarization is about 57% and $`\chi `$ (electric vector position angle) is about 15°. No rotation measure corrections have been made. The VLA core is $`<`$ 0.1% linearly polarized. ## 5 Interpretation, from Large to Small Scales ### 5.1 Summary of Radio/Optical Structures Figure 6 summarizes the overall structure of Mrk 231 on a variety of scales. The two left-hand panels (Figures 6a and 6b) are an optical B-band image of the galaxy from Hamilton & Keel (1987), and, at the same scale, the 1.5-GHz VLA image from data taken in 1989. These panels show that the radio emission to the south of the nucleus actually extends well beyond the dominant emission from the optical galaxy. The core of the VLA image contains a north-south radio source imaged with the VLBA (Figure 6c) on very much smaller scales, with a total extent of $`40`$ pc. Finally, the nucleus of the galaxy shows additional structures on the 1-pc scale at the highest resolution available with the VLBA at 15 GHz, as shown in Figure 6d. ### 5.2 Kiloparsec Scale Mrk 231 is an ultraluminous infrared galaxy with a total luminosity in excess of $`10^{12}L_{}`$ (Soifer et al. (1989)). For over 20 years, it has been known to contain radio emission about an arcminute in extent (de Bruyn & Wilson (1976)), somewhat larger than the optical galaxy. The bulk of this emission comes from a diffuse region within about 30″ to the south of the galaxy nucleus. Off-nuclear optical imaging and spectroscopy (Hamilton & Keel (1987); Hutchings & Neff (1987)) revealed H$`\alpha `$ emission in an apparent region of star formation centered roughly 10″–15″ to the south of the nucleus. From the new VLA imaging at 1.4 GHz, the total flux density in the diffuse emission is 42 mJy. This emission is primarily concentrated in a higher brightness feature extending due south of the nucleus for about 20″ (16 kpc), then appearing to curve toward the west, as described previously by Baum et al. (1993). Several different possibilities for the origin of the diffuse emission are considered below. #### 5.2.1 Thermal Radio Emission from Hii Regions? 
Most of the diffuse emission south of the nucleus has a steep spectrum, with a spectral index near $`-1.0`$, as shown by the scaled-array observations from 1.5 to 15 GHz. On spectral grounds alone, it seems unlikely that thermal processes can make a substantial contribution. In addition, it is possible to use the H$`\alpha `$ surface brightness measured by Hamilton & Keel (1987) to compute the expected amount of thermal emission from Hii regions. We estimate the total H$`\beta `$ flux (assuming Case B recombination) in a 4″ VLA beam 12″ south of the nucleus to be $`2\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. The predicted thermal radio brightness at 4.9 GHz (e.g., Ulvestad et al. 1981; Condon (1992)) then would be only about 1 $`\mu `$Jy beam<sup>-1</sup>. In contrast, the peak in the diffuse source at 4.9 GHz is $`\sim 400`$ $`\mu `$Jy beam<sup>-1</sup>. Furthermore, the overall spectrum sets an upper limit of $`\sim 100`$ $`\mu `$Jy beam<sup>-1</sup> for the flat-spectrum contribution at any point, entirely consistent with the prediction from the optical spectroscopy. The lack of discernible thermal radio emission implies that the intrinsic H$`\beta `$ flux can be no more than $`\sim 100`$ times higher than that observed, implying an upper limit of $`A_V\sim 5`$ for the extinction in the star-forming region. The flattest radio spectrum, with $`\alpha \sim -0.4`$ between 1.5 and 4.9 GHz, occurs near the peak of the diffuse emission in the 4.9-GHz image. This is near the peak in the fractional polarization, and well beyond the region of significant optical continuum and line emission (Hamilton & Keel (1987); Hutchings & Neff (1987)). The apparent lack of optical emission from young stars implies little or no thermal contribution to the extended radio emission in this area, so the flattening of the spectrum must have another cause. #### 5.2.2 Nonthermal Radio Emission from Supernova Remnants? Assuming a total thermal radio flux density of 1–10 $`\mu `$Jy in the emission in the southern diffuse lobe, plus a distance of 166 Mpc, the formulae given by Condon (1992) imply an ionizing flux of $`10^{52}`$ photons s<sup>-1</sup> and a star formation rate of $`0.03M_{\odot }`$ yr<sup>-1</sup> in stars above $`5M_{\odot }`$. This predicts a nonthermal radio luminosity, from supernova remnants, of $`3\times 10^{19}`$ W Hz<sup>-1</sup> at 4.9 GHz. The corresponding radio flux density at the distance of Mrk 231 would be $`\sim 10`$ $`\mu `$Jy. This prediction falls three orders of magnitude short of the total flux density in the southern VLA lobe, and 1.5 orders of magnitude short of the peak flux density per beam at the location of the putative Hii region. Alternatively, using the standard radio/infrared relation for starbursts (e.g., Condon (1992)), the diffuse 1.4-GHz flux density of 42 mJy would predict infrared emission of 5–10 Jy, extended over $`\sim `$30″, at 60 and 100 $`\mu `$m. This is inconsistent with the small size ($`<`$1″–2″) found for the near-, mid-, and far-infrared emission (Miles et al. (1996); Matthews et al. (1987); Roche & Chandler (1993)). Furthermore, although the diffuse radio emission and the optical galaxy have somewhat similar shapes, Figure 6 shows that much of the radio emission is actually located beyond the region of significant B-band emission in the galaxy, which would not be expected if the radio emission were associated with a large population of supernova remnants.
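A quick numerical check of the supernova-remnant prediction (a sketch only; it assumes a simple Euclidean $`4\pi d^2`$ conversion at the adopted distance of 166 Mpc and ignores redshift corrections):

```python
import math

MPC_M = 3.086e22          # metres per megaparsec
JY = 1.0e-26              # W m^-2 Hz^-1 per jansky
D = 166.0 * MPC_M         # adopted distance of Mrk 231

def luminosity_to_flux(l_w_hz):
    """Flux density (Jy) corresponding to a monochromatic luminosity (W/Hz)."""
    return l_w_hz / (4.0 * math.pi * D**2 * JY)

def flux_to_luminosity(s_jy):
    """Monochromatic luminosity (W/Hz) corresponding to a flux density (Jy)."""
    return 4.0 * math.pi * D**2 * s_jy * JY

# Supernova-remnant prediction of ~3e19 W/Hz corresponds to ~10 microJy:
print(luminosity_to_flux(3e19) * 1e6)   # ~ 9 microJy
# The observed 42 mJy southern lobe corresponds to ~1.4e23 W/Hz (cf. Section 5.2.3):
print(flux_to_luminosity(0.042))        # ~ 1.4e23 W/Hz
```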
Finally, the high fractional polarization implies a magnetic field that is ordered on a scale much larger than that expected from a collection of supernova remnants. Therefore, we rule out the possibility that the diffuse radio emission south of the nucleus is generated by star formation and supernovae. #### 5.2.3 Nonthermal Radio Emission Powered by Radio Jet The remaining, favored, possibility is that the diffuse emission is excited by a jet from the galaxy nucleus. This inference is supported by an apparent ridge of slightly higher surface brightness emission connecting back to the nucleus. Also, the extremely high polarization at the outer edge of the diffuse lobe implies a well-ordered magnetic field, rather than the chaotic field expected from a collection of supernova remnants. In addition, the polarization vectors indicate that the magnetic field appears to wrap around the outer edge of the lobe, as would be natural for emission fed by a nuclear jet. The total emission of 42 mJy in this southern lobe at 1.4 GHz corresponds to $`1.4\times 10^{23}`$ W Hz<sup>-1</sup>, and the total luminosity between 10 MHz and 100 GHz (assuming equal proton and electron energies, and a spectral index $`\alpha =-1.05`$) is $`1.8\times 10^{40}`$ erg s<sup>-1</sup>. If this luminosity arises from relativistic particles that uniformly fill a lobe 10 kpc in diameter, then the physical conditions in the lobe can be estimated (cf. Pacholczyk (1970)). The minimum-energy magnetic field is $`\sim 10`$ $`\mu `$gauss, the total energy in the lobe is $`1.5\times 10^{56}`$ erg, and the synchrotron lifetime is $`3\times 10^8`$ yr. The lobe peaks only $`\sim 15`$ kpc from the nucleus, and therefore could be supplied by a jet with an advance speed of only $`\sim 50`$ km s<sup>-1</sup>. This required velocity is much smaller than the speeds of up to 7800 km s<sup>-1</sup> seen on parsec scales in the broad-absorption-line clouds (e.g., Forster et al. 1995). The north-south VLBA triple implies the presence of an energy-supplying jet on smaller scales, although its speed of advance is not presently known. It is interesting to note that the kiloparsec-scale radio emission emerges from the galaxy core in roughly the same direction as the strong optical emission seen just south of the core, in a PA between $`165^{\circ }`$ and $`170^{\circ }`$ (see Figure 6). The secondary optical peak was shown by Surace et al. (1998) to consist of a series of star-forming knots. Therefore, if it is correct to assume that the kiloparsec-scale emission is energized by a radio jet, that jet would appear to be related to the star formation; perhaps the jet has compressed thermal material along its path enough to trigger a burst of star formation. The large-scale jet then appears to curve back to the west only beyond the main extent of the optical galaxy, but in the same general sense as the tidal tails imaged by Hamilton & Keel (1987) and Surace et al. (1998). ### 5.3 Sub-Kiloparsec Scales CWU discovered a continuum “halo” at 1.4 GHz, with a size of $`\sim 440`$ mas (350 pc) and containing about 130 mJy. Comparison of Tables 3 and 6 indicates that the amount of flux density missing between VLBA and VLA scales ranges from $`135\pm 14`$ mJy at 1.4 GHz to $`50\pm 13`$ mJy at 15 GHz. If this emission is dominated by the 350-pc disk, then the spectral index of that disk appears to be $`-0.41\pm 0.12`$ between 1.4 and 15 GHz. Taylor et al.
(1998) have recently imaged weak extended emission on a 1″ (800 pc) scale, using the VLBA at 0.3 and 0.6 GHz as well as archival VLA observations at frequencies of 5 GHz and higher. They suggest that this emission comes from a weak “outer disk” that may have a spectral turnover near 8 GHz. It may be that the emission on 0.5″–1.0″ scales contains both an optically thin, steep-spectrum component, and some regions that are free-free absorbed at the lower frequencies. The sub-kiloparsec “milli-halo” seen in NGC 1275 (3C 84) by Silver, Taylor, & Vermeulen (1998) has a similar intrinsic strength and size to the radio disk in Mrk 231. The milli-halo in NGC 1275 is reported to have a steeper spectrum, with $`\alpha 0.9`$. Different spectral shapes for NGC 1275 and Mrk 231 may imply different emission processes, different electron energy distributions, or just varying amounts of free-free absorption. The suggested model for NGC 1275 is that the milli-halo is caused by particles leaking out of the radio jet into the surrounding medium (Silver et al. (1998)). However, for Mrk 231, the elongation of the 350-pc emission perpendicular to the 40-pc VLBA triple and parallel to the slightly larger scale CO disk indicates, instead, a possible relationship to the disk of material thought to surround active galactic nuclei (e.g., Antonucci (1993)). CWU associate this elongated emission with a disk containing atomic, molecular, and dust components on a scale of a few hundred parsecs. The short electron lifetimes ($`<10^5`$ yr) in this disk imply local particle acceleration, requiring star formation or other shock processes in the inner kiloparsec. In fact, the radio continuum emission from this disk or torus is likely to be powered by massive star formation, since it is consistent with the canonical radio-infrared relation. This is in agreement with the conclusion of Downes & Solomon (1998), who suggest that the bulk of the far-infrared emission in Mrk 231 is powered by a starburst. The slight flattening of the radio spectrum relative to the canonical starburst spectral index of $`0.8`$ could be accounted for by localized free-free absorption within the disk. ### 5.4 10–100 Parsec Scales The north-south triple source in Mrk 231 has a total extent of roughly 50 mas, or 40 pc. At 1.4 and 2.3 GHz, there is considerable emission detected from the central core out to the outer edges of the triple. However, at 5.0 GHz, most of the detected extended emission is in the outermost parts of the lobes. The high resolution images at 2.3 and 5.0 GHz (Figure 3) show that the ends of the lobes are significantly resolved perpendicular to the direction to the core, with transverse extents of 15–20 mas ($``$12–16 pc). This may indicate the presence of shocks at “working surfaces” where jets are attempting to burrow out of the nuclear regions. The outer edges of the VLBA triple source are at a distance similar to the inner scale of the diffuse radio emitter and the Hi-absorption cloud imaged by CWU, so it seems plausible that these lobes are generated by a jet running into the inner surface of an Hi shell or disk. It is also interesting to note that the major axis of the triple source is similar to the axis of the diffuse radio source and the elongation of the optical galaxy on scales 2–3 orders of magnitude larger (see Figure 6). This suggests a long-term memory of the symmetry axis of this Seyfert galaxy, possibly associated with an accretion disk in its center. 
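The angular-to-linear conversions used in this and the following subsections assume the adopted distance of 166 Mpc, at which 1″ corresponds to roughly 0.8 kpc. A minimal sketch (small-angle approximation, no cosmological corrections):

```python
D_PC = 166.0e6             # adopted distance to Mrk 231, in parsecs
ARCSEC_PER_RAD = 206265.0

def mas_to_pc(theta_mas):
    """Projected linear size (pc) subtended by an angle given in milliarcseconds."""
    return (theta_mas / 1000.0) / ARCSEC_PER_RAD * D_PC

print(mas_to_pc(1000.0))             # ~ 800 pc per arcsecond
print(mas_to_pc(440.0))              # ~ 350 pc  (the 1.4-GHz continuum "halo")
print(mas_to_pc(50.0))               # ~ 40 pc   (the VLBA triple)
print(mas_to_pc(26000.0) / 1000.0)   # ~ 21 kpc  (southern polarization peak)
```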
#### 5.4.1 Northern Component of Triple At the resolution of the 1.4-GHz VLBA image, the northern component of the 40-pc triple has a steep spectrum between 1.4 and 8.4 GHz, with a spectral index of $`-0.99\pm 0.29`$. Within the errors, the spectrum is straight across the entire flux range, consistent with optically thin synchrotron emission from an ensemble of electrons with a steep power-law distribution in energy. However, the peak of the northern component shifts systematically toward the southwest with increasing frequency (see Table 3), implying the presence of spectral gradients within the lobe. At 1.4 GHz, this peak is located 23.3 mas (17 pc) from the core, in PA 6.5°. At the same low resolution, the 8.4-GHz peak is located only 19.1 mas (14 pc) from the core in PA $`-4.3`$°. Since most of the shift in the peak of the northern lobe occurs between 1.4 and 2.3 GHz, a spectral index image was made between those two frequencies, using the 2.3-GHz image tapered to the 1.4-GHz resolution. That image, displayed in Figure 7, shows the presence of a region with an inverted spectrum, roughly 18 mas (14 pc) north of the galaxy nucleus, although the point-spread function at 1.4-GHz is too large to cleanly resolve the region. (The sharp edges in the spectral-index map are caused by blanking the individual input maps at 8 times their respective noise levels.) A similar image of the spectral index between 2.3 and 5 GHz, at the 2.3-GHz resolution, shows a small region with a nearly flat spectrum between those two frequencies near the same location. This implies that the northern lobe may contain a small component, with a diameter no larger than $`4`$ mas (3 pc), whose spectrum turns over near a frequency of 2–3 GHz. The most logical cause for the apparent spectral turnover in the northern lobe is free-free absorption in an ionized region with a temperature near $`10^4`$ K. For a turnover frequency of 2 GHz, the emission measure would be $`1.3\times 10^7`$ cm<sup>-6</sup> pc. Both the large variation in spectral index and the shift of the peak as a function of frequency occur over 3–4 mas, implying that the absorbing medium has a size of $`3`$ pc or less on the plane of the sky. If this dimension is also used as an estimate of the line-of-sight distance through the absorber, then the average density along a 3-pc line of sight would be $`2\times 10^3`$ cm<sup>-3</sup>. Such a density is, for instance, fairly typical for ionized clouds in Seyfert narrow-line regions (e.g., Osterbrock (1993)). #### 5.4.2 Southern Component of Triple The spectral-index image also indicates a spectral gradient in the southern VLBA component between 1.4 and 2.3 GHz, including a portion at the outer edge of the lobe with an inverted spectrum. The spectral index image between 2.3 and 5.0 GHz shows no such inverted component. The presence of an inverted component between 1.4 and 2.3 GHz is further confirmed by the overall lobe spectrum, which has $`\alpha =-0.41\pm 0.29`$ between 1.4 and 2.3 GHz and $`\alpha =-1.54\pm 0.20`$ between 2.3 and 8.4 GHz (cf. Table 3 and Figure 2). If the steep spectrum above 2.3 GHz continued to 1.4 GHz, the flux density at that frequency would be 65 mJy instead of the measured value of 37 mJy, implying that a substantial fraction of the total flux of the lobe is absorbed at the lowest frequency. Hypothesizing a turnover frequency due to free-free absorption midway between our two lowest observing frequencies, near 1.8 GHz, the emission measure would be $`1\times 10^7`$ cm<sup>-6</sup> pc.
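The emission-measure and density estimates above can be reproduced from a standard approximation for the low-frequency free-free optical depth, $`\tau \approx 3.28\times 10^{-7}(T_e/10^4\mathrm{K})^{-1.35}(\nu /\mathrm{GHz})^{-2.1}EM`$ with $`EM`$ in pc cm<sup>-6</sup> (cf. Condon 1992): setting $`\tau =1`$ at the turnover frequency gives the emission measure, and an assumed path length then gives an r.m.s. density. A sketch using the rough values quoted in the text (illustrative only):

```python
import math

def emission_measure(nu_turn_ghz, t_e=1.0e4):
    """EM (pc cm^-6) at which the free-free optical depth reaches unity at nu_turn."""
    return (t_e / 1.0e4) ** 1.35 * nu_turn_ghz ** 2.1 / 3.28e-7

def mean_density(em_pc_cm6, path_pc):
    """r.m.s. electron density (cm^-3) along a path of length path_pc."""
    return math.sqrt(em_pc_cm6 / path_pc)

em_north = emission_measure(2.0)              # ~ 1.3e7 pc cm^-6 (northern lobe)
em_south = emission_measure(1.8)              # ~ 1e7 pc cm^-6   (southern lobe)
print(em_north, mean_density(em_north, 3.0))  # ~ 2e3 cm^-3 over a ~3 pc path
print(em_south)
```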
The region of strongest absorption appears to be at the outer edge of the lobe, between 22 and 28 mas ($`20`$ pc) from the central component. This implies an absorbing cloud with a size near 5 pc, and the derived particle density is then $`1.5\times 10^3`$ cm<sup>-3</sup>. Both the density and the size of this ionized cloud are consistent with those required for free-free absorption in the northern component. The overall high-frequency spectrum could be artificially steepened by resolution effects, which would tend to decrease the flux density at the higher frequencies. However, the spectral index between 2.3 and 5 GHz is still near $`1.5`$ in images made at the full 2.3-GHz resolution, and the mix of short and long spacings on the VLBA implies that little 5-GHz emission should be missing on this scale. Therefore, the overall spectrum of the southern VLBA lobe is indeed quite steep intrinsically. #### 5.4.3 Overall Structure of North-South VLBA Triple The 40-pc VLBA triple appears to be significantly affected by free-free absorption due to ionized gas. On a somewhat larger scale, a neutral absorbing component has been detected in our Hi absorption study (CWU ). Therefore, a possible inference is that the ionized gas is merely the inner part of the putative disk seen in Hi, ionized by the central continuum source in Mrk 231. That disk may be a larger version of the disks traced by H<sub>2</sub>O maser emission in active galaxies such as NGC 4258 (Miyoshi et al. (1995); Herrnstein, Greenhill, & Moran (1996); Herrnstein et al. (1997)). However, the viewing angle for the inner disk in Mrk 231 (e.g., CWU ) would be such that no H<sub>2</sub>O maser emission is present. The much larger size of the apparent disk in Mrk 231 could be related to its much more luminous central continuum source. The north-south VLBA source in Mrk 231 bears a striking resemblance to that seen in 3C 84 (Vermeulen, Readhead, & Backer (1994); Walker, Romney, & Benson (1994); Dhawan, Kellermann, & Romney (1998)). NGC 1275, the host galaxy of 3C 84, is slightly more than twice as close to us as Mrk 231, and the size of its north-south radio source is 10 pc, about a quarter the size of the triple in Mrk 231. In addition, the northern component of 3C 84 (the “counterjet”) also exhibits free-free absorption near 15 GHz, attributed to ionized gas with a path length of several parsecs and a density above $`10^4`$ cm<sup>-3</sup> (Vermeulen et al. 1994). Models of the absorption in 3C 84 indicate that the absorbing gas is not spherically distributed, but can be in a warped disk that could be kept ionized by a central continuum source (Levinson, Laor, & Vermeulen (1995)). However, in 3C 84, the disk appears to block only the northern VLBA component, whereas in Mrk 231 the apparent free-free absorption toward parts of both the southern and northern components indicate that the disk may block both components. This blockage occurs despite the fact that the VLBA lobes in Mrk 231 are several times farther from the radio core than is the absorbed region of the jet in 3C 84. Levinson et al. (1995) estimated a total bolometric luminosity of $`2\times 10^{11}L_{}`$ (scaling their value to $`H_0=75`$) for 3C 84, while the total luminosity of Mrk 231 is $`15`$ times greater. This fact, combined with the inference that the ionized density in Mrk 231 is $`10`$ times smaller than that derived from the higher-frequency spectral turnover in 3C 84 (Vermeulen et al. 
1994), indicates that the volume of the ionized region could be hundreds of times larger in Mrk 231 than in 3C 84. This may account for the fact that free-free absorption is detected over a much larger scale in Mrk 231 than in 3C 84. An alternative explanation for the free-free absorption is connected with the possibility that much of the infrared emission in Mrk 231 may be generated by star formation rather than by an active galactic nucleus (Downes & Solomon (1998)). In this case, a local source of ionizing radiation could account for the free-free-absorbing clouds. Ionization of spherical clouds 5 pc in diameter, with densities of $`10^3`$ cm<sup>-3</sup>, requires only $`10^{50}`$ ionizing photons per second. This ionization rate could be accounted for by just a few early O stars (Osterbrock (1989)), easily consistent with the high infrared luminosity. ### 5.5 Parsec Scales The central source of the 40-pc VLBA triple undoubtedly contains the actual nucleus of Mrk 231. This central source was shown in Section 3.2 (see Table 4 and Figure 3) to consist of two dominant components separated by 1.1 mas (0.9 pc) in PA $`\sim 65`$°, with a possible weak third component located 0.85 mas (0.7 pc) to the southwest. At 8.4 GHz, the two stronger components have brightness temperatures of $`9\times 10^9`$ K and $`>1\times 10^9`$ K. The bulk of the flux density in the strongest component is unresolved, so it may have a peak brightness temperature considerably exceeding $`10^{10}`$ K. The spectrum of component C peaks between 5.0 and 8.4 GHz (see Figure 2); the spectral index between 8.4 and 15 GHz is $`-1.32\pm 0.23`$, indicative of optically thin synchrotron radiation. Table 4 indicates that the two dominant components of this source both have steep spectra between 8.4 and 15 GHz. The high-frequency (“intrinsic,” or un-absorbed) spectrum of the stronger component may be steeper than that of the weaker component, but this is somewhat uncertain due to the blending of the two components at 8.4 GHz. Two possibilities for the spectral turnover are either synchrotron self-absorption or free-free absorption similar to that deduced for the northern and southern lobes. The stronger component has an 8.4-GHz brightness temperature of $`10^{10}`$ K, indicating that synchrotron self-absorption is a possible cause for the spectral turnover. If the turnover occurs near 6 GHz at a flux density near 150 mJy, the 8.4-GHz size of $`0.56\times 0.40`$ mas implies a magnetic field strength near 0.5 gauss. For comparison, the minimum-energy magnetic field calculated for a straight spectrum with a spectral index of $`-1.3`$ between $`10^7`$ and $`10^{11}`$ Hz, and with equal proton and electron energy densities, is similar, $`\sim 0.2`$ gauss. The equipartition magnetic field is thus near the value required for synchrotron-self-absorption to occur in the stronger component, making it likely that this could account for the overall turnover of the spectrum. Alternatively, free-free absorption causing a turnover near 6 GHz, and over a 1-pc path length, would require an ionized density of $`1.1\times 10^4`$ cm<sup>-3</sup>. Attributing the turnover above 5 GHz to synchrotron self-absorption in the stronger central component does not eliminate the possibility that free-free absorption also occurs at a somewhat lower frequency. For example, the northeastern component of the central source has a spectral index of $`-0.7`$ between 8.4 and 15 GHz, as indicated by the two-component fit to the central source.
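The ≈0.5-gauss figure quoted above for the stronger component can be checked against a commonly used synchrotron self-absorption rule of thumb, $`B\approx 3.2\times 10^{-5}\theta ^4\nu _m^5S_m^{-2}(1+z)^{-1}`$ gauss with $`\theta `$ in mas, $`\nu _m`$ in GHz and $`S_m`$ in Jy; the numerical coefficient depends on the assumed spectral shape, so this is an order-of-magnitude sketch only, taking $`z\approx 0.042`$ as an assumption. (The next paragraph returns to the weaker, northeastern component.)

```python
import math

def b_ssa(theta_maj_mas, theta_min_mas, nu_m_ghz, s_m_jy, z=0.042):
    """Rough synchrotron self-absorption magnetic field (gauss)."""
    theta = math.sqrt(theta_maj_mas * theta_min_mas)   # geometric-mean angular size
    return 3.2e-5 * theta**4 * nu_m_ghz**5 / (s_m_jy**2 * (1.0 + z))

# Turnover near 6 GHz at ~150 mJy, with the 8.4-GHz size of 0.56 x 0.40 mas:
print(b_ssa(0.56, 0.40, 6.0, 0.15))    # ~ 0.5 gauss
```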
An extrapolation of this spectrum to lower frequencies would predict a flux density of $``$45 mJy at 2.3 GHz and $``$65 mJy at 1.4 GHz. The latter prediction exceeds the total observed value of 53 mJy for all of component C. This contradiction can be removed by postulating the presence of either free-free or synchrotron absorption of the weaker component in the vicinity of 2 GHz. Such absorption is consistent with the results for the outer VLBA lobes, again indicating the presence of a large quantity of ionized gas in the inner regions of Mrk 231. However, strong variability of the northeastern component between 1996 and 1998 (Ulvestad et al., 1998b, and in preparation) implies that it may well be the location of the actual nucleus; its brightness temperature may be considerably greater than the lower limit of $`10^9`$ K found in this work, consistent with the possible presence of synchrotron self-absorption. The major axis of the central VLBA source is near 65°, very different from the position angle of about 0°–5° for the larger-scale VLBA triple. If we assume (based on our recent observations of variability) that the northeastern component of the central source is the galaxy core, and the inner radio jet extends to the southwest in position angle 115°, the obvious hypothesis would be that this jet must twist within the inner few parsecs of the galaxy to feed the more distant VLBA lobes. However, there is no evidence in any VLBA images for a direct connection between the 1-pc-scale source and the larger scale VLBI lobes. Apparent large changes in VLBA position angles within the inner few parsecs are also present in NGC 4151 (Ulvestad et al. 1998a ), so this morphological trait is not unique to Mrk 231. Indeed, a similar circumstance was discovered recently in the nearest active galaxy, Centaurus A, where the sub-parsec axis defined by the VLBI radio jet (Jones et al. (1996); Tingay et al. (1998)) is misaligned by some 70° from the axis of the infrared disk imaged with HST on a 40-pc scale (Schreier et al. (1998)). An alternative hypothesis to jet curvature would be that the northeastern component of the central source is part of an accretion disk or torus as claimed for component S1 in NGC 1068 (Gallimore, Baum, & O’Dea (1997)). However, this seems untenable for Mrk 231; the brightness temperature in Mrk 231 is more than 1000 times higher than that in NGC 1068, so an interpretation as thermal bremsstrahlung or reflected synchrotron emission, while reasonable for NGC 1068, becomes implausible for Mrk 231. If the central source in Mrk 231 is the inner portion of a jet, its symmetry axis would presumably represent the inner axis of the accretion disk around the black hole that powers the active galactic nucleus. In contrast, the VLBA triple, plus the Hi and CO disks, clearly indicate a very different axis on scales greater than a few parsecs. The smaller scale is typical for the optical broad-line region, while the larger scale matches that for the optical narrow-line region in Seyferts. Therefore, a possible interpretation is that the broad-line and narrow-line regions have symmetry axes that are related only over the long term. The simplest unified schemes for Seyfert galaxies (Antonucci (1993)) suggest that Seyfert 1 galaxies like Mrk 231 are those in which the line of sight to the broad-line region misses the nuclear disk or torus, while Seyfert 2 galaxies are those in which our viewing angle prevents a direct view of the broad-line region. 
These models typically rely on a common symmetry axis for the broad-line and narrow-line regions, whereas the data presented here indicate that those axes may be very different, at least at the current epoch. This is in accord with recent calculations (Pringle 1996, 1997; Maloney, Begelman, & Pringle (1996)), showing that the accretion disks in active galactic nuclei may be severely warped by the local radiation source, and also with observations of sub-parsec-scale warped disks traced out by VLBI imaging of water megamasers in NGC 4258 (Herrnstein et al. 1996). One could imagine that the larger scale VLBA and VLA sources represent the “average” axis of the disk, while the small-scale VLBA source shows the instantaneous direction of an inner disk that might be determined by the gas most recently added to that disk. The amount of jet curvature, or disk precession or warping, required would be reduced considerably if we happen to be viewing the radio jet nearly end-on, so that a small change in angle could appear much larger in projection. ## 6 Summary The VLBA and VLA have been used to image the continuum emission from Mrk 231 on scales ranging from parsecs to kiloparsecs. An asymmetric, diffuse radio source is traced for more than 25 kpc. It exhibits linear polarization as high as 57%, has a ridge of modest brightness aligned with a starburst region several kiloparsecs south of the galaxy nucleus and (roughly) with the 40-pc VLBA triple source, and appears to be powered by energy deposition from the jet. This diffuse radio source extends beyond the bulk of the optical emission from Mrk 231. Inside the diffuse radio emission, a 350-pc disk of radio continuum and Hi (CWU), appears to be caused by synchrotron emission that may be free-free absorbed in some regions. This is consistent with the inference that the disk is powered by a massive starburst. The 40-pc VLBA triple source exhibits free-free absorption in both its northern and southern components at frequencies of 2–3 GHz, implying the presence of ionized gas clouds several parsecs in diameter with densities of 1–2$`\times 10^3`$ cm<sup>-3</sup>. These clouds may reside in the ionized inner region of the same disk responsible for the Hi absorption (CWU ). Their ionization may be powered either by the active galactic nucleus or by a few O stars. The central component of the 40-pc source has an elongation PA $`65`$° from that of the 40-pc VLBA triple. This core contains at least two components, with brightness temperatures of $`10^{10}`$ K and $`>10^9`$ K, each of which is absorbed at low frequencies. The data are consistent with the presence of synchrotron self-absorption between 5 and 10 GHz in the stronger component, implying a magnetic field of $`0.5`$ gauss. The weaker component, about 1 pc to the northeast, is absorbed below 2–3 GHz, due either to free-free absorption or synchrotron self-absorption. All data support the presence of two symmetry axes in Mrk 231: (1) an inner axis in PA $`65`$° that determines the initial direction of the radio jet, and is likely to be associated with the current axis of a central black hole or its accretion disk; and (2) an outer axis that is near PA 0°, perpendicular to a disk or torus that is ionized in its inner 10–20 pc radius, neutral out to $`200`$ pc from the center, and then molecular out to $`400`$ pc radius. The outer axis presumably represents the average long-term axis of the central engine. 
These two axes are substantially misaligned unless the parsec-scale radio source is viewed nearly end-on, so that small curvature would be enhanced greatly by projection. ###### Acknowledgements. We are grateful to Bill Keel for supplying the optical image of Mrk 231, and to Greg Taylor for useful discussions. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# Jamming and Static Stress Transmission in Granular Materials ## I Introduction In this paper we consider assemblies of cohesionless rough particles, whose rigidity is sufficient that individual particle deformations remain always small. Such assemblies are sometimes argued to be governed by the continuum mechanics of a Hookean elastic solid (perhaps with a very high modulus). But this implicitly assumes that each granular contact is capable equally of supporting tensile as compressive loads. For a cohesionless medium this is certainly untrue: cohesionless granular assemblies are therefore not elastic. The question is not one of principle, but of degree – how important is the prohibition of tensile forces? This is not completely clear; some would argue that it represents a negligible effect and that an elastic model remains basically sound, so long as the mean stresses in the material remain compressive everywhere. However, a fully elastic granular assembly would be one in which grains were, effectively, glued permanently to their neighbours on first contact. Because the packing is microscopically disordered, it is possible that, during subsequent loading a significant fraction of such contacts would become tensile, even if the load being applied remains everywhere compressive on average. If so, the absence of tensile forces is a major, even dominant, factor. Notice that the absence of tensile forces is a distinct physical effect from the one addressed by most elastoplastic continuum theories of granular media (see, e.g., Ref. ). These are like elastic models, but they allow for the fact that the ratio of shear and normal forces at a contact cannot exceed a fixed value determined by a coefficient of static friction; this is usually assumed to translate to a similar condition on the bulk stress components acting across any plane. The resulting plasticity is similar to that arising in metals, for example, and not related to the prohibition of tensile forces: it applies equally for cohesive contacts. Of course, in applying such theories to cohesionless media one should assume that the mean stresses are everywhere compressive: however, as emphasized above, this constraint does not ensure that individual contact forces are all compressive as is actually required. These considerations suggest a physically very different picture of granular media, already well developed and respected in the sphere of discrete modelling. In this picture, nonlinear physics is dominant, and the contact network of grains is always liable to reorganize as loads are applied: it is an “adaptive structure”. The contact network defines a loadbearing “granular skeleton” shown in Fig. 1(a): this is often thought of as a network of “force chains”, or roughly linear chains of strong particle contacts, alongside which the other grains play a relatively minor role in the transmission of stress. If these ideas are true, the continuum mechanics of the material has to be thought about afresh. Since this widely-accepted picture of force-chains implies a microscopically heterogeneous character of the contact network in the material, it is not necessarily obvious that a continuum description of it is possible at all. However, we have in recent years developed continuum models for granular materials which, we now believe, do capture some of the physics of force chains, and of their geometrical dependence on the construction history. 
This interpretation, which has evolved significantly beyond the empiricism of our early work, is developed below. The proposal that granular assemblies under gravity cannot properly be described by the ideas of conventional elastoplasticity has been opprobiously dismissed in some quarters: we stand accused of ignoring all that is ‘long and widely known’ among geotechnical engineers. However, we are not the first to put forward such a subversive proposal. Indeed workers such as Trollope and Harr have long ago developed ideas of force transfer rules among discrete particles, not unrelated to our own approaches, which yield continuum equations quite unlike those of elastoplasticity. More recently, dynamical hypoplastic continuum models have been developed which, as explained by Gudehus describe an ‘anelastic behaviour without \[the\] elastic range, flow conditions and flow rules of elastoplasticity’. Our own models, though not explicitly dynamic, are similarly anelastic in a specific manner that we describe as “fragile”. In Section II below, we describe a generic “jamming” mechanism for the construction of a granular skeleton that, we argue, points toward fragile mechanical behaviour. This scenario is related, but not identical, to several other current ideas in the literature on granular media. These include the emergence of rigidity by successive buckling of force chains and the concept of mechanical percolation. In particular there is a strong link between fragile media and isostatic models of granular assemblies. In an isostatic network, the requirement of force balance at the nodes is enough to determine all the forces acting, so these can be calculated without reference to a strain or displacement variable. Isostatic networks require a mean coordination number with a specific critical value ($`z=2d`$ with $`d`$ the dimension of space). In this sense, isostatic contact networks are “exceptional”, and may appear remote from real granular materials. However, it is increasingly clear that almost all disordered packings of frictionless spheres actually approach an isostatic state in the rigid particle limit. Since friction is ignored, there is still a missing link between this result and the physics of real granular media – a link provided by the concept of force chains, as we show below (Section II B). More generally, the idea that the granular skeleton could engineer itself to maintain an isostatic or fragile state is closely connected with the concepts of self-organized criticality (soc) (see also Ref. ). The concepts provide a generic mechanism whereby an overdamped dynamical system under external forcing can come to rest at a marginally stable (critical) state. In the soc scenario, this state is characterized by hierarchical (fractal) correlations and large noise effects (compare Fig. 1(a)). In this article we ignore these complications and describe only our minimalist, noise-free models of the granular skeleton; these represent, in effect, regular arrays of force chains. The effect of noise on the resulting continuum equations is of great interest, but these require a separate discussion, which is made elsewhere. ## II Colloids, Jamming and Fragile Matter ### A Colloids We start by describing a simple model of jamming in a colloid, sheared between parallel plates. This is the simplest plausible scenario in which an adaptive skeleton forms in response to an applied load; we believe it sheds much light on the related problem of dry granular media as discussed in Section III below. 
We will first use it to illustrate some general ideas on the relationship between jamming and fragility. Consider a concentrated colloidal suspension of hard particles, confined between parallel plates at fixed separation, to which a shear stress is applied (Fig. 1(b) and 2 (a)). Above a certain threshold of stress, this system enters a regime of strong shear thickening; see, e.g., Ref. . The effect can be observed in the kitchen, by stirring a concentrated suspension of corn-starch with a spoon. In fact, computer simulations suggest that, at least under certain idealized conditions, the material will jam completely and cease to flow, no matter how long the stress is maintained. In these simulations, jamming apparently occurs because the particles form “force chains” along the compressional direction (Fig. 1(b)). Even for spherical particles the lubrication films cannot prevent direct interparticle contacts; once these arise, an array or network of force chains can indeed support the shear stress indefinitely. (We ignore Brownian motion, here and below, as do the simulations; this could cause the jammed state to have finite lifetime.) To model the jammed state, we start from a simple idealization of a force chain: a linear string of at least three rigid particles in point contact. Crucially, this chain can only support loads along its own axis (Fig. 3(a)): successive contacts must be collinear, with the forces along the line of contacts, to prevent torques on particles within the chain. Note that neither friction at the contacts, nor particle aspherity, can change this “longitudinal force” rule. (Particle deformability, however, does matter; see Section III C below.) As a minimal model of the jammed colloid, we therefore take an assembly of such force chains, characterized by a unique director (a headless unit vector) $`𝐧`$, in a sea of “spectator” particles, and incompressible solvent. This is obviously oversimplified, for we ignore completely any interactions between chains, the deflections caused by weak interactions with the spectator particles, and the fact that there must be some spread in the orientation of the force chains themselves. Nonetheless, with these assumptions, in static equilibrium and with no body forces acting, the pressure tensor $`p_{ij}`$ (defined as $`p_{ij}=-\sigma _{ij}`$, with $`\sigma _{ij}`$ the usual stress tensor) must obey $$p_{ij}=P\delta _{ij}+\mathrm{\Lambda }n_in_j$$ (1) Here $`P`$ is an isotropic fluid pressure, and $`\mathrm{\Lambda }`$ ($`>0`$) a compressive stress carried by the force chains. ### B Jamming and Fragile Matter Eq. (1) defines a material that is mechanically very unusual. It permits static equilibrium only so long as the applied compression is along $`𝐧`$; while this remains true, incremental loads (an increase or decrease in stresses at fixed compression axis of the stress tensor) can be accommodated reversibly, by what is (at the particle contact scale) an elastic mechanism. But the material is certainly not an ordinary elastic body, for if instead one tries to shear the sample in a slightly different direction (causing a rotation of the principal stress axes) static equilibrium cannot be maintained without changing the director $`𝐧`$. Now, $`𝐧`$ describes the orientation of a set of force chains that pick their ways through a dense sea of spectator particles. Accordingly $`𝐧`$ cannot simply rotate; instead, the existing force chains must be abandoned and new ones created with a slightly different orientation.
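The statement that the director cannot simply rotate can be made concrete: in two dimensions, Eq. (1) can only reproduce stress states whose principal compression lies along $`𝐧`$, so a least-squares fit of $`P`$ and $`\mathrm{\Lambda }`$ to any load with a rotated compression axis leaves a finite residual, however small the rotation. A minimal numerical sketch (illustrative only, not from the original analysis):

```python
import numpy as np

def chain_stress(P, Lam, theta):
    """2D version of Eq. (1): P*delta_ij + Lam*n_i*n_j, with the director at angle theta."""
    n = np.array([np.cos(theta), np.sin(theta)])
    return P * np.eye(2) + Lam * np.outer(n, n)

def misfit(target, theta):
    """Residual of the best least-squares fit of (P, Lam) at a fixed director angle."""
    n = np.array([np.cos(theta), np.sin(theta)])
    basis = np.stack([np.eye(2).ravel(), np.outer(n, n).ravel()], axis=1)
    coeffs = np.linalg.lstsq(basis, target.ravel(), rcond=None)[0]
    return np.linalg.norm(basis @ coeffs - target.ravel())

# Loads whose compression axis is rotated by dtheta away from the existing director:
for dtheta in (0.0, 0.05, 0.2):
    print(dtheta, misfit(chain_stress(1.0, 0.5, dtheta), theta=0.0))
# Only dtheta = 0 gives zero residual; any rotation, however small, is incompatible.
```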
This entails dissipative, plastic, reorganization, as the particles start to move but then re-jam in a configuration that can support the new load. The entire contact network has to reconstruct itself to adapt to the new load conditions; within the model, this is true even if the compression direction is rotated only by an infinitesimal amount. Our model jammed colloid is thus an idealized example of “fragile matter”: it can statically support applied shear stresses (within some range), but only by virtue of a self-organized internal structure, whose mechanical properties have evolved directly in response to the load itself. Its incremental response can be elastic only to compatible loads; incompatible loads (in this case, those of a different compression axis), even if small, will cause finite, plastic reorganizations. The inability to elastically support some infinitesimal loads is our chosen technical definition of the term “fragile”. We argue that jamming may lead generically to mechanical fragility, at least in systems with overdamped internal dynamics. Such a system is likely to arrest as soon as it can support the external load; since the load is only just supported, one expects the state to be only marginally stable. Any incompatible perturbations then force rearrangement; this will leave the system in a newly jammed but (by the same argument) equally fragile state. This scenario is related, but not identical, to several other ideas in the literature. The link between jamming and fragility is schematically illustrated in Fig. 4. Now consider again the idealized jammed colloid of Fig. 2 (a). So far we allowed for an external stress field (imposed at the plates) but no body forces. What body forces can the system support without plastic rotation of the director? Various models are possible. One is to assume that Eq. (1) continues to apply, with $`P(𝐫)`$ and $`\mathrm{\Lambda }(𝐫)`$ now varying in space. If $`P`$ is a simple fluid pressure, a localized body force can be supported only if it acts along $`𝐧`$. Thus (as in a bulk fluid) no static Green function exists for a general body force. (Note that, since Eq. (1) is already written as a continuum equation, such a Green function would describe the response to a load that is localized in space but nonetheless acts on many particles in some mesoscopic neighbourhood.) For example, if the particles in Fig. 2 (a) were to become subject to a gravitational force along $`y`$, then the existing force chains could not sustain this but would reorganize. Applying the longitudinal force rule, the new shape is easily found to be a catenary, as realized by Hooke, and emphasized by Edwards. On the other hand, a general body force can be supported, in three dimensions, if there are several different orientations of force chain, possibly forming a network or “granular skeleton”. A minimal model for this is: $$p_{ij}=\mathrm{\Lambda }_1n_in_j+\mathrm{\Lambda }_2m_im_j+\mathrm{\Lambda }_3l_il_j$$ (2) with $`𝐧,𝐦,𝐥`$ directors along three nonparallel populations of force chains; the $`\mathrm{\Lambda }`$’s are compressive pressures acting along these. Body forces cause $`\mathrm{\Lambda }_{1,2,3}`$ to vary in space. We can thus distinguish two levels of fragility, according to whether incompatible loads include localized body forces (bulk fragility, e.g. Eq. (1)), or are limited to forces acting at the boundary (boundary fragility, e.g. Eq. (2)).
In disordered systems one should also distinguish between macro-fragile responses involving changes in the mean orientation of force chains, and the micro-fragile responses of individual contacts. We expect micro-fragility in granular materials (see Ref. ), although the models discussed here, which exclude randomness, are only macro-fragile; in practice the distinction may become blurred. In any case, these various types of fragility should not be associated too strongly with minimal models such as Eqs. (1,2). It is clear that many granular skeletons have a complex network structure where many more than three directions of force chains exist. Such a network may nonetheless be fragile. Fragility in fact requires any connected granular skeleton of force chains, obeying the longitudinal force rule (lfr), to have a mean coordination number $`z=2d`$ with $`d`$ dimension of space (e.g. Fig. 2 (b) in two dimensions). This coordination number describes the skeleton, rather than the medium as a whole; but otherwise it is the same rule as applies for packings of frictionless hard spheres. These also obey the lfr – not because of force chains, but because there is no friction. Regular packings of frictionless spheres (which show isostatic mechanics) have been studied in detail recently; and Moukarzel has argued that disordered frictionless packings of hard spheres are also generically fragile (see also Ref. ). These arguments appear to depend only on the lfr and the absence of tensile forces, so they should, if correct, equally apply to any granular skeleton that is made of force chains of three or more completely rigid particles. ### C Fixed Principal Axis Model Returning to the simple model of Eq. (2), the chosen values of the three directors (two in $`d=2`$) clearly should depend on how the system came to be jammed (its “construction history”). If it jammed in response to a constant external stress, switched on suddenly at some earlier time, one can argue that the history is specified purely by the stress tensor itself. In this case, if one director points along the major compression axis then by symmetry any others should lie at rightangles to it (Fig. 2 (b)). Applying a similar argument to the intermediate axis leads to the ansatz that all three directors lie along principal stress axes; this is perhaps the simplest model in $`d=3`$. One version of this argument links force chains with the fabric tensor, which is then taken coaxial with the stress. (The fabric tensor is the second moment of the orientational distribution function for contacts and/or interparticle forces.) With the ansatz of perpendicular directors as just described, Eq. (2) becomes a “fixed principle axes” (fpa) model. Although grossly oversimplified, this leads to nontrivial predictions for the jammed state in the colloidal problem, such as a constant ratio of the shear and normal stresses when these are varied in the jammed regime. Such constancy is indeed reported by Laun in the regime of strong shear thickening; see Ref. . ## III Granular Materials We believe that these simple ideas on jamming and fragility in colloids are equally relevant to cohesionless, dry granular media constructed from hard frictional particles. For although the formation of dry granular aggregates under gravity is not normally described in terms of jamming, it is a closely related process. Indeed, the filling of silos and the motion of a piston in a cylinder of grains both exhibit jamming and stick-slip phenomena associated with force chains; see Ref. 
And, just as in a jammed colloid, the mechanical integrity of a sandpile entirely disappears as soon as the load (in this case gravity) is removed. In the granular context, a model like Eq. (2) can be interpreted by saying that a fragile granular skeleton of force chains is laid down at the time when particles are first buried at a free surface (see Fig. 5); so long as subsequent loadings are compatible with this structure, the skeleton will remain intact – if not grain by grain, then at least in its average properties. If in addition the skeleton is rectilinear (perpendicular directors) this forces the principal axes to maintain forever the orientation they had close to the free surface (fpa model). However, we do not insist on this last property and other models, which correspond to an oblique family of directors in Eq. (2), will be described below. In what follows we review in more detail the nature of our fragile models and the role they might play within a continuum mechanical description of granular media. We will mainly be concerned with the standard sandpile, which we define to be a conical pile, constructed by pouring cohesionless hard grains from a point source onto a perfectly rough, rigid support as shown in Fig. 5. We assume that this construction leads to a series of shallow surface avalanches whereby all grains have come to rest, at the point of burial, very close to the free surface of the pile. (Very different conditions may apply for wedges of sand; see Ref. .) An alternative history is the sieved pile, which is a cone created by sieving a series of concentric discs one on top of the other. In the standard sandpile, it is well known that the vertical normal stress has a minimum, not a maximum, beneath the apex. A striking feature of our modelling approach is that it not only accounts for this “stress-dip” reasonably well, but predicts that it should be entirely absent for a sieved pile. This proposal is currently being subjected to careful experimental verification. ### A Continuum Modelling of Granular Media The equations of stress continuity express the fact that, in static equilibrium, the forces acting on a small element of material must balance. For a conical pile of sand we have, in $`d=3`$ dimensions, $$\partial _r\sigma _{rr}+\partial _z\sigma _{zr}=\beta (\sigma _{\chi \chi }-\sigma _{rr})/r$$ (3) $$\partial _r\sigma _{rz}+\partial _z\sigma _{zz}=g-\beta \sigma _{rz}/r$$ (4) $$\partial _\chi \sigma _{ij}=0$$ (5) where $`\beta =1`$. Here $`z,r`$ and $`\chi `$ are cylindrical polar coordinates, with $`z`$ the downward vertical. We take $`r=0`$ as a symmetry axis, so that $`\sigma _{r\chi }=\sigma _{z\chi }=0`$; $`g`$ is the force of gravity per unit volume; $`\sigma _{ij}`$ is the usual stress tensor which is symmetric in $`i,j`$. The equations for $`d=2`$ are obtained by setting $`\beta =0`$ in (3,4) and suppressing (5). These describe a wedge of constant cross section and infinite extent in the third dimension. The Coulomb law states that, at any point in a cohesionless granular medium, the shear force acting across any plane must be smaller in magnitude than $`\mathrm{tan}\varphi `$ times the compressive normal force. Here $`\varphi `$ is the angle of friction, a material parameter which, in simple models, is equal to the angle of repose.
We accept this here, while noting that (i) $`\varphi `$ in principle depends on the texture (or fabric) of the medium and hence on its construction history; (ii) for a highly anisotropic packing, the existence of an orientation-independent $`\varphi `$ is questionable; (iii) the identification of $`\varphi `$ with the repose angle ignores some complications such as the Bagnold hysteresis effect (which may in turn be coupled to density changes). Setting these to one side, we note that the Coulomb law is an inequality: therefore, when combined with stress continuity, it cannot lead to closed equations for the granular stresses. To close the system of equations, further assumptions are clearly required. One choice is to assume that the material is an elastic continuum wherever it does not violate the Coulomb condition. (This is the simplest possible type of elastoplastic model.) A second choice is to treat the Coulomb condition as though it were an equality. This is the basis of the so-called “rigid plastic” approach to granular media. We return to both of these modelling schemes after first describing our own approach. #### 1 Constitutive Relations Among Stresses We view cohesionless granular matter as an assembly of rigid particles held up by friction. The static indeterminacy of frictional forces can, we argue, then be circumvented by assuming the existence of some local constitutive relations (c.r.’s) among components of the stress tensor. The c.r.’s among stress components are taken to encode the network of contacts in the granular packing geometry; they therefore depend explicitly on its construction history. The task is then to postulate and/or justify physically suitable c.r.’s among stresses, of which only one (the primary c.r.) is required for systems with two dimensional symmetry, such as a wedge of sand; for a three dimensional symmetric assembly (the conical sandpile) a secondary c.r. is also needed. The above nomenclature has caused confusion to some commentators on our work. In solid mechanics the term ‘constitutive relation’ normally refers to a material-dependent equation relating stress and strain. In fluid mechanics one has instead equations relating stress and (in the general case of a viscoelastic fluid) strain-rate history. Instead, our models of granular media entail equations relating stress components to one another, in a manner that we take to depend on the construction history of the material. Clearly such equations are intended to describe constitutive properties of the medium: they relate its state of stress to other discernible features of its physical state. We see no alternative to the term ‘constitutive relations’ for such equations. In the simplest case, which is the fpa model, one hypothesizes that, in each material element, the orientation of the stress ellipsoid became ‘locked’ into the material texture at the time when it last came to rest, and does not change in subsequent compatible loadings. This is a bold, simplifying assumption, and it may indeed be far too simple, but it exemplifies the idea of having a local rule for stress propagation that depends explicitly on construction history. At first sight the idea of ‘locking in’ the principal axes seems to contradict the conception of an adaptive granular skeleton which can rearrange in response to small incremental loads. Remember though that this ‘locking in’ only applies for compatible loads – those which the existing skeleton can support. Any incompatible load will cause reorganization.
We therefore require that any incompatible loads are specified in defining the construction history of the material. For the standard sandpile geometry (see Fig. 5), where the material comes to rest on a surface slip plane, such loads do not in fact arise after material is buried. The fpa constitutive hypothesis then leads to the following primary c.r. among stresses: $$\sigma _{rr}=\sigma _{zz}-2\mathrm{tan}\varphi \sigma _{zr}$$ (6) where $`\varphi `$ is the angle of repose. Eq. (6) is algebraically specific to the case of a standard sandpile created from a point source by a series of avalanches along the free surface. The conceptual basis of the fpa model is not so narrow: indeed, we applied it already to jammed colloids in Section II. More generally the fpa model is arguably the simplest possible choice for a history-dependent c.r. among stresses; but this does not mean it will be sensible in all geometries. A consequence of Eq. (6) for a standard sandpile is that the major principal axis everywhere bisects the angle between the vertical and the free surface. It should be noted that in cartesian coordinates, the fpa model reads: $$\sigma _{xx}=\sigma _{zz}-2\text{sign}(x)\mathrm{tan}\varphi \sigma _{xz}$$ (7) where $`x=\pm r`$ is horizontal. From Eq. (7), the fpa constitutive relation is seen to be discontinuous on the central axis of the pile: the local texture of the packing has a singularity on the central axis which is reflected in the stress propagation rules of the model. (This is physically admissible since the centreline separates material which has avalanched to the left from material which has avalanched to the right.) The paradoxical requirement, on the centreline, that the principal axes are fixed simultaneously in two different directions has a simple resolution: the stress tensor turns out to be isotropic there. See Fig. 5. The constitutive singularity leads to an ‘arching’ effect for the standard sandpile, as previously put forward by Edwards and Oakeshott and others. The fpa model is one of a larger class of osl (for “oriented stress linearity”) models in which the primary constitutive relation (in the sandpile geometry) is, in Cartesians $$\sigma _{xx}=\eta \sigma _{zz}+\mu \text{sign}(x)\sigma _{xz}$$ (8) with $`\eta ,\mu `$ constants. Note that the boundary condition, that the free surface of a pile at its angle of repose $`\varphi `$ is a slip plane, yields one equation linking $`\eta `$ and $`\mu `$ to $`\varphi `$; thus, for a sandpile geometry, the osl scheme represents a one-parameter family of primary c.r.’s. The osl models were developed to explain experimental data on the stress distribution beneath a standard sandpile. With a plausible choice of secondary c.r. (of which several were tried, with only minor differences resulting), the experimental data (Fig. 6) is found to support models in the osl family with $`\eta `$ close, but perhaps not exactly equal, to unity (the fpa value). This is remarkable, in view of the radical simplicity of the assumptions made. As explained by Wittmer et al., the osl models (Eq. (8)), combined with stress continuity, yield hyperbolic equations having fixed characteristic rays for stress propagation. In fact they are wave equations; moreover they are essentially equivalent to Eq. (2), with (in general) an oblique triplet of directors (these become mutually orthogonal only in the case of fpa).
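The geometry behind Eqs. (6)–(8) is easy to check numerically. Taking compressive stresses positive and $`z`$ pointing down (as in Eqs. (3)–(5)), a stress obeying Eq. (7) for $`x>0`$ has its major principal axis tilted from the downward vertical by $`\pi /4-\varphi /2`$, i.e. it bisects the angle between the vertical and the free surface; and a short calculation, not spelled out in the text, gives the characteristic slopes $`dx/dz`$ of the hyperbolic osl system as the roots of $`c^2-\mu c-\eta =0`$, which for the fpa values $`\eta =1`$, $`\mu =-2\mathrm{tan}\varphi `$ come out mutually orthogonal. A sketch (illustrative only, not from the original papers):

```python
import numpy as np

phi = np.radians(30.0)                    # illustrative repose angle
eta, mu = 1.0, -2.0 * np.tan(phi)         # fpa values for x > 0, read off from Eq. (7)

# Build any stress obeying Eq. (7): choose sigma_zz, sigma_xz, construct sigma_xx.
s_zz, s_xz = 1.0, 0.3
s_xx = eta * s_zz + mu * s_xz
sigma = np.array([[s_xx, s_xz], [s_xz, s_zz]])        # (x, z) components, compression > 0

w, v = np.linalg.eigh(sigma)
major = v[:, np.argmax(w)]
major = major if major[1] >= 0 else -major            # orient the axis downward
print(np.degrees(np.arctan2(major[0], major[1])))     # 30.0 = 45 - phi/2 degrees from vertical

# Characteristic slopes dx/dz of the hyperbolic osl system: roots of c**2 - mu*c - eta = 0.
c1, c2 = np.roots([1.0, -mu, -eta])
print(np.degrees(np.arctan(c1)), np.degrees(np.arctan(c2)))  # ray angles from the vertical
print(np.isclose(c1 * c2, -1.0))   # slope product -1: the two ray families are orthogonal
```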
The constitutive property that osl models describe is that these characteristic rays (and not, in general, the principal axes) have orientations that are ‘locked in’ on burial of an element, and do not change when a further compatible load is applied. As demonstrated already in Section II, there is every reason to identify such characteristics, in the continuum model, with the mean orientations of force chains in the underlying material. Note that unless the osl parameter is chosen so that $`\mu =0`$, a constitutive singularity on the central axis, as mentioned above for the fpa case, remains. (The characteristics are asymmetrically disposed about the vertical axis, and invert discontinuously at the centreline $`x=0`$.) The case $`\mu =0`$ corresponds to one studied earlier by Bouchaud et al., and of the osl family it is the only candidate for describing a sieved pile, in which the construction history clearly cannot lead to a constitutive singularity at the axis of the pile. The resulting ‘bcc’ model could be called a ‘local Janssen model’ in that it assumes local proportionality of horizontal and vertical compressive stresses – an assumption which, when applied globally to average stresses in a silo, was first made by Janssen. The bcc model predicts a smooth maximum, not a dip, in the pressure beneath the apex of a pile. This is what we expect, therefore, in the case of a sieved pile. #### 2 Rigid-Plastic Models A more traditional, but related, approach is one based on the (Mohr-Coulomb) rigid-plastic model. To find so-called limit state solutions in this model, one postulates that the Coulomb condition is everywhere obeyed with equality. That is, through every point in the material there passes some plane across which the shear force is exactly $`\mathrm{tan}\varphi `$ times the normal force. By assuming this, the model allows closure (modulo a sign ambiguity discussed below) of the equations for the stress without invocation of an elastic strain field. This limit-state analysis of the rigid plastic model is equivalent to assuming a ‘constitutive relation’ (sometimes called ‘incipient failure everywhere’): $$\sigma _{rr}=\sigma _{zz}\frac{1}{\mathrm{cos}^2\varphi }\left[\mathrm{sin}^2\varphi +1\pm 2\mathrm{sin}\varphi \sqrt{1-(\mathrm{cot}\varphi \sigma _{zr}/\sigma _{zz})^2}\right]$$ (9) whereas the Coulomb inequality requires only that $`\sigma _{rr}`$ lies between the two values ($`\pm `$) on the right. It is a simple exercise to show that for a sandpile at its repose angle, only one solution of the resulting equations exists in which the sign choice is everywhere the same. This requires the negative root (conventionally referred to as an ‘active’ solution) and it shows a hump, not a dip, in the vertical normal stress beneath the apex. Savage, however, draws attention to a ‘passive’ solution, having a pronounced dip beneath the apex. The passive solution actually contains a pair of matching planes between an inner region where the positive root of (9) is taken, and an outer region where the negative is chosen. In fact (see Ref. ) there is more than one such matched solution, corresponding to different types of discontinuity in the stress (or its gradient) at the matching plane and/or the pile centre. Moreover, there is no physical principle that limits the number of matching surfaces; by adding extra ones, a very wide variety of results might be achieved. It is interesting to compare the mathematics, and physics, of Eq. (9) with that of the osl models introduced above.
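A direct way to see what Eq. (9) encodes is to substitute either root back into the Mohr–Coulomb yield condition for a cohesionless material, namely that the radius of the Mohr circle equals $`\mathrm{sin}\varphi `$ times its centre. A short numerical sketch (illustrative values only; compression positive):

```python
import numpy as np

def sigma_rr_ife(s_zz, s_zr, phi, sign):
    """Eq. (9): sigma_rr under 'incipient failure everywhere' (sign=-1 active, +1 passive)."""
    root = np.sqrt(1.0 - (s_zr / (np.tan(phi) * s_zz)) ** 2)
    return s_zz * (np.sin(phi) ** 2 + 1.0 + sign * 2.0 * np.sin(phi) * root) / np.cos(phi) ** 2

def coulomb_ratio(s_rr, s_zz, s_zr, phi):
    """Mohr-circle radius over sin(phi) times its centre; equals 1 at incipient failure."""
    centre = 0.5 * (s_rr + s_zz)
    radius = np.hypot(0.5 * (s_rr - s_zz), s_zr)
    return radius / (np.sin(phi) * centre)

phi = np.radians(30.0)
s_zz, s_zr = 1.0, 0.2
for sign in (-1.0, +1.0):
    s_rr = sigma_rr_ife(s_zz, s_zr, phi, sign)
    print(sign, s_rr, coulomb_ratio(s_rr, s_zz, s_zr, phi))   # ratio is 1.0 for both roots
```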
The rigid-plastic model yields a local c.r. among stresses; like osl the resulting equations are hyperbolic. It also exhibits fragility: because a yield plane passes through every material point, certain incremental loads will cause reorganization. Therefore, anyone who defends the rigid-plastic model as a cogent description of sandpiles cannot reasonably object to these same features in our own models. Conversely, we cannot object in principle to a model in which a Coulomb yield plane passes through every material point. However, we still see no reason why it should be a good model; in particular we cannot see how to make a link between the characteristic rays in this model (which are always load dependent) and the underlying geometry of the contact network in the medium. In contrast, this link arises naturally in the osl framework. #### 3 Elastoplastic models The simplest elastoplastic models assume a material in which a perfectly elastic behaviour at small strains is conjoined onto perfect plasticity (the Coulomb condition with equality) at larger ones. In such an approach to the standard sandpile, an inner elastic region connects onto an outer plastic one. In the inner elastic region the stresses obey the Navier equations, which follow from those of Hookean elasticity by elimination of the strain variables. The corresponding strain field is usually not discussed, but tacitly treated as infinitesimal: the high modulus limit is taken. It has been argued that, for a sandpile on a rigid support, fpa-like solutions can be found within a purely elastoplastic model, at least in two dimensions. However, since these show a cusp in the vertical stress on the centreline, they imply an infinitesimal displacement field incompatible with a continuous elasticity across the midline. On the other hand, it is possible to obtain a stress dip, in an elastoplastic model, by assuming that the supporting base is not rigid but subject to basal sag. This explanation cannot account for the data of Huntley, which involves an indentable (rather than sagging) base. Moreover, it would predict a similar dip for a sieved pile, unlike our own models; experiments on this point are now available, and suggest that indeed no dip is seen in this case. Objections to the elastoplastic approach to modelling sandpiles can also be raised at a much more fundamental level. Specifically, to make unambiguous predictions for the stresses in a sandpile, these models require boundary information which, at least for the simpler models, can be given no clear physical meaning or interpretation. We return to this point below. ### B Fragile vs. Elastoplastic Descriptions #### 1 Problems defining an elastic strain In the fpa model and its relatives, strain variables are not considered. No elastic modulus enters, and there is no intrinsic stress scale. The resulting predictions for a conical pile therefore obey what is usually called radial stress-field (rsf) scaling. Formally one has for the stresses at the base $$\sigma _{ij}=ghs_{ij}(r/ch)$$ (10) where $`h`$ is the pile height, $`c=\mathrm{cot}\alpha `$ and $`s_{ij}`$ a reduced stress: $`\alpha `$ is the angle between the free surface and the horizontal so that for a pile at repose, $`\alpha =\varphi `$. This form of rsf scaling, which involves only the forces at the base, might be called the ‘weak form’ and is distinct from the ‘strong form’ in which Eq. (10) is obeyed also with $`z`$ (an arbitrary height from the apex) replacing $`h`$ (the overall height of the pile). 
Our osl models obey both forms; only the weak form has been tested directly by experiment but it is well-confirmed in many systems (Smid and Novosad, Huntley). The observation of rsf scaling, to experimental accuracy, suggests that elastic effects need not be considered explicitly. This does not of itself rule out elastic or elastoplastic behaviour which, at least in the limit of large modulus, can also yield equations for the stress from which the bulk strain fields, and hence also the modulus, cancel. (Note that it is tempting, but entirely wrong, to assume that a similar cancellation occurs at the boundaries of the material; we return to this below.) The cancellation of bulk strain fields in elastoplastic models disguises a serious problem in their application to the standard sandpile and related geometries. The difficulty is this: there is no obvious definition of strain or displacement for such a construction history. To define a physical displacement or strain field, one requires a reference state. In (say) a triaxial strain test (see, e.g., Ref. ) an initial state is made by some reproducible procedure, and strains measured from there. The elastic part is identifiable in principle, by removing the applied stresses (maintaining an isotropic pressure) and seeing how much the sample recovers its shape. In contrast, a pile constructed by pouring grains onto its apex is not made by a plastic and/or elastic deformation from some initial reference state of the same continuous medium. The problem of the missing reference state occurs whenever the solidity of the body itself arises purely because of the load applied. Thus, for the jammed colloid considered in Section II above, the unloaded state is simply a fluid. For the sandpile, it is grains floating freely in space. One cannot satisfactorily define an elastic strain with respect to either of these reference states. A route to defining a strain variable does however exist, so long as one ignores the fact that tensile forces are prohibited. In effect, one assumes that when grains of sand arrive at the free surface, each one forms permanent or “glued” elastic contacts with its neighbours; this contact network can then, by assumption, elastically support arbitrary incremental loads. This is an admissible physical hypothesis, though contradictory to our own hypothesis of an adaptive, fragile granular skeleton. We do not yet know which hypothesis is more correct; the test of this lies in experiment. (It does not lie in a sociological comparison of how physicists and engineers approach their work, as offered by Savage.) If the “glued pile” model is correct, then a strain variable is defined from the relative displacement that has occurred between adjoining particles since the moment they were first glued together. However, the resulting displacement field, found by integrating the strain, is unlikely to be single-valued. Put differently, if a glued assembly is created under gravity and then gravity is switched off, it will revert to a state in which there are residual elastic strains throughout the material, even though there is now no body force acting (Fig. 9(a)). This is because the particle contact network was itself created under partially-loaded conditions. Many elastic and elastoplastic calculations, such as all those reviewed by Savage, entirely ignore the problem of quenched stresses, and therefore embody an implausible “floating model” of a sandpile shown in Fig. 9 (b). 
Note that these effects do not become small when the limit of a large modulus is taken; the quenched stresses remain of order the stress that was acting during formation, and can take both signs (tensile as well as compressive). So, if one creates a glued pile under gravity and then slowly switches off the body force, tensile forces will arise long before $`g`$ has gone to zero. In this sense, the response to gravity of a cohesionless pile is completely nonlinear. Correspondingly, in an unglued pile, no smooth deformation can connect the state of a pile created under gravity with an unloaded state of the same contact network: as the load is removed, such a pile will undergo large-scale reorganization. #### 2 Boundary conditions and determinacy in hyperbolic models Models, such as osl, that assume local constitutive equations among stresses provide hyperbolic differential equations for the stress field. Accordingly, if one specifies a zero-force boundary condition at the free (upper) surface of a granular aggregate on a rough rigid support, then any perturbation arising from a small extra body force (in two dimensions, for simplicity) propagates along two characteristics passing through this point. In the osl models these characteristics are, moreover, straight lines. Therefore the force at the base can be found simply by summing contributions from all the body forces as propagated along two characteristic rays onto the support; the sandpile problem is, within the modelling approach by Bouchaud et al. and Wittmer et al., mathematically well-posed. There is no need to consider any elastic strain field and the paradoxes concerning its definition in cohesionless poured sand, discussed above, do not arise. Note that in principle, one could have propagation also along the ‘backward’ characteristics (see Fig. 7 (a)). This is forbidden since these cut the free surface; any such propagation can only arise in the presence of a nonzero surface force, in violation of the boundary conditions. Therefore the fact that the propagation occurs only along downward characteristics is not related to the fact that gravity acts downward; it arises because we know already the forces acting at the free surface (they are zero). Suppose we had instead an inverse problem: a pile on a bed with some unspecified overload at the top surface, for which the forces acting at the base had been measured. In this case, the information from the known forces could be propagated along the upward characteristics to find the unknown overload. More generally, in osl models, each characteristic ray will cut the surface of a (convex) patch of material at two points; the sum of the forces along the ray at the two ends must then be balanced by the longitudinal component of the body force integrated along the ray (see Fig. 7 (b)). These models are thus “boundary fragile”. In three dimensions, the mathematical structure of these models is somewhat altered, but the conclusions are basically unaffected. The propagation of stresses is governed by a Green’s function which is the response to a localized overload; osl models predict that for (say) sand in a horizontal bed, the maximum response at the base is not directly beneath a localized overload but on a ring of finite radius (proportional to the depth) with this as its axis. (This could be difficult to test cleanly because of noise effects, but there are related consequences for stress-stress correlations which are discussed in Ref. .) 
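The "summing along characteristics" procedure described above is easy to sketch numerically. The toy code below discretizes a two-dimensional pile, propagates the weight of each element along two straight rays of slope $`\pm c`$ (with z measured downward) onto the base, and histograms the result. The equal split of weight between the two rays and all parameter values are illustrative assumptions of ours, not part of the osl derivation.

```python
import numpy as np

def base_profile(H=1.0, phi_deg=30.0, c=0.6, n=200, nbins=101, rho_g=1.0):
    """Toy hyperbolic propagation: each bulk element sends half its weight down
    each of the two straight characteristics dx/dz = +-c onto the base z = H."""
    phi = np.radians(phi_deg)
    zs = np.linspace(0.0, H, n)
    xs = np.linspace(-H / np.tan(phi), H / np.tan(phi), n)
    dz, dx = zs[1] - zs[0], xs[1] - xs[0]
    edges = np.linspace(xs[0] - c * H, xs[-1] + c * H, nbins + 1)
    load = np.zeros(nbins)
    for z in zs:
        inside = np.abs(xs) <= z / np.tan(phi)       # pile interior at depth z
        w = rho_g * dx * dz                          # weight of one element
        for sgn in (+1.0, -1.0):
            xb = xs[inside] + sgn * c * (H - z)      # where the ray meets the base
            idx = np.clip(np.digitize(xb, edges) - 1, 0, nbins - 1)
            np.add.at(load, idx, 0.5 * w)
    return 0.5 * (edges[:-1] + edges[1:]), load

centres, load = base_profile()
print(load.sum())   # total weight of the pile, redistributed onto the base
```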
On the other hand, for different geometries, such as sand in a bin, the stress propagation problem is not well-posed even with hyperbolic equations, unless something is known about the interaction between the medium and the sidewalls. But by assuming a constant ratio of shear to normal forces at the walls, further interesting predictions can be made, for example that the total weight increment measured at the base of a cylindrical silo, in response to an overload on the top, is a nonmonotonic function of the height of the fill. These predictions represent clear signatures of hyperbolic stress propagation and, if confirmed experimentally, would be hard to explain by other means. #### 3 The problem of elastic indeterminacy The well-posedness of the standard sandpile is not shared by models involving the elliptic equations for an elastic body. For such a material, the stresses throughout the body can be solved only if, at all points on the boundary, either the force distribution or a displacement field is specified. Accordingly, once the zero-stress boundary condition is applied at the free surface, nothing can in principle be calculated unless either the forces or the displacements at the base are known (and the former amounts to specifying in advance the solution of the problem). The problem does not arise from any uncertainty about what to do mathematically: one should specify a displacement field at the base. Difficulties nonetheless arise if, as we argued above, no physical significance can be attributed to this displacement field for cohesionless poured sand. To give a specific example, consider the static equilibrium of an elastic cone of finite modulus, which is placed in an unstressed state (without gravity) onto a completely rough, rigid surface; gravity is then switched on. This generates a pressure distribution with a smooth parabolic hump as in Fig. 8a. (The roughness can crudely be represented by a set of pins.) Starting from any initial configuration, another can be generated by pulling and pushing parts of the body horizontally across the base (i.e., changing the displacements there); since the base is rough, the new state will still be pinned and will achieve a new static equilibrium. This will generate a stress distribution, across the supporting surface and within the pile, that differs from the original one. Indeed, if a large enough modulus is now taken (at fixed forces), this procedure allows one to generate arbitrary differences in the stress distribution while generating neither appreciable distortions in the shape of the cone, nor any forces at its free surface. This corresponds to a limit $`Y\to \mathrm{\infty }`$, $`\underset{¯}{u}\to 0`$ at fixed $`Y\underset{¯}{u}`$ where $`Y`$ is the modulus and $`\underset{¯}{u}`$ the displacement field at the base. Analogous remarks apply to any simple elastoplastic theory of sandpiles, in which an elastic zone, in contact with part of the base, is attached at matching surfaces to a plastic zone. A natural presumption for the standard sandpile might be that $`Y\underset{¯}{u}=0`$ (that is, the basal displacements vanish before the high modulus limit is taken). This is consistent with the “glued pile” interpretation of elastic models – one assumes that glue also firmly attaches grains to the support as they arrive. However, the same interpretation, as shown above, also requires explicit consideration of quenched stresses (see Fig. 9). 
Note in any case that elastic and elastoplastic predictions for the sandpile are indeterminate, in a rigorous mathematical sense, if the $`Y\to \mathrm{\infty }`$ limit is taken before the basal displacements $`\underset{¯}{u}`$ have been specified. Experiments (reviewed in detail in Cates et al.) report that for sandpiles on a rough rigid support, the forces on the base can be measured reproducibly; and, although subject to statistical fluctuations on the scale of several grains, do not vary too much among piles constructed in the same way. In contrast, for any simple elastic or elastoplastic model that does not include a specification of the basal displacements, there is a very large indeterminacy in the predicted stress distribution, even after averaging over any statistical fluctuations. An elastoplastic modeller who believes that the experiments measure something well-defined is then obliged to explain why and how the basal displacements (even if infinitesimal) are fixed by the construction history. Note that basal sag is not a candidate for the missing mechanism, since it does not resolve the elastic indeterminacy in these models; the latter arises primarily from the roughness, rather than the rigidity, of the support. An alternative view is that of Evesque, who directly confronts the issue of elastic indeterminacy and seemingly concludes that the experimental results themselves are and must be indeterminate; he argues that the external forces acting on the base of a pile are effectively chosen at will, rather than actually measured, by the experimentalist (see also Ref. ). To what extent this viewpoint is based on experiment, and to what extent on an implicit presumption in favour of elastoplastic theory, is to us unclear. ### C Crossover from Fragile to Elastic Regimes We have emphasized above the very different modelling assumptions of the fragile and elast(oplast)ic approaches to granular media. However, we have recently shown that hyperbolic fragile behaviour can be recovered from an elastoplastic description by taking a strongly anisotropic limit. Moreover, the crossover between elastic and hyperbolic behaviour, at least for one simple model of the granular skeleton, is controlled by the deformability of the granular particles. For simplicity in this section, we restrict attention to the fpa model. The fpa model describes, by definition, a material in which the shear stress must vanish across a pair of orthogonal planes fixed in the medium – those normal to the (fixed) principal axes of the stress tensor. According to the Coulomb inequality (which we also assume) the shear stress must also be less than $`\mathrm{tan}\varphi `$ times the normal stress, across planes oriented in all other directions. Clearly this combination of requirements can be viewed as a limiting case of an elastoplastic model with an anisotropic yield condition: $$|\sigma _{tn}|\le \sigma _{nn}\mathrm{tan}\mathrm{\Phi }(\theta )$$ (11) where $`\theta `$ is the angle between the plane normal $`𝐧`$ and the vertical (say) and $`𝐭\cdot 𝐧=0`$. An anisotropic yield condition should arise, in principle, in any material having a nontrivial fabric, arising from its construction history. The limiting choice corresponding to the fpa model for a sandpile is $`\mathrm{\Phi }(\theta )=0`$ for $`\theta =(\pi -2\varphi )/4`$ (this corresponds to planes where $`𝐧`$ lies parallel to the major principal axis), and $`\mathrm{\Phi }(\theta )=\varphi `$ otherwise. 
(There is no separate need to specify the second, orthogonal plane across which shear stresses vanish, since this is assured by the symmetry of the stress tensor.) By a similar argument, all other osl models can also be cast in terms of an anisotropic yield condition, of the form $`|\sigma _{tn}-\sigma _{nn}\mathrm{tan}\mathrm{\Psi }(\theta )|\le \sigma _{nn}\mathrm{tan}\mathrm{\Phi }(\theta )`$ where $`\mathrm{\Phi }(\theta )`$ vanishes, and $`\mathrm{\Psi }(\theta )`$ remains finite, for two values of $`\theta `$. (This fixes a nonzero ratio of shear and normal stresses across certain special planes.) At this purely phenomenological level there is no difficulty in connecting hyperbolic models smoothly onto highly anisotropic elastoplastic descriptions. Specifically, consider a medium having an orientation-dependent friction angle $`\mathrm{\Phi }(\theta )`$ that does not actually vanish, but is instead very small ($`ϵ`$, say) in a narrow range of angles (say of order $`ϵ`$) around $`\theta =(\pi -2\varphi )/4`$, and approaches $`\varphi `$ elsewhere. (One interesting way to achieve the required yield anisotropy is to have a strong anisotropy in the elastic response, and then impose a uniform yield condition on the strains, rather than stresses.) Such a material will have, in principle, mixed elliptic/hyperbolic equations of the usual elastoplastic type. The resulting elastic and plastic regions must nonetheless arrange themselves so as to obey the fpa model to within terms that vanish as $`ϵ\to 0`$. If $`ϵ`$ is small but finite, then for this elastoplastic model the results will depend on the basal boundary condition, but only through these higher order corrections to the leading (fpa) result. Thus, although elastoplastic models do suffer from elastic indeterminacy (they require a basal displacement field to be specified), the extent of the influence of the boundary condition on the solution depends on the model chosen. Strong enough (fabric-dependent) anisotropy, in an elastoplastic description, might so constrain the solution that it is primarily the granular fabric (hence the construction history) and only minimally the boundary conditions which actually determine the stresses in the body. For models such as that given above there is a well-defined limit where the indeterminacy is entirely lifted, hyperbolic equations are recovered, and it is quite proper to talk of local stress propagation ‘rules’ determined by the construction history of the material. Our continuum modelling framework is based precisely on these assumptions. The crossover just outlined can also be understood directly in terms of the micromechanics of force chains, at least within the simplified picture developed in Section II. We consider a regular lattice of force chains (see Fig. 2 (b)), for simplicity rectangular (the fpa case), which is fragile if the chains can support only longitudinal forces. As mentioned in Section III C, this is true so long as such paths consist of linear chains of rigid particles, meeting at frictional point contacts: the forces on all particles within each chain must then be collinear, to avoid torques. This imposes the (fpa) requirement that there are no shear forces across a pair of orthogonal planes normal to the force chains themselves. Suppose now a small degree of particle deformability is introduced. This relaxes slightly the collinearity requirement, but only because the point contacts are now flattened (see Fig. 3 (b)). 
The ratio $`ϵ`$ of the maximum transverse load to the normal one will therefore vanish with (some power of) the mean compression. This yield criterion applies only across two special planes; failure across others is governed by some smooth yield requirement (such as the ordinary Coulomb condition: the ratio of the principal stresses lies between given limits). The granular skeleton just described, which was fragile in the limit of rigid grains, is now governed by a strongly anisotropic elastoplastic yield criterion of precisely the kind described above. This indicates how a packing of frictional, deformable rough particles, displaying broadly conventional elastoplastic features when the deformability is significant, can approach a fragile limit when the limit of a large modulus is taken. (It does not prove that all packings become fragile in this limit.) Conversely it shows how a packing that is basically fragile in its response to a gravitational load could nonetheless support very small incremental deformations, such as sound waves, by an elastic mechanism. The question of whether sandpiles are better described as fragile, or as ordinarily elastoplastic, remains open experimentally. To some extent it may depend on the question being asked. However, we have argued, on various grounds, that in calculating the stresses in a pile under gravity a fragile description may lie closer to the true physics. ## IV Conclusions The jammed state of colloids, if it indeed exists in the laboratory, has not yet been fully elucidated by experiment. It is interesting that even very simple models, such as Eq. (1), can lead to nontrivial and testable predictions (such as the constancy of certain measured stress ratios). Such models suggest an appealing conceptual link between jamming, force chains, and fragile matter. However, further experiments are needed to establish the degree to which they are useful in describing real colloids. For granular media, the existence of tenuous force-chain skeletons is clear; the question is whether such skeletons are fragile. Several theoretical arguments have been given, above and elsewhere, to suggest that this may be the case, at least in the limit of rigid particles. Moreover, simulations show strong rearrangement under small changes of compression axis; the skeleton is indeed “self-organized”. Experiments also suggest cascades of rearrangement in response to small disturbances. These findings are consistent with the fragile picture. The standard sandpile (a conical pile formed by pouring onto a rough rigid support) has played a central role in our discussions. From the perspective of geotechnical engineering, the problem of calculating stresses in the humble sandpile may appear to be of only marginal importance. The physicist’s view is different: the sandpile is important, because it is one of the simplest problems in granular mechanics imaginable. It therefore provides a test-bed for existing models and, if these show shortcomings, may suggest ideas for improved physical theories of granular media. Given the present state of the data, a conventional elastoplastic interpretation of the experimental results for sandpiles may remain tenable; more experiments are urgently required. In the meantime, a desire to keep using tried-and-tested modelling strategies until these are demonstrably proven ineffective is quite understandable. 
We find it harder to accept the suggestion that anyone who questions the general validity of traditional elastoplastic thinking is somehow uneducated. In summary, we have discussed a new class of models for stress propagation in granular matter. These models assume local propagation rules for stresses which depend on the construction history of the material and which lead to hyperbolic differential equations for the stresses. As such, their physical basis is substantially different from that of conventional elastoplastic theory. Our approach predicts ‘fragile’ behaviour, in which stresses are supported by a granular skeleton of force chains that respond by finite internal rearrangement to certain types of infinitesimal load. Obviously, such models of granular matter might be incomplete in various ways. Specifically we have discussed a possible crossover to elastic behaviour at very small incremental loads, and to conventional elastoplasticity at very high mean stresses (when significant particle deformations arise). However, we believe that our approach, by capturing at the continuum level at least some of the physics of force chains, may offer important insights that lie beyond the scope of previous continuum modelling strategies. ## V Acknowledgment We thank R. C. Ball, E. Clement, C. S. and M. J. Cowperthwaite, S. F. Edwards, P. Evesque, P.-G. de Gennes, G. Gudehus, J. Goddard, D. Levine, C. E. Lester, J. Melrose, S. Nagel, F. Radjai, J.-N. Roux, J. Socolar, C. Thornton, L. Vanel and T. A. Witten for illuminating discussions. Work funded in part by EPSRC (UK) GR/K56223 and GR/K76733.
# Depletion forces near curved surfaces ## Abstract Based on density functional theory the influence of curvature on the depletion potential of a single big hard sphere immersed in a fluid of small hard spheres with packing fraction $`\eta _s`$ either inside or outside of a hard spherical cavity of radius $`R_c`$ is calculated. The relevant features of this potential are analyzed as a function of $`\eta _s`$ and $`R_c`$. There is a very slow convergence towards the flat wall limit $`R_c\to \mathrm{\infty }`$. Our results allow us to discuss the strength of depletion forces acting near membranes both in normal and lateral directions and to make contact with recent experimental results. Many biological processes are controlled by the interactions of macromolecules with cell membranes. Besides highly specific interactions of steric and chemical nature there are also entropic force fields which are omnipresent but whose actions depend only on gross geometrical features. These so-called depletion forces arise because both the membranes and the macromolecules generate excluded volumes for the small particles forming the solvent. Although these forces have been discussed for biological systems for many years , the simultaneous presence of many other forces severely impedes the precise analysis of depletion forces in such systems. Therefore it is highly welcome that dissolved colloidal particles can be tailored such that they resemble closely the effective model of monodispersed hard spheres confined by hard walls . This allows one to study the depletion forces exclusively and to compare them quantitatively with theoretical results. Once a satisfactory level of understanding has been reached for these model systems one can apply with confidence this knowledge to the much more complex biological systems. Moreover, in colloidal suspensions themselves depletion forces can be exploited to organize self-assembled structures . This dedicated use requires a detailed knowledge of the mechanisms of depletion forces, too. Along these lines in recent years there has been significant progress in understanding the depletion forces between two big spheres and between a single big sphere and a flat wall based on experiments , analytic results , and simulations . A generic feature of membranes is, however, that they are not flat. The variation of the local curvature leads to a new quality of the depletion forces in that they are no longer directed only normal to the surface, as for a flat wall or a wall with constant curvature, but that there is also a lateral component which promotes transport along the membrane. So far there are no systematic theoretical studies available which accurately predict this important curvature dependence of the depletion forces. Based on the quantitatively reliable density functional theory developed by Rosenfeld we compute the depletion potential for a big sphere of radius $`R_b`$ either inside or outside of a spherical cavity with radius $`R_c`$, filled with a solvent of hard spheres of radius $`R_s`$, as a function of $`R_c`$ and of the packing fraction of the small spheres $`\eta _s`$ defined as the fraction of the total volume occupied by the small spheres. This enables us to make contact with a recent experiment in which the curvature dependence of depletion forces has been investigated by monitoring colloidal particles enclosed in vesicles . 
Figure 1 shows our results for the depletion potential $`W(r;R_c)`$ in units of $`k_BT=\beta ^{-1}`$ inside and outside of a hard spherical cavity centered at $`r=0`$. Here $`W(r;R_c)`$ is the difference of the grand canonical free energies of the hard sphere solvent in the presence and absence, respectively, of a big hard sphere whose center is fixed at a distance $`r`$ from the center of the cavity. The choice of the parameters corresponds to those of the experiment in Ref. . The solid line denotes the actual depletion potential compared with its so-called Asakura-Oosawa approximation (AOa) (dotted line) which is valid only for such small values of $`\eta _s`$ that the solvent can be treated as an ideal gas and which is determined by the overlap of the excluded volumes around the big sphere and the hard wall. The AOa, which has been used to interpret the experimental data in Ref. , satisfactorily predicts the value $`W_c`$ of the depletion potential at contact and its derivative with respect to $`R_c`$, but otherwise it fails considerably. Whereas $`W_{AOa}(r;R_c)`$ is purely attractive and vanishes for $`r\le R_c-R_b-2R_s`$, the actual potential is both attractive and repulsive with the wavelength of the oscillations approximately given by $`2R_s`$. The correlation effects of the solvent generate the potential barrier $`\mathrm{\Delta }W_r`$ (see Fig. 1) which is completely missing within the AOa, and they increase the range of the potential significantly beyond the AOa range of $`2R_s`$. The presence of this potential barrier has very pronounced repercussions for the diffusion dynamics of the big particle. The time $`\tau `$ required to overcome a potential barrier $`\mathrm{\Delta }E`$ is proportional to $`\mathrm{exp}(\beta \mathrm{\Delta }E)`$. Therefore, with $`\mathrm{\Delta }E=\mathrm{\Delta }W_r`$, for the big sphere it takes about $`e^{2.5}\approx 12`$ times longer to reach the wall from the center of the cavity or to escape from the wall as compared with the time estimated from the AOa (see Fig. 1). This difference still awaits experimental confirmation. For larger packing fractions $`\eta _s`$ this difference becomes even larger. Thus the value $`W_c=\mathrm{\Delta }W_r-\mathrm{\Delta }W_e`$ of the depletion potential at contact can be obtained from the ratio of rates of escaping from the wall and reaching it, respectively. Taking into account only the former one, as suggested by the AOa, would be misleading. The overall structure of the depletion potential outside of the cavity is similar to the one inside, but its amplitude is reduced (Fig. 1). This is in line with the trend predicted by the AOa which expresses the fact that the overlap between the excluded volumes around a big sphere and the wall is larger inside the spherical cavity than outside. This geometrical difference between the excluded overlap volume inside and outside of the cavity is schematically drawn as inset in Fig. 1. In the inset the boundaries of the excluded volumes around the big sphere and the cavity wall are indicated by dashed lines and the overlap of excluded volumes is shaded. In Figs. 2 and 3 we discuss as a function of the packing fraction $`\eta _s`$ the relevance of the cavity curvature for the depletion potential relative to the case of a planar wall, i.e., $`R_c\to \mathrm{\infty }`$, by focusing on the potential at contact $`W_c`$ and the escape potential barrier $`\mathrm{\Delta }W_e`$, respectively. As expected in the limit $`R_c\to \mathrm{\infty }`$ both quantities reach a common value for the outside and inside potential. 
These common values are in very good agreement with simulation data for the planar wall . For all values of $`R_c`$ the absolute values of the potential parameters inside are larger than outside. They increase faster than linearly with increasing $`\eta _s`$. For increasing curvature $`1/R_c`$ the outside potential becomes weaker whereas the inside potential becomes much stronger; this holds for all values of $`\eta _s`$. The gain of strength of the inside potential upon increasing the cavity curvature is much more pronounced than the corresponding loss of strength on the outside. This difference between the behavior inside and outside widens strongly with increasing $`\eta _s`$. The dependence of the depletion potential on the cavity curvature is surprisingly strong even for large values of $`R_c/R_s`$. For $`\eta _s=0.3`$ and $`R_c=50R_s`$ the density profile $`\rho _s(r)`$ of the solvent without the big sphere near the curved wall differs only slightly from that near a flat wall; its contact value differs by $`1.5\%`$ from the corresponding flat one. However, the potential at contact still differs by almost $`10\%`$ from the corresponding flat value. This amplification of the influence of the curvature can be understood in terms of the geometric considerations of the overlap volumes leading to $$\beta W_c^{AOa}(R_c)=-\eta _s\left(1+\frac{3sR_c}{R_c+\gamma R_b}\right)$$ (1) with $`s=R_b/R_s`$ and $`\gamma =+1`$ and $`-1`$ for outside and inside, respectively. Within AOa the absolute value of the amplitude of the first curvature correction is the same inside and outside as can be seen from $$\frac{\beta W_c^{AOa}(R_c)}{\beta W_c^{AOa}(\mathrm{\infty })}=1-\frac{3\gamma s^2}{3s+1}\frac{1}{R_c/R_s}+𝒪\left((R_c/R_s)^{-2}\right).$$ (2) For $`R_b\gg R_s`$, i.e., $`s\gg 1`$ the curvature dependence of $`W_c`$ is significantly enhanced. In a recent experiment Dinsmore et al. used video microscopy to monitor the position of a single big colloid particle with $`R_b=5.7R_s`$ immersed in a solution of small colloids with $`\eta _s=0.3`$ inside a vesicle. These were charge-stabilized colloids which are supposed to resemble closely the model system of hard spheres confined by a hard wall. From the probability $`p(𝐱)`$ of finding the big colloid at the position $`𝐱`$ – the quantity actually measured in the experiment – one can infer the depletion potential $`𝒲(𝐱)`$ according to $`p(𝐱)=p_{bulk}e^{-\beta 𝒲(𝐱)}`$ with $`𝒲(𝐱)=0`$ in the bulk. Thus the spatial resolution of the experimentally determined $`𝒲(𝐱)`$ reflects that of $`p(𝐱)`$, which at best was approximately $`2R_s`$ in Ref. . In the experiment the big colloid particle was observed to be most of the time very close to the vesicle wall inside a shell whose width was chosen to be $`6.7R_s`$, i.e., even larger than the optimal resolution $`2R_s`$. Approximating $`𝒲(𝐱)`$ by $`W(r;R_c)`$, where $`r`$ is the minimal distance between $`𝐱`$ and the vesicle wall and $`R_c`$ the local radius of the wall at this closest point (see below), the experimentally determined depletion potential therefore corresponds to an average of $`W(r;R_c)`$ shown in Fig. 1 over all visible oscillations. Within this shell the big colloid was observed to be more often in regions with a small local radius of curvature. This is in qualitative agreement with both the AOa and our present approach. Quantitatively, in Ref. the ratio $`p_{shell}/p_{bulk}`$ of the probabilities of finding the big particle within the shell and in the bulk, respectively, was determined. 
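For orientation, Eqs. (1) and (2) are simple enough to evaluate directly. The snippet below reproduces the AOa contact values inside and outside a cavity and their flat-wall limit, using the parameter values quoted in the text ($`\eta _s=0.3`$, $`R_b=5.7R_s`$); the choice of cavity radii is illustrative.

```python
import numpy as np

def beta_Wc_AOa(eta_s, s, Rc_over_Rs, gamma):
    """Eq. (1): AOa depletion potential at contact; gamma = +1 outside, -1 inside.
    Lengths are measured in units of R_s, so R_b = s."""
    Rc, Rb = Rc_over_Rs, s
    return -eta_s * (1.0 + 3.0 * s * Rc / (Rc + gamma * Rb))

eta_s, s = 0.3, 5.7
flat = -eta_s * (1.0 + 3.0 * s)          # R_c -> infinity limit
for Rc in (20.0, 50.0, 200.0):
    inside = beta_Wc_AOa(eta_s, s, Rc, -1)
    outside = beta_Wc_AOa(eta_s, s, Rc, +1)
    print(Rc, inside, outside, flat)
```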
The authors have documented the logarithm of this ratio, denoted as $$\beta \mathrm{\Delta }F:=-\mathrm{ln}\frac{p_{shell}}{p_{bulk}}\approx -\mathrm{ln}\left({\displaystyle \int _{V_{shell}}}d^3r\,e^{-\beta W(r;R_c)}/V_{shell}\right),$$ (3) relative to its value $`\beta \mathrm{\Delta }F_{\mathrm{\infty }}`$ near a flat wall, i.e., for $`R_c\to \mathrm{\infty }`$; $`V_{shell}`$ is the volume of the shell. In Fig. 4 we compare the published data for $`\beta \mathrm{\Delta }F-\beta \mathrm{\Delta }F_{\mathrm{\infty }}`$ with our prediction for this quantity based on the actual depletion potential as shown in Fig. 1, with the corresponding prediction if in Eq. (3) and for $`\beta \mathrm{\Delta }F_{\mathrm{\infty }}`$ the full AOa is used, and with the so-called truncated AOa, $`\beta W_c^{AOa}-\beta W_{c,\mathrm{\infty }}^{AOa}`$, i.e., the AOa values for $`W(r;R_c)`$ at wall contact, which was used for comparison with theory in Ref. . We find that our theoretical prediction as well as the full AOa are closer to the experimental data than the truncated AOa. Given the high accuracy of our calculation the remaining discrepancy cannot be due to insufficient theoretical knowledge of $`W(r;R_c)`$. The fact that in Ref. only the in-plane radius of curvature could be determined, polydispersity of the solvent, and the possible presence of dispersion forces are among the candidates for explaining this discrepancy. The small difference between the full AOa and our theoretical results in Fig. 4 demonstrates that the present spatial resolution cannot discriminate between the rich structure shown in Fig. 1 and its AOa. This emphasizes the need for future experiments with significantly increased spatial resolution. For a cavity of general shape $`z=f(𝐑=(R_x,R_y))`$ of its surface relative to a suitable chosen $`(x,y)`$ reference plane our results allow us to determine approximately lateral depletion forces. To this end we introduce normal coordinates $`(R_x,R_y,r)`$ such that $`𝐱=(x,y,z)=(R_x,R_y,f(𝐑))+r𝐧(𝐑)`$ where $`r`$ is the minimal distance of the point $`𝐱`$ from the surface $`f(𝐑)`$, $`(R_x,R_y)`$ ($`(x,y)`$) are the lateral coordinates of that point on the surface closest to $`𝐱`$, and $`𝐧=(-\nabla f(𝐑),1)/\sqrt{1+(\nabla f)^2}`$ is the local surface normal pointing towards $`𝐱`$. The actual depletion potential $`𝒲(𝐱;[f])`$, which depends functionally on $`f(𝐑)`$, can be approximated by $`𝒲(𝐱;[f])\approx W(r(𝐱);R_c(𝐑(𝐱)))`$. Here we have assumed that the cavity surface varies sufficiently smoothly so that its two local principal radii of curvature can be described by the single radius $`R_c(𝐑)`$. Within this approximation the components $`F_{lat}^{(i)}(𝐱)`$ of the lateral depletion force in the direction of the tangential vector $`𝐭_i`$, $`i=1,2`$, is given by $$F_{lat}^{(i)}(𝐱)=-\frac{\partial W(r;R_c(𝐑))}{\partial R_c}\underset{j=x,y}{\sum }\frac{\partial R_c(𝐑)}{\partial R_j}\,𝐭_i\cdot \nabla R_j,$$ (4) with $`r=r(𝐱;[f])`$, $`𝐑=𝐑(𝐱;[f])`$, $`\nabla =\partial /\partial 𝐱`$, and $`𝐧\cdot 𝐭_i=0=𝐭_1\cdot 𝐭_2`$. The factor $`\partial W/\partial R_c`$ can be determined from our above results. Its absolute value increases with decreasing radius of curvature. Its sign differs inside and outside of the cavity and changes as a function of the normal distance $`r`$. A big sphere approaching the convex nonspherical wall from inside (outside) is exposed to an oscillating lateral force, which pulls the sphere towards regions of the surface with a small local radius of curvature close to a minimum (maximum) of the depletion potential and pushes it away from these regions close to a maximum (minimum). These oscillations of the lateral force originate from the packing effects of the small spheres. 
When the big sphere has reached the wall the lateral force pulls it along the surface to that point with the smallest local radius of curvature inside the cavity and pushes it away from this point towards the point with the largest local radius of curvature outside of the cavity, respectively. This can already be understood within the AOa which predicts $`\partial (\beta W_c^{AOa}(R_c))/\partial R_c=-3\eta _s\gamma sR_b(R_c+\gamma R_b)^{-2}`$ (see Eq. (1)). Since Eq. (1) provides a very good description of the contact value of the depletion potential, even for high packing fractions, the above expression for its derivative is also quantitatively reliable. Applying Eq. (4) for an ellipsoidal cavity with semi-axes $`a=b=20R_s`$ and $`c=30R_s`$ and for a big sphere with $`R_b=5R_s`$ immersed in a fluid of small spheres with $`\eta _s=0.3`$ yields at contact a maximal ratio of the lateral force to the normal force of $`|F_{lat}/F_{norm}|=0.07`$ inside and outside of the cavity. Such quantitative predictions for the size of entropic lateral forces along curved confining walls still await tests by simulations and experiments.
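Eq. (3) amounts to a Boltzmann-weighted average of the depletion potential over the shell, and the sketch below carries out that average numerically for the inside of a spherical cavity. As a stand-in for the full $`W(r;R_c)`$ it uses the AOa overlap-volume potential (our simplifying assumption); the shell width of $`6.7R_s`$ and the other numbers are taken from the text above.

```python
import numpy as np

Rs, Rb, Rc, eta_s = 1.0, 5.7, 50.0, 0.3
a, b = Rb + Rs, Rc - Rs                      # exclusion radii: big sphere and cavity wall
v_small = 4.0 * np.pi * Rs**3 / 3.0

def lens_volume(d, r1, r2):
    """Intersection volume of two spheres of radii r1, r2 at centre separation d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return 4.0 * np.pi * min(r1, r2)**3 / 3.0
    return np.pi * (r1 + r2 - d)**2 * (d**2 + 2*d*(r1 + r2) - 3*(r1 - r2)**2) / (12.0 * d)

def beta_W(r):
    """AOa stand-in for W(r; R_c): overlap of the two exclusion zones inside the cavity."""
    v_ov = 4.0 * np.pi * a**3 / 3.0 - lens_volume(r, a, b)
    return -eta_s * v_ov / v_small

shell = 6.7 * Rs                              # shell width used in the experiment
r = np.linspace(Rc - Rb - shell, Rc - Rb, 4000)
w = np.array([np.exp(-beta_W(x)) for x in r])
beta_dF = -np.log(np.trapz(w * r**2, r) / np.trapz(r**2, r))   # Eq. (3) over a spherical shell
print(beta_dF)
```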
# On the Path Integral of the Relativistic Electron ## I Introduction It is well known that the continuum propagator of the Dirac equation can be found by summing over random walks. Renewed interest in this issue has arisen in connection with the investigation of stochastic processes which have been shown to be related to the Dirac equation (Gaveau et al. 1984, McKeon and Ord 1992). Likewise, the correspondence between the path integral and the Ising model has been explored (Gersch 1981, Jacobson and Schulman 1984) and solutions for a discretized version of the Dirac equation have been found (Kauffman and Noyes, 1996). As described by Feynman and Hibbs (1965), the propagator of the $`1+1`$ dimensional Dirac equation $$\text{i}\partial \mathrm{\Psi }/\partial t=-\text{i}\sigma _z\partial \mathrm{\Psi }/\partial x-m\sigma _x\mathrm{\Psi }$$ (1) (where units $`c=\hbar =1`$ are assumed and $`\sigma _x`$ and $`\sigma _z`$ are the respective Pauli spin matrices) can be found from a model of the one-dimensional motion of a relativistic particle. In this model the motion of the electron is restricted to movements either forward or backward occurring at the speed of light. Assuming units $`c=1`$, the motion of the particle corresponds to a sequence of straight path segments of slope $`\pm 45^{\circ }`$ in the x-t plane. The retarded propagator $`K(x,t)`$ of the Dirac equation may then be obtained from the limiting process (see e.g. Feynman and Hibbs 1965, Jacobson and Schulman 1984) $$K_{\delta \gamma }(x,t)=\underset{N\to \mathrm{\infty }}{lim}A_{\delta \gamma }(ϵ)\underset{R\ge 0}{\sum }N_{\delta \gamma }(R)(\text{i}mϵ)^R.$$ (2) Here, $`N`$ is the number of segments of constant length $`ϵ=t/N`$ of the particle’s path between its start point (which is assumed to be the origin of the corresponding coordinate system) and the end point $`(x,t)`$ of the path. $`R`$ denotes the number of bends while $`N_{\delta \gamma }(R)`$ stands for the total number of paths consisting of $`N`$ segments with $`R`$ bends. The indices $`\gamma `$ and $`\delta `$ correspond to the directions forward or backward at the path’s start and end points, respectively, and refer to the components of $`K`$. $`A_{\delta \gamma }(ϵ)`$ accounts for a convenient normalization. ## II Model and Calculations In this short note we demonstrate that a minor conceptual change of Feynman’s chessboard model naturally and directly yields exact solutions to the Dirac equation (1). The conceptual change is suggested by the observation that a path with $`R`$ bends between given start and end points is determined by $`R-1`$ bends. For a sketch of the situation consider Figure 1. The path shown in Figure 1 exhibits five bends, three to the left and two to the right. However, the first two bends to the right and left, respectively, determine the path since the location and direction of the last bend (indicated by a circle in Figure 1) are fully determined by the first four bends. We thus consider here, in contrast to the original formulation of the model where all bends occurring on a path contribute to the total amplitude, only contributions to the total amplitude from bends which actually define the path. In the light of the general path integral formalism it makes perfect sense to consider only those bends which define a path, i.e. the minimum information characterizing a path. In the following we demonstrate by an explicit calculation that the modified model directly leads to exact solutions of the Dirac equation (1). 
We will use a calculation scheme different from the combinatorial approach envisaged by Feynman and Hibbs (1965) and its Ising model correspondence (Gersch, 1981). Following Feynman’s chessboard model we consider each bend which defines a possible path to contribute an amplitude $$\varphi _{j_r}=\text{i}mϵ_{j_r}$$ (3) where $`ϵ_{j_r}=ϵ`$ is the length of a path segment. The total amplitude contributed by a path is the product $$\varphi =\underset{r}{\prod }(\text{i}mϵ_{j_r})$$ (4) where $`j_r`$ runs over all the segments followed by a bend. While the index $`r`$ enumerates the path segments after which bends occur, the value of $`j_r`$ indicates the corresponding segment. A path with $`R`$ bends which starts with positive velocity (i.e. to the right) and ends with negative velocity (i.e. to the left) consists of exactly $`(R-1)/2+1`$ bends to the left and $`(R-1)/2`$ to the right. The $`(R-1)/2`$ bends to the right may occur after any arbitrary path segment to the left. $`(R-1)/2`$ of the $`(R-1)/2+1`$ bends to the left occur in the same manner after path segments to the right while the additional bend to the left occurs after the last segment. Let $`P`$ be the total number of path segments to the right (+) and $`Q`$ those to the left (-). Then, the contribution of the $`R^+=(R-1)/2`$ bends to the right to $`\mathrm{\Psi }_{-+}`$ is $`\mathrm{\Psi }_{-+}(R^+)`$ $`=`$ $`N_{-+}(R^+){\displaystyle \underset{r=1}{\overset{R^+}{\prod }}}(\text{i}mϵ_{j_r})`$ (5) $`=`$ $`{\displaystyle \underset{j_1<\mathrm{}<j_{R^+}}{\overset{P-1}{\sum }}}(\text{i}mϵ)^{R^+}`$ (6) For $`P\gg 1`$, $`\mathrm{\Psi }_{-+}(R^+)`$ is approximated by $`\mathrm{\Psi }_{-+}(R^+)`$ $`\approx `$ $`{\displaystyle \frac{1}{R^+!}}{\displaystyle \underset{j_1,\mathrm{},j_{R^+}}{\overset{P}{\sum }}}(\text{i}mϵ)^{R^+}`$ (7) $`\approx `$ $`{\displaystyle \frac{(\text{i}mϵ)^{R^+}}{R^+!}}\left({\displaystyle \underset{j_r=1}{\overset{P}{\sum }}}1\right)^{R^+}`$ (8) $`=`$ $`{\displaystyle \frac{P^{R^+}(\text{i}mϵ)^{R^+}}{R^+!}}`$ (9) The contribution of the $`R^{-}=[((R-1)/2+1)-1]=(R-1)/2`$ bends to the left is calculated similarly. The additional bend (occurring after the last segment to the right) does not enter the calculation since a possible path is fully determined by the location of its $`R-1`$ bends to the right and left, respectively. Therefore we find $`\mathrm{\Psi }_{-+}(R^{-})`$ $`\approx `$ $`{\displaystyle \frac{Q^{R^{-}}(\text{i}mϵ)^{R^{-}}}{R^{-}!}}`$ (10) In the limit $`N\to \mathrm{\infty }`$ (i.e. $`P,Q\to \mathrm{\infty }`$) the exact expression for $`\mathrm{\Psi }_{-+}`$ becomes $$\mathrm{\Psi }_{-+}=\underset{\text{odd }R}{\sum }(\text{i}mϵ)^{R-1}\frac{(PQ)^{(R-1)/2}}{[((R-1)/2)!]^2}$$ (11) where $`ϵ=t/(P+Q)`$. With $`v=\mathrm{\Delta }x/\mathrm{\Delta }t=x/t=(P-Q)/(P+Q)`$ the classical velocity attributed to the particle, $`PQ=((P+Q)/(2\gamma ))^2`$ where $`\gamma =1/\sqrt{1-v^2}`$. Thus we have $$\mathrm{\Psi }_{-+}=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}(-1)^k\frac{(mt/2\gamma )^{2k}}{[(k)!]^2}=J_0(mt/\gamma )$$ (12) where $`J_0`$ is the zeroth order Bessel function of the first kind. A similar calculation yields for $`\mathrm{\Psi }_{+-}`$ the same result. For $`\mathrm{\Psi }_{++}`$, the number of bends to the right and to the left is $`R/2`$ for each direction where $`R`$ is even. However, the path is again defined by $`R^+=R/2`$ bends to the right and $`R^{-}=R/2-1`$ bends to the left. 
Thus, $`\mathrm{\Psi }_{++}`$ $`=`$ $`{\displaystyle \underset{\text{even }R}{\sum }}(\text{i}mϵ)^{R-1}{\displaystyle \frac{P^{R/2}Q^{R/2-1}}{(R/2)!(R/2-1)!}}`$ (13) $`=`$ $`\text{i}\sqrt{P/Q}{\displaystyle \underset{k=0}{\overset{\mathrm{\infty }}{\sum }}}(-1)^k{\displaystyle \frac{(mt/2\gamma )^{2k+1}}{(k+1)!(k)!}}`$ (14) $`=`$ $`\text{i}\sqrt{P/Q}J_1(mt/\gamma ).`$ (15) With $`\sqrt{P/Q}=(t+x)/(t^2-x^2)^{1/2}`$ and $`\tau =(t^2-x^2)^{1/2}`$ the component $`\mathrm{\Psi }_{++}`$ becomes $$\mathrm{\Psi }_{++}=\text{i}(t+x)/\tau J_1(mt/\gamma ).$$ (16) A similar calculation yields $$\mathrm{\Psi }_{--}=\text{i}(t-x)/\tau J_1(mt/\gamma ).$$ (17) This completes the envisaged computation. As a side remark note that the presented calculation scheme is not restricted to $`ϵ_{j_r}=ϵ`$. As will be shown elsewhere, similar results may be obtained for $`ϵ_{j_r}=ϵ(j_r)`$. ## III Discussion To relate the components $`\mathrm{\Psi }_{\delta \gamma }`$ to the solution of the Dirac equation (1) consider the explicit representation $$\sigma _x=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\sigma _z=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right).$$ (18) As may be seen by direct calculation, in this representation $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$ defined as $$\mathrm{\Psi }_1=\left(\begin{array}{c}\mathrm{\Psi }_{++}\\ \mathrm{\Psi }_{-+}\end{array}\right);\mathrm{\Psi }_2=\left(\begin{array}{c}\mathrm{\Psi }_{+-}\\ \mathrm{\Psi }_{--}\end{array}\right)$$ (19) are two independent, exact solutions of the Dirac equation (1). This completes the demonstration that Feynman’s chessboard model yields exact solutions to the Dirac equation when taking into account only those bends which actually define paths. With regard to fundamental theories of spacetime and/or quantum mechanics (e.g. in the spirit of Finkelstein, 1974) this could be of importance. Somewhat similar results have been obtained from the continuum limit of a discretized version of the Dirac equation (Kauffman and Noyes, 1996). The calculation scheme and part of the results presented here can be generalized to unevenly spaced spacetime lattices. This opens up the possibility of defining an analogue of the Feynman checkerboard for discrete spacetime models of different type (e.g. Kull and Treumann, 1994). Related work is in progress and will be presented elsewhere. A.K. would like to thank Dr. O. Forster for valuable discussions. Part of the work of A.K. has been supported by the Swiss National Foundation, grant 81AN-052101.
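The "direct calculation" mentioned above can be delegated to a computer algebra system. The sketch below forms the residual of Eq. (1) for $`\mathrm{\Psi }_1`$ (with $`mt/\gamma =m\tau `$) and evaluates it at an arbitrary sample point inside the light cone; the sample values are ours, and a fully symbolic simplification to zero works as well but is slower.

```python
import sympy as sp

t, x, m = sp.symbols('t x m', positive=True)
tau = sp.sqrt(t**2 - x**2)
psi_pp = sp.I * (t + x) / tau * sp.besselj(1, m * tau)   # Psi_{++}, Eq. (16)
psi_mp = sp.besselj(0, m * tau)                          # Psi_{-+}, Eq. (12)
Psi1 = sp.Matrix([psi_pp, psi_mp])

sx = sp.Matrix([[0, 1], [1, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
# residual of Eq. (1): i dPsi/dt + i sigma_z dPsi/dx + m sigma_x Psi
res = sp.I * Psi1.diff(t) + sp.I * sz * Psi1.diff(x) + m * sx * Psi1
vals = {t: 2.0, x: 0.7, m: 1.3}
print([r.subs(vals).evalf(10) for r in res])   # ~ [0, 0] up to numerical round-off
```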
# The Early Afterglow ## 1 Introduction In the internal-external scenario, the GRB is produced by internal shocks while the afterglow is produced by the interaction of the flow with the ISM. The original fireball model was invoked to explain the Gamma-Ray Burst (GRB) phenomenon. It requires extreme relativistic motion, with a Lorentz factor $`\gamma \gtrsim 100`$. The afterglow observations, which fit the theory rather well, are considered as a confirmation of the fireball model. However, the current afterglow observations, which detect radiation from several hours after the burst onwards, do not probe the initial extreme relativistic conditions. By the time of the present observations, several hours after the burst, the Lorentz factor is less than $`10`$, and it is independent of the initial Lorentz factor. Afterglow observations, a few seconds after the burst, can provide the missing information concerning the initial phase and prove the internal shock scenario. Such rapid observations are possible, in principle, with future missions (Kulkarni and Harrison, Private communication). ## 2 The Forward Shock The synchrotron spectrum from relativistic electrons that are continuously accelerated into a power law energy distribution is always given by four power law segments, separated by three critical frequencies: $`\nu _{sa}`$ the self-absorption frequency, $`\nu _c`$ the cooling frequency and $`\nu _m`$ the characteristic synchrotron frequency. Using the relativistic shock jump conditions and assuming that the electrons and the magnetic field acquire fractions $`ϵ_e`$ and $`ϵ_B`$ of the equipartition energy, we obtain: $$\nu _m=1.1\times 10^{19}\mathrm{Hz}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}(\frac{\gamma }{300})^4n_1^{1/2}.$$ (1) $$\nu _c=1.1\times 10^{17}\mathrm{Hz}\left(\frac{ϵ_B}{0.1}\right)^{-3/2}\left(\frac{\gamma }{300}\right)^{-4}n_1^{-3/2}t_s^{-2},$$ (2) $$F_{\nu ,\mathrm{max}}=220\mu \mathrm{Jy}D_{28}^{-2}\left(\frac{ϵ_B}{0.1}\right)^{1/2}\left(\frac{\gamma }{300}\right)^8n_1^{3/2}t_s^3$$ (3) $$\nu _{sa}=220\mathrm{G}\mathrm{H}\mathrm{z}\left(\frac{ϵ_B}{0.1}\right)^{6/5}\left(\frac{\gamma }{300}\right)^{28/5}n_1^{9/5}t_s^{8/5}$$ (4) These scalings generalize the adiabatic scalings obtained by Sari, Piran & Narayan (1998) to an arbitrary hydrodynamic evolution of $`\gamma (t)`$. For typical parameters, $`\nu _c<\nu _m`$, so fast cooling occurs. The spectrum of fast cooling electrons is described by four power laws: (i) For $`\nu <\nu _{sa}`$ self-absorption is important and $`F_\nu \propto \nu ^2`$. (ii) For $`\nu _{sa}<\nu <\nu _c`$ we have the synchrotron low energy tail $`F_\nu \propto \nu ^{1/3}`$. (iii) For $`\nu _c<\nu <\nu _m`$ we have the electron cooling slope $`F_\nu \propto \nu ^{-1/2}`$. (iv) For $`\nu >\nu _m`$ $`F_\nu \propto \nu ^{-p/2}`$, where $`p`$ is the index of the electron power law distribution. In the early afterglow, the Lorentz factor is initially constant. After this phase, the evolution can be of two types (Sari 1997). Thick shells, which correspond to long bursts, begin to decelerate with $`\gamma (t)\propto t^{-1/4}`$. Only later there is a transition to deceleration with $`\gamma (t)\propto t^{-3/8}`$. The light curves for such bursts can be obtained by substituting these scalings in equations 1-4. However, for these long bursts, the complex internal shocks GRB signal would overlap the smooth external shock afterglow signal. The separation of the observations into GRB and early afterglow would be rather difficult. 
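For quick estimates, Eqs. (1)-(3) can be wrapped as follows; this is merely a transcription of the scalings quoted above into code, with the same fiducial normalizations ($`ϵ_e=ϵ_B=0.1`$, $`\gamma =300`$, $`n=1`$ cm$`^3`$, $`t`$ in seconds) and an arbitrary example call.

```python
def forward_shock_breaks(eps_e=0.1, eps_B=0.1, gamma=300.0, n1=1.0, t_s=1.0, D28=1.0):
    """Eqs. (1)-(3): characteristic synchrotron frequency, cooling frequency
    and peak flux density for an arbitrary instantaneous Lorentz factor gamma."""
    nu_m = 1.1e19 * (eps_e / 0.1)**2 * (eps_B / 0.1)**0.5 * (gamma / 300.0)**4 * n1**0.5
    nu_c = 1.1e17 * (eps_B / 0.1)**-1.5 * (gamma / 300.0)**-4 * n1**-1.5 * t_s**-2
    f_max = 220.0 * D28**-2 * (eps_B / 0.1)**0.5 * (gamma / 300.0)**8 * n1**1.5 * t_s**3
    return nu_m, nu_c, f_max   # Hz, Hz, micro-Jansky

print(forward_shock_breaks(t_s=10.0))
```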
For thin shells, which correspond to short bursts, there is no intermediate stage of $`\gamma (t)\propto t^{-1/4}`$. There is a single transition, at the time $`t_\gamma =\left(3E/(32\pi \gamma _0^8nm_pc^5)\right)^{1/3}`$, from a constant velocity to self-similar deceleration with $`\gamma (t)\propto t^{-3/8}`$. The possible light curves are illustrated in Figure 1. As the initial afterglow peaks several dozen seconds after the GRB there should be no difficulty in detecting it. The detection of delayed emission which fits the light curves of Figure 1 would enable us to determine $`t_\gamma `$. Using $`t_\gamma `$ we could proceed to estimate the initial Lorentz factor: $$\gamma _0=240E_{52}^{1/8}n_1^{-1/8}\left(t_\gamma /10\mathrm{s}\right)^{-3/8}.$$ (5) If the second peak of GRB 970228, delayed by 35s, is indeed the afterglow rise, then $`\gamma _0\approx 150`$ for this burst. ## 3 The Reverse Shock and the Optical Flash There are many attempts to detect early optical emission and there is a good chance that this emission will be observed in the near future. A strong 5th magnitude optical flash would have been produced if the fluence of a moderately strong GRB, $`10^{-5}\mathrm{erg}/\mathrm{cm}^2`$, would have been released on a time scale of 10s in the optical band. Even a small fraction of this will be easily observed. It is important, therefore, to explore the expected optical emission from the GRB and the early afterglow. During the GRB and the initial emission from the forward shock the emission peaks in $`\gamma `$-rays, and only an extremely small fraction is emitted in the optical band. For example, the prompt optical flash from the GRB would be of 21st magnitude if the flux drops according to the synchrotron low energy tail of $`F_\nu \propto \nu ^{1/3}`$. A considerably stronger flux is obtained from the reverse shock. The reverse shock contains, at the time it crosses the shell, a comparable amount of energy to the forward shock. However, its effective temperature is significantly lower (typically by a factor of $`\gamma \approx 300`$) than that of the forward shock. The resulting peak frequency is therefore lower by $`\gamma ^2\approx 10^5`$. A more detailed calculation shows that the reverse shock frequency is $$\nu _m=1.2\times 10^{14}\mathrm{Hz}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}(\frac{\gamma _0}{300})^2n_1^{1/2}.$$ (6) The cooling frequency is similar to that of the forward shock, since both have the same magnetic field and the same Lorentz factor. Using the parameters obtained by Granot, Piran and Sari (1998) from the afterglow of GRB 970508, and using $`\gamma _0=300`$ we get for the reverse shock $`\nu _c=3\times 10^{16}`$Hz and $`\nu _m=3\times 10^{14}`$Hz leading to an 8th magnitude flash. With a higher initial Lorentz factor of $`\gamma _0=10^4`$ the flash drops to 13th magnitude. Inverse Compton cooling, if it exists, can reduce the flux by $`\sim 2`$ magnitudes, while self-absorption can influence only very short bursts with small surface area. Therefore, quite conservatively, the optical flash should be stronger than 15th magnitude and should soon be seen with modern experiments. The reverse shock signal is very short-lived. After the shock crosses the shell, no new electrons are injected and there is no emission above $`\nu _c`$. Moreover, $`\nu _c`$ drops fast with time as the shell’s material cools adiabatically. ## 4 Discussion 
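Eq. (5) gives the promised diagnostic directly; the snippet below evaluates both the scaling form and the underlying definition of $`t_\gamma `$, and reproduces the $`\gamma _0\approx 150`$ quoted for a 35 s delay. The cgs constants and the choice of sample inputs are ours.

```python
import numpy as np

def gamma0_from_tgamma(t_gamma_s, E52=1.0, n1=1.0):
    """Eq. (5): initial Lorentz factor from the observed afterglow rise time."""
    return 240.0 * E52**0.125 * n1**-0.125 * (t_gamma_s / 10.0)**-0.375

def gamma0_direct(t_gamma_s, E=1e52, n=1.0):
    """Same estimate from t_gamma = (3E / (32 pi gamma0^8 n m_p c^5))^(1/3)."""
    m_p, c = 1.67e-24, 3.0e10
    return (3.0 * E / (32.0 * np.pi * n * m_p * c**5 * t_gamma_s**3))**0.125

print(gamma0_from_tgamma(35.0), gamma0_direct(35.0))   # both close to 150
```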
## 4 Discussion The early afterglow multi-wavelength radiation could provide interesting and invaluable information on the extreme relativistic conditions that occur at this stage. The initial emission from the forward shock, which continues later as the observed afterglow signal, is in the $`\gamma `$-rays or X-rays. The reverse shock emission could have a strong, short-lived optical component, which we expect to be brighter than 15th magnitude. Some of the current basic ideas concerning the fireball models should be revised if such signals are not seen by new detectors that should become operational in the near future. ###### Acknowledgements. This research was supported by the US-Israel BSF 95-328 and by a grant from the Israeli Space Agency. R.S. thanks the Sherman Fairchild Foundation for support.
# A new conjecture extends the $`GM`$ law for percolation thresholds to dynamical situations Serge Galam<sup>1</sup><sup>1</sup>1e-mail: galam@ccr.jussieu.fr Laboratoire des Milieux Désordonnés et Hétérogènes, Tour 13, Case 86, 4 place Jussieu, 75252 Paris Cedex 05, France Nicolas Vandewalle<sup>2</sup><sup>2</sup>2e-mail:vandewal@gw.unipc.ulg.ac.be SUPRAS, Institut de Physique B5, Université de Liège, B-4000 Liège, Belgium Int. J. Mod. Phys. C 9 (1998) 667-671 ## Abstract The universal law for percolation thresholds proposed by Galam and Mauger ($`GM`$) is found to apply also to dynamical situations. This law depends solely on two variables, the space dimension $`d`$ and a coordinance number $`q`$. For regular lattices, $`q`$ reduces to the usual coordination number while for anisotropic lattices it is an effective coordination number. For dynamical percolation we conjecture that the law is still valid if we use the number $`q_2`$ of second nearest neighbors instead of $`q`$. This conjecture is checked for the dynamic epidemic model which considers the percolation phenomenon in a mobile disordered system. The agreement is good. keywords: percolation threshold — epidemic model — lattice — tree 1. Introduction Recently, a universal power law for both site and bond percolation thresholds was postulated by Galam and Mauger ($`GM`$) . The $`GM`$ formula is $$p_c^{GM}=p_0\left[(d-1)(q-1)\right]^{-a}d^b$$ (1) with $`d`$ being the space dimension and $`q`$ a coordinance variable. For regular lattices, $`q`$ is the lattice coordination number. The exponent $`b`$ is either equal to $`a`$ for bond percolation or to $`0`$ for site percolation. Three classes characterized by three different sets of parameters $`\{a;p_0\}`$ were found. The first class includes two-dimensional triangle, square and honeycomb lattices with $`\{a=0.3601;p_0=0.8889\}`$ for site percolation and $`\{a=0.6897;p_0=0.6558\}`$ for bond percolation. Two-dimensional Kagomé and all (hyper-)cubic lattices in $`3\le d\le 6`$ constitute the second class with $`\{a=0.6160;p_0=1.2868\}`$ and $`\{a=0.9346;p_0=0.7541\}`$ for site and bond respectively. The third class corresponds to high dimensions ($`d>6`$). There, $`b=2a-1`$ and $`p_0=2^{a-1}`$ with $`a=0.8800`$ for site and $`a=0.3685`$ for bond percolation. For hypercubic lattices ($`q=2d`$), the asymptotic limit leading term is identical to the Cayley tree threshold. For both site and bond percolation it is $$p_c=\frac{1}{z}$$ (2) where $`z`$ is the branching rate of the tree, i.e. $`z=q-1`$. In high dimensions, it has been reported that more $`GM`$ classes should be taken into account. Extension to anisotropic and aperiodic lattices has also been reported. The formula (1) remains valid. However, to preserve its high accuracy $`q`$ should be replaced by an effective non-integer value $`q_{eff}`$ which is different from a simple arithmetic average. In this letter, making a simple conjecture, we extend the validity of the $`GM`$ formula (Eq.(1)) to dynamical situations like epidemics or contagion models. The results obtained fit the available data well. 2. Epidemic models The epidemic model considers the Eden growth of a phase in a disordered medium with static impurities. In this model, a fraction $`x`$ of particles are randomly dispersed on a lattice. These particles act as hindrances for the random growth of a spreading phase. For $`x<x_c`$, the cluster grows forever, while for $`x>x_c`$ the growth of the cluster is stopped after a finite number of timesteps. 
This unblocked-blocked growth transition is closely related to a site percolation phenomenon since $`x_c=1-p_c`$. Recently, the dynamical epidemic model has been introduced in order to provide a phenomenological basis for the aggregation of mesoscopic impurities along crystal growth surfaces and interfaces. In the dynamic epidemic model, the particles are mobile hindrances for the random growth of a phase. When the growth front reaches a particle, the latter is supposed to move to an empty nearest neighboring site in order to minimize its contact with the spreading phase if such a move is possible. This rule is illustrated in Figure 1. This repulsive dynamical interaction leads to an aggregation phenomenon along the growth interface and leads further to some reorganization of the mobile medium. The threshold $`p_c`$ for dynamical situations is quite different from that of the static medium. The formation of growth instabilities along the propagating front has been recently reported to be the source of the organization process. The dynamic epidemic model has been numerically studied at $`d=2`$ and $`d=3`$. Moreover, an exact solution has been obtained on tree-like structures which are illustrated in Figure 2. The percolation thresholds for either static or dynamic epidemics are listed in Table I as well as the thresholds predicted by the $`GM`$ formula. 3. The conjecture Let us consider first a square lattice. As described above and illustrated in Figure 1, when a particle is reached by the interface, the particle tries to move to another neighboring empty site as in avalanche processes in sandpiles. If the neighborhood of the particle is completely occupied, the particle remains static and the cluster growth is pinned there. Intuitively, one can consider that the unblocked-blocked growth transition depends at least on the occupation of sites in the front neighborhood including the $`q_2`$ second neighboring sites (neighbors of neighbors). These should not be mistaken for next-nearest neighbors. The extended neighborhood of a site is shown in Figure 3. It is worth noticing that on regular lattices $$q_2=dq.$$ (3) Taking $`q_{eff}=q+q_2=12`$, as for percolation with next nearest interactions, the $`GM`$ law (Eq. (1), site first class) does not hold and does not reproduce the dynamical threshold, for instance $`p_c=0.44`$ for squares. Nevertheless, one can notice that taking $`q_{eff}=q_2=8`$ provides the value $`0.4411`$ for $`p_c`$ which is “exact” within the two digits of available data (see Table I). Using the same substitution, the universal law holds true also for the cubic ($`d=3`$) case for which $`q_2=18`$ and $`p_c=0.15`$. It yields (for site second class) $`p_c=0.1466`$ which is again “exact”. These results suggest the following conjecture: Percolation properties of a dynamic epidemic process are identical to the associated static ones with $`q_2`$ substituted for $`q`$. We are now in a position to check the above conjecture in the case of the Cayley tree for which the thresholds are known exactly. The relevant parameter is the branching rate $`z`$ instead of $`q`$. The parameter $`z`$ can be considered as the number $`\stackrel{~}{q}`$ of nearest neighboring sites towards the tree extremities. In fact, there is no distinction between directed percolation and simple percolation on a tree-like structure. The exact solution of the dynamic epidemic model on a Cayley tree is $$p_c=1/z^2.$$ (4) The number $`\stackrel{~}{q}_2`$ of second neighboring sites towards the leaves is $`z^2`$. 
Thus from Eq.(2), this theoretical result corroborates the above conjecture that $`q`$ should be replaced by $`q_2`$ in the $`GM`$ universal law. On the trees decorated with loops like those illustrated in Figures 2b and 2c the sites are non-equivalent. An effective number $`\stackrel{~}{q}_2`$ of second sites towards the leaves can be calculated by averaging $`\stackrel{~}{q}_2`$ over the various sites of the tree structure. Effective values of the number $`\stackrel{~}{q}_2`$ for trees decorated with loops are given in Table I. Again, considering $`\stackrel{~}{q}_2`$ instead of $`q`$ in Eq.(2) provides a good value for $`p_c`$ in the case of trees decorated with loops (see Table I). However, the agreement is not as good as for the square and cubic lattices. This discrepancy is to be put in parallel with the finding that for anisotropic lattices an effective number of neighbors should be used instead of the arithmetic average. 4. Conclusion In this letter we have proposed a mapping of the percolation threshold for a dynamical situation to a static one by making a simple conjecture with respect to the relevant neighboring sites. The conjecture states that the dynamical percolation case reduces to the static percolation one considering that the percolation threshold should include the effect of the second nearest neighbors only. This conjecture was then checked using the $`GM`$ universal law for percolation thresholds. The results are very convincing. It was also found to be satisfied in the Cayley tree case. More results on a wider range of lattices would allow a more definite check of our conjecture. Acknowledgements NV thanks the FNRS for financial support. Thanks to M.Ausloos for fruitful discussions.

| Lattice | Ref. | $`p_c`$ static | $`p_c^{GM}`$ static | $`p_c`$ dynamic | $`p_c^{GM}`$ dynamic | $`q`$ | $`q_2`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| square ($`d=2`$) | | 0.5928 | 0.5985 | 0.44 | 0.4411 | 4 | 8 |
| cubic ($`d=3`$) | | 0.3116 | 0.3115 | 0.15 | 0.1466 | 6 | 18 |
| Tree | Ref. | $`p_c`$ static | $`1/\stackrel{~}{q}`$ | $`p_c`$ dynamic | $`1/\stackrel{~}{q}_2`$ | $`\stackrel{~}{q}`$ | $`\stackrel{~}{q}_2`$ |
| Cayley tree | | $`1/z`$ | $`1/z`$ | $`1/z^2`$ | $`1/z^2`$ | $`z`$ | $`z^2`$ |
| tree (triangular loops, Fig. 2b) | | 1/2 | 1/2 | 0.151 | 0.1667 | 2 | 18/3 |
| tree (square loops, Fig. 2c) | | 0.597 | 0.625 | 0.269 | 0.2778 | 8/5 | 18/5 |

Table I — The thresholds $`p_c`$ for both static and dynamic epidemic models compared to the associated $`p_c^{GM}`$ thresholds from Eq.(1). All lattices studied up to now to our knowledge are listed with pertinent references. The number of nearest neighboring sites $`q`$ as well as the second neighborhood $`q_2`$ are also given. Trees are as those shown in Figure 2.
Figure captions
Figure 1 — Illustration of one growth step of the dynamic epidemic model. The growing phase is drawn in white, the mobile particles (hindrances) are drawn in black and the empty sites are shown in grey: (a) one empty site denoted by a cross in contact with the spreading cluster is selected at random; (b) the growth takes place there and the particle touched by the new unit jumps towards a neighboring site in order to reduce its contact with the cluster.
Figure 2 — Three different trees with branching rate $`z=2`$ as discussed in this paper: (a) Cayley tree, (b) tree decorated with triangular loops, and (c) tree decorated with square loops.
Figure 3 — Nearest neighbors and second neighbors on a square lattice. 
The $`q_2=8`$ second neighbors which play the fundamental role in the dynamic epidemic model are denoted in grey.
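As a quick numerical check (not part of the original letter), the snippet below evaluates Eq. (1) with the site-percolation class parameters quoted in the introduction, once with the coordination number $`q`$ and once with $`q_2=dq`$; it reproduces the four $`p_c^{GM}`$ entries of Table I for the square and cubic lattices.

```python
def p_c_gm(d, q, a, p0, b=0.0):
    """Galam-Mauger threshold, Eq. (1): p_c = p0 [(d-1)(q-1)]^(-a) d^b."""
    return p0 * ((d - 1) * (q - 1)) ** (-a) * d ** b

# Site-percolation classes quoted in the introduction
first_class  = dict(a=0.3601, p0=0.8889)   # 2d square, triangular, honeycomb
second_class = dict(a=0.6160, p0=1.2868)   # Kagome and hypercubic 3 <= d <= 6

# Static thresholds (q = nearest neighbors) vs. dynamic ones (q -> q2 = d*q)
print(p_c_gm(2, 4, **first_class))    # square,  static : ~0.5985
print(p_c_gm(2, 8, **first_class))    # square,  dynamic: ~0.4411
print(p_c_gm(3, 6, **second_class))   # cubic,   static : ~0.3115
print(p_c_gm(3, 18, **second_class))  # cubic,   dynamic: ~0.1466
```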
# Dynamics of Primordial Black Hole Formation ## I Introduction Primordial overdensities seeded, for instance, by inflation or topological defects may collapse to primordial black holes (PBHs) during early radiation-dominated eras if they exceed a critical threshold . This particular PBH formation process, which is examined in this paper, occurs when an initially super-horizon size region of order unity overdensity crosses into the horizon and recollapses. Among the potentially observable consequences of PBHs, should they be produced in cosmologically relevant numbers, are thermal effects due to the Hawking evaporation of small PBHs (manifested in the gamma ray background or as a class of very short gamma ray bursts ) or purely gravitational effects such as gravitational radiation of coalescing binary PBH systems or contribution of PBHs to the cosmic density parameter. Upper bounds on these signatures strongly constrain the spectral index of the fluctuation power spectrum on small scales . Recently, the possibility that stellar mass PBHs constitute halo dark matter has received attention in the context of the MACHO/EROS microlensing detections . It has been suggested that during the cosmological QCD phase transition, occurring at an epoch where the mass enclosed within the particle horizon, $`R_\mathrm{h}t`$, approximately equals one solar mass, PBH formation may be facilitated due to equation of state effects manifest in a reduction of the PBH formation threshold . Every quantitative analysis of the PBH number and mass spectrum requires knowledge of the threshold parameter, $`\delta _\mathrm{c}`$, (for the specific definition used here see below) separating perturbations that form black holes from those that do not, and the resulting black hole mass, $`M_{\mathrm{bh}}`$, as a function of distance from the threshold. In a simplified picture of the formation process, where hydrodynamical effects are only accounted for in a very approximate way, the universe is split into a collapsing region described by a closed Friedmann–Robertson–Walker (FRW) space–time and an outer, flat FRW universe. For a radiation dominated universe, it can be shown that this ansatz yields $`\delta _\mathrm{c}1/3`$, where $`\delta _\mathrm{c}`$ is evaluated at the time of horizon crossing . On dimensional grounds, the natural scale for $`M_{\mathrm{bh}}`$ is the horizon mass, $`M_\mathrm{h}R_\mathrm{h}^3`$, of the unperturbed FRW solution at the epoch when fluctuations enter the horizon. However, these estimates for $`\delta _\mathrm{c}`$ and $`M_{\mathrm{bh}}`$ are valid only within the limitations of the employed model which cannot account for the detailed nonlinear evolution of the collapsing density perturbations. In order to determine $`\delta _\mathrm{c}`$ and $`M_{\mathrm{bh}}`$ for various initial conditions, we performed one-dimensional, general relativistic simulations of the hydrodynamics of PBH formation. We studied three families of perturbation shapes chosen to represent generic classes of initial data, reflecting the lack of specific information about the distribution and classification of primordial perturbation shapes. Our numerical technique, adopted from a scheme developed by Baumgarte et al. , is sketched in Section II, followed by a description of the general hydrodynamical evolution of the collapse and the results for $`\delta _\mathrm{c}`$ (Section III) and a discussion of accretion after the PBH formation (Section IV). 
Defined as the excess mass within the horizon sphere at the onset of the collapse, we find $`\delta _\mathrm{c}0.7`$ for all three perturbation shapes. A numerical confirmation of the previously suggested power–law scaling of $`M_{\mathrm{bh}}`$ with $`\delta \delta _\mathrm{c}`$ , related to the well-known behavior of collapsing space–times at the critical point of black hole formation , is presented in Section V. In this framework, the PBH mass spectrum is determined by the dimensionless coefficient $`K`$ and the scaling exponent $`\gamma `$, such that $$M_{\mathrm{bh}}=KM_\mathrm{h}(\delta \delta _\mathrm{c})^\gamma .$$ (1) We provide numerical results for $`K`$ and $`\gamma `$ for the three perturbation families. These values may be used, in principle, to determine PBH mass functions as outlined in . To introduce our numerical approach and isolate the dependence of $`\delta _\mathrm{c}`$ and $`M_{\mathrm{bh}}`$ on the initial perturbation shape from the impact of the equation of state, we restrict the discussion here to the purely radiation-dominated phase of the Early Universe. In a separate publication, we will investigate the change of $`\delta _\mathrm{c}`$ before, during, and after the cosmological QCD phase transition. Two other groups have, to our knowledge, published results of numerical simulations of PBH formation in the radiation-dominated universe . Our work differs from theirs with regard to the numerical technique, the choice of initial conditions, and the analysis of the numerical data. Wherever possible and relevant, we compare our methods and results with those previously published. ## II Numerical technique The dynamics of collapsing density perturbations in the Early Universe are fully described by the general relativistic hydrodynamical equations for a perfect fluid, the field equations, the first law of thermodynamics, and a suitable equation of state. We use a simple radiation dominated equation of state, $`P=ϵ/3`$, where $`P`$ is pressure and $`ϵ`$ is energy density, as appropriate during most eras in the Early Universe. The assumption of spherical symmetry is well justified for large fluctuations in a Gaussian distribution , reducing the problem to one spatial dimension. For our simulations, we have chosen the formulation of the hydrodynamical equations by Hernandez and Misner as implemented by Baumgarte et al. (we omit restating the full system of equations but instead refer to the equations published by Baumgarte et al. by a capital “B” followed by the respective equation number ). Based on the original equations by Misner and Sharp , Hernandez and Misner proposed to exchange the Misner–Sharp time variable, $`t`$, with the outgoing null coordinate, $`u`$. The line element then reads (Eq. B27) $$ds^2=e^{2\mathrm{\Psi }}du^22e^\mathrm{\Psi }e^{\lambda /2}dudA+R^2d\mathrm{\Omega }^2,$$ (2) where $`e^\mathrm{\Psi }`$ is the lapse function, $`A`$ is the comoving radial coordinate, $`R`$ is circumferential radius, and $`d\mathrm{\Omega }`$ is the solid angle element (cf. Eq. B2). After the transformation, the hydrodynamical equations retain the Lagrangian character of the Misner–Sharp equations but avoid crossing into the event horizon of a black hole once it has formed. Covering the entire space–time outside while asymptotically approaching the event horizon, the Hernandez–Misner equations are perfectly suited to follow the evolution of a black hole for long times after its initial formation without encountering coordinate singularities. 
This allowed us, in principle, to study the accretion onto newly formed PBHs for arbitrarily long times (in contrast to earlier calculations ) and therefore predict final PBH masses. The Lagrangian form of the Hernandez–Misner equations allows the convenient tracking of the expanding outer regions in a comoving numerical reference frame. It also provides a simple prescription for the outer boundary condition, as explained below. The extremely low ratio of baryon number to energy density in the Early Universe requires a re-interpretation of the comoving radial coordinate, $`A`$, in Eq. (B1) and the comoving rest mass density, $`\rho _0`$, in (B3). We re-define $`\rho _0`$ as the number density of a conserved tracer particle with the purpose to define the comoving coordinate $`A`$ as the tracer particle number enclosed within $`R`$. The variable $`e`$ is then defined as the energy per tracer particle number density, such that the energy density is $`ϵ=e\rho _0`$. In the ultra-relativistic limit $`e1`$, allowing us to replace “$`1+e`$” with “$`e`$” in Eq.s B3, B4, B6, B14, and B38. This way, the Lagrangian coordinate $`A`$ can be scaled to order unity together with all other variables, which is desirable for reasons of numerical stability. Given the definition of the radial grid coordinate, a suitable discretization of $`A`$ must be found. Numerical accuracy dictates to deviate as little as possible from an equidistant grid partition lest numerical instabilities occurring on superhorizon scales severely constrain the grid size (see below). On the other hand, since $`\mathrm{\Delta }RR^2\mathrm{\Delta }A`$ in a constant density medium, spatial resolution is concentrated near the outer grid boundary and is worst near the origin (where it is needed most) in case of equidistant $`\mathrm{\Delta }A`$. As a compromise between accuracy and resolution, we use an exponentially growing cell size of the form $$\mathrm{\Delta }A_i=(1+\frac{𝒞}{N})\mathrm{\Delta }A_{i1},$$ (3) where $`𝒞`$ is a constant and $`N`$ is the total number of grid points. Based on the standard convergence tests for numerical resolution, we used $`N=500`$ and $`𝒞=12`$ for the results reported below. The canonical boundary conditions (B18 and B40) are imposed at the origin, while the outer boundary is defined to match the exact solution of the Friedmann equations for a radiation-dominated flat universe. Hence, the pressure follows the analytic solution $$P=P_0\left(\frac{\tau (A_\mathrm{N})}{\tau _0}\right)^2,$$ (4) where $`\tau (A_\mathrm{N})`$ is the proper time of the outermost fluid element (identified here with the FRW time coordinate, $`t_{\mathrm{FRW}}`$) and $`P_0`$ and $`\tau _0`$ are the initial values for pressure and proper time. The time metric (lapse) functions in (B1) and (B27) are fixed at $$e^\mathrm{\Phi }=e^\mathrm{\Psi }=1$$ (5) at the outer boundary in order to synchronize the coordinate times $`t`$ and $`u`$ with the proper time of an observer comoving with the outermost fluid element, $`\tau (A_\mathrm{N})`$, and thereby with $`t_{\mathrm{FRW}}`$ (note that B40 synchronizes $`u`$ with a stationary observer at infinity which is a meaningless concept in an expanding space–time). Owing to the presence of the curvature perturbation, the space–time converges to the flat FRW solution only asymptotically. Therefore, the accuracy of imposing Eq. (4) at the outer boundary is presumed to grow with the size of the computational domain, removing the grid boundary farther from the density perturbation. 
In particular, it is desirable to keep the boundary causally unconnected from the perturbed region for as long as achievable. The hydrodynamical evolution ensuing the collapse is highly dynamical for $`t100t_0`$ (Section III), where $`t_0`$ is the FRW time at the beginning of the simulation, corresponding to the light crossing time of a comoving distance of approximately $`9R_\mathrm{h}`$. Explicit numerical experiments showed good agreement of the numerical solution at the outer grid with the exact FRW solution for all relevant perturbation parameters if the grid reached out to $`R_{\mathrm{max}}9R_\mathrm{h}`$. Extending the grid to these large radii proved to be a non-trivial task for the Misner–Sharp part of the numerical scheme, needed to initialize the Hernandez–Misner computation, as will be outlined below. As the initial data is most naturally assigned on a spatial hypersurface at constant Misner–Sharp (or, equivalently, FRW) time $`t`$, one must transform the hydrodynamical and metric variables onto a null hypersurface in order to initialize the Hernandez–Misner equations. This can be done numerically in the way described by Baumgarte et al. : first, the initial conditions are given on a $`t=`$ const hypersurface and evolved using the Misner–Sharp equations. Simultaneously, the path of a light ray is followed from the origin to the grid boundary and the state variables on the path are stored. After the light ray has crossed the grid, the Misner–Sharp computation is terminated and the stored state values are used as initial data for the Hernandez–Misner equations. As a consequence, the Misner–Sharp calculation needs to be carried out until an initialization photon starting at the center reaches the outer grid boundary. For the aforementioned reasons, it is preferable to use a super-horizon size computational domain. Since the light travelling time over such a large grid is larger than the dynamical time for collapse to a PBH ($`t_0`$), a black hole already forms during the Misner–Sharp calculation before the initialization of the Hernandez–Misner grid is completed. In order to avoid a breakdown of the Misner–Sharp coordinate system inside the event horizon, the evolution of fluid elements is stopped artificially if curvature becomes large. More specifically, we found that the Misner–Sharp equations are numerically well-behaved if the inner grid boundary is defined as the innermost mass shell where the Misner–Sharp lapse function fulfills $`e^\mathrm{\Phi }0.2`$. The inner boundary conditions are henceforth chosen as the frozen-in state variables on this shell. Despite this modification of the evolution equations the final results of the Hernandez–Misner calculation are unaffected if the initialization photon is far away from the collapsing region at the time of the boundary re-definition, i.e., if the black hole forms at late times during the Misner–Sharp calculation. In all cases reported here, this condition is satisfied. The second complication that arises in the Misner–Sharp coordinate system is related to superhorizon scales. In a nearly flat expanding space–time the square of the coordinate velocity, $`U^2`$ (where $`U=e^\mathrm{\Phi }R/t`$), and the ratio of gravitational mass to radius, $`2m/R`$, grow with increasing $`R`$. Both terms cancel identically in an exact FRW universe for all $`R`$. 
On scales much larger than the horizon, however, they become much greater than unity, and thus numerical noise can lead to significant errors in the radial metric function, $`\mathrm{\Gamma }=(1+U^2-2m/R)^{1/2}`$ (cf. B12). The positive feedback of errors in $`\mathrm{\Gamma }`$, $`\rho _0`$ (B13), and $`m`$ (B6) leads to a numerical instability of the Misner–Sharp equations on superhorizon scales. It can be controlled by solving for $`\mathrm{\Gamma }`$, $`\rho _0`$, and $`m`$ at each time step by iteration, and by imposing a very restrictive allowed density change of $`\mathrm{\Delta }\rho _0/\rho _0\le 5\times 10^{-4}`$ per time step. None of the above mentioned problems exist in the Hernandez–Misner formulation by virtue of the time slicing along null surfaces: avoidance of the central singularity is guaranteed by the formation of an event horizon, and the superhorizon instability cannot occur because every grid point lies, by definition, on the horizon. Therefore, after the assignment of initial data is completed, the integration of the fluid equations in Hernandez–Misner coordinates is numerically stable for arbitrarily long times and on arbitrarily large spatial domains. In order to achieve reasonable accuracy, however, the time step size must be restricted to values much smaller than required by the Courant-Friedrichs-Lewy (CFL) condition. The reason is most likely found in the numerical integration of the lapse function, $`e^\mathrm{\Psi }`$, which is only first-order accurate. This problem is most important for collapse with initial conditions close to the threshold for black hole formation, which leads to the formation of very small black holes and gives rise to very strong space–time curvature. Decreasing the time step generally causes a decrease of the resulting black hole mass, $`M_{\mathrm{bh}}`$. At the numerical resolution chosen for this problem (see Eq. 3), the code is unable to follow the formation of black holes smaller than $`M_{\mathrm{bh}}\approx 0.1`$ in units of the initial horizon mass. An adaptive mesh algorithm may be necessary to resolve this problem. In agreement with studies of critical gravitational collapse, our experiments indicate that only the coefficient $`K`$ in Eq. (1) is affected by the time step variation, while the scaling exponent $`\gamma `$ appears very robust. Nevertheless, these problems disappear rapidly with distance from the threshold such that convergence for larger PBH masses was attained. In addition to the test-bed calculations reported by Baumgarte et al. , we verified the accuracy of the code including the modified outer boundary conditions by simulating the evolution of a flat unperturbed universe. Using the time step restriction described above, the numerical results for all hydrodynamical variables differ from the analytic FRW solution by less than $`10^{-3}`$. ## III Hydrodynamical evolution of collapsing fluctuations We have studied the spherically symmetric evolution of three families of curvature perturbations. Initial conditions are chosen to be perturbations in the energy density, $`ϵ`$, in unperturbed Hubble flow specified at horizon crossing. 
The first family of perturbations is described by a Gaussian-shaped overdensity that asymptotically approaches the FRW solution at large radii, $$ϵ(R)=ϵ_0\left[1+A\mathrm{exp}\left(\frac{R^2}{2(R_\mathrm{h}/2)^2}\right)\right].$$ (6) Here, $`R`$ is circumferential radius, $`R_\mathrm{h}=2t_0`$ is the horizon length at the initial cosmological time $`t_0`$, and $`ϵ_0=3/(32\pi t_0^2)`$, yielding $`M_\mathrm{h}=(4\pi /3)ϵ_0R_\mathrm{h}^3=t_0`$ for the initial horizon mass of the unperturbed space–time. In the absence of perturbations, cirumferential radius corresponds to what is commonly referred to as proper distance in cosmology, $`r_p=ar_c`$, where $`a`$ is the scale factor of the universe and $`r_c`$ is the comoving cosmic distance. The other two families of initial conditions involve a spherical Mexican-Hat function and a sixth order polynomial. These functions are characterized by outer rarefaction regions that exactly compensate for the additional mass of the inner overdensities, so that the mass derived from the integrated density profile is equal to that of an unperturbed FRW solution: $$ϵ(R)=ϵ_0\left[1+A\left(1\frac{R^2}{R_\mathrm{h}^2}\right)\mathrm{exp}\left(\frac{3R^2}{2R_\mathrm{h}^2}\right)\right],$$ (7) and $$ϵ(R)=\{\begin{array}{cc}ϵ_0\left[1+\frac{A}{9}\left(1\frac{R^2}{R_\mathrm{h}^2}\right)\left(3\frac{R^2}{R_\mathrm{h}^2}\right)^2\right]:\hfill & R<\sqrt{3}R_\mathrm{h}\hfill \\ ϵ_0:\hfill & R\sqrt{3}R_\mathrm{h}\hfill \end{array}$$ (8) The amplitude $`A`$ is a free parameter used to tune the initial conditions to sub- or supercriticality with respect to black hole formation. The shapes of all three perturbations are illustrated in Figure (1). The relevant dimensionless threshold parameter, $`\delta _\mathrm{c}`$, for the purpose of evaluating the cosmological abundance of PBHs is the energy overdensity in uniform Hubble constant gauge averaged over a horizon volume (i.e., synchronous gauge with uniform Hubble flow condition). It is equivalent to the additional mass-energy inside $`R_\mathrm{h}`$ in units of $`M_\mathrm{h}`$. We find similar values — $`\delta _\mathrm{c}=0.67`$ (Mexican-Hat), $`\delta _\mathrm{c}=0.70`$ (Gaussian), and $`\delta _\mathrm{c}=0.71`$ (polynomial) — for all three families of initial data in our study, suggesting that the value $`\delta _\mathrm{c}0.7`$ yields a more accurate estimate for the cosmological PBH mass function than the commonly employed $`\delta _\mathrm{c}1/3`$. In cases where a black hole is formed, we define its mass, $`M_{\mathrm{bh}}`$, as the gravitational mass, $`m`$, enclosed by the innermost shell that conforms to $`e^\mathrm{\Psi }10^{10}`$, the temporal evolution of all shells with smaller $`e^\mathrm{\Psi }`$ being essentially frozen in (with regard to proper time of a distant observer). Owing to the steep rise of $`e^\mathrm{\Psi }`$ at the event horizon, the exact choice of the cutoff value does not affect $`M_{\mathrm{bh}}`$ within the accuracy reported in this work. Unless otherwise specified, we henceforth quote $`M_{\mathrm{bh}}`$ in units of the initial horizon mass, $`M_\mathrm{h}`$, and proper time in multiples of the initial time, $`t_0`$. Figures (3), (3), and (5) illustrate generic features of the evolution of slightly supercritical perturbations for the three density perturbation families, respectively. The curves display the energy density, $`ϵ/ϵ_0`$, at constant proper time, $`\tau `$, for each mass shell ($`\tau `$ is given in multiples of $`t_0`$ as labeled). 
In Hernandez–Misner coordinates, the lack of a well defined global time variable corresponding to the cosmological FRW time at infinity requires this local time slicing. As described in , we integrate d$`\tau =e^\mathrm{\Psi }`$d$`u`$ and store $`\tau (A,u)`$ together with all other state variables. The curves are then created by plotting the energy density along the isosurfaces of $`\tau `$. The radial coordinate is the circumferential radius, scaled such that in the absence of a perturbation it may be associated with cosmic comoving radius. Further, the initial horizon size, $`R_\mathrm{h}=2t_0`$, is normalized to unity in the Figures. It is interesting to note that with this type of time slicing $`ϵ(\tau =\text{const})`$ may cease to be a single-valued function of the circumferential radius, $`R`$, in cases of strong curvature (cf. Figure 5). In particular, at the same proper time spheres containing larger baryon number (labeled by $`A`$) may have smaller circumferential radius than spheres containing smaller baryon number. In all cases shown in Figures (3),(3), and (5), a black hole with $`M_{\mathrm{bh}}0.37`$ forms. The hydrodynamical evolution of the three different perturbations exhibits strong similarities: initially, the central overdensity grows in amplitude while the outer underdensity, if present in the initial conditions, gradually widens and levels out. A black hole forms in the interior. Some time after the initial formation of an event horizon, material close to the PBH but outside the event horizon bounces and launches a compression wave traveling outward. This compression wave is connected to the black hole by a rarefaction region that evacuates the immediate vicinity of the black hole. The strength of the rarefaction differs significantly for the Gaussian perturbation shape and the mass compensated ones: while the latter display only a weak underdensity that quickly equilibrates, the former gives rise to a drop in energy density by three orders of magnitude. The bounce of material outside the newly formed black hole is a feature intrinsic only to black holes very close to the formation threshold. It effectively shuts off further accretion of material onto the newly formed PBH. As Figure (7) demonstrates, no bounce occurs if the initial conditions are sufficiently far above the threshold. Here, a large black hole ($`M_{\mathrm{bh}}=2.75`$) forms whose event horizon reaches further out, encompassing regions where the pressure gradient is smaller, preventing pressure forces from overcoming gravitational attraction. Slightly below the PBH formation threshold, a bounce occurs whose strength is proportional to the initial perturbation amplitude (Figure 5), indicating that the fluid bounce is strongest for perturbations very close to the threshold. It is likely that previous studies failed to observe this phenomenon because their initial conditions were insufficiently close to the black hole formation threshold. Their numerical simulations may also not have followed the hydrodynamical evolution for long enough after the initial formation of the PBH. ## IV Accretion Accretion onto PBHs and their resulting growth in mass has been a highly debated subject since the suggestion that PBHs may grow in proportion with the cosmological horizon mass . Both analytic and previous numerical studies came to the conclusion that the growth of PBH masses by ongoing accretion is negligible except, possibly, for very contrived initial data for the perturbations. 
Our results generally confirm this statement for small PBHs, but we find noticeable differences between collapse simulations that exhibit the fluid bounce and those that do not (Figure 7). The rarefaction following the outgoing density wave efficiently cuts off the flow of material into the black hole. Comparing Figures (7) and (5), it is recognized that the secondary phase of mass growth for the Gaussian shape calculation may correspond to the rise of the second wave crest of the strongly damped density oscillation at the black hole event horizon. This second rise in density is absent in the weaker bounces of the Mexican-Hat and polynomial-shaped perturbation simulations. The large ($`M_{\mathrm{bh}}=2.75`$) black hole, on the other hand, continues to grow at a slowly decreasing rate for long times without gaining a considerable amount of mass in the process. Based on these results, we expect accretion to be insignificant for the determination of $`M_{\mathrm{bh}}`$, at least for the types of perturbations investigated here. ## V Scaling Relations for PBH Masses Choptuik’s discovery of critical phenomena in gravitational collapse near the black hole formation threshold started an active and fascinating line of research in numerical and analytical general relativity (for recent reviews, see ). For a variety of matter models, it was found that the dynamics of near-critical collapse exhibits continuous or discrete self-similarity and power law scaling of the black hole mass with the offset from the critical point (Eq. 1). In particular, Evans and Coleman found self-similarity and mass scaling in numerical experiments of a collapsing radiation fluid. They numerically determined the scaling exponent $`\gamma 0.36`$, followed by a linear perturbation analysis of the critical solution by Koike et al. that yielded $`\gamma 0.3558`$. Until recently, it was believed that entering the scaling regime requires a degree of fine-tuning of the initial data that is unnatural for any astrophysical application. It was noted that fine-tuning to criticality occurs naturally in the case of PBHs forming from a steeply declining distribution of primordial density fluctuations, as generically predicted by inflationary scenarios. In the radiation-dominated cosmological epoch, the only difference with the fluid collapse studied numerically by Evans and Coleman is the asymptotically expanding, finite density background space–time of a FRW universe. Assuming that self-similarity and mass scaling are consequences of an intermediate asymptotic solution that is independent of the asymptotic boundary conditions, Eq. (1) is applicable to PBH masses, allowing the derivation of a universal PBH initial mass function . Furthermore, cosmological constraints based on evaporating PBHs are slightly modified as a consequence of the production of not only horizon-size PBHs, as previously assumed, but the additional production of smaller, sub-horizon mass black holes at each epoch . Figure (8) presents numerical evidence that mass scaling according to Eq. (1) occurs in the collapse of near-critical black holes in an asymptotic FRW space–time, and therefore applies to PBH formation. All three perturbation families give rise to scaling solutions with a scaling exponent $`\gamma 0.36`$. Only the smallest six black holes of all families were included to obtain the numerical best fit quoted in the figure captions. 
On larger mass scales, deviations from mass scaling with a fixed exponent become noticeable; in all cases, $`\gamma `$ tends to increase slightly for larger $`M_{\mathrm{bh}}`$. Owing to resolution limitations discussed in Section (II), we were unable to compute the formation of smaller black holes than the ones shown in Figure (8). To linear order, the scaling relation (1) is invariant under transformations of the control parameter $`\delta `$ up to a change of the coefficient $`K`$. This was tested explicitly by choosing different definitions of $`\delta `$ (the perturbation amplitude $`A`$, the total excess mass for the Gaussian-shaped perturbation, and the excess mass within the horizon volume) for the numerical fit and obtaining identical values for $`\gamma `$. ## VI Conclusions In the general framework of primordial black hole (PBH) formation from horizon-size, pre-existing density perturbations, we numerically solved the spherically symmetric general relativistic hydrodynamical equations in order to study the collapse of radiation fluid overdensities in an expanding Friedmann–Robertson–Walker (FRW) universe. The algorithm is adopted from an implementation of the Hernandez–Misner coordinates by Baumgarte et al. . It allows the convenient computation of black hole formation and superhorizon scale dynamics by virtue of its time coordinate, chosen to be constant along outgoing null surfaces. One of the parameters entering the statistical analysis of cosmological consequences and constraints due to the possible abundant production of PBHs is the threshold parameter, $`\delta _\mathrm{c}`$, corresponding to the amplitude of the smallest perturbations that still collapse to a black hole. It generally depends on the specific perturbation shape at the time of horizon crossing. We studied three generic families of energy density perturbations, one with a finite total excess mass with respect to the unperturbed FRW solution and two mass compensated ones. Defining the control parameter, $`\delta `$, as the total excess gravitational mass of the perturbed space–time with respect to the unperturbed FRW background enclosed in the initial horizon volume, our calculations yield a similar threshold value for all three fluctuation shape families, $`\delta _\mathrm{c}0.7`$. We investigated features of collapsing space–times very close to the threshold of black hole formation embedded in an expanding FRW solution. If the initial perturbation is smaller than $`\delta _\mathrm{c}`$, it grows until pressure forces at the origin cause the fluid to bounce, creating an outgoing pressure wave followed by a rarefaction, but no black hole. Initial conditions slightly exceeding the threshold, on the other hand, lead to the formation of a very small black hole at the origin; however, the pressure gradient immediately outside of the event horizon is still sufficiently steep to force the fluid to bounce. The launch of a compression wave can be observed in simulations of all three perturbation shapes. It is strongest in the case of a pure initial overdensity, parameterized here as a Gaussian curve, where the density behind the pressure wave drops by three orders of magnitude. Increasing $`\delta `$ to values significantly above $`\delta _\mathrm{c}`$, the bounce becomes weaker and finally disappears, signaling the failure of the pressure gradient at the event horizon to overcome gravitational attraction. This behavior has important consequences for the accretion onto PBHs immediately after their formation. 
If a bounce occurs, the inner rarefaction shuts off accretion almost completely before any significant amount of material has been accreted. On the other hand, black holes that form from sufficiently large overdensities, where a bounce is suppressed, may accrete at a slowly decreasing rate for a long time. Since most PBHs created from collapsing primordial density fluctuations with a steeply declining amplitude distribution form very close to $`\delta _\mathrm{c}`$ , we conclude that accretion is unimportant for the estimation of PBH masses. This is in agreement with previous studies , albeit for different reasons. Finally, the previously suggested scaling relation between $`M_{\mathrm{bh}}`$ and $`\delta \delta _\mathrm{c}`$, based on the analogy with critical phenomena observed in near-critical black hole collapse in asymptotically non-expanding space–times , was confirmed numerically for an asymptotic FRW background. For the smallest black holes in our investigation, the scaling exponent is $`\gamma 0.36`$, which is identical to the non-expanding numerical and analytical result within our numerical accuracy. The parameter $`K`$ of Eq. (1), needed to evaluate the PBH initial mass function derived in , was found to range from $`K2.4`$ to $`K12`$. We wish to thank T. Baumgarte for providing the original version of the hydrodynamical code, and T. Abel, A. Olinto, and V. Katalinić for helpful discussions. Part of this research was supported by an Enrico-Fermi-Fellowship.
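To make the use of the scaling relation concrete, the following minimal sketch evaluates $`M_{\mathrm{bh}}=KM_\mathrm{h}(\delta -\delta _\mathrm{c})^\gamma `$ with $`\delta _\mathrm{c}=0.7`$ and $`\gamma =0.36`$ as found above; the choice $`K=3`$ is only a placeholder inside the reported range of roughly 2.4 to 12, and the masses are expressed in units of the initial horizon mass.

```python
def pbh_mass(delta, delta_c=0.7, K=3.0, gamma=0.36, M_h=1.0):
    """Near-critical PBH mass, Eq. (1): M_bh = K * M_h * (delta - delta_c)^gamma,
    in units of the initial horizon mass M_h; returns 0 below threshold.
    K = 3 is an assumed placeholder within the fitted range."""
    if delta <= delta_c:
        return 0.0
    return K * M_h * (delta - delta_c) ** gamma

# Illustrative values of the control parameter near the threshold
for delta in (0.705, 0.72, 0.8, 1.0):
    print(delta, pbh_mass(delta))
```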
# A Fundamental Test of the Nature of Dark Matter ## 1 Introduction The nature of dark matter (DM) poses one of the most outstanding problems in astrophysics. There are essentially two alternative hypotheses. The dark matter may be microscopic, consisting of weakly interacting particles (WIMPs) such as SUSY neutralinos or axions, or else be macroscopic, compact objects such as primordial black holes (PBHs), brown dwarfs or old white dwarfs (MACHOs). Big Bang nucleosynthesis (BBN) puts a bound on the density in baryonic matter of $`\mathrm{\Omega }_bh^2\lesssim 0.02`$ (or $`\lesssim 0.03`$ if one allows for inhomogeneous BBN), but the density of PBHs is not well constrained. It is possible that some hitherto unknown mechanism allows for DM that is dominated by macroscopic objects. For these reasons direct observational constraints on macroscopic DM of any density are very important. We propose a simple test for distinguishing macroscopic from microscopic dark matter. In this letter we consider only the opposing hypotheses that one or the other dominates. If the DM is microscopic, the clustered component, in halos, lenses high redshift supernovae (SNe). If the DM is macroscopic, most light beams do not intersect any matter - no Ricci focusing - and the SN brightness distribution is skewed to an extent that can be quantitatively distinguished from halo lensing. ## 2 Properties of the Magnification Probability Distribution Function In this paper we consider the lensing of distant supernovae by discrete “lenses”. A lens is the smallest unit of mass that acts coherently for the purpose of lensing. This could be a galaxy halo or it could be a high mass dark matter candidate such as a PBH. We make the distinction between macroscopic and microscopic DM more quantitative by considering two mass scales. The first is defined by the requirement that the projected density be smooth on the scale of the angular size of the source. This gives a maximum mass of $$m_s\simeq 6\times 10^{10}\text{g}\left({\displaystyle \frac{\lambda _s}{\text{AU}}}\right)^3\mathrm{\Omega }_oh^2f$$ (1) where $`\lambda _s`$ is the physical size of the source and $`f`$ is a geometric factor of order unity. If the unit of DM is smaller than this it is microscopic DM. Another, larger mass scale is defined by the requirement that the angular size of the source be small compared to the Einstein ring radius so that it can be considered a true point source if $$m\gtrsim 5\times 10^{-7}\text{M}_{\odot }\left({\displaystyle \frac{10^3\text{Mpc}}{D_s}}\right)\left({\displaystyle \frac{\lambda _s}{\text{AU}}}\right)^2f$$ (2) where $`D_s`$ is the angular size distance to the source. If a lens is much below this mass the high magnification tail of the distribution function will be suppressed and the rare high magnification events will become time-dependent (Schneider & Wagoner (1987)). The measured velocity of the expanding photosphere of a type Ia SN is around $`1.0`$–$`1.4\times 10^4\text{ km s}^{-1}`$ (Patat et al. (1996)), which means $`\lambda _s\approx 40`$–$`57\,\mathrm{\Delta }t`$ AU per week. The SN reaches maximum light in approximately one week. 
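To make the two mass scales concrete, the sketch below simply evaluates Eqs. (1) and (2) for a photospheric size of $`\lambda _s\approx 50`$ AU (roughly one week of expansion at the quoted velocities). The values $`\mathrm{\Omega }_o=0.3`$, $`h=0.65`$, $`D_s=10^3`$ Mpc and $`f=1`$ are illustrative assumptions, not numbers fixed by the text.

```python
def m_smooth_grams(lambda_s_au, omega_o=0.3, h=0.65, f=1.0):
    """Eq. (1): largest lens mass for which the projected density is smooth
    on the angular scale of the source (grams). omega_o and h are assumed."""
    return 6e10 * lambda_s_au ** 3 * omega_o * h ** 2 * f

def m_point_source_msun(lambda_s_au, D_s_mpc=1e3, f=1.0):
    """Eq. (2): smallest lens mass for which the source is effectively a point
    source compared to the Einstein ring (solar masses)."""
    return 5e-7 * (1e3 / D_s_mpc) * lambda_s_au ** 2 * f

# A type Ia SN photosphere about one week after explosion, lambda_s ~ 50 AU
print(m_smooth_grams(50.0))        # ~9.5e14 g   -> anything heavier counts as macroscopic
print(m_point_source_msun(50.0))   # ~1.3e-3 M_sun -> point-source treatment valid above this
```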
The background cosmology will be taken to be the standard Friedman-Lemaître-Robertson-Walker (FLRW) with the metric $`ds^2=-dt^2+a(t)^2\left(d\chi ^2+D(\chi )^2d\mathrm{\Omega }^2\right)`$ where the comoving angular size distance is $`D(\chi )=\{R\mathrm{sinh}(\chi /R),\chi ,R\mathrm{sin}(\chi /R)\}`$ ($`R=|H_o\sqrt{1-\mathrm{\Omega }_o-\mathrm{\Omega }_\mathrm{\Lambda }}|^{-1}`$) for the open, flat and closed global geometries respectively. Another relevant angular size distance is the Dyer-Roeder or empty-beam distance, $`\stackrel{~}{D}(\chi )`$ (Dyer & Roeder (1974); Kantowski (1998), note difference in notation), which is the angular size distance for a beam that passes through empty space and experiences no shear. ### 2.1 Magnification by a single lens Consider a single lens at a fixed coordinate distance from Earth. The path of the light is described by either of two lensing equations: $`𝐫_{\perp }=𝐲-\alpha (𝐲,\stackrel{~}{D}_l,\stackrel{~}{D}_s)`$ (3) $`𝐫_{\perp }=[1-\kappa _b(\chi _s)]𝐲-\alpha (𝐲,D_l,D_s)`$ (4) where $`𝐫_{\perp }`$ is the position of the lens relative to the undeflected line of sight to the source, $`𝐲`$ is the position of its image in the same plane and $`\alpha `$ is the deflection angle times the angular size distance. In equation (4) a negative background convergence, $`\kappa _b`$, is included to account for the lack of background mass density that is assumed when $`D`$ is used instead of $`\stackrel{~}{D}`$. Two magnifications, $`\stackrel{~}{\mu }`$ and $`\mu `$, can be defined using equations (3) and (4) respectively. The requirement that the two lensing equations agree on the true size of an object results in the relation $`\stackrel{~}{D}(\chi )=[1-\kappa _b(\chi )]D(\chi )`$. The probability that the lens is located between $`r_{\perp }`$ and $`r_{\perp }+dr_{\perp }`$ is $`p(r_{\perp })dr_{\perp }\propto r_{\perp }dr_{\perp }`$. If the lens is spherically symmetric and the magnification is a monotonic function of $`r_{\perp }`$ the expression for the magnification can be inverted (at least numerically) to get $`r_{\perp }(\mu ,D,D_s)`$. Then the probability of a lens causing the magnification $`1+\delta \mu `$ can be found by changing variables. Lenses might also have properties such as mass, scale length, etc. which need to be averaged. For the case of a point mass lens the total magnification of both images is given by $`\stackrel{~}{\mu }=(\widehat{r}^2+2)/\left(\widehat{r}\sqrt{\widehat{r}^2+4}\right)`$; $`\widehat{r}\equiv r_{\perp }/R_e(m,D,D_s)`$. The Einstein radius of the lens is given by $`R_e^2=4Gm\stackrel{~}{D}_l\stackrel{~}{D}_{ls}/\stackrel{~}{D}_s`$. The single lens distribution function is then $$p(\delta \stackrel{~}{\mu })d\delta \stackrel{~}{\mu }\propto \left[(1+\delta \stackrel{~}{\mu })^2-1\right]^{-3/2}d\delta \stackrel{~}{\mu }.$$ (5) The probability in (5) is not normalizable; it diverges at small $`\delta \stackrel{~}{\mu }`$. This can be handled by introducing a cutoff in either $`\delta \stackrel{~}{\mu }`$ space or in $`r_{\perp }`$. The nature of this cutoff is not important as long as it is at sufficiently small $`\delta \stackrel{~}{\mu }`$ or large $`r_{\perp }`$. This will be clear when the total magnification distribution due to multiple lenses is considered. 
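The point-mass relations above are easy to realize numerically. The sketch below evaluates the two-image magnification $`\stackrel{~}{\mu }(\widehat{r})`$ and draws $`\delta \stackrel{~}{\mu }`$ for impact parameters distributed as $`p(r_{\perp })dr_{\perp }\propto r_{\perp }dr_{\perp }`$ out to a cutoff radius, one way of regularizing the non-normalizable distribution (5); the value $`r_{max}=30`$ Einstein radii is an assumption made only for illustration.

```python
import numpy as np

def point_lens_magnification(r_hat):
    """Total magnification of both images of a point-mass lens,
    with r_hat the impact parameter in Einstein radii."""
    return (r_hat ** 2 + 2.0) / (r_hat * np.sqrt(r_hat ** 2 + 4.0))

def sample_single_lens(n, r_max, rng=None):
    """Draw delta_mu_tilde for lenses placed uniformly in area out to r_max
    Einstein radii; this realizes Eq. (5) with a cutoff at large r_perp."""
    rng = np.random.default_rng(rng)
    r_hat = r_max * np.sqrt(rng.random(n))           # p(r) dr proportional to r dr
    return point_lens_magnification(r_hat) - 1.0     # delta_mu_tilde

d_mu = sample_single_lens(100000, r_max=30.0, rng=0)
print(d_mu.mean(), d_mu.max())   # small mean excess, occasional large magnifications
```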
If the dark matter consists of microscopic particles clumped into halos, the entire halo will act as a single lens. In this case the Ricci focusing contribution to the magnification strongly dominates over shear distortions produced by mass outside of the beam (Holz & Wald (1998); Premadi, Martel & Matzner (1998)) and is then a function of only the local dimensionless surface density, $`\kappa (𝐲)`$. Furthermore the lensing of the great majority of SNe will be quite weak, which allows us to confidently make the linear approximation: $`\delta \mu =2[\kappa (𝐲,D_l,D_s)+\kappa _b]`$. This assumption has been well justified by many authors and will be confirmed by results in §2.2. For the purposes of this paper it will suffice to use a simple model for the surface density of halos. We use models with surface densities given by $$\mathrm{\Sigma }(y_{\perp })=\frac{V_c^2}{2Gy_{\perp }}\left[\left(\frac{y_{\perp }}{r_s}\right)^2+1\right]^{-1}$$ (6) This model behaves like a singular isothermal sphere out to $`y_{\perp }\sim r_s`$ where it is smoothly cut off. In the following calculations, each halo is assumed to harbor a galaxy. At all redshifts a Schechter luminosity function fit to local galaxies is assumed with $`\alpha =-1.07`$ and $`\varphi ^{*}=0.01(1+z)^3h^3\text{ Mpc}^{-3}`$. The luminosities are then related to the circular velocity, $`V_c`$, by the local Tully-Fisher relation, $`V_c=V_{*}(L/L_{*})^{0.22}`$ where $`V_{*}=200\text{ km s}^{-1}`$. The scale lengths are related to the luminosity through $`r_s=r_{*}(L/L_{*})^{1/2}`$ with $`r_{*}=220\text{ kpc}`$. The precise values used for these parameters do not have a significant effect on the results of this paper. ### 2.2 Total Magnification The total magnification of a source includes contributions from all the lenses surrounding the light path. To find the true path connecting a source to us, the lensing equation must be solved with multiple deflections (see Schneider, Ehlers & Falco (1992)). The magnifications due to different lens planes are in general nonlinearly coupled. However, if no more than one of the lenses produces a deflection that is not very weak, the coupling between lenses can be ignored and their magnifications, $`\delta \mu `$ or $`\delta \stackrel{~}{\mu }`$, will add linearly. This is a good approximation for the vast majority of light paths in realistic models. The validity of this assumption will be justified by the results and is further investigated in Metcalf (1999). Furthermore numerical simulations and analytic arguments show that for both kinds of DM it is a good approximation to take the lenses to be uncorrelated in space (see Holz & Wald (1998); Metcalf 1998b). If in addition we take the lenses’ internal properties to be uncorrelated, the probability that the total magnification, $`\delta \stackrel{~}{\mu }_s`$, of a point source is between $`\delta \stackrel{~}{\mu }_s`$ and $`\delta \stackrel{~}{\mu }_s+d\delta \stackrel{~}{\mu }_s`$ is $$P(\delta \stackrel{~}{\mu }_s)d\delta \stackrel{~}{\mu }_s=d\delta \stackrel{~}{\mu }_s\int \prod _{i=1}^{N}\left[d\delta \stackrel{~}{\mu }_i\,p(\delta \stackrel{~}{\mu }_i)\right]\,\delta \left(\delta \stackrel{~}{\mu }_s-\underset{i=1}{\overset{N}{\sum }}\delta \stackrel{~}{\mu }_i\right)$$ (7) where $`\delta \stackrel{~}{\mu }_i`$ is the contribution of the $`i`$th lens. The magnification $`\delta \mu _s`$ is defined as the deviation of the luminosity from its mean value. As a result, the mean of the distribution $`P(\delta \mu _s)`$ must vanish.<sup>2</sup><sup>2</sup>2The actual mean angular size distance should be slightly larger than the FLRW value because galaxies obscure some sources. Galaxies are presumably correlated with high density regions through which the magnification would be above average were they transparent. 
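A minimal Monte Carlo realization of Eq. (7), under the same linear-addition and uncorrelated-lens assumptions, is sketched below; the number of lenses per line of sight and the impact-parameter cutoff are arbitrary placeholders rather than values taken from the paper, so the sketch only illustrates the qualitative shape (a mode below the mean with a long high-magnification tail).

```python
import numpy as np

def delta_mu_one_lens(n, r_max, rng):
    """delta_mu_tilde of a single point-mass lens with impact parameter drawn
    uniformly in area out to r_max Einstein radii (cf. Eq. (5))."""
    r = r_max * np.sqrt(rng.random(n))
    return (r ** 2 + 2.0) / (r * np.sqrt(r ** 2 + 4.0)) - 1.0

def total_magnification(n_sources, n_lenses, r_max=30.0, seed=0):
    """Monte Carlo realization of Eq. (7): the magnifications of N independent
    lenses along each line of sight are added linearly; n_lenses and r_max
    are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    d_mu = delta_mu_one_lens(n_sources * n_lenses, r_max, rng)
    d_mu_s = d_mu.reshape(n_sources, n_lenses).sum(axis=1)
    return d_mu_s - d_mu_s.mean()       # shift so the mean vanishes, mimicking delta_mu_s

d = total_magnification(20000, n_lenses=200)
print(np.percentile(d, [5, 50, 95]))    # skewed: mode below zero, long positive tail
```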
This zero-mean condition, combined with the requirement that both magnifications agree on the true size of a source, results in the expression $`1-\kappa _b(\chi )=\langle \stackrel{~}{\mu }(\chi )\rangle ^{1/2}`$. In this way the value of $`\kappa _b(\chi )`$ can be found by calculating the mean of (7) numerically, and a consistency check of the calculations can be made by comparing the results with the explicit values for $`D(\chi )`$ and $`\stackrel{~}{D}(\chi )`$. These values agree to a few percent, which is consistent with the uncertainty introduced by the discrete nature of the numerical calculation in the power law tail of the distribution. The minimum magnification, $`\delta \mu _{min}`$, in the single lens distribution is set low enough that the resulting total distribution is independent of the cutoff. Figure 1 shows some examples of histograms made by producing random values $`\delta \stackrel{~}{\mu }_i`$ drawn from the single lens distributions and then adding them to get the total magnification. The macroscopic DM distributions shown in figure 1 are independent of the lens mass and peak well below their mean and near the empty-beam solutions (corresponding to $`\delta \mu =-0.21`$, $`-0.12`$ and $`-0.084`$) because in these cases most lines of sight do not come very close to any lens. The probability that there are two lenses which individually give magnifications greater than $`\delta \mu `$ becomes appreciable only below the peak. This supports our approximation that whenever the lensing is strong it is dominated by one lens and the coupling between lenses is small at this redshift. In addition we have compared our results with the numerical simulations of Holz & Wald (1998) and found excellent agreement. ## 3 Distinguishing Dark Matter Candidates The apparent luminosity of a SN, $`l_{ob}`$, after lensing can be expressed in terms of either of the two magnifications, $`l_{ob}=\mu l=\stackrel{~}{\mu }l/\langle \stackrel{~}{\mu }\rangle `$. We wish to infer via the measured luminosities of a set of SNe, each located at a different redshift, from which distribution the magnifications were drawn and in this way surmise which DM candidate is most likely. To give some sense of the magnitude of this effect, the differences in magnitudes between the average and the empty-beam solutions at $`z=1`$ are: $`0.25\text{ mag}`$ for $`\mathrm{\Omega }_o=1`$, $`0.14\text{ mag}`$ for flat $`\mathrm{\Omega }_o=0.3`$ and $`0.10\text{ mag}`$ for open $`\mathrm{\Omega }_o=0.3`$. Let us denote the probability of getting a data set, $`\{\delta \mu \}`$, given a model - either microscopic or macroscopic DM - as $`P(\{\delta \mu \}|model)=\prod _iP(\delta \mu _i|model)d\delta \mu _i`$ where the product is over the observed SNe. This probability can be calculated numerically from the probability distributions discussed in §2.2. Because of Bayes’ theorem we know that the ratio of these two probabilities is equal to the relative likelihood of the models being correct, the odds, given a data set. 
It is convenient to modify the odds into the statistic $`\mathcal{L}_p\equiv {\displaystyle \frac{1}{N_{SN}}}\mathrm{ln}\left[{\displaystyle \frac{\int d\mathrm{\Omega }_od\mathrm{\Omega }_\mathrm{\Lambda }p(\mathrm{\Omega }_o,\mathrm{\Omega }_\mathrm{\Lambda })P\left(\{\delta \mu \}|\text{ macroDM,noise}\right)}{\int d\mathrm{\Omega }_od\mathrm{\Omega }_\mathrm{\Lambda }p(\mathrm{\Omega }_o,\mathrm{\Omega }_\mathrm{\Lambda })P\left(\{\delta \mu \}|\text{ halos,noise}\right)}}\right].`$ (8) where $`p(\mathrm{\Omega }_o,\mathrm{\Omega }_\mathrm{\Lambda })`$ is the prior distribution for the cosmological model based on independent information or prejudice. The measured $`\mathcal{L}_p`$ is expected to be large if DM is macroscopic and smaller if DM is microscopic or nonexistent. For the left hand plot in figure 2, five thousand simulated data sets were created; $`\mathcal{L}_p`$ was calculated for each of them and their cumulative distributions plotted. The noise included in the simulation originates from both the intrinsic distribution of SN luminosities, which can presently be corrected to $`0.12\text{ mag}`$, and the observational noise, presently an additional $`0.08\text{ mag}`$. For the left hand plot the noise is taken to be Gaussian-distributed in magnitudes with a standard deviation of $`0.16\text{ mag}`$ except for the dot-dashed curves which have $`\mathrm{\Delta }m=0.2\text{ mag}`$. The cosmology is fixed in this plot, i.e. $`p(\mathrm{\Omega }_o,\mathrm{\Omega }_\mathrm{\Lambda })`$ is a $`\delta `$-function. $`\mathcal{L}_p`$ can be calculated for a given data set and compared to this plot to determine its significance. It can be seen here that for 51 SNe (solid curve) at $`z=1`$ the two distributions overlap at the $`4\%`$ level, i.e. $`96\%`$ of the time one of the DM candidates can be ruled out at better than the $`96\%`$ confidence level. One of the advantages of $`\mathcal{L}_p`$ is that it is close to Gaussian distributed with a mean that is independent of the number of SNe observed. In this way, once the cosmology and noise model are fixed, the value of $`\mathcal{L}_p`$ is a direct prediction of the kind of DM. The middle plot in figure 2 illustrates the importance of some possible systematic uncertainties that arise from not knowing precisely the distribution of the noise. The solid curves are the same as in the left hand plot. The dotted curve is the extreme case where the noise is actually Gaussian distributed in magnification (there is a low magnitude tail), but $`\mathcal{L}_p`$ is calculated under the same assumptions as in the left hand plot. The dashed line in this plot is the case where the standard deviation is overestimated to be $`\mathrm{\Delta }m=0.2\text{ mag}`$ but is really $`\mathrm{\Delta }m=0.16\text{ mag}`$. These errors in the noise model do not destroy the efficacy of the test, but they could be important if a long tail exists in the intrinsic distribution of luminosities and they become more important for smaller $`\mathrm{\Omega }_o`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. The right hand plot in figure 2 addresses the question of differentiating between DM candidates without assuming specific values for the cosmological parameters, thereby making the conclusion cosmology independent. Here the prior is taken to be $`p(\mathrm{\Omega }_o,\mathrm{\Omega }_\mathrm{\Lambda })=\delta (1-\mathrm{\Omega }_o-\mathrm{\Omega }_\mathrm{\Lambda })`$ within a range in $`\mathrm{\Omega }_o`$ ($`\mathrm{\Delta }\mathrm{\Omega }_o=0`$, $`0.1`$ and $`0.2`$) centered on $`0.3`$ and zero otherwise. 
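The statistic of equation (8) is simply a per-supernova log-likelihood ratio. A minimal sketch of its evaluation for a fixed cosmology (so that the prior integrals collapse) is given below; in practice the two probability densities would be the noise-convolved magnification distributions of §2.2, whereas here they are left as user-supplied callables and the toy Gaussians are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import norm

def L_p(delta_mu, p_macro, p_halo):
    """Per-SN log-likelihood ratio (eq. 8) for a fixed cosmology.

    delta_mu : array of measured magnification deviations, one per SN
    p_macro, p_halo : callables giving the probability density of a single
        measurement under the macroscopic-DM and halo models, noise included.
    """
    delta_mu = np.asarray(delta_mu)
    return np.mean(np.log(p_macro(delta_mu)) - np.log(p_halo(delta_mu)))

# Illustrative use with two toy Gaussians of different widths
p_macro = norm(loc=0.0, scale=0.20).pdf
p_halo = norm(loc=0.0, scale=0.17).pdf
data = norm(loc=0.0, scale=0.20).rvs(size=51, random_state=1)
print("L_p =", L_p(data, p_macro, p_halo))
```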
The simulated data is the same here as for the solid curves in the two left hand plots. However, the integrations in (8) would be prohibitively time consuming if the entire magnification distribution function were calculated for each trial cosmology. To simplify the calculation without losing much of the test’s effectiveness we use approximate, analytic test distribution functions. For the macroscopic DM case we use (5) with the low magnification cutoff, which ensures that it gives the correct mean. Comparison of this approximation with the full multi-lens distribution shows that it is a good approximation especially for low $`\mathrm{\Omega }_o`$. For the microscopic DM/halo case we approximate the distribution as a Gaussian with an appropriate width (see Metcalf 1998a ). This plot shows that not only is this simplified calculational technique adequate, but that one does not need to assume a precise cosmological model to differentiate between DM candidates. Increasing the width of the prior beyond $`\mathrm{\Delta }\mathrm{\Omega }_o=0.2`$ does not make much difference. The reason for this is that if the assumed cosmological parameters are significantly different from the true ones the distribution will be shifted to an extent that it is no longer consistent with the data. This shift would be confused with a lensing effect if the two kinds of distributions, illustrated in figure 1, were translations of each other, but they are not, even after noise is added. For the two DM cases, the modes of the magnification distributions follow different $`m`$–$`z`$ relations, but their means are the same. For a fixed redshift, it is the distribution of luminosities about the mean that distinguishes the two cases. For open models ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) it is more difficult to differentiate the DM candidates, but even in this case with 51 SNe at $`z=1`$ and $`\mathrm{\Omega }_o=0.3`$ we expect to get better than $`90\%`$ confidence at least $`90\%`$ of the time. If $`\mathrm{\Omega }_o=0.1`$, BBN constraints just barely allow for all DM being made of baryonic objects. In this case bounds similar to those shown in figure 2 for 51 SNe can be achieved with 200 SNe. However, the means of the $`\mathcal{L}_p`$ distributions are closer together in this case, making the test more susceptible to systematic errors in the assumed noise model. The power of lensing to differentiate DM candidates comes mostly from its ability to identify macroscopic DM. A positive detection of the lensing by microscopic DM halos will take more SNe, as will constraining the precise fraction of DM in macroscopic form, unless correlations between SN luminosities and foreground galaxies are utilized (Metcalf 1998b ; Metcalf (1999)). ## 4 Discussion One concern in implementing the test described here is the possibility that type Ia SNe and/or their galactic environments evolve with redshift. This is also a major concern in cosmological parameter estimation from SNe. So far there is no indication that the colors or spectra systematically change with redshift (Perlmutter et al. (1997), Riess et al. (1998)). Since the evolution of the magnification distribution is determined by cosmology, it is in principle possible to make an independent test for systematic evolution in the distribution of SN luminosities. Microscopic DM does not need to be clustered for this test to work. The clustering is added to make the calculations realistic. 
Clustering the microscopic DM to a greater or lesser extent would affect our results quantitatively, but the test would still be viable in more extreme cases. We conclude that if the assumptions we have made about the noise levels in future SN observations remain reasonable, on the order of $`50`$–$`100`$ SNe at $`z\sim 1`$ should suffice to settle a fundamental question: whether the major constituent of extragalactic DM is microscopic particles or macroscopic objects. We would like to thank D. Holz for providing the results of his simulations for the purposes of comparison.
no-problem/9901/astro-ph9901263.html
ar5iv
text
# A Spectroscopic Catalog of 10 Distant Rich Clusters of Galaxies ## 1. Introduction The change with redshift observed in the proportion of star-forming galaxies in the cores of rich clusters was uncovered over twenty years ago, by Butcher & Oemler (BO, 1978, 1984), but it remains one of the clearest and most striking examples of galaxy evolution. Considerable effort has gone into acquiring photometric information that would elucidate the physical processes active in distant clusters and their effects on the evolution of both the star-forming (Lavery & Henry 1994; Lubin 1996; Rakos & Schombert 1995; Rakos, Odell & Schombert 1997) and passive galaxies (Aragón-Salamanca et al. 1993; Stanford, Eisenhardt & Dickinson 1995, 1998; Smail et al. 1998). Further impetus has been provided by observations of the recent transformation of the S0 population of clusters (Dressler et al. 1997), which may allow a closer connection to be drawn between the galaxy populations of distant clusters and the evolutionary signatures found in their local Universe counterparts (Caldwell & Rose 1997; Bothun & Gregg 1990). However, it was the advent of spectroscopic surveys of the distant cluster populations (e.g. Dressler & Gunn 1983, 1992, DG92; Couch & Sharples 1987, CS87; Barger et al. 1996; Abraham et al. 1996; Fisher et al. 1998) which uncovered the real breadth of the changes in galaxies in these environments, including several spectral signatures of evolutionary change, such as evidence for a strong decline in the star-formation rates of many cluster galaxies in the recent past. The advent of high spatial resolution imaging with the Hubble Space Telescope (HST ) provided a further breakthrough, giving morphological information on the galaxies in these distant clusters. This could be used to link the evolution of stellar populations in the galaxies with the evolution of their structure, in order to understand how the various galaxy types we see in the local universe came to be. Pre- and Post-refurbishment HST observations by two groups (Couch et al. 1994, 1998; Dressler et al. 1994; Oemler et al. 1997) were used in early attempts to correlate spectral evolution with morphological/structural data, and to provide some insight into the mechanisms that might be driving the strong evolution in the cluster galaxy population. These two programs were extended from Cycle-4 into the “MORPHS” project, which accumulated post-refurbishment WFPC2 images for 11 fields in 10 clusters at $`z=0.37`$–0.56, viewed at a time some 2–4 $`h^1`$ billion years before the present day.<sup>9</sup><sup>9</sup>9We use $`q_{}=0.5`$ and $`h=1`$, where $`h=H_{}/100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. For this geometry 1 arcsec is equivalent to 3.09 $`h^1`$ kpc for our lowest redshift cluster and 3.76 $`h^1`$ kpc for the most distant. The photometric and morphological galaxy catalogs from these images were presented in Smail et al. (1997b, S97), while the data have also been used to study the evolution of the early-type galaxies within the clusters, using both color (Ellis et al. 1997) and structural information (Barger et al. 1998), the evolution of the morphology-density relation of the clusters (Dressler et al. 1997) and the masses of the clusters from weak lensing (Smail et al. 1997a). The aim of this paper is to combine the morphological information available from our HST images with detailed star-formation properties of the cluster galaxies derived from targeted spectroscopic observations. 
To this end we have used over 27 clear, dark nights over the past 4 years on the Palomar 5.1-m (P200),<sup>10</sup><sup>10</sup>10The Hale 5-m of the Palomar Observatory is owned and operated by the California Institute of Technology. 4.2-m William Herschel Telescope (WHT)<sup>11</sup><sup>11</sup>11The William Herschel Telescope of the Observatorio del Roques de los Muchachos, La Palma, is operated by the Royal Greenwich Observatory on behalf of the UK Particle Physics and Astronomy Council. and the 3.5-m New Technology Telescope (NTT)<sup>12</sup><sup>12</sup>12Based in part on observations collected at the European Southern Observatory, La Silla, Chile. to assemble a large catalog of spectroscopic data on galaxies in these clusters. We combine these new observations with previously published spectroscopy from DG92 and present spectroscopic observations of a total of 424 cluster members, of which 204 have morphologies from our HST imaging, as well as 233 field galaxies (71 with HST morphologies). In addition, we have analyzed all of the spectra to provide equivalent width measurements on a uniform system for the entire sample. The spectral catalogs, including line strength and color information, as well as the reduced spectra themselves in FITS format, are available at the AAS web site. A more detailed analysis of the spectroscopic data presented here will be given in Poggianti et al. (1998, P98). A plan of the paper follows. We start by discussing the observations and their reduction in §2. In §3 we then give the details of the redshift measurements, as well as our analysis to quantify the strengths of spectral features and information about our spectral classification scheme based upon these. We then present the spectral properties of galaxies in the catalog and relate these to the morphologies of the galaxies from our HST images in §4, before discussing our results in §5. Finally in §6 we list the main conclusions of this work. ## 2. Observations and Reduction ### 2.1. Selection of Spectroscopic Targets The new spectroscopic observations discussed here were targeted at determining the membership of the numerous distorted and irregular galaxies revealed by our HST WFPC2 images of the clusters, as well as gaining a more complete understanding of the star-formation properties of the general cluster population. With these aims, the object selection is closer to that employed by DG92, than the magnitude-limited selection criteria of CS87 and Barger et al. (1996). The latter approach has some claim to making the subsequent analysis simpler, especially when the sample is selected in the near-IR. However, it is a very inefficient method for studying the faint, blue cluster members as it produces samples dominated by passive spheroidal cluster members. We chose instead to base our object selection upon galaxy morphology within the region covered by our WFPC2 imaging, while being approximately magnitude-limited outside that area (selected from ground-based $`r`$ or $`i`$ CCD material to limits of $`r22`$ and $`i21`$). We note at this point that two of the cluster fields, A 370 Field 2 and Cl 0939$`+`$47 Field 2, lie outside of the central regions of their respective clusters (although we do also have observations of the core regions as well). The difference in the galaxy density between the fields should be kept in mind in the following analysis, although we will highlight such selection effects for individual figures when they are discussed below. 
Modelling of the sample selection for the entire spectroscopic catalog is dealt with in more detail in P98. ### 2.2. Spectroscopic Observations The spectroscopic observations discussed in this paper were undertaken with a variety of facilities over the period 1993–1997. We list the instruments and telescopes employed and the total number of nights used in Table 1. The basic details of the 10 clusters targeted in this study are listed in Table 2, this includes the mean cluster redshift, the one dimensional velocity dispersion ($`\sigma _{cl}`$, see §3.2), the redshift range used to define cluster membership ($`\mathrm{\Delta }z`$), the field center and the HST WFPC2 filters used in the observations. The new spectra presented here are typically of high quality due to both the long exposure times employed in our observations and the combination of the high efficiency of the multi-object spectrographs and the large aperture of the telescopes used. We give in Table 3 the logs of the observing runs for the various telescopes. We list the mask identification, the dates of the observations, the total exposure time and the number of objects extracted from each mask (N). The slit width typically used was 1.5 arcsec, with slits between 10–20 arcsec long. The exact size of the region on the slit used to extract the galaxy spectrum depended upon the relative signal to noise of the galaxy spectrum, but varied between 1.1–8.4 arcsec for the COSMIC spectra with a mean length of $`3.9\pm 1.2`$ arcsec. At the median redshift of the clusters in our catalog, the spectra thus sample a physical scale of $`(5\times 13)h^1`$ kpc. The exact details of the extraction and reduction of the spectra depends upon the instrument and set-up used. However, the basic steps were the same for all the data and we outline the procedures used for both the COSMIC and WHT/NTT data. The raw frames were debiased using the over-scan regions on the chip, before being trimmed. A two dimensional flatfield was constructed by dividing the flatfield exposure by a low-order fit in the dispersion direction. The data frame was then divided by this normalized flatfield, this served to correct for the pixel-to-pixel response of the detector. The sequence of data frames for each mask taken on a single night were then checked for spatial offsets between the exposures arising from flexure in the spectrograph (these are typically only $`<0.2`$ pixels for COSMIC in the course of a night). If necessary the exposures were shifted in the spatial and/or dispersion direction to align them and then combined with a cosmic-ray rejection algorithm using the IRAF task imcombine. This produced a two dimensional image of the mask exposure clean of cosmic ray events. These frames were then geometrically remapped to align the spectra along the rows of the detector. This step is necessary to remove the distortion of the spectra on the detector introduced by the spectrograph optics. The distortion is only a large effect for objects in slits near the edge of COSMIC’s large $`13.7^{}\times 13.7^{}`$ field of view, although aligning the spectra also helps when tracing some of the faintest objects. The distortion of the spectra are mapped using the positions of the emission lines in the arc exposure taken after every science exposure. The positions of objects in each slit on the remapped frame, as well as regions of clear sky surrounding them, were then defined interactively using the IRAF package apextract. 
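The role of the object and sky windows defined in this last step, in the sky subtraction and weighted extraction described below, can be illustrated with a short numpy sketch operating on a single rectified slit. This is a simplified stand-in for the IRAF apextract optimal extraction actually used; the array shapes, noise model and window indices are chosen purely for illustration.

```python
import numpy as np

def extract_1d(slit, obj_rows, sky_rows, read_noise=5.0, gain=1.0):
    """Very simplified sky subtraction + inverse-variance weighted extraction.

    slit     : 2-D array (spatial rows x dispersion columns), in counts
    obj_rows : slice selecting the rows containing the object
    sky_rows : slice (or index array) selecting clear-sky rows
    """
    sky = np.median(slit[sky_rows, :], axis=0)           # sky level per column
    clean = slit - sky                                    # sky-subtracted slit
    var = np.maximum(slit, 0.0) / gain + read_noise**2    # crude variance model
    w = 1.0 / var[obj_rows, :]                            # inverse-variance weights
    # weighted mean over the object rows (a crude stand-in for optimal extraction)
    return np.sum(clean[obj_rows, :] * w, axis=0) / np.sum(w, axis=0)

# Toy example: 30 spatial rows x 1000 wavelength pixels
rng = np.random.default_rng(0)
slit = rng.normal(100.0, 5.0, size=(30, 1000))   # "sky" background
slit[14:18, :] += 40.0                           # a faint object
spectrum = extract_1d(slit, slice(14, 18), slice(0, 10))
print(spectrum.mean())
```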
The exact position of the object within the slit was traced in the dispersion direction and fitted with a low-order polynomial to allow for atmospheric refraction. The spectra were then sky-subtracted and extracted using optimal weighting to produce one dimensional spectra. The arc exposures associated with each science exposure were remapped and extracted in exactly the same manner (although with no sky-subtraction) and these were used to determine the wavelength calibration for the science exposure. We estimate our wavelength scale is good to 0.2Å rms. Finally, the one dimensional spectra were smoothed to the instrumental resolution, $`8`$Å, and rebinned to 10Å per pixel to make them more manageable. The spectra obtained with COSMIC have not been flux calibrated. The WHT and NTT spectra have been reduced using the LEXT package, purposely written for reducing LDSS–2 spectra, and the MIDAS software package. What follows is a brief description of the reduction procedure generally adopted. A number of twilight and dome flatfields, and several arc frames were obtained for each mask, as well as numerous bias frames and long–slit spectra of standard stars for flux calibration (at least one star per night). The raw frames were first debiased and then divided by the corresponding normalized flatfield. They were then calibrated in wavelength with the arcs frames obtained either with a CuAr or HeArNe combination of lamps. The sky–subtraction step was performed with an interactive choice of the spatial limits of the spectrum, which was then extracted summing the counts weighted with a Gaussian. The long–slit stellar spectrum was reduced in a similar way as the target spectra and a response function was derived by the comparison with a tabulated spectrum. Each spectrum was flux–calibrated in $`F_\nu `$ by dividing for this response function. In the case of the WHT and NTT spectra each exposure of a given mask was reduced and calibrated separately, before all the spectra of a given galaxy were coadded; no smoothing or rebinning was applied. The full digital catalog of FITS spectra collected for this program is distributed in electronic form on the AAS web site. These spectra are also available from: http://www.ociw.edu/$``$irs. ## 3. Spectroscopic Analysis The full catalog of objects observed spectroscopically in the 10 clusters is given in Tables 4 (the complete tables are included on the AAS web site as well as being available from http://www.ociw.edu/$``$irs). This has been split into “Cluster” and “Field” samples as described below. The tables list not only the spectral information on the galaxies, but also any available morphological and photometric data from S97 and DG92. A key to the various parameters and the format of the tables are given in Table 5. We now describe in more detail the measurement of some of the spectral parameters listed in Tables 4. ### 3.1. Spectral Measurements The quality of the spectra, both in terms of signal-to-noise and sky-subtraction, was visually assessed by AD for all of the spectra presented. The spectra are graded on a 4–point range, with $`q=1`$ signifying the best and $`q=4`$ the worst quality. Of the complete catalog 17% have $`q=1`$, 47% with $`q2`$ and 89% are $`q3`$. Spectra with $`q3`$ have sufficient signal to noise (S/N) for not only measurement of a redshift, but also to quantify the strength of any spectral features present. 
From the continuum regions around the \[Oii\]$`\lambda `$3727 and H$`\delta `$ lines we estimate median S/N of 40.2 ($`q=1`$), 28.3 ($`q=2`$) and 19.7 ($`q=3`$), with lower limits to the S/N of 20.9, 10.6 and 4.6 respectively for these three quality classes. Repeated observations suggest that the redshifts of $`q=1`$ and $`q=2`$ cases are correct at a confidence of greater than 98%, and that $`q=3`$ cases are correct at a confidence of greater than 90%. In contrast, those spectra with $`q=4`$ are of sufficient S/N to provide only a redshift, which may be uncertain in a significant number of cases. Redshifts were measured from the spectra interactively using purpose-written software that compares the wavelengths of redshifted absorption and emission lines with features in the spectra. Whenever possible we used a number of features to estimate the redshift, and only in a very small number of cases is a redshift based on only a single feature — these instances are noted in the comments in Tables 4. We list in column 24 of Tables 4 the main features used to identify the galaxy redshifts. For conciseness we have used the following abbreviations to identify the lines: Babs, Balmer absorption lines; Ha, H$`\alpha `$; Hb, H$`\beta `$; Hd, H$`\delta `$; He, H$`ϵ`$; Heta, H$`\eta `$; Hg, H$`\gamma `$; Hth, H$`\theta `$; Hz, H$`\zeta `$; G, G-band; H&K, Ca H or K; Mg, Mg-B; Na, Na-D; OII, \[Oii\]$`\lambda `$3727; OIII, \[Oiii\]$`\lambda `$4959,5007; bk, 4000Å break; MgII, Mgii$`\lambda `$2799; CIII, Ciii\]$`\lambda `$1909; CIV, Civ$`\lambda `$1549; FeI, Fei$`\lambda `$5268; NII, \[Nii\]$`\lambda `$6583; SII, \[Sii\]$`\lambda `$6716,6731. The strength of emission and absorption features in the spectra were measured using purpose-written software, allowing the positioning of the continuum to be defined interactively. We give the restframe equivalent widths (EW) for \[Oii\]$`\lambda `$3727 and H$`\delta `$ in columns 5 and 6 of Table 4A and 4B, in all instances a line seen in emission is given a negative value and is quoted in Å. The presence and strength of these lines is used in the spectral classification scheme discussed in §3.2. If other lines in the spectrum were measurable we list their EW in the comments. We give line strengths for not only those galaxies observed for this work, but also those from the early survey of DG92. The D4000 measurements have been similarly placed on a consistent system. These are measured using wavelength intervals as defined in Dressler and Shectman (1987). The COSMIC data shared a common relation of counts to flux, but were not flux calibrated per se. A multiplicative correction of 1.34 to convert the measured D4000 to true D4000 for these data was derived by comparing the COSMIC spectra of repeated objects with the equivalent flux calibrated DG92 spectra. This procedure, though imperfect, generates reasonable and consistent results, as shown by multiple COSMIC observations of the same galaxies. We have a total of 31 repeat observations, both internally within the datasets from a single telescope, and between telescopes. We find median rms scatters of $`\sigma (z_{\mathrm{COSMIC}}z_{\mathrm{DG92}})=0.0018`$ ($`N=14`$), $`\sigma (z_{\mathrm{COSMIC}}z_{\mathrm{WHT}})=0.0009`$ ($`N=2`$) and $`\sigma (z_{\mathrm{COSMIC}}z_{\mathrm{COSMIC}})=0.0005`$ ($`N=7`$) for those spectra with $`q3`$, and no systematic offsets between any of the individual datasets: $`<z_{\mathrm{COSMIC}}z_{\mathrm{DG92}}>=0.0007`$, $`<z_{\mathrm{COSMIC}}z_{\mathrm{WHT}}>=0.0009`$. 
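The redshift determination itself amounts to comparing observed line centroids with redshifted rest wavelengths, as described above. A trivial sketch of the underlying arithmetic follows; the rest wavelengths are standard values, but the observed centroids are invented for illustration.

```python
import numpy as np

REST = {"[OII]": 3727.0, "CaK": 3933.7, "CaH": 3968.5,
        "Hdelta": 4101.7, "G-band": 4304.4}

def redshift_from_lines(observed):
    """observed: dict of feature name -> measured centroid in Angstroms.
    Returns the mean redshift and the scatter between features."""
    z = np.array([observed[k] / REST[k] - 1.0 for k in observed])
    return z.mean(), z.std()

# e.g. a galaxy at z ~ 0.40 (centroids invented for illustration)
obs = {"[OII]": 5218.0, "CaK": 5507.5, "Hdelta": 5742.5}
zbar, scatter = redshift_from_lines(obs)
print(f"z = {zbar:.4f} +/- {scatter:.4f}")
```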
We therefore conclude that there are no significant offsets between the redshifts from the different datasets and hence we are confident that we can include all the observed objects in our analysis. Finally, we quantified the detectability of \[Oii\] and H$`\delta `$ in our spectra. This enabled us to derive the lower limits on the strength of these spectral features below which we would not have identified them. Achieving this aim was not straightforward because the code that best measured the equivalent widths, which is based on a Gaussian line-fitting program written by Paul Schechter, does not perform well when the lines are weak or undetectable. For this reason, when we measured the strengths of features in those galaxies where the feature was not clearly seen, we by necessity had to measure equivalent widths using the standard technique of obtaining the continuum level from straddling continuum bands, and measuring the decrement or increment in signal relative to the continuum in an interval containing the feature. We made such measurements of \[Oii\] and H$`\delta `$ EW for all COSMIC spectra with qualities $`q\le 3`$ of cluster members in Cl 0939$`+`$47 and Cl 0024$`+`$16, a total of 79 galaxies. The intervals are, again, as defined in Dressler and Shectman (1987). For weak, but measurable, cases the line-fitting and flux-summing techniques give equivalent results, though for strong absorption lines, in particular, the latter seems to underestimate the strength of the feature, apparently by allowing the wings of the line to lower the continuum level. We believe, however, that the two scales for measuring equivalent widths are interchangeable for the purpose of looking for weak features. The results of these tests are shown in Fig. 1a and Fig. 1b, where we have plotted the equivalent widths as a function of signal-to-noise ratio in the continuum bands straddling the feature. In Fig. 1a we show that the galaxies that were designated by inspection as emission line types all have \[Oii\] EW stronger than $`3`$Å, while those designated as having no emission lines (spectral types: k, k+a, or a+k) have \[Oii\] EW weaker than $`4`$Å. In fact, the latter are consistent with non-detections: for 37 non-emission line members, the median EW is +0.4Å with quartiles of $`-1.0`$ to $`+2.6`$. There is only a weak trend with signal-to-noise ratio. We conclude from these data that we are complete for \[Oii\] stronger than $`5`$Å, with a high level of completeness down to $`3`$Å. In other words, even at the modest signal-to-noise ratios of these spectra, none of the galaxies classified as non-emission types are likely to have emission at greater than the $`3`$Å level, and certainly none have emission stronger than $`5`$Å (this limit corresponds to “absent” in Table 6). In Fig. 1b we show a similar diagram for the same sample, this time for H$`\delta `$. Because it is weaker and in absorption, H$`\delta `$ is a more difficult feature to measure; this is apparent from the stronger trend with signal-to-noise ratio. However, as for \[Oii\], the separation of those galaxies which are designated by inspection as having moderate Balmer line strengths (k+a, a+k, and e(a), see §3.3) from the non-Balmer galaxies (k, e(c), and e(b) types) is confirmed by the objective measurements. The boundary is around 2–3 Å, below which point we are unable, except at high S/N ($`>50`$), to discern the difference between the presence or absence of H$`\delta `$. 
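The flux-summing estimator just described (continuum from straddling bands, decrement or increment summed over the feature window) is a short numpy routine; the sketch below assumes a wavelength-calibrated, linearly sampled spectrum, and the window boundaries are illustrative rather than the exact Dressler & Shectman (1987) intervals.

```python
import numpy as np

def equivalent_width(wave, flux, line_win, blue_win, red_win):
    """EW in the same units as `wave`; positive for absorption,
    negative for emission (the convention used in Tables 4)."""
    def band(w0, w1):
        return (wave >= w0) & (wave <= w1)
    cont = 0.5 * (flux[band(*blue_win)].mean() + flux[band(*red_win)].mean())
    m = band(*line_win)
    dlam = np.median(np.diff(wave))
    return np.sum((cont - flux[m]) / cont) * dlam

# Toy spectrum with a weak absorption feature near restframe H-delta
wave = np.arange(4000.0, 4200.0, 2.0)
flux = np.ones_like(wave) - 0.3 * np.exp(-0.5 * ((wave - 4101.7) / 5.0) ** 2)
print(equivalent_width(wave, flux, (4082, 4122), (4030, 4080), (4122, 4170)))
```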
We conclude from these data that we are complete above equivalent widths of +5Å, and mostly complete above +3Å. It is worth commenting that some of the points with large negative equivalent widths for H$`\delta `$ arise from strange continuum levels, rather than from the feature seen in emission (although there is at least one clear case of H$`\delta `$ in emission, a rare phenomena among luminous galaxies). ### 3.2. Cluster Membership As was noted above, Table 4 is split into two parts on the basis of whether a galaxy is classed as a “Cluster” member or “Field”. To accomplish this we define redshift ranges for the various clusters; these ranges are purposefully chosen to be large to ensure that we retain any galaxies in the large-scale structure surrounding the clusters, while at the same time minimizing the contamination by field galaxies. In Fig. 2 we show the redshift distributions for the individual cluster fields; in each panel the inset provides a more detailed view of the velocity distribution close to the cluster mean. The bin size in these plots has been arbitrarily chosen and may artificially enhance or suppress the visibility of any structures within the clusters. We list the resulting mean redshift, restframe velocity dispersion and redshift range defining each cluster in Table 2. We reiterate that the velocity dispersions are likely to be overestimates of the true dispersion of the well-mixed cluster population. We also list in Table 2 the number of member galaxies in our catalog for each cluster. Using these definitions our catalog contains a total of 424 cluster members and 233 field galaxies. The redshift distribution for all galaxies classed as field is shown as the open histogram in Fig. 3; the galaxies with HST morphologies are shown as the filled histogram. The median redshift of the whole field sample is $`<z>=0.42`$, while for the morphological sub-sample it is slightly higher at $`<z>=0.46`$ (Fig. 3). These values are very similar to the median redshift of our 10 clusters, $`<z>=0.44`$, allowing us to easily compare the broad properties of the cluster and field samples. A total of 20 stars were observed (all in either the flanking fields or from the earlier DG92 observations); these are included at the bottom of Table 4b, but we do not discuss them further. In Fig. 3 we may be seeing some evidence for a deficit in the total field redshift distribution, between $`z0.4`$–0.6, which would result from the inclusion of a few field galaxies in the cluster catalog. This would include galaxies in the supercluster environment, if any, in which the clusters reside, or truly unassociated galaxies relatively far from the cluster but within the wide velocity limits imposed by the cluster’s velocity dispersion. To estimate the extent of this effect we use two approaches. Firstly, a conservative upper limit on the deficit comes from linearally extrapolating the trends of number versus redshift in the field at $`z<0.35`$ and $`z>0.60`$ to limit the likely number of field galaxies in the intervening redshift range. From this we estimate that there should be $`<160`$ field galaxies in the range $`z=0.3`$–0.6, compared to the observed number of 92, giving an upper limit on the deficit of $`70`$ galaxies, or $`<7`$ per cluster. 
Alternatively, using the regions where the redshift limits of the cluster and field samples overlap between different clusters we estimate the contamination from random, unrelated field galaxies is of the order of $`1.0\pm 0.7`$ galaxies per cluster in our largest velocity range. We conclude therefore that the contamination from galaxies unrelated to the cluster, or its supercluster, does not exceed 7 galaxies per cluster and is probably closer to 1–2 galaxies. ### 3.3. Spectral Classification To assess the distribution in the star-formation properties of galaxies in our catalog we have found it useful to classify their spectra into a number of classes. These classes are broadly based upon those used by DG92 and CS87, however, the number of classes has been expanded to better cover the full range of features seen in our large sample. We have also used the properties of low redshift integrated spectra (Kennicutt 1992) and the expected characteristics from spectral modelling to help us define the limits of some of the classes. In revising the classifications we therefore found it necessary to redefine some of the boundaries previously used for the spectral classes. Hence, to reduce confusion between our new classes and those used previously we adopt a new nomenclature and give this and the details of the classification scheme in Table 6. We show a schematic representation of this spectral classification in Fig. 4. It should be noted that for those spectra where sky residuals or the available spectral range precluded the observation of one of the diagnostic spectral features, we have made used the strength of the other Balmer series lines (if H$`\delta `$ was unobservable) or emission lines (if \[Oii\] was unobservable) to identify the most likely spectral class. In the few cases where this has been done comments are included in Table 4. Fig. 3. Redshift distribution for galaxies classed as non-members in the fields of the 10 clusters. The open histogram gives the total redshift distribution for the field galaxies (233 galaxies), the filled histogram is those field galaxies which lie within the WFPC2 field and for which we therefore have detailed morphological information (71 galaxies). Fig. 4. A schematic representation of the spectral classification scheme used in this work. We show the regions of the H$`\delta `$–\[Oii\] equivalent width plane populated by the various spectral types. Those spectral classes not based upon the line strengths of H$`\delta `$ and \[Oii\] (e.g. CSB, e(n), etc.) are not marked. Briefly the overlap between the new system and previous ones can be summarized as follows: we retain the general features of the DG92 system, including k-type and the general class of “e” (emission) galaxies. However, we replace the mixed nomenclature “E+A” with k+a (following the suggestion of Franx 1993) and a+k, depending on the strength of the H$`\delta `$ Balmer line. We also subdivide emission line galaxies into e(a) types (with strong Balmer absorption), e(c) for those with weak or moderate Balmer absorption, and e(b) for those with very strong \[Oii\] (this can sometimes be combined with e(a) for galaxies with both strong \[Oii\] emission and strong Balmer absorption). This nomenclature reflects the nature of the spectra, with e(a) indicating a population of A stars, e(b) a spectrum similar to that expected for a burst of star-formation and e(c) a spectrum for a system undergoing a more constant SFR. 
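The classification logic just summarized is essentially a pair of cuts in the H$`\delta `$–\[Oii\] equivalent-width plane. The sketch below encodes that logic with illustrative thresholds (emission "detected" below $`-5`$Å of \[Oii\], "very strong" below $`-40`$Å, moderate and strong Balmer absorption at $`+3`$Å and $`+8`$Å); these numbers are stand-ins chosen for demonstration and are not the exact boundaries of Table 6.

```python
def spectral_class(ew_oii, ew_hd,
                   oii_detect=-5.0, oii_burst=-40.0,
                   hd_moderate=3.0, hd_strong=8.0, hd_ea=4.0):
    """Toy version of the Table 6 scheme (thresholds are illustrative).

    ew_oii : [OII]3727 equivalent width (negative = emission)
    ew_hd  : H-delta equivalent width (positive = absorption)
    """
    if ew_oii <= oii_burst:                 # very strong [OII] emission
        return "e(b)"
    if ew_oii <= oii_detect:                # emission detected
        return "e(a)" if ew_hd >= hd_ea else "e(c)"
    # no detectable emission
    if ew_hd >= hd_strong:
        return "a+k"
    if ew_hd >= hd_moderate:
        return "k+a"
    return "k"

for oii, hd in [(0.5, 1.0), (-2.0, 5.0), (1.0, 9.5),
                (-15.0, 6.0), (-20.0, 1.0), (-55.0, 2.0)]:
    print(oii, hd, "->", spectral_class(oii, hd))
```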
In comparison with earlier work, the PSG and the HDS galaxies of CS87 fall mostly into the a+k and k+a classes. The CS87 “Spiral” types are placed in e(c) and e(a); however, the SB galaxies are not the same as our type e(b), because the criteria for these in CS87 were not based on \[Oii\] strength. We note that the spectral classes described in Table 6 can be grouped into three main categories: passive (k); past star-forming (k+a and a+k) and currently star-forming (e(c), e(a) and e(b)). AGN spectra (e(n)) are excluded in this division (Table 6). In Column 8 of Table 4 we include a photometric classification in the case of the bluest galaxies. These are labeled “Color Starburst” (CSB) if their restframe color is bluer than that expected for a low metallicity model galaxy with an increasing star-formation rate (P98). This allows us to conservatively identify those galaxies whose very blue colors can only be explained with a current starburst, whatever their spectral type may be. Fig. 5. Representative spectra from each of the spectral classes in our adopted scheme (Table 6, Fig. 4). These are plotted with arbitrary vertical scaling and in the restframe. The galaxies are all cluster members with $`q=1`$ and come from Cl 0939$`+`$47 and Cl 0024$`+`$16. The spectra are not fluxed. To better illustrate the properties of the new classification scheme we show in Fig. 5 a high-quality, representative member of each class from our catalog. In Table 7 we give the distribution of spectral classes within the different clusters (for $`q\le 4`$), as well as the total numbers across all the clusters and the equivalent values for our field samples. As can be seen, the clusters are populated by a wide variety of spectral classes, although comparisons between clusters are not simple owing to the different apparent magnitudes of the samples and the attending variation in the typical quality of the spectra. Table 7 also lists the equivalent numbers of galaxies in each spectral class for which we have morphological information. ## 4. Basic Properties and Correlations of the Data To start the discussion of the spectroscopic sample we have assembled, we review the basic properties of the sample as a whole. We focus on a few of the correlations between the various properties of the galaxies in the sample, in particular the relationships between the morphological, spectral, and kinematic characteristics of certain classes of cluster galaxies. In the following discussion we will include the uncertain spectral classes (marked with a “:” in Tables 4), unless otherwise stated. ### 4.1. Luminosity Functions for the Morphological Classes In order to draw conclusions from our spectroscopic study in the context of the broader morphological catalog (S97), we need to compare the sampling in absolute magnitude of the two catalogs. Fig. 6a shows the absolute magnitude distribution for galaxies in the spectroscopic catalog for which ground-based $`r`$-band photometry is available. This filter approximates $`V`$ in the restframe for all 10 clusters. Our assumption of a single K-correction (from a spectral energy distribution (SED) corresponding to a present day Sbc) introduces only small errors into the magnitude distribution ($`<0.06`$ mags for E/S0 and Sd/Irr SED). Fitting a Schechter function to the bright-end of the distribution in Fig. 6a, we obtain a characteristic magnitude of $`M_V^{\ast }=(-20.64\pm 0.16)+5\mathrm{log}_{10}h`$ (for a fixed faint-end slope of $`\alpha =-1.25`$ as adopted in S97). 
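For reference, the Schechter form used in these fits can be evaluated in a few lines before comparing with the morphological counts; the sketch below (normalization arbitrary, $`h=1`$) simply plugs in the characteristic magnitude and fixed faint-end slope quoted above, and is not the fitting procedure of S97 itself.

```python
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star=1.0):
    """Schechter luminosity function in absolute magnitudes:
    phi(M) dM = 0.4 ln(10) phi* x^(alpha+1) exp(-x) dM,
    with x = 10^(-0.4 (M - M*)) = L / L*."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Evaluate the fitted bright end quoted in the text (h = 1)
M = np.arange(-23.0, -19.0, 0.5)
print(schechter_mag(M, M_star=-20.64, alpha=-1.25))
```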
This is to be compared with a fit to the morphological counts in the cluster fields, corrected for likely contamination in the manner described in S97. Fitting to the composite luminosity function of all morphological types across the 10 clusters we find $`M_V^{\ast }=(-20.79\pm 0.02)+5\mathrm{log}_{10}h`$ (for $`\alpha =-1.25`$). This good agreement indicates that the spectroscopic catalog fairly samples the morphological catalog for $`M_V<-19+5\mathrm{log}_{10}h`$. Fig. 6b shows how the spectroscopic sampling compares as a function of morphological type within the clusters. This is achieved by comparing the spectroscopic sample for $`M_V<-19+5\mathrm{log}_{10}h`$ to the field-corrected morphological counts of S97. There is no significant trend with morphological type except for the selection effect, discussed in §2.1, built into the original sample selection: the Sd/Irregular galaxies are oversampled relative to the E–Sc types (although there is considerable uncertainty in the statistical correction for field galaxies in this bin, S97). This plot allows us to quantify and correct for the sample selection in our analysis as required. ### 4.2. Luminosity Functions for the Spectral Classes The absolute magnitude distribution of the spectral classes defined in this paper will be important to understanding their relationships within the framework of galaxy evolution models. Fig. 7a shows that the magnitude distributions brighter than $`M_V=-19+5\mathrm{log}_{10}h`$ for spectral classes k, k+a and a+k are statistically indistinguishable. In contrast, the e(a) and e(c) classes appear to be systematically fainter than the k class; this difference is confirmed at the $`>95`$% confidence limit using two-sample Kolmogorov-Smirnov tests. It is important to keep in mind the “completeness” limit of the spectroscopic catalog estimated in §4.1, which means that these differences could be larger, and, for example, the apparent peak in the luminosity distribution of e(a)’s in Fig. 7a may be partly an artifact of incomplete sampling. The difference between the k class and the fainter e(b) class is clearly significant: the likelihood that the two samples are drawn from the same luminosity distribution is only $`\mathrm{log}_{10}P\sim -4.6`$. Again, the difference may be larger still, owing to the incomplete sampling below $`M_V=-19+5\mathrm{log}_{10}h`$. We know of no selection effect in our study that would cause us to miss bright e(b) cluster galaxies. As we discuss in P98, the fact that the galaxies that we have identified as bursting are fainter than the other classes is significant, and discouraging for models that attempt to interpret these starbursts as progenitors for galaxies with strong Balmer lines in their spectra. The cluster sample defined by our redshift measurements also allows us to unambiguously derive, for the first time, the absolute magnitude distributions as a function of morphology, again for $`M_V<-19+5\mathrm{log}_{10}h`$. Fig. 7b shows a broad similarity between the absolute magnitudes of early- and mid-type disk systems (S0–Sa–Sb–Sc). Compared to these, elliptical galaxies show a systematically brighter distribution, and irregular galaxies exhibit a tail of fainter systems. These trends are in good agreement with what is seen in low-redshift clusters. ### 4.3. Morphological Properties of the Cluster Galaxies What do the galaxies in our spectral classes look like? We illustrate the morphologies of the cluster members within each spectral class in Figs. 8. 
The general trend towards later-types in the active spectral classes is clear. The passive spectral classes are dominated by early-type galaxies, particularly ellipticals. The correspondence of morphology and spectral properties, the same as found for low-redshift analogs, indicates that a substantial fraction of the luminous ellipticals of these clusters was in place by $`z\sim 0.5`$ (Ellis et al. 1997, Dressler et al. 1997). The e(c) spectra are generally associated with disk galaxies, most of them familiar spirals and irregulars. This is true of some of the e(a)’s as well, but this class also includes many disk systems that look more disturbed than typical present-day spirals. The k+a/a+k class does include some elliptical galaxies, but the majority are disk galaxies, a few of which have an irregular or disturbed appearance. The significance of the correlation of morphology and spectral class is discussed further in B98. Fig. 10. A comparison of the distribution of morphological type within each spectroscopic class, for both cluster and field galaxies. We briefly discuss the evidence for the influence of interactions and mergers on the spectral classes of galaxies in our cluster samples. (We also comment on this issue in §5.2 which deals with the kinematics of the different cluster populations.) We show in Fig. 9a the distribution of disturbance class within the different spectral classes. The image disturbance, $`D`$, is a visual classification of the degree to which the galaxy’s structure appears distorted or disturbed (S97) compared to a typical low-redshift galaxy of the same morphological type. The $`D`$ class correlates well with the asymmetry of the galaxy’s light profile (S97). Fig. 9a suggests that the spectral properties of the galaxies broadly correlate with the degree of image distortion and disturbance, the active and recently active populations having more galaxies classed as strongly asymmetric or distorted. However, looking at Fig. 9b we see an arguably stronger correlation between morphology and $`D`$ with a pronounced shift towards higher $`D`$ values in going to later-types (Sb–Sd/Irr). This could be due to a failure on our part to actually separate disturbance from a natural trend toward more irregular morphology for late-type systems, but the large number of $`D\ge 2`$ Sc galaxies (a type that is generally symmetric for low-redshift galaxies) suggests that the effect is real.<sup>13</sup><sup>13</sup>13This tendency of intermediate-redshift disk galaxies to appear more asymmetric than low-redshift galaxies of similar type has been reported in essentially all studies of this type. If so, it most likely reflects the greater fragility of disks (compared to bulges) to perturbations, and the greater frequency of perturbations at higher redshift. However, we see that this effect does not appear to be a result of the high density cluster environment: Fig. 9b shows that $`50\pm 8`$% of the cluster Sb–Sc–Sd/Irr galaxies have $`D\ge 2`$, a proportion similar to that seen in the late-type field population, $`60\pm 11`$%. The same effect is seen at low-redshift (Hashimoto & Oemler 1999). ### 4.4. Spectroscopic Properties of the Cluster Galaxies In Fig. 10 we quantify the distribution of morphological type for the various spectral classes, for both cluster members and field galaxies. The strong, though broad, relation between morphology and star-formation seen in low-redshift galaxies is present in this intermediate redshift sample as well. 
Looking at the star-forming population which causes the Butcher-Oemler effect we see a clear tendency for these galaxies to be predominantly late-type systems (Couch et al. 1994, 1998; Dressler et al. 1994; Oemler et al. 1997), although here there is a tail of earlier-types (at least in the e(a) and e(c) classes). These active early-type (E and S0) galaxies comprise a higher fraction of the field population than they do in the clusters. The two “recently-active” classes, k+a and a+k, appear to have morphological distributions which are intermediate between the passive and active cluster populations. There seems to be a clear distinction between k+a and a+k in the sense that the latter are of later morphological type, though the small number of a+k types limits the statistical certainty of this result. Fig. 11. The cumulative distribution of \[Oii\] 3727 EW for three independent morphological bins for both cluster (solid line) and field (dotted line) populations. It is interesting that, although the passive cluster population is dominated by elliptical and S0 galaxies, there is a significant number of later types, stretching out to Sd/Irr, which also show no emission lines. Aperture biases in our spectroscopy are unlikely to explain the lack of observed star-formation in this group: the spectra sample the central $`65h^2`$ kpc<sup>2</sup> of these distant galaxies. Further support for a lack of on-going star-formation in these systems is shown by the uniform red colors of those galaxies for which we have imaging in two passbands with WFPC2. We quantify the occurrence of passive late-type galaxies, and compare cluster and field populations, in Fig. 11. Using the cumulative distribution of \[Oii\] 3727 EW, we find that for the morphological groups E and S0–Sb there is a significantly higher fraction of galaxies showing little or no \[Oii\] emission in clusters as compared to the field. The likelihood, $`P`$, that the cluster and field samples are drawn from the same population is less than $`\mathrm{log}_{10}P<2.4`$ for both E and S0–Sb samples. However, the comparison of the \[Oii\] distribution of the latest-type systems (Sc–Irr, T=7–10) shows no significant difference between the cluster and field, although the number of galaxies is somewhat smaller. As an overall trend, then, there seems to be a decline in current star-formation at a fixed Hubble-type from field to cluster (see also Balogh et al. 1998). Furthermore, based on \[Oii\] EW alone as a measure of star formation, we see no evidence for enhanced star-formation in gas-rich cluster galaxies compared to the equivalent morphological sample in the field. We discuss this incidence of passive late-type galaxies further in P98. Fig. 12. A comparison of the distribution of D4000 measures in the different morphological types, for both cluster and field galaxies. In contrast to these results based on \[Oii\] EW, the distribution of D4000 strengths (Fig. 12) is very similar for cluster and field: the individual morphological types are indistinguishable in D4000 at better than $`\mathrm{log}_{10}P>1`$ in each case. Thus, while \[Oii\], the tracer of current star-formation, shows a decline in the cluster, this does not appear to be reflected in an index sensitive to the star-formation averaged over a somewhat longer period of the recent past ($`1`$–3 Gyrs). ## 5. Results and Discussion ### 5.1. 
The Incidence of k+a/a+k and e(a) Galaxies Our spectral catalog exhibits one effect that is especially strong: the incidence of k+a/a+k galaxies in distant clusters is very high compared to the surrounding field. Table 7 shows that in the cluster sample we have 60 examples of k+a and 18 examples of a+k, totaling 18% of the sample. This is similar to the typical value of $``$10–20% found by magnitude-limited surveys of distant clusters (DG92; Couch et al. 1998). However, this value strongly contrasts with the 7 occurrences, all k+a, found in the high-redshift field sample, only 2%. Indeed, 4 of these 7 cases are either uncertain or border-line, a far greater fraction than for the cluster sample, so an incidence of $``$1% is compatible with these data. For the low-redshift Las Campanas Redshift Survey (hereafter LCRS), Zabludoff et al. (1996) found an incidence of 0.2%, but their selection criteria included a stronger limit on H$`\delta `$ of 5.5 Å and they note that the number increases to 0.6% when the limit is dropped to 4.5 Å. Hashimoto (1998) has evaluated the occurrence of the spectral classes as defined in this paper for the LCRS, and finds 2.3% for the occurrence of k+a/a+k types. In summary, these data seem to point to at most a factor two increase in the frequency of k+a/a+k types between the low- and intermediate-redshift field populations. This is in marked contrast to the order-of-magnitude increase in the frequency of k+a types in rich clusters. At low redshift, this frequency is $`<1`$% (determined using the Dressler and Shectman (1988) catalog), compared to the 18% found here for the $`z0.5`$ clusters.<sup>14</sup><sup>14</sup>14Caldwell & Rose (1997) have reported a frequency of $``$15% of notably stronger Balmer lines in early type galaxies in five low-redshift clusters. These are for the most part lower luminosity systems, with H$`\delta <`$ 3.0 Å, which the authors suggest are the remnants of earlier bursts. The results of that study do not, therefore, conflict with the much lower frequency found by Dressler and Shectman for stronger, more luminous systems. Zabludoff et al. attributed many of the low-redshift field “E+A’s” as due to mergers and strong interactions, since morphologies of this type are often observed in the low-redshift examples. The expected evolution can be estimated from the change in the incidence of close pairs (Zepf & Koo 1989; Patton et al. 1997), which would be predicted to be the parent population. Patton et al. (1997) claim that the proportion of close pairs (two galaxies within 20$`h^1`$ kpc) increases by a factor of $`1.5`$ between $`z=0`$ and $`z=0.33`$. Extrapolating this behavior to $`<z>=0.42`$ would predict an increase in the fraction of close pairs of $`2`$–3 over that seen locally. Although we see at most a factor of two increase in the k+a population from low to high redshift using the LCRS and our sample, this does not rule out that a significant fraction of field k+a’s are due to such mergers. Zabludoff et al. argue further that, as the merger/interaction mechanism appears to be responsible for low-redshift field examples of such galaxies, it is reasonable to conclude that mergers may also be responsible for the k+a/a+k galaxies in the intermediate-redshift clusters. However, the radically different evolution described above of the k+a/a+k population between cluster and field environments suggests that the cluster environment is crucial in either the formation of cluster k+a/a+k galaxies, or in prolonging their visibility. 
This could in part be due to an increased propensity for mergers in the groups infalling into the intermediate-redshift clusters. However, our morphological analysis (S97) finds only a minority of cases of k+a spectra where the galaxy shows signs of a classic two-body merger, as Zabludoff et al. found for the low-redshift field examples. We conclude, then, that at least one mechanism other than mergers is responsible for the large fraction of k+a/a+k galaxies in intermediate-redshift clusters. As we discuss in B98, the majority of k+a/a+k spectra are the result of a sudden decline in star formation rate that followed a substantial rise, or burst, of star-formation, leaving a population of A-stars to dominate the light for $`\sim 10^9`$ years. Given the generic nature of the star-formation history required to form an a+k/k+a, mergers are obviously not a unique explanation for the k+a/a+k phenomena. For example, accretion of smaller satellites, instead of mergers of comparable mass systems, is not inconsistent with the morphologies we see. The greater fraction of k+a/a+k galaxies in the intermediate-redshift clusters as compared to the field is likely to be connected as well with the frequency of e(a) galaxies in these environments (B98). ### 5.2. The Distribution and Kinematic Properties of the Cluster Galaxies As a final exercise in the comparison of spectroscopic properties with other cluster characteristics, we examine the radial distributions of our cluster sample as a function of spectroscopic type. We begin by assigning field centers; these positions are given in Table 2. There is usually little ambiguity in this choice due to the presence of a D or cD galaxy; these have been confirmed as the cluster centers in all cases by our weak lensing analysis in Smail et al. (1997b). Even in more complex cases, such as Cl 1447$`+`$26, the ambiguity in choosing a center will play little role over the large range in radius we investigate. Fig. 13. The cumulative radial distribution of different spectral types. These are shown for all members from the whole sample which have $`M_V<-19+5\mathrm{log}_{10}h`$. There is a clear difference between the radial distribution of k, k+a/a+k, and e type galaxies, with the former being most concentrated, the latter the least. The k+a/a+k class seems to be intermediate between the two, showing a similar decline to the k types on the outskirts of the cluster, but a flatter distribution in the core, more in keeping with that seen for the e types. In Fig. 13 we show for the combined clusters the cumulative radial distribution for different spectroscopic types. This procedure is crude because it averages over the non-spherical distribution of galaxies within the clusters, but it may provide some insight into the characteristic distributions of different classes of galaxies. Not surprisingly, the k types, generally made up of E and S0 galaxies (Fig. 10), but including significant early-type spirals as well, are the most concentrated population in these clusters (cf. S97). Also, not surprisingly, the emission line galaxies strongly avoid the center ($`r\lesssim 50h^{-1}`$ kpc) of these clusters and have a much more extended distribution. 
What is perhaps more interesting is the way the k+a/a+k types, which may sensibly be interpreted as post-starburst galaxies, avoid the centers in contrast to the k types, but are far less extended than the emission-line galaxies.<sup>15</sup><sup>15</sup>15It is tempting to describe this distribution as a “thick shell”, but we consider this potentially misleading due to the substantial departures from spherical symmetry exhibited by our clusters. Rather, it is probably more instructive to think of k+a/a+k types occurring most frequently at an intermediate radius $`R\sim 200`$ kpc. The near absence of k+a types in the field, discussed above, coupled with the sudden rise in their frequency as the cluster center is approached, with an almost complete demise in the central regions, appears to be clear evidence for the environment affecting either their formation or visibility. We note in passing that a similar diagram subdividing the e types into e(a), e(c), and e(b) shows no significant difference, though there is a hint that the e(a) class has a slightly more extended distribution. We now investigate the rudimentary kinematics of the sample of cluster members. In Table 8 we list the restframe velocity dispersions and uncertainties for the entire cluster sample broken down in terms of morphological type, spectral class, disturbance and activity class (the latter three sections refer only to those galaxies lying within the WFPC2 fields). The distributions for the spectral and morphological types are also shown in Fig. 14. These values are calculated using the mean cluster redshifts listed in Table 2 and are simple averages across the cluster (no allowance has been made for different velocity dispersion for the different clusters – when such corrections are applied they make no qualitative change to the conclusions listed below). The uncertainties in the velocity dispersions are $`1\sigma `$ values estimated from bootstrap resampling of the observed distributions. Starting with the morphological samples in Table 8 we see a marked difference between the velocity dispersion of the elliptical galaxies and all the later-types, the latter having higher dispersions (including the S0 galaxies). A similar difference is noticeable when the sample is split into different spectral classes (now including the whole spectroscopic catalog of members). Interestingly the galaxies whose spectra were too poor to be classified, the “?” class, show the lowest dispersion – suggesting that these may be predominantly passive cluster galaxies. The strongest trend is the significantly higher velocity dispersion of the presently or recently star-forming systems compared to the passive population (cf. Dressler 1986). In particular, combining the different spectral classes (e(all) comprises e(a)/e(b)/e(c)/e(n)/e) from Table 8 we find that the emission-line and k-type galaxies have relative dispersions of $`\sigma _{em}/\sigma _k=1.40\pm 0.16`$, with the k+a/a+k galaxies being intermediate between the two. The higher dispersions of the active populations are consistent with these galaxies being less virialised than the k-type population. Such a trend can also be discerned in the variation of velocity with activity (as traced by the \[Oii\] EW) within the individual morphological types. Splitting each of the more active morphological classes (Sb-Sd/Irr) at its median \[Oii\] EW into “low” and “high” activity samples we find the dispersions listed at the bottom of Table 8 for the different morphological samples. 
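Before comparing these subsamples further, note that a dispersion and its bootstrap uncertainty of the kind quoted in Table 8 follow from a short calculation; the resampling scheme and the example velocities below are ours, added purely for illustration.

```python
import numpy as np

def bootstrap_sigma(v, n_boot=1000, seed=0):
    """Velocity dispersion and its bootstrap 1-sigma uncertainty."""
    rng = np.random.default_rng(seed)
    v = np.asarray(v)
    sig = v.std(ddof=1)
    boots = np.array([rng.choice(v, size=v.size, replace=True).std(ddof=1)
                      for _ in range(n_boot)])
    return sig, boots.std()

# Invented restframe velocities (km/s) for one spectral class
rng = np.random.default_rng(11)
v = rng.normal(0.0, 1100.0, size=45)
print(bootstrap_sigma(v))
```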
For all three morphological types, the more active sample shows the higher velocity dispersion. A higher velocity dispersion is often taken as a sign of an infalling population, but, as we discuss below, including spatial information in our analysis shows only weak evidence for infall. We note that the \[Oii\] EW distributions for these active cluster members do not show any enhanced activity over that seen in the surrounding field for any given morphological type (cf. §4.2). Apparently, then, the observed correlation of velocity dispersion and activity (as measured by \[Oii\] EW) is not triggered by a mechanism which causes the higher star-formation rates due to the high relative velocities of the galaxies within the clusters (i.e. ram-pressure induced star-formation). We suggest instead that the correlation between activity and velocity dispersion reflects a decline in star-formation in the galaxies which runs in parallel with, and is causally linked to, their virialization within the clusters. In this regard we also mention the trend for more disturbed galaxies to have higher velocity dispersions (Table 8), a result which remains when we restrict the analysis to late-type galaxies, Sb–Irr. We next combine the velocity and positional information for the entire sample of clusters, focusing on the spectral classes, to calculate the velocity dispersion as a function of spectral class and radius. Dressler (1986) used the Giovanelli and Haynes (19ZZ) catalog of spirals in nearby clusters to show that gas-poor spirals tend to travel on radial orbits that take them into the cluster center, as compared to the more isotropic orbits of the gas-rich spirals. Here we divide the k, k+a/a+k, and emission-line classes into three bins of radial position, each containing one-third of their respective samples. The resulting velocity dispersions are shown in Fig. 15. Fig. 15. The velocity dispersions of the different spectral types, averaged over the entire sample, as a function of radius. The velocity dispersion is everywhere higher for active systems compared to passive galaxies. Note that the k+a/a+k types exhibit a peak in velocity dispersion that may be related to their distinctive spatial distribution. Unlike the clear difference in the orbital properties of gas-deficient systems in nearby clusters, our sample exhibits ambiguous evidence at best. The k and e types both have velocity dispersions that fall gently with radius or are, within the errors, flat. This suggests populations on mildly radial orbits, possibly an infalling population, or a simple isothermal distribution. More puzzling is the peak in velocity dispersion for the k+a/a+k types, which does appear to be statistically significant. It is possible, of course, that higher velocities increase the chance of producing a k+a/a+k. It is also possible that this kinematic feature is connected with their unusual radial distribution, as mentioned above. A system of largely circular orbits that might characterize this distribution, which is concentrated like the k-types but avoids the core, would appear to have a higher velocity dispersion due to projection of what are largely tangential velocities. This is, however, not consistent with the idea that such galaxies derive from an infalling population on what are basically radial orbits. At this point, the statistics are poor enough, and the range of models so broad, that it is not worthwhile to explore this further here. ## 6. Conclusions
• We have presented detailed spectroscopic observations of 657 galaxies in the fields of 10 $`z=0.37`$–0.56 clusters. Combining these with our detailed HST-based morphological catalogs in these fields we construct samples of 204 cluster members and 71 field galaxies with both accurate spectral and morphological information. • Using observational and theoretical justifications we have constructed a new quantitative spectral classification scheme and use this to interpret correlations between our spectral information and other properties of the galaxies in our catalog. • Based upon an analysis of the \[Oii\] EW distributions, we find no evidence for an increase in the occurrence of strongly star-forming galaxies in the moderate-redshift cluster environment compared to the moderate-redshift field using morphologically-selected samples. However, we do find a large population of late-type cluster, but not field, galaxies which show little or no evidence of on-going star-formation. • This passive, late-type cluster population is related to our spectral classes k+a/a+k, both of which we interpret as indicative of post-starburst behavior. Galaxies with k+a/a+k spectra are an order of magnitude more frequent in the cluster environment compared to the high redshift field. • These k+a/a+k galaxies avoid the central regions of the clusters, in contrast to the k types, but are also far less extended than the emission-line galaxies, and much less common in the field. This appears to be clear evidence for the environment affecting either their formation or visibility. • A detailed analysis of the spectroscopic and morphological information discussed here will be presented in Poggianti et al. (1998). ## Acknowledgements We thank Ray Lucas at STScI for his enthusiastic help which enabled the efficient gathering of these HST observations. BMP and HB warmly thank Steve Maddox for crucial help during the 1995 WHT run and in the subsequent reduction of those data. We also thank Alfonso Aragón-Salamanca, Nobuo Arimoto and Amy Barger for useful discussions and assistance. AD and AO acknowledge support from NASA through STScI grant 3857. IRS acknowledges support from a PPARC Advanced Fellowship and from Royal Society and Australian Research Grants while an Honorary Visiting Fellow at UNSW. WJC acknowledges support from the Australian Department of Industry, Science and Technology, the Australian Research Council and Sun Microsystems. This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX-CT96-086 of its TMR program. We acknowledge the availability of Kennicutt’s (1992) atlas of galaxies from the NDSS-DCA Astronomical Data Center. Fig. 8a. The images of those galaxies in our sample lying within the WFPC2 frame, grouped into spectroscopic classes. This first panel shows those galaxies with k spectral types. Each image is $`5^{\prime \prime }\times 5^{\prime \prime }`$ (or 15.5–18.6 $`h^{-1}`$ kpc depending upon the cluster’s redshift) and has the same orientation as the HST field (S97). The cluster and galaxy ID from the WFPC2 catalogs (from Tables 4 in S97, or Table 4a below) and the spectral class and morphological type are marked on each frame. For the plates in Fig. 8 see http://www.ociw.edu/~irs/morphs2.html#figs Fig. 8a. continued. Fig. 8a. continued. Fig. 8b. The k+a sample. Fig. 8c. The a+k sample. Fig. 8d. The e(a) sample. Fig. 8e. The e(c) sample. Fig. 8f. The e(b) sample. Fig. 8g.
The e(n) sample.
# Escape Probability, Mean Residence Time and Geophysical Fluid Particle Dynamics ## 1 Stochastic dynamics: Escape probability and mean residence time Stochastic dynamical systems are used as models for various scientific and engineering problems. We consider the following class of stochastic dynamical systems $`\dot{x}`$ $`=`$ $`a_1(x,y)+b_1(x,y)\dot{w}_1,`$ (1) $`\dot{y}`$ $`=`$ $`a_2(x,y)+b_2(x,y)\dot{w}_2,`$ (2) where $`w_1(t),w_2(t)`$ are two real independent Wiener processes (white noises) and $`a_1,a_2,b_1,b_2`$ are given deterministic functions. More complicated stochastic systems also occur in applications. For a planar bounded domain $`D`$, we can consider the exit problem of random solution trajectories of (1)-(2) from $`D`$. To this end, let $`\partial D`$ denote the boundary of $`D`$ and let $`\mathrm{\Gamma }`$ be a part of the boundary $`\partial D`$. The escape probability $`p(x,y)`$ is the probability that the trajectory of a particle starting at $`(x,y)`$ in $`D`$ first hits $`\partial D`$ (or escapes from $`D`$) at some point in $`\mathrm{\Gamma }`$, and $`p(x,y)`$ is known to satisfy $`{\displaystyle \frac{1}{2}}b_1^2(x,y)p_{xx}+{\displaystyle \frac{1}{2}}b_2^2(x,y)p_{yy}+a_1(x,y)p_x+a_2(x,y)p_y`$ $`=`$ $`0,`$ (3) $`p|_\mathrm{\Gamma }`$ $`=`$ $`1,`$ (4) $`p|_{\partial D\setminus \mathrm{\Gamma }}`$ $`=`$ $`0.`$ (5) Suppose that initial conditions (or initial particles) are uniformly distributed over $`D`$. The average escape probability $`P`$ that a trajectory will leave $`D`$ along the subboundary $`\mathrm{\Gamma }`$, before leaving the rest of the boundary, is given by $$P=\frac{1}{|D|}\int _Dp(x,y)\,dx\,dy,$$ (6) where $`|D|`$ is the area of domain $`D`$. The residence time of a particle initially at $`(x,y)`$ inside $`D`$ is the time until the particle first hits $`\partial D`$ (or escapes from $`D`$). The mean residence time $`u(x,y)`$ is given by $`{\displaystyle \frac{1}{2}}b_1^2(x,y)u_{xx}+{\displaystyle \frac{1}{2}}b_2^2(x,y)u_{yy}+a_1(x,y)u_x+a_2(x,y)u_y`$ $`=`$ $`-1,`$ (7) $`u|_{\partial D}`$ $`=`$ $`0.`$ (8) ## 2 A quasigeostrophic jet model The Lagrangian view of fluid motion is particularly important in geophysical flows since only Lagrangian data can be obtained in many situations. It is essential to understand fluid particle trajectories in many fluid problems. Escape probability (from a fluid domain) and mean residence time (in a fluid domain) quantify fluid transport between flow regimes of different characteristic motion. Deterministic quantities like escape probability and mean residence time can be computed by solving Fokker-Planck type partial differential equations. We now use these ideas in the investigation of meandering oceanic jets. Meandering oceanic jets such as the Gulf Stream are strong currents dividing different bodies of water. Recently, del-Castillo-Negrete and Morrison, and Pratt et al., have studied models for oceanic jets. These models are dynamically consistent to within a linear approximation, i.e., the potential vorticity is approximately conserved. Del-Castillo-Negrete and Morrison’s model consists of the basic flow plus time-periodic linear neutral modes. In this paper, we consider an oceanic jet consisting of the basic flow as in del-Castillo-Negrete and Morrison, plus random-in-time noise. This model incorporates small-scale oceanic motions such as molecular diffusion, which is an important factor in the Gulf Stream.
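Before turning to the jet model, note that the boundary-value problem (3)-(5) can be approximated directly on a simple geometry. The sketch below is a minimal finite-difference (Jacobi) iteration on a rectangle with Γ taken to be the top side; the drift, noise amplitudes, domain and grid are assumptions chosen for illustration, and the central-difference scheme is only reliable when the drift is weak relative to the noise on the chosen grid.

```python
import numpy as np

def escape_probability_rectangle(a1, a2, b1, b2, Lx=1.0, Ly=1.0,
                                 nx=61, ny=61, n_iter=20000):
    """Solve Eq. (3)-(5) on [0,Lx]x[0,Ly] by Jacobi iteration, with Gamma
    taken to be the top boundary y = Ly (p = 1 there, p = 0 elsewhere).
    a1, a2, b1, b2 are functions of the grid arrays (X, Y)."""
    x = np.linspace(0.0, Lx, nx)
    y = np.linspace(0.0, Ly, ny)
    hx, hy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")

    cxx = 0.5 * b1(X, Y) ** 2 / hx ** 2      # coefficient of p_xx
    cyy = 0.5 * b2(X, Y) ** 2 / hy ** 2      # coefficient of p_yy
    cx = a1(X, Y) / (2.0 * hx)               # coefficient of p_x (central difference)
    cy = a2(X, Y) / (2.0 * hy)               # coefficient of p_y
    diag = 2.0 * (cxx + cyy)

    p = np.zeros((nx, ny))
    p[:, -1] = 1.0                           # Gamma: escape through the top side
    for _ in range(n_iter):
        p[1:-1, 1:-1] = (
            cxx[1:-1, 1:-1] * (p[2:, 1:-1] + p[:-2, 1:-1])
            + cyy[1:-1, 1:-1] * (p[1:-1, 2:] + p[1:-1, :-2])
            + cx[1:-1, 1:-1] * (p[2:, 1:-1] - p[:-2, 1:-1])
            + cy[1:-1, 1:-1] * (p[1:-1, 2:] - p[1:-1, :-2])
        ) / diag[1:-1, 1:-1]

    P = p.mean()                             # crude area average, Eq. (6)
    return x, y, p, P

# Illustration: small additive noise and a weak upward drift
eps = 0.001
_, _, p, P = escape_probability_rectangle(
    a1=lambda X, Y: 0.0 * X,
    a2=lambda X, Y: 0.02 + 0.0 * X,
    b1=lambda X, Y: np.sqrt(eps) + 0.0 * X,
    b2=lambda X, Y: np.sqrt(eps) + 0.0 * X)
print(f"average escape probability through Gamma (top side): P = {P:.3f}")
```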
The irregularity of RAFOS floats also suggests the inclusion of random effects in Gulf Stream modeling. This random jet may also be viewed as satisfying, approximately in the spirit of del-Castillo-Negrete and Morrison, the randomly wind forced quasigeostrophic model. Several authors have considered the randomly wind forced quasigeostrophic model in order to incorporate the impact of uncertain geophysical forces. They studied statistical issues such as estimating correlation coefficients for the linearized quasigeostrophic equation with random forcing. There is also recent work which investigates the impact of the uncertainty of the ocean bottom topography on quasigeostrophic dynamics. The randomly forced quasigeostrophic equation takes the form $$\mathrm{\Delta }\psi _t+J(\psi ,\mathrm{\Delta }\psi )+\beta \psi _x=\frac{dW}{dt},$$ (9) where $`W(x,y,t)`$ is a space-time Wiener process (white noise). The stream function would have a random or noise component. Note that $`\beta `$ is the meridional derivative of the Coriolis parameter, i.e., $`\beta =\frac{2\mathrm{\Omega }}{r}\mathrm{cos}(\theta )`$, where $`\mathrm{\Omega }`$ is the rotation rate of the earth, $`r`$ the earth’s radius and $`\theta `$ the latitude. Since $`\mathrm{\Omega }`$ and $`r`$ are fixed, $`\beta `$ is a monotonically decreasing function of the latitude. The deterministic meandering jet derived in del-Castillo-Negrete and Morrison is $$\mathrm{\Psi }(x,y)=-\mathrm{tanh}(y)+asech^2(y)\mathrm{cos}(kx)+cy,$$ where $$a=0.01,c=\frac{1}{3}(1+\sqrt{1-\frac{3}{2}\beta }),k=\sqrt{2(1+\sqrt{1-\frac{3}{2}\beta })},0\le \beta \le \frac{2}{3}.$$ This $`\mathrm{\Psi }(x,y)`$ is an approximate solution of the usual quasigeostrophic model $$\mathrm{\Delta }\psi _t+J(\psi ,\mathrm{\Delta }\psi )+\beta \psi _x=0.$$ (10) With random wind forcing or molecular diffusive forcing in the stochastic quasigeostrophic model (9), the stream function would have a random or noise component. We approximate this noise component by adding a noise term to the above deterministic stream function $`\mathrm{\Psi }(x,y)`$, that is, in the rest of this paper, we consider the following random stream function as a model for a quasigeostrophic meandering jet, $$\stackrel{~}{\mathrm{\Psi }}(x,y)=-\mathrm{tanh}(y)+asech^2(y)\mathrm{cos}(kx)+cy+\text{noise}.$$ The equations of motion for fluid particles in this jet then have noise terms. We further approximate them as white noises (or Wiener processes) $`dx`$ $`=`$ $`-\mathrm{\Psi }_ydt+\sqrt{ϵ}dw_1,`$ (11) $`dy`$ $`=`$ $`\mathrm{\Psi }_xdt+\sqrt{ϵ}dw_2,`$ (12) or more specifically, $`dx`$ $`=`$ $`[sech^2(y)+2asech^2(y)\mathrm{tanh}(y)\mathrm{cos}(kx)-c]dt+\sqrt{ϵ}dw_1,`$ (13) $`dy`$ $`=`$ $`-aksech^2(y)\mathrm{sin}(kx)dt+\sqrt{ϵ}dw_2,`$ (14) where $`0<ϵ<1`$, and $`w_1(t),w_2(t)`$ are two real independent Wiener processes (in time only). The calculations below are for $`ϵ=0.001`$. Note that $`\beta `$ is now the only parameter in (13)-(14), as $`a`$ is given and $`c`$ and $`k`$ depend only on $`\beta \in [0,\frac{2}{3}]`$. When $`ϵ=0`$, the deterministic jet consists of the jet core and two rows of recirculating eddies, which are called the northern and southern recirculating regions. Outside the recirculating regions are the exterior retrograde regions; see Figure 1. ## 3 Escape probability We take $`D`$ to be either an eddy or a piece of jet core (see Figures 2, 3).
This piece of jet core has the same horizontal length scale as an eddy, and it is one period of the deterministic jet core (note that the deterministic velocity field is periodic in $`x`$). Thus we call this piece of jet core a unit jet core. From (3), (4), (5), the escape probability $`p(x,y)`$ that a fluid particle, initially at $`(x,y)`$, crosses the subboundary $`\mathrm{\Gamma }`$ of the domain $`D`$ satisfies $`ϵ\mathrm{\Delta }p+[sech^2(y)+2asech^2(y)\mathrm{tanh}(y)\mathrm{cos}(kx)-c]p_x-aksech^2(y)\mathrm{sin}(kx)p_y`$ $`=`$ $`0,`$ (15) $`p|_\mathrm{\Gamma }`$ $`=`$ $`1,`$ (16) $`p|_{\partial D\setminus \mathrm{\Gamma }}`$ $`=`$ $`0.`$ (17) We take $`\mathrm{\Gamma }`$ to be either the top or the bottom boundary of an eddy or a unit jet core (see Figures 2, 3). We numerically solve this elliptic system for various values of $`\beta `$ between $`0`$ and $`\frac{2}{3}`$. In the unit jet core case, we take periodic boundary conditions in the horizontal (meridional) $`x`$ direction, with period $`\frac{2\pi }{k}`$. A piecewise linear, finite element approximation scheme was used for the numerical solutions of the escape probability $`p(x,y)`$ and the mean residence time $`u(x,y)`$, described by the elliptic equations (3) and (7), respectively. Using a collection of points lying on the boundary, piecewise cubic splines were constructed to define the boundary of the eddy, and the top and bottom boundaries of the jet core. Computational (triangular) grids for the eddy and the jet core were then obtained by deforming regular grids constructed for an ellipse and a rectangular region, respectively. The computed escape probabilities for crossing the upper or lower boundary of an eddy or a unit jet core, for the case of $`\beta =\frac{1}{3}`$, are shown in Figures 4, 5, 6 and 7. Suppose that the fluid particles are initially uniformly distributed in $`D`$ (an eddy or a unit jet core). We can also compute the average escape probability $`P`$ that a particle will leave $`D`$ along the upper or lower subboundary $`\mathrm{\Gamma }`$, using the formula (6); see Figures 8 and 9. For an eddy, the average escape probability for fluid particles (initially inside the eddy) to escape into the exterior retrograde region is smaller than that to escape into the jet core for $`0<\beta <0.3333`$, while for $`0.3333<\beta <\frac{2}{3}`$, the opposite holds (Figure 8). Thus $`\beta =0.3333`$ is a bifurcation point. Also, the average escape probability for fluid particles to escape into the exterior retrograde region increases as $`\beta `$ increases from $`0`$ to $`0.54`$ (or as latitude decreases accordingly), and then decreases as $`\beta `$ increases from $`0.54`$ to $`\frac{2}{3}`$ (or as latitude decreases accordingly). Thus $`\beta =0.54`$ is another bifurcation point. The opposite holds for the average escape probability for fluid particles to escape into the jet core. For a unit jet core near the jet troughs, the average escape probability for fluid particles (initially inside the jet core) to escape into the northern recirculating region is greater than that to escape into the southern recirculating region for $`0<\beta <0.115`$, while for $`0.385<\beta <\frac{2}{3}`$, the opposite holds. Moreover, for $`0.115<\beta <0.385`$, fluid particles are about equally likely to escape into either recirculating region (Figure 9). Furthermore, for a unit jet core near the jet crests, the situation is the opposite of that near the jet troughs.
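These PDE-based results can be cross-checked, approximately, by direct simulation: launch many particles from a uniform initial distribution, integrate the stochastic particle equations (13)-(14) with the Euler-Maruyama scheme, and record through which boundary each particle first leaves and after how long. The sketch below does this on a rectangular stand-in for one of the domains (the true eddy and jet-core boundaries are curved separatrices, so the numbers are only indicative); the strip limits, time step and particle number are assumptions, and particles that have not escaped by the end of the run are simply left out of the residence-time average.

```python
import numpy as np

A, BETA, EPS = 0.01, 1.0 / 3.0, 0.001
S = 1.0 + np.sqrt(1.0 - 1.5 * BETA)
C, K = S / 3.0, np.sqrt(2.0 * S)
PERIOD = 2.0 * np.pi / K                      # one x-period of the jet

def drift(x, y):
    """Drift terms of the particle equations (13)-(14)."""
    sech2 = 1.0 / np.cosh(y) ** 2
    return (sech2 + 2.0 * A * sech2 * np.tanh(y) * np.cos(K * x) - C,
            -A * K * sech2 * np.sin(K * x))

def monte_carlo_exit(y_bot, y_top, n_particles=2000, dt=0.01, n_steps=20000, seed=3):
    """Escape fractions through the top/bottom sides and mean residence time."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, PERIOD, n_particles)
    y = rng.uniform(y_bot, y_top, n_particles)
    alive = np.ones(n_particles, dtype=bool)
    exit_top = np.zeros(n_particles, dtype=bool)
    t_exit = np.full(n_particles, np.nan)
    for n in range(n_steps):
        fx, fy = drift(x[alive], y[alive])
        x[alive] += fx * dt + np.sqrt(EPS * dt) * rng.normal(size=alive.sum())
        y[alive] += fy * dt + np.sqrt(EPS * dt) * rng.normal(size=alive.sum())
        x %= PERIOD                            # periodic in the along-jet direction
        hit_top = alive & (y >= y_top)
        hit_bot = alive & (y <= y_bot)
        exit_top |= hit_top
        t_exit[hit_top | hit_bot] = (n + 1) * dt
        alive &= ~(hit_top | hit_bot)
        if not alive.any():
            break
    escaped = ~np.isnan(t_exit)
    return (exit_top.sum() / n_particles,
            (escaped & ~exit_top).sum() / n_particles,
            np.nanmean(t_exit))

# Crude rectangular stand-in for the northern recirculating row at beta = 1/3
p_top, p_bot, t_mean = monte_carlo_exit(y_bot=0.3, y_top=1.3)
print(f"escape through top: {p_top:.2f}, bottom: {p_bot:.2f}, mean residence time: {t_mean:.1f}")
```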
## 4 Mean residence time The mean residence time $`u(x,y)`$ of a fluid particle, initially at $`(x,y)`$ in either an eddy or a piece of jet core (see Figures 2, 3), satisfies $`ϵ\mathrm{\Delta }u+[sech^2(y)+2asech^2(y)\mathrm{tanh}(y)\mathrm{cos}(kx)-c]u_x-aksech^2(y)\mathrm{sin}(kx)u_y`$ $`=`$ $`-1,`$ (18) $`u|_{\partial D}`$ $`=`$ $`0.`$ (19) The mean residence times of fluid particles in an eddy or a unit jet core are shown, for the case of $`\beta =\frac{1}{3}`$, in Figures 10, 11. The maximal mean residence time of fluid particles initially in an eddy increases as $`\beta `$ increases from $`0`$ to $`0.432`$ (or as latitude decreases accordingly), then decreases as $`\beta `$ increases from $`0.432`$ to $`\frac{2}{3}`$ (or as latitude decreases accordingly); see Figure 12. However, the maximal mean residence time of fluid particles initially in a unit jet core always increases as $`\beta `$ increases (or as latitude decreases accordingly); see Figure 13. ## 5 Discussions The present work on fluid particle motion in random flows takes into account fluid particle diffusive as well as advective motion. There has been recent work on fluid particle advective motion (molecular diffusion ignored) in time-periodic, quasi-periodic and aperiodic flows (periodic $`\to `$ quasi-periodic $`\to `$ aperiodic $`\to `$ random). Our work on random particle motion does not require that the random part is small. However, it does require that the flow can be decomposed into steady or unsteady deterministic (drift) and random (diffusion) parts; otherwise the Fokker-Planck formalism does not hold. Acknowledgement. J. Duan would like to acknowledge the hospitality of the Center for Nonlinear Studies, Los Alamos National Laboratory, the Institute for Mathematics and its Applications (IMA), Minnesota, and the Woods Hole Oceanographic Institution, Massachusetts. He is also grateful for the discussions on this work with Diego del-Castillo-Negrete (Scripps Institution of Oceanography), Greg Holloway (Institute of Ocean Sciences, Canada), Julian Hunt (Cambridge University), Peter Kiessler (Clemson University), Pat Miller (Stevens Institute of Technology), Larry Pratt (Woods Hole Oceanographic Institution), and Roger Samelson (Oregon State University). This work was supported by the National Science Foundation Grant DMS-9704345.
# Science with the Constellation-X Observatory ## Constellation-X The prime objective of the Constellation-X mission is high resolution X-ray spectroscopy. It will cover the $`0.25`$–$`40`$ keV X-ray bandpass by utilizing two types of high throughput telescope systems to simultaneously cover the low (0.25 to 10 keV) and high energy (6 to 40 keV) bands. The low-energy Spectroscopy X-ray Telescope (SXT) is optimized to maintain a spectral resolving power of at least 300 across the 0.25 to 10 keV band pass (E/$`\mathrm{\Delta }`$E $`\sim `$ 3000 at 6 keV) and has a minimum telescope angular resolution of $`15^{\prime \prime }`$ HPD. The diameter of the field of view is $`2.5^{\prime }`$ below 10 keV. The high energy system (HXT) with lower spectral resolving power ($`\mathrm{\Delta }`$E $`\sim `$ 1 keV) overlaps the SXT and primarily is used to measure the relatively line-less continuum from 10 to 40 keV. The diameter of the field of view is $`8^{\prime }`$ for the HXT. The large collecting area is achieved with a design utilizing several mirror modules, each with its own spectrometer/detector system. The spectral resolving power of the SXT and the effective area of SXT and HXT are shown in Figure 1. The SXT uses two spectrometer systems that operate simultaneously to achieve the desired energy resolution: 1) a 2 eV resolution quantum microcalorimeter array, and 2) a set of reflection gratings for energies $`<2`$ keV. The gratings deflect part of the telescope beam away from the calorimeter array in a design similar to XMM except that the direct beam falls on a quantum calorimeter instead of on a CCD. The two spectrometers are complementary, with the gratings optimal for high resolution spectroscopy at low energies and the calorimeter at high energies. The gratings also provide coverage in the 0.3-0.5 keV band where the calorimeter thermal and light-blocking filters cause a loss of response. This low-energy capability is particularly important for high-redshift objects, for which line-rich regions will be moved into this low energy band. The HXT uses a multilayer coating on individual mirror shells to provide the first focusing optics system to operate in the 6-40 keV band. Compared to other non-focusing methods such as those used for RXTE, Constellation-X has twice the area, 640 times the energy resolution, 240 times the spatial resolution, and above 10 keV, 100 times the sensitivity. AXAF and XMM, designated as the workhorses of X-ray astronomy in the next decade, will detect photons with energies up to 10 keV. The technology development program is now underway and is targeting a first launch in 2007-2008, around the time that AXAF will be reaching the end of its projected lifetime. An essential feature of the Constellation-X concept involves minimizing cost and risk by building several identical, modest satellites to achieve a large area. The current baseline is 6 satellites, although other multiple satellite configurations are also under consideration, with the final choice to be made based on a balance of overall cost and risk. The mission will be placed into a high earth or L2 orbit to facilitate high observing efficiency, provide an environment optimal for cryogenic cooling, and simplify the spacecraft design. ## Science Goals Constellation-X is a key element in NASA’s Structure and Evolution of the Universe (SEU) theme aimed at understanding the extremes of gravity and the evolution of the Universe. We highlight here a few key science areas. How can we use observations of black holes to test General Relativity?
X-ray observations directly probe physical conditions close to the central engine of black holes where the distortions of time and space predicted by general relativity are most pronounced. Constellation-X will use the spectral features of these objects (e.g. the broad iron K$`\alpha `$ line discovered by ASCA tan95) to map out the geometry of the inner emission regions and determine the extent to which we can test general relativity. What is the total energy output of the Universe? Models of the cosmic X-ray background predict that the emission at hard X-rays is due to many absorbed AGN mad94, with their central engines primarily visible via hard X-rays (and perhaps infrared). If most of the accretion in the Universe is highly obscured, then the emitted power per galaxy based on currently available optical, UV, or soft X-ray quasar luminosity functions may be substantially underestimated. By using hard X-ray spectra to advance our knowledge of the total luminosity of AGN, Constellation-X will bring us closer to knowing the total energy output of the Universe. What roles do supermassive black holes play in galaxy evolution? Constellation-X measurements of black hole mass and spin for the high z quasar sample will allow understanding of the relative evolution rates of black holes and their host galaxies, and will shed light on when massive black holes formed compared to the galaxy formation epoch. How does gas flow in accretion disks and how do cosmic jets form? Accretion disks play a fundamental role in many astrophysical settings, ranging from the formation of planetary systems to accretion onto supermassive black holes in AGN. There are, however, many controversies about the nature of viscosity which drives the accretion process, about the stability of the disk at various accretion rates, and about the relevance of advection and mass outflows, and the mechanisms by which jets are formed. Constellation-X will probe the physics of accretion disks to a level not currently possible, by resolving line features from the accretion disk photosphere and by measuring the continuum shape over a broad energy band. When were clusters of galaxies formed and how do they evolve? To date, cluster abundances have been measured in the X-ray band out to a redshift of about 0.4 but no discernible evolution with z has been seen. Constellation-X spectra of clusters over a range of redshifts will provide crucial information about the presence of primordial gas, including any input from possible pre-galactic generations of stars as well as the contribution from stellar nucleosynthesis as a function of time. The high sensitivity of Constellation-X is essential for extending such studies to the “poorer cousins” of clusters, groups of galaxies. Moreover, by mapping the velocity distribution of hot cluster gas via Doppler shifts in the emission lines, Constellation-X will allow us to examine the effects of collisions and mergers between member galaxies and between separate subclusters and clusters. Where are the “missing baryons” in the local Universe? Recent observations of the Lyman-$`\alpha `$ forest show that at large redshifts most of the predicted baryon content of the Universe is in the IGM, while at low redshifts, the baryon content of stars, neutral hydrogen, and X-ray emitting cluster gas is roughly one order of magnitude smaller than that expected from nucleosynthesis arguments. Therefore, a large fraction of the baryonic content of the local Universe is considered “missing”.
Numerical simulations cen98 predict that the missing matter may reside in the IGM with a temperature range of $`10^5`$–$`10^7`$ K. Such gas in the IGM can be detected with the high sensitivity, high resolution instruments aboard Constellation-X through the absorption lines of metals against the X-ray spectra of background quasars (e.g. OVII and OVIII). How are matter and energy exchanged between stars and the Interstellar Medium and how is the Intergalactic Medium enriched? The chemical enrichment of the Universe is dominated by star formation and the release of the processed material into the ISM via stellar winds and supernova explosions. Moreover, supernova explosions and enhanced star-forming activity can drive hot gas out of the galaxy and enrich the ICM/IGM on megaparsec scales. Detailed, spatially-resolved X-ray spectra reveal the stellar/supernova abundances, the composition of the surrounding ISM, and the interaction of the expanding blast wave with the surrounding material. High throughput instruments such as those aboard Constellation-X are needed to measure the K-lines of less abundant elements such as F, Na, Al, P, Cl, K, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, and Zn. The increased sensitivity of Constellation-X will allow us to extend these studies to external galaxies, beyond the Magellanic Clouds to M31 and M33, for example. This will allow us to further our understanding of the history of star formation and exchange of matter between the ISM and stars.
# Rotational modulation of X-ray flares on late-type stars: T Tauri Stars and Algol ## 1 Introduction After the first stellar X-ray flares were discovered less than 25 years ago on dMe stars (Heise et al. (1975)) it took almost another decade until the Einstein observatory (EO) detected similar events on young T Tauri Stars (TTS) (Montmerle et al. (1983)). Nowadays, X-ray flares are known to occur on stars all over the H-R diagram (see Pettersen (1989) for a review). The timescales and energetics involved in flare events on different types of stars vary strongly, consistent with the observation that the level of X-ray activity decays with age. TTS are late-type pre-main sequence stars with typical ages of $`10^5`$–$`10^7`$ yrs and rank among the most active young stars: energy outputs of up to $`10^4`$ times the maximum X-ray emission observed from solar flares have been reported from TTS outbursts. Some of the largest X-ray flares ever observed were discovered by ROSAT on the TTS LH$`\alpha `$ 92 and P1724 (Preibisch et al. (1993), Preibisch et al. (1995)). The energy released in these events ($`>10^{36}\mathrm{ergs}`$) exceeds that of typical TTS flares by two orders of magnitude. Before the detection of these giant events, the record of X-ray luminosity was held for more than 10 years by the TTS ROX-20, where $`L_\mathrm{x}\sim 10^{32}\mathrm{ergs}/\mathrm{s}`$ were measured during an EO observation in February 1981 (Montmerle et al. (1983)). A superflare from the optically invisible infrared Class I protostar YLW 15 in $`\rho `$ Oph was presented by Grosso et al. (1997), with the intrinsic X-ray luminosity over the whole energy range being $`10^{34}`$ to $`10^{36}`$ erg/s, depending on the foreground absorption which is known to lie somewhere between 20 and 40 mag. Although no model has been found yet that explains all aspects of flaring activity, the basic picture of all flare scenarios is – in analogy to the sun – that of dynamo driven magnetic field loops that confine a hot, optically thin, X-ray emitting plasma (see Haisch et al. (1991) for a summary of flare phenomena). Quasi-static cooling of such coronal loops has been described by van den Oord & Mewe (1989). An analysis of single flare events is of interest to determine physical parameters of the flaring region such as time scales, energies, temperature and plasma density, and ultimately decide whether coronal X-ray emission of TTS is scaled-up solar activity, or whether interaction between the star and either a circumstellar disk or a close binary companion is partly responsible for the X-ray emission. In this paper we select a sample of four X-ray observations (three of TTS and one of Algol) which are in conflict with the standard modeling of the lightcurve as either a flare characterised by a quick rise and subsequent exponential decay or as simple sine-like rotational modulation of the quiescent emission. In the latter case X-ray emission would be larger when the more X-ray luminous area is on the front side of the star (directed towards the observer). This kind of rotationally modulated emission was observed in the TTS SR 12 in $`\rho `$ Oph by Damiani et al. (1994). We propose that the untypical shape of the X-ray lightcurves we present is due to a flare event modulated by the rotation of the star. Skinner et al. (1997) suggested rotational occultation of an X-ray flare to explain the broad maximum and slow decay of a flare on V773 Tau observed by ASCA.
While they model their data by fitting a sine function to the lightcurve without allowing for an exponential decay phase (similar to Damiani et al. (1994)), we start out from a decaying flare and modify it by a time-varying volume factor. By this approach we take into consideration that the flare might be occulted by the star during part of the observation and we are able to estimate the decay timescale of the lightcurve $`\tau `$ and the size of the emitting loop. Such a model was first suggested by Casanova (1994), and Montmerle (1997) classified the corresponding flare event as ‘anomalous’. A rotationally modulated flare was also mentioned as possible interpretation of a flare-like event in P1724 (Neuhäuser et al. (1998); shown in our Fig. 6), and those authors advertised the more detailed and quantitative treatment that we present in this paper. The outline of our presentation is as follows: In Sect. 2 we introduce the X-ray observations that we chose in view of the untypical broad maximum of their lightcurves. A model that describes modulations of X-ray flares by the rotation of the star is presented in Sect. 3. In Sect. 4 we explain the structure of the lightcurves from the observations introduced in Sect. 2 by applying our model, and we summarize the results in Sect. 5. ## 2 The observations A summary of the observations analysed in this paper is given in Table 1. The observations were obtained with different instruments onboard the X-ray satellites ROSAT, Ginga, and ASCA. For information about the ROSAT instruments, the Position Sensitive Proportional Counter (PSPC) and the High Resolution Imager (HRI), we refer to Trümper (1982). The Ginga Large Area Counter (LAC) has been described by Turner et al. (1989), and a description of ASCA and its instrumentation can be found in Tanaka et al. (1994). The classical TTS (CTTS) SR 13 was detected in X-rays by Montmerle et al. (1983). It is located in the $`\rho \mathrm{Oph}`$ cloud at position $`\alpha _{2000}=16^\mathrm{h}28^\mathrm{m}45^\mathrm{s}.3`$, $`\delta _{2000}=-24^{\circ }28^{\prime }17.0^{\prime \prime }`$. In a speckle imaging survey SR 13 was discovered to be a binary system (Ghez et al. (1993)) with $`0.4^{\prime \prime }`$ separation, which remains unresolved in the PSPC observation of 1991 September 07/08 we present here. The period of SR 13 is unknown. However, as shown below, a period of $`3`$–$`6`$ days is consistent with the rotating X-ray flare model. P1724 is a weak-line TTS (WTTS) located 15 arcmin north of the Trapezium cluster in Orion ($`\alpha _{2000}=5^\mathrm{h}35^\mathrm{m}4^\mathrm{s}.21`$, $`\delta _{2000}=-5^{\circ }8^{\prime }13^{\prime \prime }.2`$). Neuhäuser et al. (1998) confirm the rotational period of 5.7 d first discovered by Cutispoto et al. (1996) by applying two independent numerical period search methods to the V-band lightcurve. In addition, they report systematic variations of the X-ray count rate of P1724 during an observation with the ROSAT HRI in October 1991. However, they cannot find any rotational modulation in the X-ray data. Neuhäuser et al. (1998) also find no indications for either a circumstellar disk or a close binary companion. The WTTS V773 Tau is a double-lined spectroscopic binary (Welty (1995)) which is located in the Barnard 209 dark cloud at optical position ($`\alpha _{1950}=4^\mathrm{h}11^\mathrm{m}07.29^\mathrm{s}`$, $`\delta _{1950}=28^{\circ }04^{\prime }41.2^{\prime \prime }`$). Upper limits for the rotation period derived by Welty (1995) are $`2.96\mathrm{d}`$ and $`2.89\mathrm{d}`$ for the K2 and K5 components respectively.
These estimates are somewhat lower than the values previously reported by Rydgren & Vrba (1983). In the ASCA observation obtained on 1995 September 16/17 the V773 Tau binary system is not resolved from the classical TTS (CTTS) FM Tau, which lies at an offset of $`38^{\prime \prime }`$. For a more detailed discussion of this ASCA observation we refer to Skinner et al. (1997). Algol is a triple system, with an inner close binary of period 2.87 d comprising a B8 V primary (Algol A) and an evolved K2 IV secondary (Algol B). In January 1989 Ginga observed a large flare from the Algol system (presumably Algol B; Stern et al. (1992)). The shape of the Algol lightcurve resembles that of the TTS flares discussed before. Therefore, we decided to include this observation in our sample, although the Algol system represents a different class of flare stars. We use the Extended Scientific Software Analysis System (EXSAS, Zimmermann et al. (1995)) to analyse the two ROSAT observations, which were obtained from the ROSAT Public Data Archive. To take account of possible time variations in the background, the background count rate was computed for each satellite orbit and subtracted accordingly from the measured lightcurve. Ginga data were kindly supplied to us in computer readable format by Bob Stern, while Steve Skinner provided us with the ASCA lightcurve of V773 Tau. ## 3 Rotational effects on flare lightcurves and spectra The X-ray lightcurve during a flare event is commonly described by a steep, linear rise followed by an exponential decay. The e-folding time $`\tau `$ of the decay varies greatly, from several minutes to hours, depending on the nature of the flaring star. Disagreement prevails on the question of whether the apparent quiescent emission might be attributed to continuous, unresolved short timescale activity. The detection of a high temperature spectral component in the quiescent spectrum might hint at such low-level flaring (Skinner et al. (1997)). Several flares have been observed that do not match the typical appearance: instead of displaying a sharp peak, the lightcurves of this type of event are characterized by smooth variations around maximum emission that sometimes go along with a slower rise as compared to standard flare events. In these cases the shape of the lightcurve can be reproduced by taking account of the rotation of the star. Flares that erupt on the backside of the star become visible only gradually as the star rotates and drags the plasma loop around. The visible flare volume is thus a function of time which modulates the exponential decay. The scenario we have in mind, and that we will refer to as the ‘rotating flare model’ henceforth, is sketched in Fig. 1. For simplicity the emitting plasma loop is approximated by a sphere anchored on the star’s surface. The fraction of the loop volume which is visible to the observer is given by $$V(r,t)=\frac{1}{\frac{4}{3}\pi r^3}\int _{R-(R+r)\mathrm{cos}\alpha (t)}^r\pi (r^2-x^2)dx$$ (1) where $`R`$ is the radius of the star and $`r`$ the radius of the spherical plasma loop. The time dependency of $`V`$ is hidden in $`\alpha `$, the angle between the current position of the flaring volume and the position where a flare just begins to become occulted by the star. Note that Eq. (1) does not hold for rotational phases during which the flaring volume is either completely behind ($`\alpha \approx \frac{\pi }{2}`$) or completely in front of the star ($`\alpha \in [\pi ,2\pi ]`$). The time dependency of $`\alpha `$ in Eq.
(1) depends on whether the loop is disappearing or reappearing and is given by $$\alpha (t)=\{\begin{array}{cc}\frac{2\pi t}{P_{\mathrm{rot}}}& \text{for }0\le \alpha (t)<\varphi _{\mathrm{crit}}(f)\\ \pi -\frac{2\pi t}{P_{\mathrm{rot}}}& \text{for }\pi -\varphi _{\mathrm{crit}}(f)<\alpha (t)\le \pi \end{array}$$ (2) where $`\varphi _{\mathrm{crit}}`$ is the critical phase at which the plasma volume has just disappeared. Eq. (1) is not valid any more until the loop reaches phase $`\pi -\varphi _{\mathrm{crit}}`$ and begins to move into the line of sight again. $`\varphi _{\mathrm{crit}}`$ is a function of the relative size of the radius of the flaring volume and the radius of the star, $`f=\frac{r}{R}`$. The visible fraction of the plasma volume as a function of time is plotted for different values of the radius ratio $`f`$ in Fig. 2. Our model is based on several simplifying assumptions concerning the flare geometry. First, we imply that we look directly onto the rotational plane, i.e. $`i\approx 90^{\circ }`$, and that the flare takes place at low latitudes. Flares that erupt in polar regions, in contrast, in the configuration of Fig. 1 would remain partially visible during the whole rotation period. Furthermore, Eq. (1) does not take account of the curvature of the star. We content ourselves with these approximations because, given the present quality of the data, further sophistication of the model seems to be unnecessary. Making use of the configuration described above, for a flare which is observed while the flaring region turns up from the backside of the star, the X-ray lightcurve can be modeled by $$I_{\mathrm{cps}}=I_\mathrm{q}+I_0\mathrm{exp}(-t/\tau )V(r,t)$$ (3) where $`I_\mathrm{q}`$ is the quiescent X-ray count rate of the star, $`I_0`$ the strength of the outburst, $`\tau `$ the decay timescale of the count rate and $`V(r,t)`$ the visible fraction of the volume of the plasma loop given by Eq. (1) for values of $`\alpha `$ within the allowed range, and by 0 or 1 for angles $`\alpha `$ outside the intervals of Eq. (2). The hump-like shape of the lightcurves we will discuss in the next section can be reproduced if the visible volume $`V`$ increases during the first observed part of the flare, i.e. $`\alpha \le \pi -\varphi _{\mathrm{crit}}`$. Three critical moments determine the rotating flare event: the time of outburst, the time when the flare region passes phase $`\pi -\varphi _{\mathrm{crit}}`$ and begins to move into the line of sight, and the time at which the observation started. In the next paragraphs the relation between these times will be examined. First, an offset between flare outburst and the time when it becomes visible to an observer (at $`\pi -\varphi _{\mathrm{crit}}`$) might be present, when the flare takes place on the occulted side of the star. In our model such a time offset $`\mathrm{\Delta }t`$ contributes only to the normalization of the exponential $`I_0=I_{\mathrm{intr}}\mathrm{exp}(-\mathrm{\Delta }t/\tau )`$ and cannot be separated from the intrinsic brightness $`I_{\mathrm{intr}}`$ of the outburst. The upper limit for $`\mathrm{\Delta }t`$ is given by $$\mathrm{\Delta }t_{\mathrm{max}}=(0.5-2\varphi _{\mathrm{crit}})P_{\mathrm{rot}}$$ (4) since for larger time offsets the flare would have been observed also at $`\alpha <\varphi _{\mathrm{crit}}`$, that is before its occultation. Given the rotational periods of several days, $`\mathrm{\Delta }t_{\mathrm{max}}`$ exceeds the typical decay timescale for TTS flares (of a few hours).
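The geometry of Eqs. (1)-(3) is easy to put into a few lines of code. The sketch below evaluates the visible-volume fraction of Eq. (1) and a model lightcurve in which the emergence of the loop is treated as the mirror image of the occultation profile (one reading of Eq. (2)); the parameter values at the bottom (quiescent rate, flare amplitude, decay time, radius ratio, emergence time) are purely illustrative assumptions rather than fitted values, with only the 2.87 d period taken from the text.

```python
import numpy as np

def visible_fraction(alpha, f):
    """Eq. (1): visible fraction of the spherical flare volume for a loop-to-star
    radius ratio f = r/R; alpha = 0 means fully visible at the limb, and the
    volume is completely hidden once alpha reaches phi_crit = arccos((1-f)/(1+f))."""
    r, R = f, 1.0                                   # work in units of the stellar radius
    x0 = np.clip(R - (R + r) * np.cos(alpha), -r, r)
    integral = np.pi * (2.0 * r**3 / 3.0 - r**2 * x0 + x0**3 / 3.0)
    return integral / (4.0 * np.pi * r**3 / 3.0)

def model_lightcurve(t, I_q, I_0, tau, f, P_rot, t_emerge=0.0):
    """Eq. (3): quiescent rate plus an exponentially decaying flare whose visible
    volume grows as the loop rotates back into view.  The emergence starting at
    t_emerge is modelled as the time-reversed occultation profile of Eq. (1)."""
    phi_crit = np.arccos((1.0 - f) / (1.0 + f))
    phase = 2.0 * np.pi * (t - t_emerge) / P_rot    # rotation angle since emergence
    alpha = np.clip(phi_crit - phase, 0.0, phi_crit)
    V = np.where(t < t_emerge, 0.0, visible_fraction(alpha, f))
    return I_q + I_0 * np.exp(-t / tau) * V

# Illustrative parameters only (Algol's 2.87 d period converted to hours)
t = np.linspace(0.0, 30.0, 301)                     # time since flare onset, in hours
rate = model_lightcurve(t, I_q=10.0, I_0=60.0, tau=7.0, f=0.6,
                        P_rot=2.87 * 24.0, t_emerge=1.0)
print(f"peak model rate {rate.max():.1f} cts/s at t = {t[np.argmax(rate)]:.1f} h")
```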
However, from an observational point of view it is impossible to exclude that the flares occurred already before they rotated away, because data extending over several hours before the reappearance of the flare are not available for the lightcurves analysed here, except in the case of Algol. Indeed, at first glance the combination of the two phases of enhanced count rate in the Algol observation (see Fig. 3 a) looks similar to what is expected to be seen from one very long flare that disappeared behind the star shortly after outburst and reappeared half a rotational cycle later still displaying a strong count rate enhancement. Figure 3 b gives an example of a theoretical lightcurve for a flare which is occulted right after its outburst and whose duration is more than half the rotation period. However, our attempt to model the complete Algol lightcurve from Fig. 3 a by such a single temporarily occulted flare was not successful because the model restrictions concerning the relative strength of the pre- and post-occultation part of the flare are not met by the Algol lightcurve. Thus we can rule out offsets larger than $`\mathrm{\Delta }t_{\mathrm{max}}`$ for the Algol observation discussed in this paper, and the short rise in count rate observed before the large flare must be due to an independent event. Second, the start of the observation of a flare event, which is characterised by a beginning enhancement of the observed count rate, can differ from the time at which the outer edge of the plasma loop emerges from the back of the star due to gaps in the data stream. So, strictly speaking, another offset $`\delta t`$ has to be included when fitting the ‘rotating flare model’ to the data to take account of a possible delay of the observation with respect to flare phase $`\pi -\varphi _{\mathrm{crit}}`$. Such an additional parameter that allows us to determine the rotational phase of the flare region at the beginning of the observed rise is needed to obtain acceptable fits for the flares on Algol and V773 Tau. For the ROSAT observations (of SR 13 and P1724), however, an offset $`\delta t`$ does not improve the fit due to the low statistics of the data. We note here that the observed flare rise is only apparent according to our model: The star is assumed to have flared (and thus exhibited its maximum emission) well before the observed maximum, and the count rate is low at that time only due to the fact that the flaring volume has not yet become visible. The enhanced X-ray emission during flare events is produced by a hot plasma which has been heated to temperatures of $`10^6\mathrm{K}`$ and above. Optically thin plasma models show, when applied to spectra representing different stages of the flare, that after the outburst the temperature drops exponentially to the quiescent level. The temperature observed for a rotationally modulated flare should thus be highest during the phase where the flare emerges from the backside of the star when the observed lightcurve has not yet reached its maximum. The emission measure, on the other hand, being a volume-related parameter, is expected to show a time evolution similar to the lightcurve. ## 4 Application of the model We fit the model of Eq. (3) to those parts of the lightcurves from the observations introduced in Sect. 2 that are identified by visual inspection with the outburst due to their enhanced count rates. Except for V773 Tau (see Fig. 7) none of these lightcurves (Fig. 3 (c), Fig. 5, and Fig.
6) can be explained by simple sine-like variations due to rotational modulation of the quiescent emission. There is always an additional feature present, namely a flare. The lightcurves discussed here are characterised by a concave shape of the (e-folding) decay phase typical for flares (whether or not rotationally modulated), while a simple sine-like rotational modulation of quiescent emission always produces a convex shape in the decay part. In all cases we examined, the quiescent count rate is held fixed on its average pre-flare value. Thus, three free parameters have to be adjusted to the data: the strength of the flare, $`I_0`$ in cts/s, the decay timescale of the lightcurve, $`\tau `$, and the radius $`r`$ of the flaring volume relative to the stellar radius, $`R`$. An additional freedom allowing for an offset $`\delta t`$ between rotational phase $`\pi \varphi _{\mathrm{crit}}`$ of the flare and the apparent outburst of the flare, which is observed as a rise in count rate, is used for the modeling of the Ginga (Algol) and ASCA (V773 Tau) lightcurves (see the explanation in the previous section). The rotational periods $`P_{\mathrm{rot}}`$ of our sample stars were known from optical photometry, except for the case of SR 13. In principle, $`P_{\mathrm{rot}}`$ could be included as a further free parameter in the fit. But with this additional freedom the fit does not result in a unique solution, as we will show in the example of SR 13, and thus $`P_{\mathrm{rot}}`$ may not be uniquely determined from our model. For Algol we assume synchronous rotation. Our best fit results will be discussed in detail in the following subsections. The best fit parameters for all flares are listed in Table 2 together with the rotation periods and measured quiescent count rates. Note, however, that the model depends to some degree on the initial parameters, and the parameters are not well determined due to correlations, such that similar solutions are obtained for different combinations of parameter values. For the flares on Algol and V773 Tau we computed 90 % confidence levels for the best fit parameters according to the method described by Lampton et al. (1976). The low statistics in the data of the lightcurves of SR 13 and P 1724 do not allow to apply this method. We, therefore, do not give uncertainties for the best fit parameters of these events. ### 4.1 Algol A two day long continuous Ginga observation of Algol in January 1989 (first presented by Stern et al. (1990), 1992) includes a large flare event. Secondary eclipse begins during the decay of that flare, but it seems to affect the count rate only marginally. Preceding the large flare, primary eclipse and a small flare are observed (see discussion in Sect. 3). We therefore base our estimate for the quiescent emission on the time between the two flare events, i.e. immediately before the rise phase of the large outburst which marks the onset of the time interval to which we apply the ‘rotating-flare model’. From fitting Eq. (3) to the data after $`\mathrm{JD}\mathrm{\hspace{0.17em}2447540.65}`$ in Fig. 3 (c) we obtain a best fit $`\chi _{\mathrm{red}}^2`$ of 5.34 for 108 degrees of freedom. The fit can be significantly improved when the critical phase $`\pi \varphi _{\mathrm{crit}}`$ is allowed to vary around the start of observation as explained in Sect. 3 ($`\chi _{\mathrm{red}}^2=3.05`$ for $`107`$ dofs), and all stages of the flare are well represented by the model. 
Although this value of $`\chi _{\mathrm{red}}^2`$ is still far from representing an excellent fit, the ability of the model to reproduce the overall shape of the X-ray lightcurve is convincing. A detailed spectral analysis of the flare event on Algol was undertaken by Stern et al. (1992). The emission measure $`EM`$ they obtained from a thermal bremsstrahlung spectrum + Fe line emission for 11 time-sliced spectra covering all phases of the flare is displayed in Fig. 4. According to the best fit of our ‘rotating flare model’ to the lightcurve, the flare volume has become almost completely visible ($`V>0.98`$) around $`\mathrm{JD}\mathrm{\hspace{0.17em}2447541}`$, i.e. about 10 hours after the rise in count rate was observed to set in. The exponential decay of the emission measure for the last three values of Fig. 4 can thus be extrapolated to the previous part of the observation to find the values for the emission measure intrinsic to this flare event. The observed emission measure during the rotationally dominated beginning of the flare is then fairly well reproduced by correcting the extrapolated values for the time dependence of the volume ($`EM\propto n_\mathrm{e}^2V`$), where we neglect possible variations of the plasma density $`n_\mathrm{e}`$. Thus, in contrast to the observation, the actual emission measure of the flare event is highest at the onset of the flare at $`\mathrm{JD}2447540.6`$ and it decays simultaneously with the count rate ($`\tau _{\mathrm{EM}}=6.45\pm 0.67\mathrm{h}`$) due to a decrease of $`n_\mathrm{e}`$ or shrinking loop volume. The good agreement between the emission measure observed by Stern et al. (1992) and the values expected from our model (see Fig. 4) provides convincing evidence that the application of the ‘rotating-flare model’ is justified for this flare. The development of the temperature during the flare (see Stern et al. (1992)) does not show the characteristic hump shape, but is close to a pure exponential decay as expected for volume-unrelated parameters. The values of the flare parameters ($`\tau `$, $`r`$) resulting from the fit of our model to the lightcurve are similar to those derived from normal (i.e. neither occulted nor rotationally modulated) Algol flares observed by various instruments. Ottmann et al. (1996) summarize the characteristic parameters of three Algol flares (see their Table 5): the decay timescale seems to vary by one order of magnitude between $`3`$ and $`36`$ hours, while the loop length found from standard loop modeling extends from $`0.5`$ to $`2`$ stellar radii. We conclude that modelling the January 1989 X-ray flare on Algol in terms of rotational modulation yields flare properties which are perfectly consistent with those of other X-ray flares. ### 4.2 SR 13 Casanova (1994) discusses the similarity between a flare of SR 13 observed by the ROSAT PSPC and the Algol flare analysed in the previous subsection. Besides the absolute values of the count rate, which are higher by a factor of 500 for Algol (note that the observations were performed by different instruments and, therefore, the differences in count rate are no direct measure of the differences in flux), the shape of the SR 13 flare is very similar to that of the flare on Algol. The rotational period of the CTTS SR 13 is unknown at present. We determined the quiescent emission of SR 13 from the pre-flare data of the first satellite orbit. Our attempt to find the rotational period from the modeling of the flare according to Eq.
(3) with $`P_{\mathrm{rot}}`$ a free parameter failed, since the uncertainties in the data do not allow us to distinguish between different fit solutions. In Fig. 5 (a) we overlay the data points with two solutions of the ‘rotating-flare model’: one was found assuming a period of 3 d, the other corresponds to twice that period. A detailed spectral analysis of this specific flare event similar to the one carried out for the Algol flare (see Stern et al. (1992) and Sect. 4.1) is not practicable due to the low number of counts. To underline the difficulty in evaluating the spectral information for the flare on SR 13, we briefly discuss the results from our attempts to fit a Raymond-Smith model (Raymond & Smith (1977)) to the spectra during four stages of the flare that were defined in the following way: phase 1 is given by the quiescent stage, phase 2 is the observed, apparent flare rise, and phases 3 and 4 correspond to the observed decay. The three flare time intervals are marked in Fig. 5 (a). The quiescent spectrum was computed from an earlier observation obtained in 1991 March 05-10 by the ROSAT PSPC due to the scarcity of non-flare data in the September observation. A two-temperature Raymond-Smith model was needed to obtain acceptable fits with $`\chi _{\mathrm{red}}^2<1.4`$ for each of the four phases, where we held the temperature of the softer component fixed at $`kT=0.25\mathrm{keV}`$. The graphs in Fig. 5 (c) and (d) display the best fit values for the temperature and emission measure of the hotter component. The large uncertainties of the best fit values shown in Table 2 prohibit a spectral study with better time resolution, but having only four time bins to define the spectral evolution, the decay of the emission measure after the flaring volume became visible could not be pinned down, and thus a check of the ‘rotating flare model’ by modeling of the time development of the emission measure is not possible for this flare on SR 13. A slight indication of cooling is present in the evolution of $`kT`$ during the flare, suggesting that the actual outburst might in fact have occurred as early as during the second phase. In cases of insufficient data quality hardness ratios may be used to give a clue to spectral properties. Neuhäuser et al. (1995) showed that the ROSAT hardness ratio HR2 (see Neuhäuser et al. (1995) for a definition) is related to the temperature of the plasma (see their Fig. 4). We computed HR2 for the four different time intervals defined above. The time evolution of the hardness ratio HR2 is displayed in Fig. 5 (b). The decreasing HR2 during the last three intervals supports the decline in temperature measured in the spectra and presents further evidence for cooling. To conclude, the results on the SR 13 lightcurve, while having an admittedly reduced statistical significance, are fully consistent with an interpretation in terms of flare cooling combined with rotational modulation. ### 4.3 P1724 The ROSAT HRI observation of P1724 comprises 13 satellite orbits (see Fig. 6). Similar to the flare on SR 13, constant count rate is observed only during the very first orbit. We therefore base our value for the quiescent emission, $`0.04\mathrm{cps}`$, on this time interval and find that it is consistent with most of the observations of P1724 presented by Neuhäuser et al. (1998). However, in March 1991 the count rate was higher by a factor of 4, possibly indicating long-term variations in the quiescent emission.
The lightcurve during the second orbit resembles a small flare event and is omitted from our analysis. The maximum of the large flare that dominates this observation extends over almost 4 hours. During the decline of the count rate irregular variations are observed that might be due to short timescale activity superposed on the large flare event. We ignore these fluctuations and model the lightcurve beginning after the second data gap by Eq. (3). The total number of source counts measured in this observation is smaller than 1000, and thus far too low for a time-sliced hardness ratio analysis. In view of the similarity between the X-ray lightcurve of this flare and the previously discussed flares, and the good description of the data by our best fit, we trust that the ‘rotating flare model’ applies also to this observation. ### 4.4 V773 Tau An intense X-ray flare on V773 Tau has been reported by Skinner et al. (1997) and interpreted as a sinusoidal variation whose period is approximately equal to the known optical period of V773 Tau, i.e. 71.2 h. The ASCA lightcurve of this event (see Fig. 7) is characterized by constant count rate at maximum emission which lasts for more than 2 h, making the event a candidate for a rotationally modulated flare. No data are available prior to the peak emission, but observations resumed about 10 h after the maximum and display a steady decrease in count rate. Since the pre-flare stage and the rise of the flare are completely missing in the data, the flare volume must have emerged from the backside of the star well before the start of the observation, and an additional time offset parameter $`\delta t`$ has to be included in the fit (analogous to the modeling of the flare on Algol), to determine the time that elapsed between phase $`\pi -\varphi _{\mathrm{crit}}`$ (= emergence of the flare volume) and the first measurement. Since the flare covers the complete observation, a value for the quiescent count rate, $`I_\mathrm{q}=0.10\pm 0.02`$ cps, was adopted from a later ASCA SIS0 observation in February 1996, also presented by Skinner et al. (1997). Despite the fact that the broad maximum of the September 1995 lightcurve can be explained by the loop rotating into the line of sight, no satisfactory fit could be obtained for the flare on V773 Tau by the model of Eq. (3) even after a time offset $`\delta t`$ was added ($`\chi _{\mathrm{red}}^2=2.51`$ for $`149`$ degrees of freedom): The decay of the observed lightcurve seems to be faster than our model predictions (see Fig. 7, dotted curve). We note that the data are slightly bent towards the time axis around the 6th data interval after the start of the observation. This behavior, producing an overall ‘convex’ shape of the X-ray lightcurve, could be due to an additional feature on the surface of the star. We suggest that a localized region with enhanced X-ray emission can be responsible for this break if this region disappears due to the star’s rotation at $`\mathrm{JD}\mathrm{\hspace{0.17em}2449977.85}`$. For comparison we show a fit of our ‘rotating flare model’ where such a feature has been included (solid line in Fig. 7, $`\chi _{\mathrm{red}}^2=1.47`$ for $`149`$ degrees of freedom). Since we are interested only in a qualitative description of the shape of the lightcurve, we assumed that this region accounts for 0.2 cps during its visibility and begins to disappear gradually at JD 2449977.85.
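To make the idea concrete, the sketch below adds to a slowly decaying flare a second, constant component of 0.2 cps that is switched off smoothly around a chosen time. All numbers (amplitudes, decay time, ramp and switch-off widths, switch-off time) are illustrative assumptions rather than fitted values, and the smooth step and linear ramp are generic stand-ins for the rotational occultation profiles of Sect. 3, not the actual model used for Fig. 7.

```python
import numpy as np

def flare(t, I_q, I_0, tau, t_rise):
    """Exponentially decaying flare with a linear ramp-up to full visibility,
    a simplified stand-in for the rotationally modulated flare of Eq. (3)."""
    visible = np.clip(t / t_rise, 0.0, 1.0)
    return I_q + I_0 * np.exp(-t / tau) * visible

def spot(t, amp, t_off, width):
    """Constant extra emission that disappears smoothly around t_off (hours),
    a generic smooth step standing in for the rotational occultation of the spot."""
    return amp / (1.0 + np.exp((t - t_off) / width))

t = np.linspace(0.0, 30.0, 301)          # hours since the first measurement (assumed)
total = (flare(t, I_q=0.10, I_0=0.8, tau=20.0, t_rise=4.0)
         + spot(t, amp=0.2, t_off=12.0, width=1.0))
i12 = np.argmin(np.abs(t - 12.0))
print(f"count rate at t = 0, 12, 30 h: {total[0]:.2f}, {total[i12]:.2f}, {total[-1]:.2f} cts/s")
```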
Representing this X-ray emitter by another set of free parameters would certainly further improve the already good agreement between data and model. Skinner et al. (1997) also present the time behavior of the emission measure derived from a two-temperature fit to the ASCA spectrum. If our interpretation of adding a soft X-ray spot, which gradually rotates away, is correct, then the emission measure of the soft component should stay constant for most of the time, but decrease towards the end of the observation. However, the S/N of the time-sliced spectral fits (Skinner et al. (1997), their Fig. 10, middle panel) is not sufficient to judge whether this is indeed the case. To conclude, other interpretations, such as a different kind of anomalous flaring, cannot be excluded by the data of this observation. Tsuboi et al. (1998) have presented another ASCA flare observation of V773 Tau. In that observation V773 Tau shows the typical flare behavior in the sense of a sharp rise and a subsequent longer decay of count rate, temperature, and emission measure. However, their attempt to fit an e-folding decay to the lightcurve of the hard X-ray count rate was not successful, the count rate remaining too high towards the end of the observation. Hence, unusually long decays seem to be characteristic of V773 Tau. ## 5 Summary and Conclusions We have presented a sample of four atypical flare events and provided a common explanation: parameters that depend on the size of the emitting plasma volume (e.g. count rate, $`EM`$) deviate from the standard exponential decay behavior due to temporary occultation of the flaring volume by the rotating star. This is most evident from the large flare on Algol, for which the data are most abundant and our modeling is therefore most reliable. The comparatively slow rise of the count rate in the X-ray lightcurve, the broad maxima and the subsequent exponential decay are well represented by a model that describes emission from a spherical plasma loop that emerges from the back of the star and gradually rotates into the line of sight of the observer. The increasing visible fraction of the loop produces the flat maximum and the apparent slow-down of the rise stage. In our data there is no indication of sine-like modulation of the X-ray lightcurve, since all but one of the lightcurves are clearly asymmetric, and the duration of all events is well below the rotational period of the host star. The only exception is the flare on V773 Tau, where Skinner et al. (1997) proposed sinusoidal modulation to reproduce the shape of the ASCA lightcurve. We suggest a different explanation, involving a second X-ray emitting region on the star in addition to a rotationally modulated flare, to account for the ‘convex’ shape of the lightcurve. However, due to the lack of pre-flare data, no conclusive evidence exists for either interpretation. Evidence for reheating of the plasma at JD 2449978.05, inferred from the increase of the hardness ratio (see Skinner et al. (1997)), does not contradict our model, but could be related to the disappearance of a region emitting soft X-rays similar to the one we introduce. No significant change in either the temperature or the emission measure of the soft component was observed in Skinner’s spectral analysis, but we note that the results of spectral fitting depend on the assumptions made for the abundances and the column density. 
The decay timescales $`\tau `$ found from our best fits to the respective flare events are all in the typical range for TTS flares (a few hours), except for V773 Tau, where the flare lasted extraordinarily long ($`>20\mathrm{h}`$). Comparatively large loop sizes, amounting to a considerable fraction of the radius of the star, are obtained for all observations analysed here, with $`r`$ ranging from 10 to 65% of the stellar radius. These values are in agreement with typical loop sizes for TTS flares inferred from quasi-static loop modeling (see Montmerle et al. (1983), Preibisch et al. (1993)). In view of the large ratio between loop size and stellar radius, the assumptions explained in Sect. 3 concerning our model for a rotating flare might seem somewhat oversimplified. We also note that different solutions of the model seem to describe the data equally well, even in the case of the well-constrained Algol observation. Therefore, the uncertainties in the fit parameters should be regarded with caution. However, the qualitative description of the scenario is very good and the data are well represented by the model. Other interpretations of the ‘anomalous’ flare events presented in this paper cannot be excluded, but remain to be worked out. Clearly, continuous observations of whole flares are needed to verify whether an event could be subject to rotational modulation of the kind we discussed in this paper. To date, most of the observed flares are incomplete in the sense that either the rise or part of the decay was missed by the observation. In the near future XMM will provide the possibility of long, uninterrupted observations (up to $`25`$ h) that will make it possible to follow the development of flares in their entirety. Better statistics are needed to study the time development of the spectral parameters of TTS flares, and to confirm the ‘rotating flare model’ from the spectral information in a way similar to our analysis of the Algol observation. ###### Acknowledgements. We would like to thank R. Stern and S. Skinner, who provided us with the Ginga and ASCA data. The ROSAT project is supported by the Max-Planck-Gesellschaft and Germany’s federal government (BMBF/DLR). RN acknowledges grants from the Deutsche Forschungsgemeinschaft (Schwerpunktprogramm ‘Physics of star formation’).
# VLT and NTT Observations of Two EIS Cluster Candidates. ## 1 Introduction Growing evidence for the existence of clusters at $`z1`$ and beyond makes the identification and study of these systems of great interest for probing galaxy evolution and cosmological models. However, the number of confirmed systems at these high redshifts is currently very small, precluding any robust statistical analysis. The largest sample of spectroscopically confirmed clusters has been selected from ROSAT deep X-ray observations (Rosati et al. 1998, Rosati 1998), while a few other $`z1`$ clusters have been discovered in the surroundings of strong radio sources (e.g., Dickinson 1996; Deltorn et al. 1997), or using infrared observations (e.g., Stanford et al. 1997). Although X-ray and infrared searches are very effective in identifying real clusters, their ability to cover large areas of the sky is presently limited, and these methods are not likely to produce large samples of very distant clusters in the short-term. On the other hand, with the advent of panoramic CCD imagers, optical wide-angle surveys have become competitive in identifying cluster candidates up to $`z1`$. Examples of such surveys, suitable for cluster searches, include those of Postman et al. (1996), Postman et al. (1998) and the ESO Imaging Survey (EIS, Renzini & da Costa 1997), which cover 5, 16 and 17 square degrees, respectively, reaching $`I_{AB}<24`$. These surveys are currently being used for systematic searches of galaxy cluster candidates employing objective matched-filter algorithms (e.g., Postman et al. 1996). In the case of the EIS project, about 300 candidates have been identified, over the redshift interval $`0.2<z<1.3`$, out of which 79 are estimated to have $`z>0.8`$ (Olsen et al. 1998a, b; Scodeggio et al. 1998). However, only with additional observations can these optically-selected high-redshift candidates be confirmed. Establishing the global success rate of this technique (and its possible redshift dependence) is extremely important for the design of future wide-field optical imaging surveys. Indeed, these surveys may play a major role in significantly increasing the number of known distant clusters, thus making them useful tools for probing the high-$`z`$ universe. As a test case, two EIS cluster candidates identified in EIS patch B (EIS 0046-2930 and EIS 0046-2951; Olsen et al. 1998b), were observed with the VLT Test Camera (VLT-TC) as part of the ESO VLT-UT1 Science Verification (SV; see Leibundgut, De Marchi & Renzini 1998). After the public release of these Science Verification data, fields including the two candidate clusters have been observed at the ESO 3.5m New Technology Telescope (NTT), as part of an ongoing infrared (IR) survey of EIS patch B (Jørgensen et al. 1999). Therefore, we had the opportunity to combine the VLT optical data with the NTT IR data, and to use both optical and IR color-magnitude (CM) diagrams to search for evidence of a “red sequence” of luminous early-type galaxies, typical of populous clusters at low as well as at high redshift (e.g., Bower, Lucey & Ellis 1992; Stanford, Eisenhardt & Dickinson 1998; Kodama et al. 1998). A clear identification of this sequence would provide strong support to the reality of the clusters, while allowing an independent estimate of their redshift to be obtained. In this Letter we briefly describe the various observations and the data reduction in Section 2; in Section 3 we present our results; and in Section 4 we summarize our conclusions. 
## 2 Observations and Data Reduction Originally, four EIS cluster candidates were selected for the VLT-UT1 SV program, after visual inspection of all candidates found in the EIS-wide Patch B. The four targets were selected to cover a range in redshift and richness among the EIS candidates. However, due to time and weather constraints only two fields were actually observed. The optical observations were conducted on the nights of August 18 and 23, 1998 with the Test Camera of the VLT-UT1, as part of the ESO VLT-UT1 Science Verification (1998). The VLT-TC was equipped with an engineering grade Tektronix $`2048^2`$ CCD, covering a field of view of about 93 $`\times `$ 93 arcsec with an effective pixel size of 0.09 arcsec (after a $`2\times 2`$ rebinning). One of the cluster candidates (EIS 0046-2930) was observed in $`VRI`$, and the other (EIS 0046-2951) only in the $`V`$ and $`I`$ passbands. In Table 1 we summarize the available data, giving the passband, the corresponding total integration time and the median seeing of the combined images. During the exposure of EIS 0046-2930 the transparency was poor and variable, leading to fairly bright limiting magnitudes. Single exposures have been reduced by the ESO Science Verification Team using standard IRAF procedures, and then publicly released. These reduced images were then processed using the EIS pipeline which performed the astrometric and photometric calibration, and coaddition for each band (see Nonino et al. 1998). The VLT-TC optical data were calibrated against the EIS data, for which the uncertainty in the photometric zero-point was estimated to be 0.1 mag in V and 0.02 mag in I. The VLT-TC versus EIS comparison yields an additional uncertainty of about 0.1 mag. Therefore, we estimate that the overall uncertainty in the zero-points is $`<0.15`$ mag in $`V`$ and $`<`$$``$ 0.12 in $`I`$. The IR $`J`$ and $`Ks`$ band images were obtained on October 8 and 9, 1998 using the SOFI infrared spectrograph and imaging camera (Moorwood, Cuby & Lidman 1998) at the NTT. SOFI is equipped with a Rockwell 1024<sup>2</sup> detector that, when used together with the large field objective, provides images with a pixel scale of 0.29 arcsec, and a field of view of about $`4.9\times 4.9`$ arcmin. The full set of SOFI observations will be described elsewhere (Jørgensen et al. 1999); here we describe only those for the fields including the two cluster candidates. Total integration times and the seeing measured on the combined images are given in Table 1. The data were reduced using the Eclipse data analysis software package (Devillard 1998), developed to combine jittered images. The resulting combined images were then input to the EIS pipeline for astrometric and photometric calibrations using observations of standard stars given by Persson (1997). From the scatter of the photometric solution we estimate the zero-point uncertainty in the $`J`$ and $`Ks`$ bands to be $`<0.1`$ mag. In order to facilitate the analysis of the whole dataset, the images from the EIS-wide survey were resampled to a common reference frame, centered on the initial estimate of the two candidate cluster positions, using the Drizzle routine of the EIS pipeline. The resampled images have the same pixel size as the SOFI images. To improve the sensitivity to faint objects the resampled EIS-wide and SOFI images were combined to produce one very deep $`B+V+I+J+Ks`$ image for each field. 
This image has a sufficiently large field of view ($`4.9\times 4.9`$ arcmin) to allow a reliable estimate of the background source density to be obtained (see Section 3). The source extraction software SExtractor (Bertin & Arnouts 1996) was subsequently used to detect sources in these deep images, while measuring the flux parameters for each individual passband in the separate images. Magnitudes and colors were measured using a 4 arcsec diameter aperture. Also all available VLT images were coadded to produce the $`V+R+I`$ and $`V+I`$ images shown in Figure 2. The resulting images are considerably deeper than those from EIS, and also have much better resolution. This procedure has allowed us to reach approximately the same limiting magnitude in both fields ($`I25.0`$ at about $`2\sigma `$). Even though the transparency during the observations of the EIS 0046-2930 field was poor, this was compensated by the addition of the $`R`$band exposure. We have also resampled and combined the VLT-TC images with those from SOFI, using the same method as above. Using the available multicolor data from EIS-wide plus SOFI, we derived $`(IKs)Ks`$ CM-diagrams for all galaxies within 1 arcmin of the nominal candidate cluster centers. From these diagrams, a tentative color-based selection was made, dividing galaxies into cluster candidate members and foreground/background objects. Based on this selection, we computed for each cluster candidate a new position, obtained as the flux-weighted center of mass of the candidate members. An identical procedure was carried out using the SOFI data only, leading to very similar results. In both cases the new positions were found to be within 0.4 arcmin of the position given in the EIS catalog (Olsen et al. 1998b). Note that this corresponds to the pixel size (0.45 arcmin) of the maximum-likelihood map used in the EIS cluster finding procedure. ## 3 Results Table 2 gives the original cluster candidates coordinates, the new flux-weighted positions as described above, the significance of the detection, the Abell richness and the estimated redshift from the EIS catalog, as listed in Olsen et al. (1998b) and the new redshifts derived below using the CM-diagrams from the combined VLT-TC and SOFI data. All coordinates are in J2000. Note that the estimate of the Abell richness for distant clusters is quite uncertain, but it serves to indicate their relative richness. ### 3.1 EIS 0046-2930 In the EIS candidate clusters catalog this object was identified only in $`I`$-band, and assigned a redshift of $`z_{EIS}0.6`$. However, visual inspection of the original survey images of this field showed the presence of foreground “blue” galaxies and of a fainter red population, not detected in the $`V`$band. Using the deeper optical (reaching $`V26.026.5`$) and the IR catalogs produced from the VLT-TC and from the SOFI images, we can study in greater detail this cluster candidate. The resulting four optical and IR CM-diagrams are shown in Figure 1, for all galaxies within the VLT-TC field of view. The upper panel shows the optical $`(VI)I`$ CM-diagram, where there is a suggestion for a concentration of galaxies at $`(VI)2.7`$, just beyond the reach of the EIS color data. However, the scatter is large compared with that seen in clusters at intermediate redshifts (Olsen et al. 1998b), and cannot be explained by photometric errors in the color which at $`I22`$ are $`<`$$``$ 0.3 mag. This scatter prevents a secure identification of the red sequence. 
By contrast, the $`(I-Ks)-Ks`$ diagram shows a clear early-type sequence in the interval $`Ks=16.0-20.0`$ at $`(I-Ks)\sim 3.9`$. Using the above magnitude range, the CM-relation is well-fitted by a linear relation with an estimated scatter of $`\sim 0.15`$ ($`Ks<18.5`$), comparable to the estimated error in the color and in agreement with the color dispersion of morphologically classified early-type galaxies in high-$`z`$ clusters (Stanford, Eisenhardt & Dickinson 1998). The infrared $`(J-Ks)-Ks`$ diagram shows an even tighter sequence at $`(J-Ks)\sim 1.8`$, with a scatter of $`\sim 0.1`$ mag, again comparable with the estimated error in the color. The ten brightest galaxies (in the $`Ks`$ band) for which $`1.7\le (J-Ks)\le 1.9`$ and $`(V-I)\ge 2.3`$ are represented by filled circles in the CM-diagrams and are also numbered in Figure 2 according to their magnitude ranking. These objects are the most likely early-type galaxy members of EIS 0046-2930. The flux-weighted position of the “cluster” is also shown. The projected radial distribution of objects brighter than $`Ks=20`$ and within the color range $`1.7<(J-Ks)<1.9`$ is shown in Figure 3, in annuli 0.3 arcmin wide. The contrast of this bright red-sequence population relative to the background is clearly seen, while there appears to be no appreciable clustering for galaxies outside this color range. Even though the statistics are poor, the scale and amplitude of the overdensity associated with this population, a factor of $`\sim 7`$ within the innermost 0.3 arcmin, are similar to those observed by Dickinson (1996) for the cluster surrounding 3C 324 at $`z\sim 1.26`$. To test the robustness of these results, flanking fields with the same size as the VLT-TC field of view were extracted from the same SOFI image and used to obtain CM-diagrams and radial density profiles. None of these fields showed the presence of a concentration of galaxies both in color and in position. This suggests that the concentration of galaxies in both color and projected separation seen in the field of EIS 0046-2930 is significant and that this object is likely to be a real cluster. Further support for this conclusion comes from the matched-filter algorithm which, applied to the $`Ks`$-band data, detects a “cluster” at the $`3\sigma `$ level and at $`z\sim 1`$. On the presumption that EIS 0046-2930 is a real cluster, the color of the red sequence can be used to estimate its redshift. This can be achieved either by using synthetic stellar population models, or purely empirically using the colors of the red sequence of clusters of known redshift. Even though the available data are sparse, we have adopted the latter approach because it is model independent. We have used the spectroscopic redshifts and the CM-diagrams given by Stanford, Eisenhardt & Dickinson (1998) for their clusters at $`z>0.5`$ and the $`z=1.273`$ cluster of Stanford et al. (1997) to estimate the location of the early-type galaxy sequence in different passbands for clusters at $`z\sim 1`$. Interpolating these relations to the colors of the red sequence of EIS 0046-2930 ($`(R-K)=5.4`$, $`(I-K)=3.9`$, and $`(J-K)=1.8`$) we consistently estimate its redshift to be $`z_{CM}=1.0\pm 0.1`$ (statistical uncertainty only). ### 3.2 EIS 0046-2951 In the EIS catalog this object was estimated to have a redshift of $`z_{EIS}\sim 0.9`$, being detected only in the $`I`$-band (Table 2). However, visual inspection of the $`V`$ and $`I`$-band EIS images suggested that this system could be an overlap of two concentrations at different redshifts. 
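Before examining this second candidate in detail, we note that the empirical redshift estimate used above for EIS 0046-2930 amounts to a one-dimensional interpolation per color. A minimal sketch is given below; the tabulated (z, color) pairs are placeholders and not the actual calibrating values of Stanford, Eisenhardt & Dickinson (1998) and Stanford et al. (1997).

```python
import numpy as np

# Placeholder red-sequence colors for clusters of known spectroscopic z.
# These numbers are illustrative only; the real calibration uses the clusters
# of Stanford, Eisenhardt & Dickinson (1998) and Stanford et al. (1997).
calibration = {
    "I-K": ([0.5, 0.7, 0.9, 1.1, 1.3], [2.6, 3.0, 3.5, 4.1, 4.6]),
    "J-K": ([0.5, 0.7, 0.9, 1.1, 1.3], [1.3, 1.5, 1.7, 1.9, 2.0]),
}

def z_from_color(color_name, observed_color):
    """Invert the (monotonic) color-redshift relation by linear interpolation."""
    z_grid, color_grid = calibration[color_name]
    return np.interp(observed_color, color_grid, z_grid)

# measured red-sequence colors of EIS 0046-2930 quoted in the text
for name, value in [("I-K", 3.9), ("J-K", 1.8)]:
    print(f"{name} = {value:.1f}  ->  z ~ {z_from_color(name, value):.2f}")
```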
Using the deeper $`V`$band image obtained with the VLT-TC we are now able to investigate the optical CM-diagram shown in Figure 4. Indeed, we find two concentrations of galaxies: one seen at $`(VI)1.6`$ and another at $`(VI)2.6`$. These colors correspond to redshifts $`z0.25`$ and $`z0.7`$, respectively. However, in the $`(IKs)Ks`$ and $`(JKs)Ks`$ CM-diagrams only one sequence is seen, located at $`(IKs)3.5`$ and $`(JKs)1.7`$. These values lead to redshift estimates of $`z_{CM}=0.9\pm 0.15`$ in both cases, in good agreement with the original estimate based on the matched-filter algorithm. In contrast to the previous cluster, the scatter of the red sequence in both colors is significantly larger (0.21 in $`(IKs)`$ and 0.19 in $`(JKs)`$) and cannot be fully accounted for by the photometric errors in our data ( $`<`$$``$ 0.15 mag). The larger scatter may be due to a larger fraction of spiral galaxies in the “cluster”, or to a stronger contamination by foreground galaxies. As in the previous case, the most likely early-type cluster galaxies have been selected adopting a color-selection criterion similar to that described above. These galaxies, chosen to have $`1.5(JKs)1.9`$ and $`(VI)2.3`$, are identified in Figure 4 and in the right panel of Figure 2. Figure 5 shows the projected radial distribution of color-selected candidate cluster members. In this case we find that the overdensity of the red sequence galaxies is $``$5, over the same radial distance as for the previous cluster. The smaller overdensity of this candidate cluster (and perhaps the larger fraction of spirals) is consistent with the lower original estimate of its richness (Table 2). Note that a 3$`\sigma `$ detection at approximately the same redshift was obtained applying the matched-filter algorithm to the $`Ks`$ data. As for the previous object, the analysis of flanking fields from the SOFI image gives further support to the reality of the observed concentration in color and projected separation, suggesting the existence of a physical association. ## 4 Summary We have used deep $`V`$\- and $`I`$-band images of two EIS cluster candidates taken during the ESO VLT-UT1 Science Verification observations to investigate the reality of these clusters. The VLT data were complemented by infrared data taken with SOFI at the NTT. Optical, IR, and optical-IR CM-diagrams have been constructed to search for the presence of the red-sequence typical of bright early-type galaxies in clusters. In the case of EIS 0046-2930 we find a well-defined sequence at $`(IKs)3.9`$ and $`(JKs)1.8`$. These galaxies are also concentrated relative to the background suggesting the existence of a cluster at $`z=1.0\pm 0.1`$. The evidence for the other candidate, EIS 0046-2951, is less compelling even though we find a sequence at $`(IKs)3.5`$ and $`(JKs)1.7`$, leading to an estimated redshift of $`z=0.9\pm 0.15`$, consistent with its original estimate. However, the scatter in the CM-diagrams is large and the density contrast of the “cluster” relative to the background smaller. In any case, a final conclusion on whether these systems are real physical associations at high-redshift must await spectroscopic observations. These results demonstrate once again the importance of infrared data in locating high-redshift clusters. However, the small field of view of present IR detectors makes large solid angle IR surveys very expensive in terms of telescope time. 
On the other hand, wide-angle optical surveys can efficiently produce a great number of high-redshift candidates, but a major fraction of them may turn out to be spurious after time-consuming spectroscopic follow-up. The present experiment is an attempt at exploring a hybrid approach, in which the optically selected candidates are first imaged in the IR before being considered for spectroscopic follow-up at a large telescope such as the VLT. Besides providing a first verification of the candidate clusters, the IR images can then be used to search for clusters at higher redshift. The overall efficiency of this strategy remains to be empirically determined, e.g. for the actual complement of ESO telescopes and instruments, and the present paper represents a first step in this direction. ###### Acknowledgements. We acknowledge that this work has been made possible thanks to the Science Verification Team and EIS Team efforts in making the data publicly available in a timely fashion. Special thanks to R. Gilmozzi, B. Leibundgut and J. Spyromilio. Part of the data presented here were taken at the New Technology Telescope at the La Silla Observatory under the program ID 62.O-0514. We thank Hans Ulrik Nørgaard-Nielsen, Leif Hansen and Per Rex Christensen for allowing us to use the data prior to publication.
# Turbulent Convection in Pulsating Stars ## 1. Introduction Cepheid and RR Lyrae modelling has a long history going back to the early 1960’s (recently reviewed in Gautschy & Saio 1995). Right from the beginning it was quite clear that convection had to be present in the pulsating envelopes. Furthermore, modelling showed that convection was necessary to provide a red edge to the instability strip, i.e. to stabilize the stars at lower temperatures. However convection was deemed to have a minor effect on the shape of the light curves and radial velocity curves. And, indeed, purely radiative models gave good agreement with the observations of the Galactic Cepheids (e.g. Moskalik et al. 1992), although a few problems of varying degree of severity persisted (cf. Buchler 1998), such as the inability of radiative codes to model beat pulsations in either Cepheids or RR Lyrae. The light curves of the so-called Beat Cepheids or RR Lyrae indicate that these stars pulsate with two basic frequencies, and with constant power in these frequencies. In addition, radiative codes give pulsation amplitudes that are much too large when compared to the observations. Furthermore the amplitudes depend on the fineness of the numerical mesh. The amplitudes as well as the stability of the limit cycles also depend on the values chosen for the pseudo-viscosity. In the last few years a wealth of data on variable stars in the Magellanic Clouds (MC) has been obtained as a by-product of the EROS and MACHO microlensing projects. Because these galaxies have a metal content that is only one quarter to one half that of our Galaxy, our observational data base has therefore been substantially broadened. Calculations with radiative codes show rather clearly that purely radiative models are incapable of agreement with observations (e.g. Buchler 1998). The fact that resonances among the vibrational modes give rise to observable effects (e.g. Buchler 1993) can be exploited to put constraints on the pulsational models and on the mass–luminosity relations. The best known of these resonances occurs in the fundamental Cepheids ($`P_0/P_2`$=2) in the vicinity of a period $`P_0`$=10 days and is at the origin of the well known Hertzsprung progression of the bump Cepheids. MC observations (Beaulieu et al. 1995, Beaulieu & Sasselov 1997, Welch et al. 1997) show that the resonance may be slightly shifted to half a day or a day higher in period. Structure also appears in the Fourier decomposition coefficients of the first overtone Cepheid light curves, and is most likely due to a resonance $`P_1/P_4`$=2 with the fourth overtone as first pointed out by Antonello et al. (1980). Again MC observations indicate that the resonance center occurs approximately at the same period. When used to constrain purely radiative models (Buchler, Kolláth, Beaulieu & Goupil 1996) one obtains stellar masses that are much too small to be in agreement with stellar evolution calculations. Improvements to the radiative Lagrangean codes have been made in recent years: Adaptive mesh techniques have been used to resolve sharp spatial features such as shocks and ionization fronts. Instead of treating radiation in an equilibrium diffusion approximation, the equations of radiation hydrodynamics have been implemented. However, all these changes have not substantially improved the agreement between modelling and observations. It has become patently clear that some form of convective transport and of turbulent dissipation is needed if we want to make progress. 
Turbulence and convection are inherently 3D phenomena. While a great deal of progress has been made in 3D simulations, it remains very difficult to model astrophysically realistic conditions, which have very large Rayleigh numbers Ra $`\sim `$ 10<sup>12</sup> and very small Prandtl numbers Pr $`\sim `$ 10<sup>-6</sup>. It is of course even more difficult to incorporate them in stellar models (see however the solar models of Nordlund & Stein at this meeting). Large amplitude stellar pulsations increase the difficulties by involving time dependence. Indeed, the source regions occur in the partial ionization regions of hydrogen, helium and Fe-group atoms, and these features are neither Lagrangean (they move through the fluid) nor Eulerian (they move through space). Fig. 1 shows the behavior of the turbulent energy $`e_t`$ over a period in a pulsating Cepheid model with a period of 10.9 days ($`M`$=6.1M, $`L`$=3377L, $`T_{eff}`$=5207K, $`X`$=0.70, $`Z`$=0.02). Similar behavior occurs in RR Lyrae models, but we concentrate here on Cepheids, which are actually more daunting numerically because of the sharpness of their H ionization front. On the right side we display $`e_t`$ as a function of Lagrangean radius. The lightness of the grey reflects the strength of $`e_t`$. On the left, Fig. 1 displays $`e_t`$ as a function of zone index, i.e. as attached to the Lagrangean mass coordinate, i.e. in the fluid frame. The turbulent energy is largest in the region associated with the combined H and first He ionization fronts. The next most important region of turbulent energy is the He<sup>+</sup>–He<sup>++</sup> ionization zone. There can also be turbulent energy in the Fe-group partial ionization regions, at least for Galactic metallicity, but it is comparatively weak and does not show up on the scale of the figure. Fig. 1 clearly shows how the turbulent energy tracks the source regions, which move through the fluid during the pulsation. It also shows the importance of time dependence in the convective pulsating envelope. Both panels show that the turbulent energy increases during the pulsational compression phase and that the two turbulent zones briefly merge. Fig. 2 shows the temporal behavior of the convective flux in the frame of the zone index (Lagrangean) and in the stellar frame, respectively, and can be compared to the turbulent energy in Fig. 1. The convective flux exists only in the regions of negative entropy gradient ($`Y>0`$, cf. Eq. 6) and is therefore confined to narrower zones than the turbulent energy, which can diffuse outside these regions. Attempts at including convection in pulsation codes are actually not new. Castor (1971) was the first to present a nonlocal time dependent formulation and a numerical application to a pulsating stellar envelope model. Later, Stellingwerf simplified Castor’s formulation and performed a number of model calculations (Stellingwerf 1982, Bono & Stellingwerf 1994). Similar turbulent convective model equations have been used by Gehmeyr and Winkler (1992). Another early computation of linear convective models is that of Gonczi & Osaki (1980). 
## 2. The Turbulent Convection Recipe $$\frac{du}{dt}=-\frac{1}{\rho }\frac{\partial }{\partial r}\left(p+p_t+p_\nu \right)-\frac{GM_r}{r^2},$$ (1) $$\frac{d}{dt}\left(e+e_t\right)=-\frac{\left(p+p_t+p_\nu \right)}{\rho }\frac{1}{r^2}\frac{\partial \left(r^2u\right)}{\partial r}-\frac{1}{\rho r^2}\frac{\partial }{\partial r}\left[r^2\left(F_c+F_t+F_r\right)\right],$$ (2) $$\frac{de_t}{dt}=-\frac{1}{\rho r^2}\frac{\partial }{\partial r}\left(r^2F_t\right)-\frac{e_t^{1/2}}{\mathrm{\Lambda }}\alpha _d\left(e_t-S_t\right)-\frac{\left(p_t+p_\nu \right)}{\rho }\frac{1}{r^2}\frac{\partial \left(r^2u\right)}{\partial r},$$ (3) $$p_t=\alpha _p\rho e_t,\qquad p_\nu =-\alpha _\nu \rho \mathrm{\Lambda }e_t^{1/2}\frac{\partial u}{\partial r},$$ (4) $$F_t=-\alpha _t\rho \mathrm{\Lambda }\frac{2}{3}\frac{\partial e_t^{3/2}}{\partial r},\qquad F_c=\alpha _c\alpha _\mathrm{\Lambda }\rho e_t^{1/2}c_pTY,$$ (5) $$S_t=\alpha _s\alpha _\mathrm{\Lambda }\left(e_t\frac{p}{\rho }\beta TY\right)^{1/2},\qquad Y=\left[-\frac{H_p}{c_p}\frac{\partial s}{\partial r}\right]_+,$$ (6) where $`p`$ is the gas pressure, $`\beta `$ is the thermal expansion coefficient, $`\mathrm{\Lambda }=\alpha _\mathrm{\Lambda }H_p`$, and $`H_p=pr^2/(\rho GM)`$ is the pressure scale height; other symbols have their usual meanings. This scheme gives rise to an unphysical behavior at the boundaries of the convective regions where $`Y\rightarrow 0`$. Because $`\delta S_t\propto \delta Y/\sqrt{Y}`$, the linearization has a pole there that shows up in the growth rates along sequences of models when the zoning is very fine or when the mesh happens to fall on a point with small $`Y`$. This difficulty can be avoided with an alternative model equation for the source which is more in line with the Gehmeyr–Winkler formulation (1992): $$S_t=(\alpha _s\alpha _\mathrm{\Lambda })^2\frac{p}{\rho }\beta TY,$$ (7) For comparison, in Fig. 3, we show the effect that the two formulations have on the linear growth rates for a sequence of Cepheid models ($`M`$=6.75M, $`L`$=4843L, $`X`$=0.70, $`Z`$=0.02, variable $`T_{eff}`$). The difference is seen not to be substantial, although the alternate $`S_t`$ increases the maximum period that unstable overtone models can have (cf. Figs. 12 and 13 in YKB). For these models we have taken the parameters ($`\alpha _d=1.0,\alpha _c=2.25,\alpha _s=0.75,\alpha _\nu =1.8,\alpha _t=0.25,\alpha _p=0.667,e_0=1.e4,\alpha _\mathrm{\Lambda }=0.4`$). These values are used for illustrative purposes. ## 3. Work Integrand It is interesting to see how turbulent convection affects the stability of the pulsational modes. Turbulent convection actually affects the stability in two ways: indirectly, by altering the structure of the equilibrium model, and directly, in the linearization of the equations (cf. e.g. Yecko, Kolláth & Buchler 1998, YKB hereafter). The work done on the pulsation per cycle is given by $$W=\oint dt\left(\int dm\,p\frac{dv}{dt}\right),$$ (8) where the total pressure $`p=p_g+p_t+p_\nu `$ is composed of the separate contributions of the gas (and radiation) pressure $`p_g`$, the turbulent pressure $`p_t`$ and the eddy viscous pressure $`p_\nu `$. 
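As a concrete illustration of Eq. (8), the following sketch accumulates the work integrand zone by zone from a stored time series of the total pressure and of the specific volume over one period. The array layout and the synthetic single-zone example are assumptions of this sketch; the hydrocode evaluates these quantities internally.

```python
import numpy as np

def work_per_zone(p_tot, v, dt):
    """Accumulate the work integrand W_k = oint p dv for every mass zone.

    p_tot : array (n_steps, n_zones), total pressure p_g + p_t + p_nu
    v     : array (n_steps, n_zones), specific volume of each zone
    dt    : array (n_steps,), time steps covering exactly one period
    Multiplying the result by the zone masses and summing gives Eq. (8).
    """
    dv_dt = np.gradient(v, axis=0) / dt[:, None]
    return np.sum(p_tot * dv_dt * dt[:, None], axis=0)

# synthetic single-zone example: a pressure variation that leads the volume
# variation gives positive (driving) work, a lagging pressure gives damping
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
dt = np.full_like(t, t[1] - t[0])
v = 1.0 + 0.1 * np.sin(t)[:, None]
p = 1.0 + 0.1 * np.sin(t + 0.5)[:, None]
print(work_per_zone(p, v, dt))
```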
If we denote the linear eigenvalues by $`\sigma =i\omega +\kappa `$, for an assumed exp($`\sigma t`$) dependence, then the relative growth rate is given by $$\eta =2\frac{\kappa }{\omega }=\frac{2\pi }{\omega ^2I}\mathrm{Im}\int \delta p\,\delta v^{}\,dm,$$ (9) $$I=\int |\delta r|^2\,dm,$$ (10) where the $`\delta `$ refer to the pressure, specific volume and radial displacement parts of the modal eigenvector, respectively, and $`I`$ is the moment of inertia of the mode. The quantity $`\eta `$ represents the energy growth of a mode over one period, equal to the inverse of the quality factor $`Q`$ that is commonly associated with resonant electronic devices. Here we illustrate with a fundamental Cepheid model (with M=5.2M, L=3293L, $`T_{eff}`$=5677 K, X=0.716, Z=0.01) how the work integrands are affected by convection. It is of interest to see how the various regions contribute to the work, as well as how the turbulent convective quantities affect the stability. In Fig. 4 we display the linear work integrand (thick solid line) together with the separate contributions: gas pressure (thin solid line), $`p_t`$ (dotted line) and $`p_\nu `$ (dashed line). The area under the curve is slightly positive since the mode is linearly unstable. As expected, the eddy pressure is everywhere damping. The turbulent pressure, on the other hand, can be either driving or damping, depending on its phase with respect to the density variations. The sharp peak is associated with the H and first He ionization, whereas the broad peak is due to the second He ionization and the tiny peak to Fe. The total work (Eq. 8) that is done over a limit cycle is zero, but again it is of interest to see how nonlinear effects change the nonlinear work integrand, which is displayed in Fig. 5. Our nonlinear work integrand is arbitrarily normalized by twice the nonlinear pulsational kinetic energy. The figure shows the separate contributions of $`p_g`$, $`p_t`$ and $`p_\nu `$. Here there is, in addition, a pseudo-viscous pressure whose contribution is very small compared to that of the other pressures and is not shown here. In comparison to the linear work integrand, most noticeable are (a) the broadening of the driving region because the (non-Lagrangean) ionization fronts sweep through the envelope during the pulsation (this is already known from purely radiative models, e.g. Figs. 4 and 7 in Buchler 1990), and (b) the greatly enhanced damping by the eddy viscosity pressure $`p_\nu `$. A final comment concerns frequently made approximations. In many early pulsation computations convection was assumed to be ‘frozen in’: convection was included in the computation of the equilibrium model, but all convective quantities were held constant in the calculation of the period and of the linear growth rates. A similar approximation is often made in stellar evolution computations. In YKB we have examined this approximation and found it to be severely lacking. The perturbation of the turbulent quantities, and concomitantly of the convective flux, has a very strong damping effect on the pulsation. In Fig. 6 we summarize these results for a $`T_{eff}`$ sequence of models (with $`M`$=5M, $`L`$=2060L) for the fundamental and first overtone modes. The solid lines represent the exact growth rates (i.e. correct linearization of all quantities). The line with crosses represents the ‘frozen convection’ approximation, which is seen to be inadequate. The fundamental instability strip (the domain where the modal growth rate is positive) is enormously broadened and shifted. 
For the overtone the effect appears even more drastic. The mode, which is stable throughout the whole temperature region, now becomes unstable over a very broad region. The dotted line corresponds to the approximation of ‘adiabatic’ convection, i.e. $`T\delta s_t\equiv \delta e_t+p_t\delta v=0`$. Physically it corresponds to assuming that all convective time scales are very long. This approximation is seen to underestimate somewhat the damping effects of convection. Another convenient approximation is the other extreme, namely to assume that all convective time scales are very short compared to the other time scales, i.e. from Eq. 3 we obtain $$-\frac{1}{\rho r^2}\frac{\partial }{\partial r}\left(r^2F_t\right)-\frac{e_t^{1/2}}{\mathrm{\Lambda }}\alpha _d\left(e_t-S_t\right)=0.$$ (11) This is seen to be the best of the approximations. It is also the simplest to apply in evolutionary calculations in which a time independent, local mixing length recipe is used (which would correspond here to setting in addition $`F_t=0`$ or $`\alpha _t=0`$, i.e. no diffusion of turbulent energy). ## 4. Nusselt vs. Rayleigh Numbers The Nusselt number is defined as Nu=$`F_c/F_{cond}`$, where in our case the conductive flux is the radiative flux, and the Rayleigh number is Ra=$`g\beta d^3TY/(\nu \chi )`$. Here $`g`$ is the local gravity, $`d`$ is the local scale height, $`\nu `$ is the kinematic viscosity and $`\chi `$ is the radiative conductivity. There is general agreement that Nu should depend on Ra, viz. Nu = Ra <sup>a</sup>, but there is no theoretical agreement on what the value of $`a`$ should be (e.g. Spiegel 1971). Some experimental results indicate that $`a=0.28`$ (Castaing et al. 1989), but it is not clear that they should apply to the stellar case, where the boundaries can adjust to accommodate a fixed stellar luminosity, and where the physical quantities have strong spatial variations, especially through the partial ionization zones. In Fig. 7 we reproduce the behavior of the local Nu versus Ra numbers throughout the convective regions of the two typical Cepheid models $`M`$=5M, $`L`$=2090L, $`T_{eff}`$=4900 K (solid line) and 5300 K (dotted) of YKB. Only the combined H–He convective regions, where Nu$`>1`$, are shown. For reference we have shown two thin lines with slopes 1/2 and 1/3, respectively. Throughout the convective region the exponent $`a`$ varies between 0.45 and 0.53, and thus agrees best with the higher theoretical value of 1/2 (Spiegel 1971). The right-hand side shows the Péclet number, defined as the ratio of the thermal diffusion time scale to the convective time scale. ## 5. Sequence of Cepheid models YKB performed some sensitivity tests of the properties of Cepheid models obtained with the 1D turbulent convective model. Here we just add Fig. 8, which displays the strength of the convective luminosity as a function of zone number (bottom scale), for a sequence of Cepheid models starting from the blue edge in front to the red edge in the back. The importance of the convective flux increases from the blue edge, where it is relatively unimportant, to the red edge. The H and first He ionization regions are always merged into a single convective zone. Near the blue edge the second ionization region of He forms a separate convective region, but when we arrive at the red edge, convection encompasses both the H and He regions, and almost joins with the Fe region (left). ## 6. Double-Mode (DM) Pulsations The numerical modelling of double-mode (DM) pulsations has been a long-standing quest in which purely radiative models have failed. 
In a recent paper (Kolláth et al. 1998, hereafter KBBY) it was shown that with the inclusion of turbulent convection DM pulsations appear almost naturally in Cepheid models. Almost concomitantly, but independently, Feuchtinger (1998) found DM behavior in RR Lyrae pulsations which we have since also confirmed. KBBY described the behavior of the DM Cepheids in terms of truncated amplitude equations (Eqs. 1 of KBBY), and they appeared to give excellent agreement with the model that was studied. Fig. 1 of KBBY showed the transient evolutions for a given Cepheid model and for different initializations of the hydrocode. The evolution toward a DM pulsational state is clearly exhibited. The results of the pulsational states of a number of Cepheid models were summarized in a bifurcation diagram (Fig. 4 of KBBY). The DM states were obtained with the regular hydrodynamics code after lengthy time integrations with suitable initial conditions. The single mode pulsational states, whether stable or not, were obtained with Stellingwerf’s relaxation method (cf. Kovács & Buchler 1987), sometimes with a lot of perseverance. When such models were more carefully scrutinized it became apparent that a different transient evolution was possible (Kolláth et al. 1999), namely toward the F limit cycle on the bottom right. This situation is shown in Fig. 9. It is clear that in addition to the stable DM there must coexist a stable F limit cycle and a second unstable DM. This new development forces us to also reconsider the bifurcation diagram (Fig. 4 of KBBY). We had suggested that the sharp vertical rise (drop) of the fundamental (overtone) amplitudes was due to the presence of a pole in the discriminant $`𝒟`$. While such a pole is present it turns out to be too far away (in $`T_{eff}`$) to cause the observed vertical slope. Upon closer inspection it has been found (Kolláth et al. 1999) that the bifurcation diagram is a bit more complicated than first thought. Fig. 10 is an adaptation of the results of Kolláth et al. (1999). Indeed, the region of fundamental mode pulsations extends to the left into the region where DM pulsations can occur. There is thus a narrow region of hysteresis where both F and DM pulsation can occur. We note immediately that this bifurcation structure, in particular the hysteresis, cannot be accommodated with the amplitude equations of KBBY that were truncated at the $`A^3`$ terms. Kolláth et al. (1999) show that one can readily get agreement by adding the most important next order terms in the truncation, which normal form theory shows to be $`r_0A_0^5`$ and $`r_1A_1^5`$. (We disregard the additional quintic cross-coupling terms). $`{\displaystyle \frac{dA_0}{dt}}=\left(\kappa _0-q_{00}A_0^2-q_{01}A_1^2-r_0A_0^4\right)A_0`$ (12) $`{\displaystyle \frac{dA_1}{dt}}=\left(\kappa _1-q_{10}A_0^2-q_{11}A_1^2-r_1A_1^4\right)A_1`$ (13) Rather than using amplitudes $`A`$, it is equivalent and perhaps more convenient here to introduce the ’energies’, $`e=A^2`$, instead of the amplitudes $`A`$. The amplitude equations, with the new terms added, take on the form $`{\displaystyle \frac{de_0}{dt}}=2\left(\kappa _0-q_{00}e_0-q_{01}e_1-r_0e_0^2\right)e_0`$ (14) $`{\displaystyle \frac{de_1}{dt}}=2\left(\kappa _1-q_{10}e_0-q_{11}e_1-r_1e_1^2\right)e_1`$ (15) The loci of the fixed points are obtained by setting the RHSs of these equations equal to zero. 
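A minimal sketch of how Eqs. (14)–(15) can be integrated to locate the attractors is given below. The coefficients are the illustrative values quoted in the next paragraph; the simple forward-Euler stepping and the small seed amplitudes are choices of this sketch only.

```python
import numpy as np

# illustrative coefficients from the text: cubic couplings q_ij, quintic r_i
q00, q01, q10, q11 = 2.179e-3, 4.5e-3, 5.9e-3, 16.0e-3
r0, r1 = -3.0e-4, 1.4e-3

def growth_rates(lam, k0bar=2.0e-3, k1bar=7.5e-3):
    """Linear growth rates varying along the model sequence, 0 <= lam <= 1."""
    return k0bar + 0.8e-3 * lam, k1bar + 1.6e-3 * lam

def integrate(lam, e0=1.0e-6, e1=1.0e-6, dt=1.0, n_steps=200000):
    """Forward-Euler integration of Eqs. (14)-(15) for the 'energies' e_i."""
    k0, k1 = growth_rates(lam)
    for _ in range(n_steps):
        de0 = 2.0 * (k0 - q00 * e0 - q01 * e1 - r0 * e0 ** 2) * e0
        de1 = 2.0 * (k1 - q10 * e0 - q11 * e1 - r1 * e1 ** 2) * e1
        e0, e1 = e0 + dt * de0, e1 + dt * de1
    return e0, e1

for lam in (0.0, 0.5, 1.0):
    e0, e1 = integrate(lam)
    print(f"lambda = {lam:.1f}:  A0 = {np.sqrt(e0):.3f}  A1 = {np.sqrt(e1):.3f}")
```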
Without the $`r`$ terms these nullclines (other than the two coordinate axes) are simply straight lines that intersect at most once, and when they intersect they give the DM, as has been known for a long time (Buchler & Kovács 1986). Clearly no hysteresis is possible in this case. On the other hand, even with small $`r`$ values the lines bend and multiple intersections become possible. This situation is depicted in Fig. 11. For the sake of illustration, we have chosen the numerical values ($`q_{00}`$=2.179e-3, $`q_{01}`$=4.5e-3, $`q_{10}`$=5.9e-3, $`q_{11}`$=16.e-3, $`r_0`$=-3.e-4, $`r_1`$=1.4e-3), for simplicity keeping these values constant even though in a real sequence of models they would vary. The variation of the growth rates along a sequence is more important, and we assume that $`\kappa _0=\overline{\kappa }_0`$ + 0.8e-3 $`\lambda `$ and $`\kappa _1=\overline{\kappa }_1`$ + 1.6e-3 $`\lambda `$, where $`\overline{\kappa }_0`$=2.e-3 and $`\overline{\kappa }_1`$=7.5e-3. The parameter $`\lambda `$ varies between 0 and 1 along this sequence. The corresponding bifurcation diagram is presented in Fig. 12. It is seen to display the same general features as the actual Cepheid diagram. In particular, it has a single-mode O1 regime up to $`\lambda \approx 0.08`$, a DM regime from $`\lambda \approx 0.08`$ to 0.90, and a coexistence between DM and F modes from $`\lambda \approx 0.67`$ to 0.90. To the right, $`\lambda >0.90`$, only the F mode LC is stable. Note that the annihilation of the stable and unstable DMs that occurs at $`\lambda \approx 0.90`$ gives rise to the observed vertical tangent. Note that the complexity of the bifurcation diagram is partly due to the values we have chosen for the control parameters, $`T_{eff}`$ and $`\alpha _\nu `$. Our values correspond to realistic Cepheids and are not idealized for the purpose of clarifying the evolution into single and double modes. If we had chosen instead to unravel the complete nature of the bifurcation, we would have been forced to choose both $`T_{eff}`$ and $`\alpha _\nu `$ to correspond to the polycritical point, where the F, O, DM and the trivial solution coexist (this point was previously discussed in KBBY and plotted there in Fig. 3). Near this point the dynamics is given by the cubic equations (we can take this as the definition of near). Furthermore, the bifurcation structure is straightforward once we know how we move through the parameter space given by $`T_{eff}`$ and $`\alpha _\nu `$. (This is not true if the bifurcation is subcritical, but as yet we have not encountered this case). So we ought to expect that some effects of this breakdown, in the form of an increasing nonlinearity, will begin to appear. What we seem to be witnessing here is, in fact, the need for quintic terms as the polycritical point becomes more distant. ## 7. Conclusions It is perhaps remarkable that such a simple 1D recipe for turbulent convection can give such drastic improvements over purely radiative codes. It may indicate that, at least for Cepheid and RR Lyrae variables, this recipe incorporates all the physics of turbulence and convection that is essential to model these pulsations. It is possible that the nonlocal, time-dependent dissipation which this model equation provides is all that is needed. We are only at the beginning of the process of calibrating the seven $`\alpha `$ parameters that appear in the turbulent convective description. 
There are numerous constraints that need to be satisfied, and we hope that, despite the large number of these $`\alpha `$’s, they can indeed all be met. In particular it will be a challenge to reproduce the observational properties of both the Galactic and the Magellanic Cloud variable stars. Only then will we know whether our simple 1D model is adequate. ### Acknowledgments. We wish to thank the organizers for a most pleasant and fruitful meeting. This work has been supported by NSF (AST95–28338). ## References
Antonello, E., Poretti, E. & Reduzzi, L. 1990, AA 236, 138
Beaulieu, J.P. et al. 1995, AA 303, 137
Beaulieu, J.P. & Sasselov, D. 1997, in 12<sup>th</sup> IAP Coll. Variable Stars and the Astrophysical Returns of Microlensing Surveys, Eds. R. Ferlet & J.P. Maillard
Bono, G. & Stellingwerf, R.F. 1994, ApJ Suppl 93, 233
Buchler, J.R. 1990, in The Numerical Modelling of Stellar Pulsations: Problems and Prospects, Ed. J.R. Buchler, NATO ASI Ser. C302 (Dordrecht: Kluwer), 1
Buchler, J.R. 1993, Nonlinear Phenomena in Stellar Variability (Dordrecht: Kluwer), repr. from ApSpS 210, 1 (1993)
Buchler, J.R., Kolláth, Z., Beaulieu, J.P. & Goupil, M.J. 1996, ApJ Lett 462, L83
Buchler, J.R. & Kovács, G. 1986, ApJ 308, 661
Buchler, J.R. 1998, in A Half Century of Stellar Pulsation Interpretations: A Tribute to Arthur N. Cox, eds. P.A. Bradley & J.A. Guzik, ASP 135, 220
Castaing, B. et al. 1989, J. Fluid Mech. 204, 1
Castor, J.I. 1971, ApJ 166, 109
Feuchtinger, M. 1998, AA 322, 817
Gautschy, A. & Saio, H. 1995, ARAA 33, 75 and 1996, ibid. 34, 551
Gehmeyr, M. & Winkler, K.-H.A. 1992, AA 253, 92; ibid. 253, 101
Gonczi, G. & Osaki, Y. 1980, AA 84, 304
Kolláth, Z., Beaulieu, J.P., Buchler, J.R. & Yecko, P. 1998, ApJ 502, L55 [KBBY]
Kolláth, Z., Buchler, J.R., Yecko, P., Szabó, R. & Csubry, Z. 1999 (in preparation)
Kovács, G. & Buchler, J.R. 1987, ApJ 324, 1026
Moskalik, P., Buchler, J.R. & Marom, A. 1992, ApJ 385, 685
Spiegel, E.A. 1971, Comments on Astrophysics, 53
Stellingwerf, R.F. 1982, ApJ 262, 330
Welch, D. et al. 1995, in Astrophysical Applications of Stellar Pulsation, IAU Coll. 155, ed. R.S. Stobie (Cambridge University Press)
Yecko, P., Kolláth, Z. & Buchler, J.R. 1998, A&A 336, 553 [YKB]
# Nonlinear dynamics in superlattices driven by high frequency ac-fields ## Abstract We investigate the dynamical processes taking place in nanodevices driven by high-frequency electromagnetic fields. We want to elucidate the role of different mechanisms that could lead to loss of quantum coherence. Our results show how the dephasing effects of disorder, which destroy coherent oscillations such as Rabi oscillations after a few periods, can be overestimated if we do not consider the electron-electron interactions, which can dramatically reduce the decoherence effects of the structural imperfections. Experimental conditions for the observation of the predicted effects are discussed. Recent advances in laser technology make it possible to drive semiconductor nanostructures with intense coherent ac-dc fields. This opens new research fields in time-dependent transport in mesoscopic systems and lays the basis for a new generation of ultra-high speed devices. Following these results, several works have been devoted to the analysis of the effects of time-dependent fields on the transport properties of resonant heterostructures, and to the exploitation of applications in ultrafast optical technology: high-speed optical switches, coherent control of excitons, etc. Within this context, we are interested in the decoherence processes producing the observed fast dephasing of coherence phenomena in semiconductor superlattices (SL's) and, more specifically, in the interplay between growth imperfections (disorder) and many-body effects such as the electron-electron (e-e) interaction. The interplay between the effects of disorder and many-body effects on electronic properties is a long-standing problem in solid-state physics. Probably one of the most promising ways to gain insight into this intricate problem is to combine the current state of the art of Molecular Beam Epitaxy (MBE), which allows us to grow samples with monolayer perfection and consequently with well-characterized disorder, with coherent oscillations that are extremely sensitive to imperfections and nonlinear effects. The oscillations of a two-level system between the ground and excited states in the presence of a strong resonant driving field, often called transient nutation or Rabi oscillation (RO), are discussed in textbooks as a topic of time-dependent perturbation theory. Consider a two-state system with ground-state energy $`E_0`$ and excited-state energy $`E_1`$ in the presence of a harmonic perturbation. If the frequency of the perturbation roughly matches the spacing between the two levels, the system undergoes oscillations with a frequency $`\mathrm{\Omega }_R`$ which is much smaller than the excitation frequency $`\omega _{\mathrm{ac}}`$. This Rabi frequency depends on the mismatch $`\delta \omega \equiv (E_1-E_0)/\hbar -\omega _{\mathrm{ac}}`$ between the level spacing and the excitation frequency, and on the matrix element $`F_{10}`$ of the perturbation, $`\mathrm{\Omega }_R=\left(\delta \omega ^2+|F_{10}|^2/\hbar ^2\right)^{1/2}`$. If we start with the system initially in the ground state, transitions between the ground and the excited state will occur with a period $`T_R=2\pi /\mathrm{\Omega }_R`$ as time evolves. Semiconductor SL's present Bloch minibands with several states each; thus it is not clear whether they can be correctly described as a pure two-state system. We should also take into account the presence of imperfections introduced during growth processes and scattering mechanisms such as e-e interactions. 
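For orientation, the textbook two-level result quoted above can be evaluated directly. The sin^2 form of the transition probability used below is the standard rotating-wave expression, stated here as part of the sketch; the numerical values are purely illustrative.

```python
import numpy as np

HBAR = 6.582e-16  # eV s

def rabi_population(t, e0, e1, omega_ac, f10):
    """Excited-level occupation of a harmonically driven two-level system.

    e0, e1 in eV, omega_ac in rad/s, f10 (matrix element) in eV.
    Rotating-wave result: P1 = (|F10|/hbar)^2 / Omega_R^2 * sin^2(Omega_R t / 2),
    with Omega_R = sqrt(delta_omega^2 + |F10|^2/hbar^2) as defined in the text.
    """
    delta_omega = (e1 - e0) / HBAR - omega_ac
    omega_1 = abs(f10) / HBAR
    omega_r = np.sqrt(delta_omega ** 2 + omega_1 ** 2)
    return (omega_1 / omega_r) ** 2 * np.sin(omega_r * t / 2.0) ** 2

# purely illustrative numbers: a resonant drive gives full population transfer
# with period T_R = 2*pi/Omega_R
t = np.linspace(0.0, 2.0e-12, 5)
print(rabi_population(t, e0=0.0, e1=0.1, omega_ac=0.1 / HBAR, f10=5.0e-3))
```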
Interface roughness appearing during growth in actual SL's depends critically on the growth conditions. For instance, protrusions of one semiconductor into the other cause in-plane disorder and break translational invariance parallel to the layers. To describe a local excess or deficit of monolayers, we allow the quantum-well widths to fluctuate uniformly around the nominal values; this can be seen as replacing the nominal sharp width by an average of the interface imperfections over the plane parallel to the layers. Our approximation is valid whenever the mean free path of the electrons is much smaller than the in-plane average size of the protrusions, since electrons then only see micro-quantum-wells of small area and uniform thickness. Therefore, in the following we will take the width of the $`n`$th quantum well to be $`a(1+Wϵ_n)`$, and the width of the $`n`$th barrier as $`b(1+Wϵ_n)`$, where $`W`$ is a positive parameter measuring the maximum fluctuation, the $`ϵ_n`$'s are distributed according to a uniform probability distribution, $`𝒫(ϵ_n)=1`$ if $`|ϵ_n|<1/2`$ and zero otherwise, $`a`$ is the nominal quantum-well width and $`b`$ is the nominal barrier width. Even with their rather satisfactory degree of success, many-body calculations have difficulties that, in some cases, may complicate the interpretation of the underlying physical processes. Presilla et al. suggested a treatment of these interactions that, loosely speaking, can be regarded as similar to Hartree-Fock and other self-consistent techniques, which replace the many-body interactions by a nonlinear effective potential. More recently we have proposed a new model where the nonlinear interaction is driven by a local field instead of the mean-field approach used by Presilla and co-workers. Here we present a new model that solves self-consistently the Poisson and time-dependent Schrödinger equations, and we compare the results with a local treatment based on the nonlinear Schrödinger equation. The band structure and the potential at flat band are computed using a finite-element method. The eigenstate $`j`$ of the band $`i`$ with eigenenergy $`E_i^{(j)}`$ is denoted as $`\psi _i^{(j)}(x)`$. A good choice for the initial wave packet is provided by using a linear combination of the eigenstates belonging to the first miniband. For the sake of clarity we have selected as the initial wave packet $`\mathrm{\Psi }(x,0)=\psi _i^{(j)}(x)`$, although we have checked that this assumption can be dropped without changing our conclusions. The subsequent time evolution of the wave packet $`\mathrm{\Psi }(x,t)`$ is calculated numerically by means of an implicit integration scheme designed to handle time-dependent fields. The envelope function of the electron wave packet satisfies the following quantum evolution equation $$i\hbar \frac{\partial \mathrm{\Psi }(x,t)}{\partial t}=\left[-\frac{\hbar ^2}{2m^{*}}\frac{d^2}{dx^2}+V_{NL}(x,t)\right]\mathrm{\Psi }(x,t),$$ (1) where $`x`$ is the coordinate in the growth direction of the SL. We consider two approaches to the nonlinear potential $`V_{NL}(x,t)`$ in Eq. (1). On the one hand, we adapt to our problem the model described, in another context, in earlier work, where $`V_{NL}(x,t)`$ is $$V_{NL}(x,t)=V(x)-eF_{AC}x\mathrm{sin}(\omega _{AC}t)+\alpha _{loc}|\mathrm{\Psi }(x,t)|^2,$$ (2) where $`V(x)`$ is the flat-band potential, $`F_{AC}`$ and $`\omega _{AC}`$ are the strength and frequency of the ac field, respectively, and all the nonlinear physics is contained in the coefficient $`\alpha _{loc}`$, which we discuss below. 
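A minimal sketch of one implicit (Crank-Nicolson) propagation step for Eq. (1) is shown below. It is a simplified stand-in for the actual integration scheme used here: the nonlinear, time-dependent part of V_NL is frozen over the step, the grid and units are arbitrary, and boundary or absorbing-layer treatments are omitted.

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(psi, v_nl, dx, dt, hbar=1.0, m_eff=1.0):
    """One Crank-Nicolson step of Eq. (1) with the potential frozen over the step.

    Freezing the (Psi-dependent, time-dependent) V_NL during the step is a
    simplification; a self-consistent scheme would iterate or use a predictor.
    """
    n = psi.size
    kin = hbar ** 2 / (2.0 * m_eff * dx ** 2)
    h_diag = 2.0 * kin + v_nl           # 3-point Laplacian, main diagonal
    h_off = -kin * np.ones(n - 1)       # off-diagonals
    a = 1j * dt / (2.0 * hbar)
    # right-hand side (1 - a H) psi
    rhs = (1.0 - a * h_diag) * psi
    rhs[:-1] -= a * h_off * psi[1:]
    rhs[1:] -= a * h_off * psi[:-1]
    # left-hand side (1 + a H) in banded form for solve_banded
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = a * h_off
    ab[1, :] = 1.0 + a * h_diag
    ab[2, :-1] = a * h_off
    return solve_banded((1, 1), ab, rhs)

# toy usage: one step for a free Gaussian packet (arbitrary units)
x = np.linspace(-50.0, 50.0, 512)
dx = x[1] - x[0]
psi = np.exp(-x ** 2 / 10.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
psi = crank_nicolson_step(psi, v_nl=np.zeros_like(x), dx=dx, dt=0.01)
print(np.sum(np.abs(psi) ** 2) * dx)   # the norm is preserved to round-off
```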
There are several factors that shape the nonlinear response of the medium to the tunneling electron. We want to consider only the repulsive electron-electron Coulomb interactions, which should enter the effective potential with a positive nonlinearity, i.e., the energy is increased by local charge accumulations, leading to a positive sign for $`\alpha _{loc}`$. On the other hand, we have considered a different approach by solving self-consistently the Schrödinger and Poisson equations, obtaining a Hartree-like potential. In this context, the nonlinear potential is $$V_{NL}(x,t)=V(x)-eF_{AC}x\mathrm{sin}(\omega _{AC}t)+\alpha _{self}V_H(x,t),$$ (3) where $`V_H`$ is now obtained by solving the Poisson equation for the density of charge $`|\mathrm{\Psi }(x,t)|^2`$, and $`\alpha _{self}`$ is the coupling parameter. We present here results for a SL with $`10`$ periods of $`100`$Å GaAs and $`50`$Å Ga<sub>0.7</sub>Al<sub>0.3</sub>As with conduction-band offset $`300`$meV and $`m^{*}=0.067m`$, $`m`$ being the free electron mass. To illustrate the effects of the nonlinear coupling we show in Fig. 1 the conduction-band profile for a perfect SL ($`W=0`$) at $`t=0.4`$ (lower) and $`1.2`$ps (upper) when the ac field is tuned to the resonant frequency $`\omega _{\mathrm{ac}}=\omega _{\mathrm{res}}\approx 24`$THz, for (a) the linear case and modeling the e-e interaction with (b) the self-consistent method ($`\alpha _{self}=10^{-3}`$) and with (c) the local model ($`\alpha _{loc}=10`$). To show the effects of the interface roughness we plot in Fig. 2 the probability of finding an electron, initially situated in $`\psi _0^{(5)}(x)`$, in the state $`\psi _1^{(5)}(x)`$ as a function of time when the ac field is tuned to the resonant frequency $`\omega _{\mathrm{ac}}=\omega _{\mathrm{res}}\approx 24`$THz with (a) $`W=0`$ (perfect SL) and (b) $`W=0.03`$ (imperfections around one monolayer). These results suggest the existence of a characteristic scattering time $`\tau _{\mathrm{dis}}`$ related to the amount of disorder in the sample, after which RO’s are destroyed by disorder. In Fig. 3 we plot the probability of finding an electron, initially situated in $`\psi _0^{(5)}(x)`$, in the state $`\psi _1^{(5)}(x)`$ as a function of time when the ac field is tuned to the resonant frequency, for different values of the nonlinearity coupling (a) $`\alpha _{self}=5\times 10^{-5}`$, (b) $`10^{-4}`$, (c) $`5\times 10^{-4}`$ and (d) $`10^{-3}`$. The results for the local model are very similar. When we compare this picture with Fig. 2 we see that the processes by which the RO’s vanish are completely different. In the second case the effect is the same at all times, so one cannot speak of a dephasing time; apparently the nonlinearity only modifies the electronic structure and thereby decreases the resonant coupling between the external ac field and the Bloch bands. Nevertheless, the main goal of this work is to show how nonlinear effects can reduce the dephasing introduced by the growth imperfections. In Fig. 4 we plot the occupation probability of the state $`\psi _1^{(5)}(x)`$ as a function of time considering imperfections of about one monolayer ($`W=0.03`$) for (a) the linear case and, together with the imperfections, the e-e interaction treated with the self-consistent model (b) $`\alpha _{self}=10^{-4}`$ and with the local one (c) $`\alpha _{loc}=5`$. We can see clearly how the nonlinearity counteracts the dephasing introduced by the imperfections, allowing the observation of Rabi oscillations over longer coherence times. 
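For the self-consistent variant of Eq. (3), the Hartree-like potential follows from a one-dimensional Poisson solve for the charge density $`|\mathrm{\Psi }(x,t)|^2`$. The sketch below uses simple finite differences with the potential pinned to zero at both ends and an overall prefactor kappa; both the boundary condition and the prefactor are assumptions of this illustration, not details taken from the text. The second function evaluates the occupation probability of a target eigenstate, the quantity plotted in Figs. 2-4.

```python
import numpy as np

def hartree_potential(psi, dx, kappa=1.0):
    """Solve d^2 V_H/dx^2 = -kappa*|psi|^2 with V_H = 0 at both ends (finite differences)."""
    rho = np.abs(psi) ** 2
    n = len(psi)
    D2 = (np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)) / dx ** 2
    return np.linalg.solve(D2, -kappa * rho)

def occupation_probability(psi_target, psi, dx):
    """P(t) = |<psi_target | Psi(t)>|^2, e.g. the weight in psi_1^(5) shown in Figs. 2-4."""
    overlap = np.sum(np.conj(psi_target) * psi) * dx
    return float(np.abs(overlap) ** 2)
```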
These theoretical results are fully consistent with recent experiments on the transport properties of intentionally disordered superlattices, both doped and undoped, which show that the Coulomb interactions could be responsible for the suppression of disorder effects, leading to quasimetallic behavior at low temperatures when the doping of the samples is increased. In summary, we have shown how the dephasing effects of disorder are dramatically reduced when we consider the e-e interaction. We have studied two different models to introduce the nonlinear interaction and the results are very similar. Our results show that it is possible to enlarge the dephasing times and, consequently, the number of periods of coherent oscillations of electrons in SL’s. In semiconductor heterostructures this can be done by increasing the doping or with very intense laser excitation fields. It goes without saying that to develop new devices for THz science it is crucial to understand how to control and enlarge the coherence times. We think that the nonlinear effects could be the key to solving this problem. Further work along these lines is currently in progress. The authors would like to thank Francisco Domínguez-Adame for helpful discussions and critical reading of the manuscript. We also thank I. Bossi, Rafael Gómez-Alcalá, Claudio Andreani and Gennady Berman for very valuable conversations. J.D. and E.D. thank the Dipartimento di Fisica “A. Volta” of the Universitá di Pavia for hospitality during a stay when part of this work was done. Work in Madrid was supported by CAM under Project No. 07N/0034/1998, and in Pavia by the INFM Network “Fisica e Tecnologia dei Semiconduttori III-V”. J.D. and E.D. also acknowledge partial support from Fundación Universidad Carlos III (Spain) and INFM (Italy).
no-problem/9901/cond-mat9901251.html
ar5iv
text
# Fixed points of Hopfield type neural networks ## Abstract The set of the fixed points of the Hopfield type network is under investigation. The connection matrix of the network is constructed according to the Hebb rule from the set of memorized patterns which are treated as distorted copies of the standard-vector. It is found that the dependence of the set of the fixed points on the value of the distortion parameter can be described analytically. The obtained results are interpreted in terms of neural networks and the Ising model. $`\mathrm{𝟏}^{}.`$ The problem of maximization of a symmetric form which is quadratic in spin variables $`\sigma _i`$: $$\{\begin{array}{c}F(\stackrel{}{\sigma })=\sum _{i,j=1}^nJ_{ij}\sigma _i\sigma _j\to \mathrm{max},\sigma _i=\{\pm 1\},\hfill \\ \stackrel{}{\sigma }=(\sigma _1,\mathrm{\dots },\sigma _n),J_{ij}=J_{ji},i,j=1,2,\mathrm{\dots },n.\hfill \end{array}$$ $`(1)`$ is under investigation. This problem arises in the Ising model, in surface physics, in the theory of optimal coding, in factor analysis, in the theory of neural networks and in optimization theory . Here the aim is to obtain an effective method for the search of the global maximum of the functional and a constructive description of the set of its local extrema. The $`n`$-dimensional vectors $`\stackrel{}{\sigma }`$, which define $`2^n`$ configurations, will be called the configuration vectors. The configuration vector which gives the solution of the problem (1) will be called the ground state. We investigate the problem (1) in the case of the connection matrix constructed according to the Hebb rule from the $`(p\times n)`$-matrix $`𝐒`$ of the form $$𝐒=\left(\begin{array}{ccccccc}1-x& 1& \mathrm{\dots }& 1& 1& \mathrm{\dots }& 1\\ 1& 1-x& \mathrm{\dots }& 1& 1& \mathrm{\dots }& 1\\ \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }\\ 1& 1& \mathrm{\dots }& 1-x& 1& \mathrm{\dots }& 1\end{array}\right),$$ $`(2)`$ where $`x`$ is an arbitrary real number. We introduce the special notation $`𝐍`$ for the related connection matrix: $$𝐍=𝐒^𝐓𝐒,N_{ii}=0,i=1,2,\mathrm{\dots },n.$$ $`(3)`$ According to the conventional neural network tradition we treat the $`n`$-dimensional vectors $`\stackrel{}{s}^{(l)}`$, which are the rows of the matrix $`𝐒`$, as $`p`$ memorized patterns embedded in the network memory (it does not matter that not all the elements of the matrix $`𝐒`$ are equal $`\{\pm 1\}`$; see Note 1). Then the following meaningful interpretation of the problem can be suggested: the network was supposed to be trained by showing it the standard $`\stackrel{}{\epsilon }(n)=(1,1,\mathrm{\dots },1)`$ $`p`$ times, but an error crept into the learning process and in fact the network was trained with $`p`$ distorted copies of it; the value of the distortion $`x`$ was the same for all the memorized patterns and every time only one coordinate was distorted: $$\stackrel{}{s}^{(l)}=(1,\mathrm{\dots },1,\underset{l}{\underbrace{1-x}},1,\mathrm{\dots },1),l=1,2,\mathrm{\dots },p.$$ $`(4)`$ When $`x`$ equals zero, the network is simply trained with $`p`$ copies of the standard $`\stackrel{}{\epsilon }(n)`$. It is well-known that in this case the vector $`\stackrel{}{\epsilon }(n)`$ itself is the ground state and the functional has no other local maxima. For continuity reasons, it is clear that the same situation remains for a sufficiently small distortion $`x`$. But when $`x`$ increases the ground state changes. 
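A minimal numerical illustration of Eqs. (1)-(4): the sketch below builds the $`(p\times n)`$ matrix of distorted standards, forms the Hebb-type connection matrix with zeroed diagonal, and evaluates the quadratic functional. Everything here follows directly from the definitions above; the brute-force maximization is included only to make the construction concrete and is practical only for small $`n`$.

```python
import numpy as np
from itertools import product

def hebb_type_matrix(n, p, x):
    """Connection matrix of Eq. (3) built from the memorized patterns of Eqs. (2), (4)."""
    S = np.ones((p, n))
    S[np.arange(p), np.arange(p)] = 1.0 - x   # pattern s^(l) has 1 - x in coordinate l
    N = S.T @ S
    np.fill_diagonal(N, 0.0)                  # N_ii = 0
    return N

def F(sigma, N):
    """Quadratic form of Eq. (1) for a configuration vector sigma with entries +-1."""
    return float(sigma @ N @ sigma)

def ground_state_brute_force(N):
    """Exhaustive search over all 2^n configurations (small n only)."""
    n = N.shape[0]
    return max((np.array(s) for s in product((-1, 1), repeat=n)), key=lambda s: F(s, N))
```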
For the problem (1)-(3) we succeeded in obtaining an analytical description of the dependence of the ground state on the value of the distortion parameter. Notations. We denote by $`\stackrel{}{\epsilon }(k)`$ the configuration vector which is collinear to the bisectrix of the principal orthant of the space $`\mathrm{R}^\mathrm{k}`$. The vector which after $`p`$ distortions generates the set of the memorized patterns $`\stackrel{}{s}^{(l)}`$ is called the standard-vector. Next, $`n`$ is the number of the spin variables, $`p`$ is the number of the memorized patterns and $`q=n-p`$ is the number of the nondistorted coordinates of the standard-vector. Configuration vectors are denoted by small Greek letters. We use small Latin letters to denote vectors whose coordinates are real. Note 1. In the neural network theory the connection matrix obtained with the help of Eq.(3) from a $`(p\times n)`$-matrix $`𝐒`$ whose elements are equal $`\{\pm 1\}`$ is called the Hebb matrix. If in Eq.(3) the $`(p\times n)`$-matrix $`𝐒`$ is of the general type, the corresponding connection matrix will be called a matrix of the Hebb type. With regard to the set of the fixed points of the network, an arbitrary symmetric connection matrix with zero diagonal elements is equivalent to a matrix of the Hebb type. Indeed, equality of the diagonal to zero guarantees the coincidence of the set of the local maxima of the functional (1) with the set of the network’s fixed points. But the local maxima do not depend on the diagonal elements of the connection matrix, so the latter can be chosen as we like. In particular, all the diagonal elements can be taken so large that the connection matrix becomes a positive definite one. Such a matrix can then be represented in the form of the matrix product (3), where the elements of the related matrix $`𝐒`$ are not necessarily equal to $`\{\pm 1\}`$. In other words, with the help of a simple deformation of the diagonal a symmetric connection matrix turns into a Hebb type matrix and, as a result, the set of the local maxima of the functional (1) does not change. This reasoning is correct for the Hebb matrix too, since it is a symmetric one and its diagonal elements are equal to zero. In this way we establish the relevance of the Hebb type connection matrices for the Hopfield model (for details see ). $`\mathrm{𝟐}^{}.\text{ Basic model.}`$ Let us look for the local maxima among configuration vectors whose last coordinate is positive. Since the last $`q`$ columns of the matrix $`𝐒`$ are the same, the configuration vector which is ”under suspicion” of providing an extremum is of the form $$\stackrel{}{\sigma }^{}=(\underset{\stackrel{}{\sigma }^{}}{\underbrace{\sigma _1,\sigma _2,\mathrm{\dots },\sigma _p}},\underset{q}{\underbrace{1,\mathrm{\dots },1}}),$$ $`(5)`$ where we denote by $`\stackrel{}{\sigma }^{}`$ the $`p`$-dimensional part of the vector $`\stackrel{}{\sigma }^{}`$, which is formed by its first $`p`$ coordinates. Direct calculations (or see ) show that $$F(\stackrel{}{\sigma }^{})\sim x^2-2x(q+p\mathrm{cos}w)\mathrm{cos}w+(q+p\mathrm{cos}w)^2,$$ $`(6)`$ where $$\mathrm{cos}w=\frac{\sum _{i=1}^p\sigma _i}{p}$$ $`(7)`$ is the cosine of the angle between the vectors $`\stackrel{}{\sigma }^{}`$ and $`\stackrel{}{\epsilon }(p)`$. Depending on the number of coordinates of the vector $`\stackrel{}{\sigma }^{}`$ whose value is ”–1”, $`\mathrm{cos}w`$ takes the values $`\mathrm{cos}w_k=1-2k/p`$, where $`k=0,1,\mathrm{\dots },p`$. 
Consequently, $`2^p`$ ”suspicious-looking” vectors $`\stackrel{}{\sigma }^{}`$ are grouped into the $`p+1`$ classes $`\mathrm{\Sigma }_k`$: the functional $`F(\stackrel{}{\sigma }^{})`$ has the same value $`F_k(x)`$ for all the vectors from the same class. The classes $`\mathrm{\Sigma }_k`$ are enumerated by the number $`k`$ of negative coordinates of the relevant vectors $`\stackrel{}{\sigma }^{}`$, and the number of vectors in the $`k`$th class is equal to $`C_p^k`$. To find the ground state under a given value of $`x`$, it is necessary to determine the greatest of the values $`F_0(x),F_1(x),\mathrm{\dots },F_p(x)`$. In this comparison the term $`x^2`$ can be omitted. Therefore, to find out how the ground state depends on the parameter $`x`$, it is necessary to examine the family of straight lines $$L_k(x)=(q+p\mathrm{cos}w_k)^2-2x(q+p\mathrm{cos}w_k)\mathrm{cos}w_k:$$ $`(8)`$ in the region where $`L_k(x)`$ majorizes all the other straight lines, the ground state belongs to the class $`\mathrm{\Sigma }_k`$ and is $`C_p^k`$ times degenerate. The analysis of the relative position of the straight lines $`L_k(x)`$ gives: Theorem. As $`x`$ varies from $`-\infty `$ to $`+\infty `$ the ground state in consecutive order belongs to the classes $`\mathrm{\Sigma }_0,\mathrm{\Sigma }_1,\mathrm{\dots },\mathrm{\Sigma }_{k_{max}}`$. The jump of the ground state from the class $`\mathrm{\Sigma }_{k-1}`$ into the class $`\mathrm{\Sigma }_k`$ occurs at the point $`x_k`$ of intersection of the straight lines $`L_{k-1}(x)`$ and $`L_k(x)`$: $$x_k=p\frac{n-(2k-1)}{n+p-2(2k-1)},k=1,2,\mathrm{\dots },k_{max}.$$ If $`\frac{p-1}{n-1}<\frac{1}{3}`$, one after another all the $`p`$ rebuildings of the ground state take place according to the above scheme: $`k_{max}=p`$. And if $`\frac{p-1}{n-1}>\frac{1}{3}`$, the last rebuilding is the one whose number is $`k_{max}=\left[\frac{n+p+2}{4}\right]`$. The functional has no other local maxima. This theorem allows one to solve a lot of practical problems. In Fig.1 typical examples of the relative position of the straight lines $`L_k(x)`$ are presented for the cases $`\frac{p-1}{n-1}<\frac{1}{3}`$ (a) and $`\frac{p-1}{n-1}>\frac{1}{3}`$ (b). When $`x`$ changes from $`-\infty `$ to $`x_1`$, the ground state is the standard-vector $`\stackrel{}{\epsilon }(n)`$ (it exhausts the class $`\mathrm{\Sigma }_0`$). At the point $`x_1`$ the ground state jumps from the class $`\mathrm{\Sigma }_0`$ to the class $`\mathrm{\Sigma }_1`$ and becomes $`p`$ times degenerate. When $`x`$ reaches the value $`x_2`$, the ground state jumps from the class $`\mathrm{\Sigma }_1`$ to the class $`\mathrm{\Sigma }_2`$ and becomes $`C_p^2`$ times degenerate, and so on. As $`x`$ increases, the value of the functional for the ground state at first monotonically decreases and then, after reaching its minimum value, increases monotonically. For what follows let us note that $`k_{max}\ge [\frac{p+1}{2}]`$, and $`x_{k_{max}}\sim p`$. The case $`p=n`$ is worth mentioning specially. Here all the jump points $`x_k`$ stick to one point $$x^{*}\equiv x_k=\frac{n}{2},k=1,2,3,\mathrm{\dots },\left[\frac{n+1}{2}\right].$$ $`(9)`$ For any $`x`$ to the left of $`x^{*}`$ the ground state is the standard-vector $`\stackrel{}{\epsilon }(n)`$, and for $`x`$ to the right of $`x^{*}`$ the ground state belongs to the class $`\mathrm{\Sigma }_{[\frac{n+1}{2}]}`$ and is $`C_n^{[\frac{n+1}{2}]}`$ times degenerate. The interval $`x_1<x<x_{k_{max}}`$ will be called the rebuilding region of the ground state. This region is examined in detail in . 
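The content of the Theorem is easy to check numerically. A small sketch, using the class structure defined above: for a given distortion $`x`$ the ground-state class is the $`k`$ that maximizes $`L_k(x)`$ of Eq. (8), and the rebuilding points $`x_k`$ follow from intersecting consecutive lines. Only the values $`k\le k_{max}`$ listed in the Theorem correspond to actual rebuildings; for small $`n`$, $`p`$ the result can be cross-checked against the brute-force search sketched earlier.

```python
import numpy as np

def ground_state_class(n, p, x):
    """Index k of the class Sigma_k maximizing L_k(x), Eq. (8); here q = n - p."""
    q = n - p
    k = np.arange(p + 1)
    cos_w = 1.0 - 2.0 * k / p
    L = (q + p * cos_w) ** 2 - 2.0 * x * (q + p * cos_w) * cos_w
    return int(np.argmax(L))

def rebuilding_points(n, p):
    """Jump points x_k where the ground state moves from Sigma_{k-1} to Sigma_k
    (meaningful for k up to k_max given in the Theorem)."""
    k = np.arange(1, p + 1)
    return p * (n - (2 * k - 1)) / (n + p - 2.0 * (2 * k - 1))
```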
Here we would like to mention only that the left boundary of the rebuilding region, $`x_1\ge \frac{p}{2}`$, is a monotonically increasing function of $`p`$ as well as of $`n`$. And also, when $`p=const`$ and $`n\to \infty `$ the rebuilding region tightens to the point $$x^{\prime \prime }=p.$$ $`(10)`$ Here for $`x<x^{\prime \prime }`$ the ground state is the standard-vector and for $`x>x^{\prime \prime }`$ the ground state belongs to the class $`\mathrm{\Sigma }_p`$; again it is a nondegenerate one. Note 2. The Theorem remains valid, if: a). The memorized patterns (4) are normalized to unity to prevent their length from depending on the varying parameter $`x`$. As a result the maximum value of the functional (1) for the ground state decreases monotonically as a function of $`x`$. b). An arbitrary configuration vector $`\stackrel{}{\alpha }=(\alpha _1,\alpha _2,\mathrm{\dots },\alpha _p,\alpha _{p+1},\mathrm{\dots },\alpha _n)`$ is used in place of the standard-vector $`\stackrel{}{\epsilon }(n)`$. Then all the results are formulated with respect to configuration vectors $`\stackrel{}{\sigma }^{}=(\alpha _1\sigma _1,\alpha _2\sigma _2,\mathrm{\dots },\alpha _p\sigma _p,\alpha _{p+1},\mathrm{\dots },\alpha _n)`$, and the elements of the connection matrix are changed: $$N_{ij}^{(\alpha )}=N_{ij}\alpha _i\alpha _j,i,j=1,2,\mathrm{\dots },n.$$ $`(11)`$ c). The first $`p`$ coordinates of the space $`\mathrm{R}^\mathrm{n}`$ are subjected to a rotation. If under this rotation the standard-vector does not change, all the results of the ”Basic model” remain valid, though in this case the memorized patterns are obtained from $`\stackrel{}{\epsilon }(n)`$ by simultaneous distortion of its $`p`$ coordinates! But if as a result of the rotation the standard-vector turns into $$(u_1,u_2,\mathrm{\dots },u_p,1,\mathrm{\dots },1),u_l\in \mathrm{R}^1,\sum _{l=1}^{p}u_l^2=p,$$ the elements of the relevant connection matrix take the form $$N_{ij}^{(U)}=N_{ij}u_iu_j,i,j=1,2,\mathrm{\dots },n.$$ $`(12)`$ Then, as in the ”Basic model”, the vectors $`\stackrel{}{\sigma }^{}`$, which are ”under suspicion” of providing an extremum, are grouped into classes where the value of the functional is constant. But now the vectors $`\stackrel{}{\sigma }^{}`$ belong to the same class if their p-dimensional parts are equidistant from the vector $`\stackrel{}{u}=(u_1,u_2,\mathrm{\dots },u_p)`$. And just as before, when $`x`$ increases the ground state jumps consecutively from one class to the next. Here the Theorem remains valid; however, a correction of the formulae is necessary. We would like to note that, since the choice of the values $`\{u_l\}_{l=1}^p`$ is completely in the researcher’s hands, it is possible to construct Hopfield type networks with preassigned sets of fixed points. An additional analysis is required to find out the limits of this method. We would like to mention that we succeeded in generalizing the ”Basic model” to the case when the linear term $`h\sum _{i=1}^n\sigma _i`$ is added to the functional $`F(\stackrel{}{\sigma })`$ which has to be maximized. In physical models such a term accounts for an external magnetic field. We are now preparing these results for publication. $`\mathrm{𝟑}^{}.\text{ Interpretations.}`$ Let us briefly discuss the results related to the ”Basic model”. Neural networks. 
In this case the Theorem has to be interpreted in the framework of the meaningful setting of the problem given above (see $`1^{}`$): the quality of the reconstruction of ”the truth” (the standard-vector) by the network depends on the distortion value $`x`$ during the learning stage and on the length $`p`$ of the learning sequence. In agreement with common sense, the error of the network increases with the distortion $`x`$. It is also quite reasonable that the left boundary of the rebuilding region $`x_1`$ is an increasing function of $`p`$ and $`n`$. Indeed, when $`n`$ and $`x`$ are fixed, merely by increasing the number of memorized patterns $`p`$ the value of $`x_1`$ can be forced to exceed $`x`$ (of course, if $`x`$ is not too large). As a result $`x`$ turns out to be to the left of $`x_1`$, i.e. in the region where the only fixed point is the standard-vector. This conclusion is in agreement with practical experience, according to which the greater the length of the learning sequence, the better the signal can be read through noise. The increase of $`x_1`$ with the number $`n`$ can be interpreted in the same way. When $`p=const`$ and $`n\to \infty `$ all the jump points $`x_k`$ stick to one point $`x^{\prime \prime }=p`$. In this case for $`x<x^{\prime \prime }`$ the ground state is the vector which belongs to the class $`\mathrm{\Sigma }_0`$: $$\stackrel{}{\epsilon }^{(+)}(n)=(\underset{p}{\underbrace{1,1,\mathrm{\dots },1}},1,\mathrm{\dots },1),$$ $`(13)`$ and for $`x>x^{\prime \prime }`$ the ground state is the vector which belongs to the class $`\mathrm{\Sigma }_p`$: $$\stackrel{}{\epsilon }^{(-)}(n)=(\underset{p}{\underbrace{-1,-1,\mathrm{\dots },-1}},1,\mathrm{\dots },1)$$ $`(14)`$ (see Eq.(10)). As we see it, this result is a nontrivial one. In terms of the learning process, the distinct parts of the vectors $`\stackrel{}{\epsilon }^{(+)}(n)`$ and $`\stackrel{}{\epsilon }^{(-)}(n)`$ are two opposed statements. And the network ”feels” this. When the distortion $`x`$ is not very large (less than $`x^{\prime \prime }`$) the memorized patterns $`\stackrel{}{s}^{(l)}`$ (4) are interpreted by the network as distorted copies of the vector $`\stackrel{}{\epsilon }^{(+)}(n)`$ (13). But if during the learning stage the distortions exceed $`x^{\prime \prime }`$, the network interprets the memorized patterns $`\stackrel{}{s}^{(l)}`$ as distorted copies of another standard-vector, $`\stackrel{}{\epsilon }^{(-)}(n)`$ (14). The last result is in agreement with practical experience too: we interpret deviations in the image of a standard as permissible only up to some threshold. Once this threshold is exceeded, the distorted patterns are interpreted as a quite different standard. (For details see . In the same reference the very interesting dependence of $`k_{max}`$ on the relation between $`p`$ and $`n`$ is discussed.) The Ising model at T=0. The interpretation of this model in terms of the matrix $`𝐒`$ is not known yet. Therefore here the obtained results are interpreted starting from the form of the Hamiltonian $`𝐍`$ (3). 
Let’s write it in the block-matrix form: $$𝐍\propto \left(\begin{array}{cc}𝐀& 𝐁\\ 𝐁^𝐓& 𝐂\end{array}\right),$$ where the diagonal elements of the $`(p\times p)`$-matrix $`𝐀`$ and the $`(q\times q)`$-matrix $`𝐂`$ are equal to zero, and $$\{\begin{array}{cc}a_{ij}=1-2y,\hfill & i,j=1,2,\mathrm{\dots },p,i\ne j;\hfill \\ b_{ik}=1-y,\hfill & i=1,2,\mathrm{\dots },p,k=1,2,\mathrm{\dots },q;\hfill \\ c_{kl}=1,\hfill & k,l=1,2,\mathrm{\dots },q,k\ne l;\hfill \\ y=\frac{x}{p}.\hfill & \end{array}$$ This matrix corresponds to a spin system with an infinitely large interaction radius. The system consists of two subsystems, which are homogeneous with regard to the spin interaction. The interaction between the $`p`$ spins of the first subsystem is equal to $`1-2y`$; the interaction between the $`q`$ spins of the second subsystem is equal to $`1`$; the crossinteraction between the spins of the two subsystems is equal to $`1-y`$. When $`p=n`$ all the spins interact with each other in the same way. (We would like to recall that the connection matrix can be generalized – see Eqs. (11), (12).) While $`y<\frac{1}{2}`$ all the spins interact in a ferromagnetic way; when $`\frac{1}{2}<y<1`$, the interaction between the spins of the first subsystem becomes of antiferromagnetic type, and when $`1<y`$ the crossinteraction is of antiferromagnetic type too. The Theorem allows one to trace how the ground state depends on the variation of the parameter $`y`$. Let $`p<n`$. For $`y\in (-\infty ,\frac{1}{2})`$ the ground state is the ferromagnetic one since $`\frac{1}{2}<y_1=\frac{x_1}{p}`$, and for $`x<x_1`$ the ground state is the standard-vector $`\stackrel{}{\epsilon }^{(+)}(n)`$ (13). But it is interesting that the ground state remains the ferromagnetic one even if $`\frac{1}{2}<y<y_1`$, i.e. when antiferromagnetic interactions have already shown up in the system. In other words, when $`p<n`$ there is ”a gap” between the value of the external parameter $`y`$ which corresponds to the destruction of the ferromagnetic interaction and the value of this parameter which corresponds to the destruction of the ferromagnetic ground state. Only after a ”sufficient amount” of antiferromagnetic interaction is accumulated does the first jump of the ground state occur, and it ceases to be the ferromagnetic one. Then, after another critical ”portion” of the antiferromagnetic interaction is accumulated, the next jump of the ground state occurs (it happens when $`y`$ exceeds $`y_2=\frac{x_2}{p}`$), and so on. After the parameter $`y`$ reaches the value $`y^{\prime \prime }=1=\frac{x^{\prime \prime }}{p}`$, the crossinteraction becomes antiferromagnetic too. The ground state continues ”to jump” after that, since $`x_{k_{max}}\sim p`$. The energy $`E=-F`$ of the ground state as a function of the parameter $`y`$ has breaks at the points $`y_k=\frac{x_k}{p}`$. It increases for $`y\le y^{\prime \prime }=1`$ and decreases when $`y>y^{\prime \prime }`$. However, if the memorized patterns are normalized to unity, the energy of the ground state is a monotonically increasing function of the external parameter. It is natural to treat the case $`p=const`$, $`n\to \infty `$ as the case of an infinitely large sample with a small number of impurities. In this case all $`y_k`$ stick to the point $`y^{\prime \prime }`$ (see Eq.(10)). 
Depending only on the type of the crossinteraction between the impurities and the rest of the sample, the ground state is either the ferromagnetic one (the vector $`\stackrel{}{\epsilon }^{(+)}(n)`$ (13)), or the spins of the impurities are directed oppositely to the other spins of the sample (and the ground state is the vector $`\stackrel{}{\epsilon }^{(-)}(n)`$ (14)). Finally, let’s discuss the case $`p=n`$. Then all $`y_k`$ stick to the point $`y^{*}=\frac{1}{2}`$ (see Eq.(9)). Here the destruction of the ferromagnetic interaction occurs simultaneously with the change of the ground state (”the gap” disappears). As long as the interaction of the spins is ferromagnetic ($`y<\frac{1}{2}`$), the ground state is ferromagnetic too. But when the interaction of the spins becomes antiferromagnetic ($`y>\frac{1}{2}`$), the ground state turns out to be $`C_n^{[\frac{n+1}{2}]}`$ times degenerate. For $`y`$ to the right of $`\frac{1}{2}`$ it is natural to associate the state of the system with a spin glass phase. Acknowledgments. The author is grateful to Prof. A.A. Ezhov for helpful advice on the substance of the work.
no-problem/9901/nucl-th9901016.html
ar5iv
text
# Experimental and Theoretical Search for a Phase Transition in Nuclear Fragmentation ## Abstract Phase transitions of small isolated systems are signaled by the shape of the caloric equation of state $`e^{*}(T)`$, the relationship between the excitation energy per nucleon $`e^{*}`$ and the temperature. In this work we compare the experimentally deduced $`e^{*}(T)`$ to the theoretical predictions. The experimentally accessible temperature was extracted from evaporation spectra from incomplete fusion reactions leading to residue nuclei. The experimental $`e^{*}(T)`$ dependence exhibits the characteristic S-shape at $`e^{*}=2-3`$ MeV/A. Such behavior is expected for a finite system at a phase transition. The observed dependence agrees with predictions of the MMMC-model, which simulates the total accessible phase-space of fragmentation. In macroscopic physics, phase transitions are usually defined by a divergence at the critical temperature, for example in the heat capacity $`c=de^{*}/dT_{thd}`$, where $`e^{*}`$ and $`T_{thd}`$ are the excitation energy and the thermodynamic temperature. This corresponds to the well-known finding that at a first-order phase transition the temperature stays constant while additional energy is pumped into the system. This picture becomes different if we deal with finite and isolated systems like nuclei. Due to conservation of mass, charge and especially total energy, the signal of a first order phase transition is given by an ”S-shape” in $`e^{*}(T)`$, called the caloric equation of state ($`CES`$), as shown in figure 1 for a decaying nucleus. Pictorially speaking, at a first order phase transition in a finite system the system cools down with rising excitation energy . For a finite and isolated (microcanonical) system the heat capacity is no longer a positive definite quantity. At first-order phase transitions it has two divergences (instead of one for infinite matter). In the region between the two poles it becomes a multi-valued function. The signal of fig. 1 can be obtained only in a microcanonical description which takes into account the strict mass, charge and energy conservation. This behavior of a fragmenting nuclear system at a phase transition is due to the opening of new decay channels, i.e. the population of additional regions of phase space $`\mathrm{\Omega }(e^{*})`$ ($`e^{*}`$ is here the excitation energy per nucleon) . The specific entropy $`s(e^{*})=\frac{\mathrm{ln}\mathrm{\Omega }(e^{*})}{N}`$, N the number of nuclei, then rises faster than for a normal Fermi-gas. It is this strong rise of entropy that leads to an anomaly in the $`CES`$ through the relation $$\frac{1}{T_{thd}}\equiv \beta (e^{*})=\frac{\partial s(e^{*})}{\partial e^{*}}.$$ (1) A recent experimental observation showing a structure in the $`e^{*}(T)`$-curve fueled the discussion about the appearance and the measurement of a phase transition. It is known that the apparent temperature is sensitively dependent on the mass of the source . We suppose, together with and in opposition to , that the curve shown in ref. is just the effect of the changing mass of the source, without any phase transition taking place. Another discussion can be seen in . Here we concentrate on a different set of experimental data deduced from ref. . 
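As a concrete illustration of Eq. (1), the short sketch below takes a tabulated microcanonical entropy per nucleon s(e*) (an assumed input, e.g. from a model calculation), obtains the thermodynamic temperature by numerical differentiation, and flags the backbending region where the caloric curve e*(T) has negative slope. It is only meant to show how the ”S-shape” criterion is evaluated, not to reproduce any specific calculation.

```python
import numpy as np

def caloric_curve(e_star, s):
    """Eq. (1): beta(e*) = ds/de*; returns T_thd(e*) and a mask for the backbending region."""
    beta = np.gradient(s, e_star)                 # 1/T_thd
    T = 1.0 / beta
    backbending = np.gradient(T, e_star) < 0.0    # T decreasing while e* rises
    return T, backbending
```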
We are going to perform a comparison of an experimentally obtained ”S-shape” in $`e^{*}(T)`$ with theoretical predictions of the Berlin microcanonical statistical multifragmentation model $`MMMC`$ , which simulates the phase space $`\mathrm{\Omega }(e^{*})`$ for decaying nuclei. We are going to describe the model in some detail at a later stage. $`MMMC`$ predicts two phase transitions in nuclear fragmentation . The phase transition at lower excitation energy, at $`e^{*}\approx 2-3`$ MeV per nucleon, was shown in figure 1. A second phase transition at higher $`e^{*}`$, which is not the subject of this treatise and not shown here, is due to the true multifragmentation. Another similar statistical fragmentation model, the Copenhagen model SMM , also predicts phase transitions. Since SMM has some mixed microcanonical-canonical features and has a varying freeze-out volume, it produces a slightly different signal of a phase transition compared to MMMC. The thermodynamical temperature $`T_{thd}`$, equation (1), cannot be accessed directly in an experiment. For the experimental comparison we need to find a related quantity which would keep the information on the behavior of $`T_{thd}`$ . Such a quantity, which we call the apparent temperature $`T_{app}`$, is thus not a temperature in the sense of thermodynamics. In this work we show experimentally accessible ”S-shapes” of the $`CES`$ $`e^{*}(T_{app})`$ extracted from incomplete fusion reactions resulting from 701 MeV <sup>28</sup>Si + <sup>100</sup>Mo . We plot $`e^{*}`$ vs. $`T_{app}`$, where $`T_{app}`$ (the apparent temperature) is the slope of the raw evaporation spectra. The details of the experiment and the extraction of the needed parameters can be found in . Here we outline some of the important features. Heavy evaporation residues were detected at forward angles; therefore this experiment does not probe multi-fragment final states. Charged particles (including IMFs) and neutrons were detected in concentric 4$`\pi `$-detectors. The excitation energy of the source was deduced from linear momentum reconstruction. The raw spectra of protons, deuterons, tritons and alpha particles were fitted with a three moving source prescription. The data at backward angles are well described by a surface-evaporating Maxwellian moving source: $$\frac{d\sigma }{dE_{kin}}\propto (E_{kin}-B)e^{-(\frac{E_{kin}-B}{T_{app}})},$$ (2) where $`E_{kin}`$ is the center of mass kinetic energy of the particles, $`B`$ the Coulomb barrier and $`T_{app}`$, which is the slope of the raw spectra, is the desired apparent temperature. Figure 2 presents the excitation energy per nucleon $`e^{*}`$ versus $`T_{app}`$ for protons, deuterons, tritons and alpha particles. This representation of the data exhibits two noteworthy trends. The first trend concerns the general shape of these curves and the second is the horizontal displacement (along the $`T_{app}`$ axis) as one progresses from protons to deuterons to tritons and alpha particles. We shall focus on the first observation, although the second observation is also of interest and we shall briefly discuss it as well. We find that all four curves for the different particles show an ”S-shape” in the expected region of excitation energies (compare to fig. 1), but no backbending is seen. Here one needs to keep in mind that the experimental data points correspond to sources which are slightly changing with the excitation energy. From lowest to highest energy the mass number is growing from 105 to 122 and the charge from 47 to 54. 
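The extraction of T_app from a raw spectrum amounts to fitting the surface-Maxwellian form of Eq. (2). A hedged sketch follows; the barrier guess, starting values, and any input data are purely illustrative and not taken from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def surface_maxwellian(E, A, B, T_app):
    """d(sigma)/dE ~ A*(E - B)*exp(-(E - B)/T_app) for E > B, zero below the barrier (Eq. 2)."""
    dE = np.clip(E - B, 0.0, None)
    return A * dE * np.exp(-dE / T_app)

def fit_T_app(E_kin, counts, B_guess=5.0, T_guess=3.0):
    """Return the slope parameter T_app (same units as E_kin, here MeV) from a least-squares fit."""
    p0 = [counts.max(), B_guess, T_guess]
    popt, _ = curve_fit(surface_maxwellian, E_kin, counts, p0=p0, maxfev=5000)
    return popt[2]
```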
We expect this change in the mass and charge of the source to smear out the ”S-shape” with backbending shown in figure 1. Next we are going to perform a comparison to the MMMC-model simulation. The $`MMMC`$-model assumes that the compound system fragments quite early but the fragments remain stochastically coupled as long as they are in close contact. Consequently, the system is equilibrated inside a freeze-out volume. The size of this volume, which is a simulation parameter of $`MMMC`$, is in our energy region about 6 times the normal nuclear volume. This corresponds to an average maximum distance between the fragments of about $`2`$ fm. Here the nuclear interaction between the fragments drops to the point that subsequent mass exchange is unlikely. Then the fragments (which can be in excited states) leave this volume and may de-excite as they trace out Coulomb-trajectories. The ensuing formation of fragments is determined by the accessible phase space which is sampled with the Monte Carlo method using the Metropolis importance sampling. The experimental analysis of the data provides the values of the mass $`A`$, charge $`Z`$, excitation energy $`E^{*}`$, and angular momentum $`L`$ of the source , which are the input into the $`MMMC`$ simulation. The only simulation parameter of the model, the freeze-out radius $`R_f`$, was taken at its standard value of $`2.2A^{1/3}`$ fm; this means that we simulate a phase transition at constant volume. The results of $`MMMC`$ calculations, performed with these input values, were subjected to a software filter of the experimental set-up which, most importantly, selects only those events with one big residue. The mass of the residue was chosen to be $`A_{res}\ge 90`$, which is close to $`A_{res}`$ estimated from the experimental data (the experiment did not directly measure the mass of the residue). Figure 3 shows a comparison of the $`e^{*}(T_{app})`$ curves extracted from the experimental data for protons and $`\alpha `$-particles to the $`e^{*}(T_{app})`$ dependence deduced from the $`MMMC`$-model using its standard parameters. Also the experimental uncertainties for the proton and alpha curves are given. The horizontal bars give the statistical uncertainty in extracting the temperature (slope) from the raw experimental spectra. The vertical bars (here only given at the lowest and highest proton or alpha point) indicate the systematic difference of the excitation energy extracted by the ”top-down” and ”bottom-up” procedures, respectively, employed in ref. . The two alternative methods lead essentially to an up or down shift of the $`CES`$ curve without changing the main structure of the curves. The similarity of the shapes of the experimental and simulated $`CES`$ $`e^{*}(T_{app})`$ is quite evident. The differences between the shapes of these curves and the parabolic dependence (dotted curve) expected for a simple Fermi gas are clearly seen, indicating that some additional degrees of freedom, which are apparently included in the $`MMMC`$-model, become significant in this energy range. The theoretical value of $`T_{app}`$ was extracted from fitting, as was done for the raw experimental spectra. It is worth noting that the calculated temperatures $`T_{app}`$, extracted from the Maxwellian fits, are close though not identical to the unique thermodynamical temperatures $`T_{thd}`$ from equation (1), as can be seen from comparing figure 1 and figure 3. The curve in figure 1 is calculated for the mass and charge corresponding to the highest value of the experimental energies, but for the whole energy range. 
The values of $`R_f`$ and $`A_{res}`$ do not influence the general shape of the calculated $`e^{*}(T_{app})`$ curves. However, the $`e^{*}(T_{app})`$ curves shift along the $`T_{app}`$-axis if different values of these parameters are used. The shifts produced by reasonable changes in $`A_{res}`$ are larger than those produced by reasonable changes in $`R_f`$. We checked that the anomaly in the $`CES`$ is not due to the changes of the angular momentum from $`L=18.2`$ to $`48.8\hbar `$. It exists also at $`L=0`$. While the similarity of the shapes of the experimental and simulated $`CES`$ $`e^{*}(T_{app})`$ is good for p’s and $`\alpha `$-particles, significant differences exist. The simulated curves for deuterons and tritons (not shown here) have the same shape but are shifted towards lower values of $`T_{app}`$. The higher $`T_{app}`$ values of the experimental deuteron and triton spectra might be an indication that the production of these less bound fragments occurs in an earlier, hotter stage of the reaction, an ingredient not included in the $`MMMC`$. We have also compared the multiplicities of neutrons $`M_n`$, protons $`M_p`$, deuterons $`M_d`$, tritons $`M_t`$ and alphas $`M_\alpha `$ calculated with $`MMMC`$ with the experimental values. The total number of the evaporated particles is the same in the calculation and experiment. The model overestimates $`M_p`$ by approximately a factor of 1.3 and underestimates $`M_\alpha `$ by a factor of 2. The $`MMMC`$ calculation reproduces $`M_n`$ and $`M_t`$ at all values of $`e^{*}`$. On the other hand, the values of $`M_d`$ are not reproduced by the model calculation, which systematically predicts values which are too high. In this context one should keep in mind that in $`MMMC`$ we treat deuterons as spherical nuclei with normal nuclear matter density. This may overestimate their stability. The experimental data also suggest an association between the onset of significant IMF production and the upswing in the $`e^{*}(T_{app})`$ dependence. This is seen in figure 5 where the measured absolute IMF-multiplicities ($`M_{IMF}`$) associated with the experimentally selected events which produce a large residue are compared to the absolute $`M_{IMF}`$ of the corresponding events from the model calculation. (The smooth dependence of $`M_{IMF}`$ on the excitation energy underlines the high statistical quality of both experimental and simulated data.) Both the data and the calculations exhibit a dramatic increase in $`M_{IMF}`$ for $`e^{*}`$ between about $`1.5`$ and $`3`$ MeV/A. The primary difference between the experimental data and the model predictions is that the $`M_{IMF}`$’s rise at higher $`e^{*}`$-values for the model predictions than they do for the experimental data. Another point worth noting is that the values of $`M_{IMF}`$ in these decay-channels are less than $`0.1`$. In other words, most (more than $`90`$%) of the selected events (both in the experiment and in the model calculation) have no IMF. Besides the overall agreement between the experimental and theoretical $`CES`$, some uncertainty in the interpretation remains. There are too few experimental data points outside the ”S-shape” region to interpret the data unambiguously. Furthermore, there is a small change in the charge and mass of the source for different excitation energies. Therefore it is desirable to perform a similar experimental analysis covering a larger range of excitation energies and selecting a strictly constant $`A_{source}`$ and $`Z_{source}`$ for the whole energy range. 
Finally, we would like to point out that the extracted temperatures $`T_{app}`$ for $`p`$ and $`\alpha `$-particles in related earlier experimental work by the Texas A&M group exhibit trends similar to those presented here. These data are for similar compound-nucleus masses, ranging from 109 to 128, and in the same energy region. We plot the data for the apparent temperatures $`T_{app}(e^{*})`$ for alphas in figure 4. The proton data (not plotted here) also show a similar backbending. Despite the large error-bars this shape anomaly had already been noted and was well reproduced by $`MMMC`$ in fig. 13 of ref. . In this paper we have shown that a strong anomaly exists in the shape of the experimental $`CES`$ $`e^{*}(T_{app})`$ for the apparent temperature. This ”S-shape” suggests that the relevant phase space becomes enlarged in the region of $`e^{*}=2-3`$ MeV/A. In terms of thermodynamics this is associated with a phase transition for this isolated, strongly interacting quantal system. The MMMC-model reproduces the general shape of the experimental $`e^{*}(T_{app})`$ curve at the right excitation energies. This supports the hypothesis of strong stochastic mixing of the various fragmentation channels and the statistical equilibration at freeze-out. As the production of intermediate mass fragments increases dramatically in this region of excitation energy, we associate the ”S-shape” with the additional phase space opened by IMF production. This association is also supported by the results of the $`MMMC`$ calculation, where the ”S-shape” in $`e^{*}(T)`$ is seen in the energy region of strongly increasing $`M_{IMF}`$. More to the point, the ”S-shape” in the caloric equation of state associated with IMF production is even seen in the evaporation spectra of different particles in events which in more than $`90`$% of the cases have no IMF. In addition, the calculations are rather insensitive to variations of the basic model parameter, the freeze-out volume, within broad limits, and thus no adjustment of this parameter was necessary to reproduce the general shape of the experimental caloric equation of state. Prior to this work the primary evidence of the validity of the concept of stochastic coupling of two moving nuclei in proximity was the finding of a strong pre-barrier surface friction in deep inelastic collisions . Experimental support for the equilibration hypothesis was also given at higher excitation energies in . While this work adds weight to the argument that strong stochastic mixing and equilibration exist up to rather extended configurations of the fragmented source, we do not believe that the issue is closed. For while the general shape of the caloric equation of state was reproduced by the $`MMMC`$ model, differences exist between particle types, which may imply the existence of a dynamics or a (mean) sequence which is not dealt with by the single freeze-out configuration of the $`MMMC`$ model. Taking all the findings together, namely the anomaly in all four spectra (proton, deuteron, triton, and alpha) at the same excitation energy as predicted by $`MMMC`$ and also the earlier data by the Texas A&M group (fig. 2, 3 and 4), we see strong support for our interpretation of the ”S-shape” in $`e^{*}(T_{app})`$ as a signal of a phase transition in nuclear fragmentation. The transition is from pure evaporation to asymmetric fission, which is associated with the onset of IMF emission. Nevertheless, additional experimental and theoretical confidence is desirable. O.S. 
is grateful to GANIL for the friendly atmosphere during her stays there. This work is supported in part by IN2P3/CNRS.
no-problem/9901/cond-mat9901057.html
ar5iv
text
# The Phase Separation Scenario for Manganese Oxides ## Abstract Recent computational studies of models for manganese oxides have revealed a rich phase diagram, not anticipated in early calculations in this context performed in the 1950’s and 60’s. In particular, the transition between the antiferromagnetic insulator state of the hole-undoped limit and the ferromagnetic metal at finite hole-density was found to occur through a mixed-phase process. When extended Coulomb interactions are included, a microscopically charge inhomogeneous state should be stabilized. These phase separation tendencies, also present at low electronic densities, influence the properties of the ferromagnetic region by increasing charge fluctuations. Experimental data reviewed here using several techniques for manganites and other materials are consistent with this scenario. Similarities with results previously discussed in the context of cuprates are clear from this analysis, although the phase segregation tendencies in manganites seem stronger. \[To appear in Science\] I. Introduction Hole-doped manganese oxides with a perovskite structure have stimulated considerable scientific and technological interest due to their exotic electronic and magnetic properties . These manganites have a chemical composition $`\mathrm{R}_{1-\mathrm{x}}\mathrm{A}_\mathrm{x}\mathrm{MnO}_3`$, with R a rare-earth ion and A a divalent ion such as Ca, Sr, Ba, or Pb. They present an unusual magnetoresistance (MR) effect, whereby magnetic fields induce large changes in their resistivity $`\rho `$, a property that may find applications in sensor technologies such as that utilized in magnetic storage devices. For example, in La-Ca-Mn-O thin films, the ratio $`(\rho (0)-\rho (\mathrm{H}))/\rho (\mathrm{H})`$, with $`\rho (\mathrm{H})`$ the resistivity in a magnetic field H, can be as large as $`10^3`$ at 77K (H=6T). The term “colossal” magnetoresistance (CMR) has been coined to describe this effect . The unusual properties of manganese oxides challenge our current understanding of transition-metal oxides, and define a basic research problem that involves an interplay between the charge, spin, phononic, and orbital degrees of freedom. Manganites have a rich phase diagram that includes a well-known ferromagnetic (FM) phase that spans a robust range of electronic densities. The CMR effects have been observed particularly at small hole densities x but also at $`\mathrm{x}\approx 0.5`$, which are the density limits of the FM-phase. The strength of the MR effect increases as the electronic bandwidth is decreased through chemical substitution , which also reduces the Curie critical temperature $`\mathrm{T}_\mathrm{C}`$. At hole concentrations $`\mathrm{x}\approx 0.5`$, an antiferromagnetic (AF) charge-ordered (CO) insulating state, discussed by Goodenough , is involved in the CMR effect, which at these densities is extraordinarily large . In the undoped limit, the $`\mathrm{Mn}^{3+}`$ ions have four electrons in the 3d-shell, and they are surrounded by oxygens $`\mathrm{O}^{2-}`$ forming an octahedron. This crystal environment breaks the full rotational invariance, causing the two $`\mathrm{e}_\mathrm{g}`$- and three $`\mathrm{t}_{2\mathrm{g}}`$-orbitals to split. The strong Hund coupling ($`\mathrm{J}_\mathrm{H}`$) in these systems favors the spin alignment of the four electrons in the active shell; on average three electrons populate the $`\mathrm{t}_{2\mathrm{g}}`$-orbitals and one occupies the $`\mathrm{e}_\mathrm{g}`$-states. 
The $`\mathrm{t}_{2\mathrm{g}}`$-electrons are mainly localized, whereas the $`\mathrm{e}_\mathrm{g}`$-electrons are mobile and use O p-orbitals as a bridge between Mn ions. When the manganites are doped with holes through chemical substitution, $`\mathrm{Mn}^{4+}`$-ions with only three $`\mathrm{t}_{2\mathrm{g}}`$-electrons are formed. In addition, in the undoped limit the $`\mathrm{e}_\mathrm{g}`$-degeneracy is split due to Jahn-Teller (JT) distortions; as a consequence, a one-orbital approximation has been frequently used since the earliest theoretical studies . For these reasons, typical electronic models for the manganites include at least a kinetic energy contribution for the $`\mathrm{e}_\mathrm{g}`$-electrons, regulated by a hopping amplitude $`\mathrm{t}`$, and a strong $`\mathrm{J}_\mathrm{H}`$ coupling contribution between the $`\mathrm{e}_\mathrm{g}`$\- and $`\mathrm{t}_{2\mathrm{g}}`$-spins. The localized spin is large enough (3/2) to be approximated by a classical spin, which simplifies the calculations. Here this model will be simply referred to as the “one-orbital” model, although other names are sometimes used, such as FM Kondo model. This formalism leads to a natural explanation for the FM-phase of the manganites, because carriers energetically prefer to polarize the spins in their vicinity. When an $`\mathrm{e}_\mathrm{g}`$-electron jumps between nearest-neighbor ions, it does not pay an energy $`\mathrm{J}_\mathrm{H}`$ if all of the spins involved are parallel. The hole-spin scattering is minimized in this process, and the kinetic energy of the mobile carriers is optimized. This mechanism is usually referred to as double-exchange (DE) . As the carrier density grows, the FM distortions around the holes start overlapping and the ground state becomes fully ferromagnetic. Currently there is not much controversy about the qualitative validity of DE to stabilize a FM-state. However, several experimental results suggest that more complex ideas are needed to explain the main properties of manganese oxides. For instance, above $`\mathrm{T}_\mathrm{C}`$ and for a wide range of densities, several manganites exhibit insulating behavior of unclear origin that contributes to the large MR results. The low-temperature (T) phase diagram of these materials has a complex structure , not predicted by DE, that includes insulating AF- and CO-phases, orbital ordering, FM-insulating regimes, and, as discussed extensively below, tendencies toward the formation of charge inhomogeneities, even within the FM-phase. To address the strong MR effects and the overall phase diagram of manganites, the DE framework must be supplemented with more refined ideas. II. Phase Separation in the One-Orbital Model Motivated by new experimental research on manganese oxides, there has been considerable theoretical work in the analysis of models for these materials. Several many-body techniques for modeling strongly correlated electron systems were developed and improved during recent efforts to understand high-temperature superconductors, and thus it is natural to apply some of these methods to manganite models. Of particular relevance here are the computational techniques that allow for an unbiased analysis of correlated models on finite clusters . The first comprehensive computational analysis of the one-orbital model was presented by Yunoki et al. using classical spins for the $`\mathrm{t}_{2\mathrm{g}}`$-electrons and the Monte Carlo (MC) technique. Several unexpected results were found in this study. 
In particular, when calculating the density of $`\mathrm{e}_\mathrm{g}`$-electrons $``$$`\mathrm{n}`$$``$ $`=(1\mathrm{x})`$ as the chemical potential $`\mu `$ was varied, it was surprising to observe that some densities could not be stabilized; in other words, $``$$`\mathrm{n}`$$``$ was found to change discontinuously at special values of $`\mu `$. These densities are referred to as “unstable.” Alternative calculations in the canonical ensemble where the density is fixed to arbitrary values, rather than being regulated by $`\mu `$, showed that at unstable densities, the resulting ground state is $`not`$ homogeneous, but it is separated into two regions with differing densities. The two phases involved correspond to those that bound the unstable range of densities \[9-11\]. This phenomenon, which has been given the name of “phase separation” (PS), appears in many contexts, such as the familiar liquid-vapor coexistence in the phase diagram of water, and it is associated with the violation of the stability condition $`\kappa ^1=`$$`\mathrm{n}^2`$$`^2\mathrm{E}/\mathrm{n}^2>0`$, with $`\mathrm{E}`$ the energy of the system per unit volume, and $`\kappa `$ the compressibility. In the realistic limit $`\mathrm{J}_\mathrm{H}/\mathrm{t}`$$``$$`1`$, phase separation occurs between hole-undoped $``$$`\mathrm{n}`$$``$$`=1`$ and hole-rich $``$$`\mathrm{n}`$$``$$`<`$$`1`$ phases \[9-11\]. Although the $`\mathrm{e}_\mathrm{g}`$\- and $`\mathrm{t}_{2\mathrm{g}}`$-spins of the same ion tend to be parallel at large $`\mathrm{J}_\mathrm{H}`$, their relative orientation at one lattice-spacing depends on the density. At $``$$`\mathrm{n}`$$``$$`=`$$`1`$, an AF arrangement results because the Pauli principle precludes movement of the electrons if all spins are aligned. However, at stable $``$$`\mathrm{n}`$$``$$`<`$$`1`$ densities, DE forces the spins to be parallel, as computer studies have indicated \[9-11\]. Yunoki and Moreo have shown that if an additional small Heisenberg coupling among the localized spins is introduced, PS occurs also at small $``$$`\mathrm{n}`$$``$, this time involving FM- ($``$$`\mathrm{n}`$$``$$`>`$$`0`$) and electron-undoped AF-states. Phase segregation near the hole-undoped and fully-doped limits implies that a spin-canted state for the one-orbital model is not stable. Other authors arrived to similar conclusions after observing phase segregation tendencies using several analytical techniques \[12-14\]. If a spin-canted state is unequivocally found in experiments, mechanisms other than DeGennes’ may be needed to explain it. Note also that a canted state is difficult to distinguish experimentally from a mixed AF-FM state. It is interesting that PS behavior is not unique to manganese oxides. Indeed, the existence of PS in AF rare-earth compounds has been addressed by Nagaev for many years . In these materials, there is a small density of electrons interacting with localized spins. Actually, $`\mathrm{Eu}_{1\mathrm{x}}\mathrm{Gd}_\mathrm{x}\mathrm{Se}`$ has a very large MR effect similar to that observed in manganites . Calculations in this context were performed using the one-orbital model mainly in the limit where the localized-conduction spin-spin coupling is smaller than the bandwidth (equivalent to $`\mathrm{J}_\mathrm{H}`$$``$$`\mathrm{t}`$) and at small $``$$`\mathrm{n}`$$``$. Nevertheless, some of these results have been discussed also in the context of manganites . 
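The way phase separation is identified in these computational studies can be summarized by a small sketch. Given a grand-canonical scan of the density versus the chemical potential (for instance from a Monte Carlo run on a finite cluster), densities inside a discontinuous jump cannot be stabilized; equivalently, in a canonical calculation the energy per site e(n) loses convexity, violating the stability condition quoted above (positive inverse compressibility). The input arrays, the jump tolerance, and the use of simple finite differences are assumptions of this illustration.

```python
import numpy as np

def unstable_density_window(mu, n_of_mu, jump_tol=0.05):
    """Largest discontinuity of <n>(mu); returns (n_low, n_high) or None if below jump_tol."""
    dn = np.diff(n_of_mu)
    i = int(np.argmax(np.abs(dn)))
    return (n_of_mu[i], n_of_mu[i + 1]) if abs(dn[i]) > jump_tol else None

def unstable_densities(n, e_of_n):
    """Densities where d^2 e/dn^2 < 0, i.e. where the compressibility criterion is violated."""
    d2e = np.gradient(np.gradient(e_of_n, n), n)
    return n[d2e < 0.0]
```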
Note that in the recently established phase diagram of the one-orbital model, PS occurs at both high- and low-electronic density . Because the $`\mathrm{n}`$$``$$`1`$ limit corresponds to the dilute AF semiconductors mentioned above, the computational studies confirm that these materials should also exhibit PS tendencies. A broad distribution of FM cluster sizes should be expected, with a concomitant distribution of electrons trapped in those clusters . Gavilano et al. have recently reported a two-phase mixed regime in these materials that may be related to intrinsic PS tendencies. Analogous results were also observed in other diluted magnetic semiconductors . The analysis of experimental data in this context should certainly allow for the possibility of large-scale inhomogeneous states. III. Phase Separation in the Two-orbitals Model Most of the theoretical studies for manganites have been carried out using the one-orbital model, which certainly provides a useful playground for the test of qualitative ideas. However, quantitative calculations must necessarily include two active $`\mathrm{e}_\mathrm{g}`$-orbitals per Mn-ion to reproduce the orbital-ordering effects known to occur in these materials . In addition, it has been argued that dynamical JT effects cannot be neglected , and the electron-JT-phonon coupling $`\lambda `$ should be important for the manganites. Although computational studies accounting for JT-phonons are at their early stages, some illustrative results are already available. Yunoki et al. recently reported the low-temperature phase diagram of a two-orbital model using the Monte Carlo method, and analyzed the results in a manner similar to the one-orbital case. The results are reproduced in Fig.1 for a one-dimensional (1D) system at large Hund coupling. The phase diagram is rich and includes a variety of phases such as metallic and insulating regimes with orbital order. The latter can be uniform, with the same combination of orbitals at every site, or staggered, with two combinations alternating between the even- and odd-sites of the lattice at $`n=1`$. Recently, our group observed that the density of states exhibits $`pseudogap`$ behavior caused by the PS tendencies, both in the one- and two-orbital cases, in agreement with photoemission experiments for layered manganites . Of special importance for the discussion here are the regions of unstable densities. Phase separation appears at small $`\mathrm{e}_\mathrm{g}`$-densities between an electron-undoped AF-state and a metallic uniform-orbital-ordered FM-state. The latter phase itself coexists at larger densities and intermediate values of $`\lambda `$ with an insulating ($``$$`\mathrm{n}`$$``$$`=`$$`1`$) staggered-orbital-ordered FM-state, in an orbital-induced PS process . The overall results are qualitatively similar to those obtained with other model parameters, and in studies of 2D and 3D systems. Overall, PS tendencies are strong both in the one- and two-orbital models, and over a wide range of couplings. Similar tendencies have been recently observed including large on-site Hubbard interactions , which is reasonable because at intermediate and large electron-phonon coupling a negligible probability of on-site double-occupancy was found . The macroscopic separation of two phases with different densities, and thus different charges, should actually be prevented by long-range Coulombic interactions, which were not incorporated into the one- and two-orbital models discussed thus far. 
Even including screening and polarization effects, a complete separation leads to a huge energy penalty. This finding immediately suggests that the two large regions involved in the process will break into smaller pieces to spread the charge more uniformly. These pieces are hereafter referred to as polarons if they consist of just one carrier in a local environment that has been distorted by its presence. This distortion can involve nearby spins (magnetic polaron), nearby ions (lattice polaron), or both, in which case the object will simply be referred to as a “polaron.” However, the terms “clusters” or “droplets” are reserved for extended versions of the polarons, characteristic of a PS regime, containing several carriers inside a common large magnetic distortion, lattice distortion, or both. The present discussion suggests that in the regime of unstable densities the inclusion of extended Coulomb interactions will lead to a stable state, with clusters of one phase embedded in the other \[see also (15)\]. It is expected that the competition between the attractive DE tendencies among carriers and the Coulomb forces will determine the size and shape of the resulting clusters. Either sizable droplets or polarons may arise as the most likely configuration. The stable state resulting from the inclusion of extended Coulomb interactions in an otherwise PS-unstable regime will be referred to as a charge-inhomogeneous (CI) state. However, the ideas presented here will still be described as the “PS scenario,” with the understanding that only microscopic phase segregation is the resulting net effect of the DE-Coulomb competition. Related ideas have previously been discussed in the context of the cuprates, with attractive interactions generated by antiferromagnetism or phonons. An exception to the existence of only purely microscopic effects occurs if the competing phases have approximately the same density, as observed experimentally at $`\mathrm{x}=0.5`$ (discussed below). In this case, large-scale PS can be expected. Note also that the CI-state is certainly different from the metastable states that arise in a standard first-order transition. Figure 2 contains a cartoon-like version of possible charge arrangements in the CI-state, which are expected to fluctuate in shape, especially at high temperature where the clustering is dynamic. Unfortunately, actual calculations supporting a particular distribution are still lacking. Nevertheless, the presently available results are sufficient to establish dominant trends and to allow a qualitative comparison between theory and experiment, as shown below.

Phase separation in manganese oxides has clear similarities with the previously discussed charge inhomogeneities observed in copper and nickel oxides. Actually, studies of 1D generalizations of the t-J model by Riera et al. showed that as the localized spin magnitude S grows, the phase diagram is increasingly dominated by either FM or PS tendencies. The importance of PS arises from the dominance of the Heisenberg interactions over the kinetic energies as S increases, which causes holes to be expelled from the AF regions because they damage the spin environment. The tendency toward phase segregation decreases across the transition-metal row, from a strong tendency in Mn to a weak tendency in Cu. The stripes observed in cuprates could certainly appear in manganites as well through the competition of the DE attraction and the Coulomb repulsion among clusters.
IV. Influence of Phase Separation on the Ferromagnetic Phase

A critical aspect of the scenario discussed here is the influence that the low-temperature PS regime exerts on the behavior of electrons at higher temperatures, and especially on the ordered phases which neighbor PS regimes. As an illustration, consider the lines of constant $`\kappa \langle \mathrm{n}\rangle ^2=\mathrm{d}\langle \mathrm{n}\rangle /\mathrm{d}\mu `$ of the 1D one-orbital model at large $`\mathrm{J}_\mathrm{H}`$ (Fig.3A). Because PS occurs through the divergence of $`\kappa `$, this quantity is naturally largest at those densities where PS is observed (see above). A large $`\kappa `$ implies that strong charge fluctuations occur, because $`\kappa \propto (\langle \mathrm{N}^2\rangle -\langle \mathrm{N}\rangle ^2)`$, with $`\mathrm{N}`$ the total number of particles. A characteristic crossover temperature for ferromagnetism, $`\mathrm{T}_\mathrm{C}^{*}`$, occurs where the zero-momentum spin structure factor starts growing very rapidly as the temperature is reduced. $`\mathrm{T}_\mathrm{C}^{*}`$ is expected to become truly critical in higher-dimensional systems, where a finite critical temperature for PS is also expected to exist. Figure 3A shows that the PS tendencies influence the neighboring FM-state because the compressibilities close to the PS regime, located at $`0.8\lesssim \langle \mathrm{n}\rangle \lesssim 1.0`$, are much larger than those at, e.g., $`\langle \mathrm{n}\rangle =0.5`$. This result implies that even within the FM-phase, which is uniform when time-averaged, there is a dynamical tendency toward cluster formation because $`\kappa `$ is large. The same situation occurs for $`\mathrm{T}>\mathrm{T}_\mathrm{C}^{*}`$ and at low hole-densities. This effect should influence transport properties, including the resistivity. Although it is difficult to evaluate $`\rho `$ with finite-cluster techniques, crude estimates can be made using the inverse of the zero-frequency Drude weight found from the optical conductivity. As an example, results are shown in Fig.3B for the one-orbital model. This estimate of $`\rho `$ produces the qualitatively expected results, namely, it behaves as an insulator at small x and rapidly decreases as x increases, turning smoothly into a metal. Studies by our group using more sophisticated techniques, connecting the cluster with ideal metals, have recently produced qualitatively similar data. The results compare well with experiments for Sr-doped compounds. Starting from a regime with dynamical cluster formation above $`\mathrm{T}_\mathrm{C}`$, the metallic state can be obtained if the clusters grow in size as T is reduced, eventually reaching the limit where percolation is possible. At this temperature, the carriers move over long distances and the metallic state is reached. The same mechanism arises in polaronic theories \[see also (32)\].

V. Comparing Theory with Experiments: the Phase Diagrams

The computational results are consistent with several experiments on a variety of manganese oxides. Consider, for example, $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{MnO}_3`$. The experimentally observed sharp increase in $`\rho `$ as $`\mathrm{x}`$ decreases towards the undoped limit, both above and below $`\mathrm{T}_\mathrm{C}`$, is difficult to explain if the only effect of the correlations were to induce a reduced effective electronic hopping $`\mathrm{t}_{\mathrm{eff}}=\mathrm{t}\mathrm{cos}(\theta /2)`$, where $`\theta `$ is the angle between the localized spins at nearest-neighbor sites.
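As an aside to the compressibility argument above, the sketch below illustrates how $`\kappa `$ is commonly estimated from number fluctuations in a grand-canonical simulation. The particle-number samples, the inverse temperature, and the cluster size are synthetic stand-ins chosen only to make the snippet run; the normalization follows from the definition $`\kappa \langle \mathrm{n}\rangle ^2=\mathrm{d}\langle \mathrm{n}\rangle /\mathrm{d}\mu `$ quoted above combined with the standard grand-canonical fluctuation relation, and is one common convention rather than the one necessarily used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the particle-number time series N measured during a
# grand-canonical Monte Carlo run on a small cluster (synthetic data).
n_sites = 20
beta = 50.0                                   # assumed inverse temperature, units of 1/t
N_samples = rng.poisson(lam=16.0, size=10_000).astype(float)

n_mean = N_samples.mean() / n_sites           # <n>
fluct = N_samples.var()                       # <N^2> - <N>^2

# From kappa <n>^2 = d<n>/dmu and d<n>/dmu = beta (<N^2> - <N>^2) / n_sites:
kappa = beta * fluct / (n_sites * n_mean**2)
print(f"<n> = {n_mean:.3f}, <N^2>-<N>^2 = {fluct:.2f}, kappa ~ {kappa:.1f}")
```

A sharp growth of this estimate as $`\langle \mathrm{n}\rangle `$ approaches the PS window is, in this language, the signature of the strong charge fluctuations discussed above.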
In addition, the insulating properties of the intermediate region $`0.0<\mathrm{x}<0.16`$ do not fit into the simpler versions of the DE ideas. This regime is important because the CMR effect is maximized at the lowest $`\mathrm{T}_\mathrm{C}`$, that is, at the boundary between the metallic and insulating regions. Note also that recent experiments for hole-densities slightly above x=0.5 showed that the ground state of the $`(\mathrm{La},\mathrm{Sr})`$-based manganese oxide is an A-type AF-metal with uniform $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$ orbital-order (Fig.4A). This phase, as well as the orbital-ordered x=0 A-type AF-insulator, does not appear in the one-orbital model. For these reasons, $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{MnO}_3`$ does not seem to be a typical DE material when considered over all available densities, although at $`\mathrm{x}\approx 0.3`$ Furukawa showed that it has DE characteristics. However, the experimental results mentioned above can be more naturally accounted for once PS tendencies and the coupling with JT-phonons are considered (narrower-band materials are more complicated because of their x=0.5 CO-state, and they will be discussed below). Actually, it has already been argued that $`\rho `$ should rapidly grow as $`\mathrm{x}`$ decreases due to the strong charge fluctuations at small $`\mathrm{x}`$ caused by the nearby phase-segregation regime (Fig.3B). In this context, the insulating state above $`\mathrm{T}_\mathrm{C}`$ of the lightly hole-doped $`(\mathrm{La},\mathrm{Sr})`$-compound can be rationalized as formed by clusters of one phase (FM or AF) embedded in the other. Even the experimentally observed A-type AF-metallic $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$-ordered phase at $`\mathrm{x}\approx 0.5`$ (Fig.4A) can be related to the phase with similar characteristics near x=0.5 found in the theoretical calculations (Fig.1). Although simulations with the realistic hopping amplitudes needed to stabilize an A-type AF-state in a 3D environment have not yet been performed, at least the 1D and 2D FM tendencies, as well as the stabilization of a uniform orbital-ordering (Fig.1), are clear. If one assumes phenomenologically that $`\lambda /\mathrm{t}`$ decreases with hole doping, the dashed line in Fig.1 runs through the proper series of experimentally observed phases, namely, an insulating staggered orbital-ordered state at x=0, a charge-segregated regime at small x, a metallic orbital-disordered FM-phase at a higher density, and finally the $`\mathrm{x}\approx 0.5`$ orbitally ordered FM-state compatible with A-type AF-order in dimensions $`\mathrm{D}<3`$. If it were possible to complete the phase diagram of $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{MnO}_3`$ by synthesizing $`\mathrm{x}>0.6`$ samples, the calculations predict a new mixed phase, involving the A-type AF-metal with $`\mathrm{x}<1`$ and a G-type AF-insulator with $`\mathrm{x}=1`$, where large MR effects could potentially occur.

VI. Experimental Evidence of Charge Inhomogeneities

Independently of the development of the theoretical ideas on PS, a large body of experimental evidence has accumulated that suggests the existence of charge inhomogeneities in manganese oxides, either in macroscopic form or, more often, through the presence of small clusters of one phase embedded in another. The results have been obtained on several materials, at a variety of temperatures and densities, and using a large array of microscopic and macroscopic experimental techniques.
These studies have individually concentrated on particular parameter regions, and the results have rarely been discussed in comparison with similar results obtained in other phase regimes. However, once all of these experimental data are combined, it appears that the manganite metallic FM-phase is surrounded, both in temperature and density, by charge-inhomogeneous regions involving FM clusters coexisting with another phase, which in some cases is AF. It would be unnatural to search for special justifications for each one of these experimental results. The most economical hypothesis is to explain the data as arising through a single effect, such as tendencies to PS that compete strongly with ferromagnetism both at large and small x, as well as above $`\mathrm{T}_\mathrm{C}`$. The experimental details are the following:

VI.1 Sr-doped Manganese Oxides:

Part of the phase diagram of $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{MnO}_3`$ is schematically shown in Fig.4B, including, at their proper location in T and $`\langle \mathrm{n}\rangle `$, some of the descriptions of charge inhomogeneity found in the literature, along with the experimental techniques that have reported such inhomogeneity. They include results by Egami et al., where evidence for an inhomogeneous FM-state and small polarons both at high and low T was reported using pair-distribution-function (PDF) techniques. In addition, a recent analysis of the optical conductivity of $`\mathrm{La}_{7/8}\mathrm{Sr}_{1/8}\mathrm{MnO}_3`$ by Jung et al. observed PS features in the data. Magnetic, transport, and neutron scattering experiments by Endoh et al. on $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$ revealed PS tendencies between two FM-phases, one metallic and the other insulating. Other authors have also reported inhomogeneities in Sr-doped compounds.

VI.2 Ca-doped Manganese Oxides:

In Fig.5, the phase diagram of $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{MnO}_3`$ is sketched. Although previous reports contain more details, for the present discussion only the dominant qualitative aspects are needed. The list is not exhaustive, but it is sufficient to illustrate the conspicuous presence of charge inhomogeneities in this compound near the FM-phase, and even inside it. Overall, the analysis of available data for Ca-doped $`\mathrm{LaMnO}_3`$ leads to conclusions similar to those presented for their Sr-doped counterpart. The details are the following (postponing the analysis for $`\mathrm{x}\approx 0.5`$, which requires special discussion). Consider first the small-angle neutron scattering (SANS) results at x=0.05 and 0.08 and low T by M. Hennion et al. They revealed the existence of a liquid-like distribution of FM droplets with a density 1/60 that of the holes. In a similar regime of parameters, nuclear magnetic resonance (NMR) experiments by Allodi et al. reported the coexistence of FM and AF features and the absence of spin-canting. SANS results at x=1/3 and $`\mathrm{T}>\mathrm{T}_\mathrm{C}`$ by Lynn et al. and De Teresa et al. observed a short (weakly T-dependent) FM correlation length, attributed to magnetic clusters $`10`$ to $`20\mathrm{\AA }`$ in diameter. Other experimental results, not reviewed here, agree with this conclusion. Even within the metallic FM-phase ($`\mathrm{T}<\mathrm{T}_\mathrm{C}`$), indications of charge inhomogeneities have been reported. Transport measurements by Jaime et al. were analyzed using a two-fluid picture involving polarons and free electrons.
$`\mu `$-spin relaxation and resistivity data by Heffner et al. were interpreted as produced by a multidomain sample. X-ray absorption results by Booth et al. provided evidence of coexisting localized and delocalized holes below $`\mathrm{T}_\mathrm{C}`$. Using Raman and optical spectroscopies, S. Yoon et al. found localized states in the low-T metallic FM-phase of several manganese oxides. Neutron scattering experiments reported an anomalous diffusive component in the data below $`\mathrm{T}_\mathrm{C}`$, which could be explained by a two-phase state. Fernandez-Baca et al. have shown that this diffusive component is enhanced as the $`\mathrm{T}_\mathrm{C}`$ of the considered manganite decreases. Actually, the low-energy component of the two-branch spin-wave spectrum observed at small x has similarities with the diffusive peak at x=1/3. For the large hole-density regime, neutron scattering by Bao et al. at $`\mathrm{x}\approx 0.8`$ using $`\mathrm{Bi}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{MnO}_3`$, which behaves similarly to $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{MnO}_3`$, found FM-AF features coexisting between 150 and 200 K. Optical measurements at $`\mathrm{x}>0.5`$ by Liu et al. reported similar features. Cheong and Hwang found a finite magnetization at low T and $`\mathrm{x}\approx 0.83`$ in the $`(\mathrm{La},\mathrm{Ca})`$-compound \[see also (52)\]. The system remains insulating, and the results could be compatible with spin-canted or mixed-phase states. More work at small electronic density is needed to clarify whether the phase segregation predicted by the theoretical calculations indeed appears in experiments.

VI.3 Manganese Oxides with a Charge-Ordered State Near x=0.5:

Results involving the “charge-ordered” AF-state at $`\mathrm{x}\approx 0.5`$ require special discussion. Here, the extraordinarily large CMR effect involves the abrupt destabilization of the CO-state by a magnetic field. Evidence for PS tendencies is rapidly accumulating in this region of the phase diagram of narrow-band manganese oxides. Several experiments for $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{MnO}_3`$ have already reported coexisting metallic FM and insulating CO-AF clusters near $`\mathrm{x}=0.5`$ (Fig.5). Another compound at the FM-CO boundary at low T is $`\mathrm{Pr}_{0.7}\mathrm{Ca}_{0.3}\mathrm{MnO}_3`$. Here synchrotron x-ray and neutron-powder diffraction results were attributed to the presence of FM clusters in the CO-phase. Exposure to x-rays produces a nonuniform PS phenomenon characteristic of two competing states, with an increasing size of the FM droplets and no evidence of spin-canting. This phenomenon is expected to appear at other hole densities as well. Related manganese oxides exhibit similar features. For example, the absorption spectra of thin films of $`\mathrm{Sm}_{0.6}\mathrm{Sr}_{0.4}\mathrm{MnO}_3`$ have been attributed to the formation of large clusters of the CO-state above its ordering temperature. In $`(\mathrm{La}_{0.5}\mathrm{Nd}_{0.5})_{2/3}\mathrm{Ca}_{1/3}\mathrm{MnO}_3`$, insulating CO and metallic FM regions coexist. However, consider $`\rho `$ at 300 K for $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{MnO}_3`$. A smooth connection between the undoped, lightly-doped, and heavily-doped compounds seems to exist even in this narrow-band material. No obvious precursors of the low-T CO-state have been reported for $`\mathrm{x}\approx 0.5`$.
The same occurs for $`\mathrm{Nd}_{0.5}\mathrm{Sr}_{0.5}\mathrm{MnO}_3`$, which is metallic above $`\mathrm{T}_\mathrm{C}`$, becomes FM upon cooling, and reaches the CO-state through a first-order transition upon further decreasing T. These results establish a possible qualitative difference between the MR effect at small x and at $`\mathrm{x}\approx 0.5`$. In the former, charge inhomogeneities appear above and below the critical temperatures, and the mutual influence of neighboring phases (notably FM and PS) is important. However, at $`\mathrm{x}\approx 0.5`$, the CO- and FM-states do not seem to have much influence on each other. It could even be that the low-T microscopic PS tendencies at $`\mathrm{x}\approx 0.5`$ are caused by metastabilities rather than by a stable CI-state. However, since the competing states at $`\mathrm{x}\approx 0.5`$ have similar $`\langle \mathrm{n}\rangle `$'s, charge inhomogeneities involving large clusters are possible, because Coulomb interactions will not prevent them (see above). Further experimental work is needed to clarify this situation. The theoretical study of competing FM and CO-AF states of narrow-band manganese oxides within a single formalism also represents a challenge for computational work. Preliminary results are promising, since a CE-type CO-state has recently been stabilized in computer simulations carried out at large electron-phonon coupling.

VI.4 Layered Manganites:

Tendencies toward PS in layered manganites have also been observed. Neutron scattering results for the bilayered $`\mathrm{La}_{1.2}\mathrm{Sr}_{1.8}\mathrm{Mn}_2\mathrm{O}_7`$ revealed a weak peak at the AF-momentum of the parent compound that coexists with the dominant FM signal. Recently, PS between A-type metallic and CE-type insulating CO-states was reported for $`\mathrm{LaSr}_2\mathrm{Mn}_2\mathrm{O}_7`$. In addition, studies of the one-layer material $`\mathrm{Sr}_{2-\mathrm{x}}\mathrm{La}_\mathrm{x}\mathrm{MnO}_4`$ observed direct evidence for macroscopic PS at small electronic density.

VII. The PS Scenario Compared with Other Theories for Manganites

The PS scenario is qualitatively different from other theories proposed to explain the CMR effects in manganese oxides. It improves on the simpler versions of the DE ideas by identifying charge inhomogeneities as the main effect competing with ferromagnetism, and by assigning the insulating properties above $`\mathrm{T}_\mathrm{C}`$, fundamental for the low hole-density CMR effect, to the influence of those competing phases. In particular, the compressibility increase above $`\mathrm{T}_\mathrm{C}`$ caused by PS leads to dynamical cluster formation. The ideas presented here also differ qualitatively from those by Millis et al., although there are common aspects. In the PS scenario, charge inhomogeneities over several lattice spacings, not contained in local mean-field approximations, are believed to be relevant for the description of the insulating state above $`\mathrm{T}_\mathrm{C}`$. In addition, the orbital ordering plays a key role in the results presented in Fig.1. Although the importance of the JT-coupling introduced by Millis et al. is shared by both approaches, in the PS scenario a state formed by independent local polarons is a special case of a more general situation where clusters of various sizes and charges are possible. These fluctuations increase as $`\mathrm{T}_\mathrm{C}`$ decreases, explaining the optimization of the MR effect at the boundary of the FM-phase.
Note that the regime of small x is crucial for distinguishing between the PS scenario and other polaronic theories based on more extended polarons and percolative processes. Other theories are based on electronic localization effects arising from the off-diagonal disorder intrinsic to the DE model, and from nonmagnetic diagonal disorder caused by the chemical substitutions. These effects lead to a large MR under some approximations. However, the calculations are difficult because they involve both strong couplings and disorder, and the prominent cluster and polaron formation found in experiments has not been addressed in this framework. A better starting point for manganites may be a formalism that accounts for the tendency to develop charge inhomogeneities before including disorder.

VIII. Conclusions

A variety of recent calculations have found PS tendencies in models for manganites, usually involving FM and AF phases. These tendencies, which should lead to a stable but microscopically inhomogeneous state upon the inclusion of Coulomb interactions, compete strongly with ferromagnetism in the phase diagrams and are expected to increase the resistivity substantially. In particular, when two-orbital models are studied, the results are in good agreement with the large list of experimental observations reviewed here. Tendencies toward charge-inhomogeneous states exist in real manganese oxides all around the FM-phase in the temperature-density phase diagram. The computer simulations have shown that the region with PS tendencies substantially influences the stable FM-phase by increasing its compressibility, an aspect that can be tested experimentally. This provides a rationalization for the experimental observation of a large MR effect at the boundaries of the FM-phase. The presence of short-range charge correlations is certainly a crucial feature of the PS scenario.

However, considerable work still remains to be done. The inclusion of extended Coulomb interactions and the stabilization of the x=0.5 charge-ordered state are the next challenges for computational studies. Analytical techniques beyond the local mean-field approximations are needed to capture the essence of the charge-inhomogeneous state. Macroscopic phenomenological approaches should be used to obtain predictions for transport properties and for the shapes of the clusters that arise from the competition of the DE attraction and the Coulomb repulsion. These resulting clusters are surely not static but fluctuating, especially above $`\mathrm{T}_\mathrm{C}`$. In related problems of nuclear physics at high density, several geometries have been found, including spherical drops, rod-like structures (stripes or “spaghetti”), and plate-like ones (“lasagna”). Similarly rich phenomena may occur in manganites. On the experimental front it is crucial to establish whether the various regimes with charge inhomogeneities (Figs.4B and 5) are related, as predicted by the theoretical calculations. For example, work should be carried out to link the small-x regime of $`(\mathrm{La},\mathrm{Ca})`$-manganites, where FM droplets appear, with the polarons reported at the x=1/3 density, and beyond into the highly hole-doped regime. In addition, phase-segregation tendencies should also be studied close to the fully-doped limit $`\langle \mathrm{n}\rangle \ll 1`$ of manganese oxides, as well as in related compounds such as doped AF semiconductors.

Acknowledgments

Some of the ideas discussed here were developed in collaboration with N. Furukawa, J. Hu, and A. Malvezzi.
The authors are especially grateful to W. Bao, S. J. Billinge, C. H. Booth, S. L. Cooper, T. Egami, A. Fujimori, N. Furukawa, J. Goodenough, M. Hennion, M. Jaime, J. Lynn, S. Maekawa, Y. Moritomo, J. Neumeier, T. W. Noh, G. Papavassiliou, P. G. Radaelli, A. Ramirez, P. Schiffer, Y. Tokura, and H. Yoshizawa for their important comments and criticism. A. M. and E. D. are supported in part by grant NSF-DMR-9814350.
## 1 Introduction

A specter is haunting the publishing industry. It is the specter of Encyclopaedia Britannica. My first paper on electronic publishing \[Odlyzko1\] cited Encyclopaedia Britannica as an example of a formerly flourishing business that fell into trouble in just a few years by neglecting electronic media. Since that time, Encyclopaedia Britannica has collapsed, and was sold to Jacob Safra, who is investing additional funds to cover losses and revamp the business \[Melcher\]. The expensive sales force has been dismissed, and while print versions can still be purchased from bookstores, the focus is on electronic products. This collapse occurred even though Encyclopaedia Britannica had more than two centuries of tradition behind it, and was by far the most scholarly and best known of the English-language encyclopedias. In the apt words of \[EvansW\],

> Britannica’s downfall is more than a parable about the dangers of complacency. It demonstrates how quickly and drastically the new economics of information can change the rules of competition, allowing new players and substitute products to render obsolete such traditional sources of competitive advantage as a sales force, a supreme brand, and even the world’s best content.

This paper concentrates on scholarly journals. Not only that, but it will not deal with journals such as Science or IEEE Spectrum, which are distributed to tens or hundreds of thousands of readers. It will concentrate on the low-circulation journals that are sold primarily to libraries, and typically have about a thousand subscribers. These are the journals that bring in the bulk of revenues to scholarly publishers, and are the source of the research library crisis. Still, the Encyclopaedia Britannica example will be used several times in analyzing these journals. The markets are different, but there are many similarities.

A few years ago there was considerable skepticism about whether electronic journals were feasible at all. A large part of \[Odlyzko1\] was therefore devoted to demonstrating that Licklider \[Licklider\] was right in the early 1960s in predicting that by the late 1990s, computing, communications, and storage technologies would be adequate for handling the scholarly literature. By now, most such doubts have been dispelled (although there are still exaggerated concerns about the durability of digital storage as well as technical standards). It is also widely accepted that electronic journals are desirable and inevitable. Therefore we see rapid growth of digital material. Scholarly journals that exist only in electronic formats continue to proliferate. However, since they started from a low base, they still cover a small fraction of the literature. The dominant electronic journals (if not in absolute numbers, then certainly in amount of peer-reviewed material) are digital versions of established print serials. (See \[ARL, HitchcocCH\] for the latest estimates of the electronic marketplace.) The largest scholarly publisher, Elsevier, will soon have all of its approximately 1200 journals available electronically. Professional societies, such as the ACS, APS, AMS, and SIAM, have also either already created electronic versions of all their research journals, or are in the process of doing so. The question of whether most scholarly journals will be electronic or not is thus settled. While it is now widely accepted that scholarly journals have to be electronic, how they are to be delivered, and especially at what price, remains to be decided.
This article examines the current practices by publishers, both commercial and professional society ones, and their likely evolution and impact on libraries. Some features of the electronic offerings from established publishers (such as offering only bundles of journals, without a chance to purchase individual ones) are causing controversy among scholars and librarians. The subtitle of the article \[Kiernan1\] describes the mixture of reactions well: “Some see a way to meet professors’ needs; others say publishers are protecting profits.” There is no doubt that the publishers’ primary motive is protection of revenues and profits. This is true for both commercial and learned society publishers. Still, this article argues that professors’ needs are likely to be better satisfied by these new electronic offerings than by traditional print journals. However, for the publishers to protect their revenues and profits, they will have to usurp much of the role and resources of libraries. Further, publishers’ success is likely to retard the development of an even more efficient system. Encyclopaedia Britannica was vulnerable largely because it had an enormously bloated cost structure. The $1,500 to $2,500 that purchasers paid for each set included a couple of hundred dollars for the printing, binding, and distribution. Most of the rest was for the sales force and general administrative overhead. The vaunted editorial content apparently amounted to well under 10 percent of the total price. That is what allowed $50 CD-ROM encyclopedias to compete. They did not have the same quality of content, nor the nicely printed volumes, but they did have superior searchability, portability, and an irresistible price. It is important to note that after some abortive attempts to sell first $1,200, then $300 CD-ROMs, Encyclopaedia Britannica is now offering its CD-ROMs for $125 or even less. It is not known publicly what its total budget or internal cost allocations are, but it appears safe to say that the entire encyclopedia industry is spending much more on content than it used to. At Britannica, editorial staff reportedly has increased by over 25 percent. Further, usage of encyclopedias has probably increased substantially. While most of the CD-ROM versions are hardly ever used (which was also true of the paper editions, of course), there are tens of millions of them, many more than the print encyclopedias. This means that total usage is surely up. Universities that subscribe to the online version of the Encyclopaedia Britannica report that usage is far higher than it ever was for the printed versions \[Getz\]. As with Encyclopaedia Britannica, the main effect of new technologies on other parts of the publishing industry will be elimination of costs that once were unavoidable. Spending on content will go up. Total profits, which many finger as the culprit in the library crisis, may also increase. (It was noted in \[Odlyzko1\] that while revenues of the World Book encyclopedia went down when it switched to a CD-ROM format, profits grew.) However, the entire information industry is likely to become much more efficient, with more resources devoted to the intellectual content that matters. The current scholarly journal system is full of unnecessary costs. The ones that have attracted the most attention in the past were those associated with publishing. 
The main traditional functions of publishers, in which they handled copy editing, production, and distribution of material provided to them for free by scholars, are mostly obsolete. The difference in quality between the manuscripts that scholars can produce themselves and the final printed journal versions has decreased almost to the vanishing point with the arrival of easy-to-use computerized typesetting. (Here I am referring to copy editing and other tasks performed by professionals at publishers. Peer review is another matter. It was and continues to be done gratis by scholars, so that even if it is facilitated by publishers today, it can be performed without them.) To a large extent publishers are responding to cuts in subscriptions of large (and therefore expensive) journals by launching smaller, more specialized serials. These are often treated with much less care, so they are not much better in quality of presentation than camera-ready journals. Furthermore, they often have laughably small circulations (such as the figure of 300 or lower cited by a publisher \[Beschler\]). Thus the current scholarly journal system is becoming dysfunctional. To survive in the long run, publishers will need to move towards provision of intellectual value (such as that provided by the staffs of reviewing journals). That is a hard task, requiring new skill sets, and often new personnel.

What keeps the publishers’ situation from being hopeless is the tremendous inertia of the scholarly community, which impedes the transition to free or inexpensive electronic journals. Another factor in the publishers’ favor is that there are other unnecessary costs that can be squeezed, namely those of the libraries. Moreover, the unnecessary library costs are far greater than those of publishers, which creates an opportunity for the latter to exploit and thereby to retain their position.

Section 2 briefly reviews the economics of scholarly journals. Section 3 discusses the basic strategy that established publishers are following in moving to electronic journals. Section 4 concentrates on some features of the current electronic journal pricing and licensing policies. Finally, Section 5 offers some speculation about the future.

## 2 Economics and technology

This section reviews the basic economic facts about scholarly journal publishing. They were first presented in \[Odlyzko1\] and then in greater detail (and with more data about electronic journals, based on more experience) in \[Odlyzko4\]. See also \[TenopirK\]. Conventional print journals bring in total revenues to publishers of about $4,000 per article. On the other hand, there are many flourishing electronic journals that operate without any money changing hands, through the unpaid labor of their editors (and with a trivial implicit subsidy by the editors’ institutions that provide computers and network connections). There is still some question whether this model can scale to cover most of the peer-reviewed literature and satisfy scholars’ needs. Even if totally free journals do not suffice, experience has shown that quality that is perfectly adequate for most readers can be produced in the electronic environment for less than $400 per article \[Odlyzko4\]. Such costs can be recovered either through subscription fees or through charges to authors, and both models are being tried. Journal subscription costs are only one part of the scholarly information system.
As was pointed out in \[Odlyzko1\], internal operating costs of research libraries are at least twice as high as their acquisition budgets. Thus for every article that brings in $4,000 in revenues to publishers, libraries in aggregate spend $8,000 on ordering, cataloging, shelving, and checking out material, as well as on reference help. The scholarly journal crisis is really a library cost crisis. If publishers suddenly started to give away their print material for free, the growth of the literature would in a few years bring us back to a crisis situation.

It is important to emphasize the point about the cost of libraries. The $4,000 per article is a rough estimate (see \[Odlyzko1, Odlyzko4, TenopirK\]), and one can argue that the precise figure should be higher or lower. On the other hand, the exact dollar figures for the 120 members of the Association of Research Libraries (ARL), which includes most of the large research libraries in the U.S. and Canada, do show that purchases of books, journals, and other materials make up rather consistently about a third of their budgets, and have done so for years \[ARL\]. The other two-thirds goes overwhelmingly to salaries and wages of librarians and support staff, with a small fraction for items such as binding. The table below shows the breakdown of library expenditures at several universities during the 1996–97 academic year, taken from the comprehensive statistics collected by the ARL and available online at \[ARL\]. (Harvard has the world’s highest library budget.)

| library | circulation (items/year) | staff | purchases | total budget |
| --- | --- | --- | --- | --- |
| Brown | 0.3M | 240 | $5.0M | $14.8M |
| Harvard | 1.4M | 1182 | $17.5M | $70.9M |
| Ohio State | 1.5M | 423 | $8.6M | $22.1M |
| Princeton | 0.6M | 384 | $9.2M | $24.9M |

This division of costs has held for a long time. For example, in the 1996–97 academic year, Harvard spent 24.7% of its library budget on acquisitions, whereas in 1981–82 it spent 27.5% ($5.8M out of $21.1M). The ARL numbers substantially underestimate the internal costs of libraries, since they include neither the costs of the buildings, nor of building maintenance, nor of employee fringe benefits. In many cases those numbers also fail to include the costs of library automation systems. If those additional costs were to be included, costs of acquisitions might turn out to be under a quarter of the total costs of the library system \[Getz\]. Thus, even though much of the cost to a library that is associated with a journal is incurred in the future, in preserving the issues and making them accessible, it seems safe to say that the internal costs of the library associated with that journal are at least twice the purchase price.

The high internal costs of libraries come from the need to provide information about, and easy access to, huge collections of material that are used infrequently at any single place. As an example, suppose that we ignore all the other activities of the Harvard libraries and allocate the entire library cost to circulating items. We would then discover that circulating the 1.4 million items that were borrowed (out of 13.6 million volumes in the Harvard collection \[ARL\]) cost around $50 each. By comparison, there are commercial services (aimed at allowing publishers to reprint books in extremely small runs) that will digitize a book for a one-time fee of $100 to $150, and then print individual copies of a 300-page book for about $5 \[NYT\]. That is an order of magnitude reduction in cost.
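The comparison in the last paragraph is simple enough to recompute directly. The snippet below is only a back-of-the-envelope illustration: as in the text, it crudely charges each library's entire budget to circulation, using the 1996–97 ARL figures from the table above.

```python
# (circulating items per year, total budget) from the ARL table above
libraries = {
    "Brown":      (0.3e6, 14.8e6),
    "Harvard":    (1.4e6, 70.9e6),
    "Ohio State": (1.5e6, 22.1e6),
    "Princeton":  (0.6e6, 24.9e6),
}

for name, (circulated, budget) in libraries.items():
    # crude upper bound: attribute the whole budget to circulation
    print(f"{name:>10}: ~${budget / circulated:,.0f} per circulated item")

# Commercial on-demand reprinting quoted in the text [NYT]
digitize_once, per_copy = 125.0, 5.0     # ~$100-150 one-time, ~$5 per printed copy
print(f"on-demand:  ${digitize_once:.0f} once, then ${per_copy:.0f} per copy")
```

The Harvard figure reproduces the "around $50 each" quoted above; as the text stresses, only the order of magnitude matters here.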
Of course, this comparison ignores all the other functions of the library, but it does demonstrate the dramatic cost savings that become possible if one can cut back on the acquisition and management of a physical collection. The high cost of operating libraries is giving publishers a chance to maintain their revenues. Standing at the level of $4,000 articles, they are naturally reluctant to jump into the chasm of free or at most $400 articles. Instead, they are enviously eyeing the $8,000 per article spent by libraries. They are responding, either by careful design or through competitive instinct, in ways that should reduce the costs of the total system by decreasing the role and cost of libraries. To the extent they succeed, this should produce a much superior scholarly information system, although still an unnecessarily expensive one.

There have been occasional proposals that libraries take over the functions of publishers. Given the unnecessarily high price structure of publishers, such a course is conceivable. However, what is much more likely to happen in the competition for resources between libraries and publishers is that it will be the publishers who come out ahead. There are cultural, economic, technological, and legal reasons for this prediction:

1. There are fewer publishers, so it is easier for them to mount electronic publishing efforts on a large scale,
2. Publishers are more used to competition than librarians, who stress cooperation,
3. Publishers control copyrights, and thus conversion of old material (crucial for reducing library costs) cannot be carried out without their cooperation, and, perhaps most important,
4. The publishers’ target is more inviting: there are more than twice as many resources for them to go after as there are for librarians.

If the scholarly publishing business were efficient and run for the benefit of the scholarly enterprise, both libraries and publishers would have to shrink rapidly. However, this business is anything but efficient. A major contributor to this inefficiency is academic inertia. As shown in the discussion of rates of change in \[Odlyzko6\], academia is among the slowest to change in general. Further, scholarly publication is a sufficiently small part of research life that it does not attract much attention. Libraries usually consume 3% to 4% of university budgets, so any savings that might be realized from library cutbacks would not make a dramatic difference to total spending. (Among the academic ARL members, library spending averages about $12,000 per full-time faculty member \[ARL\].) Furthermore, library buildings, often the most prominent on campus, easily attract donors who like to see their names immortalized on such central facilities.

The most convincing demonstration of scholarly inertia is the reaction (or the lack of it) to the Ginsparg preprint archive. Starting in 1991, it has become the fundamental communication method for a growing roster of fields, starting with theoretical high-energy physics, later spreading to other areas of physics, and now also to computer science and mathematics \[Ginsparg\]. It is a sterling example of how technology can lead to a sudden, profound, and beneficial transformation. Yet in 1998, this archive still processed only 24,000 submissions, which is substantial (about half of the volume of all mathematics papers published that year), but small compared to the perhaps 2 million papers in all STM (science, technology, medicine) areas. The attractions of the archive are great.
It transforms the mode of operation of any community of scholars that embraces it, and the transition is invariably one-way, as not a single group has abandoned it. It quickly becomes the dominant mode of communication inside any group that embraces it. However, in spite of extensive publicity, it has not yet swept scholarly communication. It appears that there were special cultural factors that led to the quick adoption of the archive by Ginsparg’s own community of theoretical high-energy physicists (primarily the reliance on massive mailings of preprints), and it has been a struggle for pioneers in other areas to duplicate the process. There are still many areas (especially in chemistry and medicine) where not just preprint archives, but preprints themselves, are rare, and in which prestigious journals get away with policies that forbid any formal consideration of a paper that has been circulated in preprint form.

The significance of the Ginsparg archive is two-fold. On one hand, it shows that scholars can embrace new technology in a short period and derive enough benefit that giving it up becomes unthinkable. On the other hand, it also shows that it requires a substantial critical mass or an external push in an area to make the transition. In most of the STM fields, this critical mass is not present yet.

A Ginsparg-style centralized preprint archive (or a decentralized system like MPRESS from the European Mathematical Society) is not compatible in the long run with an expensive journal publishing operation that collects $4,000 per article. “Available information determines patterns of use,” in the apt words of Susan Rosenblatt \[Odlyzko5\], and if the basic preprints are available for free, few will pay a fortune for slight enhancements, which is all that current journals offer. The question is what is meant by “the long run.” The discussion in \[Odlyzko6\], as well as that above about the Ginsparg archive, shows that academia moves at a glacial pace. Even in Ginsparg’s own theoretical high-energy physics community, most researchers still publish their papers in conventional print journals (although a few senior ones have given up the practice on the grounds that it does not serve to propagate their results). Thus if academia were left to itself, the current journal system might continue to stumble along for a couple of decades, until the subversive effect of preprints made it clear that the system was not worth its cost.

In the discussion of the diffusion of new technologies in \[Odlyzko6\], many rapid transitions were identified with the presence of forcing agents, people or institutions that can compel action. The prediction of \[Odlyzko1\] was that a collapse of the existing print journal system would come when academic decision makers (presidents, deans, …) realized that this system was superfluous and went to departments with offers of the type “Would you rather stay with the existing library system at $12,000 per head, or would you be willing to cut that back to $6,000 per head, and use the savings for salaries, travel, …?” I think this is still the most likely scenario for change, but that it will involve abandonment of print and cutbacks in libraries, and less of a cutback at publishers. Publishers, who have been scared of electronic publishing, are likely to become forcing agents, and speed the transformation.

## 3 The demise of print journals

Most established publishers have already created or are creating electronic versions of their scholarly print journals.
Often they are offering these digital editions at no extra cost to subscribers to the print versions. In some cases, institutions that forego the print version receive a modest discount. A coherent strategy for the publishers should contain two additional steps in the future. The first step is to eliminate print editions entirely. (This has not yet been announced by any major publisher.) The second one is to convert the old issues to digital form, either themselves or through organizations like JSTOR \[Guthrie\]. (This is being done by several professional society publishers, but not yet by any commercial ones.) This would get libraries out of the journal distribution and archiving business (except as licensing agents, to be discussed below) and allow for drastic reductions in library budgets. Eliminating print editions would allow for some reduction in costs of publishers (even if they kept their current expensive editing system), so they have a financial incentive to do it. In digitization, they would have to spend money beyond their current budgets. The key point is that it would not be a lot of money. An earlier article \[Odlyzko4\] mentioned a range of digitization costs between $0.20 and $2.00 per page. There are now projects (such as the commercial one for book reprinting mentioned above \[NYT\], and the Florida Entomological Society’s project described in \[Walker\]) that show one can obtain a high quality digital version for $0.60 per page. To put these numbers in perspective, all publishers collectively get about $200 million per year for mathematical journals. On the other hand, the entire mathematical literature accumulated over the centuries is perhaps 30 million pages, so digitizing it at a cost of $0.60 per page would cost $18 million, less than 10% of the annual journal bill. Further, this would be a one-time expense. On the way towards eliminating print editions, publishers will have to solve a few thorny problems. One of them is interlibrary loans. Except for a few small organizations, until recently all publishers had blanket prohibitions on the use of electronic editions for interlibrary loans. This was naturally resented by librarians, who rely on such loans to satisfy a small but important and growing fraction of their clients’ demands. Without the right to use electronic editions for interlibrary loans, libraries were almost uniformly unwilling to even consider abandoning print editions. Recently some large publishers have announced changes in their policies. Electronic editions of journals of those publishers can now be used to satisfy interlibrary loan requests, but only by printing out the requested articles and sending them out in the printed form. Libraries will thus have the same functionality as before (or even better, since there will be no need to find volumes on shelves and make photocopies). The continued prohibition on electronic delivery of the electronic version should suffice to maintain the distinction between owning and borrowing that does not naturally exist in cyberspace, and thus maintain demand for subscriptions. Can print journals be eliminated? Previous predictions of the eclipse of printed matter by microfilm, for example, failed to come true. (See \[Odlyzko1\] for a brief survey and references to numerous faulty predictions in this area.) Print is certainly persistent, as has been observed many times (cf. \[Crawford\]). 
There is even a commercial publisher that is about to start selling a print edition of the Electronic Journal of Combinatorics, the most successful of the free electronic journals in mathematics. (The electronic version will remain free, and the publisher will only get rights to distribute a print version.) Yet I am convinced that printed journals are largely on their way out. I do not mean that print is on its way out. For reasons of technology and inertia, print is likely to be with us for several decades, and even proliferate, as personal computer printers improve in quality and drop in price. All that will happen is that there will be a simple substitution, the kind that eases all technological transitions \[Odlyzko6\]. Scholars will print articles on their personal or departmental printers instead of going to the library, photocopying those articles, and bringing the copies back to their offices to study. The transition to electronic distribution and storage should not take too long. There is tremendous inertia in academia, with some scholars swearing that nothing can substitute for browsing of bound printed journals. However, this resistance can be overcome. We already have examples of academic libraries in which efficient document delivery (from the library’s own collections) has drastically reduced physical visits to the library by faculty and students. Further, network effects will be playing an increasing role. More material available in electronic formats and increasing linking of digital forms of articles will all be making it much more attractive to browse on a screen and print out articles for careful study. For example, in mathematics, the two main reviewing publications, Mathematical Reviews and Zentralblatt für Mathematik, whose electronic forms are catching on much faster \[AndersonDR\] (for obvious reasons of much greater efficiency) than online versions of primary research journals, are beginning to offer links to articles being reviewed. Publishers will surely help this move by making the electronic versions more attractive than print ones. They are already beginning to provide links to references, and making online versions of articles available earlier than the print editions. At some point they will surely also increase the prices of print editions (compared to the online ones), and perhaps lengthen print publication backlogs. Eventually, enough libraries will agree to eliminate print subscriptions that they will be phased out. (As an intermediate step, they might be farmed out to specialized inexpensive publishers to produce out of the electronic versions.) What I am predicting is that publishers, who used to resist electronic publishing, will, out of self-interest, play the role of the forcing agents that accelerate natural technological transitions \[Odlyzko6\]. The elimination of print editions of journals will eventually reduce publishers’ costs. (Even though they have yet to concede that acceptable quality can be obtained in electronic publishing for 10% of the current print costs, they do admit that savings of 20–30% can be obtained by elimination of printing and distribution costs.) Most important, this step will reduce library costs and relieve the cost pressures on academic information systems. Thus the decisive steps towards eliminating print versions of journals are likely to be taken by academic decision makers, the deans and presidents, when they realize how much can be saved. What about librarians? 
I expect they would adjust easily to a paperless journal environment. First of all, the transition would be gradual. While there is inertia among scholars, there is also a much more understandable inertia in the library system, given the huge accumulated print collections. These collections will have to be maintained until the slow conversion to digital format is completed. (And some materials will never be converted.) Further, there may well be a revival of scholarly monograph publishing, which has been getting squeezed out of library budgets by journals. (It is hard to forecast what effect this will have on the libraries, though, since the number of monographs published is likely to increase, but many of them will be distributed electronically.) The main job losses will be in the less-skilled positions (with the part-time student assistants who check out and reshelve material going first). Reference librarians are likely to thrive, although their job titles may not mention the library. After all, we will be in the Information Age, and there will be much more information to collect, classify, and navigate. Information specialists are likely to abound, and to have much more interesting jobs. Although there will be many opportunities, librarians will have to compete to retain their preeminence as information specialists \[Odlyzko5\], and operate in new ways. However, there are two other jobs that they are also well positioned to retain. One is that of negotiating electronic access licenses. The other is that of enforcing access restrictions.

It is worth emphasizing that if the publishers do succeed in their approach, and disintermediate the librarians while retaining their revenues and profits, the resulting system is likely to be much superior to the present one. Defenders of the current libraries tend to come from top research universities, which do have excellent library collections. That is an exception, though. Most scholars, and an overwhelming majority of the population, make do with very limited access to those precious storehouses of knowledge. (There is an illuminating graph in \[GriffithsK\], reproduced as Fig. 9.4 on p. 202 in \[Lesk\], that shows library usage decreasing rapidly as the effort to reach the library grows, even on a single campus. For the bulk of the world’s population, little is available.) Electronic publishing promises far wider and superior access. I am not forecasting a new age of universal enlightenment, with couch potatoes starting to read scholarly articles. However, there will be growth in usage of scholarly publications by the general public. The informal associations devoted to discussions of medical problems (those on AIDS present the best example) show how primary research material does get used by the wider public if it is easily available. For scholars alone, there will be a huge increase in productivity with much easier access to a wider range of information.

The basic strategy of the publishers, faced with pressure to reduce costs, is to disintermediate the libraries. There is nothing nefarious in this approach. As we move towards the information age, different groups will be vying to fill various rapidly evolving ecological niches. After all, many scholars are proposing that they and the librarians disintermediate the publishers, while others would bypass librarians and publishers both, and handle all of primary research publishing themselves. In this environment, some of the potentially extremely important players might be Kinko’s copy shops.
They may end up disintermediating the bookstores and libraries, by teaming up with publishers to print books on demand. They might also disintermediate the publishers, by making deals directly with authors and their agents.

## 4 Fairness and the new economics of information goods

The previous section outlined the strategy that established publishers appear to be pursuing or are likely to pursue. Here we discuss the tactics. There are extensive fears and complaints about the pricing and access policies publishers offer for their electronic journals, as can be seen in the messages in \[LIBL, NSPI\]. Many of these concerns are likely to be allayed with time, as they are natural outcomes of a move towards a new technological and economic environment. By negotiation, compromise, and experiment, librarians and publishers will work out standard licensing terms that they and scholars can live with. As one example, there is great concern among librarians and scholars about access to electronic journal articles once a subscription is canceled. This is clearly an issue, but one that can be solved through negotiations.

Some issues that are raised by librarians will not go away. The basic problem with information goods is that marginal costs are negligible. Therefore pricing according to costs is not viable, and it is necessary to price according to value. What this means is that we will be forced into new economic models. Many people, especially Hal Varian \[Varian\], have been arguing for a long time that we will see much greater use of methods such as bundling, differential quality, and differential pricing. (See also \[Odlyzko2, Odlyzko3, ShapiroV\].) Unfortunately this will increase complaints about unfairness \[Odlyzko3\]. Many of the prices and policies will seem arbitrary. That is because they will be largely arbitrary, designed to make customers pay according to their willingness and ability to pay. The current U.S. airline pricing practices are a good example of practices that work well in providing service to a wide spectrum of users with varying needs. However, those practices are universally disliked. That may also be the fate of scholarly journal publishing in cyberspace.

Pricing according to value means different prices for different institutions. Hollywood rents movies to TV networks at prices reflecting the size and affluence of that network’s audience, so that a national network in Ireland will pay much more than that of Iceland, but much less than one of the large U.S. networks. We can expect prices of electronic scholarly journals to be increasingly settled by negotiations. The consolidation of publishers as well as of libraries (through consortia) will help make this process manageable. There is unhappiness among scholars and librarians about restrictions on the usage of some electronic databases, such as limiting the number of simultaneous users, or restricting usage to a single workstation inside the library. The preferred method of access is, of course, from the scholar’s office. However, that is precisely the point: to offer a more convenient version (such as one available without restrictions from any place on campus) for a high price, and a less convenient version (one that requires a physical visit to the library, and possibly waiting in line) for a lower price. Such techniques are likely to proliferate, and a natural function for libraries will be to enforce restrictions imposed by publishers.
We can already see this in the license conditions for hybrid journals that appear both in print and electronic formats. Publishers of such journals almost universally allow only the print version to be used for interlibrary loans. Although no publisher has explained clearly the rationale for this restriction, it is easy to figure out its role. Obtaining a copy of the paper article is slow, cumbersome, and expensive, and this serves to deter wide use of interlibrary loans as substitutes for owning the journal. If interlibrary loans of electronic versions were allowed, though, the borrower would be in almost the same position as a subscriber. Even if only paper copies of electronic versions of an article were allowed, the ease of making the copy from the digital form and mailing it out would make interlibrary loans much faster and less expensive, and that might undermine the market for subscriptions. Artificial restrictions in order to maintain subscriptions are becoming much more obvious in cyberspace than in print, but are not new. For example, even a casual examination shows that the Copyright Clearance Center (CCC) and the copyright litigations of the last two decades have practically no economic value to publishers aside from restricting photocopying and thus maintaining the subscriber base. In the fiscal year ending June 30, 1997, CCC paid $35 M to copyright holders from the fees it collected. Not all this money was for scholarly publishing, and even if it were, it is tiny compared to total revenues in the U.S. for scholarly publishers, which amount to several billion dollars per year. Thus all the legal attacks on supposedly illicit photocopying and the demands for CCC fees provide little additional revenue. However, they do serve to discourage dropping of subscriptions, by making copying more expensive and more cumbersome. Many scholars have run into problems with obtaining permission to republish their works in collected papers volumes and the like, with reprint fees often being demanded. Yet such fees bring in trivial amounts of money. Some publishers, such as the American Economic Association \[Getz\] and ACM, grant blanket permissions for copying for educational use, as they have decided that the costs of handling all the copy requests were higher than the revenue derived from that activity. Thus this is another case of a barrier that exists not to increase revenues directly, but to discourage copying. A major concern of librarians and scholars alike is that publishers will move towards a “pay-per-view” model \[Kiernan2\]. There is little evidence of this happening, and on balance, just the opposite is occurring. There is spread of consortium licensing, in which a publisher licenses all its electronic journals to all the institutions in a region, state, or even country (with the United Kingdom taking the lead in national licensing). This was to be expected. While there are some economic models that favor pay-per-view \[ChuangS\], and such pricing approaches are likely to be used in some fraction of cases, to deal with unusual needs, subscriptions, bundling, and site licensing are likely to dominate. This conclusion is supported by standard economic models (cf. \[BakosB, Odlyzko3, Varian\]). It is also supported by empirical evidence of people’s aversion to pay-per-view (cf. \[FishburnOS\]) and by estimates of scholars’ willingness to pay for information as individuals \[Hunter, Odlyzko1\]. 
There are likely to be “pay-per-view” options, but they will probably be of marginal importance, just for dealing with demand from those who do not fit into the large classes covered by some subscription or site-license model. A major reason for this is “sticker shock.” Recall that the typical article brings its publishers revenues of about $4,000. On the other hand, all studies that have been carried out suggest that such an article is read, even if superficially (i.e., going beyond just glancing at the title page and abstract) by a couple of hundred scholars. This is also consistent with data from the Ginsparg archive, where on average a paper is downloaded on the order of 150 times in its first two years there. If we assume 200 readers, then to obtain the current $4,000, the charge for “pay-per-view” would have to be $20. I predict that few scholars would be willing to pay that much, especially for an article they had only seen the abstract for, even if the money came from their grants or departmental budgets. Of course they effectively do pay that much now, but the charges are hidden. (In fact, their institutions are paying $60 for each article read, of which $20 goes to the publisher, and $40 to internal library costs.) A shift to “pay-per-view” would expose the exorbitant costs of the current system. Bundling, site licensing, and consortium pricing are all strategies that enable publishers to increase their revenues by averaging out the different valuations that separate readers or libraries place on articles or journals. Many librarians regard consortia as advantageous because they supposedly provide greater bargaining power and thus lower prices. However, they are more likely to be helpful to publishers in maximizing their revenues. Consider a simple example of a library consortium formed by three institutions, call them A, B, and C. Suppose that A is a major research university, B a big liberal arts school with some research programs, and C a strictly teaching school. Consider a publisher of the (fictional) Journal of Zonotopes (JZ). Suppose the annual institutional subscription is $2,000, and currently only A receives it. Further, suppose that B and C used to subscribe, but stopped once the price exceeded $1,000 a year (for B) and $200 (for C). Thus the publisher may well conclude that B and C might still be willing to pay $1,000 and $200 per year for JZ, respectively. If the publisher were to stick to the policy of a uniform price for each institution, it could not gain anything by lowering JZ’s price, and would risk losing A’s subscription by raising it. Suppose that instead the publisher offers the consortium of A, B, and C a deal in which for a total price of $2,500 per year, A continues to receive a print copy of JZ, and all three schools get unrestricted access to the electronic version. Even if the faculty and students of schools B and C value the electronic version of JZ at half of the print value, and those of A place no value on the digital format, the total value of the package to the three institutions would be $2,600 per year, and so collectively they would be likely to spend the extra $500. To pursue the example above in greater detail, let us note that the attractiveness of the consortium offer is much greater than presented above if one also considers internal library costs. Institution A is really valuing JZ at $6,000 or more, since those are its total costs associated with the journal, while B and C value it at $3,000 and $600, respectively. 
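To make the arithmetic of this fictional example easy to check, here is a minimal sketch (an illustration added here, not part of the original argument); all dollar figures are the ones quoted above, and the assumption that B and C value electronic access at half of their print valuation is the one stated in the text.

```python
# Sketch of the pay-per-view and consortium arithmetic used above.
# All figures are the illustrative ones from the text, not real data.

revenue_per_article = 4000      # typical publisher revenue per article
readers_per_article = 200       # assumed number of readers
print("implied pay-per-view charge:",
      revenue_per_article / readers_per_article)           # -> 20.0 dollars

# Fictional Journal of Zonotopes (JZ): print valuations per institution.
value_print = {"A": 2000, "B": 1000, "C": 200}

# Consortium offer: A keeps print (full value), B and C get electronic
# access assumed to be worth half of their print valuation.
package_value = value_print["A"] + 0.5 * (value_print["B"] + value_print["C"])
package_price = 2500
print("collective value of the package:", package_value)   # -> 2600.0
print("package price:", package_price)                     # -> 2500
print("publisher gain over A's current subscription:",
      package_price - value_print["A"])                    # -> 500
```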
Thus (even ignoring possible savings that A could realize by dropping its print version), the consortium of A, B, and C might be willing to pay $3,000 or more for the package. There are costs associated with negotiating the license, providing assistance in accessing the electronic version of the journal, and so on, but those costs are far smaller than those associated with handling physical collections. The low marginal costs of providing digital information make it possible to distribute that information widely. If some benefactor offered to purchase for Smith College, say, all the materials that Harvard acquires, this would bankrupt Smith, as it would not be able to pay for proper handling of the huge mass of material. On the other hand, an offer of electronic access to all the materials that Harvard has access to could be provided inexpensively. What we are likely to see with the spread of library consortia is much wider access to information than we ever had before. National licensing plans are the extreme example of this, with everybody inside a country getting access to all of a publisher’s material. Bundling is likely to be widespread. Several publishers already offer their electronic journals in a single package, with no chance for purchasing access to a subset. This minimizes administrative costs, but more important, again helps take advantage of uneven preferences for different journals to obtain higher revenues. It also has the advantage of protecting publishers from the subversive influence of preprints. Several areas, and theoretical high energy physics in particular (since it has relied on the Ginsparg archive the longest), might already be willing to give up most of their journals, if hard economic times came, and academic decision makers came to departments with offers of the type “Either you give up your journals, or you give up three postdocs.” In most areas, though, such a move is not feasible, since the preprint culture is not sufficiently developed. Now if the journals in theoretical high energy physics only come in a package with other journals from less advanced fields, then an offer like that above cannot be made. Thus bundling can serve the publishers’ economic interests in retarding the evolution of scholarly publishing to the rate of the slowest area. Scholarly publishers are consolidating, with Elsevier, already the largest player in this market, in the forefront of the acquisition and merger wave. The publishers’ market power may be counterbalanced, though, by the rise of library consortia. How the publisher oligopoly will interact with purchaser cartels will be an interesting phenomenon to watch. ## 5 Will it work? Will the publishers succeed in disintermediating the libraries, and preserving their revenues? There are two problems they face. One is a short-term one. While electronic publication will eventually reduce the expenses of both publishers and libraries, right now it is raising those expenses, as both parties have to handle print and digital media at the same time. The other problem, the longer-term one, is that publisher revenues are far greater than is necessary to provide quality sufficient for primary publications. The manuscripts prepared by authors have been improving, to the point that all the copy editing and typesetting that publishers contribute is of diminishing value. Furthermore, in spite of the attempts of some publishers, there is no way to stop the preprint tide. 
The free circulation of preprints offers so many advantages to scholars that it is only a matter of time until they become universal. To survive in the long run, publishers will have to contribute more that is of real value. They are starting to do so by adding links to their electronic articles and similar measures. I suspect they will have to do a lot more. Until they do, they are vulnerable. Their main danger will come not from competition by Kinko’s, but from a change in perceptions by administrators. The analogy with Encyclopaedia Britannica might serve to illuminate the danger. To quote from \[EvansW\] again, > Judging from their initial inaction, Britannica’s executives failed to understand what their customers were really buying. Parents had been buying Britannica less for its intellectual content than out of a desire to do the right thing for their children. Today when parents want to “do the right thing,” they buy their kids a computer. Nontraditional methods for information dissemination (preprints, but also email, Web pages, and so on) are growing in importance. At some point the administrators in charge of libraries may decide that “doing the right thing” for their faculty and students means redirecting resources away from traditional expensive journals. ### Acknowledgements: I thank Stevan Harnad and the other members of the American Academy of Arts and Sciences study group on transition from paper for their comments.
# Induced Proton Polarization for 𝜋⁰ Electroproduction at 𝑄²=0.126 GeV2/c2 around the Δ(1232) Resonance ## Abstract We present the first measurement of the induced proton polarization $`P_n`$ in $`\pi ^0`$ electroproduction on the proton around the $`\mathrm{\Delta }`$ resonance. The measurement was made at a central invariant mass and a squared four-momentum transfer of $`W=1231`$ MeV and $`Q^2=0.126`$ GeV<sup>2</sup>/c<sup>2</sup>, respectively. We measured a large induced polarization, $`P_n=0.397\pm 0.055\pm 0.009`$. The data suggest that the scalar background is larger than expected from a recent effective Hamiltonian model. At low $`Q^2`$, the $`N\mathrm{\Delta }`$ transition is dominated by the magnetic dipole amplitude. In a simple SU(6) model in which all the quarks occupy S states in the $`N`$ and $`\mathrm{\Delta }`$ wavefunctions, the $`N\mathrm{\Delta }`$ transition is a spin flip of a single quark. If the quarks are allowed to occupy D states as well as S states in the $`N`$ or $`\mathrm{\Delta }`$ wavefunctions, then electric and Coulomb quadrupole transitions are allowed . The ratios of these quadrupole amplitudes to the dominant magnetic dipole amplitude, referred to as the $`R_{EM}`$ and $`R_{CM}`$, are indicative of the relative importance of the D state in the nucleon and $`\mathrm{\Delta }`$ wavefunction in this model. A sensitive probe of the $`N\mathrm{\Delta }`$ transition is pion production on the free nucleon. However, many processes in addition to the $`N\mathrm{\Delta }`$ transition contribute to pion production: non-resonant nucleon excitation, photon–vector-meson coupling and excitation of other resonances. Rescattering of the final-state hadrons also affects the pion production observables . We refer to the non-resonant processes as “background” . In order to extract information about the $`N\mathrm{\Delta }`$ transition from pion production observables, one must understand the contributions from the background processes. Electroproduction experiments were performed in the late 60’s and early 70’s in which the $`R_{CM}`$ was extracted by performing multipole analysis of ($`e`$,$`e^{}p`$) data acquired over a wide range of energies and angles . These analyses extracted an average $`R_{CM}`$ of roughly $`7\%`$ for $`Q^2`$ up to 1 GeV<sup>2</sup>/c<sup>2</sup>. In 1993, an ($`e`$,$`e^{}\pi ^0`$) experiment was conducted at ELSA at $`Q^2=0.127`$ GeV<sup>2</sup>/c<sup>2</sup> . The analysis of this experiment yielded a large $`R_{CM}`$ of $`0.127\pm 0.015`$, in agreement with the analysis by Crawford of earlier ($`e`$,$`e^{}p`$) data at the same $`Q^2`$. We conducted a series of $`H`$($`e`$,$`e^{}p`$)$`\pi ^0`$ measurements at the same $`Q^2`$ as the ELSA measurement. We measured two types of observables: 1) the cross section over a range of proton scattering angles with respect to the momentum transfer for a wide range of the invariant mass around the $`\mathrm{\Delta }`$, 2) the induced proton polarization in parallel kinematics in which the proton is detected along the direction of the momentum transfer. The cross section measurements allow for the extraction of the $`R_{CM}`$. The induced polarization measurement is sensitive to the background contributions. We discuss in this paper the results of the polarization measurement, which is the first such measurement of the $`N\mathrm{\Delta }`$ transition. 
Past electroproduction measurements were performed over a wide range of $`Q^2`$, but only the angular dependence of the coincidence cross section was extracted from the data . This data constrains only the real part of the interference response tensor . In parallel kinematics the induced polarization $`P_n`$ is proportional to the imaginary part of a longitudinal-transverse interference response tensor; hence, it is proportional to the interference of the resonant and background amplitudes. In this manner, $`P_n`$ is sensitive to the same physics as the beam helicity asymmetry proportional to $`R_{LT^{}}`$, the “fifth response function” . Thus $`P_n`$ is in a new class of pion production observables. The experiment was conducted in 1995 in the South Hall of M.I.T.-Bates. A 0.85% duty factor, 719 MeV electron beam was incident on a cryogenic liquid hydrogen target. Electrons were detected with the Medium Energy Pion Spectrometer (MEPS) which was located at 44.17<sup>o</sup> and set at a central momentum of 309 MeV/c. Coincident protons were detected with the One-Hundred-Inch Proton Spectrometer (OHIPS) which was located at -23.69<sup>o</sup> and set at a central momentum of 674 MeV/c. The final-state proton polarization components were measured with the Focal Plane Polarimeter (FPP) . The central invariant mass and the squared four-momentum transfer were $`W=1231`$ MeV and $`Q^2=0.126`$ GeV<sup>2</sup>/c<sup>2</sup>. We sampled data over a range of $`W`$ between 1200 and 1270 MeV. The focal plane asymmetries were calculated following the procedure detailed in Ref. . This procedure involved the use of polarimetry data of elastic scattering from hydrogen to determine the false asymmetries of the polarimeter. In the one photon exchange approximation with unpolarized electrons, elastically scattered protons cannot be polarized. Therefore, any measured non-zero polarization is due to false asymmetries. The resulting false asymmetries were small, $`<0.004`$. The polarization of the protons at the polarimeter is the asymmetry of the secondary scattering divided by the $`p`$-$`{}_{}{}^{12}C`$ inclusive analyzing power. We determined the analyzing power by using calibration data of the FPP taken at the Indiana University Cyclotron Facility . From our data taken with an incident proton energy of 200 MeV and the world’s data for analyzing power for energies between 150 and 300 MeV , we determined a new fit to the functional form of the analyzing power according to Aprile-Giboni et al. . The uncertainty in the analyzing power for this measurement was 1.5%. In a magnetic spectrometer such as OHIPS, the polarizations at the target and focal plane are related by a spin precession transformation. This transformation depends on the precession of the spin in the spectrometer and on the population of events across the acceptance. For this measurement, the transformation simplified to a simple multiplicative factor for the induced polarization because the electron beam was unpolarized and the protons were detected along the direction of the momentum transfer. To determine this transformation, we used the Monte Carlo program MCEEP modified to use the spin transfer matrices of COSY . We populated events across the acceptance using a preliminary electroproduction model by Sato and Lee (SL) based on their photoproduction model described in Ref. . 
The transformation was $$P_n=(1.070\pm 0.016)P_X,$$ (1) where $`P_X`$ is the polarization component extracted from the azimuthal asymmetry of the secondary scattering, and $`P_n`$ is the normal type polarization at the target. We varied parameters in the COSY and MCEEP models by their measured uncertainties to determine the uncertainty of the spin precession transformation. To compare to theoretical models, we corrected the measured polarization for finite acceptance effects. We determined the correction with MCEEP using the SL pion production model: $$\frac{P_n\mathrm{for}\mathrm{Point}\mathrm{Acceptance}}{P_n\mathrm{for}\mathrm{Full}\mathrm{Acceptance}}=1.159\pm 0.011.$$ (2) This correction is mostly due to the large electron acceptance. The uncertainty in the acceptance correction reflects uncertainties in the experimental acceptance. Applying the spin transformation factor and the acceptance correction, we determined that the induced polarization for a point acceptance was $$P_n=0.397\pm 0.055\pm 0.009,$$ (3) where the first uncertainty is statistical and the second is systematic. Our analysis does not depend on the absolute scale of the model predictions. Thus, the smooth variations of the cross section and of the induced polarization over the experimental phase space predicted by the model of Sato and Lee suggest that the model sensitivity should be sufficiently small to be neglected for this measurement. Corrections to $`P_n`$ due to radiative processes are small, 0.02%, and were not included. In parallel kinematics all the response functions can be constructed from two complex amplitudes which we label $`S`$ and $`T`$. In terms of the Chew-Goldberger-Low-Nambu amplitudes and multipole amplitudes expanded up to p wave, these two amplitudes are $$S=F_5-F_6\propto S_{0+}-S_{1-}-4S_{1+},\qquad T=F_1+F_2\propto E_{0+}+M_{1-}-3E_{1+}-M_{1+}.$$ (4) In parallel kinematics, $`P_n`$ is proportional to the imaginary part of a longitudinal-transverse interference divided by the unpolarized cross section. In terms of $`S`$ and $`T`$, $`P_n`$ $`=`$ $`{\displaystyle \frac{\sqrt{2ϵ_s(1+ϵ)}\mathrm{Im}S^{*}T}{|T|^2+ϵ_s|S|^2}},`$ (5) $`=`$ $`{\displaystyle \frac{\sqrt{2ϵ_s(1+ϵ)}\left(\beta _S-\zeta _S\beta _T\right)}{\left(1+\beta _T^2\right)+ϵ_s\left(\beta _S^2+\zeta _S^2\right)}},`$ (6) where $`ϵ=(1+2q_{lab}^2/Q^2tan^2\frac{1}{2}\mathrm{\Theta }_e)^{-1}`$, $`ϵ_s=Q^2/q_{cm}^2ϵ`$, $`q_{lab}`$ ($`q_{cm}`$) is the three-momentum transfer in the lab (center-of-momentum) frame, $`\mathrm{\Theta }_e`$ is the scattering angle of the electron with respect to the beam, $`\beta _{S(T)}=\mathrm{Re}S(T)/\mathrm{Im}T`$ and $`\zeta _S=\mathrm{Im}S/\mathrm{Im}T`$. The $`0^{th}`$ order approximation to $`P_n`$ is obtained by assuming only a purely resonant $`N\mathrm{\Delta }`$ transition contributes. Then at resonance, the contributing amplitudes are purely imaginary, and thus $$\beta _S=\beta _T=0\quad \mathrm{and}\quad \zeta _S=4\frac{R_{CM}}{1+3R_{EM}}.$$ (7) This approximation gives $`P_n=0`$. A non-zero $`\beta _S`$ and/or $`\beta _T`$ at resonance comes from background contributions. In this manner, $`P_n`$ is sensitive to the background. In Fig. 1 our result is compared to two different pion production models plotted over a range of the invariant mass $`W`$ at a fixed $`Q^2=0.126`$ (GeV/c)<sup>2</sup>. 
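As a minimal numerical sketch of Eq. (6) (an illustration added here, not part of the original analysis), the induced polarization can be evaluated for given amplitude ratios; the kinematic factors $`ϵ`$ and $`ϵ_s`$ are left as inputs, and the values used below are placeholders rather than the experimental ones.

```python
import math

def induced_polarization(beta_S, beta_T, zeta_S, eps, eps_s):
    """P_n of Eq. (6): beta_S = Re S / Im T, beta_T = Re T / Im T,
    zeta_S = Im S / Im T; eps and eps_s are the virtual-photon
    polarization factors defined in the text."""
    num = math.sqrt(2.0 * eps_s * (1.0 + eps)) * (beta_S - zeta_S * beta_T)
    den = (1.0 + beta_T ** 2) + eps_s * (beta_S ** 2 + zeta_S ** 2)
    return num / den

# Placeholder kinematic factors (illustrative only).
eps, eps_s = 0.6, 0.4

# Zeroth-order limit: purely resonant, purely imaginary amplitudes
# (beta_S = beta_T = 0) give P_n = 0 regardless of zeta_S, i.e. of R_CM.
print(induced_polarization(0.0, 0.0, -0.4, eps, eps_s))   # 0.0

# A real scalar background (non-zero beta_S) is what produces a sizable P_n.
print(induced_polarization(0.35, 0.1, -0.4, eps, eps_s))
```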
Results from a preliminary electroproduction model based on the published SL photoproduction model are plotted for 0% and 1.4% probability of a D state in the $`\mathrm{\Delta }`$ wavefunction. The model of Mehrotra and Wright for the simultaneous fit to $`\pi ^0`$ and $`\pi ^+`$ production data requiring unitarity (MW) is also plotted. This model does not consider $`\mathrm{\Delta }`$ resonance quadrupole amplitudes. Neither of these models successfully reproduces the measured $`P_n`$. The constraints on the ratios due to this measurement are illustrated in Fig. 2. The two bands denote the regions of $`\{\beta _T,\beta _S\}`$ consistent with this measurement for $`\zeta _S=0`$ and $`-0.4`$. These values of $`\zeta _S`$ correspond to $`R_{CM}=0`$ and $`R_{CM}=-9.1\%`$ when calculated from Eq. (7) with $`R_{EM}=-3\%`$. Also shown are the $`\{\beta _T,\beta _S\}`$ points for the SL and MW models. The SL model with a deformed (non-deformed) $`\mathrm{\Delta }`$ has $`\zeta _S=0.001(0.047)`$, which violates the simple relation of Eq. (7) because of a strong imaginary $`S_{0+}`$. The MW model does not consider imaginary scalar contributions so that $`\zeta _S=0`$. Since the $`\zeta _S`$ values of these models are approximately zero, the points on the graph should be compared to the vertically-hatched region. For the wide range of $`\beta _T`$ and $`\zeta _S`$ in the figure, $`\beta _S`$ is larger than 20%. It is possible to satisfy the restrictions of $`P_n`$ with lower $`\beta _S`$, but this requires $`\zeta _S<-0.4`$. However, the sum $`\zeta _S^2+\beta _S^2`$ is limited by the small longitudinal contribution to the cross section. The results from the companion cross section data will provide additional information to constrain the ratios. For the SL model to describe the $`P_n`$ data, the two extreme corrections to the model are to increase either $`\beta _S`$ or $`\zeta _S\beta _T`$. As the model differs from the measurement by roughly a factor of two, we want to significantly change the ratios. Since the model describes the measured cross section as a function of the invariant mass well, we cannot radically alter the transverse contributions. $`\zeta _S`$ differs by only 0.05 between the SL calculations with non-deformed and deformed $`\mathrm{\Delta }`$, so any large change in the real or imaginary scalar amplitudes must come from non-resonant contributions. Following these conjectures, we conclude that the large $`P_n`$ of this measurement indicates that the scalar background contributions are larger than expected from the SL model. The inclusion of rescattering in the SL model has a significant effect on the scalar background contributions compared to the MW model. Both models use a similar description of the Born amplitudes at the tree level: pseudovector $`\pi NN`$ coupling and $`\rho `$ exchange. However, the real scalar contributions are quite different as demonstrated by the difference in the $`\beta _S`$ values in Fig. 2. Thus, the rescattering procedure in the SL model significantly enhances the background scalar contributions. It is difficult to directly compare the background of this measurement with that of measurements from which the $`R_{CM}`$ is extracted. In general, the two observables can involve different combinations of multipole amplitudes. In addition, $`P_n`$ is sensitive to the real part of the background, whereas the observables used to extract $`R_{CM}`$ are sensitive to the imaginary part. 
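The bands in Fig. 2 can be visualized by inverting Eq. (6): at fixed $`\zeta _S`$ and $`\beta _T`$, a measured $`P_n`$ turns Eq. (6) into a quadratic in $`\beta _S`$. The following sketch (again an added illustration, with placeholder kinematic factors rather than the experimental ones) performs that inversion.

```python
import numpy as np

def beta_S_solutions(P_n, beta_T, zeta_S, eps, eps_s):
    """Real roots in beta_S of Eq. (6) at fixed beta_T and zeta_S, obtained
    from P_n*eps_s*b^2 - C*b + P_n*[(1+beta_T^2) + eps_s*zeta_S^2]
    + C*zeta_S*beta_T = 0, with C = sqrt(2*eps_s*(1+eps))."""
    C = np.sqrt(2.0 * eps_s * (1.0 + eps))
    coeffs = [P_n * eps_s,
              -C,
              P_n * ((1.0 + beta_T ** 2) + eps_s * zeta_S ** 2) + C * zeta_S * beta_T]
    roots = np.roots(coeffs)
    return roots[np.isreal(roots)].real

# Trace a band: beta_S values consistent with the measured P_n as beta_T
# varies (placeholder eps, eps_s; zeta_S = 0 corresponds to R_CM = 0).
for beta_T in (-0.2, 0.0, 0.2):
    print(beta_T, beta_S_solutions(0.397, beta_T, 0.0, 0.6, 0.4))
```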
Previous extractions of the $`R_{CM}`$ neglected the non-resonant terms under the assumption that they are small. Our data demonstrate that the background contributions are significant compared to the dominant resonant contributions and are not well described by recent models. Therefore, one cannot a priori neglect the background terms in the $`R_{CM}`$ extraction. In summary, we measured a large induced polarization for pion production at $`W=1231`$ MeV and $`Q^2=0.126`$ GeV<sup>2</sup>/c<sup>2</sup>. The data suggest that the scalar background is larger than expected from recent effective Hamiltonian models. We demonstrated that the large induced polarization of this measurement provides a significant constraint on scalar background contributions. Results from the companion M.I.T.-Bates cross section measurements and from future experiments planned at several facilities will constrain theoretical approaches and improve our understanding of the $`N\mathrm{\Delta }`$ transition. The authors wish to thank the staff of M.I.T.-Bates as well as T.-S. H. Lee for his preliminary model calculations. This work was supported in part by the U.S. Department of Energy and the U.S. National Science Foundation.
# Under the Shirts, Symmetry… (“Sous les Chemises, la Symétrie…”, Pour La Science (Paris) Hors série “Les Symétries du Monde”, 16-19, Juillet 1998) The spontaneous appearance of order in matter results from symmetry breaking. How does the collective state of a system pass from one symmetry to another, and how does it break its symmetry? This is what we shall see here, through a metaphor of shirts and colours. The phenomenon of spontaneous symmetry breaking is an essential mechanism in the physics of collective behaviour in matter. It is at the origin of the existence of ordered structures, from which many physical properties emerge that do not exist at the level of the isolated atom or molecule. It marks a transition between a collective state of high symmetry (like a liquid, symmetric under all continuous translations and rotations) and an ordered state of lower symmetry (like a crystal, symmetric only under certain discrete translations and rotations). Examples are the magnetization of a magnetic system (the ability of a magnet to attract a nail), the superconductivity of an alloy (the conduction of electricity without any energy loss), and perhaps also the creation of the universe (the separation of matter from antimatter at the Big Bang). More commonly, spontaneous symmetry breaking is responsible for the very different guises under which one and the same substance can be found: water can be liquid, solid when it is ice, or gaseous when it is vapour. Another example: iron can be in a paramagnetic phase (the atomic magnetic moments are disordered) or in the ferromagnetic phase of the magnet (all the atomic magnetic moments point in the same direction). To describe symmetry breaking and its dynamics, let us use a social metaphor: instead of atoms carrying magnetic moments, imagine individuals wearing shirts of different colours. Here, the shirt does make the monk. Observe the behaviour of a population of N people, each isolated in a room, with a green shirt and a red one at their disposal. Each person chooses the shirt they like, without knowing the choices of the others. Seen from outside, this choice is random, like the heads-or-tails outcome of a coin toss. Once everyone has made a choice, we obtain a particular configuration of the distribution of colours. Since each person can make two choices, independently of the choices of the others, there are $`2^N`$ possible configurations. This number grows very quickly with the number of people N. From about a dozen (16) for N=4, it already exceeds a million (1 048 576) for N=20 (consider that there are about $`10^{22}`$ molecules in a gram of water). If N is large enough, most configurations have the same number of red and green shirts (although every configuration is equally probable). The resulting state is symmetric on average under the exchange of the colours green and red. Once the individual choices have been made, we remove all the walls: each person sees the colour of the shirts of their neighbours, and only of their neighbours. Now suppose that each individual has an extreme tendency to imitate, and let us put the walls back to allow everyone to choose a colour again: each individual wants to wear the shirt colour that the majority of their neighbours were wearing and, on the basis of what they saw, decides to keep the same shirt or to change it. 
We remove the walls once more, and everyone observes their neighbours. Then we put the walls back, another colour update takes place, and so on, a certain number of times. One can easily imagine that we will witness apparently disordered, erratic changes of shirt colour. But after some time, suddenly, as if by miracle, everyone ends up wearing the same colour. At this stage the phenomenon is similar to the fashion effect well known in sociology. Yet the process has something surprising about it: at the beginning, small homogeneous groups of a single colour form here and there, some in red, others in green. Locally, at the boundaries of these groups, there are struggles of expansion of one group at the expense of another, won sometimes by red, sometimes by green. Little by little, however, inexorably, one of these small groups, apparently just like the others, starts to grow beyond the average size of the other groups, without one quite knowing why. It then spreads, very quickly and irreversibly, to the whole assembly. All at once everyone wears the same colour, and no longer changes it. If the experiment is repeated, the same single-colour effect is found again, but the collectively chosen colour is sometimes red, sometimes green. We thus observe, at this stage, that visual interaction between neighbours alone is enough to produce an order in colour that extends to the whole group, and therefore well beyond the near neighbours that each person sees. At the same time, the choice of colour appears arbitrary. Sometimes green wins, and sometimes red. This phenomenon of uniformization is what physicists call a spontaneous symmetry breaking. It occurs as soon as one introduces short-range interactions between elements of the same nature, for example in a system of magnetic moments (called spins). At zero temperature, these all align in parallel (this is ferromagnetism) without exception, but in a direction that is itself chosen at random. One therefore has a spontaneous propagation of long-range order in the direction of the spins, even though the interactions between these same spins are only short-ranged. It is in fact in order to lower its internal energy that the system as a whole selects a single state at random, at the expense of all the other possible states. The collective state is thus “totalitarian”, with one and only one state for everyone, without any exception. In our shirt example, when the colour symmetry is broken, one sees only green, or only red. In physics, however, this is true only as long as the temperature of the system remains zero. As soon as it differs from zero, things become more complicated, with a partial reintroduction of states that were previously excluded. Temperature favours an “individual non-conformism”, with an increase of the corresponding local energies. A spin can thus orient itself in a direction different from that of the majority of its neighbours. In physics one says that the spin is “excited”. The possibility for a spin to be excited is probabilistic in nature. That is, to each local configuration of the respective orientations of its neighbours is associated a probability of occurrence whose magnitude depends on the value of the temperature. 
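For readers who want to experiment, here is a small simulation sketch (an illustration added here, not part of the original article): people sit on a square grid, each looks only at their four nearest neighbours, and a single “conformity” probability plays the role of the inverse of temperature.

```python
import random

def simulate(L=20, sweeps=500, conformity=1.0, seed=0):
    """Shirt-colour dynamics on an L x L grid with periodic boundaries.
    Colours are +1 (red) and -1 (green).  At each step a randomly chosen
    person looks at their four neighbours and, with probability
    `conformity`, adopts the local majority colour (keeping their shirt
    on a tie); otherwise they pick a colour at random ("excitation").
    Returns the order parameter (number red - number green) / N."""
    rng = random.Random(seed)
    grid = [[rng.choice((+1, -1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        majority = (grid[(i - 1) % L][j] + grid[(i + 1) % L][j]
                    + grid[i][(j - 1) % L] + grid[i][(j + 1) % L])
        if rng.random() < conformity:
            if majority != 0:
                grid[i][j] = 1 if majority > 0 else -1
        else:
            grid[i][j] = rng.choice((+1, -1))
    return sum(map(sum, grid)) / (L * L)

# Pure imitation ("zero temperature"): most runs end with one colour almost
# completely dominating, and which colour wins changes with the seed.
print(simulate(conformity=1.0, seed=1))
print(simulate(conformity=1.0, seed=2))

# Weak conformity ("high temperature"): the colour symmetry is restored on
# average and the order parameter stays close to zero.
print(simulate(conformity=0.6, seed=1))
```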
Temperature therefore introduces the statistical possibility of having, here and there, a local configuration of spins whose respective orientations not only do not minimize the internal energy, but on the contrary may even maximize it. This is within the Boltzmann framework of a statistical description of temperature. The notion of probability introduces a dynamics in the distribution of these non-energy-minimizing configurations, whose lifetime is then finite. This means that it is not always the same spins that are excited over time. The disorder moves around; it is “mobile”. There will therefore be a constant competition between the collective order arising from the spontaneous symmetry breaking and this local disorder produced by the temperature, the balance between the two being directly governed by the value of that same temperature. To understand this phenomenon better, let us return to our world of two-colour shirts, where the symmetry has been broken, say towards red. Automatically, nobody will buy green shirts any more, and very quickly the market will saturate, with shirt sales stagnating. To revive them with sales of green shirts, the sellers will think of lowering the price of green relative to red. Naturally, out of a concern for economy, this price drop will tempt a certain number of people. But then, by wearing their green shirt, they will have to bear a certain tension with their neighbours in red, who will criticize their “difference”. One can therefore imagine that a given person will wear their green shirt only from time to time, so as not always to be out of line, in opposition to the group, while still saving money. And the shirt market will do better. If, on the other hand, the price of green drops further, the reward for being marginal becomes greater, and the number of non-conformists automatically increases. Mechanically, this will lead several green shirts to end up, by chance, side by side, thereby reducing the local social tension against green. The non-conformists will thus be less and less marginal as their number grows. At a certain price level for the green shirt there will be enough people in green for the social tension to disappear, and only the financial interest will remain. Then, very quickly, everyone will wear a green shirt. The shirt sellers will thus have succeeded in changing the colour of the group, but again with a saturation of the market and, moreover, with a smaller profit, since green is cheaper than red. This switching phenomenon is called a first-order phase transition. And the price difference between the colours is what physicists call an external field that “breaks the symmetry”. For a spin system, it would be a uniform magnetic field applied in a definite direction. In fact, a very small field is enough to determine everybody's colour, but it then takes more time. The sellers therefore do not actually need to lower the price of green very much for the new totalitarian fashion to become green. Such a situation was not very advantageous for the shirt sellers. They will therefore adopt another strategy, to avoid the previous switch. Now, starting from the saturated state, say in red, they will hold sales, but this time simultaneously on green and red, so as not to break, through the sales, the symmetry between the two colours. 
In this way the spontaneous breaking of the initial symmetry towards red will be maintained despite the sales. To reach more people, the sellers will also constantly change the locations of the sales, which will thus be roving. The sales will always cover both colours, but at different places and for a limited time. Thus, depending on where and when these mobile sales happen to be, in space and in time, people will buy shirts of both colours, out of economy and out of conformism. They will nevertheless wear red more often than green, since red was, and therefore remains, the majority colour. This time there is no switch from one colour to the other, but rather an attenuation of the dominant colour. There are colour fluctuations. Let us take another example, that of driving. In the days of the first cars there was no highway code, and traffic moved according to the whims and wishes of each of the very few car owners. People drove on the left, on the right or in the middle. When two cars met head-on, the two drivers would agree on how to get around each other. But when the number of vehicles grew to the point that head-on encounters became frequent, traffic was brought to a standstill, and drivers had to agree on a choice of driving side, either right or left. The choice was arbitrary as far as efficiency was concerned. France chose the right, England the left, showing that either choice was possible. This is therefore a “breaking of the symmetry” that initially existed for a few isolated cars, but which was suppressed for the sake of collective efficiency. The situation can be schematized by roads with a solid yellow line. Then cars began to have very different speeds, and the impossibility of overtaking became a brake on the previous efficiency. One then moved to the dashed yellow line, which may be crossed, but only for a short time. The symmetry that existed initially was thus restored, but in a sporadic and transient way, in order once again to increase the efficiency of the system. There is therefore a certain disorder in the driving. The question then arises of how far the introduction of this local disorder into the initial order can go. In physical terms, what happens when the temperature is raised higher and higher? In the case of the shirts, the analogue of temperature is the size of the simultaneous discounts on red and green relative to the reference price. For small discounts, people will more often wear the dominant colour. But the larger the discount, the more they will wear the two colours indifferently, as often one as the other. The colour fluctuations will then have increased enormously. In this case the system again undergoes a phase transition, but this time of a different nature from the previous one, because here the symmetry initially present between the colours has been restored collectively. This type of transition is called second order, as opposed to the previous one, called first order. In the first case there was, at some moment, an abrupt switch of the collective state of the system, whereas for a second-order transition the change takes place continuously, but with many fluctuations. After the transition, the colours are evenly distributed. 
This is what physicists call a disordered phase, that is, a phase in which the symmetry breaking has disappeared. Beware, however: in such a phase the correlations between the colour choices of individuals have not disappeared. There are as many green shirts as red ones, still with a local tendency towards uniformization, but one that no longer propagates to the whole population. The disorder is then maximal. In the case of the cars, by contrast, increasing the disorder too much would produce more and more accidents and would block all traffic. To distinguish an ordered phase containing some disorder (but where the symmetry breaking persists) from a disordered phase (with no broken symmetry), it suffices to reverse everybody's respective “colours”. If the overall colour of the group has changed (gone from an excess of green to an excess of red, or vice versa), then the symmetry is broken. If, on the other hand, it remains globally the same (half green, half red), then the symmetry is not broken. One can in fact measure the degree of symmetry breaking by a parameter, the order parameter. It is equal to one, and the order is total, totalitarian, when one is at zero temperature. It then decreases towards zero as the temperature increases. It first decreases slowly, then faster and faster, finally vanishing at a certain value $`T_c`$ of the temperature, called the critical temperature of the system. This value varies from one system to another. Above the critical temperature $`T_c`$, the order parameter remains zero, whatever the value of the temperature. In the case of the shirts, the order parameter is naturally equal to the number of red shirts minus the number of green shirts, divided by the total number of individuals. When the two colours have the same price, this number is therefore equal to $`+1`$. A price drop on green alone drives it to $`-1`$. Mobile, roving sales, on the other hand, make it decrease from $`+1`$ towards zero, which it reaches in the disordered phase, for a certain level of discount. At temperatures $`T`$ below $`T_c`$, for second-order transitions, the order parameter behaves as a power of the distance in temperature from $`T_c`$. It is proportional to $`(T_c-T)^\beta `$, where $`\beta `$ is called a critical exponent. It has been remarkable to observe, and to prove, that this power-law behaviour is universal. Indeed, the value of $`\beta `$ is identical for a large number of physical systems of very different natures, such as a liquid at its boiling point or a magnet heated to the point of losing its magnetization. The value of $`T_c`$, on the other hand, varies from one system to another. This universality shows that what is at stake lies in the “collective behaviour” aspect of the system, and not in its “intrinsic properties”, such as the nature of its interactions. Some physicists are in fact trying to extend this universality of phase transitions to certain classes of social and economic phenomena. At the very point where the order parameter vanishes, surprising things happen. It has been found there that every element of the system is correlated, that is, in communication, with all the other elements of the same system, even if the latter is infinite in size. An individual who changes colour will thus influence all the others, and vice versa. 
So much so that for a given individual the situation could seem almost “mystical” (this is only an image). One then witnesses giant colour fluctuations occurring simultaneously with minuscule ones. This coexistence of a multitude of length scales produces what is called “scale invariance”. At whatever level one looks at what is happening, it is always exactly the same. It is as if one looked at a landscape with the naked eye, under a microscope, or with a giant zoom, and the image did not change, which is quite extraordinary. It is, for example, the density fluctuations at all scales of a liquid at its critical point that give rise to the experimental phenomenon of critical opalescence, in which light sent onto the liquid is so strongly scattered that the whole liquid seems to light up. Moreover, when the temperature approaches the critical point, whether from above or from below, the system as a whole reacts massively to any external perturbation that tends to break its symmetry. For example, a very small magnetic field will align all the spins of a piece of iron along its direction. In other words, the number of individuals liable to react to a small symmetry-breaking external perturbation is infinite. There is therefore what is called a divergence of the response function of the system; in this case, its susceptibility becomes infinite at the critical point. It decreases on either side of $`T_c`$. Here again one finds a universality property, similar to that of the order parameter, in the way this divergence occurs. In the vicinity of $`T_c`$, and on either side of it, the susceptibility diverges as $`(T_c-T)^{-\gamma }`$. The value of the exponent $`\gamma `$, different from that of $`\beta `$, is, like it, universal, that is, it is the same for a whole class of physical systems of different natures. Whether or not the sale of shirts belongs to the universality class of magnetic systems is still an open question, which in fact has never been asked. The aim of this article was therefore not to answer it, but rather to convey some essential mechanisms of phase transitions through a little social metaphor.
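As a closing numerical footnote (again an added illustration, not from the original article): once values are chosen for the exponents, the two power laws just described are easy to tabulate. Below we use the exactly known exponents of the two-dimensional Ising universality class, $`\beta =1/8`$ and $`\gamma =7/4`$, purely for illustration.

```python
# Order parameter and susceptibility near T_c, for illustrative exponents
# (beta = 1/8 and gamma = 7/4, the exactly known 2D-Ising values).
beta, gamma = 1 / 8, 7 / 4
T_c = 1.0   # arbitrary units; T_c itself is system-dependent

for T in (0.5, 0.8, 0.9, 0.99, 0.999):
    m = (T_c - T) ** beta            # order parameter, vanishes at T_c
    chi = abs(T - T_c) ** (-gamma)   # susceptibility, diverges at T_c
    print(f"T = {T:5.3f}   m = {m:5.3f}   chi = {chi:10.1f}")
```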
# INTERFACIAL TENSION IN WATER AT SOLID SURFACES (Published in proceedings of the Third International Symposium on Cavitation, vol. 1, p. 87-90 (1998).) ## NOMENCLATURE | $`x`$, $`y`$ | coordinates on specimen surface in scan direction and perpendicular to this direction | | --- | --- | | $`z`$ | coordinate along surface normal | | $`p_0`$ | equilibrium pressure | | $`R_0`$ | mean radius of surface corrugation | | $`T`$ | tensile strength of liquid | | $`Z_0`$ | amplitude of surface corrugation | ## I INTRODUCTION In cavitation research the formation and stabilization of cavitation nuclei have always been an intriguing problem which has made calculations of cavitation inception highly problematic. Subcritical gas cavities in water are inherently unstable and go into solution or they drift to surfaces due to buoyancy. Therefore, stabilization must take place at liquid-solid interfaces. A model was proposed by Harvey *et al.*. Though able to explain some of the experimental results of inception research, and during half a century the only reasonably realistic model, it is insufficient. A new model was proposed by Mørch, and recently it has been improved to allow quantitative calculations. According to this model interfacial tension in the liquid adjacent to solid surfaces opens the possibility of detachment of the liquid, i.e. void formation, at surface elements of concave form. At sufficiently high curvature the voids may develop spontaneously, but at moderate and low curvatures the content of gas being in solution in the liquid and reaching the interface by diffusion is important for breaking liquid-solid bonds which are strained by the interfacial tension in the liquid. It is predicted that a void grows until the contact line between detached liquid and liquid still in contact with the solid reaches the locus of balance between the tensile stress due to interfacial liquid tension and the pressure in the bulk of liquid. It seems possible to explain qualitatively the most significant results of experimental research from this model. Quantitatively the measurements of tensile strength $`T`$ of tap water vs. increasing equilibrium pressure $`p_0`$ by Strasberg can be simulated from assuming that solid particles in tap water have shallow corrugations of sinusoidal cross section, axially symmetric around their bottom and of mean radius $`R_0<2\mu \mathrm{m}`$ and with relative amplitude $`Z_0/R_0=0.3`$, and small, relatively deeper ones with $`R_0=0.2\mu \mathrm{m}`$, $`Z_0/R_0=1`$, FIG. 1. The interfacial tension present at the interface of two substances in contact is normally given as a single quantity. However, as the solid-liquid interfaces we consider are not planar it is suitable to split this interface tension into two components, one for the liquid, $`A_1`$, and one for the solid, $`A_2`$, to obtain information on the influence of curvature on the liquid-solid bonding. In FIG. 2 the balance of forces is shown at a solid-liquid-vapour contact point. In addition to the interface tension forces $`A_1`$, $`A_2`$, $`B`$ and $`C`$ at the three interfaces this balance demands also an adhesion force $`D`$ (van der Waals’ force) between the liquid and the solid perpendicular to the solid surface. These forces give the contact angle of the liquid-vapour interface. 
In water where hydrogen bonds dominate the intermolecular forces an appreciable interfacial liquid tension ($`A_1`$) is to be expected adjacent to solid surfaces as a result of a stabilized interfacial liquid structure. Experimentally effects of an orderly structured water layer have been measured near a mica surface and by computer simulations it has been shown that at platinum surfaces the interfacial layer of water has an essentially ice-like solid structure. These results support the hypothesis that water generally exhibits a more or less stabilized structure at solid surfaces. The interfacial liquid tension expected to result from this structure is the crucial parameter in the model of void formation and thus for the formation of cavitation nuclei in liquids. It is the object of the present paper to verify its existence experimentally. ## II EXPERIMENTAL TECHNIQUE AND RESULTS Experimental techniques available for investigating the local interfacial tension in the liquid adjacent to a solid surface are very few - at present only atomic force microscopy (AFM) seems available. This technique is basically used to give information on the surface topography of a solid object, and resolution to atomic scale is available for crystallographically planar surfaces. However, it can be used also for local force spectroscopy. In AFM a pointed tip, usually of pyramidal form and of height and base dimensions 5–10 $`\mu \mathrm{m}`$ and with a tip radius of curvature 10–50 nm, which is mounted close to the free end of a thin cantilever of length about $`300\mu \mathrm{m}`$, is approached to the surface which is to be investigated. When the distance between the tip apex and the surface becomes sufficiently small interatomic forces between the tip and the object attract the tip, and the cantilever is bent. This is detected by the deflection of a laser beam being reflected from the cantilever surface opposite to the tip. The deflection is a measure of the force on the tip. If the tip is approached further to the surface contact is achieved and the resulting force shifts into repulsion. This so-called contact mode is the one generally used for topographic investigations. Here a suitable repulsive deflection is chosen and the tip is scanned in the $`x`$- and $`y`$-directions across the specimen while its height $`z`$ is regulated by a feedback circuit to maintain the deflection chosen, independent of surface corrugations. Thus the voltage in the feedback circuit is a measure of the topographic changes. In the force spectroscopy mode the tip is stationary in the $`x`$- and $`y`$-directions, and the tip deflection is measured while the cantilever base is moved along the $`z`$-axis at constant speed towards the specimen until a suitable repulsive deflection is achieved, then its motion is reversed. These investigations can be made in vacuum, in gas, and in (optically transparent) liquid. In vacuum only interatomic forces between tip and specimen (van der Waals’ forces) affect the deflection. In gas (usually atmospheric air) also forces between molecules adsorbed to the surfaces are important. In particular water molecules forming adsorbed water layers on the tip and specimen surfaces are important because surface tension forces cause strong attraction when these layers get in contact. The surface tension forces and the van der Waals’ forces result in a transient “snap-in” of the tip at approach just before contact is obtained. 
At operation in liquids it is generally assumed that snap-in is absent because the liquid is taken to have bulk structure right to the liquid-solid interface. The present results indicate that this is not correct. For the experiments a TopoMetrix TMX 2000 Explorer AFM was used with V-shaped $`\mathrm{Si}_3\mathrm{N}_4`$ cantilevers of nominal spring constant $`0.03\mathrm{N}/\mathrm{m}`$ and tip radius of curvature about $`50\mathrm{nm}`$. The interfaces to be considered here are distilled water-air interfaces which were approached from the liquid space, i.e. with the tip and cantilever fully submerged, and diamond polished stainless steel surfaces submerged in distilled water, and an air-gold interface. The setup with cantilever and tip submerged in water is shown in FIG. 3. At suitable air pressure in the central bore of the specimen a stable water-air interface shaped as a spherical segment is created at the top of the bore, and it can be approached with the tip and cantilever fully submerged in water. At lateral translation of the specimen the water-stainless steel interface can be investigated. When the tip approaches the water-air interface from the liquid space and gets in contact with the interfacial water it is strongly attracted to the interface and crashes through it in a violent snap-in. The process is interpreted to result from the interaction of the orderly structured liquid at the water-air interface with that at the water-tip interface. The initial interaction results in an increased order in the zone of liquid around the tip apex, and an attractive, but unbalanced force between the tip and the water-air interface is set up by the interfacial liquid tension in the structured zone, FIG. 4a. A balance is then obtained by local elevation of the water-air interface and bending of the cantilever. As a consequence the tip breaks through the interface, FIG. 4b. With the soft cantilever used in the present experiments balance was not achieved until the interface reached the cantilever itself. It was not possible to record the event as the dynamical range of the microscope ($`\pm 7\mu \mathrm{m}`$) was greatly exceeded. When subsequently the specimen was moved laterally to allow investigation of the water-stainless steel interface the topography of an area on the steel surface could be recorded as shown in FIG. 5a. A cross section along a single line, $`x=10\mu \mathrm{m}`$, is shown in FIG. 5b. The surface appears slightly wavy with localized micro-hills in the 30–100 nm range. By force spectroscopy it is found that the snap-in at approach as well as the snap-out at the subsequent retraction depend strongly on the location. Very often the snap-in is quite small, just a few nm, and at retraction the tip sticks to the solid surface until the cantilever base has retracted about $`100\mathrm{nm}`$ corresponding to an attractive force of $`3\mathrm{nN}`$. Then the tip escapes from the specimen surface, but it does not return to the non-deflected condition until the cantilever base has moved another $`100\mathrm{nm}`$ during which the tip relaxes in two steps, FIG. 6, found in repeated cases. This may be related to the quantized adhesion reported in the literature, though in the present case the changes occur at a much larger scale. In other cases a very large snap-in occurs reproducibly, as shown in FIG. 7, where the snap-in is about $`58\mathrm{nm}`$, and at retraction the tip remains in contact until the cantilever base has moved about $`400\mathrm{nm}`$. 
Then the cantilever returns to the non-deflected condition in a single jump. The interpretation we give to these results is as follows: at locations on the specimen surface where the liquid is in direct contact with the solid surface there is an orderly structured liquid layer of thickness about $`1\mathrm{nm}`$ adjacent to the solid, and when the tip, with its own orderly structured interface layer of liquid, also of thickness about $`1\mathrm{nm}`$, approaches the solid surface these interface layers merge and set up an attractive force on the tip, which in combination with the van der Waals' forces between tip and sample, of range typically a few nanometers , results in a tip snap-in of less than $`10\mathrm{nm}`$, as actually apparent from FIG. 6. When the snap-in brings the tip into contact with the solid surface the van der Waals' forces are strongly enhanced. Therefore, a larger force is required to withdraw the tip from contact. This is also evident from FIG. 6, where retraction of the cantilever base over a distance of about $`100\mathrm{nm}`$ is required to set up the force needed for the tip to escape the surface itself. However, it appears that a bending force on the cantilever remains. We suppose this is a consequence of a nanovoid being formed between the tip apex and the sample when contact between the two solid surfaces is broken. The surface tension at the doubly curved liquid-vapour interface which connects the tip and sample and bounds the void prevents liquid from flowing into the gap, and at the same time it establishes an attractive force between tip and sample. Therefore, the tip is not totally free of interaction with the specimen until the distance becomes so large that the void collapses. This appears to happen after a further $`100\mathrm{nm}`$ of withdrawal, though in steps. The intermediate jumps may be related to discontinuous changes of the loci of contact of the liquid surface with the tip and sample surfaces. Apparently this takes place when the force imposed by the bent cantilever exceeds about 1-2 nN. In cases of a significant snap-in, as in FIG. 7, the event can be attributed neither to contact between the structured interfacial liquid at the tip and specimen surfaces nor to the van der Waals' forces, as these do not extend beyond at most $`10\mathrm{nm}`$. In liquid the range of the van der Waals' forces is actually reduced compared to their range in air . However, if a stable interfacial void has grown on the specimen surface due to the local characteristic features of this surface, as modelled in , the tip meets a water-gas interface during the approach. As described above, such an interface attracts the tip strongly and makes it penetrate deeply, i.e. in the present case it penetrates until tip-solid contact prevents further penetration, and a significant repulsive force between tip and specimen surface may then occur. This interpretation is supported by the large retraction distance of $`400\mathrm{nm}`$, corresponding to an attractive force of $`12\mathrm{nN}`$, observed before snap-out occurs; the tip now escapes the solid surface as well as the supposed surface-attached void in a single large jump. This force considerably exceeds the van der Waals' forces on the tip, which must be smaller than the $`3\mathrm{nN}`$ found from FIG. 6. Thus it reveals strong interfacial forces in the liquid adjacent to a water-steel interface.
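As a quick numerical cross-check (ours, not the authors'), the forces quoted above follow directly from Hooke's law with the nominal spring constant of the cantilever: the force at snap-out is simply the spring constant times the distance the cantilever base has been retracted while the tip stayed stuck.

```python
k = 0.03  # N/m, nominal spring constant of the cantilevers used

for travel_nm, label in [(100, "first snap-out, FIG. 6"),
                         (400, "large snap-out, FIG. 7")]:
    force_nN = k * (travel_nm * 1e-9) * 1e9   # N/m * m -> N, expressed in nN
    print(f"{travel_nm} nm retraction -> {force_nN:.1f} nN ({label})")

# 100 nm retraction -> 3.0 nN (first snap-out, FIG. 6)
# 400 nm retraction -> 12.0 nN (large snap-out, FIG. 7)
```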
It is of interest to compare the above results from stainless steel surfaces submerged in water with observations in air of a solid surface which does not adsorb water. In such a case the water adsorbed on the tip is of no significance, as water is not attracted to the non-adsorbing solid surface. Gold does not adsorb water to any significant extent, and in FIG. 8 force spectroscopy on an air-gold interface is shown. The snap-in is about $`9\mathrm{nm}`$, and it can be ascribed to van der Waals' forces, which are unscreened due to the absence of water on the gold surface. At retraction the snap-out occurs in a single jump after $`43\mathrm{nm}`$ of withdrawal of the cantilever base, corresponding to van der Waals' forces of only $`1.3\mathrm{nN}`$. If we compare the force spectroscopy of the submerged stainless steel surface in contact with water, FIG. 6, with that of the air-gold interface, we notice that at the submerged water-stainless steel interface the van der Waals' forces are notably weaker at snap-in, probably due to the screening effect of water. At snap-out, however, the upper limit of the van der Waals' forces can be estimated from the air-gold experiment, and they are clearly insufficient to explain the force needed for the first snap-out. Therefore, already at the first snap-out in FIG. 6 a major part of the attractive force can be ascribed to liquid interfacial tension.
## III CONCLUSION
AFM force spectroscopy investigations at solid as well as gaseous interfaces with water, probed from the liquid space, reveal characteristic attractive forces which can only be attributed to liquid tension in the interfacial water. This brings experimental support to the model of void formation at liquid-solid interfaces in which the interfacial liquid tension is a basic assumption. Further, the presence of interfacial voids is actually supported experimentally. Such voids are sources of cavity formation when single-phase liquids are exposed to tensile stress.
no-problem/9901/math-ph9901011.html
ar5iv
text
# INTRODUCTION
## INTRODUCTION
Bloch (or Floquet) theory in its usual form already has a long history. Basically it starts from the fact that partial differential equations with constant coefficients are mapped into algebraic equations by means of the Fourier or Laplace transform. Now, if the coefficients are not constant but just periodic under an abelian (locally compact topological) group, one still has the Fourier transform on such groups, mapping functions on the group $`\mathrm{\Gamma }`$ into functions on the dual group $`\widehat{\mathrm{\Gamma }}`$; the original spectral problem on a non-compact manifold is mapped into a (continuous) sum of spectral problems on a compact manifold (see section 1). This is what makes Bloch theory an indispensable tool especially for solid state physics, where one describes the motion of non-interacting electrons in a periodic solid crystal by a Schrödinger operator $`-\mathrm{\Delta }+V`$ on $`L^2(\mathbb{R}^d)`$. The potential function $`V`$ is the gross electric potential generated by all the crystal ions and thus is periodic under the lattice given by the crystal symmetry. Measurements on crystals often require magnetic fields $`b`$ (2-forms). In quantum mechanics, they are described by a vector potential (1-form) $`a`$ such that $`b=da`$ ($`B=\mathrm{curl}A`$ for the corresponding vector fields). The magnetic Schrödinger operator then reads $$H=(-i\nabla -a)^2+V.$$ But, although $`b`$ is periodic or even constant, $`a`$ need not be so, and $`H`$ won’t be periodic. It is therefore necessary to use magnetic translations (first introduced by Zak ) under which $`H`$ still is invariant. But now, these translations do not commute with each other in general. Therefore ordinary (commutative) Bloch theory does not apply. Basically, the reason for this failure is that a non-abelian group has no “good” group dual: the set of (equivalence classes of) irreducible representations has no natural group structure, whereas the set of one-dimensional representations is too small to describe the group — otherwise it would be abelian. But although $`\widehat{\mathrm{\Gamma }}`$ does not exist any more, the algebra $`C(\widehat{\mathrm{\Gamma }})`$ of continuous functions continues to exist in some sense: it is given by the reduced group C-algebra of $`\mathrm{\Gamma }`$, which is just the C-algebra generated by $`\mathrm{\Gamma }`$ in its regular representation on itself (on $`l^2(\mathrm{\Gamma })`$). Section 2 shows how one can re-formulate ordinary Bloch theory in a way which refrains from using the points of $`\widehat{\mathrm{\Gamma }}`$ and relies just on the rôle of $`C(\widehat{\mathrm{\Gamma }})`$. From a technical point of view this requires switching from measurable fields of Hilbert spaces to continuous fields, which can then be described as Hilbert C-modules over the commutative C-algebra $`C(\widehat{\mathrm{\Gamma }})`$. Having done this one can retain the setup but omit the condition of commutativity for the C-algebra $`C(\widehat{\mathrm{\Gamma }})`$. Thus one is led to non-commutative Bloch theory (section 3), dealing with elliptic operators on Hilbert C-modules over non-commutative C-algebras. The basic task is now to relate properties of the C-algebra to spectral properties of “periodic” operators. Thus one generalizes spectral results for elliptic operators on compact manifolds as well as results of ordinary Bloch theory. In section 4 we list examples where non-commutative Bloch theory applies. This article is an overview of a part of my Ph.D.
thesis, which is written in German. Due to space limitations the following sections will be rather sketchy. A full account of that part in English is in preparation , as well as of the other, related parts . I am indebted to my thesis advisor Jochen Brüning for scientific support. This work has been supported financially by Deutsche Forschungsgemeinschaft (DFG) as project D6 at the SFB 288 (differential geometry and quantum physics), Berlin.
## 1 . COMMUTATIVE BLOCH THEORY
### Setup
Let $`X`$ be a smooth oriented Riemannian manifold and $`\mathrm{\Gamma }`$ a discrete abelian group, acting on $`X`$ properly discontinuously ($`M:=X/\mathrm{\Gamma }`$ is Hausdorff), freely ($`M`$ smooth), isometrically ($`M`$ Riemannian), and co-compactly ($`M`$ compact). The $`\mathrm{\Gamma }`$-action on $`X`$ induces an action on $`C_{(c)}^{\infty }(X)`$ and a unitary action on $`L^2(X)`$ via $$(\gamma _{*}f)(x):=f(\gamma ^{-1}x)$$ for $`x\in X,\gamma \in \mathrm{\Gamma }`$ and $`f`$ in the corresponding space of functions. Let $`D`$ be a symmetric $`\mathrm{\Gamma }`$-periodic elliptic differential operator on $`X`$, i.e. on its domain of definition $`C_c^{\infty }(X)\subset L^2(X)`$. By $`𝚪`$-periodic we mean that $`D`$ commutes with the $`\mathrm{\Gamma }`$-action on its domain. The basic physical example is $`X=\mathbb{R}^d`$ ($`d=2,3`$), $`\mathrm{\Gamma }=\mathbb{Z}^d`$ acting by translations (or magnetic translations) and $`D`$ given by the Schrödinger operator (or the magnetic Schrödinger operator with integral flux) with periodic electric potential.
### Aim
Our aim is to determine the type (set/measure theoretic) of the spectrum of $`D`$. By set theoretic type of the spectrum<sup>1</sup><sup>1</sup>1Under the aforementioned conditions $`D`$ is essentially self-adjoint, so that the closure of $`D`$ is the only self-adjoint extension and has only real spectrum. By abuse of notation we denote the closure by $`D`$, too. we mean either band structure (i.e. a locally finite union of closed intervals) or Cantor structure (i.e. a nowhere dense set without isolated points). Bands may degenerate to points, which would not be called bands by physicists. Non-degenerate bands allow the formation of (semi-)conductors. Measure theoretic properties of the spectrum are continuity properties of the spectral measure with respect to Lebesgue measure. Physically one expects either pure point spectrum (eigenvalues and their accumulation points) or absolutely continuous spectrum (bands). Thus one wants to exclude the third possibility: singular continuous spectrum.
### Method
The basic and well-known method for the spectral theory of periodic elliptic operators is Bloch theory, which in one dimension is also called Floquet theory.
Its first step is to construct a direct integral $`L^2(X)`$ $`{\displaystyle _{\widehat{\mathrm{\Gamma }}}^{}}H_\chi 𝑑\chi ,`$ (1.1) $`D`$ $`{\displaystyle _{\widehat{\mathrm{\Gamma }}}^{}}D_\chi 𝑑\chi ,`$ (1.2) where the fiber Hilbert spaces $`H_\chi `$ $`=L^2(F_\chi )`$ (1.3) are spaces of square-integrable sections of associated complex line bundles $`F_\chi `$ $`=X\times _\chi ,`$ (1.4) and the operators in the fiber are given by the gauge-periodic boundary conditions $`D_\chi `$ $`=D|_{C^{\mathrm{}}(F_\chi )}.`$ (1.5) The decomposition $`\mathrm{\Phi }:L^2(X)_{\widehat{\mathrm{\Gamma }}}^{}H_\chi 𝑑\chi `$ is defined by $`(\mathrm{\Phi }f)(x)_\chi :={\displaystyle \underset{\gamma \mathrm{\Gamma }}{}}\chi (\gamma )f(\gamma ^1x)`$ (1.6) for $`fC_c^{\mathrm{}}(X),\chi \widehat{\mathrm{\Gamma }},xX`$ and can be extended unitarily to $`L^2(X)`$. $`\widehat{\mathrm{\Gamma }}`$ may be identified with the Brillouin zone in solid state physics, $`H_\chi `$ is the space of wave functions with quasi-momentum $`\chi `$. The family $`(H_\chi )_{\chi \widehat{\mathrm{\Gamma }}}`$ is a measurable field of Hilbert spaces; decomposability of $`D`$ w.r.t. this field is equivalent to $`\mathrm{\Gamma }`$-periodicity of $`D`$. The decomposition described above is still valid for the magnetic Schrödinger operator with zero magnetic flux per lattice cell but has to be modified for non-zero integral flux. In any case, the fibers $`D_\chi `$ may be identified with magnetic Schrödinger operators on the quotient space $`M=X/\mathrm{\Gamma }`$ on which there may be inequivalent quantizations of the classical magnetic system. Indeed, the family $`(D_\chi )_{\chi \widehat{\mathrm{\Gamma }}}`$ contains all possible quantization classes (). ### Results By general results for direct integrals (see e.g. , chapter II, §1) one can compute the spectrum of $`D`$ from the spectra of the family $`(D_\chi )_{\chi \widehat{\mathrm{\Gamma }}}`$. Using special properties of this family one gets: 1. Since $`(D_\chi )_{\chi \widehat{\mathrm{\Gamma }}}`$ is a continuous family of operators with compact resolvent, the spectrum of $`D`$ is given as the union $`\mathrm{spec}D=\underset{\chi \widehat{\mathrm{\Gamma }}}{}\mathrm{spec}D_\chi `$ and thus has band structure. Bands may degenerate to points, but possible eigenvalues have infinite multiplicity automatically. 2. Using the real analyticity of the operator family one gets: * $`\mathrm{spec}_{s.c.}D=\mathrm{}`$ * $`\mathrm{spec}_{p.p.}D`$ is discrete as a subset of $``$. For the magnetic Schrödinger operator with zero magnetic flux this is due to ; for rational flux (and general abelian-periodic elliptic operators) this was done in . ## 2 . COMMUTATIVE BLOCH-THEORY FROM A NON-COMMUTATIVE POINT OF VIEW As seen above it is necessary to use, in addition to a measurable field of Hilbert spaces, the continuity property of an operator family. Thus the basic idea is to incorporate the continuity into the setup, i.e. to find a continuous sub-field. Now, a continuous field of Hilbert spaces over a space $`\widehat{\mathrm{\Gamma }}`$ is equivalent to a Hilbert C-module over $`C(\widehat{\mathrm{\Gamma }})`$. In our geometric context such a module is given naturally: For $`e,fC_c^{\mathrm{}}(X)`$ define $`e|f_{}(\chi ):=\mathrm{\Phi }(e)_\chi |\mathrm{\Phi }(f)_\chi _{H_\chi }.`$ (2.1) This gives a $`C(\widehat{\mathrm{\Gamma }})`$-linear pre-scalar product, completion gives a Hilbert $`C(\widehat{\mathrm{\Gamma }})`$-module $``$, periodic operators are adjointable module operators. 
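As a concrete illustration (a standard special case, added here for orientation and not spelled out in the text), take $`X=R`$ and $`\mathrm{\Gamma }=Z`$ acting by integer translations, so that $`\widehat{\mathrm{\Gamma }}`$ is the circle of characters $`\chi _\theta (n)=e^{in\theta }`$. The map (1.6) then produces the familiar quasi-periodic Bloch waves,
$$(\mathrm{\Phi }f)(x)_{\chi _\theta }=\sum _n e^{in\theta }f(x-n),\qquad (\mathrm{\Phi }f)(x+1)_{\chi _\theta }=e^{i\theta }(\mathrm{\Phi }f)(x)_{\chi _\theta },$$
so the fibre $`H_{\chi _\theta }`$ consists of square-integrable sections of the line bundle over $`M=R/Z`$ twisted by the phase $`e^{i\theta }`$, and a periodic operator restricts to each fibre with the boundary condition $`\psi (x+1)=e^{i\theta }\psi (x)`$. Moreover, for smooth compactly supported $`e,f`$ the pairing $`\langle \mathrm{\Phi }(e)_{\chi _\theta }|\mathrm{\Phi }(f)_{\chi _\theta }\rangle `$ depends continuously on $`\theta `$, which is exactly the continuity encoded in the Hilbert $`C(\widehat{\mathrm{\Gamma }})`$-module just introduced.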
How to get back $`L^2(X)`$ from $``$? This can be done by means of the Hilbert GNS representation: the Haar measure $`d\chi `$ on $`\widehat{\mathrm{\Gamma }}`$ defines a faithful state $`\tau `$ on $`C(\widehat{\mathrm{\Gamma }})`$ via integration, and $`\langle e|f\rangle _\tau ={\displaystyle \int _{\widehat{\mathrm{\Gamma }}}}\langle \mathrm{\Phi }(e)_\chi |\mathrm{\Phi }(f)_\chi \rangle _{H_\chi }\,d\chi =\langle e|f\rangle _{L^2(X)}`$ (2.2) so that the representation space $`H_\tau `$ is just $`L^2(X)`$. The second basic observation is that $`C(\widehat{\mathrm{\Gamma }})=C_{red}^{*}(\mathrm{\Gamma })`$ is the reduced group C-algebra of $`\mathrm{\Gamma }`$, i.e. the C-algebra generated by $`\mathrm{\Gamma }`$ in its regular representation on $`l^2(\mathrm{\Gamma })`$. This algebra continues to exist for non-abelian groups, but will be non-commutative.
## 3 . NON-COMMUTATIVE BLOCH THEORY
### Setup
Let $`𝒜`$ be a C-algebra and $`H`$ a Hilbert space which is a right $`𝒜`$-module. Let $`D`$ be a (possibly unbounded) self-adjoint operator on $`H`$, commuting with the module action of $`𝒜`$. For physical examples we refer to section 4.
### Aim
We now want to investigate the relations between $`\mathrm{spec}D`$ and $`𝒜`$; in particular this should reproduce the band structure results in the commutative case as described above.
### Method
The basic step is to construct a Hilbert $`𝒜`$-module $``$ and a faithful (tracial) state $`\tau `$ on $`𝒜`$ such that the Hilbert GNS representation gives back the Hilbert space on which to do spectral theory: $`H_\tau `$; and such that $`D`$ comes from an unbounded self-adjoint module operator $`F`$ on $``$ which is $`𝒜`$-elliptic (see below). This construction has to be done for each class of examples separately and may require hard analytic work; once they fit into the general framework it is just (C-) algebraic properties which are used. Under these assumptions one can construct a trace $`\mathrm{tr}_\tau `$ on the $`\tau `$-trace class $`_𝒜^1(,\mathrm{tr}_\tau )`$ in the module operators, which generalizes the trace per unit volume in solid state physics. Applying this trace to projections one gets as usual a generalized dimension $`dim_\tau `$ for the range of projections.
### Ellipticity
Let $`T`$ be an unbounded operator on $``$. $`T`$ is called $`𝓐`$-elliptic if 1. $`T`$ is densely defined, 2. $`T`$ is regular, i.e. $`T^{*}`$ exists, is densely defined, and $`\mathrm{ran}(1+T^{*}T)`$ is dense in $``$, and 3. $`T`$ has $`𝒜`$-compact resolvent, i.e. $`(1+T^{*}T)^{-1}\in 𝒦_𝒜()`$. This is the notion of ellipticity which is usual for operators on Hilbert modules.
### Basic criteria
Let $`𝒞`$ be a C-algebra, $`\tau `$ a trace. $`𝒞`$ has the Kadison property if there is $`c>0`$ such that for all non-zero projections $`P`$ in $`𝒞`$ one has $`\tau (P)\geq c`$. Let $`𝒞`$ be a C-algebra, $`\tau `$ a state. $`𝒞`$ has real rank zero with infinitesimal state if every self-adjoint element can be approximated by a finite spectrum element with arbitrarily small $`\tau `$-value on the spectral projections.
### Results
1. If $`\lambda `$ is an isolated eigenvalue of $`D`$ then the corresponding eigenspace $`H_\lambda `$ is an (algebraically) finitely generated projective Hilbert $`𝒜`$-module. If $`e^{-D^2}`$ is of $`\tau `$-trace class then $`H_\lambda `$ has finite $`\tau `$-dimension: $`dim_\tau H_\lambda <\infty `$. If $`,𝒜`$ are “suitable” then $`H_\lambda `$ is infinite dimensional ($`dimH_\lambda =\infty `$), and in particular the discrete spectrum is empty: $`\mathrm{spec}_{disc}D=\varnothing `$. 2.
If $`𝒦_𝒜()`$ has the Kadison property and $`e^{-D^2}`$ is of $`\tau `$-trace class then $`D`$ has band spectrum (the basic idea going back to ). 3. If $`𝒦_𝒜()`$ has real rank zero with infinitesimal state ($`RRI_0`$) then Cantor spectrum is weakly generic (), i.e. every operator can be approximated by ones with Cantor spectrum in the norm resolvent sense. The first part is analogous to the case of elliptic operators on compact manifolds: these have compact resolvent and therefore finite-dimensional eigenspaces, whereas in our situation we have $`𝒜`$-compact resolvent and finitely generated modules, but (under suitable conditions) infinite-dimensional eigenspaces. The second part traces band structure back to a property that holds in the commutative case. The third part gives a criterion for a weakly generic (i.e. for a dense set of operators) total break-down of band structure.
## 4 . EXAMPLES
### Commutative Bloch theory
$`𝒜`$ is the algebra of continuous functions $`C(\widehat{\mathrm{\Gamma }})`$ on the character group, $``$ the space of sections of a continuous field of Hilbert spaces defined by the continuous Bloch sections. The state $`\tau `$ is given by integration w.r.t. Haar measure: $`\tau (f)=\int _{\widehat{\mathrm{\Gamma }}}f(\chi )\,d\chi `$. From this it follows that $`C(\widehat{\mathrm{\Gamma }})`$ has the Kadison property, which implies band structure. Furthermore, we are in the “suitable” situation so that any possible eigenspace is infinite-dimensional but has finite $`\tau `$-dimension.
### Periodic elliptic operators
Here $`𝒜`$ is the reduced group C-algebra $`C_{red}^{*}(\mathrm{\Gamma })`$ of $`\mathrm{\Gamma }`$, $``$ is defined by $$\langle e|f\rangle :=\sum _{\gamma \in \mathrm{\Gamma }}\langle T_\gamma e|f\rangle _{L^2(E)}\,L_\gamma $$ for a vector bundle $`E`$ over $`X`$ with lift $`T_\gamma `$ of the $`\mathrm{\Gamma }`$-action; $`\tau `$ is the canonical trace, $`L_\gamma `$ the left regular representation of $`\mathrm{\Gamma }`$ on $`l^2(\mathrm{\Gamma })`$. This reproduces .
### Gauge-periodic elliptic operators
This case is as above, but additionally with a projective lift $`U_\gamma `$ of the action such that $$U_{\gamma _1}U_{\gamma _2}=\mathrm{\Theta }(\gamma _1,\gamma _2)U_{\gamma _1\gamma _2}.$$ Therefore $`\mathrm{\Theta }`$ defines a group cohomology class $`[\mathrm{\Theta }]\in H^2(\mathrm{\Gamma },S^1)`$, and $`𝒜=C_{red}^{*}(\mathrm{\Gamma },\mathrm{\Theta })`$ is a twisted reduced group C-algebra; the remaining data are as above. In particular $`𝒜`$ is a rotation algebra $`𝒜_\alpha `$ for the $`\mathbb{Z}^2`$-periodic magnetic Schrödinger operator, where $`\alpha `$ denotes the magnetic flux. If $`\alpha `$ is rational then $`𝒜`$ has the Kadison property, so that $`𝒦_𝒜()`$ has the Kadison property, too, and $`D`$ has band spectrum (reproducing ). If $`\alpha `$ is irrational then $`𝒜`$ has $`RRI_0`$, so that $`𝒦_𝒜()`$ has $`RRI_0`$, too, () and Cantor spectrum is weakly generic.
### Hofstadter model, quantum pendulum
This is the case of the difference equations known as almost Mathieu, Hofstadter type or quantum pendulum equations, arising in several models in solid state physics (Peierls substitution, mesoscopic systems) as well as in integrable systems. Here we have just a trivial Hilbert module $`𝒜==𝒜_\alpha `$ over a rotation algebra. Therefore, the results are as above.
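For orientation, the operator behind this last example, written in its standard form (which the text does not display and which we add here as a hedged illustration), is the almost Mathieu (Harper/Hofstadter) difference operator on $`l^2(Z)`$,
$$(H_{\alpha ,\lambda ,\theta }\psi )(n)=\psi (n+1)+\psi (n-1)+2\lambda \mathrm{cos}\left(2\pi (n\alpha +\theta )\right)\psi (n),$$
which lies in (a representation of) the rotation algebra $`𝒜_\alpha `$; for rational $`\alpha `$ one finds band spectrum, while for irrational $`\alpha `$ Cantor-type spectra occur, in line with the dichotomy stated above.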
no-problem/9901/nucl-th9901015.html
ar5iv
text
# Saturation of product’s exoticity in compound nuclear reactions and its role in the production of new 𝑛-deficient nuclei with radioactive projectiles.
## Abstract
Representation in terms of a new parameter, exoticity, a measure of $`n`$-deficiency or $`p`$-richness, clearly brings out the saturation tendency of the product’s maximum exoticity in a compound nuclear reaction as the compound nucleus is made more and more exotic using radioactive projectiles (RIBs). The effect of this saturation on the production of new proton-rich species with RIBs over a wide $`Z`$-range has been discussed. PACS: 25.60.Dz, 21.10.Dr Keywords: RIB; CN Reaction; Exoticity; Saturation of product’s exoticity. The compound nuclear reaction has been used extensively in the last two decades for producing neutron-deficient ($`n`$-deficient) or proton-rich ($`p`$-rich) nuclei away from $`\beta `$-stability. The fusion of two $`\beta `$-stable heavy ions leads in most cases to a neutron-deficient compound system, and the lightest (most $`n`$-deficient) compound nucleus of any atomic number $`Z`$ can be reached through a rather symmetric combination of target and projectile, e.g. $`{}^{50}Cr+{}^{54}Fe\rightarrow {}^{104}Sn`$, $`{}^{58}Ni+{}^{74}Se\rightarrow {}^{132}Sm`$, $`{}^{64}Zn+{}^{96}Ru\rightarrow {}^{160}W`$ etc. The evaporation of a neutron from such a compound nucleus (CN) takes the residue or product towards the $`p`$-drip line, while the evaporation of a proton or an $`\alpha `$-particle brings it closer to the $`\beta `$-stability line as compared to the said compound nucleus. The Coulomb barriers (CB) for the proton and the $`\alpha `$ usually make the evaporation of these particles energetically more costly than neutron evaporation, and therefore neutron evaporation is usually more favoured. To the extent that $`n`$-evaporation dominates, one always gains, vis-a-vis the production of exotic species, by choosing appropriate projectile-target combinations leading to the formation of the lightest compound systems. The possible availability of low energy (around the Coulomb barrier) radioactive ion beams (RIBs) in the near future will certainly allow the formation of even lighter CN systems (as compared to the 'lightest' CN systems possible with stable projectile - stable target combinations), and it is important to assess to what extent these lighter CN systems can help in the production of new $`n`$-deficient nuclei in or around the $`p`$-drip line. In other words, it is important to assess to what extent the naive expectation "the more exotic the compound nucleus, the more exotic the product" can be extrapolated as the compound nucleus becomes lighter and lighter. The above expectation is known to hold for compound systems up to a certain distance from the $`\beta `$-stability line, but as one moves farther and farther away the binding energy of the last proton decreases very rapidly and that of the neutron increases sharply . For any given $`Z`$, if the mass number $`A`$ of the compound system is less than a certain value, the energy cost of the last proton (binding energy + CB for the proton) becomes actually lower than that of a neutron, thereby making $`p`$-evaporation a more likely process. The formation of compound systems beyond a certain extent of $`n`$-deficiency thus may not lead, given any limit of production cross-section, to the production of more $`n`$-deficient products. Intuitively, therefore, there is a possibility of saturation in the $`n`$-deficiency of the products.
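For a feel of the numbers entering the "energy cost" argument above, here is a rough, order-of-magnitude estimate of the Coulomb barrier seen by an evaporated proton. The radius parametrization, the value of $`r_0`$ and the specific daughter nucleus are textbook-style assumptions made for illustration only; none of them is taken from the paper.

```python
def proton_coulomb_barrier(z_daughter, a_daughter, r0=1.3):
    """Rough Coulomb barrier (MeV) for a proton at the daughter-nucleus surface."""
    e2 = 1.44                                        # e^2/(4*pi*eps0) in MeV*fm
    radius = r0 * (a_daughter ** (1.0 / 3.0) + 1.0)  # fm; '+1' ~ proton radius
    return e2 * z_daughter / radius

# Hypothetical example: proton emission from a very n-deficient Ce compound
# nucleus, leaving a La (Z = 57), A ~ 126 daughter.
print(f"{proton_coulomb_barrier(57, 126):.1f} MeV")   # roughly 10 MeV
```

A barrier of this size, added to a proton binding energy that shrinks rapidly far from stability, is what eventually makes proton evaporation compete with neutron evaporation.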
It is important to note that one needs to set a lower limit for the production cross-section because, to the extent that no basic conservation laws (e.g. charge number, mass number etc.) are violated, products of any exoticity can be obtained, in principle, from a given compound nucleus if one stops bothering whether the production cross-section is 1 mb or, say, a millionth of a millibarn. The lightest possible compound nuclei with stable projectile - stable target combinations, and obviously the even lighter compound nuclei which can be formed with $`p`$-rich RIBs, fall mostly in the domain where the effective separation energy of the last proton is either equal to or less than that of the neutron. The product pattern in such cases is therefore expected to show the effect of the possible saturation, that is, the formation of more $`n`$-deficient compound systems may not lead, given any limit of production cross-section, to more $`n`$-deficient products. To see whether there is indeed any saturation or not one needs, at first, to coin a definition of $`n`$-deficiency which is independent of $`Z`$. This is because the products of a given compound nucleus can have a range of $`Z`$ values, starting from $`Z_{CN}`$ ($`Z`$ of the CN) down to, say, 7 or 8 units of atomic number less than $`Z_{CN}`$. To compare which one of any two products of different $`Z`$ is more $`n`$-deficient we need a $`Z`$-independent description of $`n`$-deficiency, so that products of different $`Z`$’s can be considered on the same footing. This can be achieved by defining a new parameter which we prefer to call "exoticity". Keeping in mind that the absolute value of the $`n`$-deficiency, $`A_s-A`$ (where $`A`$ is the mass number of the nucleus of atomic number $`Z`$ and $`A_s`$ is the mass number of the most $`\beta `$-stable isotope for the same $`Z`$), alone cannot be taken as a measure of 'exoticity' of the compound nucleus or the product, since the $`p`$-drip line is only a few neutrons away at lower $`Z`$ values while it is a few tens of neutrons away at higher $`Z`$'s, we choose to define the 'exoticity' as $`\zeta =1-(A-A_d)/(A_s-A_d)`$ (1) where $`A`$ is the mass number of the nucleus of atomic number $`Z`$ and $`A_d`$ is the mass number of the isotope at the drip line corresponding to the same $`Z`$. The exoticity is equal to 1 on the drip line, is zero on the stability line, and is greater than one beyond the drip line. The mass numbers $`A_d`$ were chosen from the compilation of Janecke and Masson , which gives the proton drip line over a wide range. The second hindrance in estimating the capability of a given compound nucleus to produce exotic products with production cross-section greater than any arbitrarily chosen limit is the excitation energy dependence of the production cross-section and of the product distribution, which is typical of compound nuclear reactions. This makes any description involving the products themselves unsuitable for the purpose (estimating the capacity of a CN to produce exotic products), since with increasing excitation energy more and more new channels open up, changing the product distribution and also the cross-section of a given product.
To see whether or not any representation independent of excitation energy is possible, we have plotted in figure 1, as a typical example, the exoticity of the most exotic product produced with cross-section greater than 1 mb as a function of the excitation energy for different compound nuclei of Ce (cerium) having different exoticities. The cross-section values were computed using the code ALICE . The plot reveals the interesting feature that at lower CN exoticities an increase in the excitation energy leads to more exotic products, but beyond a certain value of the CN exoticity the product's exoticity becomes practically independent of the excitation energy. It is important to note that the excitation energy independence of the product's exoticity does not mean that the 'most exotic product' satisfying the minimum cross-section criterion ($`1`$ mb in this case) will remain the same at all excitation energies. At a given excitation energy there will be one product (given the limit of production cross-section) which is most exotic. If one varies the excitation energy, the most exotic product satisfying the minimum cross-section criterion may be a different isotope, but if one calculates its exoticity it will be almost the same as that of the most exotic product at the earlier excitation energy. Further, the compound systems of cerium for which the excitation energy dependence practically vanishes or becomes very weak are those which are lighter than the CN for which the separation energy of a neutron equals the effective separation energy of a proton, that is $`B_n=B_p^{*}`$ ($`B_p^{*}=B_p+CB`$). For cerium $`B_n`$ equals $`B_p^{*}`$ for $`\zeta _{CN}=0.54`$ ($`A`$ = 127). Compound systems of other $`Z`$ values also exhibit a similar dependence of the product's exoticity on the excitation energy. In this study we attempt to estimate the production of very $`p`$-rich exotic nuclei from compound nuclei formed by the use of $`p`$-rich RIBs and having $`Z`$ in the range $`50\leq Z_{CN}\leq 82`$. In this $`Z`$-range the Coulomb barrier for protons is quite high (favouring production of exotic species), and also the compound nuclear formation cross-section and its subsequent decay by light particle emission constitute a major fraction of the total reaction cross-section for projectile energies not much above the Coulomb barrier. The compound nuclear systems of interest in the present study are those which are more exotic than the compound nuclei for which $`B_n`$ and $`B_p^{*}`$ are equal. For example, for Ce the lightest CN that can be reached with a stable target - stable projectile combination is $`{}^{122}Ce`$, for which $`\zeta _{CN}=0.75`$. In the domain of our interest, therefore, an excitation energy independent description is possible if one chooses a representation in terms of the exoticity of the compound nucleus and of the product, rather than the usual representation in terms of $`A`$ and $`Z`$ of the products. It is important to mention here that an accurate estimation of the cross-sections of very exotic products is not possible no matter which one of the presently available codes such as ALICE, CASCADE, PACE etc. is used for the purpose. Our intention is, therefore, not to predict accurate cross-section values in a number of specific cases but to examine whether there is indeed any saturation of the product's exoticity and what its implications are for the production of new nuclei with RIBs.
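As a small numerical illustration of how Eq. (1) behaves (our own check, not from the paper): the stable cerium mass number is $`A_s=140`$, while the drip-line mass number $`A_d`$ is not quoted explicitly in the text, so the value used below is back-solved from the quoted exoticities and should be read as an assumption.

```python
def exoticity(A, A_s, A_d):
    """Eq. (1): 0 on the beta-stability line, 1 on the proton drip line."""
    return 1.0 - (A - A_d) / (A_s - A_d)

A_s_Ce, A_d_Ce = 140, 116   # cerium; A_d = 116 is an inferred, illustrative value

for A, quoted in [(127, 0.54), (122, 0.75)]:
    print(f"A = {A}: zeta = {exoticity(A, A_s_Ce, A_d_Ce):.2f} (text quotes {quoted})")

# A = 127: zeta = 0.54 (text quotes 0.54)
# A = 122: zeta = 0.75 (text quotes 0.75)
```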
The dependence of $`\zeta _p^{max}`$ on $`\zeta _{CN}`$ is shown in figure 2, where the $`\zeta `$’s for odd-$`Z`$ products of maximum exoticity (that is, $`\zeta _p^{max}`$) produced from compound nuclei of representative even $`Z`$’s, with cross-sections greater than $`1`$ mb, are plotted against the corresponding $`\zeta _{CN}`$’s. Exoticities of the compound nucleus greater than one (i.e. beyond the drip line) have also been considered. This is because the $`p`$-decay lifetime, due to the existence of the CB, is expected to be longer than the CN decay time up to a certain distance beyond the $`p`$-drip line, and one can attempt to form such compound systems so as to produce the most $`p`$-rich products. To decide how far beyond $`\zeta _{CN}=1`$ one can go, the ground state $`p`$-decay lifetimes, which depend on the barrier (Coulomb and centrifugal) penetration probabilities, have been calculated for the compound nuclei beyond the $`p`$-drip line for various angular momenta using the WKB approximation. Only those compound nuclei for which the ground state $`p`$-decay lifetimes have been found (for $`l=0`$, to ensure a very conservative calculation) to be greater than $`10^{-14}`$ sec. (once again to ensure a conservative estimate) are considered. It can be seen from figure 2 that at each $`Z_{CN}`$, beyond a certain $`\zeta _{CN}`$, $`\zeta _p^{max}`$ shows a saturation tendency, and the value of $`\zeta _p^{max}`$ at saturation moves, as expected, towards higher values of exoticity as $`Z`$ increases. The gradients of the curves at the various $`Z_{CN}`$’s are an artefact of the relative binding energies, or the evaporation probabilities, of mainly protons and alphas at the corresponding $`Z_{CN}`$’s. The curves shown in figure 2 clearly bring out the limitation, as a result of the saturation, of the concept of forming more and more exotic compound systems for the production of more and more exotic $`p`$-rich species. The effect of the saturation is, however, not so serious vis-a-vis the production of odd-$`Z`$ nuclei on or around the drip line. For odd-$`Z`$ nuclei, the drip line can be reached for all nuclei having $`Z\geq 51`$ with cross-sections $`\geq `$ 1 mb. This is shown in fig. 3, where the odd-$`Z`$ products of maximum exoticity that can be produced with RIBs which are 4 neutrons away (deficient) from the lightest $`\beta `$-stable isotopes are shown in the $`N`$-$`Z`$ plane with two different cross-section limits of 1 mb and 0.01 mb. In the cross-section limit of 0.01 mb the drip line can be reached for all nuclei with $`Z\geq 45`$ with only 4-neutron-deficient RI projectiles. For even-$`Z`$ products, however, the drip line can be reached with the same projectiles (4-neutron deficient), as the calculation reveals, only for $`Z\geq 80`$ with the 1 mb cross-section limit. If the cross-section limit is relaxed to about 0.01 mb, the even-$`Z`$ drip line can be reached for nuclei with $`Z\geq 66`$. The saturation thus seriously affects the prospect of reaching the $`p`$-drip line with reasonable cross-sections, say $`10\mu `$b, for even-$`Z`$ species with $`Z`$ below about 66. One can, however, consider the use of more exotic projectiles to reach the even-$`Z`$ drip line below this value, although the beam intensity is likely to fall rather sharply with the exoticity. It is important to note, however, that various other factors, e.g. the signal to noise ratio, detection efficiency, the type of measurement etc., together decide the lower limit of cross-section and the intensity of RIB that one needs in any given situation.
The saturation thus in no way puts any absolute restriction and should rather be considered as a hindrance to be overcome by putting more effort into increasing the beam intensity of RIBs, the detection efficiency, background rejection, etc. It is important to note, in the context of the discussions above, that the production of new nuclei with RI projectiles usually offers a number of advantages as compared to the production of the same nuclei with $`\beta `$-stable projectiles. An estimation of these advantages is necessary to decide the minimum usable beam intensity and the production cross-section (the product of these two represents a sort of quality factor) in a given situation. To illustrate the possible advantages of RIBs we have plotted in figure 4 the estimated cross-sections of isotopes of $`Z=70`$ products from three different compound systems of $`W`$, i.e. $`Z=74`$. The compound nucleus of minimum exoticity, $`\zeta _{CN}`$ = 0.76, is the lightest one that can be formed from a stable projectile - stable target combination ($`{}^{64}Zn+{}^{96}Ru`$). The other two compound nuclei, of $`\zeta _{CN}`$’s 0.92 and 1.07, are formed respectively from $`{}^{60}Zn+{}^{96}Ru`$ and $`{}^{56}Zn+{}^{96}Ru`$. These curves clearly bring out the advantage of RIBs in terms of enhanced production cross-section and in terms of enhancing the signal to background ratio. For example, it can be seen from figure 4 that the production cross-section of $`{}^{150}Yb`$ (a new and very exotic nucleus) with the RIB ($`{}^{60}Zn`$, $`\zeta _{CN}`$ = 0.93) is about $`300`$ times larger than that with the stable projectile ($`{}^{64}Zn`$, $`\zeta _{CN}`$ = 0.76), and, what is equally important for experimental measurements, the relative production pattern changes. With the stable projectile, the production cross-section of $`{}^{150}Yb`$ is almost four orders of magnitude smaller than that of the most favoured channel, whereas in the RIB case ($`\zeta _{CN}=0.93`$) $`{}^{150}Yb`$ is produced with the maximum cross-section. Such situations are very favourable in that they push down the lower limit of the needed RI beam intensity (for detection and other measurements) by several orders of magnitude, or, conversely, the lower limit of production cross-section is pushed down, allowing measurements on even more exotic species. In this communication, we have attempted to address the question of to what extent one can hope to produce more exotic products by realising more and more exotic compound systems using radioactive projectiles. It has been shown that the exoticity of the product saturates beyond a certain value of the exoticity of the CN. The value of the compound nucleus’ exoticity at which the saturation occurs depends on the atomic number $`Z`$ and moves, as expected, towards higher values of exoticity as $`Z`$ increases. The conclusions reached are practically independent of the excitation energy of the compound nuclei as long as it is a few tens of MeV above the Coulomb barrier. This new revelation has important consequences for the production of new $`p`$-rich species using $`p`$-rich projectiles (RIBs). It tends to limit, to an extent, the utility of very $`p`$-rich projectiles vis-a-vis the production of new $`p`$-rich species in or around the $`p`$-drip line.
While the saturation does not too adversely affect the production of new odd-$`Z`$ nuclei, for which the proton drip line can be reached with reasonable cross-sections (about $`10\mu `$b) for all elements with $`Z\geq 45`$ using only moderately $`p`$-rich projectiles (4-neutron deficient), it does make the production of $`p`$-drip line nuclei for even $`Z`$ (especially for $`Z`$ up to about 65) difficult with moderate production cross-sections unless one uses quite exotic projectiles. Acknowledgement: The authors gratefully acknowledge the support and encouragement received from Dr. Bikash Sinha for this work. They are also grateful to Dr. J.N.De and Dr. Santanu Pal for many helpful discussions and suggestions. Figure Captions. Figure-1. Dependence of the maximum exoticity of the products on the excitation energy (in MeV) of compound nuclei of different $`\zeta _{CN}`$. Figure-2. The variation of the maximum exoticity $`\zeta _p^{max}`$ for odd-$`Z`$ products with the exoticity $`\zeta _{CN}`$ of even-$`Z`$ compound nuclei. Figure-3. The $`\beta `$-stability line along with the proton drip line and lines showing the extent of production of exotic nuclei with RIBs which are only four neutrons deficient compared to the lightest stable projectiles of corresponding $`Z`$ (within the cross-section limits of 1 mb and 0.01 mb). Figure-4. The cross-sections of $`Z=70`$ isotopes against mass number $`A`$ at representative $`\zeta _{CN}`$ values.
no-problem/9901/quant-ph9901013.html
ar5iv
text
# Questions on the concept of time
## Abstract
Some notes and questions about the concept of time are presented. Particular reference is given to the problem in quantum mechanics, in connection with the indeterminacy principle. PACS: 03.65.-w Quantum mechanics. 03.65.Bz Foundations, theory of measurement, miscellaneous theories. “How much time”, “It took quite some time”, “There is plenty of time”. These are just a few examples of our common way of thinking about time: an interval between two instants. In physics as well, time is seen as an interval. Asher Peres stressed that the measurement of time is the observation of a dynamical variable whose law of motion is known and which is uniform and constant in time . There is a kind of self-reference in this definition, and nothing is said about time itself. On the other hand, time is considered simply as a parameter and, according to this, the above definition is completely satisfactory. Moreover, time is sometimes neglected (e.g. in steady state phenomena), which is useful for understanding some physical concepts. However, when we deal with quantum mechanics the problem of time explodes in all its complexity. In classical mechanics (hamiltonian formulation) the dynamical state of a physical system is described by a point in a phase space, that is, we have to know the position $`q`$ and the momentum $`p`$ at a given time $`t`$. Even though it can appear a sophism, it is not possible, strictly speaking, to know $`q`$ and $`p`$ of any object simultaneously. However, in classical physics we may neglect variations during the lapse of time between the measurement of $`q`$ and $`p`$, because the quantum of action is so small when compared to macroscopic actions. It is very interesting to note how Sommerfeld stressed this question when he wrote about Hamilton's principle of least action: the trajectory points $`q`$ and $`q+\delta q`$ are considered *at the same time instant* . In quantum mechanics this approximation is not valid, because actions are comparable with the quantum of action. The hamiltonian formalism is no longer a useful language for investigating nature, and, as is well known, it was necessary to formulate quantum mechanics. The impossibility of neglecting time in quantum mechanics is well described by Heisenberg's principle of indeterminacy. Nevertheless, the role of time in quantum indeterminacy is often neglected. In the history of physics, we can often find authors who claimed to have found a way to avoid the obstacle of indeterminacy. However, they all missed the target, that is, the question of time. Heisenberg clearly stated that the indeterminacy relationships do not allow a simultaneous measurement of $`q`$ and $`p`$, while they do not prevent measuring $`q`$ and $`p`$ taken in isolation . It is possible to measure, with great precision, complementary observables at two different time instants: this is not forbidden by Heisenberg's principle. Later on, it is also possible to reconstruct one of the observables at the reference time of the other observable, but this is questionable. In the interval between the two measurements the observables change because time flows. What happened during this interval? We can reconstruct observables by making hypotheses, but we have to remember that these are hypotheses and not measurements. We have to take into account the so-called "energy-time uncertainty relationship". As is known, time is a $`c`$-number and therefore it has to commute with every operator.
Nevertheless, the relationship exists, but it is worth noting its dynamical nature, whereas the indeterminacy is kinematic . That is, it follows from the evolution of the system during the measurement. Bohr had already stated this, and he had often pointed out the time issue , along with Landau and Peierls . We refer to , in which the question is stated in a better way. The relationship: $$\mathrm{\Delta }E\mathrm{\Delta }t>\hbar $$ (1) means that we have to consider the evolution of the system during the measurement, that is, the difference between the measurement result and the state after the measurement. The energy difference between the two states cannot be less than $`\hbar /\mathrm{\Delta }t`$. The energy-time relationship has important consequences, particularly as regards the momentum measurement and, therefore, the double-slit experiment . Eq. (1) suggests that, given a certain energy, it is possible to construct a state with a huge $`\mathrm{\Delta }E`$ in order to obtain a very small $`\mathrm{\Delta }t`$. However, in a recent paper, Margolus and Levitin give a strict bound that depends on the difference between the average energy of the system and its ground state energy. Is it a step toward a quantization of time? In addition, if we consider the equation of motion (written with Dirac's notation ): $$i\hbar \frac{d|Pt>}{dt}=H(t)|Pt>$$ (2) we can see that $`H(t)`$ is $`i\hbar `$ times an operator of time translation. If the system is closed we can consider $`H`$ constant and equal to the total energy of the system; but if not, if the energy depends on time, this means that the system is under the action of external forces (e.g. a measurement). The measurement introduces an energy exchange that does not follow causality. Moreover, it is worth noting that a closed system is an abstraction. A real closed system is not observable without introducing an energy exchange, which would change $`H`$: therefore it would not be a closed system. We can say, in Rovelli's words , that there is no way to get information about a system without physically interacting with it for a certain time . Would you consider it a sophism? Of course not. We should always bear in mind that quantum physics is only an interpreted language we use to speak about Nature, though it does not describe Nature itself (on the logic-linguistic structure of quantum physics see, for example, ). In classical physics we made many approximations, which are no longer valid in quantum physics. In particular, we can no longer neglect time. As Heraclitus stated, you cannot plunge your hands twice in the same stream.
no-problem/9901/math-ph9901003.html
ar5iv
text
# Transfer matrices for scalar fields on curved spaces
## I Introduction
We start our construction from the ideas contained in Nelson's axioms for scalar Euclidean-Markoff quantum fields . Here, the Markoff property of certain projectors is one of the basic ingredients in defining the transfer matrix, whose generator is identified with the Hamiltonian of the Wightman quantum scalar field. We found that these ideas can be used in the same way at the non-quantum level. In the case of scalar fields on Riemannian manifolds, for an arbitrary direction, we construct a propagator by using the Markoff property. In the stationary case it becomes a semigroup, which can be considered as the transfer matrix of the system and, further, can be used to introduce a Hamiltonian. We will show that the propagator is exponentially bounded by using Agmon's results on the exponential decay of solutions of second-order elliptic equations. An application concerning the decoupling (in the sense of ) of two disjoint non-convex regions is given.
## II Introductory definitions and results
Let us consider the Riemannian manifold $`(R^{n+1},g)`$ and the Laplace-Beltrami operator on it, $`\mathrm{\Delta }`$. For a point in $`R^{n+1}`$ we use the notation $`(t,x)`$. Let $`E_m(t,x;s,y)`$ be the kernel of $`\left(\mathrm{\Delta }+m^2\right)^{-1}`$ on $`L^2(R^{n+1},\sqrt{g}dtdx)`$. As in , we will not consider the additional term $`{\displaystyle \frac{1}{6}}\rho .`$ One defines the space $`N\subset 𝒟^{\prime }\left(R^{n+1}\right)`$, $`f\in N`$ if: $$\|f\|_N^2=\int _{R^{n+1}}\int _{R^{n+1}}\overline{f}(t,x)E_m(t,x;s,y)f(s,y)\sqrt{g(t,x)}\sqrt{g(s,y)}\,dt\,dx\,ds\,dy<\infty ,$$ (1) and, for each $`\sigma \in R`$, let $`N_\sigma \subset D^{\prime }\left(R^n\right)`$ be the space: $`g\in N_\sigma `$ if $$\|g\|_{N_\sigma }^2=\int _{R^n}\int _{R^n}\overline{g}\left(x\right)E_m(\sigma ,x;\sigma ,y)g\left(y\right)\sqrt{g(\sigma ,x)}\sqrt{g(\sigma ,y)}\,dx\,dy<\infty .$$ (2) We will assume that, as in the Euclidean case, the space $`L^2(R^n,d\mu _\sigma )\subset N_\sigma `$, where $`d\mu _\sigma \left(x\right)=\sqrt{g(\sigma ,x)}d^nx`$, and that it is dense in $`N_\sigma `$ for each $`\sigma \in R`$. Now, let $`\widehat{E}_\sigma :N_\sigma \rightarrow L^2(R^n,d\mu _\sigma )`$ be the operator corresponding to the kernel $`E_m(\sigma ,x;\sigma ,y)`$. Then $`\widehat{E}_\sigma ^{1/2}`$ defines an isometry from $`N_\sigma `$ to $`L^2(R^n,d\mu _\sigma )`$; let $`\left(\widehat{E}_\sigma ^{1/2}\right)^{*}:L^2(R^n,d\mu _\sigma )\rightarrow N_\sigma `$ be its adjoint. The following are true: $$\widehat{E}_\sigma ^{1/2}\left(\widehat{E}_\sigma ^{1/2}\right)^{*}=1_{L^2(R^n,d\mu _\sigma )}\text{ and }\left(\widehat{E}_\sigma ^{1/2}\right)^{*}\widehat{E}_\sigma ^{1/2}=1_{N_\sigma }.$$ (3) With our assumptions, $`\widehat{E}_\sigma ^{1/2}\left(N_\sigma \right)=L^2(R^n,d\mu _\sigma )\subset N_\sigma `$, and the operator $`\widehat{E}_\sigma ^{1/2}`$ is bounded on $`N_\sigma `$. Moreover, one can view the inverse $`\widehat{E}_\sigma ^{-1}`$ as a densely defined unbounded operator on $`N_\sigma `$. For $`\sigma \in R`$, let $`j_\sigma `$ be the operator $`j_\sigma :N_\sigma \rightarrow N`$, $`\left(j_\sigma \psi \right)(t,x)=\psi \left(x\right)\delta \left(t-\sigma \right)`$, and let $`j_\sigma ^{*}`$ be its adjoint. If $`\mathrm{\Lambda }`$ is a closed subset of $`R^{n+1}`$ we denote by $`N_\mathrm{\Lambda }`$ the subspace of $`N`$ which comprises all distributions with support in $`\mathrm{\Lambda }`$. The orthogonal projection of $`N`$ onto $`N_\mathrm{\Lambda }`$ will be denoted by $`e_\mathrm{\Lambda }`$.
Following we have: ###### Proposition 1 The operators $`j_\sigma `$ are isometries and $`j_\sigma ^{}j_\sigma =1_{N_\sigma }`$, $`j_\sigma j_\sigma ^{}=e_\sigma `$, where $`e_\sigma `$ denotes the projector corresponding to the subset of $`R^{n+1}`$, $`t=\sigma `$. Then we define the operators: $$U_{\sigma ,\sigma ^{}}:N_\sigma ^{}N_\sigma \text{}U_{\sigma ,\sigma ^{}}=j_\sigma ^{}j_\sigma ^{}\text{.}$$ (4) We will derive in the following that $`U_{\sigma ,\sigma ^{}}`$ are propagators in the sense of . This will follow from the Markoff property of the projectors $`e_\sigma `$. ###### Lemma 2 Let $`A`$, $`B`$ and $`C`$ be closed subsets in $`R^{n+1}`$ such that $`C`$ separates $`A`$ and $`B`$. Then $`e_Ae_Ce_B=e_Ae_B`$. ###### Solution 3 This is the consequence of the fact that $`E_m`$ is the kernel of a local operator. The proof is identic with that of . The basics properties of $`U_{\sigma ,\sigma ^{}}`$ operators are stated in the following proposition. ###### Proposition 4 The family of operators $`U_{\sigma ,\sigma ^{}}`$, $`\sigma `$, $`\sigma ^{}R`$ has the following properties: 1) $`U_{\sigma ,\sigma ^{}}U_{\sigma ^{},\sigma ^{\prime \prime }}=U_{\sigma ,\sigma ^{\prime \prime }}`$ 2) $`U_{\sigma ,\sigma }=1_{N_\sigma }`$ 3) $`U_{\sigma ,\sigma ^{}}1`$. ###### Solution 5 1) Using the Markoff property we have: $$e_\sigma e_\sigma ^{}e_{\sigma ^{\prime \prime }}=e_\sigma e_{\sigma ^{\prime \prime }}j_\sigma j_\sigma ^{}j_\sigma ^{}j_\sigma ^{}^{}j_{\sigma ^{\prime \prime }}j_{\sigma ^{\prime \prime }}^{}=j_\sigma j_\sigma ^{}j_{\sigma ^{\prime \prime }}j_{\sigma ^{\prime \prime }}^{}.$$ (5) By composition with $`j_{\sigma ^{\prime \prime }}`$ at the right, we have $$j_\sigma \left(j_\sigma ^{}j_\sigma ^{}j_\sigma ^{}^{}j_{\sigma ^{\prime \prime }}j_\sigma ^{}j_{\sigma ^{\prime \prime }}\right)=0.$$ (6) From the definition of $`U_{\sigma ,\sigma ^{}}`$ and since $`j_\sigma `$ are isometries, we conclude $`U_{\sigma ,\sigma ^{}}U_{\sigma ^{},\sigma ^{\prime \prime }}=U_{\sigma ,\sigma ^{\prime \prime }}`$. 2) It follows from proposition 1.1 and definition of $`U_{\sigma ,\sigma ^{}}`$. 3) Because $`j_\sigma ^{}`$ and $`j_\sigma `$ are isometries, the property results immediately. ## III Exponential bounds on propagators To improve our estimates on the propagators $`U_{\sigma ,\sigma ^{}}`$ we need a supplementary condition on the metric $`g`$. We say that an application $`Q:R^{n+1}M(n+1,n+1)`$ has stable positivity if there exists $`\epsilon >0`$ such that for any application $`\delta :R^{n+1}M(n+1,n+1)`$ with $`\left|\delta \left(x\right)^{ij}\right|\epsilon `$ the matrices $`Q\left(x\right)\delta \left(x\right)`$ are positive defined for any $`xR^{n+1}`$. The following result is a direct application of Agmon theory of exponentially decay of solutions of elliptic second order operators. ###### Proposition 6 If the metric $`g`$ has stable positivity then for any $`fN_\sigma ^{}`$: $$_{T_0}^{\mathrm{}}𝑑\sigma \{e^{\omega \sigma }\widehat{E}_\sigma ^{1/2}U_{\sigma ,\sigma ^{}}f_{N_\sigma }\}^2<\mathrm{}\text{,}$$ (7) provided $`\omega <{\displaystyle \frac{m}{\sqrt{supg^{11}}}}`$. 
###### Solution 7 Starting from $$\begin{array}{c}u,U_{\sigma ,\sigma ^{}}f_{N_\sigma }=u,\widehat{E}_\sigma U_{\sigma ,\sigma ^{}}f_{L^2(R^n,d\mu _\sigma )}\hfill \\ =_{R^n}\overline{u}\left(x\right)\left[_{R^n}E_m(\sigma ,x;\sigma ^{},y)f\left(y\right)𝑑\mu _\sigma ^{}\left(y\right)\right]𝑑\mu _\sigma \left(x\right)\hfill \end{array}$$ (8) for $`uN_\sigma `$ and $`fN_\sigma ^{}`$, it follows that $`\phi (\sigma ,x)=\left(\widehat{E}_\sigma U_{\sigma ,\sigma ^{}}f\right)\left(x\right)`$ is a solution of $$\left(\mathrm{\Delta }+m^2\right)\phi (\sigma ,x)=0$$ (9) for $`\sigma >\sigma ^{}`$. Let $`\rho _m(;)`$ denotes the distance corresponding to the metric $`g_m=mg`$. The metric $`g`$ has stable positivity so, there is an $`\epsilon R_+`$ such that $`\rho _m(\sigma _0,x_0;\sigma ,x)>{\displaystyle \frac{\epsilon }{m}}\left|\sigma \sigma _0\right|`$. For $`\mathrm{\Omega }=\{(\sigma ,x):\sigma >T_0\}`$, $`T_0R_+`$ and for some positive $`\lambda `$: $$\begin{array}{c}_\mathrm{\Omega }\left|\phi (\sigma ,x)\right|^2e^{\lambda \rho _m(T_0,x_0;\sigma ,x)}\sqrt{g(\sigma ,x)}𝑑\sigma d^nx\hfill \\ =_{T_0}^{\mathrm{}}𝑑\sigma \widehat{E}_\sigma U_{\sigma ,\sigma ^{}}f,\widehat{E}_\sigma U_{\sigma ,\sigma ^{}}f_{L^2(R^n,d\mu _\sigma )}e^{\lambda \frac{\epsilon }{m}\left(\sigma T_0\right)}\hfill \\ <ct._{T_0}^{\mathrm{}}𝑑\sigma U_{\sigma ,\sigma ^{}}f,\widehat{E}_\sigma U_{\sigma ,\sigma ^{}}f_{L^2(R^n,d\mu _\sigma )}e^{\lambda \frac{\epsilon }{m}\left(\sigma T_0\right)}\hfill \\ =ct._{T_0}^{\mathrm{}}𝑑\sigma U_{\sigma ,\sigma ^{}}f_{N_\sigma }^2e^{\lambda \frac{\epsilon }{m}\left(\sigma T_0\right)}<\mathrm{}\text{.}\hfill \end{array}$$ (10) So we are in the conditions of the main theorem of . It follows that: $$\begin{array}{c}_\mathrm{\Omega }𝑑\sigma d^nx\sqrt{g(\sigma ,x)}\left|\phi (\sigma ,x)\right|^2\left(m^2g(h(\sigma ,x),h(\sigma ,x))\right)e^{2h(\sigma ,x)}\hfill \\ \frac{2\left(1+2d\right)}{d^2}m^2_{\mathrm{\Omega }\mathrm{\Omega }_d}\left|\phi (\sigma ,x)\right|^2e^{2h(\sigma ,x)}\sqrt{g(\sigma ,x)}𝑑x\text{,}\hfill \end{array}$$ (11) where $`d`$ is a positive number and $`\mathrm{\Omega }_d=\{(\sigma ,x)\mathrm{\Omega }:\rho _m((\sigma ,x),\left\{\mathrm{}\right\})>d\}`$. Here $$\rho _m((\sigma ,x),\left\{\mathrm{}\right\})=sup\{\rho _m((\sigma ,x),\mathrm{\Omega }K):K\text{ is a compact subset of }\mathrm{\Omega }\}\text{.}$$ (12) The function $`h`$ is any function which satisfies the condition $`g(h(\sigma ,x),h(\sigma ,x))<m^2`$. We choose $`h(\sigma ,x)=\omega \sigma `$ with $`\omega <{\displaystyle \frac{m}{\sqrt{supg^{11}}}}`$. The above inequality becomes $$\begin{array}{c}_\mathrm{\Omega }𝑑\sigma d^nx\sqrt{g(\sigma ,x)}\left|\phi (\sigma ,x)\right|^2e^{2\omega \sigma }\hfill \\ <\frac{2\left(1+2d\right)}{d^2}\frac{m^2}{m^2\omega ^2}_{\mathrm{\Omega }\mathrm{\Omega }_d}𝑑\sigma 𝑑x\sqrt{g(\sigma ,x)}\left|\phi (\sigma ,x)\right|^2e^{2\omega \sigma }\text{.}\hfill \end{array}$$ (13) If for any point $`(\sigma ,x)\mathrm{\Omega }`$ there is a geodesic which starts in $`(\sigma ,x)`$ and ends in the hyperplane $`\sigma =T_0`$ then $`\mathrm{\Omega }\mathrm{\Omega }_d\{(\tau ,x):0<\sigma T\}`$ with $`T`$ sufficiently large but finite. 
In conclusion $$\int _\mathrm{\Omega }d\sigma \,d^nx\,\sqrt{g(\sigma ,x)}\left|\phi (\sigma ,x)\right|^2e^{2\omega \sigma }=\int _{T_0}^{\infty }d\sigma \,e^{2\omega \sigma }\langle \widehat{E}_\sigma U_{\sigma ,\sigma ^{\prime }}f,\widehat{E}_\sigma U_{\sigma ,\sigma ^{\prime }}f\rangle _{L^2(R^n,\mu _\sigma )}<\infty ,$$ (14) or $$\int _{T_0}^{\infty }d\sigma \,e^{2\omega \sigma }\langle \widehat{E}_\sigma U_{\sigma ,\sigma ^{\prime }}f,\widehat{E}_\sigma U_{\sigma ,\sigma ^{\prime }}f\rangle _{L^2(R^n,\mu _\sigma )}<\infty ,$$ (15) which implies $$\int _{T_0}^{\infty }d\sigma \,\{e^{\omega \sigma }\|\widehat{E}_\sigma ^{1/2}U_{\sigma ,\sigma ^{\prime }}f\|_{N_\sigma }\}^2<\infty .$$ (16)
## IV The stationary case
We consider in this section that there is a coordinate system such that the metric $`g`$ is independent of the first coordinate. In this case, the spaces $`N_\sigma `$ and the operators $`\widehat{E}_\sigma ^{1/2}`$ are identical and will be denoted by $`N_0`$ and $`\widehat{E}_0^{1/2}`$ respectively. Thus, the operators $`U_{\sigma ,\sigma ^{\prime }}`$ are defined on the same Hilbert space and depend only on the difference $`\sigma -\sigma ^{\prime }`$: $`U_{\sigma ,\sigma ^{\prime }}=U_{\sigma -\sigma ^{\prime }}`$. The family of operators $`\left\{U_\tau \right\}_{\tau \in R_+}`$ forms a semigroup. Using the results about the existence and properties of generators of semigroups , we can obtain bounds directly on the transfer matrix $`U_\tau `$.
###### Proposition 8
The semigroup $`\left\{U_\tau \right\}_{\tau \in R_+}`$ is exponentially bounded: $`\|U_\tau \|_{N_0}<e^{-\tau \omega }`$ provided $`\omega <{\displaystyle \frac{m}{\sqrt{supg^{11}}}}`$.
###### Solution 9
Because we have found estimates on $`\widehat{E}_0^{1/2}U_\tau `$, we will consider the operators $`\stackrel{~}{U}_\tau =\widehat{E}_0^{1/2}U_\tau \left(\widehat{E}_0^{1/2}\right)^{*}`$, well defined on $`L^2(R^n,d\mu _0)`$. Using the fact that $`L^2(R^n,d\mu _0)`$ is dense in $`N_0`$ we can extend these operators by continuity to the space $`N_0`$. In this way we have built the semigroup $`\left\{\stackrel{~}{U}_\tau \right\}_{\tau \in R_+}`$, which satisfies the estimates of the preceding section: $$\int _{T_0}^{\infty }d\tau \,\{e^{\omega \tau }\|\stackrel{~}{U}_\tau \|_{N_0}\}^2<\infty ,$$ (17) for some $`T_0>0`$. So $`\left\{\stackrel{~}{U}_\tau \right\}_{\tau \in R_+}`$ is exponentially bounded and in consequence , if $`\stackrel{~}{K}`$ is its generator ($`\stackrel{~}{U}_\tau =e^{-\tau \stackrel{~}{K}}`$) the resolvent set of $`\stackrel{~}{K}`$ satisfies: $$\left\{z\in C\mid \mathrm{Re}z\in (-\infty ,\omega )\right\}\subset \rho \left(\stackrel{~}{K}\right).$$ (18) If $`K`$ is the generator of $`\left\{U_\tau \right\}_{\tau \in R_+}`$ then, on $`𝒟\left(K\right)`$, we have: $$K=\left(\widehat{E}_0^{1/2}\right)^{*}\stackrel{~}{K}\widehat{E}_0^{1/2}$$ (19) by using the reciprocal formula $$U_\tau =\left(\widehat{E}_0^{1/2}\right)^{*}\stackrel{~}{U}_\tau \widehat{E}_0^{1/2},$$ (20) valid on $`N_0`$. If the operator $$\left(\widehat{E}_0^{1/2}\right)^{*}\left(\stackrel{~}{K}-z\right)^{-1}\widehat{E}_0^{1/2}$$ (21) is well defined, even on a dense subset of $`N_0`$, then $`K-z`$ is invertible. From (20) it follows that, if $`\left(\stackrel{~}{K}-z\right)^{-1}`$ exists, then: $$\left(\stackrel{~}{K}-z\right)^{-1}\left(L^2(R^n,d\mu _0)\right)\subset L^2(R^n,d\mu _0),$$ (22) and in consequence $`\left(\widehat{E}_0^{1/2}\right)^{*}\left(\stackrel{~}{K}-z\right)^{-1}\widehat{E}_0^{1/2}`$ is well defined on the entire $`N_0`$. It follows that $`\rho \left(\stackrel{~}{K}\right)\subset \rho \left(K\right)`$, and this ends the proof.
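As a sanity check of Proposition 8 (our own remark, stated under the standard flat-space identifications of Nelson's Euclidean theory rather than taken from the text): for the flat metric on $`R^{n+1}`$ one has $`supg^{11}=1`$, the equal-time covariance is $`\widehat{E}_0=(2\mu )^{-1}`$ with $`\mu =(-\mathrm{\Delta }_x+m^2)^{1/2}`$, and the propagator $`U_{\sigma ,\sigma ^{\prime }}=j_\sigma ^{*}j_{\sigma ^{\prime }}`$ reduces to the familiar one-particle transfer matrix $`e^{-|\sigma -\sigma ^{\prime }|\mu }`$, so that
$$U_\tau =e^{-\tau \mu },\qquad \|U_\tau \|_{N_0}=e^{-\tau m}<e^{-\tau \omega }\text{ for every }\omega <m=\frac{m}{\sqrt{supg^{11}}},$$
i.e. the exponential bound of Proposition 8 is attained in this case with the decay rate equal to the mass.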
If the metric is symmetric under the transformation $`x^1x^1`$, the transfer matrix generator is self-adjoint and it can be considered as the Hamiltonian of the scalar field. ## V Application Our application is for the Euclidean case. The results concerning decoupling of different regions in quantum Euclidean fields are based primarily on estimates of $`e_{\mathrm{\Lambda }_1}e_{\mathrm{\Lambda }_2}_N`$, where $`\mathrm{\Lambda }_1`$, $`\mathrm{\Lambda }_2`$ are two disjoint regions. Let us consider the two dimensional case. The most difficult case is when $`\mathrm{\Lambda }_1`$, $`\mathrm{\Lambda }_2`$ are not convex and there is no possibility of drawing a straight line between the two subsets. We can sharpen the existing estimates for these cases by using the previous results. The idea is to make a change of coordinates such that, in the new coordinates, lines like $`\sigma =ct.`$ separate the two sets and are as close as possible to the boundaries of $`\mathrm{\Lambda }_1`$, $`\mathrm{\Lambda }_2`$. Then we can use the exponential bounds of the previous section to evaluate $`e_{\mathrm{\Lambda }_1}e_{\mathrm{\Lambda }_2}_N`$. More precisely: ###### Proposition 10 Let $`\mathrm{\Lambda }_1`$, $`\mathrm{\Lambda }_2`$ be two regions in $`R^2`$ such that the construction of the coordinates (24) is possible (after a rotation if necessary). Then $$e_{\mathrm{\Lambda }_1}e_{\mathrm{\Lambda }_2}_Ne^{m\left|\beta \alpha \right|\mathrm{min}\left|\mathrm{cos}\theta \right|}\text{,}$$ (23) where $`\theta `$ and $`\left|\beta \alpha \right|`$ will be defined during the proof. ###### Solution 11 Let $`(t,x)`$ denote the original coordinates in which the metric is diagonal. Let $`\gamma :RR^2`$ be a curve which separates $`\mathrm{\Lambda }_1`$, $`\mathrm{\Lambda }_2`$ and $`\gamma \left(0\right)=\left(t=0,x=0\right)`$. We define a new coordinate system $`(\sigma ,\xi )`$ by $$\{\begin{array}{c}t(\sigma ,\xi )=\sigma +\gamma ^1\left(\xi \right)\\ x(\sigma ,\xi )=\gamma ^2\left(\xi \right)\end{array}$$ (24) In the new coordinates, the metric is $$g^{}(\sigma ,\xi )=\left(\begin{array}{cc}1& \frac{d\gamma ^1}{d\xi }\\ \frac{d\gamma ^1}{d\xi }& \left(\frac{d\gamma ^1}{d\xi }\right)^2+\left(\frac{d\gamma ^2}{d\xi }\right)^2\end{array}\right)$$ (25) so we are in the conditions of the last section. Using the Markoff property, $$e_{\mathrm{\Lambda }_1}e_{\mathrm{\Lambda }_2}_N=e_{\mathrm{\Lambda }_1}e_\alpha e_\beta e_{\mathrm{\Lambda }_2}_Ne_\alpha e_\beta _N\text{,}$$ (26) where the lines $`\sigma =\alpha `$, $`\sigma =\beta `$ separate $`\mathrm{\Lambda }_1`$ and $`\mathrm{\Lambda }_2`$ exactly in the order they appear in the above relation (in the sense that $`\sigma =\alpha `$ separates $`\mathrm{\Lambda }_1`$ from $`\sigma =\beta `$, etc.). Further $$j_\alpha j_\alpha ^{}j_\beta j_\beta ^{}_N=j_\alpha U_{\alpha \beta }j_\beta ^{}_N=U_{\alpha \beta }_{N_0}.$$ (27) The element $`\left(g^{}\right)^{11}`$ is given by $`\left(g^{}\right)^{11}={\displaystyle \frac{1}{\mathrm{cos}^2\theta }}`$, where $`\theta `$ is the angle between the tangent to the curve $`\gamma `$ and the $`x`$ axis. Using the bounds of the last section we have $$e_{\mathrm{\Lambda }_1}e_{\mathrm{\Lambda }_2}_Ne^{m\left|\beta \alpha \right|\mathrm{min}\left|\mathrm{cos}\theta \right|}.$$ (28) By first performing a rotation, one can choose the best values for $`\left|\beta \alpha \right|`$ and $`\mathrm{min}\left|\mathrm{cos}\theta \right|`$. 
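To see how the estimate (28) is used in practice, the following sketch evaluates the bound for a hypothetical separating curve $`\gamma`$ (a gentle S-shaped curve standing in for the boundary between two non-convex regions). The curve, the positions $`\alpha`$, $`\beta`$ of the separating lines and the mass are invented for the illustration; only the geometry of the computation — the angle $`\theta`$ between the tangent of $`\gamma`$ and the $`x`$ axis, its minimal $`|\cos\theta|`$, and the resulting exponential bound — follows the proof above.

```python
import numpy as np

# Hypothetical separating curve gamma(xi) = (gamma1(xi), gamma2(xi)) in the
# (t, x) plane; here a gentle S-shape standing in for a non-convex boundary.
xi = np.linspace(-5.0, 5.0, 2001)
gamma1 = 0.8 * np.tanh(xi)           # t-component of the curve
gamma2 = xi                          # x-component of the curve

# Tangent direction and its angle theta with the x axis.
dg1 = np.gradient(gamma1, xi)
dg2 = np.gradient(gamma2, xi)
cos_theta = dg2 / np.sqrt(dg1**2 + dg2**2)
min_cos = np.min(np.abs(cos_theta))

m = 1.0                              # mass entering the exponential bound
alpha, beta = -0.9, 0.9              # illustrative positions of the lines sigma = const.
bound = np.exp(-m * abs(beta - alpha) * min_cos)

print(f"min|cos theta| = {min_cos:.4f}")
print(f"decoupling bound exp(-m |beta-alpha| min|cos theta|) = {bound:.4f}")
```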
## VI Conclusions Our primary goal was to define the transfer matrix for scalar fields on curved spaces and to investigate the basic spectral properties of its generator. Even though the generator is not self-adjoint in the general case, this approach allows us to investigate the problem with at least two new tools besides the method of Green functions. One is the perturbation theory of hypercontractive semigroups and the other is the adiabatic theorem. It is now straightforward to quantize the field by defining the Markoff field over the space $`N`$. For the stationary case that is also symmetric under time reflection (the static case), we think that one now has all the elements needed to construct the physical field (for example the one proposed in ) by following Nelson's reconstruction method and the holomorphic continuation of the transfer matrix. Note that, according to the results of , the holomorphic continuation of the transfer matrix to real time is still possible in the stationary case without symmetry under time reflection, as long as the spectrum of the generator belongs to the real axis. Of course, one has to check that the results of (systematized in ), which are the core of the reconstruction theorem, are still valid. For the general case, we think that the adiabatic theorem, and especially the adiabatic reduction theory , may play an important role in defining the physical quantum field by following Nelson's approach.
# Linear dependence of peak width in 𝜒⁢(𝐪,𝜔) vs Tc for YBCO superconductors \[ ## Abstract It is shown that the momentum space width of the peak in the spin susceptibility, Im$`\chi (𝐪,\omega )`$, is linearly proportional to the superconducting $`T_c`$: $`T_c=\mathrm{}v^{}\mathrm{\Delta }q`$ with $`\mathrm{}v^{}35meV`$Å. This relation is similar to the linear relation between incommensurate peak splitting and $`T_c`$ in LaSrCuO superconductors, as first proposed by Yamada et al ($`Phys.Rev.\mathrm{𝐁𝟓𝟕},6165,(1998)`$). The velocity $`\mathrm{}v^{}`$ is smaller than Fermi velocity or the spin-wave velocity of the parent compound and remains the same for a wide doping range. This result points towards strong similarities in magnetic state of YBCO and LaSrCuO. PACS numbers: 74.20.-z, 78.70.Nx, 61.12.-q \] Recent progress in neutron scattering in high-$`T_c`$ superconductors system, $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{6+\mathrm{x}}`$ (YBCO), allowed to gather a wide variety of inelastic neutron scattering data which reveal a nontrivial structure of the antiferromagnetic susceptibility $`\chi (𝐪,\omega )`$ in both the energy and momentum spaces\[1-24\]. Using these data one can try to understand what is the relation between the superconducting and magnetic properties of high-T<sub>c</sub> superconductors. A nontrivial feature that have attracted a lot of attention is the so-called resonance peak appearing in the superconducting state and seems to be directly related to the formation of the superconducting state\[1-12\]. Here, we will focus on the completely different feature of Im$`\chi (\omega ,𝐪)`$, namely on the off-resonance spectrum. Substantial interest has been recently devoted to that contribution as incommensurate peaks have been observed away from the resonance peak in YBCO<sub>6.6</sub>. However, it was observed so far in limited doping, energy and temperature ranges. Generally, in the normal state $`\chi (\omega ,𝐪)`$ is peaked at the commensurate wavevector $`(\pi ,\pi )`$. This contribution is then simply characterized by a q-width in momentum space, $`\mathrm{\Delta }q`$ (HWHM). Considering the neutron scattering data in YBCO for oxygen concentrations between $`x=0.450.97`$ with respective $`T_c`$ up to 93 K, we find a surprisingly simple linear relation between superconducting transition temperature $`T_c`$ and HWHM $`\mathrm{\Delta }q`$ for the whole doping range: $`T_c=\mathrm{}v^{}\mathrm{\Delta }q,\mathrm{}v^{}=35\mathrm{m}\mathrm{e}\mathrm{V}\mathrm{\AA }`$ (1) This observation is based on analysis of the data and we used no theory assumptions in extracting the velocity $`v^{}`$ from the data. The left-hand side of the above equation has dimension of energy, $`\mathrm{\Delta }q`$ has an inverse distance dimension, hence the coefficient relating them should have dimension of velocity. A priori, it is not clear that this relation implies the existence of the mode with such a velocity. We believe it does: the magnetic properties of the YBCO are described by two velocities $`v_{SW}`$ and $`v^{}`$. Below, we explain how the Eq.(1) is obtained. The spin susceptibility in the metallic state of YBCO is experimentally found to have a maximum at any energy at the commensurate in-plane wavevector $`q_{AF}=(\frac{h}{2},\frac{k}{2})`$ (with $`h,k`$ even integer), referred to as $`(\pi ,\pi )`$ \[1-12,16-24\]. This generic rule is found to be violated in two cases. 
First, in the underdoped YBCO<sub>6.6</sub>, Dai et al reported low temperature q-scans at $`\mathrm{}\omega `$= 24 meV which display well-defined double peaks. Recent measurements with improved q-resolution confirm this observation . However, this behavior is mostly observed at temperatures below $`T_C`$. In the normal state, a broad commensurate peak is restored in the same sample (unambiguously above 75 K). The other case where the spin susceptibility was not found maximum at $`(\pi ,\pi )`$ is above $``$ 50 meV in the weakly doped YBCO<sub>6.5</sub>. Dispersive quasi-magnons behavior is observed in this high energy range. Most likely, this is reminiscent of spin-waves observed in the undoped AF parent compound YBCO<sub>6</sub>. Therefore, concentrating on the low energy spin excitations (below 50 meV), Im$`\chi (𝐪,\omega )`$ is characterized in the normal state by a broad maximum at the commensurate wavevector. This justifies an analysis in terms of a single peak centered around $`q_{AF}`$. However, the shape in q-space is systematically found to be sharper than a Lorentzian shape usually assumed to describe such a disordered magnetic system. The neutron scattering function is then empirically found to be well accounted for by a Gaussian line-shape such as $$S(Q,\omega )=I_{max}(\omega )\mathrm{exp}\left(\mathrm{log}2\frac{(qq_{AF})^2}{\mathrm{\Delta }_q^2(\omega )}\right)$$ (2) where $`\mathrm{\Delta }q(\omega )`$ is the half width at half maximum (HWHM). In principle, $`\mathrm{\Delta }q(\omega )`$ is an increasing function of energy. However, a rather weak energy dependence is found for $`\mathrm{\Delta }q(\omega )`$ with only a slight increase with the energy. Furthermore, this energy dependence becomes less pronounced for the higher doping range. The situation is even more subtle for $`x`$ 0.6 as Im$`\chi (𝐪,\omega )`$ is characterized by two distinct (although inter-related) contributions: one occurs exclusively in the superconducting state, the resonance peak, the second one appears in both states and is characterized by a broad peak (around $``$ 30 meV). They mainly differ by their energy dependences as the resonance peak is basically resolution limited in energy. With increasing doping, the off-resonance spectrum is continuously reduced (becoming too weak to be measured in the overdoped regime YBCO<sub>7</sub>) whereas the resonant peak becomes the major part of the spectrum. The recent incommensurate peaks measured below the resonance peak in YBCO<sub>6.6</sub> confirms the existence of two contributions as the low energy incommensurate excitations cannot belong to the same excitation as the commensurate resonance peak. At each doping, the peak intensity at the resonance energy is characterized by a striking temperature dependence either resembling to an order parameter-like dependence for the higher doping range (x$`>`$0.9) or just displaying a marked kink at $`T_C`$ . Therefore, this mode is a novel signature of the superconducting state in the cuprates, and most likely is due to electron-hole pair production across the superconducting energy gap . In contrast, a much smoother temperature dependence is observed for the off-resonance spectrum. This ”normal” contribution has not received much attention so far. However, the knowledge of the non resonant peak in the normal state is important and is crucial for some proposed mechanisms for the high-$`T_C`$ superconductivity based on antiferromagnetism, e.g. . 
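In practice $`\mathrm{\Delta }q`$ is extracted by fitting the line shape (2) to constant-energy q-scans. The sketch below shows such a fit on synthetic counts; the scan range, amplitude and noise level are invented for the illustration and do not correspond to any of the data sets quoted here.

```python
import numpy as np
from scipy.optimize import curve_fit

q_AF = 0.5                                   # commensurate position in r.l.u.

def S(q, I_max, dq):
    """Gaussian line shape of Eq. (2); dq is the HWHM."""
    return I_max * np.exp(-np.log(2) * (q - q_AF) ** 2 / dq ** 2)

# Synthetic constant-energy q-scan (counts with noise), standing in for data.
rng = np.random.default_rng(0)
q = np.linspace(0.3, 0.7, 41)
counts = S(q, I_max=200.0, dq=0.09) + rng.normal(0.0, 8.0, q.size)

popt, pcov = curve_fit(S, q, counts, p0=(150.0, 0.05))
I_fit, dq_fit = popt
print(f"fitted HWHM Delta_q  = {dq_fit:.3f} r.l.u.")
print(f"fitted amplitude I_max = {I_fit:.1f}")
```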
The resonance peak is related to smaller q-values (and hence larger real space distance) as $`\mathrm{\Delta }q(\omega )`$ exhibits a minimum at the energy of the resonance peak. Furthermore, its q-width remains almost constant whatever the doping, $`\mathrm{\Delta }q^{reso}=0.11\pm 0.02`$ Å<sup>-1</sup>. Recent data agree with that conclusion. Applying the simple relation $`\xi =1/\mathrm{\Delta }q`$, it yields a characteristic length for the resonance peak, $`\xi 9`$ Å which might be related to the superconducting coherence length as the resonance peak is intimately linked to the high-$`T_C`$ superconductivity. In contrast, the ”normal” contribution is characterized by a doping dependent q-width which in terms of the Nearly AF Liquid approach would yield surprisingly small correlation length $`\frac{\xi }{a}12`$ (for x$``$0.6). Moreover, in all inelastic neutron scattering experiments (see e.g. ), the q-width is found temperature independent at any doping. This finding is especially clear for x$``$0.5 where the q-width at low temperature is small enough to increase upon heating. But, in contrast, no evolution is seen within error bars (about 10 %) up to room temperature. For larger doping, the low temperature q-width is already large and the AF intensity vanishes without any sign of q-broadening when increasing temperature. Therefore, these q-widths might be related to new objects essentially dependent on the doping level. To emphasize the precise value of the q-width, we have summarized in Table I the neutron data obtained over the last decade by few different groups. We consider only the low energy results for each oxygen content, where $`\mathrm{\Delta }q`$ is weakly energy dependent. The energy range of interest is indicated in Table I. The $`\mathrm{\Delta }q`$ value reported here has been mostly taken along the reciprocal direction. Other data have been also taken along the reciprocal direction which basically agree with the hypothesis of an isotropic q-width. $`\mathrm{\Delta }q`$ versus the oxygen content displays a double plateau shape which reminds the standard $`x`$ dependence of $`T_C`$ in YBCO. For the 90-K phase, $`\mathrm{\Delta }q^{HWHM}=0.22`$ Å<sup>-1</sup> yielding a very short AF correlation length within $`\mathrm{CuO}_2`$ planes $`\frac{\xi }{a}1.1`$. Summarizing the data in this Table, we plot both $`T_c(x)`$ and $`\mathrm{\Delta }q(x)`$ in Fig. 1, and find the linear relation between $`T_c`$ and $`\mathrm{\Delta }q`$ (Fig. 2) in the whole oxygen doping range, Eq. (1), where $`T_c`$ is the respective superconducting transition temperature at a given oxygen concentration $`x`$ and $`\mathrm{\Delta }q`$ is the corresponding half-width of the peak at $`(\pi ,\pi )`$ in $`\chi \mathrm{"}(𝐪,\omega )`$. The velocity $`\mathrm{}v^{}=35`$ meV.Å is about a factor of two larger than the equivalent velocity in LaSrCuO, $`\mathrm{}v_{214}^{}=20`$ meV.Å, inferred from $`T_c`$ vs $`\delta `$ plot (Fig. 3), see below. The Eq.(1) does imply that the magnetic correlations, as measured by $`\chi (𝐪,\omega )`$, and superconducting transition are closely related. The recent incommensurate splitting $`\delta `$ of the peak at $`(1/2+\delta ,1/2)`$ =$`(1/2,1/2+\delta )`$, observed by Dai et al and subsequently by Mook et al in YBCO<sub>6.6</sub> have been included in Fig. 2 (full square). Interestingly, the incommensuration $`\delta `$ by Mook et al fall on the same linear plot. First, we can make few comments on what one can extract from such a simple relationship as Eq. 
(1), regardless of the particular mechanism responsible for $`v^{}`$: a) The proportionality between the width of the peak and the critical temperature implies that there is a characteristic velocity $`\mathrm{}v^{}`$ which is the same (within experimental resolution) for a wide range of oxygen doping in YBCO. b) The velocity $`\mathrm{}v^{}`$ is two orders of magnitude smaller than the typical Fermi velocity $`\mathrm{}v_F1`$ eV.Å in these compounds . This is perhaps not surprising, as we are considering the magnetic response, where localized Cu spins provide the main contribution. c) More importantly, $`v^{}<<v_{SW}`$: it is about an order of magnitude smaller than the spin-wave velocity, $`\mathrm{}v_{SW}0.65`$ eV.Å, of the parent compound, but also much smaller than the spin velocity in the metallic state, $`\mathrm{}v_{spin}0.42`$ eV.Å. This is a nontrivial fact. It may suggest that some magnetic soft mode is present. We do not presently have a model to explain the data. On the other hand, on general grounds, for any approach based on simple spin-wave theory one would expect the typical spin-wave velocity to characterize the width in $`\chi ^{\prime \prime }(𝐪,\omega )`$. d) If, as we are proposing, the characteristic velocity $`\mathrm{}v^{}`$ does correspond to some propagating or diffusive mode, then there should be a way to observe it directly in other experiments. Now we would like to discuss the possible origin of $`\mathrm{}v^{}`$. It is likely caused by some phase-fluctuation mode associated with the slow motion of density excitations. These could be caused by ”stripe” fluctuations. Recent tunneling and photoemission studies indicate that the gap in the SC state increases as Tc decreases on underdoping . The $`T_c`$ would then be determined by phase fluctuations, as emphasized by Emery and Kivelson . Recently, based on a phase-fluctuation model, a similarly small velocity ($`60meV\AA `$ for LaSrCuO) was obtained by Castro Neto . It is therefore natural that the phase-mode velocity determines the superconducting temperature. We interpret the existence of the second magnetic velocity $`v^{}`$ as a sign of closeness to a quantum critical point (QCP), controlled by some density instability with strong coupling to the spin channel. The antiferromagnetic insulating compound with characteristic $`v_{SW}`$ determines the critical point at zero doping, where the transition into the 3D antiferromagnetic state occurs at finite temperature. The second critical point with characteristic $`v^{}`$ along the doping axis is close to the optimally doped compound and might be determined by some density wave or ”stripe” instability. Based on the neutron scattering data for the LaSrCuO system, this point was emphasized by Aeppli et al. . Within a simple model, say the t-J model, the energy scales are set by $`t`$, which determines $`v_F`$, and by $`J`$, which determines $`v_{SW}`$. Hence one would generally not expect any excitations with velocity $`v^{}`$ in this model. One possibility for generating a new energy scale in the problem is to allow some (microscopic) inhomogeneities. Phase separation into hole-rich and antiferromagnetic regions, with fluctuations of the boundaries between regions, would occur with some soft velocity that might be related to $`v^{}`$. Finally, we would like to relate the above discussion to the other well studied system: LaSrCuO. Inelastic neutron scattering data by Yamada et al on LaSrCuO (La214) compounds show the existence of the incommensurate peaks at $`(\pi \pm \delta ,\pi )`$ and $`(\pi ,\pi \pm \delta )`$ . 
Plotted vs $`\delta `$, $`T_c(\delta )`$ was found to be a linear function of $`\delta `$ over a wide range of Sr doping (see Fig. 3), as appears in . Using the same reasoning as for Eq. (1), from the data we find the characteristic velocity $$T_c=\mathrm{}v_{214}^{}\delta ,\mathrm{}v_{214}^{}=20meV.\AA $$ (3) The velocity thus inferred is much smaller than the Fermi velocity of La214, $`\mathrm{}v_F10.5`$ eV.Å, and smaller than the measured spin-wave velocity $`\mathrm{}v_{SW}0.85`$ eV.Å. We should again emphasize that the linearity of $`T_c`$ vs $`\delta `$ is an experimental fact. The coefficient relating the energy scale $`T_c`$ to the inverse length scale $`\delta `$ has the dimension of velocity. This result is similar to the small velocity we find from the $`\mathrm{\Delta }q`$ vs $`T_c`$ plots in YBCO, except that in YBCO the velocity $`v^{}`$ is about a factor of two larger than in LaSrCuO. In conclusion, we find that the body of neutron scattering data on YBCO for a wide range of oxygen doping allows a simple linear relation between the width $`\mathrm{\Delta }q`$ of the “normal”, i.e. off-resonance, peak and the corresponding $`T_c`$ of the sample. The velocity thus inferred, $`\mathrm{}v^{}`$ 35 meV Å, is anomalously small compared to the known spin-wave and Fermi velocities for these compounds. We suggest that this velocity indicates the existence of some new mode in these materials, and that this mode is closely related to the formation of the superconducting state. We are grateful to G. Aeppli, L.P. Regnault, R. Silver, Y. Sidis and J. Tranquada for useful discussions. This work was supported by the US DOE.
# The Pixon Method of Image Reconstruction ## 1. Introduction Optimal extraction of the underlying quantities from measured data requires the removal of measurement defects such as noise and limited instrumental resolution. When the underlying quantity is an image, this process is known as image reconstruction (sometimes called image restoration, if the data are also in the form of an image). The original Pixon method (Piña & Puetter 1993; Puetter & Piña 1993; Puetter 1995, 1997) was developed to eliminate problems with existing image reconstruction techniques, particularly signal-correlated residuals and the production of spurious sources. This was followed by the accelerated Pixon method that vastly increased the computational speed (Yahil and Puetter 1995, unpublished). Recently, a quick Pixon method was developed that is even faster, at the expense of some photometric inaccuracy for low signal-to-noise-ratio features (Puetter and Yahil 1998, unpublished). Nevertheless, the quick Pixon method provides excellent results for a wide range of imagery and, with special-purpose hardware now under design, is capable of real-time video image reconstruction. Since its inception, the Pixon method in its various forms has been applied to a wide variety of astronomical, surveillance, and medical image reconstructions (spanning all wavelengths from $`\gamma `$-rays to radio), as well as to spectroscopic data. In all cases tested so far, both by the authors and by a variety of other workers, the Pixon method has proved superior in quality to all other methods and computationally much faster than its best competitors. A patent for the Pixon method is pending, restricting its commercial use. For individual scientific purposes, the original Pixon method is freely available in IDL and C++ from the San Diego Supercomputer Center. The accelerated and quick Pixon methods are sold by our company, Pixon LLC, but special arrangements for their free use in selected scientific projects can be made. The next sections discuss image reconstruction in general (§2.), describe the Pixon method (§3.), and give some examples (§4.). The HTML and PDF versions of this paper contain numerous additional examples via clickable hyperlinks. The complete set of public image reconstructions, including videos, can be seen on our Web site at http://www.pixon.com. ## 2. Image Reconstruction Methods For data taken with linear detectors, image reconstruction often becomes a matter of inverting an integral relation of the form $$D(𝐱)=\mathrm{𝐝𝐲}H(𝐱,𝐲)I(𝐲)+N(𝐱).$$ (1) Here $`D`$ are the data, $`I`$ is the sought underlying image model, $`H`$ is the point-spread function (PSF) due to instrumental and possibly atmospheric blurring, and $`N`$ is the noise. To avoid confusion, we use the term image to refer exclusively to the true underlying image or its model. Contrary to common parlance, the data are never called the image. We clearly distinguish between abstract image space and real data space, with the PSF transforming from the former to the latter. If the PSF is only a function of the displacement, $`H(𝐱,𝐲)=H(𝐱𝐲)`$, then the integral in equation (1) becomes a convolution, but in general the PSF can vary across the field. Note that the data need not have the same resolution as the image, or even the same dimensionality. For example, the Infrared Astronomical Satellite (IRAS) provided 1–D scans across the 2–D sky, and tomography data consist of multiple 2–D projections of 3–D images. 
Another common case is of data consisting of multiple, dithered, exposures of the same field, which all need to be modeled by a single image, possibly with PSFs that vary from one exposure to another. Image reconstruction differs from standard solutions of integral equations due to the noise term, $`N`$, whose nature is only known statistically. Methods for solving such an equation fall under two broad categories. (1) Direct methods apply explicit operators to the data to provide estimates of the image. These methods are often linear, or have very simple nonlinear components. Their advantage is speed, but they typically amplify noise, particularly at high frequencies. (2) By contrast, indirect methods model the noiseless image, transform it only forward to provide a noise-free data model, and fit the parameters of the image to minimize the residuals between the real data and the noise-free data model. The advantage of indirect methods is that noise is supposedly excluded (but see §2.3.). Their disadvantage is the required modeling of the image. If a good parametric form for the image is known a priori, the result can be superb. If not, either the derived image badly fits the data or, conversely, it overfits the data, interpreting noise as real features. Indirect methods are typically significantly nonlinear and much slower than direct methods. The Pixon method is a nonparametric, indirect, reconstruction method. It avoids the pitfalls of other indirect methods, while achieving a computational speed on a par with direct methods. To appreciate its advantages we therefore first provide a brief description of some competing direct and indirect methods. ### 2.1. Direct Methods In the case of a position-independent PSF, naive image deconvolution is obtained by inverting the integral equation (1) in Fourier space, ignoring the noise: $$\stackrel{~}{I}(𝐤)=\stackrel{~}{D}(𝐤)/\stackrel{~}{H}(𝐤),$$ (2) where the tilde designates Fourier transform, and $`𝐤`$ is the spatial wavenumber. This method amplifies noise at high frequencies, since $`H(𝐤)`$ is a rapidly declining function of $`|𝐤|`$, while $`D(𝐤)`$, which includes high-frequency noise, is not. To minimize noise amplification, nonlinear filtering is often applied to the data prior to Fourier inversion. Common filters are those due to Wiener (1949; see also Press et al. 1992) and thresholding of wavelet transforms (e.g., Donoho & Johnstone 1994; Donoho 1995). The Wiener filter has the advantage of being applied in Fourier space. Wavelet thresholding requires a wavelet transform, thresholding, and a back wavelet transform, all followed by Fourier inversion. Demonstrations of the superior performance of the Pixon method relative to Wiener-filtered Fourier inversions can be seen on our Web site in the reconstruction of the Lena image and an aerial view of New York City. ### 2.2. Parametric Least-Squares Fit The simplest indirect image reconstruction is a least-squares fit of a parametric model with a small number of parameters compared to the number of data points. Originally due to Gauss, this method is always superior to other methods, provided that the image can be so modeled. It is equivalent to a maximum-likelihood optimization in which the residuals are assumed to be Gaussian distributed. All models within the restricted parameterization are considered equally likely, and the likelihood function maximized is $`P(D|I)`$, the conditional probability of the data, given the model image. 
For this reason, the method is also known as maximum a posteriori probability (MAP) image reconstruction. ### 2.3. Nonnegative Least-Squares Fit When no parametric model of the image is known, the number of image model parameters can quickly become comparable to, or exceed, the number of data points. In this case, a MAP solution becomes inappropriate. For example, if the number of points in the image model equals the number of data points, then the nonsingular nature of the linear integral transform in equation (1) assures that there is a solution for which the data, including the noise, are exactly modeled with zero residuals. This is clearly the same poor solution, with all its noise amplification, obtained by the naive Fourier deconvolution. The above example shows that an unrestricted indirect method is no better at controlling noise than a direct method, and therefore the model image must be restricted in some way. The indirect methods described below all restrict the image model and differ only in the specifics of image restriction. A simple restriction is to constrain the model image to be positive. Since even a delta-function image is broadened by the PSF, it follows that the exact inverse of any noisy data with fluctuations on scales smaller than the PSF must be both positive and negative. By preventing the image from becoming negative, the noise-free data model cannot fluctuate on scales smaller than the PSF, which is equivalent to smoothing the data on the scale of the PSF. While smoothing over the scale of the PSF helps to reduce noise fitting, it does not go far enough. The Pixon method (§3.) smoothes further where possible, and is therefore able to eliminate noise fitting on larger scales. Demonstrations of its superior performance relative to nonnegative least squares can be seen on our Web site in the reconstruction of $`\gamma `$-ray imaging in the direction of Virgo, as well as in the Lena and New York City reconstructions. ### 2.4. Maximum-Entropy Method Bayesian methods go a step further by assigning explicit a priori probabilities to different models and then maximizing the joint probability of the model and the data: $$P(ID)=P(D|I)P(I)=P(I|D)P(D).$$ (3) The model probability function, $`P(I)`$, is known as the prior. For example, the nonnegative least-squares method (§2.3.) is a Bayesian method with all nonnegative models assigned equal nonzero probability and all other models zero probability. The rationale behind Bayesian methods is to optimize $`P(I|D)`$, the conditional probability of the image given the data. This probability is proportional to the joint probability $`P(ID)`$, since the data are fixed, and $`P(D)`$ can be viewed as a constant. However, it is important to recognize that, unlike $`P(D|I)`$ which depends on the instrumental response function and noise statistics, $`P(I|D)`$ is not known from first principles. In practice, therefore, computing $`P(ID)`$ requires the specification of a completely arbitrary prior, $`P(I)`$. The maximum-entropy method (MEM) is the most popular Bayesian image reconstruction technique. It uses the prior $$P(I)=\mathrm{exp}\left(\alpha \underset{i}{}I_i\mathrm{ln}I_i\right),$$ (4) where the sum is over the image pixels, and $`\alpha `$ is an adjustable parameter designed to strike a balance between image smoothness and goodness of fit. (See Gull 1989 and Skilling 1989 for a discussion of a “natural” choice for $`\alpha `$.) 
The sum in equation (4) approximates the information entropy of Shannon (1948), which is maximized for a homogeneous image. By favoring a flat image, MEM therefore eliminates structure not required by the data and suppresses noise fitting. The fundamental difficulty with this approach is that the MEM prior is a global constraint. (Specifically, the prior is invariant under random scrambling of the pixels.) MEM therefore enforces an average smoothness on the entire image and does not recognize that the density of information content in the image varies from location to location. Hence, MEM must necessarily oversmooth the image in some regions and undersmooth it in others. More sophisticated MEM schemes try to remedy this situation by applying separate MEM priors in different parts of the image, or by modifying the logarithmic term to $`\mathrm{ln}(I_i/M_ie)`$, where $`M`$ is some preassigned model and $`e`$ is the base of the natural logarithm (Burch, Gull, & Skilling 1983), but there is additional arbitrariness in the choice of $`M`$. By contrast, the Pixon method (§3.) adapts itself to the distribution of information content in the image. Demonstrations of its superior performance relative to MEM are given below (§4.) and on our Web site in a reconstruction of hard X-ray observations of a solar flare. ## 3. The Pixon Method Unlike Bayesian methods, the Pixon method (Piña & Puetter 1993; Puetter & Piña 1993; Puetter 1995, 1997) does not assign explicit prior probabilities to image models. Instead, it restricts them by seeking minimum complexity (Solomonoff 1964; Kolmogorov 1965; Chaitin 1966), which not only enables an efficient representation of the image but is also the best way to separate signal from noise. In simple terms, the Pixon method implements the principle of Ockham’s Razor to select the simplest plausible model of the image. Clearly, if the signal in the image can be adequately represented by a minimum of $`P`$ parameters, adding another parameter only serves to introduce artifacts by fitting the noise. Conversely, the removal of a parameter results in an improper representation of the image, since adequate fits to the image require a minimum of $`P`$ parameters. While few would dispute that a model with minimum complexity (also called algorithmic information content) is optimal, in practice it is impossible to find such a model for any but the most trivial problems. For example, one might try to model an image as the smallest number of contiguous patches of homogeneous intensity that still adequately fit the data. While there clearly is such a solution, it is quite another matter to find it among the combinatorially large number of possible patch patterns. And we have not even begun to consider patches that are not completely homogeneous. The Pixon method overcomes this difficulty in the same practical spirit in which other combinatorial problems have been solved, such as the famous traveling salesman problem (e.g., Press et al. 1992). One finds an intelligent scheme in which complexity is reduced significantly in a manageable number of iterations. After that, the decline in complexity per iteration drops sharply, and the process is halted. The solution found in this manner may not have the ultimate minimum complexity, but it is already so superior to other models that it is worth adopting. The Pixon method minimizes complexity by smoothing the image model locally as much as the data allow, thus reducing the number of independent patches, or Pixon elements, in the image. 
Formally, the image is written as an integral over a pseudoimage $$I(𝐲)=\mathrm{𝐝𝐳}K(𝐲,𝐳)\varphi (𝐳),$$ (5) with a positive kernel function, $`K`$, designed to provide the smoothing. As in the case of the nonnegative least-squares fit, requiring the pseudoimage, $`\varphi `$, to be positive eliminates fluctuations in the image, $`I`$, on scales smaller than the width of $`K`$. Importantly, this scale is adapted to the data. At each location, it is allowed to increase in size as much as possible without violating the local goodness of fit (GOF). Where the kernel is a delta function, the Pixon element spans one pixel, and the Pixon method reduces to a nonnegative least-squares fit, with the noise-free data model smooth on the scale of the PSF. Where the data allow smoothing on larger scales, however, the Pixon elements become larger. As a result, the overall number of Pixon elements in the image drops, and complexity is reduced. Complexity can be reduced not only by having kernel functions of different sizes to allow for multiresolution, but also by a judicious choice of their shapes. For example, circularly symmetric kernels, which might be adequate for the reconstruction of most astronomical images, are not the most efficient smoothing kernels for images with elongated features, e.g., an aerial photograph of a city. Altogether the choice of kernels is the language by which the image model is specified, which should be rich enough to characterize all the independent elements of the image. We have found, in practice, that circular kernels spanning 3–5 octaves in size, with 2–4 kernels per octave, are adequate for most image reconstructions. Additional elliptical kernels are needed only for images with clearly elongated features. The Pixon reconstruction consists of a simultaneous search for the broadest possible kernel functions and their associated pseudoimage values that together provide an adequate fit to the data. In practice, the details of the search vary depending on the flavor of the Pixon method used. Generally, however, one alternately solves for the pseudoimage given a Pixon map of kernel functions and then attempts to increase the scale sizes of the kernel functions given the current image values. The number of iterations required varies depending on the complexity of the image, but for most problems a couple of iterations suffice. The essence of the Pixon method is the imposition of the local criteria by which the kernel functions are chosen. For each pixel in pseudoimage space, the combination of the Pixon kernel and the PSF define a footprint in data space. We accept the largest Pixon kernel whose GOF and signal-to-noise ratio (SNR) within that footprint pass predetermined acceptance conditions set by the user. If no kernel has adequate GOF we assign a delta-function kernel, provided that the SNR for its footprint is high enough. If the SNR also fails to meet our condition, no kernel is assigned. The strength of the Pixon method is in its rejection of features that do not meet strict statistical acceptance criteria. Precisely because of this conservatism, which results in a significant lowering of the noise floor, the Pixon reconstruction is able to find weak but significant features missed by other methods and to resolve all features better. Sensitivity is often improved by an order of magnitude or more relative to competing methods and resolution by a factor of a few. 
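The kernel-selection step described above can be caricatured in one dimension as follows. This is only a cartoon of the idea — at each pixel, keep the widest kernel that still passes a local goodness-of-fit test — and not the actual Pixon algorithm: it smoothes the data directly instead of solving for a pseudoimage as in equation (5), it ignores the PSF, and the 1.5-sigma acceptance threshold is an arbitrary choice made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy data: a sharp feature plus a broad one, plus noise of known sigma.
x = np.arange(256)
truth = (40.0 * np.exp(-0.5 * ((x - 60) / 2.0) ** 2)
         + 10.0 * np.exp(-0.5 * ((x - 180) / 25.0) ** 2))
sigma = 1.0
data = truth + rng.normal(0.0, sigma, x.size)

def smooth(signal, width):
    """Convolve with a normalized Gaussian kernel of the given width (pixels)."""
    k = np.arange(-3 * int(width), 3 * int(width) + 1)
    ker = np.exp(-0.5 * (k / width) ** 2)
    ker /= ker.sum()
    return np.convolve(signal, ker, mode="same")

# Kernel selection: at each pixel keep the *largest* width whose smoothed
# model still fits the data locally at about the 1.5-sigma level.
widths = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
chosen = np.full(x.size, widths[0])
for w in widths:
    model = smooth(data, w)
    local_chi2 = smooth((data - model) ** 2, w)   # mean squared residual in the footprint
    ok = local_chi2 < (1.5 * sigma) ** 2
    chosen[ok] = w                                # wider kernels overwrite where allowed

print("selected width at the sharp feature (x=60)  :", chosen[60])
print("selected width at the broad feature (x=180) :", chosen[180])
print("selected width in the flat background (x=10):", chosen[10])
```

The output of the sketch makes the qualitative point of the text: narrow kernels survive only where the data demand fine structure, intermediate kernels describe the broad feature, and the featureless background is absorbed into the widest kernels, so the total number of independent elements drops.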
Note that the SNR required by the Pixon method is not per pixel in the data, but the overall SNR in the data footprint of the Pixon kernel. The Pixon method is just as powerful in detecting large, low-surface-brightness features as small features with higher surface brightness. Acceptance or rejection of the feature is based in both cases on the statistical significance demanded by the user. Demonstrations of the ability of the Pixon method to reconstruct images with low SNR can be seen in the mammogram example (Figure 3 below) and in the reconstruction of $`\gamma `$-ray imaging in the direction of Virgo on our Web site. Finally, note that with a very terse representation that only has a small number of nonzero pseudoimage values, the image is also represented in a very compressed form. In the field of data compression one normally distinguishes between strong but lossy versus moderate and nonlossy compression. The Pixon method defines a new type of noiseless compression. The signal is preserved and restorable, while the noise is eliminated. This mode of compression is ideal, since compression can easily reach a factor of 100 for a typical image, yet the only loss is that of unwanted noise. A compressed form of the code is now under design; current codes, however, are primarily built for accuracy and speed and do not yet achieve such high compression. ## 4. Examples of Pixon Reconstructions The Pixon method has now been used in a variety of applications by the authors and others (see, Puetter 1995, 1997 and references therein). In this section we present a few examples of Pixon reconstructions for a variety of applications. Additional examples can be found in the brochure and figures on our Web site. ### 4.1. A Synthetic Data Set The example presented in Figure 1 (or its color version on our Web site) provides one of the best examples of comparative results of MEM and the Pixon method for a synthetic data set, i.e. a data set with known properties. The MEM algorithm used was MEMSYS 5, the most current release of the MEMSYS algorithms developed by Gull and Skilling (Gull & Skilling 1991). The MEMSYS reconstruction was performed by Nick Weir, and was enhanced by his multichannel correlation method (Weir 1991). As can be seen, the Pixon reconstruction is superior to the multichannel MEMSYS result, and is free of the low-level spurious sources and signal-correlated residuals evident in the MEMSYS reconstruction. ### 4.2. IRAS Imaging of M51 In Figure 2 we present comparisons of reconstructed images from 60$`\mu m`$ IRAS scans of the interacting galaxy pair M51. This data set was used in an image reconstruction contest at the 1990 MaxEnt Workshop (Bontekoe 1991). As can be seen, the Lucy-Richardson and maximum-correlation (also known as HIRES) reconstructions fail to reduce image spread in the cross-scan direction, i.e., the rectangular signature of the $`1.^{}5\times 4.^{}75`$ detectors is still clearly evident. They also do not reconstruct even gross features such as the “hole” (black region) in the emission north of the nucleus—this hole is clearly evident in optical images of M51. The MEMSYS 3 reconstruction is significantly better, recovers the hole, and resolves the NE and SW arms of the galaxy into discrete sources. However, the Pixon result is clearly superior. In fact, its sensitivity is a factor of 200 higher than that of MEMSYS 3, and its linear spatial resolution is improved by a factor of 3. 
The reality of its minute details can be verified by comparing with images at other wavelengths. ### 4.3. X-Ray Mammography Presented in Figure 3 is a Pixon reconstruction of a standard phantom from the American College of Radiology. Here a fiber with a 400 micron diameter was embedded in a piece of material with X-ray absorption properties similar to the human breast. This particular example was selected since it has a low SNR. A family of elliptical Pixon kernel functions was used for the reconstruction. The Pixon method easily detects the presence of the fiber. In some locations, however, the SNR of the fiber is so poor that statistically significant signal is absent. Hence, no detection can be made by the Pixon method. #### Acknowledgments. This work was was supported in part by NASA grants NAG53944 (to RCP) and AR-07551.01-96A (to AY). ## References Bontekoe, T. R., 1991, in Maximum Entropy and Bayesian Methods, eds. W. T. Grady Jr. & L. H. Schick, (Dordrecht: Kluwer Academic Publishers), 319 Bontekoe, T. R., Kester, D. J. M., Price, S. D., de Jonge, A. R. W., & Wesselieus, P. R., 1991, A&A, 248, 328 Burch, S. F., Gull, S. F., & Skilling, J., 1983, Comp. Vis., Graphics, & Im. Process., 23, 113. Chaitin, G. J., 1966, J. Ass. Comput. Mach., 13, 547 Donoho, D. L., 1995, IEEE Trans. Inf. Theory, 41, 613 $`|`$ Stanford report Donoho, D. L. & Johnstone, I., M., 1994, Comptes Rendus de L’Acadamie Des Sciences Serie I-Mathematique, 319, 1317 $`|`$ Stanford report Gull, S. F., 1989, in Maximum Entropy and Bayesian Methods, ed. J. Skilling, (Dordrecht: Kluwer Academic Publishers), 53 Gull, S. F., & Skilling, J., 1991, MemSys5 Quantified Maximum Entropy User’s Manual Kolmogorov, A. N., 1965, Inf. Transmission, 1, 3 Piña, R. K. & Puetter, R. C., 1993, PASP, 105, 630 Press, W. H., Teukolsky, S. A., Vetterling, W. Y., & Flannery, B. P, 1992, Numerical Recipes in Fortran, Second Edition, (Cambridge: Cambridge University Press) Puetter, R. C., 1994, Proc. S.P.I.E., 2302, 112 Puetter, R. C., 1995, Int. J. Image Sys. & Tech., 6, 314 Puetter, R. C., 1997, in Instrumentation for Large Telescopes, eds. J. M. Rodriguez Espinosa, A. Herrero, & F. Sanchez, (Cambridge: Cambridge University Press), 75 Puetter, R. C. & Piña, R. K., 1993, Proc. S.P.I.E., 1946, 405 Puetter, R. C., & Piña, R. K., 1994, in Science with high Spatial Resolution Far-Infrared Data, (Pasadena: Jet Propulsion Laboratory), 95-4, 61 Rice, W., 1993, AJ, 105, 67 Shannon, C. E., 1948, in Key Papers in the Development of Information theory, ed. D. Slepian, D., 1974, (New York, IEEE Press) Skilling, J., 1989, in Maximum Entropy and Bayesian Methods, ed. J. Skilling, (Dordrecht: Kluwer Academic Publishers), 45 Solomonoff, R., 1964, Inf. Control, 7, 1 Weir, N., 1991, in Proc. of the ESO/ST-ECF Data Analysis Workshop, eds. P. Grosbo & R. H. Warmels, 115 Wiener, N. 1949, Extrapolation and Smoothing of Stationary Time Series (New York: Wiley)
# The theoretical significance of 𝐺*footnote **footnote *Talk given at the conference “The Gravitational Constant: Theory and Experiment 200 years after Cavendish” (London, 23-24 November 1998); to appear in Measurement, Science and Technology, 1999. ## Abstract The quantization of gravity, and its unification with the other interactions, is one of the greatest challenges of theoretical physics. Current ideas suggest that the value of $`G`$ might be related to the other fundamental constants of physics, and that gravity might be richer than the standard Newton-Einstein description. This gives added significance to measurements of $`G`$ and to Cavendish-type experiments. Cavendish’s famous experiment , carried out in 1798 using an apparatus conceived by the Reverend John Michell, gave the first accurate determination of the strength of the gravitational coupling. We refer to the beautiful review of Everitt for an authorative discussion of this classic experiment. At the 95% confidence level, the value obtained by Cavendish for the mean density of the Earth was $`5.48\pm 0.10`$ (the modern value being 5.57) . This corresponds to a fractional precision of 1.8%. As is discussed in detail in the other contributions to this Cavendish bicentennial conference, our present knowledge of the value of Newton’s constant $`G`$ seems to be uncertain at the $`10^3`$ level. This contrasts very much with our knowledge of other fundamental constants of physics (e.g. $`\alpha _{\mathrm{e}.\mathrm{m}.}=e^2/(4\pi \mathrm{}c)`$, and particle masses) which are known with a part in a million precision, or better. \[One can note, however, that the strong coupling constant, at the $`Z`$-boson mass scale, $`\alpha _s(m_Z)`$ is known only with a fractional precision of 1.7% .\] The purpose of this contribution is to discuss briefly the significance of the value of $`G`$, and more generally of Cavendish experiments, within the current framework of theoretical physics. Let us immediately note that, as any other fundamental constant of physics, $`G`$ should be measured with state-of-the-art precision, even if the significance of its value within the framework of physics were unknown (or small). But the main point I wish to make here is that many theoretical developments of twentieth’ century physics suggest that there is an especially deep significance attached to the value of $`G`$. This gives, therefore, all the more importance to measurements of $`G`$. \[Though, as we shall see, our current theoretical understanding is incomplete and cannot yet make full use of any precise value of $`G`$.\] As a starting point, let us remind the reader that the strength of gravity is strikingly smaller than that of the three other known interactions. Indeed, in quantum theory, the strengths of the electromagnetic, weak and strong interactions are measured by three dimensionless numbers $`\alpha _1`$, $`\alpha _2`$, $`\alpha _3`$ which are smaller but not much smaller than unity. Here, $`\alpha _ig_i^2/(4\pi \mathrm{}c)`$, with $`i=1,2,3`$, where $`g_1`$, $`g_2`$, and $`g_3`$ are the coupling constants of the gauge groups $`U(1)`$, $`SU(2)`$ and $`SU(3)`$, respectivelyHere, $`\alpha _1(5/3)\alpha _Y`$ with $`Y`$ being the weak hypercharge $`(Y(e_R)=1)`$. The usual fine-structure constant $`\alpha =e^2/(4\pi \mathrm{}c)1/137`$ corresponds to a combination of $`\alpha _Y`$ and $`\alpha _2`$.. The values of the $`\alpha _i`$’s depend on the energy scale at which they are measured, i.e. 
they depend on the distance scaleWe recall that in relativistic quantum mechanics (using $`c=1`$) an energy scale $`E`$ corresponds to a distance scale $`L_E=\mathrm{}/E0.2`$ fermi $`(E/\mathrm{Ge}V)^1`$. on which the interaction is being probed. \[For instance, the strong coupling constant $`\alpha _3`$, measuring the strength of Quantum Chromodynamics (QCD), is of order unity at the energy scale $`\mathrm{\Lambda }_{\mathrm{QCD}}200\mathrm{Me}V`$, and becomes small at high energy scales, i.e. at very short distances.\] The numerical values of the $`\alpha _i`$’s at the energy scale defined by the mass of the $`Z`$ boson, $`m_Z91\mathrm{Ge}V`$, are $$\alpha _1(m_Z)=\frac{1}{58.97\pm 0.08},\alpha _2(m_Z)=\frac{1}{29.61\pm 0.13},\alpha _3(m_Z)=\frac{1}{8.3\pm 0.5}.$$ (1) When the energy scale $`\mu `$ increases, $`\alpha _1(\mu )`$ and $`\alpha _2(\mu )`$ increase, while $`\alpha _3(\mu )`$ decreases. It seems (if one makes extra assumptions about the existence and spectrum of new (supersymmetric) particles at higher energies) that the three gauge couplings unify to a common numerical value $$\alpha _1(m_U)\alpha _2(m_U)\alpha _3(m_U)\alpha _U\frac{1}{25}$$ (2) at a very high energy scale $$m_U2\times 10^{16}\mathrm{Ge}V.$$ (3) By contrast with the numerical values (1) or (2), the corresponding “gravitational fine-structure constant”, $`\alpha _g(m)Gm^2/\mathrm{}c`$, obtained by noting that the gravitational interaction energy $`Gm^2/r`$ is analogous to the electric one $`e^2/(4\pi r)`$, is strikingly small, $`\alpha _g(m)10^{40}`$, when $`m`$ is taken to be a typical particle mass. Indeed, $$\alpha _g(m)\frac{Gm^2}{\mathrm{}c}6.707\times 10^{39}\left(\frac{m}{\mathrm{Ge}V}\right)^2.$$ (4) For a long time, this enormous numerical difference was viewed as a challenge. At face value, it seems to imply that gravity has nothing to do with the three other interactions. However, some authors tried to find a natural origin for numbers as small as (4). In particular, Landau conjectured that the very small value of $`\alpha _g`$ might be connected with the value of the fine-structure constant $`\alpha =[137.0359895(61)]^1`$ by a formula of the type $`\alpha _gA\mathrm{exp}(B/\alpha )`$, with $`A`$ and $`B`$ being numbers of order unity. More recently, ’t Hooft resurrected this idea in the context of instanton physics, where such exponentially small factors appear naturally. He suggested that the value $`B=\pi /4`$ was natural, and he considered the case where $`m=m_e`$, the electron mass. It was noted (for fun) in Ref. that the simple-looking value $`A=(7\pi )^2/5`$ happens to give an excellent agreement with the experimental value of $`G`$. Namely, if we define (for fun) a simple-looking “theoretical” value of $`G`$ by $$\alpha _g^{\mathrm{theory}}(m_e)\frac{G^{\mathrm{theory}}m_e^2}{\mathrm{}c}\frac{(7\pi )^2}{5}\mathrm{exp}\left(\frac{\pi }{4\alpha }\right),$$ (5) one finds that it corresponds to $`G^{\mathrm{theory}}=6.6723458\times 10^8\mathrm{cm}^3g^1s^2`$, in excellent agreement with the CODATA value: $`G^{\mathrm{CODATA}}/G^{\mathrm{theory}}=1.00004\pm 0.00013`$ ! The first aim of this exercise was to exhibit one explicit example of a possible theoretical prediction for $`G`$. The second aim is to serve as an introduction to the currently existing “predictions” for the value of $`G`$ which are numerically inadequate, but which are conceptually important. 
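Relation (5) is easy to check numerically; the short sketch below does so with standard values of the constants (quoted here to limited precision, so the last digits of the output should not be over-interpreted). It reproduces a value of G close to the CODATA one, as claimed above.

```python
import math

# Physical constants (SI units, limited precision)
hbar = 1.054571e-34        # J s
c = 2.99792458e8           # m / s
m_e = 9.1093897e-31        # kg
alpha = 1.0 / 137.0359895  # fine-structure constant

# Eq. (5): alpha_g^theory(m_e) = (7 pi)^2 / 5 * exp(-pi / (4 alpha))
alpha_g_theory = (7 * math.pi) ** 2 / 5 * math.exp(-math.pi / (4 * alpha))
G_theory = alpha_g_theory * hbar * c / m_e**2

print(f"alpha_g^theory(m_e) = {alpha_g_theory:.4e}")
print(f"G^theory = {G_theory:.6e} m^3 kg^-1 s^-2")   # about 6.67e-11, close to CODATA
```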
Indeed, the main message of the present contribution is that the gravitational interaction is currently believed to play a central role in physics, and to unify with the other interactions at a very high energy scale. The main argument is that gravity, like the other interactions, should be described by quantum theory. However, quantizing the gravitational field has turned up to be a much more difficult task than quantizing the other interactions. Let us recall that the electromagnetic interaction was quantized in the years 1930-1950 (QED), and that the weak and strong interactions were quantized in the 70’s and 80’s (Standard Model of weak interactions and QCD). The methods used to quantize the electroweak and strong interactions are deeply connected with the fact that the (quantum) coupling constants of these interactions, $`\alpha _i=g_i^2/(4\pi \mathrm{}c)`$, are dimensionless. By contrast, we see from Eq. (4) that the quantum gravitational coupling constant $`G/\mathrm{}c`$ has the dimension of an inverse mass squared, or (using the correspondence $`L_E=\mathrm{}/E`$) of a distance squared. This simple fact has deep consequences on the quantization of gravity. It means that gravity becomes very strong at high energies, i.e. at short distance scales. This is directly apparent in Eq. (4). If we consider a quantum process involving the mass-energy scale $`\mu `$, the associated dimensionless analog of the fine-structure constant will be $`\alpha _g(\mu )=G\mu ^2/\mathrm{}c`$ and will grow quadratically with $`\mu `$. This catastrophic growth renders inefficient the (renormalizable quantum field theory) methods used in the quantization of the other interactions. It suggests that gravity defines a maximum mass scale, or a minimum distance. There is, at present, only one theory which, indeed, contains such a fundamental length scale, and which succeeds (at least in the perturbative sense) in making sense of quantum gravity: namely, String Theory. This theory (which is not yet constructed as a well defined, all encompassing framework) contains no dimensionless parameter, and only one dimensionful parameter $`\alpha ^{}=\mathrm{}_s^2=m_s^2`$ where $`\mathrm{}_s`$ is a length, and $`m_s`$ a mass (we henceforth often use units where $`\mathrm{}=1=c`$). In the simplest case (where the theory is perturbative, and no large dimensionless numbers are present), String Theory makes a conceptually very elegant prediction: it predicts that the “fine-structure constants” of all the interactions, including the gravitational one, must become equal at an energy scale of the order of the fundamental string mass $`m_s`$. In other words, it predicts (in the simplest case) that $$\alpha _g(m_U)\alpha _1(m_U)\alpha _2(m_U)\alpha _3(m_U)\text{at}m_Um_S.$$ (6) This yields $`G\alpha _U/m_U^2`$. Taking into account some numerical factors yields something like ($`\gamma `$ denoting Euler’s constant) $$\frac{G}{\mathrm{}c}\frac{e^{1\gamma }}{3^{3/2}\mathrm{\hspace{0.17em}4}\pi }\frac{\alpha _U}{m_U^2}.$$ (7) When one inserts the “experimental” values (2) and (3) for $`\alpha _U`$ and $`m_U`$, one finds that the R.H.S. of Eq. (7) is about 100 times larger than the actual value of $`G`$. Many attempts have been made to remedy this discrepancy . However, the main message I wish to convey here is that modern physics tries to unify gravity with the other interactions and suggests the existence of conceptually important links, such as Eq. (7), between $`G`$ and the other coupling constants of physics. 
It is quite possible that, in the near future, there will exist a better prediction for $`G`$. I wish to mention that the exponential-type relations (5) between $`G`$, $`\alpha `$ and the particle mass scales are also (roughly) compatible with the type of unification predicted by string theory. Indeed, the hadronic mass scale $`(\mathrm{\Lambda }_{\mathrm{QCD}})`$ determining the mass of the proton, the neutron and the other strongly interacting particles is (via the Renormalization Group) predicted to be exponentially related to the string mass $`m_s`$. Roughly $$m_pm_s\mathrm{exp}(b/\alpha _U)$$ (8) where $`b`$ is a (known) number of order unity. Combining (8) with (6) leads to $$\alpha _g(m_p)=\frac{Gm_p^2}{\mathrm{}c}\alpha _Ue^{2b/\alpha _U},$$ (9) where $`\alpha _U`$ is the common value of the gauge coupling constants at unification. Finally, let me mention that String Theory (and other attempts at quantizing gravity, and/or unifying it with the other interactions) makes other generic predictions that might be testable in Cavendish-type experiments. Indeed, a generic prediction of such theories is that there are more gravitational-strength interactions than the usual (tensor) one described by Einstein’s general relativity. In particular, the usual tensor gravitational field $`g_{\mu \nu }(x)`$ is typically accompanied by one or several scalar fields $`\phi (x)`$. As many high-precision tests of relativistic gravity have put stringent limits on any long-range scalar gravitational fields (see, e.g., ), there are two possibilities (assuming that such scalar partners of $`g_{\mu \nu }`$ do exist in Nature): (i) the scalar gravitational field $`\phi (x)`$ is (like $`g_{\mu \nu }`$) long-ranged, but its strength has been reduced much below the usual gravitational strength $`G`$ by some mechanism. \[A natural cosmological mechanism for the reduction of any scalar coupling strength has been discussed in Refs. , .\]; (ii) the initially long-ranged field $`\phi (x)`$, has acquired a mass-term $`m_\phi `$, i.e. it has become short-ranged (decreasing with distance like $`e^{m_\phi r}/r`$), but its strength is still comparable to (or larger than) $`G`$ , . In the first case, the best hope of detecting such a deviation from standard gravity is to perform ultra-high-precision tests of the equivalence principle . In the second case, deviations from standard (Newtonian) gravity might appear in short-distance Cavendish-type experiments , . Indeed, it is possible (but by no means certain) that the mass (and therefore the range $`m_\phi ^1`$) of such gravitational-strength fields be related to the supersymmetry breaking scale $`m_{\mathrm{SUSY}}`$ by a relation of the type $`m_\phi G^{1/2}m_{\mathrm{SUSY}}^210^3\mathrm{eV}(m_{\mathrm{SUSY}}/\mathrm{Te}V)^2`$. Therefore, if $`m_{\mathrm{SUSY}}1\mathrm{T}\mathrm{e}V`$, the observable strength of gravity would increase by a factor of order unity at distances below $`m_\phi ^11\mathrm{mm}`$ , . More recently, another line of thought has suggested that gravity could be even more drastically modified below some distance $`r_0`$ . In principle, Cavendish-type experiments performed for separations smaller than $`r_0`$ might see a change of the $`1/r^2`$ law: the exponent 2 being replaced by an exponent larger than or equal to 4 ! 
\[Note, however, that in these models $`r_0`$ is already constrained to be smaller than $`1\mu \mathrm{m}`$.\] I also wish to mention a general argument of Weinberg suggesting the existence of a new gravity-related interaction with a range larger than 0.1 mm . \[The recent announcement of the measurement of a nonzero cosmological constant goes in the direction of confirming the importance of such a submillimeter scale.\] In conclusion, I hope to have shown that $`G`$-measurements and Cavendish-type experiments have now reached a new significance as possible windows on the physics of unification between gravity and the other interactions.
# Uncertainty principle for proper time and mass Shoju Kudaka Department of Physics, University of the Ryukyus, Okinawa, Japan Shuichi Matsumoto<sup>1</sup><sup>1</sup>1Electronic mail address: shuichi@edu.u-ryukyu.ac.jp Department of Mathematics, University of the Ryukyus, Okinawa, Japan —————————————————————————— —————————————————————————— We review Bohr’s reasoning in the Bohr-Einstein debate on the photon box experiment. The essential point of his reasoning leads us to an uncertainty relation between the proper time and the rest mass of the clock. It is shown that this uncertainty relation can be derived if only we take the fundamental point of view that the proper time should be included as a dynamic variable in the Lagrangian describing the system of the clock. Some problems and some positive aspects of our approach are then discussed. PACS numbers: 03.65.Bz, 03.20.+i, 04.20.Cv, 04.60.Ds. I. INTRODUCTION In various arguments about time, perhaps the most spectacular is the Einstein-Bohr debate on the photon box experiment<sup>1,2</sup>. Their concern in the debate was Heisenberg’s time-energy uncertainty relation. However, Bohr’s reasoning reveals, as shown in the following, an uncertainty relation between the proper time and the rest mass of a clock. In fact, his essential point was simply that the very act of weighing a clock, according to general relativity, interferes with the rate of the clock. In order to review Bohr’s reasoning, we consider an experiment in which we measure the rest mass of a clock. We assume, of course, that the clock keeps its own proper time. Following Einstein’s stratagem, we try to weigh the clock by suspending it with a spring. That is to say, if the spring stretches by the length $`l`$, we can calculate the mass $`m`$ of the clock from the relation $$kl=mg,$$ where $`g`$ is the gravitational acceleration and $`k`$ is a constant characterizing the spring. Assume that a scale is fixed to the spring support, and that we read the length $`l`$ on it with an accuracy $`\mathrm{\Delta }q`$. Then the determination of $`l`$ involves a minimum latitude $`\mathrm{\Delta }p`$ in the momentum of the clock, related to $`\mathrm{\Delta }q`$ by the relation $`\mathrm{\Delta }q\mathrm{\Delta }p\gtrsim h`$. Let $`t`$ be the time interval in which we read the length $`l`$. (We should note that $`t`$ is measured by a clock other than the suspended clock.) Then we cannot determine the force exerted by the gravitational field on the clock to a finer accuracy than $`\mathrm{\Delta }p/t`$. Therefore we cannot determine the mass $`m`$ to a finer accuracy than $`\mathrm{\Delta }m`$ given by the relation $$\frac{\mathrm{\Delta }p}{t}\lesssim g\mathrm{\Delta }m.$$ (1) Now, according to general relativity theory, a clock, when displaced in the direction of the gravitational force by an amount $`\mathrm{\Delta }q`$, changes its rate in such a way that its reading in the course of a time interval $`t`$ differs by an amount $`\mathrm{\Delta }\tau `$ given by the relation $$\frac{\mathrm{\Delta }\tau }{t}=\frac{g\mathrm{\Delta }q}{c^2}.$$ (2) By combining (1), (2) and the relation $`\mathrm{\Delta }q\mathrm{\Delta }p\gtrsim h`$, we see, therefore, that there is an uncertainty relation $$c^2\mathrm{\Delta }m\mathrm{\Delta }\tau \gtrsim h$$ (3) between the rest mass $`m`$ and the proper time $`\tau `$ of the clock. The relativistic red-shift formula (2) was, of course, essential in Bohr’s reasoning above.
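The orders of magnitude implied by relation (3) are easily tabulated. The short snippet below (illustrative numbers only, not part of the original argument) evaluates the minimum proper-time spread compatible with a given mass resolution, and vice versa.

```python
# Illustrative numbers (not from the paper) for relation (3), c^2 dm dtau >~ h.
H = 6.626e-34      # Planck constant, J s
C = 2.998e8        # speed of light, m/s

def min_dtau(dm_kg):
    """Smallest proper-time uncertainty compatible with a mass resolution dm."""
    return H / (C**2 * dm_kg)

def min_dm(dtau_s):
    """Smallest rest-mass uncertainty compatible with a proper-time spread dtau."""
    return H / (C**2 * dtau_s)

print(f"dm = 1 microgram  ->  dtau >~ {min_dtau(1e-9):.1e} s")
print(f"dtau = 1 fs       ->  dm   >~ {min_dm(1e-15):.1e} kg "
      f"(about {min_dm(1e-15) * C**2 / 1.602e-19:.1f} eV/c^2)")
```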
The more essential it seems to be, however, the stronger the apprehension we feel that the uncertainty relation (3) may fail if we can think of a weighing procedure not resorting to any interaction between the clock and the gravitational field. We check one such case in the following. Assume that the clock has been brought to rest after being charged with an electric charge $`e`$, and that a uniform electric field $``$ is then switched on. After a short time $`t`$, we measure the distance the clock has moved. (Again $`t`$ is the time measured by a clock other than our clock in the electric field.) Then we can know the average velocity $`v`$ of the clock by dividing the distance by the value of $`t`$, and we can determine the mass $`m`$ of the clock by virtue of the formula $$e=m\frac{v}{t}.$$ Assume that the determination of the distance is made with a given accuracy $`\mathrm{\Delta }q`$. Then it implies a minimum latitude $`\mathrm{\Delta }p`$ in the momentum of the clock, where $`\mathrm{\Delta }q\mathrm{\Delta }ph`$. Hence we cannot determine the force exerted by the electric field on the clock to a finer accuracy than $`\mathrm{\Delta }p/t`$. Therefore, even when the velocity $`v`$ is obtained, we cannot determine the mass $`m`$ to a finer accuracy than $`\mathrm{\Delta }m`$ given by the relation $$\frac{\mathrm{\Delta }p}{t}\mathrm{\Delta }m\frac{v}{t}\mathrm{i}.\mathrm{e}.\mathrm{\Delta }pv\mathrm{\Delta }m.$$ (4) Now, according to special relativity theory, when a clock has a speed $`v`$, its rate $`\tau `$ in the course of a time interval $`t`$ is given by the relation $$\tau =t\sqrt{1\left(\frac{v}{c}\right)^2}.$$ (5) On the other hand, the average velocity $`v`$ has an uncertainty $`\mathrm{\Delta }v`$ given by the relation $$t\mathrm{\Delta }v\mathrm{\Delta }q.$$ Correspondingly, the clock has an uncertainty in its rate $`\tau `$ of the order $`\mathrm{\Delta }\tau `$ given by $$\mathrm{\Delta }\tau =t\mathrm{\Delta }\sqrt{1\left(\frac{v}{c}\right)^2}\frac{v}{c^2}t\mathrm{\Delta }v\frac{v}{c^2}\mathrm{\Delta }q.$$ (6) By combining (4), (6) and the relation $`\mathrm{\Delta }q\mathrm{\Delta }ph`$, we arrive, therefore, at the same uncertainty relation $$c^2\mathrm{\Delta }m\mathrm{\Delta }\tau h$$ as (3) obtained by Bohr’s reasoning. Thus the uncertainty relation (3) has been confirmed for a weighing procedure which does not rely on any gravitational interaction. Moreover, in this case, the time-shift formula (5) played an essential role in place of the relativistic red-shift formula. Each of these formulae is, of course, one of the deepest and most important results in relativistic theory. The fact that these important formulae play essential roles in deriving the uncertainty relation (3) lends some confidence as to its universality. The objective of this article is to show the following: The uncertainty relation (3) can be derived satisfactorily only if we describe the system of the clock by using a Lagrangian which includes the proper time as a dynamic variable. In the next section, selecting the simplest Lagrangian which is in accord with the above approach, we examine the Hamiltonian formalism of the clock. Our conclusion is that the rest energy can be considered the momentum conjugate to the proper time. In the third section, following Dirac’s procedure, we quantize the system of the clock, and we obtain the same uncertainty relation as (3). Some comments then follow on our quantization. II. 
LAGRANGIAN AND HAMILTONIAN FORMALISM A gravitational field $`g_{\mu \nu }`$ and an electromagnetic field $`A_\mu `$ are assumed to be given, and we consider our clock to be one material particle moving in those fields with electric charge $`e`$. The Lagrangian which is generally used in such a case is the following: $$L_0=mc\sqrt{g_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu }+eA_\mu (x)\dot{x}^\mu ,$$ where $`x^\mu (\mu =0,1,2,3)`$ are the variables and the dot denotes the differential with respect to an arbitrary parameter $`\lambda `$. It goes without saying that $`m`$ is the rest mass of the clock and that $`c`$ is the speed of light. We, however, cannot consider the proper time $`\tau `$ a physical quantity if we describe the system by using the Lagrangian $`L_0`$. On the other hand, it is clear that the proper time of a clock is a measurable physical quantity. (It is why a clock is so named.) Hence we have to find another Lagrangian which is in accord with the system of the clock. Our first purpose in this section is to find a Lagrangian $`L`$ which satisfies the following conditions: 1. The Lagrangian L has the proper time $`\tau `$ as a new variable in addition to $`x^\mu `$. 2. The motion equations for the variables $`x^\mu `$ are invariant between $`L`$ and $`L_0`$. As a candidate we consider the Lagrangian defined by $$L=M\left(\dot{\tau }\sqrt{g_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu }/c\right)+eA_\mu (x)\dot{x}^\mu ,$$ where the dynamic variables are $`\tau ,M`$ and $`x^\mu `$. The Lagrange’s equations of motion are as follows: $`\dot{M}=0`$ (7) $`\dot{\tau }=\sqrt{g_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu }/c`$ (8) $`{\displaystyle \frac{d}{d\lambda }}\left[{\displaystyle \frac{M}{c}}{\displaystyle \frac{g_{\rho \mu }\dot{x}^\mu }{\sqrt{g_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu }}}+eA_\rho (x)\right]`$ $`{\displaystyle \frac{M}{c}}{\displaystyle \frac{g_{\mu \nu ,\rho }\dot{x}^\mu \dot{x}^\nu }{2\sqrt{g_{\mu \nu }\dot{x}^\mu \dot{x}^\nu }}}eA_{\mu ,\rho }(x)\dot{x}^\mu =0`$ (9) The second equation (8) means that we can identify the variable $`\tau `$ with the proper time of this clock. Moreover we have $`d\tau /d\lambda >0`$, and therefore it is possible to change the differential with respect to $`\lambda `$ to one with respect to $`\tau `$ in the third equation (9). As a result we find that $$\frac{d}{d\tau }\left[\frac{M}{c^2}g_{\rho \mu }\dot{x}^\mu +eA_\rho (x)\right]\frac{M}{2c^2}g_{\mu \nu ,\rho }\dot{x}^\mu \dot{x}^\nu eA_{\mu ,\rho }(x)\dot{x}^\mu =0,$$ where the dot denotes the differential with respect to $`\tau `$. Rewriting this equation, we get $$\frac{M}{c^2}\left[\ddot{x}^\rho +\mathrm{\Gamma }_{}^{\rho }{}_{\mu \nu }{}^{}\dot{x}^\mu \dot{x}^\nu \right]=ef^{\rho \mu }\dot{x}_\mu ,$$ (10) where $`\mathrm{\Gamma }_{}^{\rho }{}_{\mu \nu }{}^{}`$ and $`f_{\mu \nu }`$ are defined by $$\mathrm{\Gamma }_{}^{\rho }{}_{\mu \nu }{}^{}=\frac{1}{2}g^{\rho \sigma }\left(g_{\mu \nu ,\sigma }+g_{\nu \sigma ,\mu }+g_{\sigma \mu ,\nu }\right),f_{\mu \nu }=A_{\nu ,\mu }A_{\mu ,\nu }.$$ On the other hand, the motion equation derived from the original Lagrangian $`L_0`$ is $$m\left[\ddot{x}^\rho +\mathrm{\Gamma }_{}^{\rho }{}_{\mu \nu }{}^{}\dot{x}^\mu \dot{x}^\nu \right]=ef^{\rho \mu }\dot{x}_\mu .$$ (11) Equation (10) is just the same as equation (11) if we identify $`M`$ with the constant $`mc^2`$. Equation (7) indicates that this identification is possible. Thus our first purpose has been achieved. 
Moreover, this Lagrangian $`L`$ is the simplest of those which satisfy the above two conditions. The second purpose in this section is to investigate, by using the Lagrangian $`L`$, the consequences of our assertion that the proper time should be considered a dynamic variable. We note that it is possible to propose an argument without imposing any limitation on the fields $`g_{\mu \nu }`$ and $`A_\mu `$. In such an argument, however, we have to handle the coordinate time $`x^0=ct`$ as a dynamic variable, and then determine certain constraint conditions for the variables. Discussion of such constraints is not essential for our purpose. We therefore assume for simplicity hereafter that the fields $`g_{\mu \nu }`$ and $`A_\mu `$ are so-called static in the following sense: 1. The functions $`g_{\mu \nu }`$ and $`A_\mu `$ depend on only $`x^1,x^2,x^3`$. 2. For $`i=1,2,3`$, we have $`g_{i0}(=g_{0i})=0`$. Assuming the above conditions, we get $$L=M\left(\dot{\tau }\sqrt{f(x)^2g_{ij}(x)\dot{x}^i\dot{x}^j/c^2}\right)+ceA_0(x)+eA_i(x)\dot{x}^i,$$ where $`f`$ is defined by $`g_{00}=f^2(f>0)`$. The dynamic variables are $`\tau ,M,x^i(i=1,2,3)`$, and the dot denotes the differential with respect to $`t`$. The momentums conjugate to those variables are given by $$p_\tau \frac{L}{\dot{\tau }}=M,p_M\frac{L}{\dot{M}}=0$$ and $$p_i\frac{L}{\dot{x}^i}=\frac{M}{c^2}\frac{g_{ij}\dot{x}^j}{\sqrt{f^2g_{jk}\dot{x}^j\dot{x}^k/c^2}}+eA_i.$$ We have $`H_0`$ $``$ $`p_\tau \dot{\tau }+p_M\dot{M}+p_i\dot{x}^iL`$ $`=`$ $`f\sqrt{M^2+c^2g^{ij}(p_ieA_i)(p_jeA_j)}ceA_0.`$ If $`M`$ is replaced by $`mc^2`$, then $`H_0`$ is identical with the Hamiltonian which is derived from the original Lagrangian $`L_0`$. In our case, however, there exist two constraints: $$\varphi _1Mp_\tau =0,\varphi _2p_M=0.$$ Taking account of these constraints, we have to consider the total Hamiltonian $$HH_0+u_1\varphi _1+u_2\varphi _2,$$ where $`u_1`$ and $`u_2`$ are Lagrange’s undetermined multipliers. The multipliers $`u_1`$ and $`u_2`$ are determined in the following manner<sup>3</sup>: Poisson’s bracket of $`\varphi _1`$ and $`\varphi _2`$ is $$\{\varphi _1,\varphi _2\}=1$$ and therefore we have $$\dot{\varphi }_1=\{\varphi _1,H\}u_2,$$ $$\dot{\varphi }_2=\{\varphi _2,H\}u_1\frac{fM}{\sqrt{M^2+c^2g^{ij}(p_ieA_i)(p_jeA_j)}},$$ where the symbol “$``$” denotes the weak equality defined by the constraints $`\varphi _1=\varphi _2=0`$. 
Hence, the consistency conditions $$\dot{\varphi }_10\mathrm{a}\mathrm{n}\mathrm{d}\dot{\varphi }_20$$ require the multipliers $`u_1`$ and $`u_2`$ to be $$u_1=\frac{fM}{\sqrt{M^2+c^2g^{ij}(p_ieA_i)(p_jeA_j)}}\mathrm{and}u_2=0,$$ which give $$H=H_0\frac{fM(Mp_\tau )}{\sqrt{M^2+c^2g^{ij}(p_ieA_i)(p_jeA_j)}}.$$ (12) Hamilton’s canonical equations of motion are as follows: $`\dot{\tau }={\displaystyle \frac{H}{p_\tau }}={\displaystyle \frac{fM}{\sqrt{M^2+c^2g^{ij}(p_ieA_i)(p_jeA_j)}}}`$ $`\dot{p}_\tau ={\displaystyle \frac{H}{\tau }}=0`$ $`\dot{M}={\displaystyle \frac{H}{p_M}}=0`$ $`\dot{p}_M={\displaystyle \frac{H}{M}}0`$ $`\dot{x}^i={\displaystyle \frac{H}{p_i}}{\displaystyle \frac{H_0}{p_i}}`$ $`\dot{p}_i={\displaystyle \frac{H}{x^i}}{\displaystyle \frac{H_0}{x^i}}`$ Defining a matrix $`W_{ij}`$ by $$W_{ij}\{\varphi _i,\varphi _j\}=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),$$ we can write Dirac’s bracket: $`\{A,B\}_D`$ $`=`$ $`\{A,B\}{\displaystyle \underset{i,j=1}{\overset{2}{}}}\{A,\varphi _i\}W_{ij}^1\{\varphi _j,B\}`$ $`=`$ $`\{A,B\}+\{A,\varphi _1\}\{\varphi _2,B\}\{A,\varphi _2\}\{\varphi _1,B\}.`$ We can easily calculate Dirac’s brackets between the canonical variables: $$\{\tau ,p_\tau \}_D=\{\tau ,M\}_D=1,\{x^i,p_j\}_D=\delta _{}^{i}{}_{j}{}^{},\mathrm{the}\mathrm{others}=0.$$ We are now in a position to be able to state our conclusions in this section. It is easily shown that $$\varphi _1,\varphi _2,T\tau p_M,Ep_\tau ,x^i,p_i,(i=1,2,3)$$ are canonical variables, and therefore the variables $`T,E,x^i,p_i(i=1,2,3)`$ can be interpreted as canonical variables on the submanifold defined by the constraints $`\varphi _1=\varphi _2=0`$. We can show also that $$\{A,B\}_D=\frac{A}{T}\frac{B}{E}\frac{A}{E}\frac{B}{T}+\underset{i=1}{\overset{3}{}}\left(\frac{A}{x^i}\frac{B}{p_i}\frac{A}{p_i}\frac{B}{x^i}\right)$$ on the submanifold. Since we have that $$T=\tau \mathrm{and}E=M(=mc^2)$$ on the submanifold defined by $`\varphi _1=\varphi _2=0`$, it follows from the above that the rest energy $`mc^2`$ is considered the momentum conjugate to the proper time $`\tau `$. III. QUANTIZATION AND DISCUSSIONS Thus we have arrived at the following conclusion: If we accept the view that we should describe a clock by using a Lagrangian which includes the proper time as a dynamic variable like the positions $`x^i`$, then we find that the rest energy $`E=mc^2`$ turns out to be the general momentum conjugate to the proper time, and that $`\tau ,E,x^i`$ and $`p_i`$ are canonical variables of the system. Since $`\tau ,E,x^i,p_i`$ are the canonical variables, if we quantize the system by Dirac’s procedure, there are corresponding operators $$\widehat{\tau },\widehat{E},\widehat{x}^i,\widehat{p}_i(i=1,2,3)$$ which satisfy the commutation relations $$[\widehat{\tau },\widehat{E}]=[\widehat{x}^i,\widehat{p}_i]=i\mathrm{}.$$ (13) The relation $`[\widehat{\tau },\widehat{E}]=i\mathrm{}`$ in (13) leads us to the uncertainty relation $$c^2\mathrm{\Delta }m\mathrm{\Delta }\tau \mathrm{}/2$$ (14) which was argued in the Introduction to this article. Our quantization leads to some desirable results besides the uncertainty relation (14), but at the same time gives rise to some problems. First, we should make some comment on the problems. In our quantization, the operators $`\widehat{\tau },\widehat{E},\widehat{x}^i`$ and $`\widehat{p}_i(i=1,2,3)`$ can be represented in the Hilbert space composed of square integrable functions of $`\tau ,x^1,x^2`$ and $`x^3`$. 
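Because the constraint analysis above is purely algebraic, it can be verified mechanically. The sympy sketch below (an illustration added here, not part of the paper) checks, for a single spatial dimension, that the constraints obey {φ₁, φ₂} = 1 and that the Dirac brackets take the values just quoted, so that M (the rest energy) is indeed conjugate to the proper time τ.

```python
# Symbolic check (not from the paper) of the constraint algebra and Dirac
# brackets quoted above.  Phase-space pairs: (tau, p_tau), (M, p_M), (x, p);
# constraints: phi1 = M - p_tau, phi2 = p_M.
import sympy as sp

tau, p_tau, M, p_M, x, p = sp.symbols('tau p_tau M p_M x p')
pairs = [(tau, p_tau), (M, p_M), (x, p)]

def poisson(A, B):
    """Canonical Poisson bracket {A, B} over the pairs above."""
    return sum(sp.diff(A, q) * sp.diff(B, pq) - sp.diff(A, pq) * sp.diff(B, q)
               for q, pq in pairs)

phi1 = M - p_tau
phi2 = p_M

def dirac(A, B):
    """Dirac bracket {A, B}_D = {A,B} + {A,phi1}{phi2,B} - {A,phi2}{phi1,B}."""
    return sp.simplify(poisson(A, B)
                       + poisson(A, phi1) * poisson(phi2, B)
                       - poisson(A, phi2) * poisson(phi1, B))

assert poisson(phi1, phi2) == 1       # {phi1, phi2} = 1
assert dirac(tau, p_tau) == 1         # {tau, p_tau}_D = 1
assert dirac(tau, M) == 1             # {tau, M}_D = 1: M is conjugate to tau
assert dirac(x, p) == 1               # spatial brackets are unchanged
assert dirac(tau, p_M) == 0 and dirac(M, p_M) == 0   # "the others" vanish
print("Dirac-bracket relations verified")
```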
In particular, the operator $`\widehat{E}`$ is represented by the differential operator $`i\mathrm{}/\tau `$, and therefore the rest energy $`\widehat{E}`$ cannot have any discrete spectrum. Furthermore, this Hilbert space includes some states in which the mean values of $`\widehat{E}`$ are negative. The problems of the continuous mass spectrum and of the negative mass are inevitable in our formulation. The authors cannot, at present, judge whether these characteristics are desirable or not. These problems will be discussed in a subsequent paper from a rather different viewpoint. Secondly, we focus our attention on some positive aspects of our quantization. We restrict ourselves, for simplicity, to the case in which the space-time metric is flat and $`A_\mu =0`$. Then the Hamiltonian in (12) is rather simple and the Hamiltonian operator has the form $$\widehat{H}\sqrt{\widehat{E}^2+c^2\widehat{𝐩}^2}.$$ (We omit, hereafter, the hats representing the operators since there is no possibility of misunderstanding.) For the Heisenberg representation of the operator $`\tau `$ $$\tau (t)=e^{itH/\mathrm{}}\tau e^{itH/\mathrm{}},$$ we find that $$\frac{d}{dt}\tau (t)=\frac{i}{\mathrm{}}e^{itH/\mathrm{}}[H,\tau ]e^{itH/\mathrm{}}=\frac{E}{\sqrt{E^2+c^2𝐩^2}}$$ (15) by virtue of $$[\tau ,H]=i\mathrm{}\frac{E}{\sqrt{E^2+c^2𝐩^2}}.$$ Hence we have $$\tau (t)=\frac{E}{\sqrt{E^2+c^2𝐩^2}}t+\tau .$$ (16) We note that the last term of (15) is the operator which represents the time delay of the moving clock. We can moreover show that $`{\displaystyle \frac{d}{dt}}\tau (t)^2`$ $`=`$ $`{\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}}\tau (t)+\tau (t){\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}}`$ $`=`$ $`2{\displaystyle \frac{E^2}{E^2+c^2𝐩^2}}t+[{\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}},\tau ]_+,`$ where we have used equation (16), and where $`[A,B]_+`$ denotes the anti-commutator of operators $`A`$ and $`B`$. Integrating this, we have $$\tau (t)^2=\frac{E^2}{E^2+c^2𝐩^2}t^2+[\frac{E}{\sqrt{E^2+c^2𝐩^2}},\tau ]_+t+\tau ^2.$$ Hence the standard deviation $`\mathrm{\Delta }\tau (t)`$ in a state $`\psi `$ is given by $`(\mathrm{\Delta }\tau (t))^2`$ $``$ $`\tau (t)^2\tau (t)^2`$ (17) $`=`$ $`\left({\displaystyle \frac{E^2}{E^2+c^2𝐩^2}}{\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}}^2\right)t^2`$ $`+\left([{\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}},\tau ]_+2{\displaystyle \frac{E}{\sqrt{E^2+c^2𝐩^2}}}\tau \right)t`$ $`+\left(\tau ^2\tau ^2\right),`$ where $`A`$ denotes the mean value of an operator $`A`$ in the state $`\psi `$. Here we must introduce some approximations: We assume that the Hamiltonian operator has a very sharp value (say $``$) in the state $`\psi `$. This assumption seems to be natural since the clock is moving as a free particle. Under this assumption, we can approximately estimate the two terms in (17) in the following manner; $$\frac{E^2}{E^2+c^2𝐩^2}\frac{E}{\sqrt{E^2+c^2𝐩^2}}^2\frac{1}{^2}\left(E^2E^2\right),$$ $$[\frac{E}{\sqrt{E^2+c^2𝐩^2}},\tau ]_+2\frac{E}{\sqrt{E^2+c^2𝐩^2}}\tau \frac{1}{}\left([E,\tau ]_+2E\tau \right).$$ (18) On the other hand, the term $`[E,\tau ]_+2E\tau `$ in (18) often vanishes, as it does in the case of all optimal simultaneous measurements of $`E`$ and $`\tau `$. (We can easily check it by setting, for example, $`\tau =i\mathrm{}/E`$ and $`\psi =`$ a Gaussian function of $`E`$.) Taking this cancellation into account, we neglect the second term in (17). 
Thus we have arrived at $$(\mathrm{\Delta }\tau (t))^2\frac{1}{^2}(\mathrm{\Delta }E)^2t^2+(\mathrm{\Delta }\tau )^2,$$ and, by virtue of the inequality $$\frac{1}{^2}(\mathrm{\Delta }E)^2t^2+(\mathrm{\Delta }\tau )^2\frac{2}{}\mathrm{\Delta }\tau \mathrm{\Delta }Et,$$ we finally have $$(\mathrm{\Delta }\tau (t))^2\frac{\mathrm{}}{}t,$$ (19) where we have used the uncertainty relation $`\mathrm{\Delta }\tau \mathrm{\Delta }E\mathrm{}/2`$ of (14). When the motion of the clock is so slow that the value of $``$ is approximately equal to $`mc^2`$, then our inequality (19) has the form $$(\mathrm{\Delta }\tau (t))^2\frac{\mathrm{}}{mc^2}t,$$ (20) which exactly coincides with an inequality derived by Salecker and Wigner from another point of view (see Eq. (6) in Ref. 4). In conclusion, we should make some comment on the meaning of our results to physics. Bohr and Rosenfeld stressed the principle that every proper theory should provide in and by itself its own means for defining the quantities with which it deals. One of the key points this principle makes is that we should analyze the means of measuring those quantities in order to argue the consistency of a physical theory. In their case, they succeeded in showing that the definition of the standard quantization of electromagnetic field is consistent in the above sense by discussing the means of measuring the classical electromagnetic field<sup>5,6</sup>. Several authors have applied this principle to the theory of relativity to find a consistent quantization of the space-time geometry. The theory deals with such quantities as the metric tensor, the curvature tensor, the covariant derivative and connection coefficients. The measurement of the distance between two events is most fundamental in the procedures by which we measure these quantities. For this we require the concept of a clock<sup>7,10</sup>, and the clock cannot be independent of the various physical laws. Thus, if the above principle should be a general feature of physical theory, a consistent formulation of the quantization of the space-time geometry should have some inherent relation with various limitations on the accuracy of the clock resulting from the physical laws. Various gedanken experiments on such limitations have been proposed and elaborated on for some fifty years<sup>4,7-16</sup>. In many of them, however, the clock is assumed to have some structure, from which starting point the argument is developed. It seems uncertain therefore whether their results are universal or not. Moreover, different studies sometimes reach different conclusions. Our objective in the present paper was to propose an attempt to dispose of this ambiguity. We showed the following : (a) There is an uncertainty relation between the proper time and the rest mass of a clock independent of its structure (see Eq. (3)). (b) A limitation on the accuracy of the clock is derived from the uncertainty relation in a natural way (see Eqs. (19) and (20)). The subject raised here has been argued, despite its importance, only at the level of thought experiments. The authors are uneasy with this situation, and think that the time has come to argue it at a more positive level. We hope that the importance of this subject is recognized and that, for example, the relation (20) is verified by experiment in the near future. <sup>1</sup>A.Pais, ‘Subtle is the Lord …’ The Science and the Life of Albert Einstein (Oxford University, 1982). 
<sup>2</sup>M.Jammer, The philosophy of quantum mechanics(John Wiley & Sons, Inc., 1974). <sup>3</sup>P.A.M.Dirac, Canad. J. Math. 2, 129(1950); Proc. Roy. Soc. (London) A 246, 326(1958). <sup>4</sup>H.Salecker and E.P.Wigner, Phys. Rev. 109, 571(1958). <sup>5</sup>N.Bohr and L.Rosenfeld, Mat.-Fys. Medd. Dan. Vid. Selsk. 12, no.8 (1933); Phys. Rev. 78, 794 (1950). <sup>6</sup>L.D.Landau and R.Peierls, Z. Phys. 69, 56 (1931); in Collected Papers of Landau, ed. D.ter Haar, (Gordon and Breach, New York, 1965), pp. 40-51. <sup>7</sup>E.P.Wigner, Rev. Mod. Phys. 29, 255 (1957). <sup>8</sup>A.Peres and N.Rosen, Phys. Rev. 118, 335 (1960). <sup>9</sup>C.A.Mead, Phys. Rev. 135, B849 (1964). <sup>10</sup>R.F.Marzke and J.A.Wheeler, in Gravitation and Relativity, eds. H.Y.Chiu and W.F.Hoffman, (W.A.Benjamin, New York, 1964). <sup>11</sup>F.Károlyházy, A.Frenkel, and B.Lukács, in Quantum Concepts in Space and Time, eds. R.Penrose and C.J.Isham , (Clarendon, Oxford, 1986). <sup>12</sup>A.Charlesby, Radiat. Phys. Chem. 33, 487 (1989). <sup>13</sup>L.Diósi and B.Lukács, Phys. Lett. A 142, 331 (1989). <sup>14</sup>F.Károlyházy, in Sixty-Two Years of Uncertainty , ed. A.I.Miller, (Plenum, New York, 1990). <sup>15</sup>M.Maggiore, Phys. Lett. B 304, 65 (1993). <sup>16</sup>S.Doplicher, Ann. Inst. Henri Poincare Phys. Theor. 64, 543 (1996).
# The addition spectrum of interacting electrons: Parametric dependence ## Abstract The addition spectrum of a disordered stadium is studied for up to 120 electrons using the self consistent Hartree-Fock approximation for different values of the dimensionless conductance and in the presence and absence of a neutralizing background. In all cases a Gaussian distribution of the addition spectrum is reached for $`r_s1`$. An empirical scaling for the distribution width in the presence of a neutralizing background is tested and seems to describe rather well its dependence on the dimensionless conductance. Recent measurements of the distribution of the addition spectrum of chaotic quantum dots differ from the results of the orthodox constant interaction (CI) model in several ways. The distribution, which is roughly Gaussian in all experiments, has no bimodal structure expected due to the spin , nor does it have a Wigner like distribution expected if somehow the spin degeneracy is lifted . The width of the distribution varies between the different experiments. While in the first experiment the width is considerably larger than that expected from the CI model, the width in the latest experiment is compatible with the CI model predictions. These deviations from the CI model are usually attributed to the low electronic densities of the dots. For a typical dot the electronic density is of the order of $`n_D=23.5\times 10^{11}\mathrm{cm}^2`$, corresponding to a ratio of the average inter-particle Coulomb interaction and the Fermi energy $`r_s=1/\sqrt{\pi n_D}a_B1`$ (where $`a_B`$ is the Bohr radius). At these values of $`r_s`$ correlations in the electronic densities which are not taken into account in the CI model appear . Thus, the study of the addition spectra for chaotic dots must take into account the intricate interplay of chaos and interaction. Several numerical studies have tried to clarify the picture using exact diagonalization methods and Hartree-Fock (HF) approximations . The exact diagonalization method has the advantage of taking the full effect of the correlations into account, but can treat only small systems. The HF approximations lose some of the correlation effects, but are able to handle larger systems. In this paper we would like to answer some of the remaining open questions regarding the addition spacing distribution: (i) How does it depend on the dimensionless conductance $`g`$? (ii) Does it depend on the charge distribution in the dot (i.e., a uniform distribution due to a homogeneous background vs. accumulation of charge at the boundaries)? (iii) Do larger system sizes where up to a hundred electrons may be added to the dot change qualitatively the results obtained in the previous studies where only several electrons were added per dot? In order to answer these questions we study the Bunimovich stadium which is a canonical example of chaotic system . The addition spectrum of the stadium is calculated for different values of $`g`$, in the presence of a positive background and its absence, using a self-consistent Hartree-Fock approximation. We consider spinless electrons, since from experiment and from our exact diagonalization calculations we can argue that at $`r_s1`$ the role of spins in determining the addition spectrum is not an important one. We find that for any value of $`g`$ the addition spectrum distribution is Gaussian for values of $`r_s1`$. For smaller values of $`g`$ the distribution becomes Gaussian at smaller values of $`r_s`$. 
The dependence of the distribution width on $`g`$ can be summed up by a scaling function given later on. For $`g\mathrm{}`$ the width grows only moderately even at $`r_s1`$, while for higher values of $`g`$ the distribution is much wider. We find that the background plays an important role in determining the distribution of the addition spectrum. The absence of a positive background enhances the width of the distribution due to charge accumulation on the boundaries as predicted in Ref. . To perform our numerical calculations we chose a simple two dimensional tight-binding model, namely a square lattice with nearest-neighbors interaction. By using a single spatial index to label the sites, we can write the Hamiltonian as: $`H`$ $`=`$ $`H_0+H_{int}`$ (1) $`H_0`$ $`=`$ $`{\displaystyle \underset{j}{}}ϵ_ja_j^+a_jV{\displaystyle \underset{<j,k>}{}}(a_j^+a_k+a_k^+a_j)`$ (2) $`H_{int}`$ $`=`$ $`{\displaystyle \underset{j>k}{}}U_{jk}(a_j^+a_jK)(a_k^+a_kK),`$ (3) where $`ϵ_j`$ is the on-site energy, $`V`$ is the constant hopping matrix element and $`j,k`$ denotes the sum over the nearest-neighbors. $`H_{int}`$ contains the interaction among the electrons, which we chose as the unscreened Coulomb interaction $`U_{jk}=U/|r_jr_k|/b`$, with $`b`$ representing the lattice spacing and $`U=e^2/b`$ the interaction strength between electrons located on nearest-neighbor sites. Thus, $`r_s\sqrt{\pi N/n}(U/8V)`$, where $`N`$ is the number of sites and $`n`$ the number of electrons. The constant term $`K`$ which appears in the interaction term represents a constant positive charge background. Setting its value equal to zero corresponds to an electrically isolated dot, while setting $`K=n/N`$ assures the global charge neutrality corresponding to a dot closely coupled to a gate or screened by the environment. To describe completely the system we need also to specify the boundary conditions which define the shape of the dot. We studied a Bunimovich quarter stadium, see Fig. 1, to avoid degeneracies due to space symmetries. The classical motion inside this region is chaotic and the quantum dynamic shows the typical quantum chaos signatures, as the RMT statistics (avoided-level crossings and Wigner-Dyson level-spacing distribution) or the presence of quantum eigenstates scarred along the unstable classical orbits . This choice allowed us to study the quasi-ballistic regime of high dimensionless conductance but avoiding the highly degenerate condition related to the study of regular dots. The addition spectrum of the dot has been studied numerically by solving the Hartree-Fock problem in various parameter ranges. We decoupled the Coulomb interaction in a direct and an exchange term as described in . The Hamiltonian in the HF approximation reads $`H_{HF}=H_0+{\displaystyle \underset{jk}{}}a_j^+a_jU_{jk}a_k^+a_k_0{\displaystyle \underset{jk}{}}a_j^+a_kU_{jk}a_k^+a_j_0+const.,`$ (4) where $`\mathrm{}_0`$ denotes the average on the ground state which is calculated self-consistently. We studied the addition spectrum by changing the number of electrons from $`n=20`$ to $`n=120`$ and repeating the calculation for $`20`$ different realizations of the on-site energies. The dimensionless conductance of the sample is calculated from the non-interacting participation ratio $`I=10^2\mathrm{\Omega }^1_{n=20}^{120}_j|\mathrm{\Psi }_n(\stackrel{}{r}_j)|^4`$ (where $`\mathrm{\Psi }_n`$ is the n-th non-interacting eigenvector and $`\mathrm{\Omega }`$ is the volume) by using the relation given in Ref. 
$$g=3(\pi (I3))^1\underset{\mu }{}(\omega _1/\omega _\mu ),$$ (5) where $`\omega _\mu `$ are the eigenvalues of the diffusion equation with the Neuman boundary condition for the stadium. It turns out that for a clean stadium or for small disorder $`g`$ acquires negative values which indicates that, although the energy spectrum follows RMT predictions, the eigenvectors are not yet fully random. Thus, in order to obtain a really random system we added two types of disorder: (i) on-site disorder, where the energy of each site is chosen randomly between $`W/2`$ and $`W/2`$ and (ii) strong scatterers, where with a certain probability the site energy is set to zero or to a very large value (in our case $`ϵ_j=100V`$). In Fig. 2 we show the results for the spacing distribution for two different values of on-site disorder: $`W=3V`$ corresponding to $`g1`$ and $`W=V`$ corresponding to $`g\mathrm{}`$ (the conductance strongly increases when the value of $`W`$ decreases, but $`W=V`$ lies close to the region where Eq. (5) fails, thus we could not evaluate the exact value of $`g`$ but only asses, by extrapolation, that it is very large). We studied the case with (c,d) and without (a,b) a compensating background, and we built the distributions by averaging over the $`20`$ realizations of disorder and over the number of electrons. From this figure we can clearly realize that by increasing the interaction strength $`U`$ we obtain a transition from a Wigner-Dyson surmise (solid thin line) to more symmetric and broad distributions which are well described by Gaussians (see for example the solid thick lines which fit the $`U=1.5`$ distributions). This behaviour is present for both strengths of disorder and for the different backgrounds. It is clearly seen that in the absence of a positive background the distributions for the same value of $`g`$ become broader, as predicted in Ref. . This can be understood intuitively by noting that the electronic density in the presence of background is uniform, while in the absence of background there is a charge accumulation on the boundary (we have verified this directly from our calculation). Thus, without background the density is inhomogeneous and its effective value within the sample is lower, leading to a stronger influence of the interaction. The influence of $`g`$ is also clearly demonstrated in the figure. Smaller values of $`g`$ lead to a broader distribution for any kind of background. A more quantitative representation can be obtained by calculating the distributions momenta. In Fig. 3 we show the dependence on the interaction strength of: (a) the average spacing $`\mathrm{\Delta }_2`$; (b) the standard deviation $`\delta \mathrm{\Delta }_2\sqrt{(\mathrm{\Delta }_2\mathrm{\Delta }_2)^2}`$ and (c) the normalized deviation $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$. In this figure we show the data corresponding to the values of parameter used in Fig. 2 and also the results obtained by using the strong scatterers disorder, where we distributed the scatterers randomly in the dot with a $`5\%`$ probability. This allowed us to obtain a high value of the conductance in a case which is close to the clean case but where we can successfully use Eq. 5. Before discussing these results we need to asses their accuracy. The HF approximation is less and less accurate as the interaction strength increases. 
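For concreteness, the self-consistent Hartree-Fock procedure of Eqs. (1)-(4) can be put into a few dozen lines of code. The sketch below is only illustrative and is not the authors' code: it uses a tiny rectangular dot rather than the quarter stadium of Fig. 1, a handful of electrons, the neutralizing background K = n/N, and the simplest mixing scheme, so neither the system sizes nor the statistics match the actual calculation.

```python
# Minimal self-consistent Hartree-Fock sketch (illustrative only) for spinless
# electrons on a small square lattice: on-site disorder in [-W/2, W/2],
# nearest-neighbour hopping V, Coulomb interaction U_{jk} = U/|r_j - r_k|
# (lattice spacing b = 1), and a uniform positive background K per site.
import numpy as np

rng = np.random.default_rng(0)
Lx, Ly, V, U, W = 6, 4, 1.0, 1.0, 3.0
sites = [(ix, iy) for ix in range(Lx) for iy in range(Ly)]
N = len(sites)
idx = {s: i for i, s in enumerate(sites)}

# one-body part H0
H0 = np.diag(rng.uniform(-W / 2, W / 2, N))
for (ix, iy), i in idx.items():
    for nb in ((ix + 1, iy), (ix, iy + 1)):
        if nb in idx:
            H0[i, idx[nb]] = H0[idx[nb], i] = -V

# Coulomb matrix U_{jk} (zero on the diagonal)
r = np.array(sites, float)
dist = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
Ujk = np.zeros_like(dist)
Ujk[dist > 0] = U / dist[dist > 0]

def hf_ground_energy(n_el, K):
    """Self-consistent HF ground-state energy for n_el electrons."""
    rho = np.zeros((N, N))
    for _ in range(200):
        # Hartree (direct) and Fock (exchange) mean fields, as in Eq. (4)
        h = H0 + np.diag(Ujk @ (np.diag(rho) - K)) - Ujk * rho.T
        e, psi = np.linalg.eigh(h)
        occ = psi[:, :n_el]
        rho_new = occ @ occ.T
        if np.max(np.abs(rho_new - rho)) < 1e-9:
            rho = rho_new
            break
        rho = 0.5 * rho + 0.5 * rho_new      # simple mixing for stability
    dens = np.diag(rho) - K
    E = np.trace(rho @ H0)
    E += 0.5 * (dens @ Ujk @ dens - np.sum(Ujk * rho * rho.T))  # direct - exchange
    return E

# addition energies mu(n) = E(n) - E(n-1) and spacings Delta_2 = mu(n+1) - mu(n)
E = {n: hf_ground_energy(n, K=n / N) for n in range(4, 9)}
mu = {n: E[n] - E[n - 1] for n in range(5, 9)}
spacings = [mu[n + 1] - mu[n] for n in range(5, 8)]
print("addition-energy spacings:", np.round(spacings, 3))
```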
In fact we were not able to obtain reliable results, and not even convergence of the method, for $`U/V`$ greater or close to $`2`$, depending on the strength of the disorder. We realized that one of the first indicator of the lose of accuracy is the appearance of too large fluctuations in the addition spectrum. Thus to estimate the numerical errors affecting the results of Fig. 3 we fitted the distributions with a Gaussian function and we considered the long-tail deviations from the Gaussian as numerical errors. Naturally this procedure, which we used only for value of the $`U/V`$ larger than $`0.5`$, where the distributions have already lost the Wigner-like asymmetry of the non-interacting case, can hide some characteristics of the phenomenon. In fact we realized that the presence of the tails depends on the kind of disordered we used and that, for example, it is more pronounced in the clean dot case (not shown here) or in the strong scatterers disorder. We think that this effect could be related to the presence of scarred eigenstates of the system. These states in fact, for their strongly inhomogeneous charge distribution, can produce large charging fluctuations . By considering the tails of the distribution as exclusively produced by the numerical approximation we obtained an overestimate of the error, but in all the cases shown in Fig. 3 we obtained error bars smaller or comparable with the size of the symbols we used, thus we decided not to show the error bars in the figures. Moreover this result confirms that at small values of the interaction the spacing distribution is already very close to a Gaussian. As expected the average spacing (Fig. 3a) does not depend on the value of $`g`$. With the background $`\mathrm{\Delta }_2`$ is linear with $`U`$, as expected for a constant density system , while in the absence of the background the average spacing is somewhat suppressed because of the accumulation of charge on the boundary. The standard deviation shows an enhancement as function of the interaction strength (Fig. 3b). The enhancement (compared to the theoretical RMT value of $`\sqrt{4/\pi 1}0.52\mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the mean level spacing) strongly depends on the background and on $`g`$. The previously discussed role of the positive background in reducing the standard deviation is clearly seen. It is also clear that the standard deviation is smaller for larger values of $`g`$. In Fig. 3c we see that the normalized deviation for the cases in which a positive background exists saturates at values of $`U1.5V`$. The normalized values saturate at values between $`10\%`$ (for $`g\mathrm{}`$) and $`20\%`$ (for $`g=1`$) of the average spacing, in agreement with previous results . Even for $`g\mathrm{}`$ the distribution seems to follow a Gaussian distribution, with an interaction independent width already at $`U=1.5V`$ corresponding to an average $`\overline{r}_s=0.7`$ (where $`\overline{r}_s=\sqrt{\pi N}(U/8V)_{n_1}^{n_2}n^{1/2}/(n_2n_1)=\sqrt{\pi N}(U/4V)(\sqrt{n_2}\sqrt{n_1})/(n_2n_1)`$, with $`n_1=20`$ and $`n_2=120`$). In the absence of background the normalized deviation for the same value of $`g`$ is larger. Moreover, there is a minimum of the normalized deviation at $`UV`$ after which this quantity starts growing. This is connected with the fact that at $`UV`$ the average fluctuation $`\mathrm{\Delta }_2`$ starts to deviate from a linear dependence on $`U`$. 
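The bookkeeping behind the moments shown in Fig. 3 can be summarized as follows. The helper below assumes the standard definition of the addition-energy spacing, Delta_2(n) = E(n+1) - 2 E(n) + E(n-1), and the hypothetical hf_ground_energy function from the previous sketch; both the function name and the usage are illustrative assumptions, not the authors' code.

```python
# Sketch (illustrative, not the authors' code) of how the quantities in Fig. 3
# could be accumulated: the spacings, their mean, standard deviation and
# normalized deviation, and the RMT reference value sqrt(4/pi - 1) ~ 0.52.
import numpy as np

def addition_spacings(ground_energy, n_min, n_max):
    """Delta_2(n) = E(n+1) - 2 E(n) + E(n-1) for n_min <= n <= n_max."""
    E = {n: ground_energy(n) for n in range(n_min - 1, n_max + 2)}
    return np.array([E[n + 1] - 2.0 * E[n] + E[n - 1]
                     for n in range(n_min, n_max + 1)])

def spacing_moments(spacings):
    """Return (<Delta_2>, delta Delta_2, normalized deviation)."""
    mean, std = np.mean(spacings), np.std(spacings)
    return mean, std, std / mean

print("RMT reference: sqrt(4/pi - 1) =", round(np.sqrt(4.0 / np.pi - 1.0), 3))
# typical use, accumulating over disorder realizations (each realization
# redraws the on-site energies before calling hf_ground_energy):
#   d2 = np.concatenate([addition_spacings(lambda n: hf_ground_energy(n, K=n/N), 25, 115)
#                        for _ in range(20)])
#   print(spacing_moments(d2))
```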
For the positive background case we are able to give an empirical scaling function for the standard deviation which describes rather well the influence of the dimensionless conductance $`g`$. From Fig. 3b it is clear that the standard deviation depends on $`g`$ and on the interaction strength $`U`$. In our model Hamiltonian we have five independent parameters: $`W`$, $`U`$, $`V`$, $`n`$ and $`N`$. Out of these parameters one may create a set of four independent dimensionless quantities $`g`$, $`U/V`$, $`r_s`$, and $`\mathrm{\Delta }/V`$, and the standard deviation can be recast into the dimensionless form $`\delta \mathrm{\Delta }_2^{}=\delta \mathrm{\Delta }_2/\sqrt{4/\pi 1}\mathrm{\Delta }`$. We observed that, by dividing the dimensionless deviation by $`1+U/V\sqrt{g}`$ all the curves of the same averaged electronic density seem to collapse on top of each other (see Fig. 4). This hints to a scaling of the dimensionless deviation $`\delta \mathrm{\Delta }_2^{}=(1+U/V\sqrt{g})F(U/V,r_s)`$, where $`F(U/V,r_s)`$ is some function of interaction strength and density. It is important to note that this scaling form does not agree with the expectations of perturbation theory. First, for $`g\mathrm{}`$ there should be no dependence on $`U`$ for weak interactions, which is clearly not the case. Also for a uniform distribution of charge one expects the corrections to the standard deviation to follow $`1/g`$ and not $`1/\sqrt{g}`$. This $`1/\sqrt{g}`$ dependence has been independently seen also in the work of Walker et. al.. Of course one may blame these discrepancies on the HF approximation, but we note that for the relative weak interaction strength discussed here one expects HF to work rather well. One may also note that we are not able to recast the interplay between interaction strength and density as function of one combined parameter $`r_s`$. This may be the result of the high ratio of single electron spacing to the band width in our model. In conclusions, the spacing distribution of spinless electrons in interacting quantum dots is Gaussian once $`r_s`$ is of order of one, independently of parameters such as dimensionless conductance and the positive background charge. Nevertheless, these parameters have a strong influence on the onset of the Gaussian distribution and its width. This might explain the considerable difference in the width of distribution seen between different experiments , although it is not clear whether the physical parameters of the different dots are indeed different. Useful discussions on the addition spectrum of quantum dots with O. Agam, B. L. Altshuler, A. Auerbach, Y. Gefen, C. M. Marcus, A. D. Mirlin, D. Orgad, O. Prus and U. Sivan are gratefully acknowledged. L.B. is grateful to G. Grosso and T. Wojta for useful suggestions about the Hartree-Fock method. We would like to thank the Israel Science Foundations Centers of Excellence Program for financial support.
# Patterned Geometries and Hydrodynamics at the Vortex Bose Glass Transition ## Abstract Patterned irradiation of cuprate superconductors with columnar defects allows a new generation of experiments which can probe the properties of vortex liquids by confining them to controlled geometries. Here we show that an analysis of such experiments that combines an inhomogeneous Bose glass scaling theory with the hydrodynamic description of viscous flow of vortex liquids can be used to infer the critical behavior near the Bose glass transition. The shear viscosity is predicted to diverge as $`|T-T_{BG}|^{-z}`$ at the Bose glass transition, with $`z\simeq 6`$ the dynamical critical exponent. In the mixed state of cuprate superconductors the magnetic field is concentrated in an array of flexible flux bundles that, much like ordinary matter, can form crystalline, liquid and glassy phases. The dynamics of the flux-line array determines the resistive properties of the material and has therefore been the focus of much attention. Novel types of glasses are also possible because of pinning in disordered samples . In particular, the introduction of columnar damage tracks by heavy-ion irradiation yields a low-temperature “Bose glass” phase, in which every vortex is trapped on a columnar defect and the pinning efficiency of vortex lines is strongly enhanced . At high temperatures the vortices delocalize in an entangled flux-line liquid. The high temperature liquid transforms into a Bose glass via a second order phase transition at $`T_{BG}`$, characterized by universal critical exponents . We show here that there are very strong divergences in the vortex shear viscosity and other transport coefficients as this transition is approached from the liquid, similar to behavior conjectured for glass transitions in ordinary forms of matter, and propose experiments which test our predictions. Vortex matter with columnar defects thus provides a concrete example of a glassy phase accessed via a genuine second order phase transition and characterized by universal critical exponents. The Bose glass transition has been studied theoretically by viewing the vortex line trajectories as the world lines of two-dimensional quantum mechanical particles . The thickness of the superconducting sample corresponds to the inverse temperature of the fictitious quantum particles. In thick samples the physics of vortex lines pinned by columnar defects becomes equivalent to the low temperature properties of two-dimensional bosons with point disorder. The low temperature phase is a Bose glass where the vortices behave like localized bosons. It has vanishing linear resistivity and an infinite tilt modulus . The entangled flux liquid phase is resistive and corresponds to a boson superfluid . Although an exact theory of the continuous transition at $`T_{BG}(B)`$ from the Bose glass to the entangled flux liquid (or “superfluid”) is not available, most physical properties can be described via a scaling theory in terms of just two undetermined critical exponents . In the low temperature Bose glass each flux line is localized in the vicinity of one or more columnar pins. Its excursion in the direction perpendicular to the applied field is characterized by a correlation length that diverges at $`T_{BG}`$, $`l_{\perp }(T)\sim |T-T_{BG}|^{-\nu _{\perp }}`$. There is also a diverging correlation length along the applied field direction (here the $`z`$ direction), $`l_{\parallel }(T)\sim |T-T_{BG}|^{-\nu _{\parallel }}`$, where $`\nu _{\parallel }=2\nu _{\perp }`$ .
The time scale $`\tau `$ for relaxation of a fluctuation of size $`l_{}`$ is assumed to diverge with a critical exponent $`z`$, $`\tau l_{}^z|TT_{BG}|^{z\nu _{}}`$. The universal critical exponents as determined by the most recent simulations are $`\nu _{}1`$ and $`z4.6\pm 2`$ . Scaling can then be used to relate physical quantities to these diverging length and time scales. In particular, the resistivity $`\rho (T)`$ for currents applied in the $`ab`$ plane is predicted to vanish as $`TT_{BG}`$ from above as $`\rho |TT_{BG}|^{\nu _{}(z2)}`$ . Some predictions of the scaling theory have been tested experimentally, but there are as yet no direct measurements of the transport coefficients usually associated with glass transitions in conventional forms of matter, such as the shear viscosity. As we shall see, the behavior of the shear viscosity is determined by the dynamical critical exponent $`z`$ that controls the divergence of the relaxation time in the Bose glass phase. A measurement of the shear viscosity would provide a direct probe of the diverging relaxation time associated with glassy behavior . Patterned irradiation of cuprate superconductors with columnar defects allows for a new generation of experiments that may in fact provide a direct probe of viscous critical behavior near the Bose glass transition . By starting with a clean sample, at temperatures such that point disorder is negligible, it is possible to selectively irradiate regions of controlled geometry. An example is shown in Fig. 1. The side regions have been heavily irradiated, and are characterized by a high matching field $`B_\varphi ^{(2)}`$ and transition curve $`T_{BG}^{(2)}`$, while the channel is lightly irradiated with a lower matching field $`B_\varphi ^{(1)}<B_\varphi ^{(2)}`$ and transition curve $`T_{BG}^{(1)}`$. When $`T_{BG}^{(1)}<T_{BG}<T_{BG}^{(2)}`$, the flux array in the channel is in the liquid state, while the contacts are in the Bose glass phase. Flow in the resistive flux liquid region is impeded by the “Bose-glass contacts” at the boundaries, as the many trapped vortices in these regions provide an essentially impenetrable barrier for the flowing vortices. As discussed in Ref. , the pinning at the boundaries propagates into the liquid channels by a viscous length $`\delta `$ that depends on the flux liquid viscosity. As the temperature is lowered at constant field, so that the Bose glass transition $`T_{BG}^{(1)}`$ of the liquid region is approached from above (Fig. 2) the growing Bose glass correlations increase $`\delta `$ and strongly suppress the flow in the channel and the associated flux flow voltage drop across the channel. In this paper we analyze experiments with flux flow in such confined geometries by combining the predictions of the Bose glass scaling theory – generalized to the spatially inhomogeneous case – with the hydrodynamics of viscous flow of vortex liquids . Our analysis shows that the viscous length $`\delta `$ controlling boundary pinning is just the Bose-glass localization length, $`l_{}`$, and therefore provides a prescription for measuring the Bose glass scaling near the transition. Both flow in the channel geometry sketched in Fig. 1 and in the Corbino disk geometry (Fig. 3) used recently by López et al. is discussed. Such experiments can be used to extract the critical behavior of various transport coefficients and map out the entire critical region. In particular, the flux liquid shear viscosity is predicted to diverge as $`|TT_{BG}|^z`$ at the Bose glass transition. 
Because $`z4.6\pm 2.0`$, this powerful divergence is reminiscent of the Vogel-Fulcher behavior $`\eta \mathrm{exp}\left[c/(TT_g)\right]`$ conjectured for glass transitions in conventional forms of matter. The Bose glass scaling theory summarized earlier is easily generalized to the case of spatially inhomogeneous flow in constrained geometries. Considering for simplicity the channel geometry, a generalized scaling ansatz for the local electric field from flux motion at position $`x`$ in a channel of thickness $`L`$ takes the form $$E(T,J,x,L)=b^{(1+z)}E(b^{1/\nu _{}}t,\frac{b^\nu _{}b^\nu _{}J\varphi _0}{ck_BT},\frac{x}{b},\frac{L}{b}),$$ (1) where $`b>1`$ is the length scaling parameter and $`t=|TT_{BG}|/T_{BG}`$ the reduced temperature. This ansatz follows from the usual assumption that the continuous transition is described by a single diverging length scale and the homogeneity condition on the relevant physical quantities at the transition (see, e.g., Refs. ). The response in the Bose glass is generally nonlinear in the applied current $`J`$. By choosing $`b=t^\nu _{}l_{}(T)`$ we obtain $$E(T,J,x,L)=l_{}^{(1+z)}E(1,\frac{l_{}l_{}J\varphi _0}{ck_BT},\frac{x}{l_{}},\frac{L}{l_{}}).$$ (2) In the entangled flux liquid the response is linear at small current. Upon expanding the right hand side of Eq. (2) we obtain for $`J0`$ $$E(J0,x,L)\rho _0\left(\frac{l_{}}{a_0}\right)^{2z}J(x/l_{},L/l_{}),$$ (3) where $`a_0`$ is the vortex spacing and $`\rho _0=\left(n_0\varphi _0/c\right)^2(1/\gamma _0)`$ is the Bardeen-Stephen resistivity of noninteracting flux lines, with $`\gamma _0`$ a bare friction. A scaling form for the resistivity $`\rho (T,L)=\mathrm{\Delta }V/(LJ)`$, with $`\mathrm{\Delta }V`$ the net voltage drop across the channel, is easily obtained by integrating Eq. (3), with the result, $$\rho (T,L)=\rho _f(T)f(L/l_{})$$ (4) with $`f(x)=\frac{1}{x}_0^x𝑑u(u,x)`$ a scaling function and $`\rho _f(T)`$ the bulk resistivity, $$\rho _f(T)=\rho _0\left(\frac{l_{}}{a_0}\right)^{2z}\left(\frac{n_0\varphi _0}{c}\right)^2\frac{1}{\gamma }.$$ (5) In the second line of Eq. (5) the dependence on the Bose glass correlation length $`l_{}`$ has been incorporated in a renormalized friction coefficient $`\gamma =\gamma _0\left(\frac{l_{}}{a_0}\right)^{z2}`$ that diverges at the transition as $`\gamma |TT_{BG}|^{\nu _{}(z2)}`$ . For $`Ll_{}`$, the channel geometry has no effect and one must recover the bulk result, leading to $`f(x1)1`$. The scaling function $``$ can be determined by assuming that the long wavelength electric field of Eq. (3) is described by hydrodynamic equations . For simple geometries where the current is applied in the $`ab`$ plane and the flow is spatially homogeneous in the $`z`$ direction, these reduce to a single equation for the coarse-grained flux liquid flow velocity $`𝐯(𝐫)`$, $$\gamma 𝐯+\eta _{}^2𝐯+𝐟_L=0,$$ (6) The second term in Eq. (6) is the flux liquid viscosity $`\eta (T,H)`$ and represents the viscous drag arising from intervortex interactions and entanglement. Finally, $`𝐟_L=\frac{1}{c}n_0\varphi _0\widehat{𝐳}\times 𝐉`$ is the Lorentz force density driving the flux motion. Intervortex interaction at the Bose-glass boundaries translates into a no-slip boundary condition for the flux liquid flow velocity. By preventing the free flow of flux liquid, the Bose glass boundaries can significantly decrease the macroscopic flux-flow resistivity of the superconductor. Once the velocity field is obtained by solving Eq. 
(6) with suitable boundary conditions, the electric field profile in the superconductor is found immediately from $`𝐄(𝐫)=\frac{n_0\varphi _0}{c}\widehat{𝐳}\times 𝐯(𝐫)`$. It is instructive to rewrite Eq. (6) as an equation for the local electric field, $$-\delta ^2\nabla _{\perp }^2𝐄+𝐄=\rho _f𝐉,$$ (7) where $`\delta =\sqrt{\eta /\gamma }`$ is the viscous length. When the first term on the left-hand side is absent, i.e., the flux liquid viscosity is small, this equation of “viscous electricity” reduces to Ohm’s law with flux flow resistivity given by the bulk value, $`\rho _f(T)`$. Interactions, however, make the viscous drag important and as a result the electrodynamics of flux-line liquids is highly nonlocal near the Bose glass transition. The solution of the hydrodynamic equation for the simple channel geometry sketched in Fig. 1, with a homogeneous current $`𝐉=\widehat{𝐱}J`$ applied across the channel, is given by $$E(x,L)=\rho _fJ\left[1-\frac{\mathrm{cosh}(x/\delta )}{\mathrm{cosh}(L/2\delta )}\right],$$ (8) and is shown in Fig. 1. Upon comparing Eq. (8) to Eq. (3), we see that the quantity in square brackets in Eq. (8) is the scaling function and find that the viscous length $`\delta `$ is in fact the Bose glass length $`l_{\perp }`$. As the friction diverges at $`T_{BG}`$ according to $`\gamma \sim |T-T_{BG}|^{-\nu _{\perp }(z-2)}`$, this identification immediately implies that the flux liquid shear viscosity also diverges at the Bose glass transition with $$\eta =l_{\perp }^2\gamma \sim |T-T_{BG}|^{-\nu _{\perp }z}.$$ (9) The scaling form for the resistivity is obtained by integrating Eq. (8), with the result $$\rho (T,L)=\rho _f(T)\left[1-\frac{2l_{\perp }}{L}\mathrm{tanh}\left(\frac{L}{2l_{\perp }}\right)\right].$$ (10) If $`l_{\perp }\ll L`$, we recover the bulk result of Eq. (5), $`\rho (T,L)=\rho _f(T)\sim |T-T_{BG}|^{\nu _{\perp }(z-2)}`$. Near the transition, where $`l_{\perp }\gg L`$, the resistivity depends on the channel width and is controlled by the shear viscosity, with $$\rho (T,L)\approx \frac{\rho _fL^2}{12l_{\perp }^2}=\left(\frac{n_0\varphi _0}{c}\right)^2\frac{L^2}{12\eta (T)}\sim L^2|T-T_{BG}|^{\nu _{\perp }z}.$$ (11) This strong divergence of the viscosity is precisely the kind of behavior expected at a liquid-glass transition. In this sense the Bose glass transition is an example of a glass transition that is well understood theoretically and where precise predictions are available. Another important patterned geometry is the Corbino disk, recently used by López et al. for defect-free materials to reduce boundary effects in the flux flow measurements. Here we propose fabrication of a Corbino disk with Bose glass inner and outer contacts sketched in Fig. 3. A current $`I`$ injected at the outer boundary and extracted at the inner boundary yields a radial current density $`𝐉(r)=\frac{I}{2\pi (R_2-R_1)}\frac{\widehat{𝐫}}{r}`$ that drives vortex motion in the azimuthal direction. The electric field induced by flux motion is radial, $`𝐄(𝐫)=E(r)\widehat{𝐫}`$, and its magnitude is obtained by solving Eq. (7) in a cylindrical geometry, with the result, $$E(r)=\frac{\rho _fI}{2\pi (R_2-R_1)l_{\perp }}\left[\frac{l_{\perp }}{r}+c_1I_1(\frac{r}{l_{\perp }})+c_2K_1(\frac{r}{l_{\perp }})\right],$$ (12) where $`c_1={\displaystyle \frac{K_1(\rho _2)/\rho _1-K_1(\rho _1)/\rho _2}{K_1(\rho _1)I_1(\rho _2)-K_1(\rho _2)I_1(\rho _1)}}`$ (13) $`c_2={\displaystyle \frac{I_1(\rho _1)/\rho _2-I_1(\rho _2)/\rho _1}{K_1(\rho _1)I_1(\rho _2)-K_1(\rho _2)I_1(\rho _1)}},`$ (14) with $`\rho _{1,2}=R_{1,2}/l_{\perp }`$ and $`I_1(x)`$ and $`K_1(x)`$ Bessel functions. The electric field profiles are shown in Fig. 4.
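The closed-form profiles above are easy to evaluate numerically. The sketch below (not from the paper) computes the channel resistivity of Eq. (10) together with its small-width limit, Eq. (11), and checks that the Corbino profile of Eqs. (12)-(14) vanishes at the Bose-glass contacts; the radii and the ratios L/δ used are arbitrary illustrative values, and lengths are measured in units of the viscous length.

```python
# Numerical sketch (illustrative, not from the paper) of the channel results,
# Eqs. (8), (10), (11), and of the no-slip check for the Corbino profile,
# Eqs. (12)-(14).  Lengths are in units of delta = l_perp; the channel
# resistivity is in units of rho_f and the Corbino field in units of its
# overall prefactor.
import numpy as np
from scipy.special import iv, kv   # modified Bessel functions I_1, K_1

def channel_resistivity(L, delta=1.0):
    """rho(T, L)/rho_f for a channel of width L, Eq. (10)."""
    return 1.0 - (2.0 * delta / L) * np.tanh(L / (2.0 * delta))

for L in (0.3, 1.0, 3.0, 30.0):
    print(f"L/delta = {L:5.1f}:  rho/rho_f = {channel_resistivity(L):.4f}"
          f"   (L << delta limit L^2/12 = {L**2 / 12.0:.4f})")   # Eq. (11)

def corbino_field(r, R1, R2, l=1.0):
    """Bracketed profile of Eq. (12) with Bose-glass inner/outer contacts."""
    rho1, rho2 = R1 / l, R2 / l
    D = kv(1, rho1) * iv(1, rho2) - kv(1, rho2) * iv(1, rho1)
    c1 = (kv(1, rho2) / rho1 - kv(1, rho1) / rho2) / D
    c2 = (iv(1, rho1) / rho2 - iv(1, rho2) / rho1) / D
    return l / r + c1 * iv(1, r / l) + c2 * kv(1, r / l)

R1, R2 = 2.0, 6.0
print(f"no-slip check: E(R1) = {corbino_field(R1, R1, R2):.2e}, "
      f"E(R2) = {corbino_field(R2, R1, R2):.2e}")
```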
The resistivity is defined in terms of the net voltage drop $`\mathrm{\Delta }V_{12}`$ between the inner and outer radii as $`\rho (T,R_1,R_2)=\mathrm{\Delta }V_{12}/[I/2\pi (R_2R_1)]`$. Near the Bose glass transition, where $`l_{}R_2,R_1`$, we find $$\rho \frac{(n_0\varphi _0/c)^2}{4\eta (T)}\left\{\frac{R_2^2R_1^2}{2}\frac{4R_1^2R_2^2[\mathrm{ln}(R_2/R_1)]^2}{R_2^2R_1^2}\right\}.$$ (15) As in the channel geometry, the resistivity at the transition is completely determined by the diverging viscosity and the geometrical parameters of the channel. Experiments with patterned geometries near the Bose glass transformation provide an exciting opportunity to probe viscous behavior near a second order glass transition. A similar scaling analysis leads to predictions for the additional viscosities which characterize the dynamics of vortex matter . For example, the viscous generalization of Ohm’s law for transport parallel to the applied field reads $$\delta _{}^2_z^2E_{}\delta _{}^2_{}^2E_{}+E_{}=\rho _{}J_{},$$ (16) with $`\rho _{}l_{}^z`$, $`\delta _{}l_{}`$, and $`\delta _{}l_{}`$. This work was supported by the National Science Foundation at Syracuse through Grants No. DMR97-30678 and DMR98-05818 and at Harvard through Grant No. DMR97-14725, and by the Harvard Materials Research Science and Engineering Center through Grant No. DMR98-09363.
# Goals of the ARISE Space VLBI Mission ## 1 The ARISE Mission Concept Supermassive black holes (SMBHs) are thought to be responsible for the astounding amount of energy released from the centers of many galaxies. The technique of Space VLBI (Ulvestad,, 1999) is the only astronomical technique foreseen for the next 20 years that will have the capability of imaging the region dominated by the gravitational potential of the black hole, within light days to light months of the active galactic nucleus. ARISE (Advanced Radio Interferometry between Space and Earth) is a mission currently under active study in the U.S. that will orbit a 25-m telescope to work together with ground telescopes worldwide in order to investigate the spectacular astrophysics in the vicinity of SMBHs. For ground-based VLBI operating at a frequency of 86 GHz on the longest baselines possible on Earth ($``$10,000 km), the best angular resolution is about 75 $`\mu `$as, a factor of $`500`$ better than that achievable with the Hubble Space Telescope. ARISE, in an elliptical orbit with a maximum altitude of 40,000-100,000 km, will work together with sensitive ground radio telescopes such as those in the Very Long Baseline Array (VLBA) and in the European VLBI Network (EVN), and will produce radio images of active galactic nuclei (AGNs) with angular resolution of 7–15 $`\mu `$as at the highest observing frequency of 86 GHz. Table 1 lists the basic mission parameters. Table 2 lists the observation characteristics as a function of frequency. Detection thresholds for a baseline to the Effelsberg 100-m telescope (EB) are given, assuming no phase referencing and the maximum data rate. Because of angular momentum constraints, it is extremely unlikely that the space radio telescope can switch sources rapidly enough to do phase referencing, but this technique of calibrating the atmosphere may be enabled just by having the ground telescopes switch sources. At 43 and 86 GHz, millimeter-wave telescopes such as SEST and the MMA/LSA will be important anchors that will significantly improve the fringe-detection threshold. ## 2 ARISE Science Goals ARISE is a versatile, high-sensitivity instrument that will employ the technique of Space VLBI to image the environment of a variety of compact objects such as supermassive black holes (SMBHs). It will resolve details 5–10 times smaller than can be imaged using ground-based VLBI, and several orders of magnitude smaller than instruments observing in other wavebands. Table 3 summarizes the primary science goals of ARISE; a number of additional goals, such as imaging of young supernovae, are omitted due to lack of space. The most important goals of ARISE focus on studies of SMBHs and their environments in active galactic nuclei, the most energetic power plants in the Universe. The popular treatment by Begelman & Rees, (1996) discusses observed properties of AGNs over a variety of wavebands that are attributable to SMBHs. The current paradigm for an AGN includes, at its center, a SMBH that provides the power for the AGN. Surrounding the black hole is an accretion disk, that is roughly co-planar (except for disk warps) with a much more extensive “torus” of material that may extend for hundreds of parsecs. As material in the disk drifts toward the central black hole, energy is extracted from that material by the spinning black hole. 
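As a rough, order-of-magnitude check of the angular resolutions quoted above for ground-only and ARISE baselines (an illustrative estimate, not part of the mission specification), the diffraction-limited fringe spacing $`\lambda /B`$ can be evaluated directly; the baseline lengths in the sketch are simply the values mentioned in the text.

```python
# Sketch: diffraction-limited fringe spacing theta ~ lambda/B in micro-arcseconds.
# Baseline lengths are assumptions taken from the text (10,000 km on the ground,
# 40,000-100,000 km for ARISE); this is only an order-of-magnitude check.
C = 2.998e8                                      # speed of light, m/s
RAD_TO_MUAS = 180 / 3.141592653589793 * 3600e6   # radians -> micro-arcseconds

def resolution_muas(freq_hz, baseline_m):
    return (C / freq_hz) / baseline_m * RAD_TO_MUAS

for name, B in [("ground, 10,000 km", 1.0e7),
                ("ARISE, 40,000 km", 4.0e7),
                ("ARISE, 100,000 km", 1.0e8)]:
    print(f"86 GHz, {name}: {resolution_muas(86e9, B):.0f} micro-arcsec")
```

With these inputs the ground-only value comes out near 72 microarcseconds and the longest space baseline near 7 microarcseconds, consistent with the numbers quoted above.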
A magnetized radio jet of highly relativistic particles is accelerated near the SMBH, and flows outward near the speed of light along the symmetry axis of the accretion disk. Flickering gamma-ray emission reveals the creation of large quantities of high-energy particles in the inner light months of the radio jet. With ARISE, two critical classes of observations can be made. First, imaging of the inner light months of active galaxies in their continuum radio emission reveals the birthplace of the relativistic jets, the generation of shocks near that birthplace, and the key physical parameters in the regions of gamma-ray production. Second, imaging of molecular line (H<sub>2</sub>O maser) emission from the inner light months of the accretion disks in AGN directly samples the dynamics of material in the vicinity of the SMBH. Such studies lead to direct measurement of SMBH masses and of the physical characteristics of the accretion process (Moran et al.,, 1995). VLBI in general, and ARISE in particular, provide important information, and actual images, that can be supplied by no other technique in modern astrophysics. The ARISE resolution at 86 GHz will correspond to $`0.1`$ pc for a blazar at $`z=0.5`$, enabling resolution on a scale similar to that of the gamma-ray emission. In an H<sub>2</sub>O megamaser galaxy at 50 Mpc distance, the 22-GHz resolution will be $`0.05`$ pc, enabling imaging of the vertical and velocity structures in the disk. Observations on such important physical scales in these objects are not possible with VLBI baselines whose length is limited by the size of the Earth. Beyond the studies of SMBHs and their environment, ARISE can use AGNs for a variety of important cosmological studies. In particular, ARISE will permit investigation of radio sources with an otherwise unreachable combination of sensitivity and angular resolution, which is crucial for conclusive cosmological tests measuring the dependence of angular size and separation on redshift. Of special interest are the novel investigations that can be made using gravitational lenses (Kochanek & Hewitt,, 1996). ARISE imaging of lensed AGNs will improve the modeling of the mass distribution, currently the largest uncertainty in the determination of the Hubble Constant by this direct method. A gravitational lens also acts as a “cosmic telescope” in magnifying the background source by a factor of 10 or more, effectively increasing the angular resolution of ARISE to near 1 $`\mu `$as, which will provide resolution of light days even for the most distant AGNs. Finally, the sensitivity of ARISE to structures on the scale of tens to hundreds of microarcseconds will enable detection of compact lenses having masses of $`10^3`$$`10^6M_{}`$; such objects are among the leading candidates for the “missing” baryonic dark matter. ## 3 European Contributions to ARISE ARISE is currently part of the long-term roadmap in NASA’s Structure and Evolution of the Universe theme; if ARISE is funded in its current incarnation, it will likely be as a U.S.-led mission. However, ARISE also has many of the elements of a “descendant” of two other Space VLBI concepts, QUASAT and the International VLBI Satellite, which were proposed to the European Space Agency (ESA), but ultimately were not funded. Thus, concepts developed in Europe already have played a key role in ARISE, and several members of the ARISE Science Advisory Group are based at European institutions. 
The newly formed Joint Institute for VLBI in Europe (JIVE) provides an excellent vehicle for the participation of European ground facilities in ARISE; the European development of millimeter-wave telescopes will be especially useful at the higher frequencies. The capabilities of ESA, including equipment that will be flown aboard Planck/FIRST, also could provide important contributions to the space element of ARISE. Table 4 lists some areas in which European participation could contribute significantly to ARISE. ## 4 ARISE Timeliness The VLBI Space Observatory Programme (VSOP), is the first dedicated Space VLBI mission, in operation since early 1997 (Hirabayashi et al.,, 1998). VSOP, under the leadership of the Institute for Space and Astronautical Science in Japan, has demonstrated the capability for routine Space VLBI imaging by observing strong sources at 1.6 and 5 GHz. A much more sensitive mission using this technique will be timely, because it provides an imaging capability in the compact regions that will be investigated by several upcoming high-energy satellites such as GLAST and ASTRO-E. ARISE will take advantage of space technologies currently under active development. The most crucial technology is that connected with the deployable 25-m reflector that must work at frequencies as high as 43 and 86 GHz. The current baseline selection for ARISE is an inflatable antenna, under development for several other applications in communications and remote sensing. The other “new” technologies are those aimed at achieving a very high sensitivity, and should be well in hand by the potential ARISE launch date of 2008. These include low-noise amplifiers (developed for MAP and Planck/FIRST) and cryogenic cooling to an ambient temperature of 20 K (tested aboard the Space Shuttle and required for Planck/FIRST). Required ground systems include a number of sensitive ground telescopes. The EVN and the VLBA already provide a suite of ground telescopes as well as the entire operational infrastructure necessary for a VLBI mission. Completion of the Green Bank Telescope, the MMA/LSA, and the VLA upgrade will provide major new capabilities at the highest ARISE observing frequencies. Finally multi-gigabit per second data-recording and correlation capability will be required, and is under active development by several groups, notably in the Mark 4 and S3/S4 systems. Within the U.S., ARISE provides a unique opportunity for cooperation between space assets funded by NASA and an extensive ground infrastructure already developed under funding by the National Science Foundation. Thus, ARISE is a timely mission because it can take advantage of the large investments already made in ground facilities in both the U.S. and Europe.
# Inversion of Randomly Corrugated Surfaces Structure from Atom Scattering Data ## I Introduction Structurally disordered surfaces have been a subject of great interest for some time now. Of special interest are epitaxially grown films, liquid surfaces, and amorphous surfaces. In epitaxial growth for example, metal or semiconductor atoms are adsorbed on a surface under thermal conditions, to form two- and three-dimensional structures on top of it. The physical and chemical properties are determined by the final form of these structures. These may be of dramatic importance, e.g, in the production of electronic devices. One of the most exciting aspects of epitaxial growth kinetics, is that it prepares disordered structures in the intermediate stages. The disorder manifests itself in the formation of various types of clusters or diffusion limited aggregates on top of the surface. These structures may be monolayers (usually at high temperatures, when the diffusivity is large, or at coverages significantly below a monolayer), in which case the disorder is two-dimensional, or they may be composed of several layers, giving rise to disorder in three dimensions. Epitaxially grown structures of this type offer an exceptional opportunity for both experimental and theoretical study of disorder. No satisfactory and comprehensive theory of the epitaxial growth process is as of yet available, much due to the absence of reliable interaction potentials for the system. The situation with respect to liquid and amorphous surfaces is similar: very little is known at this point about their structure. Progress at this stage thus hinges critically on data available from experiments. An important experimental technique is thermal atom scattering, and in particular He scattering . The main advantage offered by He scattering is complete surface sensitivity, as He does not penetrate into the bulk, unlike other scattering techniques such as neutron or X-ray scattering, or low energy electron diffraction (LEED). Another important advantage is that He scattering is highly non-intrusive, due to the inertness and low mass of the He atoms. The latter also means that He scattering is really a diffraction experiment at the typical meV energy scale at which most experiments are performed, with sensitivity to atomic-scale features. The interpretation of He scattering experiments is, however, rather involved due to the complicated interaction between the He beam and the surface. As in all other scattering problems, this interpretation issue is in fact one of inversion of the He/surface potential. The inversion problem, however, is intrinsically ill-posed, since one can only measure intensities, not phases. There is a certain redundancy in the intensity information which can be exploited to obtain relative phase shifts, but never the absolute phases. Thus the inversion can never be fully performed, although partial information can be obtained, or useful approximations can be made concerning the shape of the potential, which yield an analytically closed-form solution . In this work I will concentrate on the problem of trying to relate surface structural features to the scattering intensities, rather than the potential in general. This involves a non-trivial step of connecting the potential with such features. 
One possibility is the suggestion by Norskov and coworkers on the basis of effective-medium theory, in which the leading repulsive term in the He scattering potential from any electronic system is taken to be proportional to the local unperturbed electron density $`n_0(𝐫)`$ of the host at the He position $`𝐫`$: $$V(𝐫)=\alpha n_0(𝐫).$$ (1) In this way the potential is rather simply related to structural features such as the local electronic corrugation. However, while useful for describing the interaction close to the surface, where repulsion dominates, this formula is inapplicable at large distances where the He/surface interaction is dominated by the long-range attractive forces. In addition, it still requires knowledge of the local electron density, often a highly nontrivial task, especially in presence of defects or impurities. Thus in practice one often resorts to the use of specific functional forms for the potential, such as the Morse or Lennard-Jones potentials, and fits the parameters to the experimental data. Unfortunately, the connection to surface geometrical features is then less transparent. So far almost all of the work done on inverting He/surface potentials has been for He scattering from ordered, crystalline surfaces. Again, a full inversion is impossible, but useful results have been obtained by assuming specific forms for the interaction potential. These include semiconductor surfaces: GaAs(110), Si, InSb(110) , and some transition metal surfaces (Ni, Cu, Ag, Au) . More recent efforts have concentrated on potentials for scattering from various adsorbed monolayers on metals, e.g., the c(2$``$2) phase of oxygen on Ni(001) , hydrogen-plated Pt , and Xe/Cu(110) . We recently fitted cross-section data to obtain a potential for a disordered Ag/Pt(111) system . A very detailed recent review of potentials of physical adsorption is the work of Vidali et al. , which tabulates parameters of interest as deduced from analysis of experimental data and calculations of over 250 gas-surface systems, including He. Formal inversion methods have also occupied the attention of various researchers. The first such work was presented by Gerber and Yinnon , who showed that atom-surface interaction potentials can be recovered from the diffraction peak intensities measured in beam scattering experiments by a direct, simple inversion method, using the Sudden approximation (SA). The SA is a highly successful and useful theoretical method in the He scattering field, and has been reviewed by Gerber . The method of Ref. was applied to simulated Ne scattering from W(110). This was followed by the first inversion of real atom/surface scattering data: the He/MgO(100) system . Schlup showed how the surface profile function can be inverted in the Eikonal approximation, and applied his method to simulated data. In Ref. we studied a related problem: the inversion of an ad-atom profile function in the SA. Rabitz and coworkers developed a method based on functional sensitivity analysis and Tikhonov regularization , and applied this to the inversion of the He potential for scattering from a Xe monolayer on the (0001) face of graphite , which was also attempted earlier by employing close-coupling calculations . Finally, we recently inverted the structure of a low temperature disordered overlayer of Ag on Pt(111) . It should be noted that the situation with regards to potentials for He scattering from disordered surfaces is very much inferior to the case of ordered surfaces described above. 
Essentially no such reliable potentials exist, and the subject is at this point at a most preliminary stage. The common approach it to assume that the potential can be represented as a surface term plus a sum of pairwise additive terms representing the interaction of the He with each of the surface adatoms or vacancies. In this paper I will take a different approach, and will show how one can derive statistical information about randomly corrugated surfaces from He scattering measurements. This information comes in the form of correlation functions, not potential parameters. The resulting expressions relate useful statistical parameters characterizing the surface disorder to experimental observables which are not hard to obtain in practice, such as incidence energy dependence of the specular intensity. Unfortunately, at the time of writing experiments are unavailable for comparison with the results obtained here. The developments will therefore be primarily methodological, in anticipation of experimental data. It is hoped that the results obtained here will motivate He scattering experiments on disordered solid and liquid surfaces. As demonstrated by this work and others before it, He scattering can provide a wealth of information on disordered surface structure and dynamics. The structure of the paper is as follows. Sec. II provides a brief introduction into the SA. Sec. III is the heart of the paper and derives the inversion expressions. Concluding remarks are brought in Sec.IV. ## II Brief Review of the Sudden Approximation Consider a He atom with mass $`\mu `$ incident upon a surface with wavevector $`𝐤=(𝐊,k_z)`$. $`\mathrm{}𝐊=\mathrm{}(k_x,k_y)`$ and $`\mathrm{}𝐊^{}`$ are respectively the intial and final momentum components parallel to the surface, and $`\mathrm{}k_z`$ is the incident momentum normal to the surface. The position of the He atom is $`𝐫=(𝐑,z)`$, where $`𝐑=(x,y)`$ is the lateral position. The SA is valid when the collisional momentum transfer $`𝐪=𝐊^{}𝐊`$ in the direction parallel to the surface is much smaller than the momentum transfer normal to the surface: $`2k_z|𝐪|`$. This condition is satisfied at relatively high incidence angle and energy $`E=(\mathrm{}𝐤)^2/(2m)`$, and moderate surface corrugations. When it holds, one can approximately consider the scattering along $`z`$ as occurring at fixed $`𝐑`$. Then if $`\psi `$ is the He wavefunction, it satisfies a Schrödinger equation where the dependence on $`𝐑`$ is adiabatic: $$\left[\frac{\mathrm{}^2}{2\mu }\frac{d^2}{dz^2}+V_𝐑(z)\right]\psi _𝐑(z)=\epsilon \psi _𝐑(z).$$ (2) Here $`V_𝐑(z)`$ is the He-surface interaction potential and no inelastic channels are included, so that the total energy $`\epsilon `$ is conserved . This means that each surface point $`𝐑`$ gives rise to an elastic real phase shift $`\eta (𝐑)`$, which can be evaluated in the WKB approximation from Eq.(2) as: $`\eta (𝐑)={\displaystyle _{\xi (𝐑)}^{\mathrm{}}}𝑑z\left[\left(k_{z}^{}{}_{}{}^{2}{\displaystyle \frac{2m}{\mathrm{}^2}}V_𝐑(z)\right)^{1/2}k_z\right]k_z\xi (𝐑),`$ (3) where $`\xi (𝐑)`$ is the classical turning point pertaining to the integrand in Eq.(3). The phase shift in turn yields the S-matrix as: $`𝒮(𝐑)=\mathrm{exp}[2i\eta (𝐑)]`$. The $`𝐑`$ coordinate is conserved in this picture so the S-matrix is diagonal in the coordinate representation: $`𝐑^{}|𝒮|𝐑=e^{2i\eta (𝐑)}\delta (𝐑^{}𝐑).`$ Experimentally one measures probabilities $`|𝒮(𝐊^{}𝐊)|^2`$ for $`𝐊𝐊^{}`$ transitions. 
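As a simple numerical illustration of Eq. (3) — a sketch added here for concreteness, not a calculation from the studies cited above — the WKB phase shift can be evaluated at a single surface point for a toy potential consisting of a hard turning point followed by an attractive square well; the well depth and width, the turning point, and the units ($`2m/\mathrm{}^2=1`$) are arbitrary assumptions.

```python
# Sketch: WKB phase shift of Eq. (3) for a toy potential at a single surface
# point R. Units are chosen so that 2m/hbar^2 = 1; the turning point xi,
# well width dxi and well depth eps are arbitrary assumed values.
import numpy as np
from scipy.integrate import quad

def U(z, xi=0.0, dxi=1.5, eps=0.3):
    """(2m/hbar^2) V(z): attractive square well of depth eps just outside
    the hard turning point at z = xi, zero farther out."""
    return -eps if xi <= z < xi + dxi else 0.0

def eta_wkb(kz, xi=0.0, dxi=1.5, eps=0.3, zmax=50.0):
    integrand = lambda z: np.sqrt(kz**2 - U(z, xi, dxi, eps)) - kz
    val, _ = quad(integrand, xi, zmax, points=[xi + dxi])
    return val - kz * xi

for kz in (2.0, 4.0, 8.0):
    # analytic value for a square well of depth eps=0.3 and width dxi=1.5
    analytic = (np.sqrt(kz**2 + 0.3) - kz) * 1.5
    print(f"k_z={kz}: eta_numeric={eta_wkb(kz):.4f}  eta_analytic={analytic:.4f}")
```

The numerical quadrature reproduces the elementary square-well result, and the same routine can be applied to any smooth model potential at each surface point.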
To obtain these $`𝐑|𝐊=\mathrm{exp}(i𝐊𝐑)/\sqrt{A}`$ (where $`A`$ is the area of the surface) can be used, to find: $`𝐊^{}|𝒮|𝐊={\displaystyle 𝑑𝐑^{}𝑑𝐑𝐊^{}|𝐑^{}𝐑^{}|𝒮|𝐑𝐑|𝐊}={\displaystyle \frac{1}{A}}{\displaystyle 𝑑𝐑e^{i𝐪𝐑}e^{2i\eta (𝐑)}}.`$ (4) This is the well-known expression for the SA scattering amplitude . The SA has been tested extensively by comparison to exact coupled-channel calculations on Ne/W(110) and He/LiF(001) , as well as by comparison to exact time-dependent propagation methods in the case of scattering from defects . It is particularly noteworthy that the SA yielding an inverted potential of remarkable accuracy for simulated Ne/W(110) data. The most important deficiency of the SA (apart from being limited to high energies) is its inability to describe double collision events. This is because, as mentioned above, the $`𝐑`$ coordinate is conserved in the SA, i.e., each trajectory takes place at constant $`𝐑`$. Clearly, no double collisions can occur under such conditions. However, double collisions may take place, e.g., when an incident atom is scattered off a defect onto the surface, or in the opposite order. This issue was resolved recently by combining the SA with the Born approximation . ## III Inversion of Structure of Randomly Corrugated Surfaces from He Scattering Intensities ### A Atom Scattering from a Randomly Corrugated Square Well Potential Recall the role of $`\xi (𝐑)`$ in the SA amplitude \[Eq.(4)\]: it is the position of the classical turning points, which can alternatively be viewed as the surface corrugation function. In the case of a disordered surface $`\xi (𝐑)`$ can be a random function of $`𝐑`$ and in order to obtain observable quantities one must average over an appropriate ensemble which characterizes the physical and statistical properties of the surface of interest. Thus the scattering probability, as a function of momentum transfer $`𝐐`$, is given by $$P(𝐐)=|S(𝐐)|^2,$$ (5) where $`\mathrm{}`$ indicates an average over the ensemble of which $`\xi (𝐑)`$ is a typical sample. In certain cases one will be justified in assuming that translational invariance has been established after averaging over the corrugation ensemble. For example, this will generally be the case for liquid surfaces, for solids when the structural disorder on the surface is due to radiation damage, and epitaxially grown defects on a surface. This means that $`\xi (𝐑)`$ is a stationary stochastic process, and hence that $`f[\xi (𝐑)]`$ is independent of $`𝐑`$, and that $`g[\xi (𝐑),\xi (𝐑^{})]`$ depends only on $`𝐑𝐑^{}`$. The assumption of stationarity will, however, not hold for the case in which the structural disorder is caused by an incomplete adsorbed overlayer on a periodic substrate, and I will present a different treatment for such cases in a later section. Combining Eqs.(4),(5) one obtains in the case of a translationally invariant potential $$P(𝐐)=\frac{1}{A}𝑑𝐑e^{i𝐐𝐑}e^{2i[\eta (𝐑)\eta (0)]}.$$ (6) From the above expression it can be seen that within the SA, the angular scattering intensity is the Fourier transform of a function of the random variable $$E(𝐑)=2[\eta (0)\eta (𝐑)].$$ (7) To proceed, one must first establish the connection between the phase-shift and properties of the surface. Assume now that the interaction of the He with the entire surface can be expressed in the form of a square well potential of depth $`ϵ`$. 
Square well potentials have been rather successful in predicting properties of interacting gases; the accuracy obtained in fitting second and third virial coefficients with a square well potential compares to that of a Lennard-Jones 6-12 potential . They have also been studied by others in the context of inversion problems , e.g., in neutron reflectometry from magnetic films . In our case the potential assumes the form: $`{\displaystyle \frac{2m}{\mathrm{}^2}}V(𝐑,z)=\{\begin{array}{cc}\mathrm{}\hfill & \text{}z<\xi (𝐑)\hfill \\ ϵ\hfill & \text{}\xi (𝐑)<z<\xi (𝐑)+\mathrm{\Delta }\xi (𝐑)=\zeta (𝐑)\hfill \\ 0\hfill & \text{}z>\zeta (𝐑)\hfill \end{array}.`$ Typically, $`\xi (𝐑)`$ is a relatively strongly corrugated function as it expresses the interaction of the He atom with the core electrons responsible for the steep repulsive part of the potential. On the other hand, $`\zeta (𝐑)`$ may be very smooth, reflecting the loss of detail at the long distances at which the attractive part of the He-surface potential becomes important. According to Eq.(3) the phase shift is then given by $$\eta (𝐑)=k_z\mathrm{\Delta }\xi (𝐑)\left(\left(1+\delta \right)^{\frac{1}{2}}1\right)k_z\xi (𝐑),$$ (8) where $`\delta =ϵ/k_z^2`$ is a small parameter in the high-energy SA. Eq.(8) is known as “the Beeby effect” , i.e., in the presence of a well the wave number has to be replaced by an effective wave number which is due to the acceleration of the particle by the attractive forces. In the case of scattering by a pure hard wall \[$`\mathrm{\Delta }\xi (𝐑)=0`$\], or a square well with the same shape as the hard wall \[$`\mathrm{\Delta }\xi (𝐑)=z_0=\mathrm{constant}`$\], the scattering intensity is $$P(𝐐)=\frac{1}{A}𝑑𝐑e^{i𝐐𝐑}e^{2ik_z[\xi (0)\xi (𝐑)]}.$$ (9) In this case the intensity is essentially the Fourier transform of the characteristic function $`\mathrm{exp}[ik_zZ(𝐑)]`$ of the relative surface corrugation: $`Z(𝐑)=2[\xi (0)\xi (𝐑)],`$ which makes its interpretation particularly clear. Knowledge of the probability density $`f_Z(z;𝐑)`$ fully determines the corrugation of the surface in question, in that $`f_Z(z;𝐑)dz`$ is the probability of finding a corrugation of height between $`z`$ and $`z+dz`$ at $`𝐑`$, with respect to that at the origin. It is a well known fact in probability theory that for any (piecewise-) continuous distribution function $`f_X(x)`$ a unique and explicit inversion of $`f_X(x)`$ exists from the characteristic function. Hence it is clear at the outset that He scattering can, at least in principle, provide very useful information about the statistics of a disordered system. In the following sections I will show how the statistical information pertaining to the disordered surface can be extracted from the He scattering intensity. ### B Extraction of the Surface Corrugation Probability Density from the Angular Scattering Intensity for a Hard Wall Potential In this section I specialize to the hard wall potential. This model has been in use for many years in He scattering theory and is related to the Eikonal approximation in optics . In Ref. the origin of the hard-wall was discussed and it was concluded that it is due to the the local density of metal electron states in the selvedge. This leads to the Esbjerg-Norskov theory, Eq.(1). The great merits of the hard wall approximation are that (1) it is analytically tractable and (2) it provides a direct geometrical interpretation of the surface corrugation. 
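The content of Eq. (9) can be made concrete with a small simulation — a toy sketch added here, not the calculation performed in the works cited above: generate one-dimensional random corrugation profiles with Gaussian statistics and a Gaussian correlation function, form the hard-wall sudden amplitude with a fast Fourier transform, and average the intensity over realizations. The grid size, rms corrugation and correlation length are assumed toy values.

```python
# Sketch: hard-wall sudden-approximation intensity, Eq. (9), for a 1D random
# corrugation xi(R) with Gaussian statistics. Grid, rms height sigma and
# correlation length ell are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
N, Lbox = 2048, 200.0          # grid points, box length (arbitrary units)
dx = Lbox / N
sigma, ell, kz = 0.15, 2.0, 3.0

q = 2 * np.pi * np.fft.fftfreq(N, d=dx)
# Spectral filter that imprints a Gaussian correlation C(R) = exp(-(R/ell)^2)
filt = np.sqrt(np.exp(-q**2 * ell**2 / 4.0))

P = np.zeros(N)
n_real = 400
for _ in range(n_real):
    white = rng.normal(size=N)
    xi = np.fft.ifft(np.fft.fft(white) * filt).real
    xi *= sigma / xi.std()                        # enforce the target rms height
    amp = np.fft.fft(np.exp(2j * kz * xi)) / N    # (1/A) sum_R e^{-iQR} e^{2i k_z xi(R)}
    P += np.abs(amp)**2
P /= n_real
print("specular intensity P(Q=0):", P[0])
print("sum over Q (unitarity check, should be ~1):", P.sum())
```

Repeating the loop for a set of incidence wave numbers produces the kind of $`k_z`$-dependent intensities on which the inversion formulas of the next subsection operate.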
However, it is clearly oversimplified and leaves out many interesting features of the He-surface interaction. More sophisticated approximation schemes have therefore been introduced by various researchers. The distorted-wave Born approximation was used early on to treat soft potentials and low-energy scattering . These studies have shown significant deviations from the predictions of the hard-wall model, e.g., for a corrugated Morse potential for He scattering by a Cu(110) surface . Nevertheless, the hard wall approximation has physical merit in the He high-energy limit, which is assumed here in connection with the SA. An inverse Fourier transform of the scattering intensity $`P(𝐐)`$ \[Eq.(9)\] yields the characteristic function of the surface corrugation: $$e^{ik_zZ(𝐑)}=\frac{A}{2\pi }_{k_z}^1[P(𝐐);𝐑],$$ (10) where a notation for the Fourier transform was introduced: $`[f(x);y]={\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑xf(x)e^{ixy}.`$ In the case of normal incidence $`P(𝐐)`$ is a symmetric function, so that its Fourier transform is real. The probability density $`f_Z(z;𝐑)`$ can now be found if one recalls that: $$e^{ik_zZ(𝐑)}=𝑑ze^{ik_zz}f_Z(z;𝐑).$$ (11) Formally, therefore, $`f_Z(z;𝐑)`$ is fully determined by the scattering intensity, through another Fourier transform: $$f_Z(z;𝐑)=\frac{A}{(2\pi )^2}𝑑k_ze^{ik_zz}_{k_z}^1[P(𝐐);𝐑].$$ (12) The last equation shows that He scattering may in principle be used to fully invert the probability density of the corrugation function of a randomly corrugated surface. However, this requires dense sampling of the scattering intensities over a broad range of incidence wave-numbers, a task which may be difficult to accomplish in practice. Additional difficulties may arise due to Gibbs phenomenon and noise. A more realistic approach is to determine moments of the random variable describing the surface corrugation, an idea which dates back to the Backus and Gilbert work in geophysics , and has been pursued by others as well . To show how this may be accomplished here one may expand the characteristic function in terms of its moments. We then obtain from Eqs.(10),(11): $$\underset{n=0}{\overset{\mathrm{}}{}}\frac{(ik_z)^n}{n!}Z(𝐑)^n=\frac{A}{2\pi }_{k_z}^1[P(𝐐);𝐑]g(k_z;𝐑).$$ (13) The moments of $`Z(𝐑)`$ may now be found by differentiation of the experimentally available function $`g(k_z;𝐑)`$: $$Z(𝐑)^n=\frac{1}{i^n}\frac{d^ng(0;𝐑)}{dk_z^n}.$$ (14) Thus in contrast to the rather involved task of extraction of the full probability density via Eq.(12), requiring an experiment to be performed over the entire energy range, extraction of the moments merely involves an extrapolation of the data to the low energy limit $`k_z=0`$. Progressively higher moments, however, require higher derivatives and hence an increasingly dense sampling in $`k_z`$ to reduce noise due to finite differences. As a check, one may estimate the first moment $`Z(𝐑)=2\xi (0)\xi (𝐑)`$, which must clearly vanish. In practice, perhaps the most interesting piece of information is the second moment. Assuming that all higher moments vanish is equivalent to the assumption that $`Z(𝐑)`$ is a Gaussian random variable. From Eqs.(13),(14) one has: $$Z(𝐑)^2=\frac{A}{2\pi }\frac{d^2}{dk_z^2}\left[_{k_z}^1[P(𝐐);𝐑]\right]|_{k_z=0}.$$ (15) One may also choose to focus on the corrugation function $`\xi (𝐑)`$ itself, instead of on the relative corrugation $`Z(𝐑)=2[\xi (0)\xi (𝐑)]`$. Without loss of generality one can take $`\xi (0)=\xi (𝐑)=0`$. 
A cumulant expansion can then be used to evaluate the averages in Eq.(9). Truncating this expansion at the second cumulant, or equivalently, assuming that $`\xi (𝐑)`$ is a Gaussian random variable, one obtains: $$P(𝐐)=\frac{1}{A}𝑑𝐑e^{i𝐐𝐑}e^{(2\sigma k_z)^2[C(𝐑)1]},$$ (16) where the variance and correlation function are respectively $$\sigma ^2=\xi (𝐑)^2,C(𝐑)=\frac{1}{\sigma ^2}\xi (0)\xi (𝐑).$$ (17) Here, by definition, $`0|C(𝐑)|1`$ and $`C(𝐑)0`$ as $`R\mathrm{}`$. Fourier transforming the scattering intensity now yields an expression which can be used to conveniently fit $`\sigma ^2`$ and $`C(𝐑)`$ as a function of $`k_z`$: $$e^{(2\sigma k_z)^2[C(𝐑)1]}=\frac{A}{2\pi }_{k_z}^1[P(𝐐);𝐑].$$ (18) A set of equations \[notably Eqs.(12),(15),(18)\] has thus been derived which can be applied to extract useful statistical information on the surface corrugation, within a hard wall model, from the angular intensity distribution. In the next section I will consider what information can be derived within the more realistic square well model. ### C Correlation Functions in the Square Well Model It is not possible to analytically solve for the probability density of the effective surface corrugation or its moments in the square well model, as was done in Sec.III B in the hard wall case. It is, however, possible to obtain some interesting information by performing a cumulant expansion, as I now proceed to show. An inverse Fourier transform of the scattering intensity in the general case of a translationally invariant potential \[Eq.(6)\] yields \[using Eq.(7)\]: $`e^{iE(𝐑)}={\displaystyle \frac{A}{2\pi }}^1[P(𝐐);𝐑].`$ Introducing the notation $`\mathrm{\Delta }Z(𝐑)=2[\mathrm{\Delta }\xi (𝐑)\mathrm{\Delta }\xi (0)].`$ one obtains with the help of Eq.(8), after some algebra: $`E(𝐑)k_z\left[Z(𝐑){\displaystyle \frac{1}{2}}\delta \mathrm{\Delta }Z(𝐑)\right].`$ This expression is exact to first order in the small parameter $`\delta `$. Without loss of generality again set $`Z(𝐑)=0`$, and also $`\mathrm{\Delta }Z(𝐑)=0`$, so that the first moment of $`E(𝐑)`$ vanishes. For the second moment one obtains, again to first order in $`\delta `$: $`E(𝐑)^2E(𝐑)^2k_{z}^{}{}_{}{}^{2}\left[Z(𝐑)^2\delta Z(𝐑)\mathrm{\Delta }Z(𝐑)\right].`$ In analogy to Eq.(17) define a variance and correlation function between the hard wall corrugation and the deviation from it due to the attractive well: $`\sigma _1^2=\xi (𝐑)\mathrm{\Delta }\xi (𝐑),C_1(𝐑)={\displaystyle \frac{1}{\sigma _1^2}}\xi (𝐑)\mathrm{\Delta }\xi (0),`$ so that $`Z(𝐑)\mathrm{\Delta }Z(𝐑)=8\sigma _1^2[1C_1(𝐑)].`$ Collecting the results we obtain finally to second order in the cumulant expansion: $`e^{(2\sigma k_z)^2[C(𝐑)1](2\sigma _1ϵ)^2[C_1(𝐑)1]}={\displaystyle \frac{A}{2\pi }}^1[P(𝐐);𝐑].`$ This equation \[compare to Eq.(18)\] may be used to fit the experimental data to obtain the correlation functions, variances, and the well depth of the attractive part of the potential. This is much information to fit to a single experimental function. However, since the contribution due to the attractive well is expected to be small, one may at first neglect this contribution. This is tantamount to assuming a hard wall interaction, by which one may obtain an estimate of the hard wall quantities $`\sigma `$ and $`C(𝐑)`$. Having computed these, one may correct the fit by inclusion of $`\sigma _1`$, $`C_1(𝐑)`$, and $`ϵ`$, and proceed iteratively. 
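One possible implementation of the first, hard-wall step of the iterative fit just described is sketched below; it is an illustration of the mechanics of Eq. (18) only, and the “data” are synthetic, generated from assumed true values of $`\sigma `$ and of a Gaussian correlation length.

```python
# Sketch: least-squares extraction of sigma and the correlation length from the
# hard-wall relation Eq. (18), using synthetic "data" generated with assumed
# true values (sigma = 0.2, l = 3.0) plus a little noise.
import numpy as np
from scipy.optimize import curve_fit

R = np.linspace(0.0, 12.0, 60)
kz_values = np.array([2.0, 3.0, 4.0])
sigma_true, l_true = 0.2, 3.0

def model(X, sigma, l):
    R, kz = X
    C = np.exp(-(R / l)**2)                        # Gaussian correlation model
    return np.exp((2 * sigma * kz)**2 * (C - 1.0))

# synthetic left-hand side of Eq. (18), one curve per incidence wave number
Rg, Kg = np.meshgrid(R, kz_values)
rng = np.random.default_rng(1)
data = model((Rg.ravel(), Kg.ravel()), sigma_true, l_true)
data *= 1.0 + 0.02 * rng.normal(size=data.size)    # 2% noise

popt, pcov = curve_fit(model, (Rg.ravel(), Kg.ravel()), data, p0=[0.1, 1.0])
print("fitted sigma = %.3f, fitted l = %.3f" % tuple(popt))
```

The same machinery can be extended to the square-well case by adding $`\sigma _1`$, $`C_1(𝐑)`$ and $`ϵ`$ as additional fit parameters in a second pass, as outlined above.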
In the next section I will consider how statistical information can be derived from the experimentally straightforward measurement of the specular intensity. ### D Extraction of the Correlation Length from Specular Peak Measurements in the Hard Wall Model From an experimental point of view, there is a significant advantage to working with specular intensities, as their measurement involves a relatively minor effort compared to that required to obtain the full angular intensity distribution. There is also a theoretical advantage, in that energy transfer to the surface is significantly reduced in specular collisions, so that a phononless treatment is more justified. In the hard wall approximation, by addition and subtraction of the same term, Eq.(5) for the angular distribution can be written as $`P(𝐐)=\left|{\displaystyle \frac{1}{A}}{\displaystyle 𝑑𝐑e^{i𝐐𝐑}e^{2ik_z\xi (𝐑)}}\right|^2+`$ (20) $`{\displaystyle \frac{1}{A^2}}{\displaystyle 𝑑𝐑𝑑𝐑^{}e^{i𝐐(𝐑𝐑^{})}\left[e^{2ik_z[\xi (𝐑)\xi (𝐑^{})]}e^{2ik_z\xi (𝐑)}e^{2ik_z\xi (𝐑)}\right]}.`$ Assuming translational invariance again and specializing to the specular direction, one obtains with a second order cumulant expansion (see Ref. for a more detailed derivation) $$IP(0)=e^{\beta E}\left[1+\frac{1}{A}𝑑𝐑\left(e^{\beta EC(𝐑)}1\right)\right],$$ (21) where for brevity: $`\beta =(2\sigma )^2,E=k_z^2.`$ (It was not sufficient to simply set $`𝐐=0`$ in Eq.(16) since then the integral would diverge due to the asymptotic properties of $`C(𝐑)`$.) I now wish to evaluate the specular intensity $`I`$ for some specific correlation functions. To be consistent with the truncation of the cumulant expansion at second order, I will investigate the case of short-range order, in which higher order moments of the corrugation function do not play a significant role. One is thus led to consider two types of short-range correlation functions. These are of course just convenient models, assumed for lack of further knowledge of the probability density distributions. In both cases I will be concerned with evaluating the integral term in Eq.(21), denoted by: $`F(E)={\displaystyle \frac{1}{2\pi }}{\displaystyle 𝑑𝐑\left(e^{\beta EC(𝐑)}1\right)}.`$ #### 1 Gaussian Correlation Function Let us assume a Gaussian form for the correlation function: $`C(R)=e^{(R/l)^2},`$ where $`l`$ is the correlation length. Then due to cylindrical symmetry $`F(E)={\displaystyle _0^L}\left[e^{\beta E\left(e^{(R/l)^2}1\right)}1\right]R𝑑R,`$ where $`L`$ is the linear extent of the surface. As such the integral cannot be evaluated analytically, but its derivative with respect to the energy can: $$\frac{F}{E}=\frac{l^2}{2E}\left(e^{\beta E}e^{\beta Ee^{(L/l)^2}}\right).$$ (22) Since short-range order was assumed one may safely neglect $`\mathrm{exp}\left((L/l)^2\right)`$. Differentiating Eq.(21) and combining with Eq.(22) then yields: $`{\displaystyle \frac{I}{E}}+\beta I+{\displaystyle \frac{\pi l^2}{A}}{\displaystyle \frac{e^{\beta E}1}{E}}=0.`$ This equation can be used conveniently for a best-fit of the correlation length $`l`$ and the variance $`\sigma ^2=\beta /4`$, by using the experimental specular data, as a function of incidence energy $`E=k_z^2`$. #### 2 Exponential Correlation Function Next assume a longer range, exponential form for the correlation function: $`C(R)=e^{(R/l)},`$ where $`l`$ is a new correlation length. 
Then: $`F(E)={\displaystyle _0^L}\left[e^{\beta E\left(e^{(R/l)}1\right)}1\right]R𝑑R.`$ Again the integral cannot be evaluated analytically, but its derivative can: $$\frac{F}{E}=\frac{l^2}{E}\left[\mathrm{Ei}(\beta E)\mathrm{log}(\beta E)\gamma \right].$$ (23) \[neglecting $`\mathrm{exp}(L/l)`$\]. Here $`\mathrm{Ei}(x)=e^x\left(1/x+_0^{\mathrm{}}e^t(xt)^2𝑑t\right)`$ is the exponential integral , and $`\gamma `$ is Euler’s constant. Differentiation of Eq.(21) in combination with Eq.(23) now yields: $`{\displaystyle \frac{I}{E}}+\beta I+{\displaystyle \frac{2\pi l^2}{AE}}e^{\beta E}\left[\gamma +\mathrm{log}(\beta E)\mathrm{Ei}(\beta E)\right]=0,`$ which can be used to best-fit the correlation length and the variance for a model of exponentially decaying correlations. ## IV Summary and Conclusions This paper has presented new results on the inversion of structure of randomly corrugated surfaces from atom scattering data, within the SA. This work has been largely formal, with applications to be worked out in the future in connection with presently unavailable experimental data. The analysis presented here showed that within the framework of the SA, the scattering intensities in principle contain the full statistical information characterizing the surface disorder. Several potentially useful expressions were derived from which statistical parameters can be extracted in simple He scattering experiments. One application of the theory presented here is contained in our work on scattering from Ag/Pt(111) , in which it was shown how randomly distributed adsorbates affect the scattering intensities. Additional theoretical applications will be undertaken in the future, but it is hoped above all that this work will stimulate experimentalists to further utilize inert atom scattering in the study of increasingly complex surface disorder. The results presented here suggest that such experiments can reveal a wealth of information concerning disordered surface structure, in particular on the statistics of randomly corrugated surfaces. ## Acknowledgements This work was carried out while the author was with the Physics Department and the Fritz Haber Center for Molecular Dynamics at the Hebrew University of Jerusalem, Givat Ram, Jerusalem 91904, Israel. Numerous helpful discussions with Prof. R. Benny Gerber, without whom this work could not have been completed, are gratefully acknowledged. Partial support from NSF Grant CHE 97-32758 is gratefully acknowledged as well.
# Study of thermometers for measuring a microcanonical phase transition in nuclear fragmentation ## Abstract The aim of this work is to study how the thermodynamic temperature is related to the known thermometers for nuclei especially in view of studying the microcanonical phase transition. We find within the MMMC-model that the ”S-shape” of the caloric equation of state $`e^{}(T)`$ which is the signal of a phase transition in a system with conserved energy, can be seen in the experimentally accessible slope temperatures $`T_{slope}`$ for different particle types and also in the isotopic temperatures $`T_{HeLi}`$. The isotopic temperatures $`T_{HHe}`$ are weaker correlated to the shape of the thermodynamic temperature and therefore are less favorable to study the signal of a microcanonical phase transition. We also show that the signal is very sensitive to variations in mass of the source. In this work we are interested in testing the different experimentally accessible thermometers for nuclei in order to understand which quantity is best related to their thermodynamic temperature. It is our purpose to show that it should be in principle possible to measure the specific signal of a microcanonical phase transition in an accurate experiment. The concept of phase transitions is usually discussed in connection with macroscopic systems. In such systems phase transitions are recognized by divergences in quantities like specific heat $`c(e^{})=de^{}/dT_{thd}`$, where $`e^{}`$ is the specific excitation energy and $`T_{thd}`$ the thermodynamic temperature. If we are interested by similar phenomena in finite systems, the divergences are unsuitable to recognize and to classify phase transitions, since no divergences can occur. In a finite system the conservation laws become significant for the appearance and the shape of a phase transition. If a finite system is in thermal contact with a heat bath of constant temperature $`T_{thd}`$, i.e. is a canonical system, then the signal of a first order phase transition will become smeared showing up as a bump (instead of a divergence) in heat capacity or, equivalent, as a smooth anomaly in the caloric curve $`e^{}(T_{thd})`$. For a finite and isolated, i.e. microcanonical system the signal changes qualitatively. Here the total energy $`E`$ of the system is a strictly conserved quantity. The first order phase transition becomes signaled by an ”S-shape” in the caloric equation of state $`T_{thd}(E)`$ , as shown in the following figures in this publication. Since excited nuclei are an example of finite as well as isolated systems, they are especially suitable to study the signal of a microcanonical phase transition. This signal with an ”S-shape” was frequently obtained in calculations, though it is still a subject of controversial discussion. Therefore it is of fundamental interest to test the theoretical findings by an experiment. Another important point is the fact that the thermodynamic temperature which can be simply measured in macroscopic physics is not accessible directly in a finite system. In a macroscopic system the measurement is simple through the fact that the size of the thermometer which gets into thermal contact with the system of interest is negligible compared to the size of the system. For a finite system this is obviously not satisfied. Especially for a microcanonical system it is not possible to bring a thermometer into thermal contact with a system without violating the strict energy conservation. 
Thus to obtain information on the thermodynamic temperature one needs to find an experimental observable which is not a temperature, but keeps information on the behavior of $`T_{thd}`$. Several suggestions for such observables were made. The candidates for ”nuclear thermometers” are the slope temperatures from Maxwellian fits of energy distributions and the temperatures deduced from the isotopic ratios. In this work we are testing the quality of these ”nuclear thermometers” as concerning their ability to reproduce the shape of the microcanonical caloric equation of state ($`CES`$) $`T_{thd}(E)`$. Another approach can be found in . We concentrate on the microcanonical phase transition from evaporation to asymmetric fission which was predicted in the fragmentation of hot nuclei within the Berlin statistical fragmentation model MMMC . Similar signals were also predicted by other statistical models for nuclear fragmentation and also for atomic clusters . An experimental observation in ref. seems also to support the existence of this phase transition. To study the ”nuclear thermometers” we first obtain the signal of a phase transition in the $`T_{thd}(E)`$ curve within the MMMC-model. Then we calculate the signals of the two thermometers in question, the caloric curves of $`T_{slope}(E)`$ and $`T_{isotopic}(E)`$. Let us first briefly discuss the basics of the model. The strictly microcanonical MMMC-model assumes a hot compound nucleus to be formed in a nuclear collision. After fragmentation of this compound system the fragments remain coupled and exchange nuclei as long as they are in close contact. Consequently, the system is assumed to equilibrate statistically shortly after the break-up. The volume which is accessed by the equilibrated fragment configuration is called the freeze-out volume. This means in terms of thermodynamics that the collection of all possible fragment configurations represents the maximum accessible phase space $`\mathrm{\Omega }(E)`$ for a given freeze-out volume. $`\mathrm{\Omega }(E)`$ is restricted by the valid conservation laws, which are the conservation of mass, charge, linear and angular momentum and of the total energy of the system, and also by the geometrical constraints. In the simulation the most important geometrical constraint is the size of the freeze-out volume, which is taken to be spherical. The radius $`r_f`$ of this volume, which is the only simulation parameter of the MMMC-model, is for the energy region of 1 to 4 MeV per nucleon at about $`r_f=2.2A^{1/3}`$fm, which corresponds to approximately 6 times the normal nuclear volume. When the fragments (which can be in excited states) leave this volume they may de-excite as they trace out Coulomb-trajectories. The output of the MMMC calculation is a collection of freeze-out configurations which are supposed to be representative for the entire phase space $`\mathrm{\Omega }(E)`$. The thermodynamic temperature $`T_{thd}`$ of these configurations is calculated by $$\frac{1}{T_{thd}}=\frac{S}{E}=\frac{s}{E^{}},$$ (1) where $`E`$ is the total energy of the system, $`S=k_B\mathrm{ln}\mathrm{\Omega }(E)`$ is the entropy, $`k_B`$ the Boltzmann constant and $`s=S/A`$ and $`E^{}=E/A`$ with $`A`$ the mass of the decaying nucleus. The caloric curve $`T_{thd}(E^{})`$ is plotted in all figures of this paper as a solid line with circles. We test the ”nuclear thermometers” in two steps. 
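To illustrate how Eq. (1) is applied to tabulated output — a schematic sketch, not part of the MMMC code — the temperature can be obtained from a finite-difference derivative of the entropy per nucleon, and a backbending (“S-shape”) region is flagged wherever the temperature decreases with increasing excitation energy; the entropy table used below is a synthetic toy curve, not MMMC output.

```python
# Sketch: caloric curve T_thd(E*) = (ds/dE*)^(-1) from a tabulated entropy per
# nucleon, Eq. (1), and detection of the backbending ("S-shape") region.
# The entropy table is a synthetic toy model, not MMMC output.
import numpy as np

estar = np.linspace(0.5, 4.0, 141)                 # excitation energy, A.MeV
# toy entropy: Fermi-gas-like background plus a localized step that produces backbending
s = 2.0 * np.sqrt(0.1 * estar) + 0.02 * np.tanh((estar - 2.2) / 0.3)

dsde = np.gradient(s, estar)          # ds/dE*
T = 1.0 / dsde                        # thermodynamic temperature in MeV, Eq. (1)

dTde = np.gradient(T, estar)
backbend = estar[dTde < 0.0]
if backbend.size:
    print(f"T decreases with E* between {backbend.min():.2f} and {backbend.max():.2f} A.MeV")
else:
    print("no backbending found in this table")
```

In an analysis of real MMMC output the same finite-difference step would be applied to the computed $`s(E^{})`$ table, and the flagged interval would locate the transition region discussed below.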
First we plot the caloric curves $`T_{slope}(E^{})`$ and $`T_{isotopic}(E^{})`$ obtained from the MMMC-events after performing the Coulomb trajectories. Next we subject the calculated events to the software filter of the INDRA setup and plot the filtered caloric curves. For the INDRA-filter we assume the source velocity as 8.1 cm/ns, which is a typical quasi-projectile velocity measured with INDRA in mid-peripheral Xe + Sn collisions at 50 A.MeV bombarding energy . After the filtering we select the complete events for which the detected total charge and the total momentum is greater than 80% of the initial charge and of the initial momentum of the source, respectively. These events are used for constructing the filtered caloric curves. We start with the slope temperatures $`T_{slope}`$ for protons, deuterons, tritons, $`{}_{}{}^{3}He`$ and alpha particles. The calculated kinetic energy spectra were fitted with the surface-evaporating Maxwellian source formula for every particle type: $$\frac{d\sigma }{dE_{kin}}\frac{(E_{kin}B)}{T_{slope}^2}e^{(\frac{E_{kin}B}{T_{slope}})},$$ (2) where $`E_{kin}`$ is the center of mass kinetic energy of the particles, $`B`$ the Coulomb barrier and $`T_{slope}`$, which is the slope of the raw spectra, which is the desired slope temperature. We calculate the slope temperature for protons, deuterons, tritons <sup>3</sup>He, alpha and also light IMFs. Figures 1, 2, 3, 4 and 5 show the comparison of $`T_{thd}(E^{})`$ and $`T_{slope}(E^{})`$ for unfiltered (left plot) and INDRA-filtered (right plot) events for $`p`$, $`d`$, $`t`$, $`{}_{}{}^{3}He`$ and $`\alpha `$. Our most important finding in all these curves is that the slope temperatures resemble the general shape of $`T_{thd}(E^{})`$ before and after the filtering. The caloric curves for Li, Be and B (not shown here) which we have calculated only without filtering, repeat also the general shape of the phase transition despite of big error bars in $`T_{slope}`$. The unfiltered $`T_{slope}(E^{})`$ for $`p`$, $`d`$, $`t`$ systematically achieve values lower then the thermodynamic temperatures, while the values for $`{}_{}{}^{3}He`$ and $`\alpha `$ are close to those of $`T_{thd}`$. We think that the last finding is just accidental. The unfiltered $`T_{slope}`$ are close for all particles with $`Z=1`$ and for those with $`Z=2`$, and the shift between them of $`0.3`$ MeV is due to the higher Coulomb repulsion. Here one can see in a very simple way that the result of the MMMC calculation cannot be described by a Maxwellian source with a unique temperature $`T_{slope}(E^{})`$. Still we see that using the Maxwellian fit just as a recipe we obtain a useful tool to extract a pseudo-temperature which is correlated with the thermodynamic temperature. Let us now proceed to the isotopic temperatures . The basic assumption for the isotopic temperature formula is a thermal equilibrium between free nucleons and composite fragments within a certain interaction volume V at a temperature T. The formalism is of the grandcanonical ensemble and ignores the possible effects of mass and energy conservation. The $`T_{HeLi}(E^{})`$ isotopic temperature is given by $$T_{HeLi}=16/ln(2.18\frac{Y_{{}_{}{}^{6}Li}/Y_{{}_{}{}^{7}Li}}{Y_{{}_{}{}^{3}He}/Y_{{}_{}{}^{4}He}}),$$ (3) and the $`T_{HHe}(E^{})`$ temperature by $$T_{HHe}=14.3/ln(1.6\frac{Y_{{}_{}{}^{2}H}/Y_{{}_{}{}^{3}H}}{Y_{{}_{}{}^{3}He}/Y_{{}_{}{}^{4}He}}),$$ (4) where $`Y(E^{})`$ is the particle yield. Fig. 
6 shows a comparison of unfiltered and filtered $`T_{HeLi}(E^{})`$ with the thermodynamic temperature $`T_{thd}(E^{})`$. Again we see that the signal of the phase transition survived the procedure. Performing the same for the H-He isotopic ratios, figure 7, we find that already the unfiltered $`T_{HHe}(E^{})`$ isotopic temperature is less sensitive to the ”S-shape” in the $`T_{thd}(E^{})`$ curve, but it also shows some structure at the phase transition. Finally we would like to address the question, how sensitive is the signal of the microcanonical phase transition to the mass and charge of the compound nucleus. Figure 8 shows the $`T_{thd}(E^{})`$ curve for several masses and charges of the source. In the left panel we show that decreasing the charge of the source from Z=54 for <sup>122</sup>Xe (dots) to Z=50 for <sup>122</sup>Sn (empty triangles) does not influence much the phase transition. On the opposite, increasing the mass by 10 nuclei for <sup>132</sup>Xe (full triangles) shifts the transition signal to higher excitation energies by approximately 0.5 MeV per nucleon. In the right panel we show two additional curves for <sup>80</sup>Se and <sup>250</sup>Cf, thus strongly reducing and strongly decreasing the mass. Here the evaporation-fission phase transitions for <sup>80</sup>Se appears as a very weak signal at $`E^{}1.5`$ A.MeV and $`T_{thd}3.8`$ MeV. The stronger signal at $`T_{thd}4.6`$ MeV is due to a different fission-multifragmentation phase transition. The fragmentation behavior of <sup>250</sup>Cf shows no phase transition at all, even for higher or lower $`E^{}`$. This is connected to the intrinsic instability against fission, so that even at very low excitation energies the nucleus fissions symmetrically instead of entering first the evaporation mode and later the asymmetric fission mode like the lighter nuclei do. This shows very plastically that there is no unique liquid-gas phase transition in nuclear fragmentation, but many different transitions depending on mass, and may be on other characteristics of nuclei. The dependence of the phase transition on the mass is natural, since increasing the mass we automatically increase the phase space $`\mathrm{\Omega }(E^{})`$. Thus if changing $`A`$ with excitation energy can also produce unusual shapes of the $`T_{thd}(E^{})`$ curve<sup>*</sup><sup>*</sup>*We suppose together with and in opposite to that the curve shown in ref. is just the effect of changing the mass of the source. without undergoing any phase transition. This discussion serves mainly to make the point that to measure the microcanonical phase transition it is essential to keep especially the mass of the source at constant over the whole energy range. Further, it is important to obtain energy bining less than 0.2 MeV per nucleon to detect the discussed transition. The investigation in ref. misses the phase transition by performing energy steps $`\mathrm{\Delta }E^{}1`$ A.MeV. The current INDRA setup can realize $`\mathrm{\Delta }E^{}0.4`$ A.MeV for the energy region of 1 to 4 A.MeV, which is at the limit of desired accuracy. If an additional experimental setup could measure the mass of the projectile fragments one could obtain much better $`E^{}`$ reconstructions and $`\mathrm{\Delta }E^{}`$ below 0.2 A.MeV. Summarizing the above we studied how the shape of the caloric curve $`T_{thd}(E^{})`$ at a microcanonical phase transition is correlated to the shape of different nuclear thermometers $`T_{slope}(E^{})`$ and $`T_{isotopic}(E^{})`$. 
The signal of a microcanonical phase transition is an ”S-shape” in the thermodynamic temperature. The slope temperatures for protons, deuterons, tritons, $`{}_{}{}^{3}He`$ and alpha particles, and the isotopic temperature $`T_{HeLi}(E^{})`$, reproduce this ”S-shape” at the correct excitation energies for INDRA-filtered and unfiltered events. The absolute values of these curves vary and do not coincide with the thermodynamic temperature. This means that the decrease of temperature with rising excitation energy at a first order microcanonical phase transition can be measured experimentally. This is of fundamental importance for the systematic study of phase transitions in finite systems. O.S. is grateful to GANIL for the friendly atmosphere during her stays there. This work was supported by IN2P3.
# Effect of disorder on the non-dissipative drag ## I Introduction Electron–electron (e-e) interactions are responsible for a multitude of fascinating effects in condensed matter. They play a leading role in phenomena ranging from high temperature superconductivity and the fractional quantum Hall effect, to Wigner crystalization, the Mott transition and Coulomb gaps in disordered systems. The effects of this interaction on transport properties, however, are difficult to measure. A new technique has recently proven effective in measuring the scattering rates due to the Coulomb interaction directly. This technique is based on an earlier proposal by Pogrebinskiĭ. The prediction was that for two conducting systems separated by an insulator (a semiconductor–insulator–semiconductor layer structure in particular) there will be a drag of carriers in one film due to the direct Coulomb interaction with the carriers in the other film. If layer $`2`$ is an “open circuit”, and a current starts flowing in layer $`1,`$ there will be a momentum transfer to layer $`2`$ that will start sweeping carriers to one end of the sample, and inducing a charge imbalance across the film. The charge will continue to accumulate until the force of the resulting electric field balances the frictional force of the interlayer scattering. In the stationary state there will be an induced, or drag voltage $`V_D`$ in layer $`2`$. There is a fundamental difference between transresistance and ordinary resistance insofar as the role of the Coulomb interaction is concerned. For a perfectly pure, translationally invariant system, the Coulomb interaction cannot give rise to resistance since the total current commutes with the Hamiltonian $`H`$. This means that states with a finite current are stationary states of $`H`$ and will never decay, since the e-e interaction conserves not only the total momentum but also the total current. (For electrons moving in a periodic lattice, momentum and velocity are no longer proportional and the current could in principle decay by the e–e interaction.) If the layers are coupled by the Coulomb interaction, the stationary states correspond to a linear superposition of states in which the current is shared in different amounts between layers: the total current within a given layer is not conserved and can relax via the inter–layer interaction. This mechanism of current degrading was studied in the pioneering experiment of Gramila et al. for GaAs layers embedded in AlGaAs heterostructures. The separation between the layers was in the range $`200`$-$`500`$Å. The coupling of electrons and holes and the coupling between a two dimensional and a three dimensional system was also examined. If we call $`I`$ the current circulating in layer $`1`$, the drag resistance (or transresistance) is defined as $`\rho _D={\displaystyle \frac{V_D}{I}}.`$ Most of the experiments done so far indicate the vanishing of $`\rho _D`$ at zero temperature, something expected in the usual scattering theory of transport. The possibility of a drag effect at zero temperature was considered by Rojo and Mahan, who considered two coupled mesoscopic rings that can individually sustain persistent currents, see Figure (1). The mechanism giving rise to drag in a non–dissipative system is also based on the inter–ring or inter–layer Coulomb interaction, the difference with the dissipative case being the coupling between real or virtual interactions. 
One geometry in which this effect comes to life is two collinear rings of perimeter $`L`$, with a Bohm–Aharonov flux, $`\mathrm{\Phi }_1`$, threading only one of the rings (which we will call ring one). This is of course a difficult geometry to attain experimentally, but has the advantage of making the analysis more transparent. Two coplanar rings also show the same effect. If the rings are uncoupled in the sense that the Coulomb interaction is zero between electrons in different rings, and the electrons are non–interacting within the rings, a persistent current $`J_0=cdE/d\mathrm{\Phi }_1=ev_F/L`$ will circulate in ring one. If the Coulomb interaction between rings is turned on, the Coulomb interaction induces coherent charge fluctuations between the rings, and the net effect is that ring two acquires a finite persistent current. The magnitude of the persistent drag current $`J_D`$ can be computed by treating the modification of the ground state energy in second order perturbation theory $`\mathrm{\Delta }E_0^{(2)}`$, and evaluating $$J_D=e\frac{d\mathrm{\Delta }E_0^{(2)}}{d\mathrm{\Phi }_2}|_{\mathrm{\Phi }_2=0},$$ (1) with $`\mathrm{\Phi }_2`$ an auxiliary flux treading ring two that we remove after computing the above derivative. The question of the effect of disorder on persistent currents remains controversial. Since our project involves calculating the effect of disorder on an induced persistent current, we expect our results to shed some light on this issue. For an isolated pure ring the persistent current is $`J_0=ev_F/L`$ with $`L`$ the perimeter of the ring and $`v_F`$ the Fermi velocity. The most immediate effect of disorder is to introduce a mean free path $`\mathrm{}`$. One expects disorder to decrease the persistent current, and qualitative arguments indicate that it is decreased by a factor $`\mathrm{}/L`$: $`J_0ev_F/L(\mathrm{}/L)`$. Our results indicate on firmer theoretical grounds that a similar argument can be used for the drag persistent current. In this paper we outline our detailed studies of the effect of disorder on non-dissipative drag using both analytic and numerical methods. ## II General remarks on the non–dissipative Drag The zero drag current can be finite only if quantum coherence, or entanglement, between the wave functions of the two systems is established. In this situation, the meaningful description of the dynamics of the combined system involves a single wave function, which distinghuishes from ordinary dissipative drag, a case in which one has scattering between two incoherently coupled systems. Figure (1) is a schematic illustration of this coherent coupling mechanism. We consider first two one-dimensional systems. Assume that, in the absence of the Coulomb coupling, system $`1`$ carries a finite equilibrium current, which could in principle be established by an Aharonov-Bohm flux threading system $`1`$ only. If system $`2`$ is a one dimensional wire of perimeter $`2\pi L`$, the mesoscopic nature of the zero drag current can be proven by the following analysis. Let $`\mathrm{\Psi }_0`$ be the ground state of the combined system. This wave function involves the coordinates of both systems. Let us consider system $`2`$ as a closed ring geometry, and designate the coordinates of the particles in this subsystem as angular variables $`\theta _i`$, with $`i=1,\mathrm{},N_2`$, and $`N_2`$ being the number of particles at system 2. 
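The ballistic persistent current $`J_0=ev_F/L`$ quoted above, and the way currents are obtained throughout as flux derivatives of the energy (cf. Eq. (1)), can be illustrated with a few lines of code. The sketch below is only that — an illustration with $`\mathrm{}=m=e=1`$ and arbitrary values of $`L`$ and $`N`$, not the calculation performed in this paper: it fills the lowest $`N`$ free-electron levels of a clean ring as a function of the flux and differentiates numerically.

```python
import numpy as np

# Free electrons on a clean ring threaded by a flux; units hbar = m = e = 1.
# phi is the flux in units of the flux quantum, so Phi = 2*pi*phi and J = -dE/dPhi.
L, N = 50.0, 21                      # illustrative circumference and (odd) electron number
levels = np.arange(-200, 201)        # plane-wave quantum numbers

def ground_energy(phi):
    eps = 0.5 * (2.0 * np.pi / L) ** 2 * (levels - phi) ** 2
    return np.sort(eps)[:N].sum()    # fill the N lowest single-particle levels

phis = np.linspace(-0.499, 0.499, 201)
E = np.array([ground_energy(p) for p in phis])
J = -np.gradient(E, phis) / (2.0 * np.pi)     # J = -dE/dPhi with Phi = 2*pi*phi

k_F = np.pi * N / L                  # spinless 1D Fermi wavevector
v_F = k_F                            # hbar = m = 1
print(f"max |J| = {np.abs(J).max():.4e},   e v_F / L = {v_F / L:.4e}")
```

Running this, the maximum of $`|J(\varphi )|`$ agrees with $`ev_F/L`$ at the per-cent level, which is the statement used above.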
The kinetic component of the Hamiltonian of system $`2`$ can then be written as $$H_K^{(2)}=\frac{\mathrm{}^2}{2mL^2}\underset{i=1}{\overset{N_2}{}}\frac{^2}{\theta _i^2}.$$ (2) Consider the modified wave function $`\mathrm{\Psi }^{}`$ constructed by applying a “boost”, or gauge transformation, on the coordinates of system $`2`$: $$\mathrm{\Psi }^{}=U(\alpha )\mathrm{\Psi }_0\mathrm{exp}(i\alpha \underset{i=1}{\overset{N_2}{}}\theta _i)\mathrm{\Psi }_0,$$ (3) with $`\alpha `$ a parameter. By the variational theorem $`E^{}=\mathrm{\Psi }^{}|H|\mathrm{\Psi }^{}E_0`$, with $`H`$ the Hamiltonian of the combined system, and $`E_0`$ the total energy. On the other hand, explicit evaluation of $`E^{}`$ gives $$E^{}=E_0+\frac{\mathrm{}^2}{2mL^2}N_2\alpha ^2\frac{h}{e}\mathrm{\Psi }_0|\widehat{J}_{\mathrm{Tot}}^{(2)}|\mathrm{\Psi }_0,$$ (4) with the current operator for system $`2`$ given by $$\widehat{J}_{\mathrm{Tot}}^{(2)}=\frac{e}{2\pi mL^2}\underset{i=1}{\overset{N_2}{}}i\mathrm{}\frac{}{\theta _i}.$$ (5) Due to the variational nature of the bound, the dragged current has to obey the inequality: $$J_{\mathrm{drag}}\mathrm{\Psi }_0|\widehat{J}_{\mathrm{Tot}}^{(2)}|\mathrm{\Psi }_0\alpha ^2\frac{e\mathrm{}\rho }{2\pi mL},$$ (6) with $`\rho `$ the particle density. Equation (6) emphasizes the mesoscopic nature of the dragged current: in the limit of $`L\mathrm{}`$, $`J_{\mathrm{drag}}0`$ with the same length dependence as the persistent current in mesoscopic rings, the value of which is $`ev_F/L`$ in the ballistic regime. Note that the bound is valid for strictly one-dimensional systems. Having established a bound, one needs to show that there is indeed a finite dragged current, and provide a quantitative estimate. We first present such a calculation treating the Coulomb interaction between the systems in second order perturbation theory. Consider two identical one-dimensional wires. Wire $`1`$ is threaded by a Aharonov-Bohm flux $`\varphi _1`$ (in units of the flux quantum). In order to evaluate the induced current $`J_2`$, we impose also a flux $`\varphi _2`$ in system $`2`$, and compute $$J_2=\frac{e}{\mathrm{}}\frac{E_0}{\varphi _2}|_{\varphi _2=0}.$$ (7) We neglect the Coulomb interaction within each wire, and consider the ballistic regime (no impurities in either system). In the absence of coupling, and for both fluxes $`\varphi _i<\pi /2`$, the ground state consists of two Fermi systems with one particle energies $`E_i^{(0)}=\frac{\mathrm{}^2}{2mL^2}(n_i\varphi _i)^2`$, and occupied levels for $`n_i<n_F`$ ($`i=1,2`$, and $`n_F=N/2`$, $`N`$ being the particle number at each ring). Let $`V(q)`$ be the Fourier transform of the Coulomb coupling, which for wires separated a distance $`d`$ has the form $`V(q)=(2e^2/L)K_0(qd)`$, $`K_0(x)`$ being the zero-order Bessel function of imaginary argument. The second order correction to the energy is then given by: $$\mathrm{\Delta }E_2=\frac{mL^2}{\mathrm{}^2}\underset{Q,n_1,n_2}{}\frac{V^2(\frac{Q}{L})}{Q}\frac{f_{n_1}(1f_{n_1+Q})f_{n_2}(1f_{n_2Q})}{(Q+n_1+\varphi _1n_2\varphi _2)},$$ (8) with $`Q,n_1,n_2`$ integers, and $`f_m`$ Fermi functions: $`f_m=1`$ if $`|m|<n_F`$, and zero otherwise. The above sum is now evaluated transforming the sum into integrals over the continuum variables $`q=Q/L`$, $`k_i=n_i/L`$. Evaluating the integrals, and computing the derivative with respect to $`\varphi _2`$, we obtain $$J_2=\frac{me^5}{\mathrm{}^3}\frac{1}{2\pi ^3}\frac{I(k_Fd)}{k_FL}\varphi _1,$$ (9) with $`I(k_Fd)=_0^{\mathrm{}}𝑑q\frac{qK_0^2(qd)}{4k_F^2q^2}`$. 
In the limit of large $`k_Fd`$, which corresponds to the interparticle distance being much smaller than the distance between the systems, we obtain $$J_2J_0\frac{1}{(k_Fa_0)^2}\frac{1}{(k_Fd)^2},$$ (10) with $`J_0=ev_F\varphi _1/L`$ being the persistent current carried by the otherwise uncoupled system $`1`$, and $`a_0`$ the Bohr radius. We have proven that there is an induced persistent current due to the Coulomb interaction. We now ask ourselves about the induced effect if system $`2`$ is made open, so that no current can circulate. In the transport situation, a voltage will be induced. Here, we show that there is no voltage induced. We start with a setup that, in the absence of the flux in system $`1`$, is “parity even”. By this we mean that the charge distribution in wire $`2`$ is symmetric around the center of the wire. We want to know if this symmetry is broken by applying the flux in system $`1`$, an operation that breaks the time reversal symmetry. Let us call $`P`$ and $`T`$ the parity and time reversal operators that interchange the ends of the wire. We want for example the induced dipole moment in wire $`2`$, $`x_2=\mathrm{\Psi }_0|\widehat{x}_2|\mathrm{\Psi }_0`$. The operator $`PT\widehat{x}_2(PT)^1=\widehat{x}_2`$, while the wave function is invariant under $`PT`$, which implies $`x_2=0`$, hence there is no induced voltage. ## III Disorder and the non–dissipative Drag In this section we outline our results on the effect of disorder in the non–dissipative drag. In calculating the effects of disorder we use the two ring geometry considered by Rojo and Mahan, see Figure (1), and calculate the second order Coulomb interaction between the conduction electrons in the two rings. The Coulomb potential is $`V{\displaystyle \underset{k}{}}V_k\rho _{k,1}\rho _{k,2}={\displaystyle 𝑑x𝑑x^{}\rho _1(x)\rho _2(x^{})V(xx^{})},`$ With $`\rho _i`$ the charge density at ring $`i`$. The second order correction to the ground state energy due to the Coulomb interaction is $$\mathrm{\Delta }E=\underset{n,n^{}}{}\underset{m,m^{}}{}\frac{|_kV_k(\psi _ne^{i\pi kx/L}\psi _n^{}^{}𝑑x)(\psi _me^{i\pi kx^{}/L}\psi _m^{}^{}𝑑x^{})f_n^{}(1f_n)f_m^{}(1f_m)|^2}{E_nE_n^{}+E_mE_m^{}},$$ (11) where $`\psi _n`$ is an eigenstate in the presence of disorder. From this expression for the energy shift we can calculate the drag current from Equation (1). ### A Analytics In this section we estimate the effect of disorder on the non-dissipative drag current for the case in which disorder is present only in the ring on which the Bohm–Aharonov flux is applied. The driven ring (ring 2), on which the drag current circulates, will be taken as disorder-free. Momentum remains a good quantum number in ring 2 making the calculation more tractable. The first order correction to the wave function is given by $$|\mathrm{\Psi }_1=\underset{q}{}V(q)\underset{k}{}\underset{\overline{\nu }}{}\frac{c_{k+q}^{}c_k|F_2|\overline{\nu }\overline{\nu }|\rho _q|\psi _0^{(1)}}{E_{k+q}E_k+E_{\overline{\nu }}E_{0,1}},$$ (12) where $`E_k`$ are the one-particle energies for the states of ring 2, and $`|\overline{\nu }`$ is a many-body state of ring 1 with energy $`E_{\overline{\nu }}`$. The ground state of ring 1 is $`|\psi _0^{(1)}`$, and its energy is $`E_{0,1}`$. 
Now, since we are neglecting interactions within each ring, the resulting equilibrium current in ring 2 is given by $$J_2=\frac{e}{L}\sum _q\frac{\hbar q}{m}|V(q)|^2\sum _k\sum _{\mu ,\nu }\frac{f_k(1-f_{k+q})f_\mu (1-f_\nu )|\langle \mu |e^{iqx}|\nu \rangle |^2}{(E_{k+q}-E_k+E_\nu -E_\mu )^2},$$ (13) where now $`|\nu \rangle `$ refers to the exact one-particle states with energies $`E_\nu `$ corresponding to the disordered Hamiltonian in ring 1. We can rewrite the above expression in terms of the spectral function $`S(q,\omega )`$ defined as $$S(q,\omega )=\sum _{\mu ,\nu }f_\mu (1-f_\nu )|\langle \mu |e^{iqx}|\nu \rangle |^2\delta \left(\omega -(E_\nu -E_\mu )/\hbar \right).$$ (14) We will consider the function $`S(q,\omega )`$ in the approximation in which the matrix element $`\langle \mu |\rho _q|\nu \rangle `$ is given by the diffusive Lorentzian: $$|\langle \mu |e^{iqx}|\nu \rangle |^2=\frac{1}{\pi \hbar N(0)}\frac{Dq^2}{(Dq^2)^2+(E_\mu -E_\nu )^2/\hbar ^2},$$ (15) where $`D`$ is the diffusion constant and $`N(0)`$ is the density of states of the system. In this approximation we obtain that $`S(q,\omega )`$ is given by $$S(q,\omega )=N(0)\frac{\omega Dq^2}{(Dq^2)^2+\omega ^2}.$$ (16) Before replacing this expression in Equation (13) let us recall that there is a flux $`\mathrm{\Phi }`$ threading ring 1 and therefore one expects $`S(q,\omega )\ne S(-q,\omega )`$. We follow Ambegaokar and Eckern in including the effect of the flux in the diffusive motion through the replacement: $$Dq^2\rightarrow D\overline{q}^2\equiv D\left(q-\pi \frac{\varphi }{L}\right)^2,$$ (17) with $`\varphi `$ being the flux in units of the flux quantum. The induced current will therefore be given by $$J_2=\frac{e}{L}\sum _q\frac{\hbar q}{m}|V(q)|^2\hbar D\overline{q}^2N(0)\sum _k\int d\omega \frac{f_k(1-f_{k+q})}{(E_{k+q}-E_k+\hbar \omega )^2}\frac{\omega }{(D\overline{q}^2+\omega ^2)}.$$ (18) For small wavevectors ($`q\ll k_F`$) we have: $$\sum _k\frac{f_k(1-f_{k+q})}{(E_{k+q}-E_k+\hbar \omega )^2}=\frac{L}{2\pi }\frac{q}{\left(\frac{\hbar ^2}{m}k_Fq+\hbar \omega \right)^2},$$ (19) and also, in the limit of $`q\ell <1`$, with $`\ell `$ being the mean free path: $$\int _0^{\infty }d\omega \frac{1}{\left(\frac{\hbar ^2}{m}k_Fq+\hbar \omega \right)^2}\frac{\omega }{(D\overline{q}^2)^2+\omega ^2}\approx \frac{(\overline{q}\ell )}{(\hbar v_F)^2q^2}.$$ (20) We are interested in the lowest order in $`\varphi `$ for the induced current, which gives $$J_2=\frac{e}{4\pi }N(0)\frac{D\ell }{mv_F^2}\frac{\varphi }{L}\sum _qq^2V(q)^2,$$ (21) which we can now rewrite using $`D=v_F\ell `$ as $$J_2\approx \left[\left(\frac{ev_F}{L}\right)\left(\frac{\ell }{L}\right)\right]\left[\frac{\ell }{d}\frac{N(0)(e^2/d)^2C}{E_F}\right]\times \varphi ,$$ (22) where $`C`$ is a constant, $$C=\int _0^{\infty }dx\,x^2K_0(x)^2=0.308425.$$ (23) The first term in square brackets in Equation (22) corresponds to a familiar expression for the persistent current in ring 1 in the presence of disorder. The value of the terms in the second square bracket can be computed taking $`N(0)=1/\mathrm{\Delta }`$, with $`\mathrm{\Delta }\approx 10\mathrm{K}`$ being the level spacing for a ring of $`L\approx 1\mu \mathrm{m}`$, $`E_F=2\mathrm{eV}`$, and a distance between rings of $`d=100`$Å. Note that this term contains the product of two ratios: a small one given by $`E_{\mathrm{Coul}}/E_F`$, with $`E_{\mathrm{Coul}}=e^2/d`$, and a large one given by $`E_{\mathrm{Coul}}/\mathrm{\Delta }`$.
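Both the constant $`C`$ of Eq. (23) and the size of the second square bracket of Eq. (22) are easy to check numerically. The following sketch does so; in the second part, the value $`e^2/4\pi \epsilon _0\approx 14.4`$ eV·Å and the choice $`\ell \approx d`$ are assumptions of mine, used only to reproduce the order-of-magnitude estimate discussed in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# C = integral_0^infty x^2 K_0(x)^2 dx, the constant in Eqs. (22)-(23).
C, err = quad(lambda x: x ** 2 * k0(x) ** 2, 0, np.inf)
print(f"C = {C:.6f}   (quoted: 0.308425; numerically this coincides with pi^2/32 = {np.pi ** 2 / 32:.6f})")

# Rough size of the second square bracket in Eq. (22), using the numbers quoted in the text
# (Delta ~ 10 K, E_F = 2 eV, d = 100 Angstrom) plus two assumptions of my own:
# l ~ d and e^2/(4 pi eps_0) ~ 14.4 eV Angstrom.
e2_over_d = 14.4 / 100.0            # eV
Delta = 10.0 * 8.617e-5             # eV (level spacing corresponding to ~10 K)
E_F, l_over_d = 2.0, 1.0
bracket = l_over_d * (1.0 / Delta) * e2_over_d ** 2 * C / E_F
print(f"second bracket of Eq. (22) ~ {bracket:.1f}")
```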
This gives a number of order one, a result that probably overestimates the drag current, but serves as an indication that the effects of disorder are not extreme. The second square bracket also contains an additional ratio, the mean free path to the distance between rings. This additional factor shows that the effects of disorder are stronger in the drag current than in the driving ring. In order to test these results we performed numerical simulations, which we present in the following sections. ### B Numerical simulations #### 1 Perturbative treatment of the Coulomb interaction In evaluating the drag current computationally we consider a discrete ring with $`N`$ lattice sites and $`P<N`$ electrons. We model disorder by placing a random disorder potential at each lattice site. The Hamiltonian for an electron hopping between lattice sites in this ring is given by $`H=-t\left(\sum _{i=1}^{N}C_i^{\dagger }C_{i+1}e^{i\varphi }+\sum _iC_{i-1}^{\dagger }C_ie^{-i\varphi }\right)+\sum _{n=1}^{N}W_nC_n^{\dagger }C_n,`$ where $`\varphi `$ is the magnetic flux through the ring, $`C_i^{\dagger }`$ is the electron creation operator at site $`i`$ and $`W_n`$ is the disorder potential at site $`n`$. For $`N`$ lattice sites, this gives an $`N\times N`$ hopping matrix. In computing the energy shift for the two ring system we work with the $`x`$-space representation of Equation (11), $$\mathrm{\Delta }E=\sum _{n,n^{\prime }=1}^{N}\sum _{m,m^{\prime }=1}^{N}\frac{|\sum _x\sum _{x^{\prime }}V(x-x^{\prime })\langle x|n\rangle \langle n^{\prime }|x\rangle \langle x^{\prime }|m\rangle \langle m^{\prime }|x^{\prime }\rangle f_{n^{\prime }}(1-f_n)f_{m^{\prime }}(1-f_m)|^2}{E_n-E_{n^{\prime }}+E_m-E_{m^{\prime }}}.$$ (24) Here $`x`$ and $`x^{\prime }`$ denote discrete positions of the lattice sites in rings one and two respectively and the $`|n\rangle `$’s and $`E_n`$’s are the eigenvectors and eigenvalues obtained numerically from the hopping matrix. We obtain disorder averaging by evaluating $`\mathrm{\Delta }E`$ with different realizations of the random disorder potentials, $`W_n`$, at values between $`-W`$ and $`W`$, where $`W`$ is the disorder amplitude. The results of the computer simulations are shown in Figure (3) for a system of 10 lattice sites and 7 particles. The ratio of the drag current to its zero disorder value $`J_d/J_d(0)`$ is plotted both for a system in which disorder is present in ring 2 only and for a system of two disordered rings. The ratio $`J_o/J_o(0)`$ is also plotted. For small disorder amplitude, $`J_dW^2`$. #### 2 Non-perturbative treatment for very small rings by Lanczos method In this section we present some exact results for small clusters. We use the Lanczos method to diagonalize the problem, and obtain results that are non–perturbative in the interaction. As a first illustration, Figure (4) shows the persistent and drag currents both with and without disorder. The drag current follows the persistent current of ring 1 in its periodicity of one flux quantum as a function of the applied flux through ring 1. Figure (5) shows the drag current for two systems of different sizes. Note that the dependence with disorder is stronger for the larger system, as expected from the factors of $`\ell /L`$ that appear in the analytical expressions in Section III A. In conclusion we have established that the drag current remains finite for finite disorder. We have shown by numerical simulations of finite clusters and by analytical considerations that the effect of disorder on the drag current is more pronounced than the effect of disorder on the persistent current in a single ring.
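For concreteness, the kind of perturbative lattice computation described in Section III B 1 can be sketched in a few dozen lines: two tight-binding rings, disorder in the flux-carrying ring only (as in Section III A), the second-order energy shift of Eq. (24) evaluated from the numerically obtained eigenstates, and the drag current extracted as a finite-difference derivative with respect to an auxiliary flux. The ring size and filling (10 sites, 7 particles) match those quoted above, but the hopping $`t=1`$, the toy inter-ring potential, and the use of a single disorder realisation per amplitude are illustrative assumptions only; overall prefactors such as $`e/\mathrm{}`$ are omitted, so only the trend with $`W`$ is meaningful and a real calculation would average over many realisations.

```python
import numpy as np

rng = np.random.default_rng(1)

def ring_H(N, phi, onsite):
    """Tight-binding ring (t = 1) with Peierls phase phi per bond and given on-site energies."""
    H = np.diag(onsite.astype(complex))
    for i in range(N):
        j = (i + 1) % N
        H[i, j] += -np.exp(1j * phi)
        H[j, i] += -np.exp(-1j * phi)
    return H

def delta_E(N, P, phi1, phi2, onsite1, onsite2, V):
    """Second-order inter-ring energy shift, in the spirit of Eq. (24), for two N-site rings."""
    E1, U1 = np.linalg.eigh(ring_H(N, phi1, onsite1))
    E2, U2 = np.linalg.eigh(ring_H(N, phi2, onsite2))
    occ, emp = range(P), range(P, N)
    dE = 0.0
    for a in emp:                 # particle state in ring 1
        for ap in occ:            # hole state in ring 1
            rho1 = np.conj(U1[:, a]) * U1[:, ap]          # <a| n(x) |a'> as a function of site x
            for b in emp:             # particle state in ring 2
                for bp in occ:        # hole state in ring 2
                    rho2 = np.conj(U2[:, b]) * U2[:, bp]
                    me = rho1 @ V @ rho2                  # sum over x, x' with V(x - x')
                    dE -= abs(me) ** 2 / (E1[a] - E1[ap] + E2[b] - E2[bp])
    return dE

N, P, d = 10, 7, 2.0                       # sites, particles per ring, inter-ring distance (lattice units)
x = np.arange(N)
sep = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
V = 0.5 / np.sqrt(sep ** 2 + d ** 2)       # toy inter-ring Coulomb matrix (illustrative)

phi1, dphi = 0.3, 1e-3
clean = np.zeros(N)
for W in (0.0, 0.5, 1.0, 2.0):
    w1 = rng.uniform(-W, W, N)             # disorder only in ring 1 (the flux-carrying ring)
    Jd = -(delta_E(N, P, phi1, +dphi, w1, clean, V)
           - delta_E(N, P, phi1, -dphi, w1, clean, V)) / (2.0 * dphi)
    print(f"W = {W:3.1f}   drag current  -d(dE)/dphi2 = {Jd:+.3e}")
```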
no-problem/9901/math9901040.html
ar5iv
text
# A conjecture in the theory of partitions ## 1 Introduction In this Note we return to a conjecture that we were led to formulate in a previous article . This conjecture is stated within the framework of the classical theory of partitions. It is remarkable that its formulation requires only extremely elementary notions. We present it here in its most natural form, and we spell out several of its consequences. ## 2 Notation A partition $`\lambda `$ is a finite non-increasing sequence of positive integers. The number $`n`$ of non-zero entries is called the length of $`\lambda `$. We write $`\lambda =(\lambda _1,\dots ,\lambda _n)`$ and $`n=l(\lambda )`$. We call $`\left|\lambda \right|=\sum _{i=1}^{n}\lambda _i`$ the weight of $`\lambda `$, and for every integer $`i\ge 1`$ we call $`m_i(\lambda )=\mathrm{card}\{j:\lambda _j=i\}`$ the multiplicity of $`i`$ in $`\lambda `$. We identify $`\lambda `$ with its Ferrers diagram $`\{(i,j):1\le i\le l(\lambda ),1\le j\le \lambda _i\}`$. We set $$z_\lambda =\prod _{i\ge 1}i^{m_i(\lambda )}m_i(\lambda )!.$$ In we introduced the following generalization of the classical binomial coefficient. Let $`\lambda `$ be a partition and $`r`$ an integer $`\ge 0`$. We denote by $`\binom{\lambda }{r}`$ the number of ways of choosing $`r`$ points in the diagram of $`\lambda `$ in such a way that at least one point is chosen on each row of $`\lambda `$. Let $`X`$ be an indeterminate and $`n`$ an integer $`\ge 1`$. From now on we write $$(X)_n=X(X+1)\cdots (X+n-1)$$ $$[X]_n=X(X-1)\cdots (X-n+1)$$ for the classical “ascending” and “descending” hypergeometric coefficients. We set $$\binom{X}{n}=\frac{[X]_n}{n!}.$$ ## 3 Our conjecture It is a generalization of the following classical property, which is proved, for instance, in Chapter 1, Section 2, Example 1 of the book by Macdonald . Let $`X`$ be an indeterminate. For every integer $`n\ge 1`$ we have $$\sum _{\left|\mu \right|=n}(-1)^{n-l(\mu )}\frac{X^{l(\mu )}}{z_\mu }=\binom{X}{n},$$ $$\sum _{\left|\mu \right|=n}\frac{X^{l(\mu )}}{z_\mu }=\binom{X+n-1}{n}.$$ These two relations are equivalent upon changing $`X`$ into $`-X`$. In we formulated the following conjecture, which generalizes this result. Let $`X`$ be an indeterminate. For all integers $`n,r,s\ge 1`$ we conjectured the identity $$\begin{array}{c}\sum _{\left|\mu \right|=n}(-1)^{r-l(\mu )}\frac{\binom{\mu }{r}}{z_\mu }X^{l(\mu )-1}\left(\sum _{i=1}^{l(\mu )}(\mu _i)_s\right)\hfill \\ \hfill =(s-1)!\binom{n+s-1}{n-r}\sum _{k=1}^{\mathrm{min}(r,s)}\binom{X-s}{r-k}\binom{s}{k}.\end{array}$$ But it is well known ( , p.13) that the sequence of polynomials $`\{[X]_n,n\ge 0\}`$ is of binomial type , that is, it satisfies the identity $$[X+Y]_n=\sum _{k\ge 0}\binom{n}{k}[X]_{n-k}[Y]_k.$$ We are thus led to present our conjecture in the following form, which is much more natural. ###### Conjecture 1 Let $`X`$ be an indeterminate.
For all integers $`n,r,s\ge 1`$ we have $$\begin{array}{c}\sum _{\left|\mu \right|=n}(-1)^{r-l(\mu )}\frac{\binom{\mu }{r}}{z_\mu }X^{l(\mu )-1}\left(\sum _{i=1}^{l(\mu )}(\mu _i)_s\right)\hfill \\ \hfill =(s-1)!\binom{n+s-1}{n-r}\left[\binom{X}{r}-\binom{X-s}{r}\right].\end{array}$$ Or, equivalently, $$\begin{array}{c}\sum _{\left|\mu \right|=n}\frac{\binom{\mu }{r}}{z_\mu }X^{l(\mu )-1}\left(\sum _{i=1}^{l(\mu )}(\mu _i)_s\right)\hfill \\ \hfill =(s-1)!\binom{n+s-1}{n-r}\left[\binom{X+r+s-1}{r}-\binom{X+r-1}{r}\right].\end{array}$$ The equivalence is obtained by changing $`X`$ into $`-X`$. Since $$\binom{\mu }{r}=0\text{ if }r>\left|\mu \right|,$$ Conjecture 1 is trivial for $`r>n`$. Since $$\binom{\mu }{r}=0\text{ if }r<l(\mu ),$$ the summation on the left-hand side of Conjecture 1 is restricted to the partitions $`\mu `$ such that $`l(\mu )\le r`$. Each side is thus a polynomial in $`X`$ of degree $`r-1`$. Since $$\binom{\mu }{\left|\mu \right|}=1,$$ Conjecture 1 takes the following form for $`r=n`$. ###### Conjecture 2 Let $`X`$ be an indeterminate. For all integers $`n,s\ge 1`$ we have $$\sum _{\left|\mu \right|=n}(-1)^{n-l(\mu )}\frac{X^{l(\mu )-1}}{z_\mu }\left(\sum _{i=1}^{l(\mu )}(\mu _i)_s\right)=(s-1)!\left[\binom{X}{n}-\binom{X-s}{n}\right].$$ Or, equivalently, $$\sum _{\left|\mu \right|=n}\frac{X^{l(\mu )-1}}{z_\mu }\left(\sum _{i=1}^{l(\mu )}(\mu _i)_s\right)=(s-1)!\left[\binom{X+n+s-1}{n}-\binom{X+n-1}{n}\right].$$ Conjecture 2 is verified for $`s=1`$, since in that case one recovers the classical property stated at the beginning of this section. ## 4 Comments We have verified Conjecture 1 in the following particular cases: * for $`n\le 7`$ with $`r`$ and $`s`$ arbitrary (by means of an explicit computation), * for $`s=1`$ with $`n`$ and $`r`$ arbitrary (this is Theorem 1 of ), * for $`r=1,2,3`$ with $`n`$ and $`s`$ arbitrary (see below). In general Conjecture 1 decomposes into $`r-1`$ conjectures obtained by identifying the coefficients of $`X^k`$ $`(0\le k\le r-1)`$ on each side. For $`k=0`$ the summation on the left-hand side is restricted to the partition $`(n)`$ of length $`1`$. Identifying the constant terms on each side, one obtains $$(-1)^r\frac{\binom{n}{r}}{n}(n)_s=(s-1)!\binom{n+s-1}{n-r}\binom{-s}{r}.$$ This identity is very easily verified. On the other hand we have the expansion $$\left[\binom{X}{r}-\binom{X-s}{r}\right]=\frac{1}{r!}\left(rsX^{r-1}-\frac{r(r-1)}{2}s(r+s-1)X^{r-2}+\text{etc…}\right).$$ The coefficient of $`X^{r-1}`$ on the left-hand side of Conjecture 1 corresponds to the summation over the partitions $`\mu `$ of length $`r`$.
Since we have $$\binom{\mu }{l(\mu )}=\prod _{i=1}^{l(\mu )}\mu _i=\prod _{i\ge 1}i^{m_i(\mu )},$$ identifying the coefficients of $`X^{r-1}`$ on each side, one obtains the following. ###### Conjecture 3 For all integers $`n,r,s\ge 1`$ we have $$(r-1)!\sum _{\begin{array}{c}\left|\mu \right|=n\hfill \\ l(\mu )=r\hfill \end{array}}\frac{{\displaystyle \sum _{i\ge 1}}m_i(\mu )(i)_s}{{\displaystyle \prod _{i\ge 1}}m_i(\mu )!}=s!\binom{n+s-1}{n-r}.$$ The coefficient of $`X^{r-2}`$ on the left-hand side of Conjecture 1 corresponds to the summation over the partitions $`\mu `$ of length $`r-1`$. One easily sees that $$\binom{\mu }{l(\mu )+1}=\frac{1}{2}\left(\left|\mu \right|-l(\mu )\right)\prod _{i=1}^{l(\mu )}\mu _i.$$ Identifying the coefficients of $`X^{r-2}`$ on each side, one recovers the same Conjecture 3, but written with $`r`$ replaced by $`r-1`$. Conjecture 3 is verified for $`s=0`$ and $`s=1`$, with $`n`$ and $`r`$ arbitrary (see , page 462). For $`s=0`$ (resp. $`s=1`$) the left-hand side equals the number of compositions (that is, of multi-integers) of weight $`n`$ and length $`r`$ (resp. this number multiplied by $`n/r`$). Conjecture 3 is also verified for $`r=1`$ and $`r=2`$, with $`n`$ and $`s`$ arbitrary. For $`r=1`$ it becomes the identity $$(n)_s=s!\binom{n+s-1}{s}.$$ For $`r=2`$ it reads $$\sum _{i=1}^{n-1}(i)_s=s!\binom{n+s-1}{n-2}.$$ This identity is verified because it is equivalent to the following classical property $$\binom{N}{k}=\sum _{i=1}^{N-1}\binom{i}{k-1}.$$ We generalize this result by means of the following conjecture. ###### Conjecture 4 For all integers $`n,r,s\ge 1`$ we have $$(r-1)!\sum _{\begin{array}{c}\left|\mu \right|=n\hfill \\ l(\mu )=r\hfill \end{array}}\frac{{\displaystyle \sum _{i\ge 1}}m_i(\mu )(i)_s}{{\displaystyle \prod _{i\ge 1}}m_i(\mu )!}=\sum _{i=1}^{n-r+1}\binom{n-i-1}{r-2}(i)_s.$$ Conjecture 4 makes it possible to prove Conjecture 3 easily by induction on the integer $`r`$. We have verified it explicitly for $`r\le n\le 7`$ with $`n`$ and $`s`$ arbitrary.
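Conjectures 3 and 4 involve only integers, so they are easy to test mechanically. The short script below is an independent verification written for this Note's statements (not the author's computation): it checks both conjectures for all $`n\le 7`$, $`1\le r\le n`$ and $`1\le s\le 4`$, testing Conjecture 4 only for $`r\ge 2`$, where the binomial $`\binom{n-i-1}{r-2}`$ is defined.

```python
from fractions import Fraction
from math import comb, factorial
from collections import Counter

def partitions_len(n, r, max_part=None):
    """Partitions of n into exactly r positive parts, as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if r == 0:
        if n == 0:
            yield ()
        return
    for k in range(min(n - r + 1, max_part), 0, -1):
        for rest in partitions_len(n - k, r - 1, k):
            yield (k,) + rest

def rising(x, s):
    """(x)_s = x (x+1) ... (x+s-1)."""
    out = 1
    for j in range(s):
        out *= x + j
    return out

def lhs(n, r, s):
    """Left-hand side common to Conjectures 3 and 4, computed with exact arithmetic."""
    total = Fraction(0)
    for mu in partitions_len(n, r):
        m = Counter(mu)
        num = sum(mi * rising(i, s) for i, mi in m.items())
        den = 1
        for mi in m.values():
            den *= factorial(mi)
        total += Fraction(num, den)
    return factorial(r - 1) * total

for n in range(1, 8):
    for r in range(1, n + 1):
        for s in range(1, 5):
            assert lhs(n, r, s) == factorial(s) * comb(n + s - 1, n - r)            # Conjecture 3
            if r >= 2:
                rhs4 = sum(comb(n - i - 1, r - 2) * rising(i, s) for i in range(1, n - r + 2))
                assert lhs(n, r, s) == rhs4                                         # Conjecture 4
print("Conjectures 3 and 4 verified for n <= 7, r <= n, s <= 4")
```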
no-problem/9901/cond-mat9901037.html
ar5iv
text
# “Smoke Rings” in Ferromagnets ## Abstract It is shown that bulk ferromagnets support propagating non-linear modes that are analogous to the vortex rings, or “smoke rings”, of fluid dynamics. These are circular loops of magnetic vorticity which travel at constant velocity parallel to their axis of symmetry. The topological structure of the continuum theory has important consequences for the properties of these magnetic vortex rings. One finds that there exists a sequence of magnetic vortex rings that are distinguished by a topological invariant (the Hopf invariant). We present analytical and numerical results for the energies, velocities and structures of propagating magnetic vortex rings in ferromagnetic materials. PACS numbers: 11.27.+d, 47.32.Cc, 75.10.Hk, 11.10.Lm In many situations ferromagnetic materials may be viewed as continuous media, in which the state of the system is represented by a vector field indicating the local orientation of the magnetisation. The dynamics of the ferromagnet then follow from the time evolution of this vector field, which obeys a non-linear differential equation known as the Landau-Lifshitz equation. Representing the local orientation of the magnetisation by the vector field $`𝒏(𝒓,t)`$ ($`𝒏`$ is a three-component unit vector), the Landau-Lifshitz equation takes the form $$\rho J\frac{n_i}{t}=ϵ_{ijk}n_j\frac{\delta E}{\delta n_k}$$ (1) in the absence of dissipation. $`\rho `$ is the density of magnetic moments, each of angular momentum $`J`$, and $`E`$ is the energy, which is a functional of $`𝒏(𝒓,t)`$ and its spatial derivatives (summation convention is assumed throughout this paper). The Landau-Lifshitz equation has been the subject of numerous studies, owing to its physical importance as a general description of ferromagnetic materials, and to the rich mathematical properties that result from its combination of non-linearity and non-trivial topology. Of particular interest are the solitons and solitary waves that it has been found to support. In one spatial dimension, the Landau-Lifshitz equation is integrable for certain energy functionals and the complete set of solitons is known. In higher dimensions, the equation is believed to be non-integrable for even simple energy functionals, and the understanding of non-linear excitations is incomplete. Here, we construct a novel class of solitary waves of three-dimensional Landau-Lifshitz ferromagnets. Our approach relies on the conservation of the linear momentum $$P_\alpha 2\pi \rho Jϵ_{\alpha \beta \gamma }d^3𝒓r_\beta \mathrm{\Omega }_\gamma (𝒓,t),$$ (2) where $`\mathrm{\Omega }_\alpha \frac{1}{8\pi }ϵ_{\alpha \beta \gamma }ϵ_{ijk}n_i_\beta n_j_\gamma n_k`$ is the “magnetic vorticity”. This definition of momentum resembles the definition of the hydrodynamic impulse of an incompressible fluid, if fluid vorticity is identified with magnetic vorticity. Our work emphasises the connection to fluid dynamics by showing that there exist solitary waves in ferromagnets that are analogous to the vortex rings of fluid dynamics: these are circular loops of (magnetic) vorticity that propagate at a constant velocity parallel to their axis of symmetry. (The magnetic analogues of vortex/anti-vortex pairs have recently been determined using a similar approach.) 
Furthermore, just as there exist generalisations of the vortex rings in fluid dynamics to vortex ring structures in which the lines of vorticity are linked (as measured by a non-zero helicity), there exist similar generalisations of the magnetic vortex rings in ferromagnets to topologically non-trivial structures involving the linking of vortex lines (in this context the measure of linking is known as the Hopf invariant). As a consequence, ferromagnets support a sequence of topologically-distinct magnetic vortex rings. One can understand how this sequence arises by noting that a magnetic vortex line carries an internal orientation that can be twisted through an integer multiple of $`2\pi `$ as the vortex line traces out a closed loop. By varying the number of rotations of this internal angle, one obtains a sequence of magnetic vortex ring configurations that are topologically distinct (they cannot be interconverted by non-singular deformations), as they relate to different values of the Hopf invariant. (See Ref. for a similar construction for fluids, where topology allows the inserted twists to be non-integer multiples of $`2\pi `$; magnetic vortex rings are more akin to the coreless vortex rings of superfluid <sup>3</sup>He-$`A`$, Ref..) The Hopf invariant, $``$, is the integer invariant that characterises the mappings $`S^3S^2`$ (the vector field $`𝒏(𝒓)`$ describes such a mapping when fixed boundary conditions are imposed, e.g. $`𝒏(|𝒓|\mathrm{})=+\widehat{𝒛}`$); it can be interpreted in terms of the linking number of two vortex lines on which $`𝒏`$ takes different values. This topological invariant has been of interest in recent studies of non-linear field theories, where it has been used to stabilise static solitons. Here we show that it classifies a sequence of dynamical solitary waves of ferromagnets – the magnetic vortex rings. We shall construct magnetic vortex ring solitary waves for ferromagnetic materials described by the energy functional $$E\frac{1}{2}\rho _sd^3𝒓(_\alpha n_i)^2+\frac{1}{2}Ad^3𝒓(1n_z^2),$$ (3) which represents isotropic exchange interactions and uniaxial anisotropy (we consider only $`A0`$, and choose the groundstate to be the uniform state with $`𝒏=+\widehat{𝒛}`$). It is straightforward to verify that, for this functional, the momentum (2) is conserved by the dynamics (1), as is the number of spin-reversals $$N\rho J/\mathrm{}d^3𝒓(1n_z).$$ (4) (Within a full quantum description, the number of spin-reversals would be an integer; within the semiclassical description afforded by the Landau-Lifshitz equation, which is accurate for $`N1`$, $`N`$ a continuous variable.) Our approach is to find configurations, $`𝒏^{}(𝒓)`$, that extremise the energy (3) at given values of the momentum, $`𝑷`$, and number of spin-reversals, $`N`$, within each topological subspace, $``$. This procedure defines an extremal energy $`E_{}^{}(𝑷,N)`$. The variational equations can be used to show that there exist time-dependent solutions of Eqn.(1) of the form $`n_x(𝒓,t)+in_y(𝒓,t)`$ $`=`$ $`[n_x^{}(𝒓𝒗t)+in_y^{}(𝒓𝒗t)]e^{i\omega t}`$ $`n_z(𝒓,t)`$ $`=`$ $`n_z^{}(𝒓𝒗t)`$ where $$v_\alpha =\frac{E_{}^{}}{P_\alpha }|_N;\omega =\frac{1}{\mathrm{}}\frac{E_{}^{}}{N}|_{P_\alpha }.$$ (5) These solutions describe travelling waves which move in space at constant velocity, $`v_\alpha `$, while the magnetisation precesses around the $`z`$-axis at angular frequency $`\omega `$. 
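As an aside, Eq. (1) with the energy (3) can be integrated directly on a lattice. The following is a minimal one-dimensional sketch with illustrative choices ($`\rho _s=A=\rho J=1`$, unit lattice spacing, a simple twisted initial state), not the calculation carried out later in this paper: it steps Eq. (1) exactly as written, $`\rho J\dot{n}_i=ϵ_{ijk}n_j(\delta E/\delta n)_k`$, and merely confirms that the precessional dynamics keeps $`|𝒏|=1`$ and conserves the energy to good numerical accuracy.

```python
import numpy as np

# One-dimensional lattice version of Eq. (1) with the energy (3);
# rho_s = A = rho*J = 1 and unit lattice spacing are illustrative choices.
rho_s, A, rhoJ = 1.0, 1.0, 1.0
Nx, dt, nsteps = 64, 0.01, 2000

def dE_dn(n):
    """Functional derivative of E = (rho_s/2) sum |n_{i+1}-n_i|^2 + (A/2) sum (1 - n_z^2)."""
    lap = np.roll(n, 1, axis=0) + np.roll(n, -1, axis=0) - 2.0 * n
    out = -rho_s * lap
    out[:, 2] -= A * n[:, 2]
    return out

def energy(n):
    diff = np.roll(n, -1, axis=0) - n
    return 0.5 * rho_s * np.sum(diff ** 2) + 0.5 * A * np.sum(1.0 - n[:, 2] ** 2)

def rhs(n):
    # Eq. (1) as written: rho*J dn/dt = n x (dE/dn)
    return np.cross(n, dE_dn(n)) / rhoJ

# initial state: a localised twist on top of the uniform n = +z ground state
x = np.arange(Nx)
theta = 1.5 * np.exp(-0.5 * ((x - Nx / 2) / 4.0) ** 2)
n = np.stack([np.sin(theta), np.zeros(Nx), np.cos(theta)], axis=1)

E0 = energy(n)
for _ in range(nsteps):
    k1 = rhs(n)
    k2 = rhs(n + 0.5 * dt * k1)                       # midpoint (RK2) step
    n = n + dt * k2
    n /= np.linalg.norm(n, axis=1, keepdims=True)     # keep |n| = 1
print(f"relative energy drift after {nsteps} steps: {abs(energy(n) - E0) / E0:.2e}")
```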
We shall find configurations $`n_i^{}(𝒓)`$, resembling magnetic vortex rings, which have spatially-localised energy density and therefore describe propagating solitary waves. It has been suggested previously that solitary waves resembling magnetic vortex rings might exist in ferromagnets, being stabilised by a non-zero Hopf invariant $``$ combined with the conservation of either the number of spin-reversals $`N`$, or the momentum $`𝑷`$. In the present work we make use of the conservation of both $`N`$ and $`𝑷`$, allowing solitary waves with more general motions to be constructed. This proves to be essential for the magnetic vortex rings that we discuss here. Furthermore, since we do not invoke topological stability, we can obtain solitary waves for all values of the Hopf invariant, including $`=0`$. We now turn to the determination of the properties of magnetic vortex ring solitary waves within the differing topological subspaces $``$. First, we make some general remarks. Due to the invariance of the energy (3) under spatial rotations, the extremal energy $`E_{}^{}(𝑷,N)`$ is independent of the direction of momentum; for later convenience, we choose $`𝑷=P\widehat{𝒚}`$. By considering the dependences of Eqns.(2,3,4) under scale transformations, one finds that the extremal energy has the form $`E_{}^{}(P,N)=N^{1/3}_{}(p,a)`$, where $`pP/N^{2/3}`$ and $`aAN^{2/3}`$ (here, and subsequently, we choose units for which $`\rho _s=\rho J=\mathrm{}=1`$). As a first step, consider linearising the equations of motion about the groundstate $`𝒏(𝒓)=\widehat{𝒛}`$. The configurations that extremise the energy at fixed $`N`$ and $`𝑷`$ are easily found: they are spatially-extended, and describe $`N`$ non-interacting spin-waves each of momentum $`𝑷/N`$. The results of this analysis determine the spin-wave dispersion $`_{=0}^{sw}(p,a)=p^2+a`$ (the Hopf invariant is zero for all small-amplitude disturbances). Insight into the results of the full, non-linear theory may be achieved by considering large-radius magnetic vortex ring configurations. Consider a magnetic vortex carrying a total “flux” $`Q`$, that is closed to form a circular loop with radius much larger than the size of the vortex core. (The vortex core size, like the loop radius, varies with $`N`$ and $`P`$. The condition that the core is small compared to the radius is $`P/QN^{2/3}=p/Q1`$.) The flux $`Q`$ is defined to be the integral of the vorticity across a surface pierced by the vortex; since the magnetisation tends to a constant, $`𝒏=\widehat{𝒛}`$, far from the vortex core, $`Q`$ is an integer (this is the topological invariant that classifies the mappings $`S^2S^2`$). Neglecting the size of the vortex core in comparison to the radius of the loop, one can determine the minimum energy of these magnetic vortex rings using results from previous studies of topological solitons in two-dimensional ferromagnets. One finds $`_{}^{asym}(p,a)`$ $`=`$ $`4\pi \sqrt{|Q|p}+a/|Q|,a^2p/Q,`$ (6) $`_{}^{asym}(p,a)`$ $`=`$ $`2\sqrt{2\pi a}(p/|Q|)^{1/4},a^2p/Q,`$ (7) which are valid for $`p/Q1`$, when the vortex core is small compared to the radius of the vortex ring ($`Q0`$ has been assumed). The relative sizes of $`a^2`$ and $`p/Q`$ determine whether the vortex core is small \[Eqn. (6)\] or large \[Eqn. (7)\] compared to a lengthscale set by the anisotropy (the width of a domain wall). 
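Before turning to the full numerics, the $`a=0`$ crossover between the large-$`p`$ vortex-ring branch of Eq. (6) (with $`|Q|=1`$) and $`N`$ free spin waves (scaled energy $`p^2+a`$) can be estimated directly from these asymptotic forms. Since Eq. (6) is strictly valid only for $`p1`$, the few lines below give no more than an order-of-magnitude consistency check against the thresholds quoted in the next paragraph.

```python
import numpy as np

# a = 0, |Q| = 1: scaled vortex-ring energy 4*pi*sqrt(p) (Eq. 6) versus the spin-wave value p^2.
# Eq. (6) assumes p >> 1, so this only locates the crossover roughly.
p_cross = (4.0 * np.pi) ** (2.0 / 3.0)
assert abs(4.0 * np.pi * np.sqrt(p_cross) - p_cross ** 2) < 1e-9
print(f"asymptotic estimate of the crossing: p = {p_cross:.2f} "
      f"(the full minimisation quoted below gives 4.66 for H = 0 and 5.1 for H = 1)")
```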
In order to determine the properties of the magnetic vortex rings when the size of the vortex core is comparable to the radius of the ring ($`p/Q1`$) we have employed numerical analysis (we have not been able to solve the variational equations analytically). We simplify the problem by minimising the energy within a class of axisymmetric configurations that is consistent with the general variational equations: $`\left[n_x(𝒓)+in_y(𝒓)\right]`$ $`=`$ $`\left[n_x(\rho ,y)+in_y(\rho ,y)\right]e^{iM\varphi },y0,`$ $`\left[n_x(𝒓)+in_y(𝒓)\right]`$ $`=`$ $`\left[n_x(\rho ,y)in_y(\rho ,y)\right]e^{iM\varphi },y0,`$ $`n_z(𝒓)`$ $`=`$ $`n_z(\rho ,|y|),`$ where $`(\rho ,y,\varphi )`$ are cylindrical polar co-ordinates about the $`y`$-axis. We thereby reduce configuration space to the functions $`𝒏(\rho ,y)`$ on the quarter plane $`(\rho 0,y0)`$ and the integer parameter $`M`$. The Hopf invariant of these configurations is $`=MQ`$, where $`Q`$ is the total flux through the half-plane $`(\rho 0)`$. For $`M0`$, finite energy configurations have $`𝒏\widehat{𝒛}`$ at $`\rho 0`$ as well as at $`\rho ^2+y^2\mathrm{}`$, so the total flux $`Q`$, and hence the Hopf invariant, take only integer values. We have studied the isotropic ferromagnet, $`a=0`$, by a discretisation of the region $`(0\rho L,0yL/2)`$ on square lattices of up to $`600\times 300`$ sites. Fixed boundary conditions, $`𝒏=\widehat{𝒛}`$, were imposed on $`\rho =L`$ and $`y=L/2`$, and $`n_y=0`$ was imposed on $`y=0`$ for consistency with the above ansatz (the requirement that $`𝒏=\widehat{𝒛}`$ on $`\rho =0`$ when $`M0`$ emerges naturally from the energy minimisation). Energy minimisation was achieved by a conjugate gradient method with constraints on $`N`$ and $`P`$ imposed by an augmented Lagrangian technique. The energies of the magnetic vortex ring solitary waves with $`=0`$ ($`M=0`$) and $`=1`$ ($`M=Q=1`$) are shown in Fig. 1 as functions of the scaled momentum $`p`$. At large momenta, both branches approach the asymptotic expression (6) for a $`Q=1`$ vortex. Assuming that the leading corrections shown in the inset are linear in $`1/p^{3/2}`$, one finds from Eqns. (5,6) that a vortex loop of large radius, $`p/Q1`$, moves in such a way that its precession frequency $`\omega `$ is comparable to the frequency with which it translates a distance of order its own radius. At small momenta, the solitary waves become higher in energy than non-interacting spin-waves for $`p<4.66`$ ($`=0`$) and $`p<5.1`$ ($`=1`$). Both branches are found to persist below these values for a small range of $`p`$ as local energy minima. Typical configurations of the solitary waves are illustrated in Figures 2 and 3, which show the local magnetisation within the $`xy`$-plane and three-dimensional representations of the curves on which $`𝒏=\widehat{𝒛}`$ and $`𝒏=+\widehat{𝒙}`$. These curves allow a direct visualisation of the topological structure of the configurations, as measured by the Hopf invariant. In Fig. 2, the curves are simply circles centred on the $`y`$-axis: they are unlinked, illustrating that the Hopf invariant is zero, $`=0`$. In Fig. 3, the two curves are linked once, illustrating a non-trivial topological configuration with unit Hopf invariant, $`=1`$. For larger values of the Hopf invariant ($`2`$) we find that corrections arising from the finite lattice size in our calculations become more significant. (Finite-size effects are already apparent in Fig. 1 for $`=1`$.) 
These effects prevent a convincing demonstration of the existence of (non-singular) magnetic vortex rings with $`2`$ that describe stable energy minima. The construction of stable magnetic vortex rings with $`2`$ may require the use of non-axisymmetric configurations. The results of the present study demonstrate that magnetic vortex rings with $`=0`$ and $`=1`$ do describe stable energy minima within the class of axisymmetric configurations assumed. Our results provide the structures, energies, and therefore the velocities and precession frequencies \[see Eqns. (5)\] of these propagating non-linear modes. Further numerical studies indicate that similar magnetic vortex ring solitary waves exist for non-zero anisotropy, $`a0`$. (In fact, the magnetic vortex rings are apparently more favourable: e.g. the branch of solitary waves with $`=0`$ persists down to $`p=0`$ for $`a23.5`$, consistent with the existence of purely precessional solitary waves in a uniaxial ferromagnet.) Any additional sources of magnetic anisotropy in experimental systems, or the inclusion of magnetic dipole interactions, will lead to corrections that are small when the magnetic vortex rings are sufficiently small (compared to a characteristic lengthscale set by the strength of these additional couplings). One may wonder why magnetic vortex rings have not as yet been observed experimentally, whilst vortex rings in fluids are a matter of everyday experience. The answer lies in the difficulty of creating non-linear excitations in solid-state materials. For example, magnetic vortex rings involve a large number of spin-reversals, so they are not accessed in standard inelastic neutron scattering experiments which probe single spin-flip excitations. The creation of magnetic vortex rings in ferromagnetic materials will require the use of other experimental techniques. One way in which magnetic vortex rings could be excited experimentally, the details of which we are currently investigating, is to exploit an instability that we have discovered to the creation of magnetic vortex rings in itinerant ferromagnets of mesoscopic size under conditions of high current density. This instability signals a transition to a form of magnetic turbulence in mesoscopic ferromagnets, driven by the exchange coupling between the magnetisation and the spins of the itinerant electrons, and may be relevant to the unexplained dissipative phenomena observed in Ref.. The author is grateful to Mike Gunn and Richard Battye for helpful discussions, and to Pembroke College, Cambridge and the Royal Society for financial support.
no-problem/9901/astro-ph9901258.html
ar5iv
text
# The Dwarf Spheroidal Galaxies in the Galactic Halo ## 1. Introduction At the present time, the Galaxy has nine identified dwarf spheroidal (dSph) galaxy companions: Sculptor, Fornax, Draco, Ursa Minor, Leo I, Leo II, Carina, Sextans and Sagittarius. These systems appear to be typical members of the dSph/dE class of low luminosity, small, low surface brightness galaxies. The wording “at the present time” is deliberately chosen because it is unclear whether the current population of Galactic dSph companions represents most, or only a small fraction, of the dSphs that existed in the halo at early times in the life of the Galaxy. For example, the Sgr dSph is currently undergoing a significant interaction with the Milky Way and, within the next billion years or so, it probably will no longer be recognizable as a distinct object, having merged into the Galactic halo. Thus, as other contributions in this volume will emphasize, the disruption of dSph galaxies might have contributed significantly to the make-up of the Galactic halo (see also Mateo 1996). In this review, however, I will concentrate on the properties of the current Galactic halo dSphs. Indeed, these galaxies are particularly relevant to the conference theme of “Bright Stars & Dark Matter”, since they are the only Galactic halo systems where both stars and dark matter are found together in bound systems. In the first part of this contribution, then, the observed velocity dispersions of the Galactic dSphs and the implications for dark matter contents will be considered. This is followed in the second part by a discussion of some new results derived from characteristics of dSph stars. ## 2. Galactic Dwarf Spheroidal Velocity Disperions In 1983 the late Marc Aaronson published a paper (Aaronson 1983) that revolutionized our concept of the masses and dynamics of dSph galaxies. His paper indicated that the velocity dispersion of the Draco dSph, albeit based on only 5 observations of 4 stars, was at least $``$6.5 kms<sup>-1</sup> and that, as a consequence, the mass-to-light ratio of this dSph exceeded that of globular clusters by at least an order of magnitude. Since the stellar population of Draco is apparently similar to those of Galactic globular clusters, Aaronson’s result implied the existence of a substantial amount of dark matter in this dSph. Since the publication of that paper there have been a number of similar studies of Draco and of the other Galactic dSphs, with increasingly large samples of stars. Yet the basic result has remained the same. Indeed, with the publication of results for Leo I (Mateo et al. 1998), we can now say that all the Galactic dSphs appear to contain significant amounts of dark matter – see Mateo (1997) and Olszewski (1998) for recent reviews of this subject. In this section the recent work on Leo I (Mateo et al. 1998) is first considered. Other than the large Galactocentric distance, which makes the target stars relatively faint (and therefore required use of the Keck telescope for the observations), the Mateo et al. (1998) study of the velocity dispersion of Leo I is typical of existing Galactic dSph velocity dispersion studies. It therefore provides an example with which to highlight the steps (and the potential pitfalls) in the process by which observations of individual radial velocities are transformed into a dSph mass-to-light ratio estimate. The dark matter content implications of these results, and those for other Galactic dSphs, are then presented. 
The section concludes with a brief discussion of an alternative interpretation of the large observed velocity dispersions – that the dSphs are undergoing tidal disruption and are thus not in virial equilibrium. ### 2.1. Leo I – A Case Study The sample of Leo I stars observed by Mateo et al. (1998) consists of 33 red giants selected from a colour-magnitude study. These stars were observed at high dispersion (R $``$ 34,000) but the resultant spectra have relatively low S/N ratio. Velocities are obtained by cross-correlating the spectra with high S/N spectra of radial velocity standards. The high systemic velocity of Leo I, $``$290 kms<sup>-1</sup>, assures us of Leo I membership for all the candidates observed, though for some other Galactic dSphs this member/non-member discrimination is not as clear cut. A total of 40 individual measurements were made with the typical velocity error being $``$2.2 kms<sup>-1</sup> (the actual errors range from 1.4 to 4.8 kms<sup>-1</sup> depending on the S/N ratio of the spectrum). Then, based on a number of different techniques (e.g. weighted standard deviation, bi-weight estimator, maximum likelyhood), all of which produce similar values, the observed velocity dispersion for this sample of Leo I stars is $`\sigma _{obs}`$ = 8.8 $`\pm `$ 1.3 kms<sup>-1</sup>. Note that the error associated with this dispersion comes principally from the “sampling error” arising from the finite size of the observed sample, and that the value applies to the core of Leo I, since all but one of the stars observed have (in projection at least) radial distances less than the core radius. Further, within this limited radial range, there is no indication of any change in $`\sigma _{obs}`$ with location, nor is there any evidence for systematic rotation of Leo I, at least in the core region. These latter results are also commonly found for other Galactic dSph systems. Mateo et al. (1998) then apply what is now standard formalism to calculate from the observed dispersion a central mass-to-light ratio ($`\rho _0`$/I<sub>0,V</sub>), using the observed central surface brightness of Leo I, and a total mass-to-light ratio (M/L<sub>total,V</sub>) from the total integrated magnitude of the dSph. Both these calculations require a length scale; the core radius derived from the observed surface brightness (or surface density) profile is usually used. These structural parameters are now (at least moderately) well known for all Galactic dSphs, though they are, of course, based on the stellar distribution which may not reflect the underlying mass distribution. The scale factors required in these calculations are usually taken as those for the King (1966) model which best fits the surface brightness/density profile. These models are appropriate for spherically symmetric systems with isotropic velocity distributions whereas the Galactic dSphs have significant flattening yet lack systematic rotation; consequently, they presumably have anisotropic velocity distributions. The use of King model parameters, however, is not regarded as crucial (e.g. Merritt 1988); much more fundamental (e.g. Pyror & Kormendy 1990, Pyror 1994) is the implicit assumption here that “mass follows light”, an assumption for which there is little justification at present. Applying this formalism, Mateo et al. (1998) find $`\rho _0`$/I<sub>0,V</sub> = 3.5 $`\pm `$ 1.4 and M/L<sub>total,V</sub> = 5.6 $`\pm `$ 2.1 indicating that M/L<sub>V</sub> for Leo I could lie anywhere between $``$2 and 8 in solar units. 
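The first step of this chain — turning a few dozen individual velocities with per-star errors into $`\sigma _{obs}`$ = 8.8 $`\pm `$ 1.3 kms<sup>-1</sup> — is typically carried out with one of the estimators listed above. The sketch below illustrates the maximum-likelihood variant, in which an intrinsic dispersion is added in quadrature to each measurement error; the data drawn here are synthetic and invented for illustration (only the sample size, systemic velocity, dispersion and error range echo the numbers quoted above), and this is not the Mateo et al. (1998) code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Synthetic stand-in for a Leo I-like sample (all draws invented for illustration):
# 40 stars, systemic velocity ~290 km/s, true dispersion 8.8 km/s, errors 1.4-4.8 km/s.
n_stars, v_sys, sigma_true = 40, 290.0, 8.8
err = rng.uniform(1.4, 4.8, n_stars)
v = rng.normal(v_sys, np.sqrt(sigma_true ** 2 + err ** 2))

def neg_log_like(p):
    """Gaussian model: intrinsic dispersion added in quadrature to each measurement error."""
    mean, sigma = p
    var = sigma ** 2 + err ** 2
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (v - mean) ** 2 / var)

fit = minimize(neg_log_like, x0=[v.mean(), v.std()], method="Nelder-Mead")
mean_fit, sigma_fit = abs(fit.x[0]), abs(fit.x[1])
samp_err = sigma_fit / np.sqrt(2.0 * n_stars)          # rough sampling error on sigma
print(f"<v> = {mean_fit:.1f} km/s,  sigma = {sigma_fit:.1f} +/- {samp_err:.1f} km/s")
```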
How then are we to interpret these values? Mateo et al. (1998) point out that data for low central concentration globular clusters (e.g. Pryor & Meylan 1993), which are the ones for which mass segregation effects should be minor, yield $`<`$(M/L<sub>V</sub>)$`>`$ = 1.5 $`\pm `$ 0.1 when analyzed in the same way as the Leo I observations. At first sight a direct comparison of this mean value with the derived M/L values for Leo I doesn’t convincingly argue for the presence of a significant dark matter content in the dSph. However, we must keep in mind that the stellar population of Leo I is not that of a globular cluster (as is the case for many of the Galactic dSphs). In fact, Lee et al. (1993) have shown that the stellar population of Leo I is dominated by stars of intermediate-age (i.e. ages $``$ 2 – 10 Gyr) so that the “mean age” of Leo I is considerably younger than that of the globular clusters. Mateo et al. (1998) have constructed simple stellar population models to correct for this effect and have calculated that for a valid comparison with the Galactic globular clusters, the observed Leo I M/L<sub>V</sub> value should be increased by a factor of approximately two. In other words, after compensating for stellar population differences, we have (M/L<sub>V</sub>)<sub>LeoI</sub> $``$ 9 and (M/L<sub>V</sub>)$`_{globcl}`$ $``$ 1.5 – a clear indication that there is a significant dark matter component in Leo I. One might question the strength of this conclusion on the basis that the Mateo et al. (1998) data, like the situation for many of the Galactic dSphs, are essentially single epoch observations and that as a result, the presence of binary stars might have inflated the observed dispersion and caused the M/L ratio to be overestimated. The extensive work of Olszewski et al. (1996), however, has shown that this is not a valid objection. These authors have extensive repeat observations, extending over many years, of a large number of stars in the Galactic dSphs Draco and Ursa Minor. Indeed, despite the fact that the minimum possible binary period, given the radii of the red giants observed in these programs, is approximately six months, Olszewski et al. (1996) have sufficient data to investigate the binary frequency in these dSphs. Among the stars observed, they find six likely binaries, four in Ursa Minor and two in Draco. Then, via an exhaustive set of simulations, Olszewski et al. (1996) convert this observed binary frequency among red giants into an estimate of the overall binary star frequency in the dSphs. Surprisingly, they find that the binary frequency in Draco and Ursa Minor might be as much as three times higher than it is in Population I samples, and as much as five times higher than is the case for field Galactic halo samples. Discussion of this intriguing result is beyond the scope of this contribution. Nevertheless, the extensive simulations of Olszewski et al. (1996) show that even with such a high binary frequency, the effect of undetected binaries on the velocity dispersion determined from datasets similar to that of Mateo et al. (1998) for Leo I, is small and cannot be used as an explanation for the high mass-to-light ratio. ### 2.2. 
The Dark Matter Interpretation The observed mass-to-light ratios for the Galactic dSph galaxies<sup>1</sup><sup>1</sup>1The Sgr dSph is excluded from the discussion here as the extent to which its obvious interaction with the Milky Way compromises the interpretation of the observed velocity dispersion measurements is unclear (but see also Ibata et al. 1997). range from values in excess of 50 for the low luminosity systems Draco and Ursa Minor (e.g. Armandroff et al. 1995) to values of order 5 for the more luminous systems (e.g. Mateo 1998 and the references therein). As noted above, the star formation histories of the Galactic dSphs vary considerably from system to system and a proper comparison of M/L values must then take this into account. We follow the procedures of Mateo et al. (1998) and reduce the observed M/L values (taken from Mateo 1998) to those which would be expected if the dSphs were composed only of stars similar to those in globular clusters. The results of this process are shown in Fig. 1. As has been noted many times in the past (using uncorrected values), Fig. 1 reveals a general correlation with the least luminous systems showing the highest M/L values. The dashed curve in Fig. 1 is the relation (cf. Mateo et al. 1998) expected if the luminous stars in each dSph are embedded in a dark matter halo of constant mass, independent of the luminosity of the dSph. That is, the line is derived from the relation: (M/L)<sub>total,V,corr</sub> = (M/L)<sub>V,stars</sub> \+ M(Dark Matter)/L<sub>stars,V,corr</sub> where, since we have corrected to a globular cluster like population, (M/L)<sub>V,stars</sub> = 1.5 in solar units. For the curve shown in Fig. 1, M(Dark Matter) = $`2\times 10^7`$ solar masses. The constant dark matter mass line gives a reasonable representation of the Galactic dSph points in Fig. 1, but we should not take this concordance too seriously. Recall that the mass estimates use scale factors from spherically symmetric isotropic King (1966) models, which are unlikely to be appropriate for real (flattened, anisotropic) dSphs. More significantly, the mass estimates are based on the assumption that “mass follows light”. This latter assumption can be investigated (e.g. Pyror 1994) if radial velocities are determined for large samples ($``$100 stars) of dSph stars in regions that extend well beyond the core radius of the visible population. From such samples the observed velocity dispersion profile can be constructed and compared to the predictions of models which either retain or relax the “mass follows light” assumption. Such datasets are just now becoming available. In all cases the observed velocity dispersion profiles are flatter than the predictions based on the King model that best fits the observed surface density profile (e.g. Fig. 1 of Mateo 1997), indicating a mass distribution more extended than the light. This is an area where we can expect to see interesting new developments in the near future, but it seems unlikely that the new models and new data will remove the requirement for a significant dark matter content in the Galactic dSph galaxies. ### 2.3. Alternatives to the Dark Matter Interpretation? The oft-discussed alternative to the “significant dark matter content” interpretation for the large observed velocity dispersions in Galactic dSphs is that the dSphs are, in fact, tidally disrupted remnants that are not in virial equilibrium (e.g. Kroupa 1997 and references therein). 
Consequently, the observed velocity dispersions cannot be true reflections of the actual dSph masses. There is insufficient space here to discuss this alternative view in detail. However, when considering its validity, at least two points should be kept in mind. First, the Galactic dSphs exhibit correlations between quantities such as luminosity, surface brightness, length scale and mean metal abundance. While Kroupa (1997) indicates how a surface brightness – absolute magnitude correlation might be expected among a set of tidally disrupted remnants, the corresponding absolute magnitude – mean abundance and surface brightness – mean abundance correlations among the Galactic dSphs have no explanation in this scenario. These correlations, which cover a luminosity range greater than a factor of 100, are also followed by the dSph companions to M31, by the isolated Local Group dSph Tucana, and even by dSphs beyond the Local Group (see, for example, Fig. 18 of Caldwell et al. 1998). The similarity of these relationships in different environments then argues rather strongly against the interpretation of the Galactic dSphs as nothing but a set of tidally disrupted remnants. Second, the Galactic dSphs Leo I and Leo II have Galactocentric distances that exceed 200 kpc. Thus neither of these dSphs are subject to Galactic tides to anything like the same extent as the inner dSphs (R $``$ 65 – 90 kpc, excluding Sgr). Yet neither Leo I nor Leo II lies in a distinctly different location, relative to the other Galactic dSphs, in Fig. 1, for example. Further, both Leo systems have M/L values that imply the presence of dark matter. This lack of separation between the “near” and “far” Galactic dSphs is then a further argument against the tidally disrupted remnants interpretation for the Galactic dSphs. Nevertheless, if the tidally disrupted scenario doesn’t apply to all Galactic dSphs, we can at least ask if there are any particular cases (other than the Sgr dSph, which is clearly being strongly affected by Galactic tides) where such a scenario might apply. The signatures of such a situation might well include large scale streaming motions<sup>2</sup><sup>2</sup>2Tidal disruption models, e.g. Piatek & Pryor (1995) and Oh et al. (1995), show that this process produces large scale ordered motions rather than large random motions., sub-structure and “extra-tidal” stars as well as appreciable line-of-sight depth. Given these potential signatures it is then interesting to consider recent results for the Galactic dSph Ursa Minor. This dSph is one of the closest of the Galaxy’s dSph companions and it has one of the largest apparent M/L values (M/L<sub>V</sub> = 77 $`\pm `$ 13, Armandroff et al. 1995). With an ellipticity e = 0.55, Ursa Minor is also the flattest of the Galactic dSph companions. The results of interest are as follows: (1) Kleyna et al. (1998) have used deep wide-field CCD imaging to confirm and establish the statistical significance of an asymmetry in the stellar distribution along the major axis of Ursa Minor. In a dynamically stable system such asymmetries should be erased on timescales that are of order at most a few crossing times, or $``$10<sup>9</sup> years in Ursa Minor. (2) Kroupa (1997) has generated a tidally disrupted model for Ursa Minor in which he suggests that the true major axis of the dSph lies at a significant angle to the plane of the sky. This generates a line-of-sight depth which, for a core radius of $``$200 pc, Kleyna et al. (1998) estimate as $``$0.04 mag in size. 
These authors then searched for this effect by calculating the mean apparent magnitude of samples of horizontal branch stars along the major axis. They find that $`<V>_{SW}-<V>_{NE}`$ = 0.025 $`\pm `$ 0.021 mag and $`<I>_{SW}-<I>_{NE}`$ = 0.036 $`\pm `$ 0.035 mag, which is suggestive of the postulated effect but far from a convincing demonstration. (3) Both Hargreaves et al. (1994) and Armandroff et al. (1995) have reported a velocity gradient approximately along the minor axis in Ursa Minor. Tidal disruption models generally have streaming motions that are revealed as apparent major-axis rotation, but, given the fact that of all the dSphs so far investigated, Ursa Minor is the only one to show any rotation signature (whether about the major or minor axis), it is not unreasonable to suggest that this observed motion is the consequence of a tidal effect. The size of these ordered motions ($``$3 km s<sup>-1</sup> per 100 pc in projection), however, is considerably smaller than the observed dispersion ($``$9 km s<sup>-1</sup>). Thus it is unlikely that the observed M/L for Ursa Minor is significantly overestimated. Then, given that Ursa Minor is seemingly the best object for which a tidal disruption model might be viable, yet such a model is not compellingly required, it seems reasonable to conclude that the Galactic dSph galaxies do indeed contain significant amounts of dark matter. ## 3. Galactic Dwarf Spheroidal Stellar Populations One of the most interesting developments in the study of dSph galaxies in recent years has been the recognition that the star formation histories vary significantly from dSph to dSph (see, for example, the recent reviews of Da Costa 1997a, 1998, Mateo 1998, and the references therein). However, rather than consider the latest results on their evolutionary history (e.g. Stetson et al. 1998), two issues relevant to the conference theme will be considered. The first is the age of the oldest stars in the Galactic dSph companions as compared to the age of the oldest stars in other Galactic halo objects. The second is a discussion of how element abundance ratios in dSph red giants, recently determined for the first time, compare with similar data for field halo stars. ### 3.1. The Age of the Oldest Populations All of the Galactic dSphs are known to contain RR Lyrae variable stars. Since such variables are also found in Galactic halo globular clusters, the occurrence of RR Lyraes in dSph galaxies has conventionally been taken as evidence for the presence of a stellar population in each dSph that has an age comparable to those of the Galactic globular clusters. The relative size of this old population varies from dSph to dSph, as does the subsequent star formation history. Nevertheless, the presence of this old population has led to the qualitative statement that star formation commenced in the Galactic halo dSph galaxies at an epoch similar to that for the formation of the Galactic halo globular clusters, regardless of the dSph’s location in the proto-Galactic halo. However, if we are to increase our understanding of the processes that occurred during the earliest stages of the evolution of the Galaxy’s halo, we need quantitative results. In particular, we seek a quantitative answer to the question “How similar in age are the ‘first generation’ stars that formed in the various components of the Galactic halo?”. 
The advent of the WFPC2 camera onboard the Hubble Space Telescope has made attempting to answer this question feasible, and a number of relevant studies have recently appeared. For example, Grillmair et al. (1998) have used HST/WFPC2 observations of a field near the centre of the Draco dSph to produce a colour-magnitude (c-m) diagram that reaches well below the main sequence turnoff in this dSph. They then use these data to suggest that Draco is 1.6 $`\pm `$ 2.5 Gyr older than the metal-poor halo globular clusters M68 and M92. This result contrasts with previous expectations, based on Draco’s relatively red horizontal branch morphology, that this dSph would prove to be somewhat younger than Galactic halo globular clusters of comparable metallicity. It should be kept in mind, however, that the Grillmair et al. (1998) observations apply to a field region where the presence of significant abundance and possible age ranges complicate the interpretation. Less ambiguous results require studies of dSph star clusters, since star clusters are single age and abundance populations. The two most luminous of the Galaxy’s dSph companions, Fornax and Sagittarius, possess their own globular cluster systems, and for both these dSphs there are new results on the ages of their star clusters. For example, Montegriffo et al. (1998) have shown that Terzan 8, the most metal-poor of the four globular clusters associated with Sagittarius, has the same age as the metal-poor Galactic halo clusters M55 and M68. The precision of this result, however, is limited to approximately $`\pm `$2 – 3 Gyr by the uncertainities in their ground-based c-m diagram. For the Fornax dSph, Buonanno et al. (1998) have used WFPC2 images to produce c-m diagrams that reach below the main sequence turnoff for four of the five Fornax globular clusters. They find that these four Fornax globular clusters have identical ages to within $`\pm `$1 Gyr. As for Draco, the similarity of these cluster ages contrasts rather strongly with the expected age range of $``$2 Gyr based on the horizontal branch morphology differences shown by the Fornax cluster c-m diagrams and the assumption that age is the “second parameter”. As Buonanno et al. (1998) note, the result of closely similar ages for all four clusters suggests that either horizontal branch morphology is more sensitive to age than previously thought, or some other quantity besides age is responsible for the horizontal branch morphology differences. Buonanno et al. (1998) then go on to compare their Fornax cluster c-m diagrams with those for Galactic halo globular clusters. They find ages that are not significantly different, at the 1 – 2 Gyr level, from those for the Galactic halo clusters M92 and M68. Thus, to a precision of $``$1 – 2 Gyr, we can conclude that Fornax, Sgr and Draco (and also probably Ursa Minor – see Olszewski & Aaronson 1985) did indeed commence forming stars at the same time as the metal-poor globular clusters were forming in the proto-Galactic halo. Other recent results allow this conclusion to be widened. In particular, Harris et al. (1997) have shown that the metal-poor globular cluster NGC 2419, which lies in the extreme outer halo at a Galactocentric distance of $``$90 kpc, has an age that is indistinguishable from that of M92 to a precision of better than 1 Gyr. Similarly, Olsen et al. (1998) and Johnson et al. (1998) have obtained WFPC2 data for a total of eight globular clusters associated with the Large Magellanic Cloud. 
Their results show that, first, to within an upper limit of $``$1 Gyr, there is no detectable age range among these LMC star clusters. Second, the clusters are indistinguishable in age, at the $`\pm `$1 Gyr level, from Galactic halo globular clusters of comparable metal abundance. All these results then suggest that, regardless of the subsequent star formation histories, the initial epoch of star formation was well synchonized among all the components of the proto-Galactic halo, which may well have been distributed over a volume at least $``$100 kpc in radius. In other words, despite the very different locations, masses, densities and dark matter contents of the proto-LMC, the proto-dSphs and the proto-NGC 2419 gas clouds, etc, in the proto-Galactic halo, the initial episode of star formation in all these components seems to have been well co-ordinated. An understanding of how this comes about would undoubtedly advance our knowledge of galaxy formation and of conditions in the early Universe. ### 3.2. Abundance Ratios in dSph Red Giants The study of element abundance ratios, typically with respect to iron, in the atmospheres of the members of a stellar system is important because such ratios, and their variation with overall abundance, can provide significant information on the enrichment processes that occur during the evolution of the stellar system. In particular, abundance ratio studies of stars in dSph galaxies should be capable of providing direct constraints on their chemical evolution. It is also possible that such studies might provide a signature to mark those Galactic halo field stars that have come from disrupted dSph galaxies. The determination of abundance ratios for dSph stars is no easy task. Even the brightest red giants in the nearest dSphs are relatively faint and thus a large telescope is required. The results that are described below come from the Keck telescope and the HIRES spectrograph, but they should be regarded as precursors for what will undoubtedly be an extensive area of study once other large telescopes (e.g. HET, Gemini, VLT, Subaru, Magellan, etc) begin science operations. The first such study is that of Shetrone et al. (1998) who have analyzed high dispersion spectra of four red giants in Draco. They find, firstly, that these stars show a substantial range in iron abundance: the \[Fe/H\] values are –3.0, –2.4, –1.7 and –1.4 dex, respectively, where in each case the uncertainty in the \[Fe/H\] value is $``$0.1 dex. The existence of this large abundance range comes as no real surprise since we have known for some time that most, if not all, Galactic halo dSphs possess significant internal abundance ranges (e.g. Suntzeff 1993 and references therein). However, studies of large unbiased samples of red giants in dSphs from which to determine the abundance distribution functions are generally lacking at the present time. The abundance ratios for these Draco red giants do, nevertheless, reveal some interesting differences from globular cluster and field halo red giants. These are illustrated in Fig. 2 where the Shetrone et al. (1998) results for Draco and for red giants in the globular clusters M92 (\[Fe/H\] = –2.27) and M3 (\[Fe/H\] = –1.53) are compared with the results of McWilliam et al. (1995) and McWilliam (1998) for metal-poor field halo stars. For the $`\alpha `$–element calcium, the \[Ca/Fe\] values for the globular cluster and field halo stars shown in the upper panel of Fig. 
2 are consistent with the trends exhibited by larger samples of stars (see, e.g., Norris, these proceedings). However, the Draco stars, especially the two more metal-poor objects, have \[Ca/Fe\] values that are significantly lower than the overall trend. These two stars have \[Ca/Fe\] $``$ 0.1 while the 24 field halo stars with –3.2 $``$ \[Fe/H\] $``$ –2.0 in the McWilliam et al. (1995) sample have $`<`$\[Ca/Fe\]$`>`$ = 0.42 $`\pm `$ 0.02 dex. On the other hand, as the middle panel of Fig. 2 shows, the results for magnesium, which is also an $`\alpha `$–element, show no such effect. The two metal-poor Draco red giants have \[Mg/Fe\] values consistent with the field halo star and globular cluster red giant determinations. The low \[Mg/Fe\] value for one of the more metal-rich Draco red giants will be discussed below. The lower panel of Fig. 2 shows the results for the s-process element barium. The two more metal-rich Draco giants have \[Ba/Fe\] values that are consistent with the results of McWilliam (1998) for a sample of metal-poor field halo stars. This is also true for the globular cluster red giants. However, as was found for \[Ca/Fe\], the lower panel of Fig. 2 reveals that the two more metal-poor Draco red giants have significantly lower \[Ba/Fe\] values than do field halo stars with similar \[Fe/H\]. For the Draco star D24 this difference in \[Ba/Fe\] is $``$1 dex while for Draco star D119 the difference is indeterminate since there is only an upper limit on \[Ba/Fe\] for this most metal-poor star. This upper limit though corresponds to the lowest measured values of \[Ba/Fe\] in the McWilliam (1998) sample. What are we to make of these results? If they are substantiated by a larger sample of stars observed at higher S/N (the Shetrone et al. 1998 Draco spectra have S/N $``$ 24–30), then they might well indicate that the IMF in the proto-Draco gas cloud was different from that in the Galactic halo. One further point deserves comment. As the middle panel of Fig. 2 shows, the most metal-rich of the Draco red giants studied by Shetrone et al. (1998) exhibits a significant Mg depletion. As Shetrone et al. (1998) point out, this star also possesses an oxygen depletion and a modest enhancement of sodium. Together with a postulated enhancement of aluminium (Al was not observed), these abundance anomalies are reminiscent of the correlated CNO/NaMgAl abundance variations that are observed among the red giants in many globular clusters (e.g. Da Costa 1997b and references therein). The origin of these abundance anomalies remains uncertain but in this context we need only note that the phenomenon is restricted to globular cluster red giants; it is virtually unknown among field halo red giants. Consequently, if at least approximately 1 in 4 Draco red giants show these abundance anomalies, and if this fraction is typical for all Galactic dSphs, then the virtual complete absence of such anomalies in field halo red giants suggests that disrupted dSphs (or disrupted globular clusters for that matter) did not contribute significantly to the field halo population, contrary to the suggestions of some other contributions at this meeting. Clearly, a full accounting of the frequency of occurrence of CNO/NaMgAl abundance anomalies among dSph red giants is urgently needed. A second high dispersion study of abundances and abundance ratios in Galactic dSph red giants is that of Smecker-Hane et al. (1998) for stars in the Sagittarius dSph. This dSph is known to have a large internal abundance range. 
For example, the most metal-poor of the four Sgr globular clusters, Ter 8, has \[Fe/H\] $``$ –2.0 while the most metal-rich, Ter 7, has \[Fe/H\] $``$ –0.5 (e.g. Da Costa & Armandroff 1995). The Smecker-Hane et al. (1998) results for individual Sgr red giants are based on Keck + HIRES spectra that have S/N $``$ 80. They find, for the first three stars analyzed in their sample, \[Fe/H\] values of –1.30, –1.03 and +0.11 dex. This last value is remarkably high; for example, it is significantly larger than the present-day abundance, \[Fe/H\] $``$ –0.3, in the LMC! Yet, this star, once Sgr is fully disrupted, will become a “field” object in the Galactic halo. Of particular interest here though are the abundance ratio results. For the Sgr red giant with \[Fe/H\] $``$ –1.30, the \[$`\alpha `$/Fe\] ratios have values of $``$0.3 dex, which are perfectly consistent with the ratios observed in globular cluster red giants and in field halo stars (cf. Fig. 2 and Norris, these proceedings). There is therefore nothing particularly remarkable about this Sgr red giant. For the Sgr red giant with \[Fe/H\] $``$ +0.11, however, the \[$`\alpha `$/Fe\] ratios are indeed noteworthy. Smecker-Hane et al. (1998) find \[O/Fe\] $``$ –0.41, \[Ca/Fe\] $``$ –0.24, \[Si/Fe\] $``$ +0.06 and \[Mg/Fe\] $``$ +0.11 so that overall, \[$`\alpha `$/Fe\] $``$ –0.12 dex. It is important to note that the low \[O/Fe\] in this star is not the result of the CNO/NaMgAl abundance anomaly effect seen in globular cluster red giants and in at least one Draco star. If the low \[O/Fe\] was due to this effect, then the abundances of sodium and aluminium should be significantly enhanced, and that is not observed in this Sgr star (Smecker-Hane et al. 1998). Instead, it seems reasonable to suggest that most of the iron in this star comes from Type Ia supernovae rather than Type II, and that consequently, since the timescale for SNIa exceeds that of SNII, this star is part of Sgr’s younger population. Further, as Gilmore & Wyse (1991) have shown, abundance ratios of this type are expected when the star formation is episodic rather than relatively continuous. In essence, the long intervals of quiescence between periods of star formation allow the iron abundance to build up via SNIa, while since there is no star formation in these intervals, no SNII occur to produce the $`\alpha `$-elements. Consequently, while recognizing that these results come from a preliminary analysis for a single star, the element abundance ratios nevertheless suggest that Sgr has had an episodic star formation history. At least qualitatively, this result is remarkably consistent with those presented by Mighell et al. (these proceedings). Their HST/WFPC2 colour-magnitude diagram-based study independently suggests that Sgr has indeed had an episodic star formation history. The concurrence of these results then indicates that we are moving towards a more complete understanding of the necessarily entwined star formation and chemical enrichment processes that occurred in this dwarf galaxy. ## 4. Summary When considering the properties of the dSph galaxies present in the Galactic halo today, we must keep in mind that these galaxies have survived for a Hubble time. Consequently, if there was a large population of dSph-like objects early in the life of the Galaxy which has now been mostly disrupted, the dSph galaxies that we can observe at the present epoch may not have been “typical” members of this hypothesized early population. 
In particular, the current dSph galaxies may have orbits that make them less susceptible to tidal disruption, and/or perhaps they have more massive and/or denser dark matter halos. Consequently, we shouldn’t necessarily expect exact correspondence between halo properties and those of present-day dSphs even if the Galactic halo does have a substantial contribution from disrupted dSphs. Nevertheless, the present-day dSphs are intriguing objects for further study both: dynamically, since we are beginning to see the emergence of extensive observational datasets and more complex theoretical models of both the dSphs and their interaction with the Milky Way; and, from a stellar populations point-of-view, where increased knowledge of star formation histories, and abundance and abundance ratio distributions will tell us a lot about the evolution of these lowest luminosity galaxies. #### Acknowledgments. I would like to thank Ed Olszewski for a timely message indicating the availability of a preprint, Tammy Smecker-Hane for making available results from her high dispersion studies of Sgr red giants in advance of publication, and John Norris for his comments on a draft of this paper. I would also like to place on record my indebtness to the late Prof. Alex Rodgers, to whom this meeting is dedicated. Our discussions in recent years ranged from tenure ratios in the IAS and large telescopes in Australia to the globular clusters associated with Sgr dSph; I always found the discussions beneficial. Mt. Stromlo & Siding Spring Observatories, and Australian astronomy in general, are poorer for his untimely demise. ## References Aaronson, M. 1983, ApJ, 266, L11 Armandroff, T.E., Olszewski, E.W., & Pyror, C. 1995, AJ, 110, 2131 Buonanno, R., Corsi, C.E., Zinn, R., Fusi Pecci, F., Hardy, E., & Suntzeff, N.B. 1998, ApJ, 501, L33 Caldwell, N., Armandroff, T.E., Da Costa, G.S., & Seitzer, P. 1998, AJ, 115, 535 Da Costa, G.S., & Armandroff, T.E. 1996, AJ, 109, 2533 Da Costa, G.S. 1997a, in The Second Stromlo Symposium: The Nature of Elliptical Galaxies, edited by M. Arnaboldi, G.S. Da Costa, & P. Saha, (ASP, San Francisco), ASP Conf. Ser., 116, 270 Da Costa, G.S. 1997b, in Fundamental Stellar Properties: The Interaction between Observation and Theory, IAU Symposium 189, edited by T.R. Bedding, A.J. Booth & J. Davis, (Kluwer, Dordrecht), p. 193 Da Costa, G.S. 1998, in Stellar Astrophysics for the Local Group, edited by A. Aparicio, A. Herrero & F. Sanchez, (Cambridge Univ. Press, Cambridge), p. 351 Gilmore, G., & Wyse, R.F.G. 1991, ApJ, 367, L55 Grillmair, C.J., et al. 1998, AJ, 115, 144 Hargreaves, J.C., Gilmore, G., Irwin, M.J., & Carter, D. 1994, MNRAS, 271, 693 Harris, W.E., Bell, R.A., VandenBerg, D.A., Bolte, M., Stetson, P.B., Hesser, J.E., van den Bergh, S., Bond, H.E., Fahlman, G.G., & Richer, H.B. 1997, AJ, 114, 1030 Ibata, R.A., Wyse, R.F.G., Gilmore, G., Irwin, M.J., & Suntzeff, N.B. 1997, AJ, 113, 634 Johnson, J.A., Bolte, M., Bond, H.E., Hesser, J.E., de Oliveira, C.M., Richer, H.B., Stetson, P.B., & VandenBerg, D.A. 1998, in New Views of the Magellanic Clouds, IAU Symposium 190, edited by Y.-H. Chu, N. Suntzeff, J. Hesser & D. Bohlender, (ASP, San Francisco), ASP Conf. Ser., in press King, I.R. 1966, AJ, 71, 64 Kleyna, J.T., Geller, M.J., Kenyon, S.J., Kurtz, M.J., & Thorstensen, J.R. 1998, AJ, 115, 2359 Kroupa, P. 1997, New Astronomy, 2, 139 Lee, M.G., Freedman, W., Mateo, M., Thompson, I., Roth, M., & Ruiz, M.T. 1993, AJ, 106, 1420 Mateo, M. 
1996, in Formation of the Galactic Halo$`\mathrm{}`$Inside and Out, edited by H. Morrison & A. Sarajedini, (ASP, San Francisco), ASP Conf. Ser., 92, 434 Mateo, M. 1997, in The Second Stromlo Symposium: The Nature of Elliptical Galaxies, edited by M. Arnaboldi, G.S. Da Costa, & P. Saha, (ASP, San Francisco), ASP Conf. Ser., 116, 259 Mateo, M. 1998, ARA&A, 36, 435 Mateo, M., Olszewski, E.W., Vogt, S.S., & Keane, M.J. 1998, AJ, 116, 2315 McWilliam, A. 1998, AJ, 115, 1640 McWilliam, A., Preston, G.W., Sneden, C., & Searle, L. 1995, AJ, 109, 2757 Merritt, D. 1988, AJ, 95, 496 Montegriffo, P., Bellazzini, M., Ferraro, F.R., Martins, D., Sarajedini, A., & Fusi Pecci, F. 1998, MNRAS, 294, 315 Oh, K.S., Lin, D.N.C., & Aarseth, S.J. 1995, ApJ, 442,142 Olsen, K.A.G., Hodge, P.W., Mateo, M., Olszewski, E.W., Schommer, R.A., Suntzeff, N.B., & Walker, A.R. 1998, MNRAS, 300, 665 Olszewski, E.W. 1998, in Galactic Halos: A UC Santa Cruz Workshop, edited by D. Zaritsky, (ASP, San Francisco), ASP Conf. Ser., 136, 70 Olszewski, E.W., & Aaronson, M. 1985, AJ, 90, 2221 Olszewski, E.W., Pryor, C., & Armandroff, T.E. 1996, AJ, 111, 750 Piatek, S., & Pryor, C. 1995, AJ, 109, 1071 Pryor, C. 1994, in ESO/OHP Workshop on Dwarf Galaxies, edited by G. Meylan & P. Prugniel, (ESO, Garching), p. 323 Pryor, C., & Kormendy, J. 1990, AJ, 100, 127 Pryor, C., & Meylan, G. 1993, in Structure and Dynamics of Globular Clusters, edited by S.G. Djorgovski & G. Meylan, (ASP, San Francisco), ASP Conf. Ser., 50, 357 Shetrone, M.D., Bolte, M., & Stetson, P.B. 1998, AJ, 115, 1888 Smecker-Hane, T., McWilliam, A., & Ibata, R. 1998, BAAS, 30, 916 Stetson, P.B., Hesser, J.E., & Smecker-Hane, T.A. 1998, PASP, 110, 533 Suntzeff, N.B. 1993, in The Globular Cluster - Galaxy Connection, edited by G.H. Smith & J.P. Brodie, (ASP, San Francisco), ASP Conf. Ser., 48, 167
# Molecular Realism in Default Models for Information Theories of Hydrophobic Effects ## I Introduction The idea of constructing an information theory description of cavity formation in water has reinvigorated the molecular theory of hydrophobic effects . One advantage of this approach is that simple physical hypotheses can be expressed in a default model. Given a fixed amount of specific information, the quality of the predictions gives an assessment of the physical ideas that are embodied in the underlying default model. Relevant physical ideas include: whether a direct description of dense fluid packings significantly improves the predictions; or whether incorporation of de-wetting of hydrophobic surfaces is required; or whether specific expression of the roughly tetrahedral coordination of water molecules in liquid water is the most helpful next step for these theories. It is remarkable that the previous successes of the information models for the primitive hydrophobic effects have not required specific consideration of these physical points. This letter considers these physical arguments, constructs default models to test them, and gives the results of information theory predictions using those default models with specific moment information drawn from simulation of liquid water. Occupancy moments are used as information. Complete moment information produces results that are independent of the default model. However, the goal is to judge the default models and the physical ideas that they express. Therefore, our judgements will center on the accuracy of the predictions with limited moment information. More specifically, we take the view that the quality of the prediction with two moments is critical because information for the first two occupancy moments – the mean and variance – is available from experiment. Much of the technical work required to construct the default models considered involves molecular simulation calculations for fiducial systems. That technical work will be reported at a later time. The application of the information theory approach more broadly than to liquid water immediately turns-up cases where it works less well. Thus, a broader suite of default models will clearly be a key ingredient to the broader utility of this approach. ## II Testing Physical Ideas of Hydrophobic Effects The information theory approach studied here grew out of earlier studies of formation of atomic sized cavities in molecular liquids . It has led to new and simple views of entropy convergence in hydrophobic hydration and of pressure denaturation of proteins . A review of these developments has been given ; broader discussions are also available . The objective of the information theory prediction is the interaction part of the chemical potential of a hard core solute $`\beta \mathrm{\Delta }\mu =\mathrm{ln}p_0`$, where $`p_0`$ the probability that the hard solute could be inserted into the system without overlap of van der Waals volume of the solvent; 1/$`\beta `$=k<sub>B</sub>T. This procedure depends on a default model $`\widehat{p}_n`$ of the distribution $`p_n`$ of which $`p_0`$ is the $`n=0`$ member. Two default models have been considered in previous work: (a) the ‘Gibbs default model’ $`\widehat{p}_n1/n!`$ that predicts a Poisson distribution when the moment $`<`$n$`>_0`$ is the only information available; and (b) the ‘flat default model’ $`\widehat{p}_n`$ = constant$`>`$0, $`n`$ = 0, 1, $`\mathrm{}`$, $`n_{max}`$, and zero otherwise. 
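In practice, the prediction with a given default model follows from a relative-entropy (maximum-entropy) fit: with two occupancy moments the fitted distribution takes the exponential-family form $`p_n \propto \widehat{p}_n\,\mathrm{exp}[\lambda_1 n + \lambda_2 n(n-1)/2]`$, with the two multipliers fixed by the measured mean and second binomial moment. The Python sketch below solves for the multipliers numerically; the moment values, the occupancy cutoff, and the use of a generic root finder are illustrative choices only and are not taken from the simulations discussed in this work.

```python
import numpy as np
from math import factorial
from scipy.optimize import fsolve

def p0_from_two_moments(B1, B2, default):
    """Two-moment information-theory estimate of p_0 for a given default model.

    The fitted distribution has the maximum-entropy form
    p_n = default_n * exp(lam0 + lam1*n + lam2*n*(n-1)/2),
    with lam0 fixed by normalization and (lam1, lam2) by the constraints
    B1 = <n> and B2 = <n(n-1)/2>.
    """
    n = np.arange(len(default), dtype=float)
    d = np.asarray(default, dtype=float)

    def distribution(lams):
        lam1, lam2 = lams
        w = d * np.exp(lam1 * n + lam2 * n * (n - 1.0) / 2.0)
        return w / w.sum()                       # normalization supplies lam0

    def moment_residuals(lams):
        p = distribution(lams)
        return [np.dot(n, p) - B1,
                np.dot(n * (n - 1.0) / 2.0, p) - B2]

    lam = fsolve(moment_residuals, x0=[0.0, 0.0])
    p = distribution(lam)
    return p[0], -np.log(p[0])                   # p_0 and beta*Delta_mu

# Illustrative (placeholder) moments for a cavity holding ~2.5 solvent centers
# on average, with slightly sub-Poissonian fluctuations.
B1, B2 = 2.5, 2.6
n_max = 15
flat = np.ones(n_max + 1)                                           # flat default
gibbs = np.array([1.0 / factorial(k) for k in range(n_max + 1)])    # Gibbs default

for name, dflt in (("flat", flat), ("Gibbs", gibbs)):
    p0, bdmu = p0_from_two_moments(B1, B2, dflt)
    print(f"{name:5s} default: p0 = {p0:.3e}, beta*Delta_mu = {bdmu:.2f}")
```

The same routine accepts any tabulated default model (hard-sphere, Lennard-Jones, or cluster Poisson), so the comparisons described below amount to changing only the `default` array.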
The predictions obtained using these default models for the hydration free energy of inert gases in water are similar. Convergence to the correct result is non-monotonic with increasing numbers of binomial moments $`B_j=\left(\genfrac{}{}{0pt}{}{n}{j}\right)_0`$ used . Because of this non-monotonic convergence, the most accurate prediction obtained with a small number of moments utilizes only the two moments, $`B_1`$ and $`B_2`$. Furthermore, the flat default model produces a distinctly more accurate prediction of this hydration free energy when only two moments are used than does the Gibbs default model. The accuracy of the prediction utilizing the flat default model is remarkable. Furthermore, the Gibbs default model is conceptually more natural in this framework. So, the effectiveness of the flat default model relative to the Gibbs model is additionally puzzling. The work that follows addresses these issues. It deserves emphasis that the overall distribution $`p_n`$ is well described by the information theory with the first two moments, $`B_1`$ and $`B_2`$. It is the prediction of the extreme member $`p_0`$ that makes the differences in these default models significant. ### A Packing A first idea is that the default model should contain a direct description of dense fluid packings that are central to the theory of liquids . Accordingly, we computed $`p_n`$ for the fluid of hard spheres of diameter d = 2.67 Å at a density $`\rho d^3`$ = 0.633. Those computations used specialized importance sampling and will be reported later. Typical predictions for the hydrophobic hydration free energies of atomic size solutes obtained using those results as a default model are shown in Fig. 1. That shows the non-monotonic convergence with increasing number of occupancy moments obtained from the flat and the Gibbs default models. The predictions obtained using the hard sphere results as a default model are different but not improved in the essential aspects. Direct convergence is only seen if four or more moments are included. Though the convergence is more nearly monotonic from the beginning, the prediction obtained from a two moment model is worse than for the flat and the Gibbs default cases. ### B Attractive Interactions among Solvent Molecules A next idea is that attractive forces between solvent molecules might play a significant role for these properties because attractive forces lower the pressure of the solvent. Dehydration of hydrophobic surfaces becomes a principal consideration for solutes larger in size than the solvent molecules. But perhaps such effects are being felt already for atomic solutes. Accordingly, we computed $`p_n`$ for the Lennard-Jones liquid studied by Pratt and Pohorille for which attractive interactions were adjusted so that the macroscopic pressure of the solvent would be approximately zero. This Lennard-Jones system thus gives a cavity formation free energy for atomic sized cavities that is about the same as that of common liquid water simulation models. The results of Fig. 1 confirm this latter point but also show that the convergence with number of moments is again non-monotonic and not better than for the flat and the Gibbs default models. Again, direct, non-monotonic convergence is only seen after four occupancy moments are included. 
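The raw ingredient behind each of these default models is the occupancy distribution itself, accumulated by repeatedly placing the observation volume in simulated solvent configurations and counting the molecular centers that fall inside it. The sketch below shows only this direct-counting bookkeeping, using ideal-gas stand-in coordinates at a water-like oxygen density; resolving the rare low-occupancy tail that controls $`p_0`$ for the hard-sphere and Lennard-Jones fluids required the specialized importance sampling noted above, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy_histogram(configs, box, lam, n_max=12, probes_per_frame=200):
    """Histogram of the number of particle centers inside a randomly placed
    sphere of radius lam, using the minimum-image convention for a cubic box."""
    counts = np.zeros(n_max + 1)
    for xyz in configs:                          # xyz: (N, 3) coordinates
        probes = rng.random((probes_per_frame, 3)) * box
        for c in probes:
            d = xyz - c
            d -= box * np.round(d / box)         # periodic boundaries
            n = int(np.sum(np.einsum("ij,ij->i", d, d) < lam * lam))
            if n <= n_max:
                counts[n] += 1
    return counts / counts.sum()

# Stand-in for simulation output: ideal-gas configurations at a water-like
# oxygen density (0.0334 A^-3) in a 20 A box; a real calculation would use
# hard-sphere or Lennard-Jones Monte Carlo configurations instead.
box, rho, lam = 20.0, 0.0334, 3.3
N = int(rho * box**3)
configs = [rng.random((N, 3)) * box for _ in range(50)]
p_n = occupancy_histogram(configs, box, lam)
print("p_n =", np.round(p_n, 4))
```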
### C Tetrahedral Coordination of Solvent Molecules The final idea checked here is whether the predictions of cavity formation free energies are improved by incorporating a tetrahedral coordination structure for water molecules in liquid water. We use a cluster Poisson model to accomplish this. The physical picture is: tetrahedral clusters of water molecules with prescribed intra-cluster correlations but random positions and orientations. A molecular cluster may contribute to occupants of a specific observation volume only if the center of the cluster is an occupant of a larger augmented volume; see Fig. 2. Definition of this augmented volume will depend on the structures of the clusters and the choice of cluster center. We then consider the generating function $`\mathrm{}(z)`$ for the probability $`\mathrm{}_N`$ that $`N`$ cluster centers are present in the augmented volume: $$\mathrm{}(z)=\underset{N=0}{}z^N\mathrm{}_N.$$ (1) We assume that $`N`$ is Poisson distributed $`\mathrm{}(z)=e^{<N>(1z)}`$ with $`<`$N$`>`$ the product of the density of clusters and the volume of the augmented region. Next we consider the generating function $`g(z)`$ defined by the conditional probabilities, $`g_n`$, that a cluster with center in the augmented volume contributes $`n`$ oxygen atom occupants to the observation volume: $$g(z)=\underset{n=0}{}z^ng_n.$$ (2) Defining the generating function for the probabilities of numbers of oxygen in the observation volume $$p(z)\underset{n=0}{}z^np_n,$$ (3) we can express $$p(z)=\mathrm{}(g(z)).$$ (4) This is a standard result of probability theory . $`\mathrm{ln}p(z)`$ is a polynomial function of $`z`$. Extraction of the series coefficients from Eq. 3 provides the desired default model. The numerical effort resides only in the computation of the $`g_n`$. In this study, the clusters are assumed to be tetrahedra with the oxygen atom of a water molecule at the center and at each vertex. Thus we take $`<`$N$`>`$ = $`\rho `$v/5, with v the volume of the augmented region and $`\rho `$ the molecular density of the solvent water. The OO intra-cluster near-neighbor distance, the distance of a point of a tetrahedron from its center, is 2.67Å and the augmented volume is a sphere with radius $`\lambda `$ \+ 2.67Å. The coefficients of $`g(z)`$ are obtained from a Monte Carlo calculation that randomly positions a tetrahedron with center in the augmented volume and counts how many O-points of the cluster land in the observation volume. Fig. 1 shows the predictions for cavity formation free energy obtained with the cluster (tetrahedron) Poisson default model. The non-monotonic convergence is still evident. The prediction utilizing two moments is more accurate than that utilizing the Gibbs default model and similar to the predictions made by the flat default or the Lennard-Jones default in the best cases considered here for those models. ## III Discussion Each of the default models newly considered here makes specific assumptions about n-body correlations. If the default model were the same as the experimental distribution, the limitation of the data to two moments would not be significant. The optimization would be unaffected by the number of experimental moments used. The present results suggests that the efficiency of the flat and Gibbs default models relative to the more sophisticated hard sphere and Lennard-Jones default models might be associated with the avoidance of specific assumptions for n-body correlations for the former cases. 
In this view, the specific assumptions for n-body correlations with the hard sphere and Lennard-Jones default models have to be displaced for a good description of cavity formation in liquid water. The third and fourth order factorial cumulants predicted on the basis of each of these default models using two experimental moments were evaluated and directly compared. In fact, the information theory predictions obtained for these moments were very similar to each other. A second point of discussion is that the biggest difference between the Lennard-Jones and the cluster Poisson model is in simplicity. Though the differences in the predictions seen here are not dramatic, the cluster Poisson model is simpler. This is particularly true for the dependence on thermodynamic state and the potential for further development. That the cluster Poisson model expressing tetrahedral coordination appears to be a helpful new direction is intuitive and encouraging. However, the fact that the predictions are not dramatically improved suggests that this sort of tetrahedral coordination is not the only or principal physical feature relevant for improved predictions of cavity formation. The Lennard-Jones default model incorporates some of the dewetting phenomena that are expected to become more pronounced as the solute size increases. Fig. 3 shows the variation of hydration free energy with solute size obtained with the different default models and two moments. At the smallest solute size shown, all the models give the same result. In the solute size range of 2.2-2.8 Å, the cluster Poisson model gives the best results overall. For larger solute sizes, the cluster Poisson model results overestimate the hydration free energy. At this point, results from the Lennard-Jones default model cross the simulation results and become slightly too small for the larger solute sizes shown. ## IV Conclusion We conclude that direct incorporation of dense fluid packing effects (hard sphere default model), or of packing effects plus attractive forces that lower the pressure of the solvent (Lennard-Jones default model), is ineffective in improving the prediction of hydrophobic hydration free energies of inert gases over the previously used Gibbs and flat default models. However, a cluster Poisson model that incorporates tetrahedral coordination structure in the default model is intuitive, simple to implement, and is one of the better performers for these predictions. These results provide a partial rationalization of the remarkable performance of the flat default model with two moments in previous applications. The specific cluster Poisson default model used here is primitive and will be the subject of further refinement. ## V Acknowledgements The work was supported by the US Department of Energy under contract W-7405-ENG-36. The support of M.A.G. by the Center for Nonlinear Studies of Los Alamos National Laboratory is appreciated. LA-UR-98-5431.
# References On density and temperature-dependent ground-state and continuum effects in the equation of state for stellar interiors – A Comment on the paper by S. Arndt, W. Däppen and A. Nayfonov 1998, ApJ 498, 349 Wolf-Dietrich Kraeft<sup>1</sup>, Stefan Arndt<sup>2</sup>, Werner Däppen<sup>3,4</sup>, Alan Nayfonov<sup>3,5</sup> <sup>1</sup> Institut für Physik, Universität Greifswald, D-17489 Greifswald, Germany <sup>2</sup> Max-Planck-Institut für Plasmaphysik, D-17489 Greifswald, Germany <sup>3</sup> Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-1342, U.S.A <sup>4</sup> Theoretical Astrophysics Center, Institute for Physics and Astronomy, Aarhus University, 8000 Aarhus C, Denmark <sup>5</sup> IGPP, Lawrence Livermore National Laboratory, Livermore, CA 94550, U.S.A. Abstract Misunderstandings have occurred regarding the conclusions of the paper by S. Arndt, W. Däppen and A. Nayfonov 1998, ApJ 498, 349. At occasions, its results were interpreted as if it had shown basic flaws in the general theory of dynamical screening. The aim of this comment is to emphasize in which connection the conclusions of the paper have to be understood in order to avoid misinterpretations. Comment The paper “On density and temperature-dependent ground-state and continuum effects in the equation of state for stellar interiors” by Arndt, Däppen and Nayfonov (1998; hereinafter ADN) dealt with the consequence of density and temperature effects on two-particle properties (such as binding energies and energies of continuum) for the equation of state, under conditions relevant for stellar interiors. The ADN paper applied, among other, the work by Seidel, Arndt and Kraeft (1995; hereinafter SAK). In particular, ground-state and continuum-edge shifts under plasma conditions are contained in SAK. In the ADN paper, an empirical approach is used to obtain thermodynamic quantities from the data contained in the SAK paper, in view of their possible testing by helioseismology. Essentially, two results of the SAK calculations were used by ADN. One was based on an elaborate formalism for dynamic screening, the other on a static approximation. The result of the ADN paper was that when the raw data of the SAK figure were taken to be thermodynamic quantities, plausible results (labeled STATIC) emerged for the static continuum edge, but the same technique yielded rather absurd quantities (labeled DYNAMIC) for the corresponding dynamic continuum edge. Although the conclusions of ADN were worded to allow various possibilities for the interpretation of this outcome, especially pointing at a potential inadequacy of the empirical identification of two-particle properties and thermodynamic quantities, it has been brought to our attention that the ADN paper is still at occasions misunderstood as if it had shown basic flaws in the general theory of dynamical screening. The aim of this comment is to explain once again in which connection the conclusions of ADN have to be understood in order to avoid misinterpretations, and to stimulate further investigations. The determination of two-particle properties is outlined in paragraph 2.3 of ADN along the lines given in the references (Zimmermann et al. 1978; Kraeft et al. 1986) and SAK, and takes carefully into account especially the dynamic character of the effective interaction and of the self energy. In paragraph 2.4.2 of ADN, the influence of bound and continuum states on thermodynamic functions is considered. 
Bound state energies are only very weakly density dependent (at least for $`Z=1`$ ions), consequently we will discuss here only the continuum edge problem. The continuum edge is defined as the sum of (momentum dependent) single particle energies, or of the self energies, respectively, taken for zero momenta. Consequently, these continuum energies are two-particle quantities, not thermodynamic ones. ADN were considering various approximation levels, referred to as DEBYE, STATIC and DYNAMIC. The results on the first two approximation levels differ only very little from each other as the Hartree-Fock contributions in Eq. (21) of ADN are not important for the densities considered. The Debye-shift \[last term of Eq. (21)\] is the result of thermodynamic averaging of the momentum dependent self energies over a Maxwellian and thus a thermodynamic quantity. This was shown explicitly in Kraeft et al. (1986), p. 115. Therefore, the Debye-shift gives the interaction part of the chemical potential (and thus the internal energy contribution $`U_4`$) in the Debye case. At the level of the dynamical approximation of the self energies, as considered in SAK and taken as a part of the approximation level DYNAMIC in ADN, the analogous thermodynamic averaging was not performed. ADN have chosen a simple procedure to estimate a thermodynamic quantity; this approximation consists in taking the continuum lines according to SAK. It leads to excellent thermodynamic quantities in the DEBYE and STATIC cases, but to severe discrepancies with observationally admissible results in the DYNAMIC case. In this comment, we want to emphasize that the unphysical result of the DYNAMIC approximation is due to the absence of thermodynamic averaging in the continuum curves on the static and dynamic approximation levels, respectively. By being an already thermodynamically averaged result, the Debye curve gives a physically consistent result. (By mere coincidence, the static case gives reasonable results, too.) However, such a consistency is absent when the dynamic continua according to SAK are used as thermodynamic data. In general, of course, a rigorous theory always has to account for the dynamics before doing (thermodynamic) averages. Of course, at the thermodynamic level, after averaging, the dynamics is no longer perceptible. While we have stressed here the importance of an inconsistency in the approximation introduced by ADN, their paper remains so far the only attempt to perform explicitly a quantitative calculation of thermodynamic quantities in the presence of dynamical screening. Future studies will have to show how to include such effects more rigorously. Acknowledgments: W.-D. K. acknowledges support by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 198. W. D. and A. N. acknowledge support by the grant AST-9618549 of the National Science Foundation and the SOHO Guest Investigator grant NAG5-6216 of NASA. W. D. was supported in part by the Danish National Research Foundation through its establishment of the Theoretical Astrophysics Center.
# 1 Introduction ## 1 Introduction The Very Long Baseline Interferometry (VLBI) technique can image compact radio sources with a resolution of the order of the milliarcsecond and can determine astrometrically relative positions to precisions of the order of tens of microarcseconds. This technique is ideal to construct a precise celestial reference frame. Up to the present, the group delay observable has been regularly used in such a task. The use of the more precise phase delay observable should constitute an immediate improvement in accuracy. Furthermore, the phase-delay, if used differentially over radio source pairs, becomes the most accurate observable in astrometry. The technique of phase-delay differential astrometry has been applied to several source pairs, with separations ranging from 33<sup>′′</sup> (1038+528 A/B, Marcaide & Shapiro, 1983) to 5$`\text{.}^{}`$9 (3C 395/3C 382, Lara et al., 1996). The pair 1928+738/2007+777 (4$`\text{.}^{}`$6 separation) has been also studied (Guirado et al. 1995; 1998) yielding precisions of $``$200 $`\mu `$as. Recently, we added a new source 1803+784 to the latter pair to take advantage of the constraints introduced by a triangular geometry in the determination of the angular separations Ros et al. (1998). It represents a first step towards extending the differential phase-delay astrometry from pairs to a whole sky radio source frame. ## 2 The Triplet 1803+784/1928+738/2007+777. We observed the radio sources of the triangle formed by the BL-Lac objects 1803+784 and 2007+777, and the QSO 1928+738 on epoch 1991.89 with an intercontinental interferometric array simultaneously at the frequencies of 2.3 and 8.4 GHz Ros et al. (1998). We determined the angular separations among the three radio sources with submilliarcsecond accuracy from a weighted least squares analysis to the differential phases, after removing most of the contribution due to the geometry of the array and the atmosphere. The radio source structure contributions to the phase delays were also modeled using hybrid mapping images of the radio sources from the same observations. We checked the consistency of our astrometric determination through the use of the so-called Sky-Closure. The Sky-Closure was defined as the circular sum of the angular separations of the three radio sources, determined pairwise and independently. In our case the result was consistent with zero, and verified satisfactorily the data process followed. The final accuracy of the astrometric determinations was of 130 $`\mu `$as. One important aspect in the astrometric work is the excess propagation delay due to the ionization of the propagation medium, mainly the ionosphere. The ionospheric contribution to the delays had been determined in the past from dual-band VLBI observations. Sardón et al. Sardón et al. (1994) showed that the total electron content (TEC) of the ionosphere can be determined with high accuracy by using dual frequency Global Positioning System (GPS) data. We used their method to estimate the plasma contribution by using TEC estimates of the ionosphere obtained from data from different GPS sites neighbor to the VLBI stations (see Ros et al. 1998; 1999). ### Proper Motions in 1928+738. The comparison of the measurements of the separations of the pair 1928+738/2007+777 with those presented by Guirado et al. Guirado et al. (1995, 1998) for epochs 1985.77 and 1988.83 allows us to register adequately the absolute position of 1928+738 relative to 2007+777. 
We estimate the proper motion of components in 1928+738, and identify the position of the radio source core even though it is unseen at cm-wavelengths, as shown in Fig. 1. The average proper motion of the components emerging from the core region is of 0.30$`\pm `$0.15 mas/yr towards the South. ## 3 The S5 Polar Cap Sample. The phase connection over a separation as large as 15 is critical for the success of the extension of the astrometry to larger samples. Pérez-Torres et al. Pérez-Torres et al. (these proceedings) have shown the phase-connection for such a pair, 1150+812/1803+784, to be possible. Therefore, the 13 radio sources of the complete S5 polar cap sample Eckart et al. (1986), which have mutual separations less than 15 can be studied also astrometrically. These sources are the quasars 0016+731, 0153+744, 0212+735, 0615+820, 0836+710, 1039+811, 1150+812, and 1928+738, and the BL Lacertae objects 0454+844, 0716+714, 1749+701, 1803+784 and 2007+777. We observed this set of radio sources at 8.4 GHz over 24 hours on epoch 1997.93. We imaged the 13 radio sources using hybrid mapping techniques. On these images we defined reference points and then removed the structure contributions from the corresponding astrometric observables. After it, we used the differential phase-delays to obtain a global solution of all the source positions. Fig. 2 shows our preliminary results. A similar trend of systematic effects, which will cancel out when making the differences, is conspicuous. With respect to the determination of the ionosphere contribution to the data, the density of the GPS network increased notably from 1991 to 1997, making the bias removal and the accuracy of the TEC determination much better. Now it is possible to have Global Ionospheric Maps from the Global Positioning System and thus estimate the ionosphere contribution to the astrometric observables of a single-wavelength VLBI observation and to remove the plasma effects from them with high accuracy. ## 4 Conclusions The differential phase-delay astrometry has recently undergone important improvements. The phase-connection process has been extended to larger sets of radio sources with larger source separations, and the ionosphere contribution to the astrometric observables has been successfully removed using GPS data. We have determined with submilliarcsecond precision the relative separations in the triangle of radio sources 1803+784/ 1928+738/ 2007+777, and we have observed astrometrically the complete S5 polar cap sample, that among the 13 sources within 20 to the celestial North Pole includes the above 3. New observations are now underway in the framework of a long-term astrometric program to determine the absolute kinematics of radio source components in the S5 complete sample. This program, extended over 5 years, will reach a precision in the determination of the relative separations better than 0.1 mas and consequently in the proper motions of the radio source components.
# Do Jet-Driven Shocks ionize the Narrow Line Regions of Seyfert Galaxies? ## 1 Introduction The gas in the narrow line regions (NLRs) of Seyfert galaxies is known to be photoionized (e.g. Koski 1978; Ferland & Osterbrock 1986; Osterbrock 1989) and the prevailing view is that the ionizing photons originate in a compact source, perhaps the accretion disk. However, strong associations between the narrow emission-line and radio continuum properties of Seyfert nuclei have been found over the last 20 years. There are strong correlations between radio power and both line luminosity (de Bruyn & Wilson 1978) and line width (Wilson & Willis 1980). Early VLA maps showed that the radio sources represent collimated ejection from the compact nucleus and that they have similar spatial scales and orientations to the NLR. These results led Wilson & Willis (1980) to suggest that the nucleus ejects radio components which interact with ambient gas and replenish the high kinetic energy and ionization of the NLR. Simple models of the momentum transfer between jets and ambient gas (Blandford & Königl 1979) show that ambient gas with the observationally-inferred NLR mass can be accelerated to the observed velocities ($``$ 10<sup>3</sup> km s<sup>-1</sup>) in the gas crossing time of the NLR ($``$ 10<sup>5-6</sup> yrs) for reasonable efficiency factors relating jet and radio powers (Wilson 1981, 1982). Subsequent imaging (e.g. Haniff et al. 1988; Bower et al. 1995; Capetti et al. 1995; Falcke, Wilson & Simpson 1998) and spectroscopic (Whittle et al. 1988; Axon et al. 1998) observations have confirmed that the structure of the NLR in many Seyferts is dominated by compression of interstellar gas by the radio ejecta and that these ejecta are an important source of “stirring” of the gas. Theoretical descriptions of this interaction have involved expanding radio lobes (Pedlar, Dyson & Unger 1985), bow shocks driven by the radio jets (Taylor, Dyson & Axon 1992; Ferruit et al. 1997) and the role of the jet cocoon (Steffen et al. 1997). The power required to accelerate the gas is typically $``$ 10<sup>41-42</sup> erg s<sup>-1</sup> over $``$ 10<sup>5-6</sup> yrs to give the $``$ 10<sup>53-55</sup> erg of kinetic energy present in the observed clouds of the NLR (more kinetic energy could be present in gas too tenuous to be visible in line emission). The power radiated in observed line emission is somewhat higher at $``$ 10<sup>41-44</sup> erg s<sup>-1</sup>, which is $``$ 10<sup>2-4</sup> times higher than the radio luminosity (e.g. Fig. 1 of Wilson, Ward & Haniff 1988). Recently, Bicknell et al. (1998) have argued that the narrow line emission in Seyfert galaxies is, indeed, powered entirely by the radio-emitting jets. The jets are considered to drive several hundred to a thousand km s<sup>-1</sup> radiative shocks into interstellar gas on the hundred pc scale. Such “photoionizing shocks” are powerful sources of ionizing radiation and create photoionized precursors (Daltabuit & Cox 1972), the optical spectra of which are similar to Seyfert galaxy NLRs (Dopita & Sutherland 1995, 1996 \[hereafter DSI\]; Morse, Raymond & Wilson 1996). In this picture, the conversion of jet kinetic energy to radio emission is much less efficient in Seyfert galaxies than in radio galaxies and radio-loud quasars; the Seyfert jets are thermally dominated while jets in radio-loud AGN may be dominated by relativistic particles and magnetic fields. 
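The energy requirements quoted above can be checked with simple arithmetic; in the sketch below the inputs are only the order-of-magnitude figures already given in the text.

```python
import numpy as np

SEC_PER_YR = 3.156e7

# Order-of-magnitude figures quoted above for Seyfert NLRs.
jet_power = np.array([1e41, 1e42])        # erg/s needed to accelerate the NLR gas
crossing_time = np.array([1e5, 1e6])      # yr, NLR gas crossing time

energy = np.outer(jet_power, crossing_time * SEC_PER_YR)   # erg delivered
print("kinetic energy supplied: 10^%.1f - 10^%.1f erg"
      % (np.log10(energy.min()), np.log10(energy.max())))
# -> roughly 10^53.5 - 10^55.5 erg, bracketing the 10^53-55 erg of cloud
#    kinetic energy inferred for the NLR.
```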
It is very important to decide whether the energy source that powers the NLR is “in situ” mechanical motion (jets) or the more conventional compact photoionizing source. This issue is closely related to the fundamental question of whether the putative accretion disk loses energy primarily in mechanical (as jets or winds) or radiative form. Shock velocities of 300 - 1,000 km s<sup>-1</sup>, required to account for the emission-line spectra of Seyferts (Dopita & Sutherland 1995), generate gas of temperature 10<sup>6-7</sup>K and copious soft X-rays (cf. Laor 1998). This production of soft X-rays by the NLR is an inevitable prediction of the photoionizing shock model and can be used to distinguish it from photoionization by a hidden Seyfert 1 nucleus. Indeed, soft X-ray emission from the NLR has been isolated in NGC 1068 (Wilson et al. 1992), NGC 2110 (Weaver et al. 1995) and NGC 4151 (Morse et al. 1995). For type 2 Seyfert galaxies, in general, the observed Einstein IPC soft X-ray flux (from the entire galaxy) and the \[OIII\]$`\lambda `$5007 flux (from the NLR) are similar (Fig. 1). This similarity is not inconsistent with photoionizing shocks being the power source, since $``$ 4% (20%) of the kinetic power of a 300 (500) km s<sup>-1</sup> radiative shock is radiated in the Einstein band (0.2 – 4 keV), while $``$ 2% is radiated as \[OIII\]$`\lambda `$5007 (DSI; Bicknell, Dopita & O’Dea 1997, hereafter BDO), and the expected X-ray spectrum is soft and photoelectric absorption can substantially reduce the observed flux. In this letter, we examine the relationship between soft X-ray and \[OIII\]$`\lambda `$5007 emission expected in the photoionizing shock model, and compare it with Einstein and ROSAT X-ray observations. We also argue that the shocks should produce strong coronal iron line emission, and compare the predicted and observed strengths. ## 2 Comparison of \[OIII\]$`\lambda `$5007 and Soft X-ray Fluxes ### 2.1 Method The calculations of DSI provide the luminosity radiated in the \[OIII\]$`\lambda `$5007 line per unit area of shock from both the post-shock gas and the photoionized precursor as a function of shock velocity and magnetic parameter. The pre-shock density is taken to be low enough that collisional deexcitation is unimportant. We have used the model-predicted \[OIII\]$`\lambda `$5007 luminosities from DSI (with the fluxes reduced by a factor of 2, as we suspect they may be too high by this factor based on comparison with the models discussed below and an apparent factor of 2 error in equation 3.3 of DSI) and BDO. For the post-shock gas, the \[OIII\]$`\lambda `$5007 luminosity was taken to be the average of the predictions for the four magnetic parameters considered by DSI. The exact value of the magnetic parameter is not critical because for shock velocities above $``$ 200 km s<sup>-1</sup>, the \[OIII\] emission of the precursor dominates that of the post-shock gas. The \[OIII\]$`\lambda `$5007 luminosity would be less than predicted if any of the following effects are important: a) the precursor is density bounded or the pre-shock gas presents a covering factor of $`<`$ 1; b) the post-shock gas cools in an unstable fashion, so that the fragmented gas does not intercept all of the ionizing flux (e.g. Innes, Giddings & Falle 1987a, b); or c) the pre-shock density is sufficiently high that the $`{}_{}{}^{1}D_{2}^{}`$ level of OIII suffers collisional deexcitation. 
Of these, effect b) is unlikely to affect our results significantly in view of the small contribution of the post-shock gas to \[OIII\]$`\lambda `$5007 for the shock velocities of interest. Since the collisional deexcitation density of the $`{}_{}{}^{1}D_{2}^{}`$ level is 7.0 $`\times `$ 10<sup>5</sup> cm<sup>-3</sup> (Osterbrock 1989), it is very unlikely that effect c) applies to the precursor gas, though post-shock gas could be collisionally deexcited in the inner, denser parts of the NLR. The possibility that effect a) is significant means that the predicted \[OIII\]$`\lambda `$5007 luminosities are most conservatively treated as upper limits. We have also calculated the luminosity of the radiation emitted from the post-shock gas in the bands of the ROSAT PSPC (0.1 - 2.4 keV) and Einstein IPC (0.2 - 4 keV) detectors using an updated version of the radiative shock wave code described in Raymond (1979). The atomic rates are basically the same as those of the current version of the Raymond & Smith (1977) X-ray code used for the X-ray spectrum predictions in DSI. The shock code includes time-dependent ionization and photoionization, but these have little effect on broad-band X-ray count rates. To obtain the total emission (over 4$`\pi `$ sterad), we multiplied the X-ray flux radiated upstream by a factor of 2. This factor may be too large for soft photons because of photoelectric absorption by the downstream gas. The predicted ratio, R, of X-ray count rate to \[OIII\]$`\lambda `$5007 flux can then be obtained as a function of shock velocity and effective hydrogen column density. The code also predicts the intensities of the \[Fe X\] and \[Fe XIV\] lines discussed below, including excitation by electron and proton impact, and by cascades following LS permitted excitations. ### 2.2 Results Figs 1 and 2 show the Einstein IPC and ROSAT PSPC count rates against the \[OIII\]$`\lambda `$5007 flux. The dashed lines in each panel are loci of constant R obtained from the model as described above after attenuation by an equivalent hydrogen column of N<sub>H</sub> = 10<sup>20.5</sup> cm<sup>-2</sup>, which is the average Galactic column density towards the Seyfert 2 galaxies plotted in the figures (see below). The \[OIII\] fluxes have been reduced by 0.2 mag, which is the expected obscuration for a normal gas to dust ratio. The solid lines are the same but with the model X-ray fluxes attenuated by N<sub>H</sub> = 10<sup>21.5</sup> cm<sup>-2</sup>, which is the column density corresponding to the average obscuration (A<sub>V</sub> = 1.8 mag, derived from the Balmer decrement assuming intrinsic case B recombination values) of the NLRs, assuming a normal gas to dust ratio. The (narrow) hydrogen lines in Seyfert 2s come from a similar region to \[OIII\]$`\lambda `$5007, so the model-predicted \[OIII\]$`\lambda `$5007 fluxes have been reduced by 1.12 $`\times `$ A<sub>V</sub> = 2.0 mag. Because the model \[OIII\]$`\lambda `$5007 fluxes may be overestimates, as discussed above, the predicted values of R are best treated as lower limits, so the predicted locations in Figs 1 and 2 for a given shock velocity lie to the upper left of the plotted line. Also plotted in Figs 1 and 2 are observed X-ray count rates and \[OIII\]$`\lambda `$5007 fluxes for type 2 Seyfert galaxies. 
Type 1 Seyferts and the so-called “narrow line X-ray galaxies” have been omitted (with the exception of NGC 2110, see figure captions) because the equivalent hydrogen columns to their compact nuclei are sufficiently low that the soft X-ray emission is dominated by the Seyfert 1 nucleus. The equivalent hydrogen column towards the “hidden” Seyfert 1 nuclei in Seyfert 2 galaxies averages $``$ 3 $`\times `$ 10<sup>23</sup> cm<sup>-2</sup> (Turner et al. 1997), so the compact source is heavily attenuated in the Einstein and ROSAT bands. At the spatial resolution of the IPC and PSPC detectors, the measured X-ray count rate generally includes the entire host galaxy, the NLR and any residual transmitted or scattered emission from the compact nucleus. Thus these observed X-ray fluxes represent firm upper limits to the X-ray flux from the NLR. The \[OIII\]$`\lambda `$5007 emission, on the other hand, was measured through a small aperture and is a good measure of the NLR line emission. Thus the plotted data represent upper limits to the actual value of R in the NLR. Also plotted in Fig. 2 is a point representing the total PSPC count rate (after absorption) and \[OIII\]$`\lambda `$5007 flux density (after obscuration) for a 3<sup>′′</sup> (0.7 pc) length along the shock front of the LMC supernova remnant N132D (Morse et al. 1996). The plotted PSPC X-ray count rate was obtained by integrating the curve of 0.1 - 2 keV flux density (obtained from the ROSAT HRI image and corrected for absorption) versus distance perpendicular to the shock given in Fig. 11 of Morse et al. (1996). The predicted absorbed PSPC flux density was then found using the spectral model of Hwang et al. (1993) and the equivalent hydrogen column of N<sub>H</sub> = 6.2 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup>, which is within a factor of 2 of the column used for the models represented by the dashed lines. The curve of \[OIII\]$`\lambda `$5007 flux density versus distance perpendicular to the shock, also given in Fig. 11 of Morse et al. (1996), was similarly integrated and was reduced by an obscuration A<sub>V</sub> = 0.34 mag (equivalent to N<sub>H</sub> = 6.2 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup> for a normal gas to dust ratio). Fig. 2 shows that the resulting value of R for N132D is about 50 times higher than the average of the Seyfert galaxy points. Morse et al. (1996) argue convincingly that the X-ray and \[OIII\]$`\lambda `$5007 emissions from this region of N132D originate from a photoionizing shock of velocity 790 km s<sup>-1</sup>, with some contribution to the ionizing flux from lower velocity shocks. The absorption-corrected ratio F(0.1 - 2 keV)/F(\[OIII\]$`\lambda `$5007) for this 0.7 pc length of shock is $``$ 45, with an uncertainty of about 25%. Examination of Figs 1 and 2 shows that the model generally overpredicts the observed R values (i.e. the predicted X-ray flux is too high) for high shock velocities (500 - 1,000 km s<sup>-1</sup>) when only the Galactic column (N<sub>H</sub> = 10<sup>20.5</sup> cm<sup>-2</sup>) is included. For this column density, a shock velocity of $``$ 300 km s<sup>-1</sup> better describes the observed R values. For N<sub>H</sub> = 10<sup>21.5</sup> cm<sup>-2</sup>, the model of a 500 km s<sup>-1</sup> shock is comparable to most observed detections and upper limits, while the X-ray emission of a 300 km s<sup>-1</sup> shock is $``$ 2 orders of magnitude (the exact number being very sensitive to N<sub>H</sub>) below the observed data points. 
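The N132D normalization described above amounts to integrating the X-ray and \[OIII\]$`\lambda `$5007 flux-density profiles along the direction perpendicular to the shock front and taking their ratio. A minimal sketch of that bookkeeping follows; the two profile arrays are invented stand-ins for the curves of Fig. 11 of Morse et al. (1996), so only the procedure, not the numerical result, is meaningful.

```python
import numpy as np

# Invented flux-density profiles versus distance perpendicular to the shock front
# (stand-ins for Fig. 11 of Morse et al. 1996), in arbitrary units.
offset = np.linspace(0.0, 10.0, 200)                        # arcsec from the shock front
xray_density = np.exp(-offset / 3.0)                        # 0.1-2 keV flux density
oiii_density = np.exp(-0.5 * ((offset - 1.0) / 0.8) ** 2)   # [OIII]5007 flux density

# Integrate each profile perpendicular to the shock, then obscure [OIII] by
# A_V = 0.34 mag (N_H = 6.2e20 cm^-2 with a normal gas-to-dust ratio).
f_xray = np.trapz(xray_density, offset)
f_oiii = np.trapz(oiii_density, offset) * 10.0 ** (-0.4 * 0.34)

print(f"F(0.1-2 keV) / F([OIII]5007) ~ {f_xray / f_oiii:.1f}   (arbitrary-unit profiles)")
```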
If most of the observed soft X-ray emission originates in the NLR and the hydrogen column is similar to that inferred towards the optical (narrow) line emission, a photoionizing shock model with shock velocity $``$ 400 – 500 km s<sup>-1</sup> is compatible with the observations. On the other hand, if the X-ray flux from the NLR is actually substantially lower than the observed total emission from the galaxy, a lower value of the shock velocity would be indicated or a classical photoionization model favored. Future high spatial resolution, medium spectral resolution observations with the ACIS on AXAF will spatially resolve the X-ray emission of the NLR in nearby Seyferts, and thus separate it from both the galaxy disk and the compact nucleus. These measurements will also allow the gas temperature and N<sub>H</sub> to be determined, providing a much more precise check of the model. ## 3 Coronal Iron Line Emission As emphasised by DSI, the post-shock gas should display a rich collisionally ionized UV spectrum. However, these lines are very sensitive to reddening. Another prediction of fast shock models is strong emission in the coronal forbidden lines of iron, such as \[Fe X\]$`\lambda `$6374 and \[Fe XIV\]$`\lambda `$5303. These lines are expected whenever the shocked gas is hot enough to collisionally ionize Fe to these species. Table 1 shows both absolute fluxes and fluxes relative to H$`\beta `$ in these two lines. The H$`\beta `$ emission (H$`\beta _S`$) is that from only the post-shock gas - emission from the precursor is not included in the listed ratio. The range of shock velocities consistent with both the optical line ratios observed in Seyferts (Dopita & Sutherland 1995) and the soft X-ray observations (Section 2) is 300 - 500 km s<sup>-1</sup>. For such shock velocities, the ratio of total H$`\beta `$ flux (H$`\beta _T`$ \- shock plus precursor) to shock-only flux (H$`\beta _S`$) is 2.1 - 2.2 (DSI, tables 8 and 10). Thus the predicted \[Fe X\]$`\lambda `$6374/H$`\beta _T`$ ratios are in the range 0.1 - 0.19, excluding any \[Fe X\]$`\lambda `$6374 from the photoionized precursor. The observed, reddening-corrected \[Fe X\]$`\lambda `$6374/H$`\beta `$ ratios in Seyfert 2s are in the range $`<`$ 0.014 - 0.16 (Koski 1978; Penston et al. 1984), with most galaxies in the range $`<`$ 0.014 - 0.06. The strongest \[Fe X\]$`\lambda `$6374 emitters are consistent with a 300 - 500 km s<sup>-1</sup> shock model, but most galaxies show an \[Fe X\]$`\lambda `$6374/H$`\beta `$ ratio which is a factor of 2 - 14 lower than the predictions. A similar conclusion comes from the general lack of detection of \[Fe XIV\]$`\lambda `$5303 in Seyfert galaxies. Of course, a contribution from lower velocity ($`<`$ 300 km s<sup>-1</sup>) shocks would enhance the H$`\beta `$ flux without significantly increasing \[Fe X\]$`\lambda `$6374 or \[Fe XIV\]$`\lambda `$5303 and thus reduce the predicted ratio. However, the optical emission line spectrum would then be of low excitation and would not match a Seyfert NLR spectrum. Thus the weakness of the iron coronal lines may favor a conventional photoionization model over the shock model. Depletion of iron onto grains would reduce the coronal line fluxes, but it would decrease the X-ray emissivity as well, and much of the iron is liberated by the time radiative cooling sets in (Vancura et al. 1994). Thus the \[Fe X\]/H$`\beta `$ ratios would be reduced by a modest amount in models with grains. 
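The step from the shock-only ratios of Table 1 to the \[Fe X\]$`\lambda `$6374/H$`\beta _T`$ values quoted above is simple bookkeeping, sketched below; the two shock-only input ratios are placeholders consistent with the quoted 0.1 - 0.19 range rather than the actual Table 1 entries.

```python
# Placeholder shock-only [Fe X]6374/H-beta_S ratios for 300 and 500 km/s shocks
# (chosen to be consistent with the quoted 0.1-0.19 total-H-beta range; the real
# values are in Table 1, which is not reproduced here).
fex_over_hbeta_shock = {300: 0.22, 500: 0.40}

# Ratio of total (shock + precursor) to shock-only H-beta (DSI, tables 8 and 10).
hbeta_total_over_shock = {300: 2.2, 500: 2.1}

for v, r_shock in fex_over_hbeta_shock.items():
    r_total = r_shock / hbeta_total_over_shock[v]
    print(f"{v} km/s shock: predicted [Fe X]6374/H-beta_T ~ {r_total:.2f} "
          "(observed in Seyfert 2s: < 0.014 to 0.16, mostly < 0.014 to 0.06)")
```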
This research was supported by NASA through grants NAG 53393 and NAG 81027, and by NSF through grant AST9527289. We are grateful to P. Ferruit, J. A. Morse and N. Nagar for help and advice.
no-problem/9901/cond-mat9901263.html
ar5iv
text
# Influence of $`Cu`$ on spin-polaron tunneling in the ferromagnetic state of $`La_{2/3}Ca_{1/3}Mn_{1-x}Cu_xO_3`$ from the resistivity data ## Abstract Nearly a $`50\%`$ decrease of resistivity $`\rho (T,x)`$ (accompanied by a $`5\%`$ reduction of the peak temperature $`T_C(x)`$) due to just $`4\%`$ $`Cu`$ doping on the $`Mn`$ site of $`La_{2/3}Ca_{1/3}Mn_{1-x}Cu_xO_3`$ is observed. Attributing the observed phenomenon to the substitution induced decrease of the polaron energy $`E_\sigma (x)`$ below $`T_C(x)`$, all data are found to be well fitted by the nonthermal coherent tunneling expression $`\rho (T,x)=\rho _0e^{-\gamma M^2(T,x)}`$ assuming $`M(T,x)=M_R(x)+M_0(x)\mathrm{tanh}\left\{\sqrt{\left[T_C(x)/T\right]^2-1}\right\}`$ for the magnetization in the ferromagnetic state. The best fits through all the data points yield $`M_0(x)\simeq \sqrt{1-x}M_0(0)`$, $`M_R(x)\simeq \sqrt{x}M_0(0)`$, and $`E_\sigma (x)\simeq E_\sigma (0)(1-x)^4`$ for the $`Cu`$ induced modifications of the $`Mn`$-spin-dominated zero-temperature spontaneous magnetization, the residual paramagnetic contribution, and the spin-polaron tunneling energy, respectively, with $`E_\sigma (0)=0.12eV`$ and $`2R\simeq 10\AA `$ for the spin-polaron’s size. As is well known, the ground state of the highly magnetoresistive conductor $`La_{2/3}Ca_{1/3}MnO_3`$ is ferromagnetic (FM), and the paramagnetic-ferromagnetic transition is accompanied by a sharp drop in resistivity below $`T_C`$. Such a correlation is considered a basic element of the so-called magnetically induced electron localization scenario, in which the changes of the observable resistivity at low temperatures are related to the corresponding changes of the local magnetization, and a coherent nonthermal tunneling charge carrier transport mechanism dominates other diffusion processes. The effects of elemental substitution on the properties of $`La_{2/3}Ca_{1/3}MnO_3`$ have been widely studied in an attempt to further shed light on the underlying transport mechanisms in this interesting material. In particular, substitution of rare-earth atoms (like $`Y`$ or $`Gd`$) on the $`La`$ site leads to a lowering of the ferromagnetic (and “metal-insulator”) transition temperature $`T_C`$, due mostly to the cation size mismatch. At the same time, the reduction of $`T_C`$ and a rather substantial drop of resistivity in the FM region due to replacement of $`Mn`$ ions with metals (like $`Co`$ or $`Ni`$) are ascribed to a weakening of the Zener double-exchange interaction between two unlike ions. In other words, similar to the effects of an applied magnetic field, metal-ion doping was found to decrease the polaron tunneling energy barrier (thus increasing the correlation length). In this paper we present a comparative study of resistivity measurements on two manganite samples from the $`La_{2/3}Ca_{1/3}Mn_{1-x}Cu_xO_3`$ family, for $`x=0`$ and $`x=0.04`$, over a wide temperature interval (from $`20K`$ to $`300K`$). As we shall see, this very small amount of impurity leads to a marked (factor of two) drop in the resistivity, hardly understandable within conventional scattering theories. The data are in fact well fitted by a nonthermal spin tunneling expression for the resistivity assuming a magnetization $`M(T,x)`$ dependent charge carrier correlation length $`L(M)`$. The samples examined in this study were prepared by the standard solid-state reaction from stoichiometric amounts of $`La_2O_3`$, $`CaCO_3`$, $`MnO_2`$, and $`CuO`$ powders.
The necessary heat treatment was performed in air, in alumina crucibles at $`1300^{\circ }C`$ for 2 days to preserve the right phase stoichiometry. Powder X-ray diffraction patterns are characteristic of perovskites. No appreciable changes in the diffraction patterns induced by $`Cu`$ doping have been observed (thus suggesting that no structural changes have occurred after replacement of $`Mn`$ by $`Cu`$). Energy Dispersive X-ray microanalyses confirm the presence of copper on the manganese crystallographic sites. The electrical resistivity $`\rho (T,x)`$ was measured using the conventional four-probe method. To avoid Joule and Peltier effects, a dc current $`I=1mA`$ was injected (as a one second pulse) successively on both sides of the sample. The voltage drop $`V`$ across the sample was measured with high accuracy by a $`KT256`$ nanovoltmeter. Fig.1 presents the temperature dependence of the resistivity $`\rho (T,x)`$ for two $`La_{2/3}Ca_{1/3}Mn_{1-x}Cu_xO_3`$ samples, with $`x=0`$ and $`x=0.04`$, respectively. Notice a rather sharp (nearly a $`50\%`$) drop of resistivity (both near the peak and on its low temperature side) for the doped sample, along with a small reduction of the transition (peak) temperature $`T_C(x)`$, reaching $`T_C(0)=265K`$ and $`T_C(0.04)=250K`$, respectively. Since no tangible structural changes have been observed upon copper doping, the Jahn-Teller mechanism can be safely ruled out, and the most reasonable cause for the resistivity drop in the doped material is the reduction of the spin-polaron tunneling energy $`E_\sigma `$, which within the localization scenario is tantamount to an increase of the charge carrier correlation length $`L=\sqrt{2\mathrm{}^2/mE_\sigma }`$ (here $`m`$ is an effective polaron mass). In the FM region (below $`T_C(x)`$), the tunneling based resistivity reads $`\rho [M(T,x)]=\rho _se^{2R/L(M)}`$ where $`\rho _s^{-1}=e^2R^2\nu _{ph}N_m`$, with $`R`$ being the tunneling distance (and $`2R`$ the size of a small spin polaron), $`\nu _{ph}`$ the phonon frequency, and $`N_m`$ the density of available states. In turn, the correlation length $`L(M)`$ depends on the temperature and on the concentration of copper $`x`$ through the corresponding dependencies of the magnetization $`M(T,x)`$. Assuming, along the main lines of conventional mean-field approximation schemes, that $`L(M)=L_0/(1-M^2/M_L^2)`$ (with $`M_L`$ being a fraction of the saturated magnetization $`M(0)`$), we arrive at the following expression for the tunneling dominated resistivity $$\rho (T,x)=\rho _0e^{-\gamma M^2(T,x)},$$ (1) with $`\rho _0=\rho _se^{2R/L_0}`$ and $`\gamma =2R/L_0M_L^2`$. To account for the observed behavior of the resistivity, we identify $`T_C`$ with the doping-dependent Curie temperature $`T_C(x)`$, and consider that the temperature and $`x`$ dependence of the magnetization is the sum of a classical Curie-Weiss contribution and a residual term, namely $$M(T,x)=M_R(x)+M_0(x)\mathrm{tanh}\left\{\sqrt{\left[T_C(x)/T\right]^2-1}\right\}.$$ (2) Specifically, $`M_R(x)`$ is interpreted as a $`Cu`$ induced paramagnetic contribution, while $`M_0(x)`$ accounts for the deviation of the ferromagnetically aligned $`Mn`$ magnetic moments of the undoped material in the presence of copper atoms. In fact, save for the $`M_R(x)`$ term, Eq.(2) is an analytical (approximate) solution of the well-known Curie-Weiss mean-field equation for the spontaneous magnetization, viz. $`M(T,x)/M(0,x)=\mathrm{tanh}\left\{\left[M(T,x)/M(0,x)\right](T_C(x)/T)\right\}`$.
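A minimal sketch of how Eqs. (1) and (2) can be fitted to a measured $`\rho (T)`$ curve below $`T_C`$ is given below. The data are synthetic, generated from the model itself with $`T_C=250K`$ (as for the $`x=0.04`$ sample); $`M_0`$ is fixed to unity to remove the trivial scale degeneracy with $`\gamma `$, and all starting values are illustrative assumptions rather than the parameters actually obtained from Fig. 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def magnetization(T, Tc, MR, M0=1.0):
    """Eq. (2) for T < Tc: residual term plus the approximate Curie-Weiss solution."""
    return MR + M0 * np.tanh(np.sqrt((Tc / T) ** 2 - 1.0))

def resistivity(T, rho0, gamma, Tc, MR):
    """Eq. (1): rho = rho0 * exp(-gamma * M^2); with M0 fixed to 1, gamma absorbs M0^2."""
    return rho0 * np.exp(-gamma * magnetization(T, Tc, MR) ** 2)

# Synthetic stand-in for the measured rho(T) of the doped sample (Tc = 250 K from the
# text; every other number here is an illustrative assumption).
rng = np.random.default_rng(0)
T = np.linspace(20.0, 245.0, 60)
rho_data = resistivity(T, 3.5e-3, 2.0, 250.0, 0.06) * (1.0 + 0.01 * rng.normal(size=T.size))

popt, _ = curve_fit(resistivity, T, rho_data, p0=[3e-3, 1.5, 248.0, 0.1],
                    bounds=([0.0, 0.0, 246.0, 0.0], [1.0, 10.0, 300.0, 1.0]))
print(dict(zip(["rho0", "gamma", "Tc", "MR"], np.round(popt, 4))))
```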
To exclude any extrinsic effects (like grain boundary scattering), we consider the normalized resistivity $`\mathrm{\Delta }\rho (T,x)/\mathrm{\Delta }\rho (0,x)`$ with $`\mathrm{\Delta }\rho (T,x)=\rho (T,x)-\rho (T_C(x),x)`$ and $`\rho (0,x)`$ being the resistivity taken at the lowest available temperature. Fig.2 depicts the above-defined normalized resistivity versus the reduced temperature $`T/T_C(x)`$ for the two samples. First of all, notice that the $`x=0`$ and $`x=0.04`$ data merge both at low temperatures and above $`T_C(x)`$, while, starting from $`T_C(x)`$ and below, the $`Cu`$-doped (squares) and $`Cu`$-free (circles) samples follow different routes. On the other hand, approaching $`T_C(x)`$ from low temperatures, a (most likely) fluctuation driven crossover from the undoped to the doped transport mechanism is clearly seen near $`T/T_C(x)\simeq 0.9`$. The solid lines are the best fits through all the data points according to Eqs.(1) and (2), yielding $`M_0(0)/M_L=1.41\sqrt{L_0/2R}`$, $`M_R(0)=0`$, $`M_0(0.04)=0.98M_0(0)`$, and $`M_R(0.04)=0.06M_0(0)`$ for the model parameters. Recalling that in our present study the copper content is $`x=0.04`$, the above estimates can be cast into the following explicit $`x`$ dependencies of the residual $`M_R(x)\simeq \sqrt{x}M_0(0)`$ and spontaneous $`M_0(x)\simeq \sqrt{1-x}M_0(0)`$ magnetizations, giving rise to an exponential (rather than power) $`x`$ dependence of the observed resistivity $`\rho (T,x)`$ (see Eqs.(1) and (2)). Furthermore, assuming (as usual) $`2R/L_0\simeq 1`$ for the (undoped) tunneling distance to correlation length ratio and using the found value of the residual resistivity $`\rho (T_C(0),0)=\rho _0\simeq 3.5m\mathrm{\Omega }m`$, the density of states $`N_m\simeq 9\times 10^{26}m^{-3}eV^{-1}`$ and the phonon frequency $`\nu _{ph}\simeq 2\times 10^{13}s^{-1}`$ (estimated from the Raman shift for optical $`MnO`$ modes), we obtain $`R\simeq 5\AA `$ for an estimate of the tunneling distance (corresponding to $`2R\simeq 10\AA `$ for the spin-polaron’s size), which in turn results in $`L_0\simeq 10\AA `$ (using a free electron approximation for the polaron’s mass $`m`$) and $`E_\sigma (0)\simeq 0.12eV`$ for the zero-temperature copper-free charge carrier correlation length and the spin-polaron tunneling energy, respectively, both in good agreement with reported estimates of these parameters in other systems. Based on the above estimates, we can roughly evaluate the copper induced variation of the correlation length $`L(x)`$ and of the corresponding spin polaron tunneling energy $`E_\sigma (x)`$. Indeed, according to the earlier introduced definitions, $`L[M(T_C(x))]=L_0/(1-M_R^2(x)/M_L^2)\simeq L_0/(1-2x)`$ and $`E_\sigma (x)\propto L^{-2}(x)`$, which lead to $`L(x)\simeq L(0)/(1-x)^2`$ and $`E_\sigma (x)\simeq E_\sigma (0)(1-x)^4`$ for small enough $`x`$. These explicit $`x`$ dependencies (along with the composition variation of the transition temperature $`T_C(x)\simeq T_C(0)(1-x)`$) remarkably correlate with the $`Co`$ induced changes in $`La_{2/3}Ca_{1/3}Mn_{1-x}Co_xO_3`$ recently observed by Rubinstein et al. Interestingly, the above-obtained estimates agree very well with the observed peak (at $`T_C(x)`$) and residual (at $`T\to 0`$) resistivities, defined through the model parameters as follows (see Eqs.(1) and (2)) $$\rho (T_C(x),x)=\rho _0e^{-\gamma M_R^2(x)},$$ (3) and $$\rho (0,x)=\rho _0e^{-\gamma M^2(0,x)},$$ (4) with $`\rho _0`$, $`M_R(x)`$, and $`M(0,x)`$ defined earlier. This thus provides an elegant self-consistency check for the interpretation employed.
In summary, a rather substantial drop in the resistivity $`\rho (T,x)`$ of a $`La_{2/3}Ca_{1/3}Mn_{1-x}Cu_xO_3`$ sample upon just $`4\%`$ $`Cu`$ doping is reported. Along with lowering the Curie point $`T_C(x)`$, the copper substitution is argued to add a small paramagnetic contribution $`M_R(x)`$ to the $`Mn`$-spin-dominated spontaneous magnetization $`M`$ of the undoped material, leading to a small decrease of the spin-polaron tunneling energy $`E_\sigma (x)`$. However, because the carrier transport is dominated by tunneling, this small amount of impurity is sufficient to change the absolute value of the resistivity drastically across the whole temperature range. The temperature and $`x`$ dependencies of the observed resistivity were found to be rather well fitted by nonthermal coherent tunneling of spin polarons, together with a heuristic expression for the magnetization $`M(T,x)`$ in the ferromagnetic state (an approximate analytic solution of the mean-field Curie-Weiss equation), resulting in the exponential (rather than linear) doping dependence of $`\rho (T,x)`$. ###### Acknowledgements. Part of this work has been financially supported by the Action de Recherche Concertées (ARC) 94-99/174. S.S. thanks FNRS (Brussels) for some financial support.
no-problem/9901/math9901074.html
ar5iv
text
# Differential interactive games: The short-term predictions ## 1. The differential interactive games ###### Definition 1 An interactive system (with $`n`$ interactive controls) is a control system with $`n`$ independent controls coupled with unknown or incompletely known feedbacks (the feedbacks, which are called the behavioral reactions, as well as their couplings with the controls, are of so complicated a nature that they cannot be described completely). An interactive game is a game with interactive controls of each player. Below we shall consider only deterministic and differential interactive systems. For simplicity we suppose that $`n=2`$. In this case the general interactive system may be written in the form: $$\dot{\phi }=\mathrm{\Phi }(\phi ,u_1,u_2),$$ $`1`$ where $`\phi `$ characterizes the state of the system and $`u_i`$ are the interactive controls: $$u_i(t)=u_i(u_i^{}(t),[\phi (\tau )]|_{\tau \le t}),$$ i.e. the independent controls $`u_i^{}(t)`$ coupled with the feedbacks on $`[\phi (\tau )]|_{\tau \le t}`$. One may suppose that the feedbacks are integrodifferential in $`t`$ in general, but below we shall consider only differential dependence. It means that $$u_i(t)=u_i(u_i^{}(t),\phi (t),\dot{\phi }(t),\ddot{\phi }(t),\dots ,\phi ^{(n)}(t)).$$ $`2`$ It is reasonable to suppose that all functional dependencies (1) and (2) are smooth. ## 2. Short-term predictions. Basic procedure Let $`u_i`$ and $`u_i^{}`$ ($`i=1,2`$) have $`n`$ degrees of freedom. Let us consider $`2n+1`$ arbitrary functions $`p_j(\stackrel{}{u},\stackrel{}{u}^{},\phi )`$ of $`\stackrel{}{u}=(u_1,u_2)`$, $`\stackrel{}{u}^{}=(u_1^{},u_2^{})`$ and $`\phi `$ ($`j=1,2,\dots ,2n+1`$). The knowledge of the processes in the game at moments $`\tau <t`$ allows one to consider $`2n`$ magnitudes $`f_i(\tau )=\sum _{j=1}^{2n+1}\alpha _{ij}(\tau )p_j(\stackrel{}{u}(\tau ),\stackrel{}{u}^{}(\tau ),\phi (\tau ))`$ ($`i=1,2,\dots ,2n`$) such that $`\dot{f}_i(\tau )\approx 0`$. One may suppose that the coefficients $`\alpha _{ij}(\tau )`$ are continuous and, moreover, belong to the Lipschitz class. Their differentiability is too strong a condition to be satisfied in practice. For a fixed moment $`t`$, let us consider $`\mathrm{\Delta }t>0`$ such that the Jacobi matrix of the mapping $`\stackrel{}{u}\mapsto (f_1,\dots ,f_{2n})`$ is nondegenerate at the moment $`\tau =t-\mathrm{\Delta }t`$ and at the point $`\stackrel{}{u}=\stackrel{}{u}(\tau )`$. Under these conditions one can locally express $`\stackrel{}{u}`$ via $`\stackrel{}{u}^{}`$ and $`\phi `$: $$\stackrel{}{u}(\tau )=\stackrel{}{U}_\tau (\stackrel{}{u}^{}(\tau ),\phi (\tau );f_1(\tau ),\dots ,f_{2n}(\tau )).$$ $`3`$ The relations obtained may be used to approximate the interactive game by ordinary differential games. Let us consider a fixed moment $`t_0`$. For $`t>t_0`$ the interactive controls $`u_i(t)`$ will be replaced by their approximations $`u_i^{}(t)`$ both in the evolution equations of the game and in the expressions for the functions $`f_i`$. The magnitudes $`u_i^{}(t)`$ are defined by the formulas $$\stackrel{}{u}^{}(t)=\stackrel{}{U}_{t-\mathrm{\Delta }t}(\stackrel{}{u}^{}(t),\phi (t-\mathrm{\Delta }t);f_1(t-\mathrm{\Delta }t),\dots ,f_{2n}(t-\mathrm{\Delta }t))$$ $`4`$ for $`t>t_0`$ (for $`t<t_0`$ they coincide with the interactive controls $`u_i(t)`$). Note that the $`f_i`$ were calculated using the values of $`\stackrel{}{u}^{}`$ at the moment $`t-\mathrm{\Delta }t`$.
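The basic procedure admits a compact numerical sketch, given below for the simplest case $`n=1`$ (so that $`\stackrel{}{u}=(u_1,u_2)`$ and $`2n+1=3`$ functions $`p_j`$). The toy evolution law, the hidden feedback and the particular $`p_j`$ are invented for illustration only, and since some superscripts in Eqs. (2)-(4) are ambiguous in this copy, Eq. (4) is read here as: insert the known independent control at time $`t`$ into the relation identified at $`t-\mathrm{\Delta }t`$.

```python
import numpy as np
from scipy.optimize import fsolve

def Phi(phi, u):                                   # evolution law, Eq. (1) (invented)
    return -0.5 * phi + u[0] - 0.3 * u[1]

def hidden_feedback(u_indep, phi):                 # "unknown" interactive coupling, Eq. (2) (invented)
    return np.array([u_indep[0] + 0.2 * np.tanh(phi),
                     u_indep[1] - 0.1 * phi])

def p(u, u_indep, phi):                            # the 2n+1 chosen functions p_j (invented)
    return np.array([u[0] - u_indep[0], u[1] - u_indep[1], u[0] * u[1] + phi])

# Observed history of the game on a grid (the "past", tau < t).
dt, steps = 0.01, 400
t_grid = np.arange(steps) * dt
u_indep = np.stack([np.sin(t_grid), np.cos(2.0 * t_grid)], axis=1)   # independent controls
phi = np.zeros(steps)
u = np.zeros((steps, 2))
for k in range(steps - 1):
    u[k] = hidden_feedback(u_indep[k], phi[k])
    phi[k + 1] = phi[k] + dt * Phi(phi[k], u[k])
u[-1] = hidden_feedback(u_indep[-1], phi[-1])

# Step 1: at tau = t - Delta_t, build the f_i with df_i/dtau ~ 0; the coefficient
# vectors alpha_i span the null space of the observed dP/dtau.
k0 = steps - 2
P_now = p(u[k0], u_indep[k0], phi[k0])
dP = (P_now - p(u[k0 - 1], u_indep[k0 - 1], phi[k0 - 1])) / dt
alpha = np.linalg.svd(dP[None, :])[2][1:]          # 2n rows of coefficients alpha_ij
f = alpha @ P_now                                  # f_i(t - Delta_t)

# Eqs. (3)-(4): recover the interactive control at t from the delayed relation.
def residual(u_guess):
    return alpha @ p(u_guess, u_indep[-1], phi[k0]) - f

u_pred = fsolve(residual, u[k0])
print("predicted u(t):", np.round(u_pred, 3), "   true u(t):", np.round(u[-1], 3))
```
In practice the quality of such a prediction depends entirely on how well the chosen $`p_j`$ capture the feedback structure, which is exactly the point of the set-selection procedures discussed in the next section.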
Thus, we receive an ordinary differential game with retarded (delayed) arguments, to which the more or less standard analysis of ordinary differential games can be applied. The approximation (4) generalizes the retarded control approximation of the article . The values of the state $`\phi `$ calculated at the moment $`t\mathrm{\Delta }t`$ may be changed to its values calculated at the moment $`t`$ or at the intermediate moment $`t\alpha \mathrm{\Delta }t`$, where the parameter $`\alpha [0,1]`$ is chosen to provide the best approximation. Note that really we constructed a series of ordinary differential games parametrized by $`t_0`$. The obtained predictions may be used as short-term predictions for processes in the initial interactive game. Certainly, as it was marked in it is difficult to perceive and to interpret the analytically represented results in real time. Thus, it is rather reasonable to use some visual representation for the series of the approximating games. Thus, we are constructing an enlargement of the interactive game, in which the players interactively observe the visual predictions for this game in real time. Of course, such enlargement may strongly transform the structure of interactivity of the game (i.e. to change the feedbacks entered into the interactive controls of players). ## 3. Short-term predictions. Further developments The basic procedure exposed above essentially depends on the choice of the functions $`p_j(\stackrel{}{u},\stackrel{}{u}^{},\phi )`$. Its further developments are based on the attempts to choose them dynamically in the most effective way. The simplest improvement is in the consideration of several sets $`\{p_j^{(\mu )}\}`$ of such functions. The index $`\mu `$ labels an individual set. Fixing the moment $`t_0`$ and $`\mathrm{\Delta }t`$ one performs the basic procedure for each $`\mu `$ starting at $`t_0\mathrm{\Delta }t`$ instead of $`t_0`$. The obtained short-term predictions for $`\phi `$ are compared with the real data in the time interval $`t_0\mathrm{\Delta }t<t<t_0`$ (at this interval $`\stackrel{}{u}^{}`$ coincides with its observed value). The best prediction determines $`\mu `$, which is used for the short-term predictions for $`t>t_0`$. The index $`\mu `$ may vary over a finite set or over a smooth manifold. For example, let us consider the set of $`2n+2`$ functions $`p_j(\stackrel{}{u},\stackrel{}{u}^{},\phi )`$. They generate a linear space $`V`$. Any hyperplane in this space is spanned by $`2n+1`$ functions, which may be used in the basic procedure. In this case $`\mu `$ labels a hyperplane in the $`2n+2`$ dimensional space $`V`$ and, therefore, belongs to the $`2n+1`$-dimensional projective space $`^{2n+1}=(V^{})`$. Dynamics in the interactive game determines a curve $`\mu (t)`$ in $`^{2n+1}`$. The point $`\mu (t_0)`$ is the index of the best prediction constructed as above for the moment $`t_0`$. The curve $`\mu (t)`$ may be discontinuous. The next improvement is based on the dynamical selection of the considered sets of functions $`\{p_j^{(\mu )}\}`$ with finite number of $`\mu `$ during the game. One uses the procedure above to construct an individual approximation at the fixed moment $`t_0`$. Let the set labelled by $`\mu _0`$ gives the worst prediction. In the moment $`t_0+\mathrm{\Delta }t`$ one adds any new set to the considered ensemble of sets instead of the $`\mu _0`$-th one, repeat the procedure for this moment and so on. One may specify various algorithms to choose the new set. ## 4. 
Conclusions Thus, several heuristic procedures of the short-term predictions for processes in the differential interactive games were considered. Note that the problem of an estimation of precision of such predictions is not correct in view of interactivity of the game. One may only say that in any finite time interval $`t_0<t<t_1`$ the prediction becomes heuristically better with $`\mathrm{\Delta }t0`$. At least, it may be reasonably effective only for rather short intervals $`(t_0,t_1)`$<sup>1</sup><sup>1</sup>1To estimate the maximal admitted $`t_1`$ one may use the following procedure: let us consider two approximations started at $`t\mathrm{\Delta }t_1`$ and $`t\mathrm{\Delta }t_2`$, the moment $`t_1`$ is defined as the maximally possible one providing that the divergence of two various predictions is not too large. . Nevertheless, in practice the interactive effects are essential only on the short time intervals and the short-term analysis of the interactive game strategically reduces it to an ordinary game. The main problem here is to extract the necessary data from such analysis to define this new game; here, the investigation of series of approximating differential games and the unraveling of algebraic correlations between them (in spirit of the nonlinear geometric algebra) is apparently crucial (cf.).
no-problem/9901/nucl-ex9901009.html
ar5iv
text
# Freeze-Out Parameters in Central 158⋅A GeV 208Pb+208Pb Collisions ## Abstract Neutral pion production in central 158$``$A GeV <sup>208</sup>Pb+<sup>208</sup>Pb collisions has been studied in the WA98 experiment at the CERN SPS. The $`\pi ^0`$ transverse mass spectrum has been analyzed in terms of a thermal model with hydrodynamic expansion. The high accuracy and large kinematic coverage of the measurement allow to limit previously noted ambiguities in the extracted freeze-out parameters. The results are shown to be sensitive to the shape of the velocity distribution at freeze-out. Heavy ion reactions at sufficiently high energies produce dense matter which may provide the necessary conditions for the transition from a hadronic state to a deconfined phase, the Quark-Gluon Plasma. Since a finite thermalized system without external containment pressure will necessarily expand, part of the thermal excitation energy will be converted into collective motion which will be reflected in the momentum spectra of the final hadrons. The dynamics of the expansion may depend on the presence or absence of a plasma phase. The strongly interacting hadrons are expected to decouple in the late stages of the collision. Their transverse momentum spectra should therefore provide information about the conditions of the system at freeze-out, in particular about the temperature and collective velocity of the system, if the thermal assumption is valid. The application of a thermal description is non-trivial. There is no reason to believe neither that chemical and kinetic freeze-out should be identical, nor that there should be unique thermal freeze-out temperatures for all hadrons, nor unique chemical freeze-out temperatures for all flavour changing reactions. It is likely that chemical equilibrium is not fully attained (see e.g. ), implying that chemical parameters will also influence momentum spectra through contributions from decays of heavier resonances. Furthermore, it is not obvious that this problem should have a stationary solution since particle emission will occur throughout the full time evolution of the collision and so, in principle, would require a full space-time integration with varying parameters. Most attempts to extract freeze-out parameters from experiment assume local thermal equilibrium and fit parameterizations of hydrodynamical models to the experimental distributions . Already the earliest analyses noted ambiguities in fitting the hadron transverse mass spectra due to an anti-correlation between the fitted temperature, T, and transverse flow velocity, $`\beta _T`$. Two-particle interferometric (HBT) measurements provide information on the spatial and temporal extent of the emission volume, but are also sensitive to the collective motion of the source (see e.g. ). Within a hydrodynamical parameterization of the source at freeze-out, the transverse two-particle correlations have been shown to be sensitive only to the ratio $`\beta _T^2/T`$ . Hence HBT analyses have a $`\beta _TT`$ ambiguity which is roughly orthogonal to that resulting from fits to the single particle spectra. This fact has recently been used by the NA49 collaboration to constrain the freeze-out parameters to lie within the region $`\beta _T=0.55\pm 0.12`$ and $`T=120\pm 12`$ MeV for central Pb+Pb collisions . 
Alternatively, a recent analysis of $`\pi ^+,K^+,`$ and $`K^{}`$ distributions and $`\pi ^+`$ and $`\pi ^{}`$ two-particle correlations measured by the NA44 collaboration for central Pb+Pb collisions using a 9-parameter hydrodynamical model fit gave freeze-out parameters of $`\beta _T=0.443\pm 0.023`$ and $`T=95.8\pm 3.5`$ MeV. These analyses suggest that a single set of freeze-out parameters can describe the hadron single particle distributions and two-particle correlations, with moderate temperature and large transverse flow velocity. On the other hand, various thermal model analyses of particle production ratios, especially strangeness production (see e.g. Ref. for a recent summary), have indicated rather high chemical freeze-out temperatures. Use of integrated yields in these analyses allows to obtain conclusions on the temperature which are insensitive to the amount of transverse flow. In a recent analysis of results at SPS energies, including Pb+Pb collisions, good agreement is obtained if partial strangeness saturation is assumed with a chemical freeze-out temperature of about 180 MeV . A successful thermal interpretation of relativistic heavy ion collisions must provide an accurate description of the pion spectra since pions provide the “thermal bath” of the late stages the collision. In this letter we discuss the extraction of thermal freeze-out parameters from the neutral pion transverse mass distribution for central 158$``$A GeV <sup>208</sup>Pb+<sup>208</sup>Pb collisions. These data provide important constraints due to their accuracy and coverage in transverse mass. The analysis of the $`\pi ^0`$ spectrum, within a particular hydrodynamical model, reveals the importance of the shape of the velocity distribution at freeze-out. The default shape, derived from a Gaussian spatial distribution, favors a large thermal freeze-out temperature, similar to temperatures extracted for chemical freeze-out, but in contradiction to conclusions obtained based on analyses of limited coverage particle spectra and HBT results . The CERN experiment WA98 consists of large acceptance photon and hadron spectrometers together with several other large acceptance devices which allow to measure various global variables on an event-by-event basis. The results presented here were obtained from an analysis of the data taken with Pb beams in 1995 and 1996. The 10% most central reactions ($`\sigma _{central}630\mathrm{mb}`$) have been selected using the transverse energy $`E_T`$ measured in the MIRAC calorimeter. Neutral pions are reconstructed via their $`\gamma \gamma `$ decay branch using the WA98 lead-glass photon detector, LEDA, which consisted of 10,080 individual modules with photomultiplier readout. The detector was located at a distance of 21.5 m from the target and covered the pseudorapidity interval $`2.35<\eta <2.95`$. The general analysis procedure, described in , is similar to that used in the WA80 experiment . The momentum distributions are fully corrected for geometrical acceptance and reconstruction efficiency. The systematic error on the absolute yield is $`10\%`$ and increases sharply below $`p_T=0.4\mathrm{GeV}/c`$. An additional systematic error originates from the uncertainty on the momentum scale of 1%. The influence of this rises slowly for large $`p_T`$ and leads to an uncertainty on the yield of 15% at $`p_T=4\mathrm{GeV}/c`$. The measured neutral pion cross section from central Pb+Pb reactions as a function of $`m_Tm_0`$ is shown in Fig. 1. 
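The $`\pi ^0`$ identification underlying Fig. 1 rests on the two-photon invariant mass; a minimal kinematic sketch, with invented photon energies and directions roughly appropriate for the $`2.35<\eta <2.95`$ acceptance, is:

```python
import numpy as np

def photon(energy, theta, phi):
    """Massless photon four-vector (E, px, py, pz) in GeV."""
    return np.array([energy,
                     energy * np.sin(theta) * np.cos(phi),
                     energy * np.sin(theta) * np.sin(phi),
                     energy * np.cos(theta)])

def inv_mass(p1, p2):
    e, px, py, pz = p1 + p2
    return np.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Hypothetical photon pair near eta ~ 2.6 (theta = 2*arctan(exp(-eta)) ~ 8.5 deg); the
# 0.039 rad opening angle is chosen so that the pair is consistent with a ~7 GeV pi0.
theta0 = 2.0 * np.arctan(np.exp(-2.6))
g1 = photon(4.0, theta0, 0.0)
g2 = photon(3.0, theta0 + 0.039, 0.0)

print(f"m_gg = {1e3 * inv_mass(g1, g2):.1f} MeV/c^2")    # close to m_pi0 = 135.0 MeV/c^2
```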
Included is a fit with a hydrodynamical model including transverse flow and resonance decays. This computer program calculates the direct production and the contributions from the most important resonances having two- or three-body decays including pions ($`\rho `$, $`\mathrm{K}_S^0`$, $`\mathrm{K}^{*}`$, $`\mathrm{\Delta }`$, $`\mathrm{\Sigma }+\mathrm{\Lambda }`$, $`\eta `$, $`\omega `$, $`\eta ^{\prime }`$). The code, originally intended for charged pions, has been adapted to predict neutral pion production. The model uses a gaussian transverse spatial density profile truncated at $`4\sigma `$. The transverse flow rapidity is assumed to be a linear function of the radius. For all results presented here, a baryonic chemical potential of $`\mu _B=200\mathrm{MeV}`$ has been used. The results are not very sensitive, however, to the choice of $`\mu _B`$ for the $`m_T-m_0`$ region considered here. This model provides an excellent description of the neutral pion spectrum with a temperature $`T=185\mathrm{MeV}`$ and an average flow velocity of $`\beta _T=0.213`$. These values are very similar to the parameters obtained with similar fits to neutral pion spectra in central reactions of <sup>32</sup>S+Au. The $`2\sigma `$ lower limit on the temperature is $`T^{low}=171\mathrm{MeV}`$ and the corresponding upper limit on the flow velocity is $`\beta _T^{upp}=0.253`$ (all limits given use the data for $`m_T-m_0>2\mathrm{GeV}/c^2`$ as upper limits only, to allow for additional hard-scattering contributions). The observed curvature at low $`m_T`$ is largely a result of resonance decay contributions. Performing a fit with only the direct contribution leads to $`T=142\mathrm{MeV}`$ and $`\beta _T=0.301`$, with corresponding $`2\sigma `$ limits of $`T^{low}=135\mathrm{MeV}`$ and $`\beta _T^{upp}=0.318`$, similar to other analyses which have neglected decay contributions. The larger average velocity which results in this case is due to the fact that all of the observed curvature must now be accounted for by transverse flow. The high statistical accuracy and large transverse mass coverage of the present $`\pi ^0`$ measurement reveal the concave curvature of the $`\pi ^0`$ spectrum over a large $`m_T`$ range, which constrains the parameters significantly. This is further demonstrated by studying the local slope at each $`m_T`$. The local (inverse) slope is given by $$T_{local}^{-1}=-\left(E\frac{d^3\sigma }{dp^3}\right)^{-1}\frac{d}{dm_T}\left(E\frac{d^3\sigma }{dp^3}\right).$$ (1) The local slope results are plotted in Fig. 2. Each individual value of $`T_{local}`$ has been extracted from 3 adjacent data points of Fig. 1. The data are compared to the hydrodynamical model best fit results of Fig. 1, as well as to fits in which the transverse flow velocities have been fixed to larger values comparable to those obtained by Refs. and NA49 (sets 2 and 3). The corresponding fit parameters are given in Table I. The comparison demonstrates that while the large transverse flow velocity fits can provide a reasonable description of the data up to transverse masses of about 1 GeV, they significantly overpredict the local slopes at large transverse mass. While application of the hydrodynamical model at large transverse mass is questionable, the model cannot overpredict the measured yield. The observed overprediction therefore rules out the assumption of large transverse flow velocities, or points to a deficiency in the model assumptions used in these fits.
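The local-slope extraction of Eq. (1) from three adjacent data points can be sketched in a few lines; the spectrum below is a synthetic exponential standing in for the measured $`\pi ^0`$ data of Fig. 1, so its only purpose is to show the finite-difference estimate recovering the input slope.

```python
import numpy as np

def local_slope(m_t, f):
    """Local inverse-slope parameter from three adjacent points of a transverse mass
    spectrum, T_local = -f / (df/dm_T), with df/dm_T from a central difference."""
    dfdm = (f[2:] - f[:-2]) / (m_t[2:] - m_t[:-2])
    return m_t[1:-1], -f[1:-1] / dfdm

# Synthetic exponential spectrum with T = 0.230 GeV (arbitrary normalization).
m_t = np.linspace(0.3, 3.5, 120)          # m_T - m_0 in GeV/c^2
spectrum = 50.0 * np.exp(-m_t / 0.230)

centers, t_local = local_slope(m_t, spectrum)
print(np.round(t_local[:3], 4))           # ~0.23 GeV everywhere, up to a small finite-difference bias
```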
The curvature in the $`\pi ^0`$ spectrum at large transverse mass is a result of the distribution of transverse velocities. Although the spectrum is not directly sensitive to the spatial distribution of particle emission, within this model it depends indirectly on the spatial distribution through the assumption that the transverse rapidity increases linearly with radius. The large curvature at large transverse mass is due to high velocity contributions which result from the tail of the assumed gaussian density profile. Figure 3 shows the transverse source velocity distributions $`dN/d\beta _T`$ for the different parameter sets. More precisely, these are source emission functions integrated over all variables except the transverse velocity and the rapidity, i.e. they are weighted with the produced particle multiplicity. The curves labelled 1-3 correspond to the calculations in figure 2 using a gaussian spatial profile. In addition, velocity profiles are shown for a uniform density profile (set 4) and for a Woods-Saxon distribution: $$\rho (r)=\frac{1}{1+\mathrm{exp}\left[(r-r_0)/\mathrm{\Delta }\right]}$$ (2) with $`\mathrm{\Delta }/r_0=0.02`$ (set 5). These are included in figures 2 and 3. It is seen that the uniform density assumption truncates the high velocity tail, resulting in less curvature in the pion spectrum, while the Woods-Saxon profile has a more diffuse edge at high $`\beta _T`$. While the gaussian and uniform density assumptions have very different velocity profiles, it is interesting that both can provide acceptable fits to the pion spectrum, with best fit results with similar $`\beta _T`$ and $`T`$ parameters, which give similar effective temperatures, and which have similar velocity widths, $`\beta _{RMS}`$, as shown in Table I. Compared to the gaussian profile result, the best fit result using the uniform profile gives a lower temperature of 178 MeV and would lead to weaker limits of $`\beta _T^{upp}=0.42`$ and $`T^{low}=134\mathrm{MeV}`$. Limits cannot be set using the Woods-Saxon profile due to increased fit ambiguity. If the data for $`m_T-m_0>2\mathrm{GeV}/c^2`$ are used only as upper limits, as explained above, a best fit result with $`T=129\mathrm{MeV}`$ and $`\beta _T=0.42`$ is obtained. The data presented here can be well described with high thermal freeze-out temperatures, similar to temperatures which have been extracted for chemical freeze-out, and small transverse flow velocities. Note again that chemical and thermal freeze-out are not necessarily expected to be the same. On the other hand, if the larger velocities obtained in other analyses which have considered limited particle spectra together with HBT results persist, then the present analysis suggests much lower thermal freeze-out temperatures. For example, none of the different velocity profile assumptions used in this analysis allowed us to reproduce the results of ref.: all profiles studied require a temperature of 90 MeV or less, if $`\beta _T=0.55`$ is assumed. The present data obviously provide important information on the shape of the freeze-out velocity distribution. A more extensive systematic study would require further guidance from full hydrodynamical calculations, which is beyond the scope of this paper. Recent hydrodynamical model calculations have found reasonable agreement with transverse mass spectra within a broad range of assumptions. However, these studies did not attempt to constrain the model parameters or assumptions through a rigorous comparison with the data.
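The qualitative difference between the density profiles can be illustrated with a small Monte Carlo sketch of the multiplicity-weighted velocity distribution, assuming, as in the model above, a transverse flow rapidity linear in radius. The surface rapidity and the cut radii below are illustrative assumptions rather than fitted values, so only the relative shapes, in particular the extended high-velocity tail of the gaussian profile, are meaningful.

```python
import numpy as np

def velocity_distribution(density, r_max, eta_surf, n=200_000, seed=1):
    """Monte Carlo estimate of the multiplicity-weighted dN/dbeta_T for a transverse
    density profile, with flow rapidity linear in radius: eta_T(r) = eta_surf * r / r_max."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.0, r_max, n)
    weight = density(r) * r                     # multiplicity weight ~ rho(r) * r dr
    beta = np.tanh(eta_surf * r / r_max)
    hist, edges = np.histogram(beta, bins=100, range=(0.0, 1.0), weights=weight)
    return 0.5 * (edges[1:] + edges[:-1]), hist / hist.sum()

# Density profiles discussed in the text (radii in units of the profile scale).
gaussian    = lambda r: np.exp(-0.5 * r ** 2)                    # truncated at 4 sigma
uniform     = lambda r: np.ones_like(r)                          # box of unit radius
woods_saxon = lambda r: 1.0 / (1.0 + np.exp((r - 1.0) / 0.02))   # Eq. (2), Delta/r0 = 0.02

eta_surf = 0.45   # assumed surface flow rapidity, the same for all three profiles
for name, rho, r_max in [("gaussian", gaussian, 4.0),
                         ("uniform", uniform, 1.0),
                         ("Woods-Saxon", woods_saxon, 1.3)]:
    beta, dn = velocity_distribution(rho, r_max, eta_surf)
    mean = float(np.sum(beta * dn))
    beta_99 = beta[np.searchsorted(np.cumsum(dn), 0.99)]
    print(f"{name:12s}: <beta_T> = {mean:.3f}, 99% of the multiplicity below beta_T = {beta_99:.3f}")
```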
In summary, we have argued that hydrodynamical models which attempt to extract the thermal freeze-out parameters of relativistic heavy ion collisions must provide an accurate description of the pion spectra, since pions most directly reflect the thermal environment in the late stage of the collision. In particular, models, or parameter sets, which overpredict the observed pion yields, even at large transverse mass, can immediately be ruled out. We have demonstrated that the high accuracy neutral pion spectra with large transverse mass coverage can constrain the thermal freeze-out parameters and model assumptions. Within the context of the hydrodynamical model of Ref., the default velocity profile favors large thermal freeze-out temperatures, similar to the chemical freeze-out temperature determined for the same system. Only special choices of the velocity profile allow large average freeze-out velocities similar to those extracted from other recent analyses which also consider HBT results. On the other hand, the corresponding freeze-out temperatures are then $`\lesssim 90`$ MeV, significantly lower than other estimates. The present results indicate that the determination of the freeze-out parameters remains an open question. It will be important to determine whether full hydrodynamical models can reproduce the high precision pion data and thereby constrain the assumed freeze-out hypersurface. We wish to thank Urs Wiedemann for assistance with the model calculations and valuable discussions. This work was supported jointly by the German BMBF and DFG, the U.S. DOE, the Swedish NFR and FRN, the Dutch Stichting FOM, the Stiftung für Deutsch-Polnische Zusammenarbeit, the Grant Agency of the Czech Republic under contract No. 202/95/0217, the Department of Atomic Energy, the Department of Science and Technology, the Council of Scientific and Industrial Research and the University Grants Commission of the Government of India, the Indo-FRG Exchange Program, the PPE division of CERN, the Swiss National Fund, the INTAS under Contract INTAS-97-0158, ORISE, Grant-in-Aid for Scientific Research (Specially Promoted Research & International Scientific Research) of the Ministry of Education, Science and Culture, the University of Tsukuba Special Research Projects, and the JSPS Research Fellowships for Young Scientists. ORNL is managed by Lockheed Martin Energy Research Corporation under contract DE-AC05-96OR22464 with the U.S. Department of Energy. The MIT group has been supported by the US Dept. of Energy under the cooperative agreement DE-FC02-94ER40818.
no-problem/9901/cond-mat9901296.html
ar5iv
text
# Electromodulation of the bilayer 𝜈=2 quantum Hall phase diagram. ## Abstract We make a number of precise experimental predictions for observing the various magnetic phases and the quantum phase transitions between them in the $`\nu `$=2 bilayer quantum Hall system. In particular, we analyze the effect of an external bias voltage on the quantum phase diagram, finding that a finite bias should readily enable the experimental observation of the recently predicted novel canted antiferromagnetic phase in transport and spin polarization measurements. Recent theoretical work predicts the existence of a novel canted antiferromagnetic (C) phase in the $`\nu `$=2 bilayer quantum Hall system under quite general experimental conditions, and encouraging experimental evidence in its support has recently emerged through inelastic light scattering spectroscopy and transport measurements. Very recent theoretical works have shown that such a C-phase may exist in a multilayer superlattice system (with $`\nu `$=1 per layer), and that in the presence of disorder-induced-interlayer tunneling fluctuations the C-phase may break up into a rather exotic spin Bose glass phase with the quantum phase transition between the C-phase and the Bose glass phase being in the same universality class as the two-dimensional superconductor-insulator transition in the dirty boson system. In this Letter we consider the effect of an external electric field induced electromodulation (through an applied gate bias voltage) of the $`\nu `$=2 bilayer quantum phase diagram. Our goal is to provide precise experimental predictions which will facilitate direct and unambiguous observations of the various magnetic phases, and more importantly the quantum phase transitions among them. We find the effect of a gate bias to be quite dramatic on the $`\nu `$=2 bilayer quantum phase diagram. In particular, a finite gate bias makes the C-phase more stable which could now exist even in the absence of any interlayer tunneling in contrast to the situations considered in references where the interlayer tunneling induced finite symmetric-antisymmetric gap was crucial in the stability of the C-phase. Thus, a finite gate bias, according to our theoretical calculations presented here, has a qualitative effect on the $`\nu `$=2 bilayer quantum phase diagram – it produces a spontaneously interlayer-coherent canted antiferromagnetic phase which exists even in the absence of any inter-layer tunneling. The prediction of this spontaneously coherent canted (CC) phase is one of the new theoretical results of this paper. The theoretical construction of the bilayer $`\nu `$=2 quantum phase diagram and predicting its experimental consequences in the presence of the bias voltage are our main results. The bilayer $`\nu `$=2 system is characterized by five independent energy scales: the cyclotron energy, $`\omega _c`$ (we take $`\mathrm{}`$=1 throughout); the interlayer tunneling energy characterized by $`\mathrm{\Delta }_{SAS}`$, the symmetric-antisymmetric energy gap; the Zeeman energy or the spin-splitting $`\mathrm{\Delta }_z`$; the intralayer Coulomb interaction energy and the interlayer Coulomb interaction energy. The application of the external electric field adds another independent energy scale, the bias voltage, to the problem. 
Neglecting the largest ($`\omega _c`$, which we take to be very large) energy scale, one is still left with four independent dimensionless energy variables to consider in constructing the $`\nu `$=2 bilayer quantum phase diagram in the presence of finite bias. In the absence of any bias the quantum phase diagram is surprisingly simple, allowing for only three qualitatively different quantum magnetic phases, as established by a microscopic Hartree-Fock theory, a long wavelength field theory based on the quantum $`O(3)`$ nonlinear sigma model, and a bosonic spin theory. These three magnetic phases are the fully spin polarized ferromagnetic phase (F), which is stabilized for large values of $`\mathrm{\Delta }_z`$ (or for strong intralayer Coulomb interaction), the paramagnetic symmetric or spin singlet (S) phase, which is stabilized for large values of $`\mathrm{\Delta }_{SAS}`$ (or for strong interlayer Coulomb interaction), and the intermediate C phase, where the electron spins in each layer are tilted away from the external magnetic field direction due to the competition between ferromagnetic and singlet ordering. Note that the S phase is fully pseudospin polarized with $`\mathrm{\Delta }_{SAS}`$, the symmetric-antisymmetric gap, acting as the effective pseudospin splitting. The C phase is a true many-body symmetry-broken phase not existing in the single particle picture (and is stabilized by the interlayer antiferromagnetic exchange interaction). The single particle theory predicts a level crossing and a direct first order transition between the S phase and the F phase (nominally at $`\mathrm{\Delta }_z=\mathrm{\Delta }_{SAS}`$) as the Zeeman splitting increases. Coulomb interaction creates the new symmetry broken C phase, which prevents any level crossing (and maintains an energy gap throughout so that there is always a quantized Hall effect) between F and S phases, and makes all phase transitions in the system continuous second order transitions. The canted phase is canted in both the spin and the pseudospin space. One key experimental difficulty in observing the predicted phase transitions (in the absence of any external bias voltage) is that a given sample (with a fixed value of $`\mathrm{\Delta }_{SAS}`$, determined by the system parameters such as well widths, separations, etc.) is always at a fixed point in the quantum phase diagram calculated in references because $`\mathrm{\Delta }_z`$, $`\mathrm{\Delta }_{SAS}`$ and the Coulomb energies are all fixed by the requirement $`\nu `$=2 and the sample parameters. Therefore, a given experimental sample in this so-called balanced condition (i.e. no external bias, equal electron densities in the two layers on the average) is constrained to lie in the F or C or S phase, and the only way to see any phase transitions is to make a number of samples with different parameters lying in different parts of the phase diagram and to investigate and compare properties, as was done in the light scattering experiments of references . This is obviously an undesirable situation because what one really wants is to vary an experimental control parameter (e.g. an external electric field) to tune the system through the phase boundaries and study the quantum phase transition instead of studying different samples. (Theoretically this tuning is easily achieved by making $`\mathrm{\Delta }_z`$, $`\mathrm{\Delta }_{SAS}`$ and the Coulomb energies continuous variables in the phase diagram, but experimentally, of course, this cannot be done.) 
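The single-particle statement quoted above can be made explicit in a few lines: at $`\nu `$=2 one fills the two lowest of the four lowest-Landau-level sublevels obtained from the Zeeman and symmetric-antisymmetric splittings. The sketch below ignores the Coulomb terms entirely, so it reproduces only the bare S-F level crossing at $`\mathrm{\Delta }_z=\mathrm{\Delta }_{SAS}`$ and not the interaction-induced canted phase.

```python
def single_particle_phase(delta_z, delta_sas):
    """Ground state at nu = 2 from filling the two lowest of the four lowest-Landau-level
    sublevels (spin x symmetric/antisymmetric); Coulomb interactions are ignored."""
    if abs(delta_z - delta_sas) < 1e-12:
        return "level crossing: S and F degenerate (interactions replace this by the canted phase)"
    levels = {
        ("S", "up"):   -0.5 * (delta_sas + delta_z),
        ("S", "down"): -0.5 * (delta_sas - delta_z),
        ("A", "up"):   +0.5 * (delta_sas - delta_z),
        ("A", "down"): +0.5 * (delta_sas + delta_z),
    }
    filled = set(sorted(levels, key=levels.get)[:2])
    if filled == {("S", "up"), ("S", "down")}:
        return "spin-singlet S phase (both spins of the symmetric level filled)"
    return "ferromagnetic F phase (spin up filled in both sublevels)"

for dz, dsas in [(0.5, 1.0), (1.5, 1.0), (1.0, 1.0)]:
    print(f"Delta_z = {dz}, Delta_SAS = {dsas}: {single_particle_phase(dz, dsas)}")
```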
In this Letter we show that an externally applied electric field through a gate bias, which takes one away (off-balance) from the balanced condition and introduces unequal layer electron densities is potentially an extremely powerful experimental tool in studying the $`\nu `$=2 bilayer quantum phase transitions. Our results indicate that using an external gate bias as a tuning parameter, a technique already extensively used in experimental studies of bilayer structures, should lead to direct experimental observations of the predicted quantum phases in $`\nu `$=2 bilayer systems and the continuous transitions between them in both transport measurements and in spin polarization measurements through NMR Knight shift experiments. We have used two complementary techniques, the direct Hartree-Fock theory and the effective bosonic spin theory, to evaluate the bilayer $`\nu `$=2 quantum phase diagram including the effect of a finite bias voltage. The resulting bias dependent phase diagrams (in the $`\mathrm{\Delta }_z\mathrm{\Delta }_{SAS}`$ space) for the Hartree-Fock theory and the bosonic spin theory are shown in Figs. 1 and 2, respectively. Although there are some quantitative differences between the phase diagrams in the two models (to be discussed below), the main qualitative features are the same: increasing bias voltage enhances the phase space of the C phase mostly at the cost of the F phase, and for large enough bias the C phase becomes stable even for $`\mathrm{\Delta }_{SAS}`$=0, this CC-phase is spontaneously coherent. We note that the CC-phase (i.e. the bias induced C-phase along the $`\mathrm{\Delta }_{SAS}`$=0 line) and the C-phase are continuously connected and there is no quantum phase transition between them. Note that the S-phase, which is the singlet or the symmetric phase, is also stabilized for $`\mathrm{\Delta }_{SAS}`$=0 by finite bias effects. This phase (i.e. the S-phase along the $`\mathrm{\Delta }_{SAS}`$=0) is the spontaneous interlayer coherent symmetric or singlet phase (the CS-phase) and is analogous to the corresponding $`\nu `$=1 spontaneous interlayer coherent phase studied extensively in the context of the $`\nu `$=1 bilayer quantum phase diagram. There is, however, a fundamental difference between the coherent CS phase for our $`\nu `$=2 bilayer system and the corresponding $`\nu `$=1 spontaneous interlayer coherent phase; our $`\nu `$=2 bilayer CS phase can only exist under a finite external bias (the same as our CC phase). Unlike the corresponding $`\nu `$=1 bilayer system or the recently studied zero magnetic field bilayer system there is no spontaneous breaking of the pseudospin $`U(1)`$ symmetry (generated by the interlayer electron density difference) in our $`\nu `$=2 coherent bilayer phases which can only exist in the presence of an external voltage bias. We emphasize that there is no analogy to our canted phase (C or CC phase) in the corresponding $`\nu `$=1 bilayer quantum phase diagram. We note that the main difference (cf. Figs.1 and 2) between the Hartree-Fock theory and the bosonic spin theory is that the Hartree-Fock theory underestimates the stability of the S phase (compared with the bosonic spin theory) at small values of $`\mathrm{\Delta }_z`$. This is a real effect and arises from the neglect of quantum fluctuations in the Hartree-Fock theory which treats the interlayer tunneling as a first order perturbation correction in the S-phase. 
The bosonic spin theory is essentially exact for the S-phase and is therefore more reliable near the C-S phase boundary, particularly for small values of $`\mathrm{\Delta }_z`$ where tunneling effects are important. In Fig.3 we show our calculated quantum phase diagrams in the gate voltage ($`V_+`$) -tunneling ($`\mathrm{\Delta }_{SAS}`$) space for fixed values of the Zeeman energy $`\mathrm{\Delta }_z`$ (and the Coulomb energies) using both the Hartree-Fock and the bosonic spin theory. The phase diagrams in the two theories are qualitatively similar, and the interlayer coherent phases (CC and CS phases) are manifestly obvious in Fig.3 because the C and the S phases now clearly extend to the $`\mathrm{\Delta }_{SAS}`$=0 line (the ordinate) for finite bias voltage. In general, the presence of bias therefore allows for six different quantum magnetic phases in the $`\nu `$=2 bilayer system: the usual F,C, and S phases of references as well as the purely Néel (N) phase along the $`\mathrm{\Delta }_z`$=0 line in Fig.1 (the F,C,S,N phases are all allowed in the balanced $`V_+`$=0 situation), and two new (bias-induced) coherent phases (CC and CS) along the $`\mathrm{\Delta }_{SAS}`$=0 line in Figs. 1-3. The most important effect of the external bias, which is an important new prediction of the current paper, is that it allows for a continuous tuning of the quantum phase of a $`\nu `$=2 bilayer system within a single gated sample, as is obvious from Figs.1-3. The predicted quantum phase transitions can now be studied in light scattering, transport, and NMR experiments in single gated samples by tuning the bias voltage to sweep through various phases as shown in Figs.1-3. The last issue we address here is what one expects to see experimentally in transport and spin polarization measurements, in sweeping through the phase diagram of Figs.1-3 under an external gate bias. In Fig.4 we show our calculated results for the variation in the spin polarization of the system as a function of the bias $`V_+`$ with all the other system parameters being fixed. As expected the spin polarization is complete in the F phase and remains a constant as a function of $`V_+`$ until it hits the F-C phase boundary where it starts to drop continuously through the C phase, essentially dropping to zero at the the C-S phase boundary, remaining zero in the S-phase. At zero temperature the two phase transitions (i.e. F-C and C-S) are characterized by cusps in the spin polarization (Fig.4) which perhaps will not be observable in finite temperature experiments. The main features of the calculated spin polarization as a function of bias, as shown in Fig.4, should, however be readily observable in NMR Knight shift measurements, including possibly the Knight shift difference in the two layers (Fig.4). We have also carried out calculations of the interlayer charge imbalance (which is zero in the F phase and then rises continuously throughout the C and the S phases reaching full charge polarization for large $`V_+`$ in the S phase) as a function of the bias voltage. There are two cusps in the calculated imbalance as a function of $`V_+`$, corresponding to the F$``$C and the C$``$S phase transitions, which should be experimentally observable. The calculated imbalance therefore looks almost exactly complementary to the spin polarization results shown in Fig.4. 
Finally we have also calculated the charged excitation energies within the simple Hartree-Fock and bosonic theories (assuming no textural excitations such as skyrmions or merons), which lead to weak cusps in the activation energies at the phase boundaries. Using the parameters of the samples in ref., we conclude from our numerical calculations that the phase transition being observed in the $`\nu =2`$ bilayer transport experiments of ref. is the transition from the C-phase to the S-phase as a function of the density (and not from the F-phase to the C-phase as implied in ref.). Neither phase in ref. is a spontaneous interlayer coherent phase (because $`\mathrm{\Delta }_{SAS}`$ is finite in the experiment), in contrast to the claims of ref. Our results indicate, however, that it should be possible to see all three $`\nu `$=2 quantum phases (F, C, and S) in a single gated sample by varying the bias voltage. We hope that the detailed results presented in this paper will encourage future bilayer $`\nu `$=2 experiments under external gate bias to explore the predicted rich phase diagram. We are grateful to A.H.MacDonald and Z.F.Ezawa for useful discussions. This work is supported by the National Science Foundation (at ITP, UCSB). LB also acknowledges financial support from grants PB96-0085 and from the Fundación Ramón Areces. SDS is supported by the US-ONR.
no-problem/9901/astro-ph9901204.html
ar5iv
text
# 1 Introduction. ## 1 Introduction. The QSO 3C 345 ($`V`$=16, $`z`$=0.595) is a core-dominated radio source that displays apparent superluminal motions, with components traveling in the parsec-scale jet along curved trajectories and speeds up to 10$`c`$ \[Zensus et al. (1995)\]. We have observed this QSO with the very long baseline interferometry (VLBI) technique using the NRAO Very Long Baseline Array (VLBA) at three epochs and four frequencies, in order to monitor it in total and linear polarization intensity. ## 2 Observations and Imaging. We observed 3C 345 in 1995.84, 1996.41, and 1996.81, using the VLBA at 22, 15, 8.4, and 5 GHz, and recording with a 16 MHz bandwidth at all frequencies. At each frequency, the source was observed for about 14 hrs, using 5-minute scans and interleaving all observing frequencies. Some calibrator scans (on 3C 279, 3C 84, NRAO 91, OQ 208 and 3C 286) were inserted during the observations. ### Total Intensity. After the fringe-fitting process, we exported the data into the differential mapping program DIFMAP \[Shepherd et al. (1994)\], and we obtained total intensity images using the hybrid mapping technique. The components C7 and C8 can be identified close to the core D, in the images at higher frequencies. At the lower frequencies, the jet extends to the NW direction, turning to the N at $``$20 mas distances from the core. In Fig. 1, we show the central region of the source at 22 GHz for the three sampled epochs. To describe the structures observed within $``$3 mas distance from the core, we fit elliptical components with Gaussian brightness profiles to the visibility data at 15 and 22 GHz. The change measured in the fitted positions for the component C8 indicates that the component was moving away from the core, at an angular speed of 0.26$`\pm `$0.08 mas/yr at 22 GHz, which corresponds to an apparent speed of 5.1$`\pm `$1.8$`h^1c`$ (assuming $`H_0`$=100$`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$=0.5). The trajectory of C7 at the same frequency shows a proper motion of 0.29$`\pm `$0.08 mas/yr, or apparent speed of 5.7$`\pm `$1.8$`h^1c`$. At 15 GHz the respective values are C8: $`0.23\pm 0.11`$ mas/yr ($`4.5\pm 2.2h^1c`$) and C7: $`0.30\pm 0.11`$ mas/yr ($`5.9\pm 2.2h^1c`$). ### Polarized Intensity. Since the early works of Cotton et al. Cotton et al. (1984) up to now, important progress has been made in polarimetric VLBI observations. The VLBA has standardized feeds with low instrumental polarization, and the relatively small size of the antennas is compensated by the excellent performance of the receivers. In addition, a new method for self-calibrating the polarimetric data has been introduced Leppänen et al. (1995), enabling D-term determination by using the program source itself. Using this method, we have calibrated the instrumental polarization, determining the feed solutions for the receivers of all the antennas. This has allowed us to obtain maps of the linearly polarized emission from the source ($`P=Q+iU=pe^{2i\chi }=mIe^{2i\chi }`$, where $`Q`$ and $`U`$ are the Stokes parameters, $`p=mI`$ is the polarized intensity, $`m`$ is the fractional linear polarization, and $`\chi `$ is the position angle of the electric vector in the sky). In Fig. 2, we show a composition of the total intensity $`I`$ (grey scale), the polarized intensity $`p`$ (contours) and the electric vector orientation angle $`\chi `$ (segments, length proportional to p) for the 22 GHz observations in 1995.84. In Fig. 
2 the electric vector is aligned with the extremely curved jet direction at the inner 3 mas both for the core and the C7 component. The component C8 is boosted in polarized emission, showing an electric vector apparently perpendicular to this jet direction. Gómez et al. Gómez et al. (1994a, b) have modeled features similar to the ones presented here. They reported anticorrelation between the polarized and the total intensity flux close to a shock wave along a jet in its early evolution (near to the core). We should point out that the possible superposition between components with different electric vector orientations can lead to a cancellation of flux and produce apparent separations between polarized emission components. The brighter polarized emission in C8 might be explained by a shock wave in a curved jet geometry. More generally, the features of Fig. 2 can be explained in terms of a comprehensive shock model Wardle et al. (1994) in the framework of a helical geometry for the motion of the components Steffen et al. (1995). ## 3 Conclusions. We have monitored the superluminal QSO 3C 345 at three epochs within one year, observing with the VLBA at four frequencies. We have presented some results of these studies at the higher frequencies, showing the superluminal motions of components C8 and C7 with respect to the core component D, and the remarkably complex polarization structure near the core, which provides evidence for emerging components and changing projected jet direction within 3 mas from the core. The twist in the orientation of the electric vector along the jet can be explained in terms of an extremely curved helical geometry, following Steffen et al. Steffen et al. (1995). The electric field is parallel to the jet direction, and the boosting in the polarized emission at the component C8 and the change in the vector orientation can be the result of the presence of a shock wave in a bent jet.
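The apparent speeds quoted in Section 2 can be recovered from the fitted proper motions with a short calculation. The sketch below assumes the same cosmology as in the text ($`H_0`$=100$`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0`$=0.5), for which the proper-motion distance is $`D_M=(2c/H_0)[1-(1+z)^{-1/2}]`$, and uses $`\beta _{app}=\mu D_M/c`$; the numerical constants are standard unit conversions.

```python
import numpy as np

# Rough check of the quoted apparent speeds, assuming an Einstein-de Sitter cosmology
# (q0 = 0.5) with H0 = 100 h km/s/Mpc (here h = 1), so the result is in units of h^-1 c.
z     = 0.595
c_kms = 2.998e5                                   # speed of light [km/s]
H0    = 100.0                                     # [km/s/Mpc], h = 1
D_M   = 2.0 * (c_kms / H0) * (1.0 - 1.0 / np.sqrt(1.0 + z))    # proper-motion distance [Mpc]

mas_yr_to_rad_s = (1e-3 / 3600.0) * (np.pi / 180.0) / 3.156e7  # mas/yr -> rad/s
Mpc_to_km       = 3.086e19

for mu in (0.26, 0.29):                           # 22 GHz proper motions of C8 and C7 [mas/yr]
    beta_app = mu * mas_yr_to_rad_s * D_M * Mpc_to_km / c_kms
    print(f"mu = {mu} mas/yr  ->  beta_app ~ {beta_app:.1f} h^-1 c")
# prints ~5.1 and ~5.7, matching the apparent speeds quoted above
```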
no-problem/9901/astro-ph9901195.html
ar5iv
text
# Mass Density Perturbations from Inflation with Thermal Dissipation ## I Introduction In the past decade, there have been a number of studies on dissipative processes associated with the inflaton decay during its evolution. These studies have shed light on the possible effects of the dissipative processes. For instance, it was realized that dissipation effectively slows down the rolling of the inflaton scalar field $`\varphi `$ toward the true vacuum. These processes are capable of supporting the scenario of inflation. Recently, inspired by several new developments, the problem of inflation with thermal dissipation has attracted many re-investigations. The first progress is from the study of the non-equilibrium statistics of quantum fields, which has found that, under certain conditions, it seems to be reasonable to introduce a dissipative term (such as a friction-like term) into the equation of motion of the scalar field $`\varphi `$ to describe the effect of heat contact between the $`\varphi `$ field and a thermal bath. These studies showed that thermal dissipation and fluctuation will most likely appear during inflation if the inflaton is coupled to light fields. However, to realize sufficient e-folds of inflation with thermal dissipation, this theory needs to introduce tens of thousands of scalar and fermion fields interacting with the inflaton in an ad hoc manner. Namely, it is still far from a realistic model. Nevertheless, this study indicates that the condition necessary for the “standard” reheating evolution – a coupling of inflaton with light fields – is actually also the condition under which the effects of thermal dissipation during inflation should be considered. Secondly, in the case of a thermal bath with a temperature higher than the Hawking temperature, the thermal fluctuations of the scalar field play an important and even dominant role in producing the primordial perturbations of the universe. Based on these results, the warm inflation scenario has been proposed. In this model, the inflation epoch can smoothly evolve to a radiation-dominated epoch, without the need for a reheating stage. Dynamical analysis of inflaton systems with thermal dissipation gives further support to this model. It is found that the warm inflation solution is very common. A rate of dissipation as small as $`10^{-7}H`$, $`H`$ being the Hubble parameter during inflation, can lead to a smooth exit from inflation to radiation. Warm inflation also provides an explanation for the super-Hubble suppression. The standard inflationary cosmology, which is characterized by an isentropic de Sitter expansion, predicts that the particle horizon should be much larger than the present-day Hubble radius $`c/H_0`$. However, a spectral analysis of the COBE-DMR 4-year sky maps seems to show a lack of power in the spectrum of the primordial density perturbations on scales equal to or larger than the Hubble radius $`c/H_0`$. A possible explanation of this super-Hubble suppression is given by hybrid models, where the primordial density perturbations are not purely adiabatic, but mixed with an isocurvature component. Warm inflation is one of the mechanisms that can naturally produce both adiabatic and isocurvature initial perturbations. In this paper, we study the power spectrum of mass density perturbations caused by inflation with thermal dissipation. One purpose of developing the model of warm inflation is to explain the amplitudes of the initial perturbations. 
Usually, the amplitude of initial perturbations from quantum fluctuations of the inflaton depends on some unknown parameters of the inflation potential. However, for the warm inflation model, the amplitude of the initial perturbations is found to be mainly determined by the energy scale of inflation, $`M`$. If $`M`$ is taken to be about $`10^{15}`$ GeV, the possible amplitudes of the initial perturbations are found to be in a range consistent with the observations of the temperature fluctuations of the cosmic microwave background (CMB). That is, the thermally originated initial perturbations apparently do not directly depend on the details of the inflation potential, but only on some thermodynamical variables, such as the energy scale $`M`$. This result is not unexpected, because, like many thermodynamical systems, the thermal properties, including density fluctuations, should be determined by the thermodynamical conditions, regardless of other details. Obviously, it would be interesting to find more “thermodynamical” features which contain only observable quantities and thermodynamical parameters, as these predictions would be more useful for confronting models with observations. Guided by these considerations, we will extend the above-mentioned qualitative estimation of the order of the density perturbations to a quantitative calculation of the power spectrum of the density perturbations. We show that the power spectrum of warm inflation does not depend on unknown parameters of the inflaton potential and the dissipation, but only on the energy scale $`M`$. The spectrum is found to be a power law, and the index of the power law can be larger than or less than 1. More interestingly, we find that for a given $`M`$, the amplitude and the index of the power law are not independent of each other. In other words, the amplitude of the power spectrum is completely determined by the power index and the number $`M`$. Comparing this result with the observed temperature fluctuations of the CMB, we find that both amplitude and index of the power spectrum can be fairly well fitted if $`M\sim 10^{15}`$-$`10^{16}`$ GeV. This paper is organized as follows: In Sec. II we discuss the evolution of the radiation component for inflationary models with dissipation prescribed by a field-dependent friction term. In particular, we scrutinize the physical conditions under which the thermal fluctuations dominate the primordial density perturbations. Section III carries out the calculations of the power spectrum of the density perturbations of warm inflation. And finally, in Sec. IV we give the conclusions and discuss further observational tests. ## II Inflation with Thermal Dissipation ### A Basic equations Let us consider a flat universe consisting of a scalar inflaton field $`\varphi `$ and a thermal bath. Its dynamics is described by the following equations. The equations of the expanding universe are $$2\dot{H}+3H^2=-\frac{8\pi }{m_{\mathrm{Pl}}^2}\left[\frac{1}{2}\dot{\varphi }^2+\frac{1}{3}\rho _r-V(\varphi )\right],$$ (1) $$H^2=\frac{8\pi }{3}\frac{1}{m_{\mathrm{Pl}}^2}\left[\rho _r+\frac{1}{2}\dot{\varphi }^2+V(\varphi )\right],$$ (2) where $`H=\dot{R}/R`$ is the Hubble parameter, and $`m_{\mathrm{Pl}}=\sqrt{1/G}`$ the Planck mass. $`V(\varphi )`$ is the effective potential for the field $`\varphi `$, and $`\rho _r`$ is the energy density of the thermal bath. Actually the scalar field $`\varphi `$ is not uniform due to fluctuations. 
Therefore, the field $`\varphi `$ in Eqs.(2.1) and (2.2) should be considered as an average over the fluctuations. The equation of motion for the scalar field $`\varphi `$ in a de-Sitter universe is $$\ddot{\varphi }+3H\dot{\varphi }+\mathrm{\Gamma }\dot{\varphi }-e^{-2Ht}\nabla ^2\varphi +V^{}(\varphi )=0,$$ (3) where the friction term $`\mathrm{\Gamma }\dot{\varphi }`$ describes the interaction between the $`\varphi `$ field and a heat bath. Obviously, for a uniform field, or averaged $`\varphi `$, the term $`\nabla ^2\varphi `$ of Eq. (2.3) can be ignored. Statistical mechanics of quantum open systems has shown that the interaction of quantum fields with a thermal or quantum bath can be described by a general fluctuation-dissipation relation. It is probably reasonable to describe the interaction between the inflaton and the heat bath as a “decay” of the inflaton. These results support the idea of introducing a damping or friction term into the field equation of motion. In particular, the friction term with the form in Eq. (2.3), $`\mathrm{\Gamma }\dot{\varphi }`$, is a possible approximation for the dissipation of the $`\varphi `$ field in a heat bath environment in near-equilibrium circumstances. In principle, $`\mathrm{\Gamma }`$ can be a function of $`\varphi `$. In the cases of polynomial interactions between the $`\varphi `$ field and the bath environment, one may take a polynomial of $`\varphi `$ for $`\mathrm{\Gamma }`$, i.e., $`\mathrm{\Gamma }=\mathrm{\Gamma }_m\varphi ^m`$. The friction coefficient must be positive definite, hence $`\mathrm{\Gamma }_m>0`$, and the dissipative index of friction $`m`$ should be zero or an even integer if $`V(\varphi )`$ is invariant under the transformation $`\varphi \to -\varphi `$. The equation of the radiation component (thermal bath) is given by the first law of thermodynamics as $$\dot{\rho }_r+4H\rho _r=\mathrm{\Gamma }\dot{\varphi }^2.$$ (4) The temperature of the thermal bath can be calculated by $`\rho _r=(\pi ^2/30)g_{\mathrm{eff}}T^4`$, $`g_{\mathrm{eff}}`$ being the effective number of degrees of freedom at temperature $`T`$. The warm inflation scenario is generally defined by the characteristic that the thermal fluctuations of the scalar field dominate over the quantum fluctuations as the origin of the initial density perturbations. Because the thermal and quantum fluctuations of the scalar field are proportional to $`T`$ and $`H`$ respectively, a necessary condition for warm inflation models is the existence of a radiation component with temperature $$T>H$$ (5) during the inflationary expansion. Eq. (2.5) is also necessary for maintaining the thermal equilibrium of the radiation component. In general, the time scale for the relaxation of a radiation bath is shorter for higher temperature. Accordingly, to have a relaxation time of the bath shorter than the expansion time of the universe, a temperature higher than $`H`$ is generally needed. As a consequence of Eq. (2.5), the warm inflation scenario requires that the solutions of Eqs. (2.1) - (2.4) contain an inflation era, followed by a smooth transition to a radiation-dominated era. Dynamical system analysis also confirmed that for a massive scalar field $`V(\varphi )=\frac{1}{2}M^2\varphi ^2`$, the warm inflation solution of Eqs. (2.1) - (2.4) is very common. A smooth exit from inflation to the radiation era can be established even for a dissipation with $`\mathrm{\Gamma }`$ as small as $`10^{-7}H`$. A typical solution of warm inflation will be given in the next section. 
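The background quantities entering Eqs. (2.2), (2.4), and (2.5) are straightforward to evaluate numerically. The following sketch works in reduced units with $`m_{\mathrm{Pl}}`$ = 1; the specific numbers are illustrative choices rather than values taken from this paper.

```python
import numpy as np

# Minimal helpers for the background quantities of Eqs. (2.2) and (2.5),
# in reduced units with m_Pl = 1 (all energies in Planck masses).
m_Pl  = 1.0
g_eff = 100.0                        # effective number of relativistic degrees of freedom

def hubble(rho_r, phidot, V):
    """H from the Friedmann equation (2.2)."""
    return np.sqrt(8.0 * np.pi / (3.0 * m_Pl**2) * (rho_r + 0.5 * phidot**2 + V))

def temperature(rho_r):
    """Bath temperature from rho_r = (pi^2/30) g_eff T^4."""
    return (30.0 * rho_r / (np.pi**2 * g_eff))**0.25

# Warm-inflation condition (2.5): the bath must stay hotter than the Hawking temperature.
M     = 1e-4                         # an inflation scale M ~ 1e-4 m_Pl, i.e. roughly 1e15 GeV
V0    = M**4
rho_r = 1e-3 * V0                    # a sub-dominant radiation bath, consistent with Eq. (2.7)
print(temperature(rho_r) > hubble(rho_r, 0.0, V0))   # True for these illustrative numbers
```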
### B Evolution of the radiation component during inflation Since the warm inflation solution does not rely on a specific potential, we will employ the popular $`\varphi ^4`$ potential commonly used for the “new” inflation models. It is $$V(\varphi )=\lambda (\varphi ^2-\sigma ^2)^2.$$ (6) To have slow-roll solutions, the potential should be flat enough, i.e., $`\lambda \lesssim (M/m_{\mathrm{Pl}})^4`$, where $`V(0)\equiv M^4=\lambda \sigma ^4`$. For models based on the potential of Eq. (2.6), the existence of a thermal component during inflation seems to be inevitable. In order to maintain the $`\varphi `$ field close to its minimum at the onset of the inflation phase transition, a thermal force is generically necessary. In other words, there is, at least, a weak coupling between the $`\varphi `$ field and other fields contributing to the thermal bath. During the slow roll period of inflation, the potential energy of the $`\varphi `$ field is fairly constant, and its kinetic energy is small, so that the interaction between the $`\varphi `$ field and the fields of the thermal bath remains about the same as at the beginning. As such, there is no compelling reason to ignore these interactions. Strictly speaking, we should use a finite temperature effective potential $`V(\varphi ,T)`$. However, the correction due to finite temperature is negligible. The leading temperature correction to the potential (2.6) is of order $`\lambda T^2\varphi ^2`$. On the other hand, as mentioned above, we have $`\lambda \lesssim (M/m_{\mathrm{Pl}})^4`$ for the flatness of the potential. Therefore, $`\lambda T^2\lesssim M^6/m_{\mathrm{Pl}}^4\sim (M/m_{\mathrm{Pl}})^2H^2\ll H^2`$, i.e., the influence of the finite temperature effective potential can be ignored when $`\varphi <m_{\mathrm{Pl}}`$. Now, we try to find warm inflation solutions of Eqs. (2.1) - (2.4) for weak friction $`\mathrm{\Gamma }<H`$. In this case, Eqs. (2.1) - (2.3) are actually the same as the “standard” new inflation model when $$\rho _r\ll V(0).$$ (7) Namely, we have the slow-roll solution as $$\dot{\varphi }\simeq -\frac{V^{}(\varphi )}{3H+\mathrm{\Gamma }(\varphi )}\simeq -\frac{V^{}(\varphi )}{3H},$$ (8) $$\frac{1}{2}\dot{\varphi }^2\ll V(0),$$ (9) and $$H^2\simeq H_i^2\equiv \frac{8\pi }{3}\frac{V(0)}{m_{\mathrm{Pl}}^2}\sim \left(\frac{M}{m_{\mathrm{Pl}}}\right)^2M^2,$$ (10) where the subscript $`i`$ denotes the starting time of the inflation epoch. During the stage of $`\varphi \ll \sigma `$, it is reasonable to neglect the $`\varphi ^3`$ term in Eq. (2.3). We have then $$\ddot{\varphi }+(3H+\mathrm{\Gamma })\dot{\varphi }-4\lambda \sigma ^2\varphi =0.$$ (11) Considering $`\mathrm{\Gamma }<H`$, an approximate solution of $`\varphi `$ can immediately be found as $$\varphi =\varphi _ie^{\alpha Ht},$$ (12) where $`\alpha \equiv \lambda ^{1/2}(m_{\mathrm{Pl}}/M)^2/2\pi `$ and $`\varphi _i`$ is the initial value of the scalar field. Substituting solution (2.12) into Eq. (2.4), we have the general solution of (2.4) as $$\rho _r(t)=Ae^{(m+2)\alpha Ht}+Be^{-4Ht}$$ (13) where $`A=\alpha ^2H\mathrm{\Gamma }_m\varphi _i^{m+2}/[(m+2)\alpha +4]`$, $`B=\rho _r(0)-A`$, and $`\rho _r(0)`$ is the initial radiation density. Obviously, the term $`B`$ in Eq. (2.13) describes the blowing away of the initial radiation by the inflationary exponential expansion, and the term $`A`$ is due to the generation of radiation by the $`\varphi `$ field decay. According to Eq. (2.13), the evolution of the radiation has two phases. Phase 1 covers the period during which the $`B`$ term is dominant, and the radiation density drops drastically due to the inflationary expansion. 
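The closed form (2.13) can be checked by integrating Eq. (2.4) directly along the slow-roll trajectory (2.12). The sketch below works in toy units with $`H`$ = 1 and uses illustrative parameter values that are not meant to reproduce any figure of this paper.

```python
import numpy as np
from scipy.integrate import odeint

# Toy units with H = 1; m, alpha, Gamma_m, phi_i and rho_r(0) are illustrative numbers.
m, alpha, Gamma_m, phi_i, rho0 = 2, 0.1, 0.1, 1.0, 1e-3

def drho_dt(rho, t):
    phi    = phi_i * np.exp(alpha * t)                 # slow-roll solution, Eq. (2.12)
    phidot = alpha * phi
    return -4.0 * rho + Gamma_m * phi**m * phidot**2   # Eq. (2.4) with H = 1

t = np.linspace(0.0, 20.0, 2001)
rho_num = odeint(drho_dt, rho0, t, rtol=1e-10, atol=1e-14).ravel()

A = alpha**2 * Gamma_m * phi_i**(m + 2) / ((m + 2) * alpha + 4.0)
rho_closed = A * np.exp((m + 2) * alpha * t) + (rho0 - A) * np.exp(-4.0 * t)

# maximum relative deviation is tiny (set by the integrator tolerance): the forms agree
print(np.max(np.abs(rho_num / rho_closed - 1.0)))
```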
The component of radiation evolves into phase 2 when the $`A`$ term becomes dominant, where the radiation density increases due to the friction of the $`\varphi `$ field. Namely, both heating and inflation are simultaneously underway in phase 2. Therefore, this phase is actually the era of inflation plus reheating. The transition from phase 1 to phase 2 occurs at time $`t_b`$ determined by $`(d\rho /dt)_{t_b}=0`$. We have $$Ht_b\frac{1}{(m+2)\alpha +4}\mathrm{ln}\left\{\frac{4[(m+2)\alpha +4]}{(m+2)\alpha ^3H}\frac{aM^4}{\mathrm{\Gamma }_m\varphi _i^{m+2}}\right\},$$ (14) where $`a(\pi ^2/30)g_{\mathrm{eff}}`$. Then the radiation density at the rebound time becomes $$\rho _r(t_b)=\frac{1}{4}\left[(m+2)\alpha +4\right]A\mathrm{exp}[(m+2)\alpha Ht_b].$$ (15) From Eqs. (2.12) and (2.13), the radiation density in phase 2 is given by $$\rho _r(t)=\frac{1}{4}\alpha ^2\mathrm{\Gamma }H\varphi ^2(t)\frac{1}{16\pi ^2}\lambda \left(\frac{m_{\mathrm{Pl}}}{M}\right)^4\mathrm{\Gamma }H\varphi ^2(t).$$ (16) Since $`H(M/m_{\mathrm{Pl}})M`$, Eq. (2.16) can be rewritten as $$\rho _r(t)\lambda ^{1/2}\left(\frac{m_{\mathrm{Pl}}}{M}\right)^2\frac{\mathrm{\Gamma }}{H}\left(\frac{\varphi (t)}{\sigma }\right)^2V(0).$$ (17) On the other hand, from (2.12), we have $$\frac{1}{2}\dot{\varphi }(t)^2\lambda ^{1/2}\left(\frac{m_{\mathrm{Pl}}}{M}\right)^2\left(\frac{\varphi (t)}{\sigma }\right)^2V(0).$$ (18) Therefore, in the case of weak dissipation $`\mathrm{\Gamma }<H`$, we have $$\rho _r(t)<\dot{\varphi }(t)^2/2.$$ (19) This is consistent with the condition of inflation Eq. (2.7) when Eq. (2.9) holds. Eqs. (2.7) and (2.9) indicate that the inflation will come to an end at time $`t_f`$ when the energy density of the radiation components, or the kinetic energy of $`\varphi `$ field, $`\dot{\varphi }^2/2`$, become large enough, and comparable to $`V(0)`$. From Eqs. (2.17) and (2.18), $`t_f`$ is given by $$\lambda ^{1/2}\left(\frac{m_{\mathrm{Pl}}}{M}\right)^2\left(\frac{\varphi (t_f)}{\sigma }\right)^21.$$ (20) In general, at the time when the phase 2 ends, or a radiation-dominated era starts, the potential energy may not be fully exhausted yet. In this case, a non-zero potential $`V`$ will remain in the radiation-dominated era, and the process of $`\varphi `$ decaying into light particles is still continuing. However, considering $`\lambda ^{1/2}(m_{\mathrm{Pl}}/M)^2(\mathrm{\Gamma }/H)<1`$, the right hand side of Eq. (2.17) will always be less than 1 when $`\varphi (t)`$ is less than $`\sigma `$. This means that, for weak dissipation, phase 2 cannot terminate at $`\varphi (t)<\sigma `$, or $`V(\varphi (t_f))0`$. Therefore, under weak dissipation, phase 2 will end at the time $`t_f`$ when the potential energy $`V(\varphi )`$ is completely exhausted, i.e., $$\varphi (t_f)\sigma .$$ (21) This means that no non-zero $`V`$ remains once the inflation exits to a radiation-dominated era, and the heating of $`\varphi `$ decay also ends at $`t_f`$. ### C Temperature of radiation From Eq. (2.13), one can find the temperature $`T`$ of the radiation in phases 1 ($`t<t_b`$) and 2 ($`t>t_b`$) as $$T(t)=\{\begin{array}{cc}T_be^{H(tt_b)},\hfill & \text{if }t<t_b,\hfill \\ T_be^{(m+2)\alpha H(tt_b)/4},\hfill & \text{if }t_f>t>t_b,\hfill \end{array}$$ (22) where $$T_b=(4a)^{1/4}[(m+2)\alpha +4]^{1/4}A^{1/4}\mathrm{exp}\left[\left(\frac{m+2}{4}\right)\alpha Ht_b\right].$$ (23) The temperature $`T_f`$ at the end of phase 2 is $$T_f=T(t_f)=T_be^{(m+2)\alpha H(t_ft_b)/4},$$ (24) where $`t_f`$ is given by Eq. (2.21). 
Since $`T(t)`$ is increasing with $`t`$ in phase 2, the condition (2.5) for warm inflation can be satisfied if $`T(t_f)>H`$, or $$\rho _r(t_f)>aH_i^4.$$ (25) Using Eqs. (2.17) and (2.21), condition (2.25) is realized if $$\frac{\mathrm{\Gamma }}{H}>\left(\frac{\sigma }{m_{\mathrm{Pl}}}\right)^2\left(\frac{M}{m_{\mathrm{Pl}}}\right)^4.$$ (26) Namely, $`\mathrm{\Gamma }`$ can be as small as $`10^{-12}H`$ for $`M\sim 10^{16}`$ GeV and $`\sigma \sim 10^{19}`$ GeV. Therefore, the radiation solution (2.13), or warm inflation, should be taken into account in a very wide range of dissipation $$10^{-12}H<\mathrm{\Gamma }<H.$$ (27) This result is about the same as that given by dynamical system analysis: a tiny friction $`\mathrm{\Gamma }`$ may lead the inflaton to a smooth exit directly at the end of the inflation era. A typical solution of the evolution of radiation temperature $`T(t)`$ is demonstrated in Fig. 1, for which the parameters are taken to be $`M=10^{15}`$ GeV, $`\sigma =2.24\times 10^{19}`$ GeV, $`\mathrm{\Gamma }_2=10^5H_i`$ and $`g_{\mathrm{eff}}`$ = 100. Actually, the $`g_{\mathrm{eff}}`$-factor is a function of $`T`$ in general. However, as can be seen below, the unknown function $`g_{\mathrm{eff}}(T)`$ has only a slight effect on the problems under investigation. Figure 1 shows that the rebound temperature $`T_b`$ can be less than $`H`$. In this case, the evolution of $`T(t)`$ in phase 2 can be divided into two sectors: $`T<H`$ for $`t<t_e`$, and $`T>H`$ for $`t>t_e`$, where $`t_e`$ is defined by $`T(t_e)=H`$. We should not consider the solution of radiation to be physical if $`T<H`$ since it is impossible to maintain a thermalized heat bath with the radiation temperature less than the Hawking temperature $`H`$ of an expanding universe. Nevertheless, the solution (2.13) should be applicable if $`t>t_e`$. Therefore, one can only consider the period of $`t_e<t<t_f`$ as the epoch of warm inflation. Figure 1 also plots the Hubble parameter $`H(t)`$. The evolution of $`H(t)`$ is about the same as in the standard new inflation model, i.e., $`H(t)\simeq H_i`$ in both phases 1 and 2. In Fig. 1, it is evident that the inflation smoothly exits to a radiation era at $`t_f`$. The Hubble parameter $`H(t)`$ also evolves from the inflationary regime, where $`H(t)`$ is constant, to a radiation regime where $`H(t)\propto t^{-1}`$. The duration of the warm inflation is then represented by ($`t_f-t_e`$). The number of $`e`$-foldings of growth of the comoving scale factor $`R`$ during the warm inflation is given by $$N\equiv \int _{t_e}^{t_f}H𝑑t\simeq \frac{4}{(m+2)\alpha }\mathrm{ln}\frac{T_f}{H}.$$ (28) One can also formally calculate the number of $`e`$-folds of the growth in phase 2 as $$N_2\equiv \int _{t_b}^{t_f}H𝑑t\simeq \frac{4}{(m+2)\alpha }\mathrm{ln}\frac{T_f}{T_b},$$ (29) and the number of $`e`$-folds of the total growth as $$N_t\equiv \int _0^{t_f}H𝑑t\simeq \frac{4}{(m+2)\alpha }\mathrm{ln}\left(\frac{T_f}{T_b}\right)+Ht_b.$$ (30) It can be found from Eqs. (2.28) - (2.30) that both $`N_2`$ and $`N_t`$ depend on the initial value of the field $`\varphi _i`$ via $`T_b`$, but $`N`$ does not. The behavior of $`T`$ in the period $`t>t_e`$ is completely determined by the competition between the dilution and the production of radiation at $`t>t_b`$. Initial information about the radiation has been washed out by the inflationary expansion. Hence, the initial $`\varphi _i`$ will not lead to uncertainty in our analysis if we are only concerned with the problems of warm evolution in the period $`t_e<t<t_f`$. 
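The lower bound (2.26) is easy to evaluate. Assuming $`m_{\mathrm{Pl}}=1.22\times 10^{19}`$ GeV and the value of $`\sigma `$ used for Fig. 1, the short sketch below reproduces the statement that $`\mathrm{\Gamma }`$ can be as small as $`10^{-12}H`$ when $`M\sim 10^{16}`$ GeV.

```python
# Rough evaluation of the lower bound (2.26) on Gamma/H.
m_Pl  = 1.22e19          # GeV (assumed value of the Planck mass)
sigma = 2.24e19          # GeV, the value quoted for Fig. 1
for M in (1e15, 1e16):   # inflation energy scale in GeV
    bound = (sigma / m_Pl)**2 * (M / m_Pl)**4
    print(f"M = {M:.0e} GeV:  Gamma/H > {bound:.1e}")
# M = 1e16 GeV gives ~1.5e-12, i.e. Gamma may be as small as ~1e-12 H
```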
## III The primordial density perturbations ### A Density fluctuations of the $`\varphi `$ field The fluctuations of $`\varphi `$ field can be calculated by the similar way as stochastic inflations. Recall that the coarse-grained scalar field $`\varphi `$ is actually determined from the decomposition between background and high frequency modes, i.e. $$\mathrm{\Phi }(𝐱,t)=\varphi (𝐱,t)+q(𝐱,t),$$ (31) where $`\mathrm{\Phi }(𝐱,t)`$ is the scalar field satisfying $$\ddot{\mathrm{\Phi }}+3H\dot{\mathrm{\Phi }}e^{2Ht}^2\mathrm{\Phi }+V^{}(\mathrm{\Phi })=0.$$ (32) $`q(𝐱,t)`$ in Eq.(3.1) contains all high frequency modes and gives rise to the thermal fluctuations. Since the mass of the field can be ignored for the high frequency modes, we have $$q(𝐱,t)=d^3kW(|𝐤|)\left[a_𝐤\sigma _𝐤(t)e^{i𝐤𝐱}+a_𝐤^{}\sigma _𝐤^{}(t)e^{i𝐤𝐱}\right]$$ (33) where $`k`$ is comoving wave vector, and modes $`\sigma _𝐤(t)`$ is given by $$\sigma _𝐤(t)=\frac{1}{(2\pi )^{3/2}}\frac{1}{\sqrt{2k}}\left[H\tau i\frac{H}{k}\right]e^{ik\tau },$$ (34) and $`\tau =H^1\mathrm{exp}(Ht)`$ is the conformal time. Eq.(3.3) is appropriate in the sense that the self-coupling of the $`\varphi `$ field is negligible. Considering the high frequency modes are mainly determined by the heat bath, this approximation is reasonable. The window function $`W(|𝐤|)`$ is properly chosen to filter out the modes at scales larger than the horizon size $`H^1`$, i.e., $`W(k)=\theta (kk_h(t))`$, where $`k_h(t)(1/\pi )H\mathrm{exp}(Ht)`$ <sup>*</sup><sup>*</sup>*The coefficient $`1/\pi `$ actually depends on the details of the cut-off function, which may not be step-function-like. For instance, considering causality, the cut-off function can be soft, and the longest wavelength of fluctuations can be a few times of the size of horizon is the lower limit to the wavenumber of thermal fluctuations. From Eqs.(3.1) and (3.3), with the slow-roll condition, Eq.(3.2) renders $$3H\dot{\varphi }e^{2Ht}^2\varphi +V^{}(\mathrm{\Phi })|_{\mathrm{\Phi }=\varphi }=3H\eta (𝐱,t),$$ (35) and $$\eta (𝐱,t)=\left(\frac{}{t}+\frac{1}{3H}e^{2Ht}^2\right)q(𝐱,t).$$ (36) Eq.(3.5) can be rewritten as $$\frac{d\varphi (𝐱,t)}{dt}=\frac{1}{3H}\frac{\delta F[\varphi (𝐱,t)]}{\delta \varphi }+\eta (𝐱,t)$$ (37) where $$F[\overline{\varphi }]=d^3𝐱\left[\frac{1}{2}(e^{Ht}\overline{\varphi })^2+V(\overline{\varphi })\right]$$ (38) Eq. (3.7) is, in fact, the rate equation of the order parameter $`\varphi `$ of a system with free energy $`F[\varphi ]`$. It describes the approach to equilibrium for the system during phase transition. Using the expression of free energy (3.8), the slow-roll solution (2.8) can be rewritten as $$\frac{d\varphi }{dt}=\frac{1}{3H+\mathrm{\Gamma }}\frac{dF[\varphi ]}{d\varphi }.$$ (39) Hence, in the case of weak dissipation ($`\mathrm{\Gamma }<H`$), Eq. (3.7) is essentially the same as the slow-roll solution (2.8) or Eq. (3.9) but with fluctuations $`\eta `$. The existence of the noise field ensures that the dynamical system properly approaches the global minimum of the inflaton potential $`V(\varphi )`$. Strictly speaking, both the dissipation $`\mathrm{\Gamma }`$ and fluctuations $`\eta `$ are consequences derived from $`q(𝐱,t)`$. They should be considered together. However, it seems to be reasonable to calculate the fluctuations alone if the dissipation is weak. Unlike (3.2), the Langevin equation (3.7) is of first order ($`\dot{\varphi }`$) due to the slow-roll condition. 
Generally, thermal fluctuations will cause both growing and decaying modes We thank the referee for pointing this problem out.. Therefore, the slow-roll condition simplifies the problem from two types of fluctuation modes to one, i.e., we can directly calculate the total fluctuation as the superposition of various fluctuations. It has been shown that during the eras of dissipations, the growth of the structures in the universe is substantially the same as surface roughening due to stochastic noise. The evolution of the noise-induced surface roughening is described by the so-called KPZ-equation . Eqs.(3.5) or (3.7), which includes terms of non-linear drift plus stochastic fluctuations, is a typical KPZ-like equation. From Eq.(3.6), the two-point correlation function of $`\eta (𝐱,t)`$ can be found as $$\eta (𝐱,t)\eta (𝐱^{},t^{})=\frac{H^3}{4\pi ^2}\left[1+\frac{2}{\mathrm{exp}(H/\pi T)1}\right]\frac{\mathrm{sin}(k_h|𝐱𝐱^{}|)}{k_h|𝐱𝐱^{}|}\delta (tt^{}),$$ (40) where $`1/[\mathrm{exp}(H/\pi T)1]`$ is the Bose factor at temperature $`T`$. Therefore, when $`T>H`$, we have $$\eta (𝐱,t)\eta (𝐱,t^{})=\frac{H^2T}{2\pi }\delta (tt^{}).$$ (41) This result can also be directly obtained via the fluctuation-dissipation theorem. In order to accord with the dissipation terms of Eq. (3.7), the fluctuation-dissipation theorem requires the ensemble average of $`\eta `$ to be given by $$\eta =0$$ (42) and $$\eta (𝐱,t)\eta (𝐱,t^{})=D\delta (tt^{}).$$ (43) The variance $`D`$ is determined by $$D=2\frac{1}{U}\frac{T}{3H+\mathrm{\Gamma }},$$ (44) where $`U=(4\pi /3)H^3`$ is the volume with Hubble radius $`H^1`$. In the case of weak dissipation, we then recover the same result as in Eq.(3.11), $$D=H^2T/2\pi .$$ (45) When $`T=H`$, we obtain $$D=\frac{H^3}{2\pi },$$ (46) which agrees exactly the result derived from quantum fluctuations of $`\varphi `$-field. Therefore, the quantum fluctuations of inflationary $`\varphi `$ field are equivalent to the thermal noises stimulated by a thermal bath with the Hawking temperature $`H`$. Eqs. (3.15) and (3.16) show that the condition (2.5) is necessary and sufficient for a warm inflation. For long-wavelength modes, the $`V^{}(\varphi )`$ term is not negligible. It may lead to a suppression of correlations on scales larger than $`|V^{\prime \prime }(\varphi )|^{1/2}`$. However, before the inflaton actually rolls down to the global minimum, we have $`|V^{\prime \prime }(\varphi \sigma )|^{1/2}H^1`$. The so-called abnormal dissipation of density perturbations may produce more longer correlation time than $`H`$. Therefore in phase 2, i.e., the warm inflation phase $`H<T<M`$, the long-wavelength suppression will not substantially change the scenario presented above. The fluctuations $`\delta \varphi `$ of the $`\varphi `$ field can be found from linearizing Eq. (3.7). If we only consider the fluctuations $`\delta \varphi `$ crossing outside the horizon, i.e., with wavelength $`H^1`$, the equation of $`\delta \varphi `$ is $$\frac{d\delta \varphi }{dt}=\frac{H^2+V^{^{\prime \prime }}(\varphi )}{3H+\mathrm{\Gamma }}\delta \varphi +\eta .$$ (47) For the slow-roll evolution, we have $`|V^{^{\prime \prime }}(\varphi )|9H^2`$ . One can ignore the $`V^{^{\prime \prime }}(\varphi )`$ term on the right hand side of Eq. (3.17). 
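With the $`V^{^{\prime \prime }}(\varphi )`$ term dropped, Eq. (3.17) is a linear Langevin (Ornstein-Uhlenbeck) equation with relaxation rate $`H^2/(3H+\mathrm{\Gamma })`$ and noise strength $`D=H^2T/2\pi `$ from Eq. (3.15); its stationary dispersion, $`D(3H+\mathrm{\Gamma })/2H^2`$, can be verified with a short stochastic simulation. The sketch below uses toy units ($`H`$ = 1) and illustrative values of $`T`$ and $`\mathrm{\Gamma }`$.

```python
import numpy as np

# Euler-Maruyama integration of Eq. (3.17) with the V'' term dropped, in toy units H = 1:
#   d(dphi)/dt = -[H^2/(3H+Gamma)] dphi + eta,   <eta(t) eta(t')> = D delta(t-t')
H, T, Gamma = 1.0, 10.0, 0.1            # illustrative numbers only
D     = H**2 * T / (2.0 * np.pi)        # noise strength, Eq. (3.15)
theta = H**2 / (3.0 * H + Gamma)        # relaxation rate

rng = np.random.default_rng(0)
dt, nstep = 2e-3, 2_000_000
x = 0.0
samples = np.empty(nstep)
for i in range(nstep):
    x += -theta * x * dt + np.sqrt(D * dt) * rng.standard_normal()
    samples[i] = x

var_sim  = samples[nstep // 10:].var()            # discard the initial transient
var_pred = D * (3.0 * H + Gamma) / (2.0 * H**2)   # stationary dispersion of Eq. (3.17)
print(var_sim, var_pred)                          # agree at the few-percent level
```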
Accordingly, the correlation function of the fluctuations is $$\delta \varphi (t)\delta \varphi (t^{})D\frac{3H+\mathrm{\Gamma }}{2H^2}e^{(tt^{})H^2/(3H+\mathrm{\Gamma })},t>t^{},$$ (48) hence $$(\delta \varphi )^2\frac{3}{4\pi }HT$$ (49) Thus, in the period $`t_e<t<t_f`$ the density perturbations on large scales are produced by the thermal fluctuations that leave the horizon with a Gaussian-distributed amplitude having a root-mean-square dispersion given by Eq. (3.19). Principally, the problem of horizon crossing of thermal fluctuations given by Eq. (3.7) is different from the case of quantum fluctuations, because the equations of $`H`$ and $`\dot{H}`$, (2.1) and (2.2) contain terms in $`\rho _r`$. However, these terms are insignificant for weak dissipation \[Eq. (2.19)\] in phase 2. Thus Eqs.(2.1) and (2.2) depend only nominally on the evolution of $`\rho _r`$. Accordingly, for weak dissipation, the behavior of thermal fluctuations at horizon crossing can be treated by the same way as the evolutions of quantum fluctuations in stochastic inflation. In that theory, quantum fluctuations of inflaton are assumed to become classical upon horizon crossing and act as stochastic forces. Obviously, this assumption is not necessary for thermal fluctuations. Moreover, we will show that in phase 2 the thermal stochastic force $`HT`$ is contingent upon the comoving scale of perturbations by a power law \[Eqs. (2.21) and (3.21)\], and therefore the power spectrum of the thermal fluctuations obeys the power law. This make it more easier to estimate the constraint quantity in the super-horizon regime. Accordingly, the density perturbations at the horizon re-entry epoch are characterized by $$\left(\frac{\delta \rho }{\rho }\right)_h=\frac{\delta \varphi V^{}(\varphi )}{\dot{\varphi }^2+(4/3)\rho _r}.$$ (50) All quantities in the right-hand side of Eq. (3.20) are calculated at the time when the relevant perturbations cut across the horizon at the inflationary epoch. Using the solutions of $`\varphi `$ and $`\rho _r`$ of warm inflation (2.12) and (2.13), Eq. (3.20) gives $$\left(\frac{\delta \rho }{\rho }\right)_h\left(\frac{53^{3m/2+4}}{2^{m+3}\pi ^{m/2+3}}\right)^{\frac{1}{m+2}}\left(\frac{\gamma _m}{g_{\mathrm{eff}}\alpha ^m}\right)^{\frac{1}{m+2}}\left(\frac{T}{H}\right)^{\frac{1}{2}\left(\frac{m6}{m+2}\right)},$$ (51) where the dimensionless parameter $`\gamma _m\mathrm{\Gamma }_mH^{m1}`$, and $`T`$ is the temperature at the time when the considered perturbations $`\delta \rho _r`$ crossing out of the horizon $`H^1H_i^1`$. Eq. (3.21) shows that the density perturbations are insensitive to the $`g_{\mathrm{eff}}`$-factor. ### B Power law index Since inflation is immediately followed by the radiation dominated epoch, the comoving scale of a perturbation with crossing over (the Hubble radius) at time $`t`$ is given by $$\frac{k}{H_0}=2\pi \frac{H}{H_0}\frac{T_0}{T_f}e^{H(tt_f)},$$ (52) where $`T_0`$ and $`H_0`$ are the present CMB temperature and Hubble constant respectively. Eq. (3.22) shows that the smaller $`t`$ is, the smaller $`k`$ will be. This is the so-called “first out - last in” of the evolution of density perturbations produced by the inflation. Using Eqs. (2.22) and (3.22), the perturbations (3.21) can be rewritten as $$\left(\frac{\delta \rho }{\rho }\right)^2_hk^{(m6)\alpha /4},\mathrm{if}k>k_e,$$ (53) where $`k_e`$ is the wavenumber of perturbations crossing out of horizon at $`t_e`$. 
It is $$k_e=2\pi H\frac{T_0}{T_f}e^{H(t_et_f)}2\pi H\frac{T_0}{T_f}e^N.$$ (54) Therefore, the primordial density perturbations produced during warm inflation are of power law with an index $`(m6)\alpha /4`$. We may also express the power spectrum of the density perturbations at a given time $`t`$. It is $$\left(\frac{\delta \rho }{\rho }\right)^2_tk^{3+n},\mathrm{if}k>k_e,$$ (55) where the spectral index $`n`$ is $$n=1+\left(\frac{m6}{4}\right)\alpha .$$ (56) Clearly, for $`m=6`$, the warm inflation model generates a flat power spectrum $`n=1`$, yet the power spectrums will be tilted for $`m6`$. The dissipation models $`\mathrm{\Gamma }=\mathrm{\Gamma }_m\varphi ^m`$ may not be realistic for higher $`m`$, but we will treat $`m`$ like a free parameter in order to show that the results we concerned actually are not very sensitive to these parameters. The warm inflation scenario requires that all perturbations on comoving scales equal to or less than the present Hubble radius originate in the period of warm inflation. Hence, the longest wavelength of the perturbations (3.24), i.e., $`2\pi /k_e`$, should be larger than the present Hubble radius $`H_0^1`$. We have then $$N>\mathrm{ln}\left(\frac{HT_0}{H_0T_f}\right)=\mathrm{ln}\left(\frac{T_0}{H_0}\right)\mathrm{ln}\left(\frac{T_f}{H}\right)55,$$ (57) where we have used $`(T_0/H_0)(T_f/H)`$, as $`T_fM`$. Using Eq. (2.28), the condition (3.27) gives an upper bound to $`\alpha `$ for a given $`m`$ as $$\alpha _{\mathrm{max}}=\left(\frac{4}{m+2}\right)\frac{\mathrm{ln}(T_f/H)}{\mathrm{ln}(T_0/H_0)}.$$ (58) Thus, the possible area of the index $`n`$ can be found from Eq. (3.27) as $$n=\{\begin{array}{cc}1(6m)\alpha _{\mathrm{max}}/4\mathrm{to}1,\hfill & \text{if }m<6,\hfill \\ 1\mathrm{to}1+(m6)\alpha _{\mathrm{max}}/4,\hfill & \text{if }m>6.\hfill \end{array}$$ (59) Therefore, the power spectrum is positive-titled (i.e., $`n>1`$) if $`m>6`$, and negative-titled ($`n<1`$) if $`m<6`$. Figure 2 plots the allowed area of $`n`$ as a function of the inflation mass scale $`M`$. Apparently, for $`M10^{16}`$ GeV, the tilt $`|n1|`$ should not be larger than about 0.15 regardless of the values of $`m`$ from 2 to 12. ### C Amplitudes of perturbations To calculate the amplitude of the perturbations we rewrite spectrum (3.25) into $$\left(\frac{\delta \rho }{\rho }\right)^2_h=A\left(\frac{k}{k_0}\right)^{n1},\mathrm{if}k>k_e,$$ (60) where $`k_0=2\pi H_0`$. $`A`$ is the spectrum amplitude normalized on scale $`k=k_0`$, corresponding to the scale on which the perturbations re-enter the Hubble radius $`1/H_0`$ at present time. From Eqs. (3.21), and (3.23), we have $$A=\left(\frac{53^{3m/2+4}}{2^{m+3}\pi ^{m/2+3}}\right)^{\frac{2}{m+2}}\left(\frac{\gamma _m}{g_{\mathrm{eff}}\alpha ^m}\right)^{\frac{2}{m+2}}\left(\frac{H_0T_f}{HT_0}\right)^{n1}\left(\frac{T}{H}\right)^{\frac{m6}{m+2}}e^{(n1)H(t_ft)}.$$ (61) Applying Eq. (2.21), the radiation temperature at the moment of horizon-crossing, $`t`$, can be expressed as $`T(t)=T_f\mathrm{exp}[(m+2)\alpha H(tt_f)/4]`$. With the help of Eq. (2.28), we obtain $$\left(\frac{T}{H}\right)^{\frac{m6}{m+2}}\left(\frac{T_f}{H}\right)^{n1}e^{(n1)H(t_ft)}=\mathrm{exp}\left\{(n1)\left[1+\left(\frac{m+2}{4}\alpha \right)\right]N\right\}.$$ (62) On the other hand, using Eqs. (2.20), (2.23) and (2.28), one has $$\gamma _m=\left(\frac{3}{4}\right)^{1\frac{m}{2}}\frac{g_{\mathrm{eff}}}{30}\left(\frac{M}{m_{\mathrm{Pl}}}\right)^{2m}\alpha ^{3\frac{m}{2}}.$$ (63) Substituting Eqs. (3.32) and (3.33) into Eq. 
(3.31), we have finally $$A=\left(\frac{3^{4m}}{64\pi ^{3+\frac{m}{2}}}\right)^{\frac{2}{m+2}}\left(\frac{M}{m_{\mathrm{Pl}}}\right)^{\frac{4m}{m+2}}\left(\frac{H_0}{T_0}\right)^{n1}\alpha ^3\mathrm{exp}\left\{(n1)\left[1+\left(\frac{m+2}{4}\right)\alpha \right]N\right\}.$$ (64) Eq. (3.34) shows that the amplitude $`A`$ does not contain the unknown $`g_{\mathrm{eff}}`$-factor. Moreover, $`\alpha `$ can be expressed by $`n`$ and $`m`$ through Eq. (3.26), and $`N`$ can be expressed by $`\alpha `$ and $`M`$ via Eq. (2.28). Therefore, the amplitude of the initial density perturbations, $`A`$, is only a function of $`M`$, $`n`$, and $`m`$. Figures 3 and 4 plot the relations between the amplitude $`A`$ and index $`n`$ for various parameters $`M`$ and $`m`$. In the case of $`m=6`$, $`n=1`$, the relation of $`A`$ and $`\alpha `$ is plotted in Fig. 5. It can be seen from Figs. 3, 4 and 5 that for either $`m6`$ or $`m<6`$, the amplitude $`A`$ is significantly dependent on $`M`$, but not so sensitive to $`m`$. Namely, the testable $`A`$-$`n`$ relationship is mainly determined by a thermodynamical variable, the energy scale $`M`$. This is a “thermodynamical” feature. The relationship between $`A`$ and $`N`$ plotted in Figs. 6 and 7 also show this kind of “thermodynamical” feature: the $`A`$-$`N`$ relation depends mainly on $`M`$. For comparison, the observed results of $`A`$ and $`n`$ derived from the 4-year COBE-DMR data (quadrupole moment $`Q_{rmsPS}15.3_{2.8}^{+3.7}\mu K`$ and $`n1.2\pm 0.3`$) are plotted in Figs. 3, 4 and 5. The observationally allowed $`A`$-$`n`$ range is generally in a good agreement with the predicted $`A`$-$`n`$ curve if $`M10^{15}10^{16}`$ GeV, regardless the parameter $`m`$. Figures 3 and 4 also indicate that if the tilt of spectrum $`|n1|`$ is larger than 0.1, the parameter area of $`M10^{14}`$ GeV will be ruled out. Therefore, the warm inflation seems to fairly well reconcile the initial perturbations with the energy scale of the inflation. ## IV Conclusions and Discussion Assuming that the inflaton $`\varphi `$-field undergoes a dissipative process with $`\mathrm{\Gamma }\dot{\varphi }^2`$, we have studied the power spectrum of the mass density perturbations. In this analysis, we have employed the popular $`\varphi ^4`$ potential. However, only one parameter, the mass scale of the inflation $`M`$, is found to be important in predicting the observable features of power spectrum, i.e., the amplitude $`A`$ and index $`n`$. Actually, the warm inflation scenario is based on two thermodynamical requirements: (a) the existence of a thermalized heat bath during inflation, and (b) that the initial fluctuations are given by the fluctuation-dissipation theorem. Therefore, we believe that the “thermodynamical” features – $`A`$ and $`n`$ depend only on $`M`$ – would be generic for the warm inflation. This feature is useful for model testing. Hence, the warm inflation can be employed as an effective working model when more precise data about the observable quantities $`A`$, $`n`$ etc. become available. The current observed data of $`A`$ and $`n`$ from CMB are consistent with the warm inflation scenario if the mass scale $`M`$ of the inflation is in the range of $`10^{15}10^{16}`$ GeV. ###### Acknowledgements. We would like to thank an anonymous referee for a detailed report that improved the presentation of the paper. Wolung Lee would like to thank Hung Jung Lu for helpful discussions.
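As a rough numerical illustration of the tilt constraint discussed in Sec. III B, Eqs. (3.26) and (3.28) can be combined in a few lines. The fiducial inputs below ($`T_0`$ = 2.73 K, $`H_0`$ = 100 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`T_f\sim M`$ and $`H\sim M^2/m_{\mathrm{Pl}}`$) are assumptions adopted for this estimate rather than numbers read off the figures.

```python
import numpy as np

# Allowed tilt |n-1| from alpha_max of Eq. (3.28) and n = 1 + (m-6) alpha / 4, Eq. (3.26).
m_Pl = 1.22e19                        # GeV (assumed)
T0   = 2.35e-13                       # GeV  (2.73 K)
H0   = 2.13e-42                       # GeV  (100 km/s/Mpc, h = 1)

for M in (1e15, 1e16):                # inflation energy scale in GeV
    ln_TfH  = np.log(m_Pl / M)        # ln(T_f/H) with T_f ~ M and H ~ M^2/m_Pl
    ln_T0H0 = np.log(T0 / H0)
    for m in (2, 4, 8, 12):
        alpha_max = 4.0 / (m + 2) * ln_TfH / ln_T0H0
        tilt_max  = abs(m - 6) / 4.0 * alpha_max
        print(f"M = {M:.0e} GeV, m = {m:2d}:  |n-1| <~ {tilt_max:.3f}")
# for M ~ 1e16 GeV every m between 2 and 12 gives |n-1| <~ 0.11, consistent with Fig. 2
```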
no-problem/9901/physics9901037.html
ar5iv
text
# Certainty and uncertainty in the practice of science. ## Certainty and uncertainty in the practice of science. While science is several thousand years old, it is in the last hundred years that the practice of science has become tremendously important in our lives: in the economy, in the technology of war, in the state of the natural environment, in the condition of our health and in all the material aspects of our lives. Many of our thoughts about the next millennium, our hopes and our fears, have to do with what the findings of science will do for us and what the findings of science will do to us. We try to predict what these findings of science might be; we want to reassure ourselves that we can control science and that we can direct the practice of science to desirable goals. There are many goals: some hope for major improvements in material comforts, others hope for the salvation of the natural environment, still others hope for lives without illness and with increased longevity. These hopes are based on assumptions that the directions of science can be controlled or planned, that there is coherence in the practice of science, that scientists know where their research is going, that any puzzle or problem in the natural world can be solved by enough scientific effort. I have been a working scientist, an experimenter in physics, for almost fifty years , and I am uncomfortable with these assumptions because the practice of science is an uncertain human activity. Is this a fruitful research direction? Can this problem be solved? Are we smart enough or lucky enough to solve the problem? Do we have the required research technology and if not, can we develop it? What are our motivations for doing this research? Will the results of this research have applications? Will these applications be beneficial or harmful? It is best to replace these abstractions by giving the history of one field of science. I choose the field I know best, the science of elementary particles and in particular, the science of the lepton family of particles. As I will explain, leptons are, or at least seem to be, very simple elementary particles; thus research on leptons is easy to describe and to use as an example. The history of lepton physics is also an apt example because this physics is about 100 years old. In the middle 1890’s Thomson elucidated the nature of the electron , the first identified elementary particle and the first lepton. Since then two heavier electron-like particles were discovered, the muon about mid-century, and the tau, discovered by my colleagues and myself about twenty years ago . Thus the twentieth century is spanned by the scientific work on the electron, muon, and tau, plus the work on closely associated elementary particles called neutrinos. As a former United States president was fond of saying, I want to make one thing perfectly clear. The uncertainties in the practice of science do not necessarily lead to uncertainties in the findings of science. If experimental results or observations on a phenomenon are verified by other experimenters, if there is logical understanding of the results or observations, then in my philosophy we have learned something real about the natural world. I am an engineer turned physicist and I have no interest in those philosophies of science that are concerned with whether we do or can know reality. Similarly I do not believe that the uncertainties in the practice of science will lead to the “end of science” . I am not of that school. 
## A note on elementary particles for non-physicists. The following are some paragraphs about elementary particles . Figure 1 shows the hierarchy of matter with the largest kinds of matter, the molecules, at the top. At the bottom of Figure 1 are the elementary particles, the smallest pieces of matter that we have been able to find, smaller than an atom, smaller than a nucleus, less than $`10^{17}`$ centimeters in extent; perhaps having no detectable size. The number $`10^{17}`$ means 1/100,000,000,000,000,000 with 17 zeros in the denominator. This notation for large numbers is a great convenience and I explain it in the AppendixAppendix on very large and very small numbers.. Returning to Figure 1, the materials of everyday life such as water and wood and plastics and plant tissue are composed of molecules; and as you know from chemistry and biology, molecules are composed of atoms. Other materials such as iron and silicon are directly composed of atoms. But atoms are not simple entities, they themselves are complex, consisting of electrons moving around a nucleus. Continuing to move downward in Figure 1, the electron, as far as we know, is not composed of anything else; we cannot break up the electron or find anything inside of it. The electron is the most prevalent example of an elementary particle. On the other hand a nucleus is not simple and is not elementary; a nucleus is made up of protons and neutrons. At one time neutrons and protons were thought to be elementary particles, but we now know that they are made up of quarks. As far as we know, quarks like electrons are not composed of anything else; we cannot break up quarks or find anything inside of them. Thus we have arrived at the bottom of Figure 1 and to the simplest particles that compose everyday matter. Of course these elementary particles, quarks and electrons, may not be so simple; with new ideas and new experimental technology, we may find a deeper structure in these particles. In the practice of science present understanding may be replaced by a deeper future understanding; but until that replacement occurs we require that present understanding fit existing data. A popular and well-advertised speculative theory holds that elementary particles are manifestations of different vibrations of extremely small strings . But there is no experimental proof of the validity of the string theory hypothesis. A bit of terminology. Every particle inside the atom or smaller than the atom is called subatomic. Nuclei, the neutrons and protons that make up the nucleus, the quarks that make up the neutrons and protons, and the electron are all subatomic particles. The name elementary is reserved for those subatomic particles that we think are the simplest, those that we think are not made of anything else. Figure 2 is my attempt to sort out these distinctions for the reader. Electrons and quarks are the elementary particles that exist in everyday matter, but they are not the only elementary particles. Other elementary particles exist in nature, for example muons and neutrinos exist in the atmosphere and in outer space. Other elementary particles such as the tau and other quarks can be artificially created. But this is getting ahead of this history. Elementary particles are not just isolated pieces of matter that have nothing to do with each other. They pull and push on each other and interact with each other, sometimes changing into other kinds of particles. 
These interactions occur through four different forces: electromagnetic, weak, gravitational, and strong. Only two of these forces are of immediate concern. The electromagnetic force is just the electric and magnetic force that is manifest around us; it is the force involved in electric motors, in electronics, in the behavior of static electricity, in the behavior of lightning. If an elementary particle has electric charge it is acted upon by the electromagnetic force. The strong force is the force that holds the quarks inside the protons and neutrons, and it also holds the nucleus itself together. The strong force is the basis for the production of energy in our sun, in the stars, and in nuclear reactors. Unfortunately it is also the basis for the devastating release of energy and radioactivity by atom and hydrogen bombs. The elementary particles are classified into three families. Two of these families, the leptons and the quarks, are delineated in Table 1. Leptons do not interact through the strong force, and this decisively separates them from the quarks. The strong force between quarks compels them to be buried in complicated particles such as protons and neutrons and pions, Figure 2. We have never succeeded in making or finding a single quark isolated by itself. It is difficult to study the properties of quarks and even more difficult to explain their properties and behavior in simple terms. Conversely leptons, free of the strong force, can be isolated and studied individually. It is also easy to explain their properties in simple terms. This is why I have devoted much of my research to leptons and why the history of their discovery has pleasing simplicity. There is a third class of elementary particles that will not concern us: the particles that carry the basic forces. (The idea of a force being carried by a particle is a quantum mechanical concept.) For the sake of completeness these particles are the gluon that carries the strong force; the photon that carries the electromagnetic force; and the $`W`$ and $`Z`$ particles that carry the weak force, a force I have not discussed. If quantum mechanics can be applied to the gravitational force in the same way that it is applied to the other forces, then there is another particle called the graviton that carries the gravitational force. I will keep my particle physics discussions simple, and to do this I will ignore distinctions that are irrelevant to the matter at hand. For example there is no need in this paper to distinguish between particles and antiparticles , and so neutrinos and antineutrinos are simply called neutrinos, quarks and antiquarks are simply called quarks. ## Classic science: cathode rays and the discovery of the electron. The discovery of the electron is a classic example of scientific discovery . Classic in how the effort to understand the phenomenon called cathode rays led to the electron’s discovery; classic in how so much was explained once the electron’s properties were measured; and classic in how the applications of basic research on the electron has led to radio, television, transistors, computers, and who knows what next. It was already known in the eighteenth century that an electrical voltage applied between metal plates in a partially evacuated glass tube could produce light. Inside the tube the gas glowed; the size, shape, and color of the glowing region depended on the voltage, gas pressure, and shape of the tube. 
This phenomenon was called a cathode ray because the light seemed to be caused by rays coming from one of the metal plates inside the tube, specifically from the plate having negative charge, the cathode, Figure 3. We see the same phenomenon today in neon lights. Television picture tubes and computer monitors are also cathode ray tubes, although in these devices the gas pressure is very small. Many physicists of the late nineteenth century studied the cathode ray phenomenon, including famous names such as Crookes, Hertz, and Thomson. Gradually more and more was learned experimentally about cathode rays. For example it was learned that the rays are bent by a magnetic field and that the rays either carry, or cause the transfer of, negative electric charge. Still until the middle 1890’s there was dispute about the nature of cathode rays. Some physicists took the rays to be made up of negatively charged matter, the particles we now call electrons. Others believed the rays to be a kind of electromagnetic wave. There were several objections to the particle explanation. The most substantial objection was that the rays should bend in an electric field if they are charged particles, but this bending had not been observed. The dilemma was resolved in 1895 by Thomson using an improved vacuum pump. Thomson demonstrated that in a cathode ray tube with a sufficiently good vacuum, the cathode rays were bent in an electric field . A good vacuum is one in which just about all the gas in the tube has been removed. Describing his experiment with the tube shown in Figure 3b he wrote, “At high exhaustion the rays were deflected when the two aluminum plates were connected with the terminals of a battery of small storage cells… The deflection was proportional to the difference of potential between the plates…. It was only when the vacuum was a good one that the deflection took place.” Earlier attempts to deflect cathode rays in an electric field had failed because there was still gas in the tube and there was electrical conduction in the partial vacuum. Gas ions collected on the electrical plates, canceling the charges on the plates and therefore canceling the electric field. Thus the discovery of the electron depended on the gradual improvement of late nineteenth century instrument technology, particularly vacuum pump technology. Advances in scientific knowledge often depend upon improving the technology used in the practice of science. So we see a triumphant discovery after decades of research on cathode rays. But we also see that this was not a straight march to success. About half of the experimenters held the wrong idea about the nature of cathode rays for several decades. This is an important lesson about the practice of science: wrong ideas may persist for a long time. Today, one hundred years later, we have much better experimental equipment, but we are no smarter. Today there are similar controversies about observed phenomena ranging from cosmology to biology. Some of these controversies may be settled soon by discoveries as clear as the discovery of the electron, some may not be settled for a long time. A major uncertainty in the practice of science is when a particular controversy will be settled. Thomson received the Nobel Prize for settling the cathode ray controversy. ## What we know about the electron. The process of discovering the electron was interwoven with the process of determining the basic properties of the electron. 
By 1911, Millikan had measured the size of the electric charge of the electron and had shown, within his experimental errors, that all electrons have the same electric charge. And by the middle 1920’s it was known that the electron acts as though it is a perpetually rotating top and as though it is a very small bar magnet. I have written “acts as though” because if the electron has no size, one cannot picture what is rotating or how it can be a magnet. The values of the mass and the charge of the electron illustrate how small elementary particles are compared to the objects used in daily life. The mass of the electron is about $`10^{-27}`$ grams. By the way, mass is called weight in everyday language. A standard size aspirin has a mass of about 1/3 of a gram. Thus it would take $`10^{27}`$ electrons to have about the same weight as three aspirins. The charge of the electron is $`1.6\times 10^{-19}`$ coulombs. In everyday life we don’t use the coulomb unit of charge. We use a unit, however, for the electric current through a wire, the ampere, and electric current is simply the flow of electrons through a wire. A 100 watt light bulb uses about one ampere of current. To the nearest factor of ten, one ampere means $`10^{19}`$ electrons are flowing through the wire per second. Thus like the electron mass, the electron charge is very small compared to the electrical quantities that occur in everyday life. ## Limited knowledge: what we don’t know about the electron. A physicist living in the early twentieth century and doing research on the electron would probably have believed that we would continue to learn more and more about the electron as the century progressed. We have indeed learned more and more about how the electron behaves in metals, semiconductors, and molecules. We have indeed measured the known properties of the electron with more and more precision: its mass, charge, and magnetic properties. But we have made no progress in understanding what sets the mass of the electron. We have made no progress in understanding why all the known elementary particles with electric charge have charges that are either equal to plus or minus the charge on the electron or are equal to 1/3 or 2/3 of that charge. All we know is that no elementary particles with other electric charges have been found. Thus a research direction that must have seemed obvious and fruitful in the 1920’s, research to further uncover the inner nature of the electron, has not progressed. We keep trying to break up the electron to find its inner nature and we keep trying to find an unexpected property of the electron. No one knew what further to do in the 1920’s and no one knows what else to do now. Uncertainty about the future of a direction in research is a major uncertainty in the practice of science. Will the direction pay off or will it be fruitless? ## A note about protons, neutrons, and decaying particles. The second subatomic particle to be found was the proton. Its discovery and the first measurements of its properties occupied the years from about 1900 to 1920. We now know that it is not an elementary particle; as shown in Figure 2 it is made up of three quarks. Thus the proton differs from the electron in that the proton has an internal structure, while the electron, to the best of our knowledge, has no internal structure. There are two other major differences between the proton and the electron. First the proton is almost 2000 times heavier. 
Second the proton, having a diameter of about $`10^{-13}`$ cm, is much larger than the electron. On the other hand there is some similarity: the proton has the same size electric charge as the electron, but the proton is positively charged while the electron is negatively charged. Thus by the end of the first quarter of the twentieth century, two apparently fundamental particles of matter were known, the proton and the electron. And from quantum mechanics it was also known that light could be considered to be made of particles, called photons. Thus nature seemed to be presenting us with a beautifully simple system of three particles composing everything. Unfortunately the world, even on this simplest level, is a lot more complicated. In the practice of science we sometimes mistake simplicity for truth; nature may be simple or may be complex. In the early 1930’s another subatomic particle, the neutron, was discovered. The neutron, like the proton, is made out of quarks (Fig. 2), but it has zero electric charge. The neutron is slightly heavier than the proton by about 1/10 of a percent, a small difference, but enough to cause a decay process that is common among subatomic particles. A neutron left to itself does not last forever. In an average time of about 15 minutes, a neutron spontaneously breaks up into a proton plus an electron plus another elementary particle, the extra mass of the neutron being used to produce the other particles, Figure 4. A shorthand to describe the decay process is $$\mathrm{neutron}\to \mathrm{proton}+\mathrm{electron}+\mathrm{another\ particle}.$$ This means that the particle on the left side of the arrow disappears, changing to the particles on the right side of the arrow. Incidentally as far as we know protons and electrons never decay; left alone, they last forever. ## The uncertain road to scientific certainty: cosmic rays and the discovery of the muon. Now it is time for me to return to my main story and describe the discovery of the next elementary particle, the muon. The discovery story begins in the early 1900’s with investigations of a natural phenomenon, cosmic rays, which are not related to cathode rays. The only connection is linguistic: a ray means something or a group of things moving through space or material in a more or less straight line. As with the electron, the muon discovery process was interwoven with the process of determining properties; and as with the electron many physicists were involved in these processes. As we now know, but as was not known in the 1920’s, cosmic rays are subatomic particles that enter the Earth’s atmosphere traveling with high energy, Figure 5. Some are protons and some are atomic nuclei. Cosmic rays come from outside the solar system and some may come from outside our galaxy. As cosmic rays pass through our atmosphere, they collide with the oxygen and nitrogen molecules in the air, breaking up the molecules and interacting with the oxygen and nitrogen nuclei to form other particles, mostly pions, Figures 2 and 6. Returning to the 1910’s and 1920’s, before all this was known, the first observed effect of cosmic rays was the discovery that the atmosphere could slightly conduct electricity. Observations also showed that the conductivity extended through the entire depth of the atmosphere, not just at the top of the atmosphere. It was known from research on electrical conductivity in gases, research by the way closely tied to cathode ray research, that this conductivity could occur if molecules were broken up. 
But what was breaking up the air molecules and breaking them up at all levels of the atmosphere? It is natural in scientific research to try to explain a new observation using established knowledge. Well, what sort of particles or rays were known? There was the proton, but other experiments had shown that the protons interact readily with air through what we now call the strong force; hence they would not be able to penetrate below the top levels of the atmosphere. What about high-energy light rays, the x-rays already discovered at the end of the nineteenth century? Millikan, who had won the Nobel prize for his measurements of the electron charge, liked this hypothesis. He pushed his hypothesis without mercy, using his power as a dominant American physicist. But Millikan was wrong. Experiments showed that the particle or ray that made the air conductive could get through thick pieces of lead, pieces that were known to stop x-rays. Here is a lovely illustration of another uncertainty in the practice of science: great researchers can be wrong. By the early 1930’s it was clear that mysterious particles had the ability to penetrate long distances in air and to pass through thick pieces of lead. Since scientists name effects even when not understood, the phenomenon was called the penetrating component in cosmic rays. In the practice of science naming a phenomenon does not mean that the phenomenon is understood. The famous Oppenheimer even composed a theory explaining that high-energy electrons could penetrate lots of material even though it was well known that it is difficult for low-energy electrons to penetrate material. In the practice of science the very human desire to explain can lead to premature theories and wrong theories. Yes, Oppenheimer was wrong too. Finally in 1937 three sets of experiments reported that the penetrating component could be explained by the existence of a particle more massive than an electron but not as massive as a proton, a new particle eventually called the muon! It was almost another ten years, however, before the full nature of the muon was determined. A complicated story had to be unraveled. Protons and nuclei hitting the upper levels of the atmosphere produce other particles, mostly pions, through the strong force, Figure 6, and the pions in turn decay into muons. The muon does not have the strong force and so interacts very little in the air or in other matter. Indeed it only interacts enough to make the air conductive. So both the muon and the electron lack the strong force. There is another similarity: the muon’s electric charge is exactly the same as the electron’s charge. Hence the muon acts electrically just like an electron. Well, why don’t we have muonics, muonic motors, muonic computers? The short answer is that the muon is unstable: it decays in about $`10^{-6}`$ seconds, a millionth of a second; a longer answer comes later. The muon decays because its mass is about 200 times the mass of the electron, and the decay process, Figure 7, is $$\mathrm{muon}\to \mathrm{electron}+\mathrm{another\ particle}+\mathrm{another\ particle}.$$ What are these other particles that occur in the decay of the muon and also occurred in the decay of the neutron? This was the question that led to the discovery of the elementary particle, the neutrino. ## Theory in science: the neutrino from theory to first discovery. 
The motivation that led to the discovery of the neutrino was very different from the motivation that led to the discovery of the electron and the muon. There was not the need to explain a general phenomenon such as cathode rays or cosmic rays; the need was to understand the experimental details of the decay process of the neutron and other nuclear decays. Experiments had shown that the energy balance in these decays required the production of “other particles” but the experiments could not detect the “other particles.” The great theoretical physicist Pauli proposed in the early 1930’s that the other particles, eventually called neutrinos, were particles with no electric charge, with no strong force, and with very small or no mass. But such particles had never been detected. In the 1950’s Reines and Cowan set out to see if the neutrino really existed. They explained that their experiment was designed “…to show that the neutrino has an independent existence, i.e. that it can be detected away from the site of its creation…” Therefore the motivation in the narrow sense was to test a hypothesis. In a broader sense the motivation was to see if a particle with the strange properties of the proposed neutrino could exist. This kind of motivation is very different from the phenomenon-driven motivations which led to the electron and muon discoveries. The task undertaken by Reines and Cowan was to verify the neutron decay hypothesis $$\mathrm{neutron}\to \mathrm{proton}+\mathrm{electron}+\mathrm{neutrino}$$ by showing that the neutrino existed. But how to do this? If the neutrino had no electric charge and no strong force it would only occasionally interact in matter through the aptly named weak force. Their answer had two parts. First they used a nuclear reactor in which there is a tremendous rate of neutron decay to produce an intense outflow of neutrinos, Figure 8. I should write a hypothetical intense outflow of neutrinos because the existence of the neutrino had not been proven. The second part of their answer was to use a large amount of matter in the form of liquid scintillator to detect the occasional neutrino interaction in the scintillator. A scintillator is a type of material that emits visible light when particles interact with the material. The experiment worked and the adjective hypothetical was removed, an accomplishment for which Reines received the 1995 Nobel Prize in Physics. This is a classic example of scientific discovery: puzzling experimental results leading to a bold new hypothesis, and then confirmation of the new hypothesis by a new and different experiment. But this simple sequence ignores the crucial use by Reines and Cowan of new experimental technology, the nuclear reactor and the large liquid scintillator detector. Scientific progress often depends upon the invention of new experimental technology to verify new hypotheses. ## Scientific laws: mass and electric charge in subatomic physics. In the biological, chemical and mechanical phenomena of our everyday lives the total mass never changes. If you break a brick in two, the sum of the masses of the two pieces is equal to the mass of the original brick. In the chemical reaction $$\mathrm{sodium\ atom}+\mathrm{chlorine\ atom}\to \mathrm{sodium\ chloride\ molecule}$$ the mass of the sodium chloride molecule is equal to the sum of the masses of the sodium atom and the chlorine atom. But in the world of subatomic particles one can destroy mass, changing mass into energy. 
Or one can create mass by changing energy into mass. An example of destroying mass is muon decay (the “other particles” are neutrinos): $$\mathrm{muon}\to \mathrm{electron}+\mathrm{neutrino}+\mathrm{neutrino}.$$ The sum of the masses of the electron and the two neutrinos is less than the mass of the muon. Some of the mass of the muon has been destroyed; it has been changed into energy. This process might be written $$\mathrm{muon}\to \mathrm{electron}+\mathrm{neutrino}+\mathrm{neutrino}+\mathrm{energy}.$$ The inverse process, changing energy into mass, became important to experimental subatomic physics in the 1930’s with the inventions and improvements of particle accelerators, a subject discussed in the next section. An example of changing energy into mass is the reaction, Figure 9, $$\mathrm{electron}+\mathrm{electron}\to \mathrm{muon}+\mathrm{muon}.$$ The mass of a muon is about 200 times the mass of an electron. Therefore in this process the electrons have to possess large amounts of energy, this energy being changed into the masses of the muons. I want to be a little more specific about this elementary particle reaction, changing a pair of electrons into a pair of muons. Since electrons and muons have electric charge that can be positive or negative, it is usual to specify the sign of the charge. For example: $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ muon}+\mathrm{positive\ muon}.$$ Negative and positive units of electric charge have the same size for all known subatomic particles; therefore on the left side of this process the negative charge exactly cancels the positive charge and the total charge going into the reaction is zero. The products of the reaction on the right side also have total charge zero. But to the best of our experimental knowledge the reaction $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{positive\ muon}+\mathrm{positive\ muon}$$ where the total amount of charge changes, never occurs. The rule is that the total amount of charge going into a reaction must be the same as the total amount of charge coming out of the reaction. This is called the law of the conservation of electric charge. Again beware of terminology in the practice of science. No one understands why charge cannot be created or destroyed. Experimental rules are often called laws whether or not we understand the reason for the rule. On the other hand we do understand why mass can be created or destroyed in a reaction. Mass is just another form of energy; with the right apparatus and reaction, chemical energy can be changed into mechanical energy and mechanical energy can be changed into mass energy, or one can carry out any other combination of energy changes. ## Accelerators and high-energy physics. The energy of a particle depends upon its velocity; the greater the velocity the greater the energy. When we increase the velocity of an automobile or a particle we say we are accelerating the automobile or the particle; and so the devices that increase the velocities of particles are called accelerators. The basic idea used in accelerators is simple: a charged particle will accelerate in an electric field because of the electric force on the particle, Figure 10. But accelerator technology is complicated; I show two simple, schematic examples in Figure 10. 
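To connect the changing of energy into mass with the accelerator energies just mentioned, here is a minimal back-of-the-envelope sketch; it is my illustration, not part of the original discussion. It assumes the standard electron rest energy of about 0.511 MeV, the roughly 207-to-1 muon-to-electron mass ratio quoted later in this paper, and a head-on collider so that all of the beam energy is available to make mass.

```python
# A rough sketch of the minimum collision energy needed for
#   negative electron + positive electron -> negative muon + positive muon,
# assuming standard values (electron rest energy ~0.511 MeV, muon ~207 times heavier)
# and a head-on collider where all of the beam energy can be turned into mass.

electron_rest_energy_mev = 0.511     # MeV, assumed standard value
muon_to_electron_ratio = 207         # "about 200 times" in the text

muon_rest_energy_mev = muon_to_electron_ratio * electron_rest_energy_mev
# Two muons must be created, so the colliding electrons together must supply
# at least twice the muon rest energy.
threshold_mev = 2 * muon_rest_energy_mev

print(f"One muon: about {muon_rest_energy_mev:.0f} MeV of rest energy")
print(f"Minimum total collision energy: about {threshold_mev:.0f} MeV")
```

The numbers come out to roughly 106 MeV per muon and about 212 MeV of total collision energy, which is why only an accelerator, not everyday electrical apparatus, can carry out this reaction.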
Most present research in elementary particle physics uses high-energy particles from accelerators, and so elementary particle physics is also called high-energy physics. And indeed it can be very high energy. For example a proton can be given so much energy that in the collision with another proton dozens of pions can be produced: $$\mathrm{proton}+\mathrm{proton}\to \mathrm{proton}+\mathrm{proton}+\mathrm{dozens\ of\ pions}.$$ ## Two kinds of neutrinos and the lepton family. A major discovery using a high-energy accelerator was the experimental proof in the middle 1960’s that there are two kinds of neutrinos. One is associated with the electron and is called the electron neutrino; the other is associated with the muon and is called the muon neutrino. Indeed in the decay of a muon these two kinds of neutrinos appear, for example: $$\mathrm{negative\ muon}\to \mathrm{negative\ electron}+\mathrm{muon\ neutrino}+\mathrm{electron\ neutrino}.$$ I have put in the electric charges in these muon decays to emphasize again that total charge does not change in a reaction. Of course the electric charge of neutrinos is zero. I hope elementary particle physics cognoscenti will forgive me for not drawing a distinction here, or in the previous sections, between a neutrino and an antineutrino. In an experiment for which Lederman, Schwartz, and Steinberger received the Nobel Prize, a high-energy neutrino beam was produced indirectly by a high-energy proton accelerator. When these neutrinos interacted with matter only muons were produced, not electrons, Figure 11. Hence these neutrinos were muon-associated. As I have explained, neutrinos rarely interact, and so it was not an easy experiment. Other later experiments have shown that one can use an accelerator to make a different kind of neutrino, neutrinos that produce only electrons when they interact with matter. Muon neutrinos associated with muons, electron neutrinos associated with electrons, does this sound like a tautology? It is not a tautology but the terminology promises more than we actually know. We do know that there are these two different kinds of neutrinos, one associated with the electron, one associated with the muon. But we do not understand the mechanism of that association. For example, is there something inside the electron that is also inside the electron neutrino? Allowing some anthropomorphism, how does a neutrino, a particle of perhaps no size, know that it is associated with an electron, another particle of perhaps no size? The lesson here about the practice of science is again that terminology may appear to have deeper meaning than is warranted. By the middle 1960’s the electron, muon, and the two neutrinos were thought of as forming a family, the lepton family, Table 2. The identification of the lepton family was based on two considerations. First, these four particles have nothing to do with the strong force. Second, they had very small masses compared to most other subatomic particles, less mass than the proton, less mass than the neutron, and less mass than other subatomic particles such as the pion. The Greek word leptos means small or fine; the electron, muon and two neutrinos were thought to be the smallest mass subatomic particles. Again not a profound terminology. There is one other particle that also has a very small mass, probably exactly zero mass. Sometimes light interacts with matter as a particle, not as a light wave, and this particle is called a photon. 
I mentioned the photon at the beginning of the paper as belonging to the third family of elementary particles, particles that carry the basic forces. In this paper, I cannot give an explanation useful to the reader of the difference between the lepton family and this third family. All I can say is that the photon behaves very differently from the electron, muon, and neutrinos; to include it in the lepton family would destroy the meaning of the lepton classification. ## Applications of basic research: electrons, muons, and neutrinos. The electric telegraph was developed and put into use in the first half of the nineteenth century, long before the discovery of the electron, even though its operation depends on the properties of electrons in metals. The same is true of the electric motor, electric generator, telephone, and electric lights; all were invented and used before the discovery of the electron, although all depend on the properties of the electron. Even the early days of wireless technology were not based on electron physics. The invention and use of new technology need not depend on basic research; in fact for most of history it has not depended on basic research. With the discovery of the electron and the understanding of its behavior in solids, gases, and vacuums, however, many more inventions were possible. Radio technology, until the commercial use of the transistor in the 1950’s, depended upon the vacuum tube in which the electric current is carried through the vacuum by electrons going from the cathode to the anode, the old cathode ray idea. Of course the transistor, the integrated circuit, and the computer all depend upon a thorough understanding of the behavior of electrons in semiconductor materials such as silicon. And so we have become very used to the idea that basic research can lead to new technology: the wonder of the computer, the horror of nuclear weapons. I wrote earlier that we cannot use muons in electron technology because the muons are unstable. In addition, muons behave very differently from electrons in matter. New discoveries sometimes can be used to duplicate or improve old technology, but more often new discoveries lead to new technology or no technology. There is a possibility that muons might someday be used in a new energy technology. The sun produces energy by the fusion of nuclei. As a simplified example, if a proton fuses with a deuteron, a combined nucleus plus energy is produced, Figure 12: $$\mathrm{proton}+\mathrm{deuteron}\to \mathrm{combined\ nucleus}+\mathrm{energy}.$$ A proton is the nucleus of the hydrogen atom and a deuteron is the nucleus of the hydrogen atom found in heavy water. Both exist naturally, but the problem is getting the proton and the deuteron to collide with sufficient force to fuse. The high temperature of the sun and stars produces the required forceful collisions. By the way, this high temperature requirement is also the reason for the lack of success in producing energy on Earth through controlled fusion. But negative muons can produce proton-deuteron fusion at room temperature. This was demonstrated and understood decades ago. The negative charge of the muon pulls together the positively charged proton and positively charged deuteron; no extra temperature is needed, Figure 13. Where then are the muon fusion reactors? The problem is that muons decay, and so new muons have to be continually created using an accelerator. 
With existing accelerator technology the energy required to produce the muons is greater than the energy from the fusion. No one has yet designed a sufficiently efficient accelerator, but it may not be impossible to do so. Sometimes new discoveries can in principle lead to new technology, but economics or unfortunate technological problems may prevent practical use. What about the neutrinos? Do they offer any practical uses? There have been suggestions that muon neutrino beams could be used for geological research and prospecting deep in the Earth. The rate of interaction of the neutrinos would be proportional to the density of matter; the interaction rate being measured by the muons so produced. A grand but futuristic engineering project. ## Puzzles in science: the electron-muon problem. I now come to my research in lepton physics. In 1963 I joined the Stanford Linear Accelerator Center, SLAC, to do research in high-energy physics. I had been working with particles that interacted through the strong force, a broad and popular, but complicated, area. I wanted to work in a simpler area and so my thoughts turned to the known leptons before 1970: the electron, the muon, and the two neutrinos, Table 2. I was particularly intrigued by the connection between the electron and muon. With respect to the electromagnetic force, and the absence of the strong force, the muon behaves simply as a heavier electron, 207 times heavier. But why is it 207 times heavier? Another puzzle was understanding the muon decay $$\mathrm{negative\ muon}\to \mathrm{negative\ electron}+\mathrm{muon\ neutrino}+\mathrm{electron\ neutrino}.$$ Why doesn’t the muon decay to the electron through the simpler process $$\mathrm{negative\ muon}\to \mathrm{negative\ electron}+\mathrm{photon}?$$ The photon has very small or zero mass just like the neutrinos, and the photon has zero electric charge so that the charge is the same on both sides of the decay reaction. By the middle 1960’s these questions and puzzles were called the electron-muon problem. I thought that SLAC would be an excellent place to work on the electron-muon problem. A high-energy electron accelerator was being built at SLAC, intense beams of high-energy electrons were available, and SLAC researchers were starting experiments on the collision of electrons with protons and nuclei. It was also easy to use the electrons to make beams of high-energy muons. I decided to start high-energy experiments on the collision of muons with protons and nuclei. ## Obsession in science. I was obsessed with a simple idea. Since the muon is much heavier than the electron, I speculated that the muon somehow had some of the heavier proton’s nature. Therefore I thought that the collisions of muons with protons would be different from the collisions of electrons with protons. Don’t try to follow this idea in detail because after five years of experiments with muons I realized in the early 1970’s that I should give up this idea. My colleagues and I found that there would always be errors of 10 or 15% in comparisons of our muon-proton collision measurements with the electron-proton collision measurements of the other SLAC researchers. We knew of no way to improve the precision of our experiments. And so even though obsessed with the electron-muon problem I gave up. It is important in the practice of science to know when to be obsessed and when to give up the obsession; it is important to learn the art of scientific obsession. 
A scientist needs obsession to keep going through the ups and downs of research, but obsession can also lead to years or decades of pointless research. There should be some comfort in the thought that if an idea is good, scientists will return to it in future years with better experiments and better theoretical understanding. To close this part of the story, since the early 1970’s there have been many better experiments on muon-proton collisions, experiments carried out for other reasons. These experiments have shown that nothing can be learned about the electron-muon problem from such experiments. Thus it was lucky that I gave up the obsession. Of course sometimes one gives up a scientific obsession, only to find that years later others have made it pay off. My attack on the electron-muon problem being thwarted by nature, I turned to another idea that had been in my mind since the early 1960’s. Perhaps there was another undiscovered and more massive charged lepton, a heavy relative of the electron and the muon. As my attack on the electron-muon problem began to go badly, I became more and more optimistic about finding a new charged lepton. As Voltaire wrote in Candide, “Optimism, said Candide, is a mania for maintaining that all is well when things are going badly.” ## Simple ideas in science: my search for a new lepton. I became obsessed with another simple idea. I thought that the electron and the muon might be the smallest mass members of a much larger family of leptons. I drew for myself the following chart: | We know that there is: | an electron | with its associated neutrino. | | --- | --- | --- | | We know that there is: | a muon heavier than the electron | with its associated neutrino. | | Perhaps there is: | a lepton#3 heavier than the muon | with its associated neutrino? | | Perhaps there is: | a lepton#4 heavier than lepton#3 | with its associated neutrino? | | And so forth | | | Thus I dreamed that there were a large number of more and more massive charged leptons: electron, muon, lepton#3, lepton#4, lepton#5 and so forth; and that each of these charged leptons had an associated neutrino. Why such a large number? Because I didn’t see why there should be any upper limit to the mass of a lepton. I had a hidden motivation in this search for new leptons. No one had solved the electron-muon problem; there were not enough clues. But if an additional charged lepton were to be discovered, then we would have the electron-muon-lepton#3 problem. We would surely have more clues. A basic principle in the practice of science: if you can’t solve a problem, get more data. I decided the best way to look for these additional charged leptons was to copy the old reaction $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ muon}+\mathrm{positive\ muon}.$$ We could look for the new lepton#3 by using the reaction $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ lepton\ \#3}+\mathrm{positive\ lepton\ \#3}.$$ Of course we would be delighted to find any new lepton, lepton#3 or lepton#4 or lepton#5. It was a very optimistic proposal: these hypothetical leptons had to exist and our experimental equipment had to work well enough for us to find them. An obvious worry was that the new leptons might exist, but we might not have enough energy to produce the required masses. I followed a rule of mine for starting new science ventures. 
If you get a new idea for an experiment, don’t spend forever trying to understand every detail of how you will carry it out; just start. You will learn as you proceed with the experiment. There is a problem in this rule. Usually one gets five or ten bad or fruitless ideas for every good idea. This means that you will spend much time on bad ideas. Unfortunately in the practice of experimental physics, it usually takes time to identify the good idea. ## The discovery of the tau lepton. Most people in the high-energy physics community of the early 1970’s didn’t know or didn’t care about this search for new leptons. Of those who did know about the search, most were skeptical, even among my colleagues. In the practice of science there is often a choice between working in a popular area that most colleagues feel is fruitful or working in an unpopular area that most colleagues feel is a waste of time and research money. Most of the time the popular field is the fruitful field, but a discovery in an unpopular area brings more satisfaction and more fame. In the end it is a question of one’s personality. About 1970 we were completing at SLAC the construction of an electron collider called SPEAR. Electron colliders, Figure 14, were a new technology in high-energy physics; their development began in the 1960’s. Electron colliders provide the means to collide negative electrons with positive electrons at high energy with great intensity. The work I am about to describe could not have been done without the new technology of electron colliders. As I have already written, in the practice of science new technology is often crucial. In 1973 the SPEAR electron collider began operation and my colleagues and I began to look for a new lepton. By 1975 we began to find evidence for the existence of a third lepton, a lepton much more massive than the muon, in fact about 17 times more massive! My close colleagues and I were excited, delighted, and overjoyed. We were detecting the reaction $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ lepton\ \#3}+\mathrm{positive\ lepton\ \#3}.$$ We knew it was lepton#3 because our experiments and other experiments had shown there was no other lepton with a mass greater than that of the muon but less than that of lepton#3. But the larger elementary particle physics community remained skeptical, doubting and criticizing our research. The problem was that while we continued to find more and more evidence for the existence of a new lepton, other experimenters could not verify our results using somewhat similar experimental methods. Furthermore a few of these experimenters were not eager to find verification. A somewhat unpleasant consequence of scientific competition is that once a discovery is claimed, others can get more credit for disproving the claim than for verifying the claim. It was a tough few years for me; our evidence kept growing but there was no outside verification. Certainty in science comes from verification of one’s findings by others; this is fundamental scientific practice. It allows the eventual overcoming of uncertainties in scientific practice. Finally in late 1977 other experimenters began to find our new lepton. We gave lepton#3 the Greek name tau, because tau, written as $`\tau `$, is the first letter of the Greek word for third. 
And so the reaction becomes $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ tau}+\mathrm{positive\ tau}.$$ ## The road from difficult research to easy research: the tau lepton. Our discovery was based upon studying about one hundred examples of this reaction and the properties of the tau leptons so produced. Today I work with an experiment at Cornell University where millions of these reactions have been detected and millions of tau leptons have been studied. An improved electron collider at Cornell, and new electron colliders at my laboratory SLAC and at the KEK laboratory in Japan, will enable physicists to study ten million examples per year of $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ tau}+\mathrm{positive\ tau}.$$ Usually improvements in technology allow obscure and difficult scientific studies to become easier and easier. This adds to the certainty of scientific results. If studies of a phenomenon never get easier, if the technology for carrying out the studies never improves, then it is most probably not science that is being practiced. Consider psychic phenomena: it is no easier to verify the reality of telepathy than it was five hundred years ago. A few notes on the properties of the tau. Its mass is 3480 times the mass of the electron, Table 2. It has the same size electric charge as the electron, and like the electron and muon, it has nothing to do with the strong force. It is indeed a lepton. With the discovery of the tau, however, the name lepton has lost its original meaning. The tau is not a light particle; it is a heavy particle, having about twice the mass of the proton. There is a neutrino associated with the tau called, of course, the tau neutrino. The tau, like the muon, is unstable. It decays in an average time of roughly $`10^{-13}`$ seconds. There are many ways in which the tau decays, but two of these ways beautifully demonstrate connections between the electron, the muon, and the tau. Recall that the muon decays through the process $$\mathrm{negative\ muon}\to \mathrm{negative\ electron}+\mathrm{electron\ neutrino}+\mathrm{muon\ neutrino}.$$ Two of the ways the tau decays are $$\mathrm{negative\ tau}\to \mathrm{negative\ electron}+\mathrm{electron\ neutrino}+\mathrm{tau\ neutrino}$$ $$\mathrm{negative\ tau}\to \mathrm{negative\ muon}+\mathrm{muon\ neutrino}+\mathrm{tau\ neutrino}.$$ ## Nature can be cruel: the electron-muon-tau problem. I had dreamed that once a new lepton was found, the properties of the new lepton would provide new clues to the inner nature of leptons, indirectly solving the old electron-muon problem. Now in 1998 we know a tremendous amount about the properties of the tau lepton. There have been hundreds of physics Ph.D. theses on the properties of the tau, more than a thousand experimental and theoretical papers on the tau, and every two years we hold an international conference devoted solely to the tau. But there are no new clues to the inner nature of the leptons. If you assume that the tau behaves exactly like a heavier version of the muon, and if you use knowledge acquired in other parts of subatomic physics, you can predict quantitatively the behavior of the tau. From one point of view this is wonderful: it shows that we are developing certain and consistent understanding of the behavior of elementary particles. 
But from the point of view of those who want to push deeper into the world of elementary particles, who want to push below the bottom of Figure 1, this is disheartening. Nature can be cruel. ## The uncertainty of research directions: are there more leptons? If you look back a few pages you will see that I dreamed not only of lepton#3, but also lepton#4 and lepton#5 and so on. Since the discovery of the tau there have been many, many searches for additional leptons. Yet no more have been found. I am as surprised as anyone. The powerful method we used to discover the tau $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ tau}+\mathrm{positive\ tau}$$ has been used at ever-increasing energies to search for the next charged lepton $$\mathrm{negative\ electron}+\mathrm{positive\ electron}\to \mathrm{negative\ lepton\ \#4}+\mathrm{positive\ lepton\ \#4}.$$ As I write this paper these searches have been carried out at the CERN European laboratory up to energies more than 50 times greater than the energy at which we discovered the tau. No new charged lepton has been found. This means that if lepton#4 exists, its mass is larger than about 50 times the mass of the tau, about 180,000 times the mass of the electron. In addition by a related method, experimenters have searched for heavier neutrinos; nothing has been found beyond the three known neutrinos, those associated with the electron, muon, and tau. The searches at CERN are not completed; they will extend about 10% higher in energy. Sometimes in the practice of science, changing one parameter in an experiment can lead to a discovery; it can be a change in energy or in precision or in the amount of data. About ten years from now, perhaps a little later, a new kind of electron collider called a linear collider will go into operation. This new technology accelerator is being developed at SLAC, in Japan, and in Europe. It will produce five times more energy than existing electron colliders. For the present there are two possibilities. One possibility is that there are no more leptons beyond the six in Table 2. That’s bad because at present we don’t understand why the number of lepton types is limited and we may not get any more clues from studying the leptons themselves. But there are smart young women and men entering high-energy physics; they may bring us that understanding. The other possibility is that there are more leptons, and we just don’t know how to find them. Each of the known leptons was discovered using a different experimental technology. The electron was found in the cathode ray phenomenon, the muon was found in cosmic rays, the electron neutrino was found using a reactor, the difference between the electron neutrino and the muon neutrino was discovered using a high-energy proton accelerator, and the tau was found using an electron collider. Perhaps this was simply because the discoveries stretched over a hundred years and technologies keep changing; or perhaps leptons are so elusive that a new technology is required for each discovery. Perhaps the next charged lepton is so massive that it is beyond the energy reach of present or near future searches using electron collider technology, even searches using the projected linear colliders. Thus there is an uncertain future for the hundred years of research on new leptons. We may have to give up much hope of finding new leptons, or we have to find a new technology. 
This is how a research direction can become uncertain even though it has been fruitful. ## Speculative experiments and the practice of science. But I have not given up; I have been speculating about other possibilities. In fact my colleagues and I are carrying out experiments based on these speculations. Perhaps there is a new type of massive, charged lepton that already exists in nature. Suppose this new type of lepton was stable like the electron and had been produced in the early universe, perhaps in the “big bang.” Then it might be present in old pieces of matter such as meteorites and ancient rocks. For convenience I am going to call this hypothetical new lepton the lambda. But if the lambda is massive, why should it be stable, why shouldn’t it decay like the muon and the tau: $$\mathrm{negative\ lambda}\to \mathrm{negative\ muon}+\mathrm{muon\ neutrino}+\mathrm{lambda\ neutrino}$$ $$\mathrm{negative\ lambda}\to \mathrm{negative\ electron}+\mathrm{electron\ neutrino}+\mathrm{lambda\ neutrino}?$$ These decays would be prevented and the lambda made stable by two kinds of speculative changes in the usual properties of a charged lepton. One speculation is that the lambda does not have the usual size of electric charge, but has some fractional electric charge, say 1/2 of the usual charge or 5/4 of the usual charge. Then the decays written above would not occur because the electric charge would be different after the decay compared to the electric charge before the decay. And as far as we know the total electric charge cannot change in a reaction. The other speculation is based on the observation that the muon and tau need their associated neutrinos in order to decay. If one assumes that there is no neutrino associated with the lambda, then its decay might be prevented. My colleagues and I at SLAC are carrying out experiments searching for massive, stable leptons. We are not using accelerators; we are using a highly automated and modernized version of the apparatus used by Millikan ninety years ago to measure the electron’s charge. If one is going to engage in a speculative experiment, there are three criteria that should be satisfied. One should make sure that the speculation does not violate established scientific knowledge. Carrying out the experiment should be interesting and pleasurable; that may be the only reward. It should be easy for others to duplicate the experiment so that the verification of a speculation can be checked. ## Neutrino masses and a surprising return to cosmic rays. I am about finished with my recounting of a hundred years of lepton research and what it teaches us about the practice of science. There is one more episode having to do with the masses of the neutrinos. It has been very hard to measure these masses; we only know for certain the upper limits given in Table 2, and there is even the possibility that neutrinos have zero mass. For decades it has been suspected, or at least hoped, that neutrinos might change into each other, a muon neutrino change into an electron neutrino, or the converse, an electron neutrino change into a muon neutrino, or a muon neutrino change into a tau neutrino. If such changes could occur, then a general principle of quantum mechanics predicts that the rate of change depends on the masses of the neutrinos. In the last decade there have been many searches for this neutrino-changing phenomenon. 
Experimenters have used electron neutrinos from reactors, the same neutrinos that Reines and Cowan first detected. Experimenters have used muon neutrino beams from high-energy accelerators, the same sort of beam used by Lederman, Schwartz, and Steinberger to show that there are two kinds of neutrinos. But all these experiments have been inconclusive at best. Now, as the century ends, the phenomenon of neutrino-changing may have been finally detected in, of all places, cosmic rays. One effect of cosmic rays is to produce muon neutrinos, and these muon neutrinos pass through the atmosphere and into the Earth. We know enough about cosmic rays to predict how many muon neutrinos should hit the Earth’s surface per second. A vast new underground apparatus in Japan, called Super-Kamiokande, has been used to count the number of muon neutrinos, and there seem not to be enough of them. Furthermore, it seems as though the missing muon neutrinos have changed into tau neutrinos or into some unknown neutrino, but not into electron neutrinos. This means that the muon neutrino, and perhaps the tau neutrino, definitely have a non-zero mass. But it could be a very small mass, less than 1/1,000,000 of the electron mass. These first results require verification from other experiments looking at cosmic rays and elucidation from experiments using reactors or accelerators. Still the results demonstrate the surprises that can occur in science. Surprises are the best part of the practice of science, but most surprises require the experimenters to do something new and different, such as examining a new phenomenon or applying a new technology to an old phenomenon. ## Looking ahead. An up-to-date physicist in 1899 would have known about the electron and some of its properties, but would not have been able to know anything else about the rest of the world of lepton physics. We are in the analogous state of ignorance in 1999. Whatever the science (physics, chemistry, biology, psychology), we cannot know what we will learn in the next hundred years. We only know that the practice of science is full of uncertainties and that the test of reality is always experiment and observation. Darwin wrote, “I must begin with a good body of facts and not from a principle (in which I always suspect some fallacy) and then as much deduction as you like.” ## Appendix on very large and very small numbers. It is tedious to write and hard to decipher a very large number such as 100,000,000,000. It is better to use the notation $`10^\mathrm{N}`$ where N tells us the number of zeros in the number. For example: One thousand = 1,000 = $`10^3`$ One million = 1,000,000 = $`10^6`$ Ten million = 10,000,000 = $`10^7`$. A number such as $`1.5\times 10^7`$ means $`1.5\times 10,000,000`$. An analogous system is used for very small numbers such as 1/100,000. The number is written $`10^{-\mathrm{N}}`$, where the negative sign indicates that the zeros are in the denominator. Thus 1/1,000 = $`10^{-3}`$ 1/1,000,000 = $`10^{-6}`$. A number such as $`1.5\times 10^{-6}`$ means 1.5/1,000,000.
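As a small illustrative aside, not part of the original appendix, this power-of-ten notation is exactly how most programming languages write numbers; the following Python lines, which assume nothing beyond the appendix's own examples, make the correspondence concrete.

```python
# The appendix's examples of scientific notation, written in Python's "e" notation.
print(1e3)      # 1000.0       one thousand,  10^3
print(1e7)      # 10000000.0   ten million,   10^7
print(1.5e7)    # 15000000.0   1.5 x 10,000,000
print(1.5e-6)   # 1.5e-06      1.5 / 1,000,000
```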
no-problem/9901/quant-ph9901012.html
ar5iv
text
# How many functions can be distinguished with 𝒌 quantum queries?[1] ## I Introduction Quantum computers can solve certain oracular problems with fewer queries of the oracle than are required classically. For example, Grover’s algorithm for unstructured search can be viewed as distinguishing between the $`N`$ functions $$G_j(x)=\begin{cases}-1&\text{ for }x=j\\ \phantom{-}1&\text{ for }x\ne j\end{cases}$$ (1) where both $`x`$ and $`j`$ run from 1 to $`N`$. Identifying which of these $`N`$ functions the oracle holds requires of order $`N`$ queries classically, whereas quantum mechanically this can be done with of order $`\sqrt{N}`$ quantum queries. The $`N`$ functions in (1) are a subset of the $`2^N`$ functions $$F:\{1,2,\ldots ,N\}\to \{-1,1\}.$$ (2) *All* $`2^N`$ functions of this form can be distinguished with $`N`$ queries so the $`N`$ functions in (1) are particularly hard to distinguish classically. No more than $`2^k`$ functions can be distinguished with only $`k`$ classical queries, since each query has only two possible results. Note that this classical “information” bound of $`2^k`$ does not depend on $`N`$, the size of the domain of the functions. Quantum mechanically the $`2^k`$ information bound does not hold. In this paper we derive an upper bound for the number of functions that can be distinguished with $`k`$ quantum queries. If there is a set of $`D`$ functions of the form (2) that can be distinguished with $`k`$ quantum queries, we show that $$D\le 1+\binom{N}{1}+\binom{N}{2}+\cdots +\binom{N}{k}.$$ (3) If the probability of successfully identifying which function the oracle holds is only required to be $`p`$ for each of the $`D`$ functions, then $$D\le \frac{1}{p}\left[1+\binom{N}{1}+\binom{N}{2}+\cdots +\binom{N}{k}\right].$$ (4) We also give two examples of sets of $`D`$ functions (and values of $`k`$ and $`p`$) where (3) and (4) are equalities. In these cases the quantum algorithms succeed with fewer queries than the best corresponding classical algorithms. One of these examples shows that van Dam’s algorithm distinguishing all $`2^N`$ functions with high probability after $`N/2+O(\sqrt{N})`$ queries is best possible, answering a question posed in his paper. We also give an example showing that the bound (3) is not always tight. An interesting consequence of (3) is a lower bound on the number of quantum queries needed to sort $`n`$ items in the comparison model. Here, we have $`D=n!`$ functions, corresponding to the $`n!`$ possible orderings, to be distinguished. The domain of these functions is the set of $`N=\binom{n}{2}`$ pairs of items. If $`k=(1-ϵ)n`$, the bound (3) is violated for $`ϵ>0`$ and $`n`$ large, as is easily checked. Hence, for any $`ϵ>0`$ and $`n`$ sufficiently large, $`n`$ items cannot be sorted with $`(1-ϵ)n`$ quantum queries. ## II Main result Given an oracle associated with any function $`F`$ of the form (2), a quantum query is an application of the unitary operator, $`\widehat{F}`$, defined by $$\widehat{F}|x,q,w\rangle =|x,qF(x),w\rangle $$ (5) where $`x`$ runs from 1 to $`N`$, $`q=\pm 1`$, and $`w`$ indexes the work space. 
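To make the query operator concrete, here is a small numerical sketch (our illustration, not part of the paper): it builds $`\widehat{F}`$ for an arbitrary example function on a toy domain, ignoring the workspace register $`w`$, and checks that it permutes the basis states, hence is unitary, and is its own inverse.

```python
import numpy as np

# Toy domain and an arbitrary example function F: {1,...,N} -> {-1, +1}.
N = 4
F = {1: +1, 2: -1, 3: -1, 4: +1}

def index(x, q):
    """Map the basis label |x, q> (q = +1 or -1) to a vector index."""
    return 2 * (x - 1) + (0 if q == +1 else 1)

# Build F_hat with F_hat |x, q> = |x, q F(x)>, as in Eq. (5) with w suppressed.
dim = 2 * N
F_hat = np.zeros((dim, dim))
for x in range(1, N + 1):
    for q in (+1, -1):
        F_hat[index(x, q * F[x]), index(x, q)] = 1.0

assert np.allclose(F_hat @ F_hat.T, np.eye(dim))  # permutation matrix, hence unitary
assert np.allclose(F_hat @ F_hat, np.eye(dim))    # applying the query twice undoes it
```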
A quantum algorithm that makes $`k`$ queries starts with an initial state $`|s\rangle `$ and alternately applies $`\widehat{F}`$ and $`F`$-independent unitary operators, $`V_i`$, producing $$|\psi _F\rangle =V_k\widehat{F}V_{k-1}\cdots V_1\widehat{F}|s\rangle .$$ (6) Suppose that the oracle holds one of the $`D`$ functions $`F_1,F_2,\ldots ,F_D`$, all of the form (2). If the oracle holds $`F_j`$, then the final state of the algorithm is $`|\psi _{F_j}\rangle `$, but we do not (yet) know what $`j`$ is. To identify $`j`$ we divide the Hilbert space into $`D`$ orthogonal subspaces with corresponding projectors $`P_1,P_2,\ldots ,P_D`$. We then make simultaneous measurements corresponding to this commuting set of projectors. One and only one of these measurements yields a $`1`$. If the $`1`$ is associated with $`P_\ell `$ we announce that the oracle holds $`F_\ell `$. Following earlier work, if the oracle holds $`F`$, so that the state before the measurement was $`|\psi _F\rangle `$ given by (6), we know that for each $`\ell `$, $`\|P_\ell |\psi _F\rangle \|^2`$ is a $`2k`$-degree polynomial in the values $`F(1),F(2),\ldots ,F(N)`$. More precisely, $$\|P_\ell |\psi _F\rangle \|^2=\sum _{r=1}^{m_\ell }\left|Q_{\ell r}(F(1),\ldots ,F(N))\right|^2$$ (7) where each $`Q_{\ell r}`$ is a $`k`$-th degree multilinear polynomial and $`m_\ell `$ is the dimension of the $`\ell `$-th subspace. Note that formula (7) holds for *any* $`F`$ whether or not $`F=F_j`$ for some $`j`$. The algorithm succeeds, with probability at least $`p`$, if for each $`j=1,\ldots ,D`$, we have $$\|P_j|\psi _{F_j}\rangle \|^2=\sum _{r=1}^{m_j}\left|Q_{jr}(F_j(1),\ldots ,F_j(N))\right|^2\ge p.$$ (8) We now prove the following lemma: Let $`F_0`$ be any one of the functions of form (2). If $`Q`$ is a polynomial of degree at most $`k`$ such that $$\left|Q(F_0(1),\ldots ,F_0(N))\right|^2=1$$ (9) then $$\sum _F\left|Q(F(1),\ldots ,F(N))\right|^2\ge \frac{2^N}{1+\binom{N}{1}+\cdots +\binom{N}{k}}$$ (10) where the sum is over all $`2^N`$ functions of the form (2). Proof: Without loss of generality we can take $`F_0(1)=F_0(2)=\cdots =F_0(N)=1`$. Now $$Q(F(1),\ldots ,F(N))=a_0+\sum _xa_xF(x)+\sum _{x<y}a_{xy}F(x)F(y)+\cdots $$ (11) where the last term has $`k`$ factors of $`F`$ and the coefficients are complex numbers. Note that $$\sum _FF(x_1)F(x_2)\cdots F(x_g)F(y_1)\cdots F(y_h)=0$$ (12) as long as the sets $`\{x_1,\ldots ,x_g\}`$ and $`\{y_1,\ldots ,y_h\}`$ are not equal and $`x_1,\ldots ,x_g`$ are distinct, as are $`y_1,\ldots ,y_h`$. This means that $$\sum _F\left|Q(F(1),\ldots ,F(N))\right|^2=2^N\left(|a_0|^2+\sum _x|a_x|^2+\sum _{x<y}|a_{xy}|^2+\cdots \right).$$ (13) Now (9) with $`F_0(x)\equiv 1`$ means $$\left|a_0+\sum _xa_x+\sum _{x<y}a_{xy}+\cdots \right|^2=1.$$ (14) Because of the constraint (14), the minimum value of (13) is achieved when all the coefficients are equal. Since there are $`1+\binom{N}{1}+\cdots +\binom{N}{k}`$ coefficients, the inequality (10) is established. Suppose we are given an algorithm that meets condition (8) for $`j=1,\ldots ,D`$. 
Then by the above lemma, $$\sum _{r=1}^{m_j}\sum _F\left|Q_{jr}(F(1),\ldots ,F(N))\right|^2\ge \frac{2^Np}{1+\binom{N}{1}+\cdots +\binom{N}{k}}.$$ (15) Summing over $`j`$ using (7) yields $$\sum _j\sum _F\|P_j|\psi _F\rangle \|^2\ge \frac{D\,2^Np}{1+\binom{N}{1}+\cdots +\binom{N}{k}}.$$ (16) For each $`F`$, the sum on $`j`$ gives 1 since $`\||\psi _F\rangle \|=1`$. Therefore the left-hand side of (16) is $`2^N`$ and (4) follows. ## III Examples 0. If $`k=N`$, all $`2^N`$ functions can be distinguished classically and therefore quantum mechanically. In this case (3) becomes $$2^N=D\le 1+\binom{N}{1}+\binom{N}{2}+\cdots +\binom{N}{N}=2^N.$$ (17) 1. For $`k=1`$, if $`N=2^n-1`$ there are $`N+1`$ functions that can be distinguished so the bound (3) is best possible. The functions can be written as $$f_a(x)=(-1)^{a\cdot x}$$ (18) with $`x\in \{1,\ldots ,N\}`$ and $`a\in \{0,1,\ldots ,N\}`$ and $`a\cdot x=\sum _ia_ix_i`$ where $`a_1\cdots a_n`$ and $`x_1\cdots x_n`$ are the binary representations of $`a`$ and $`x`$. To see how these can be distinguished we work in a Hilbert space with basis $`\{|x,q\rangle \}`$, $`x=1,\ldots ,N`$ and $`q=\pm 1`$, where a quantum query is defined as in (5) and the work bits have been suppressed. We define $$|x\rangle =\frac{1}{\sqrt{2}}\{|x,+1\rangle -|x,-1\rangle \}\qquad x=1,\ldots ,N$$ (19) and $$|0\rangle =\frac{1}{\sqrt{2}}\{|1,+1\rangle +|1,-1\rangle \},$$ (20) so $`\{|x\rangle \}`$, $`x=0,1,\ldots ,N`$, is an orthonormal set. Now by (5), if we define $`F(0)`$ to be $`+1`$, we have $$\widehat{F}|x\rangle =F(x)|x\rangle \qquad x=0,1,\ldots ,N$$ (21) and in particular, $$\widehat{f}_a|x\rangle =(-1)^{a\cdot x}|x\rangle \qquad x=0,1,\ldots ,N$$ (22) Now let $$|s\rangle =\frac{1}{\sqrt{N+1}}\sum _{x=0}^{N}|x\rangle $$ (23) and observe that the $`N+1`$ states $`\widehat{f}_a|s\rangle `$ are orthogonal for $`a=0,1,\ldots ,N`$. 2. In van Dam’s paper an algorithm is presented that distinguishes all $`2^N`$ functions in $`k`$ calls with probability $`\left(1+\binom{N}{1}+\cdots +\binom{N}{k}\right)/2^N`$. With this value of $`p`$, and $`D=2^N`$, the bound (4) becomes an equality. Furthermore, (4) shows that this algorithm is best possible. 3. Nowhere in this paper have we exploited the fact that for an algorithm that succeeds with probability 1, it must be the case that $`P_\ell |\psi _{F_j}\rangle =0`$ for $`\ell \ne j`$. With this additional constraint it can be shown that for $`N=3`$, no set of $`7=1+\binom{3}{1}+\binom{3}{2}`$ functions can be distinguished with $`2`$ quantum queries. Thus for $`N=3`$ and $`k=2`$ the bound (3) is not tight.
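Two quick numerical checks, offered as our own illustration rather than part of the paper, may help. The first verifies that the $`N+1`$ states $`\widehat{f}_a|s\rangle `$ of Example 1 are mutually orthogonal; the second confirms the Introduction's remark that with $`k=(1-ϵ)n`$ comparison queries the bound (3) is violated for large $`n`$, here with the illustrative choice $`ϵ=0.5`$ and $`n=30`$.

```python
import numpy as np
from math import comb, factorial

# --- Example 1: the N+1 states f_a|s> are mutually orthogonal (here n = 3, N = 7) ---
n = 3
N = 2**n - 1

def dot_mod2(a, x):
    """Bitwise inner product a.x = sum_i a_i x_i (mod 2) of binary representations."""
    return bin(a & x).count("1") % 2

# Components <x| f_a_hat |s> = (-1)^{a.x} / sqrt(N+1) in the orthonormal basis |x>, x = 0..N.
states = np.array([[(-1) ** dot_mod2(a, x) for x in range(N + 1)]
                   for a in range(N + 1)]) / np.sqrt(N + 1)
assert np.allclose(states @ states.T, np.eye(N + 1))   # pairwise orthogonal unit vectors

# --- Sorting remark: n! exceeds 1 + C(N,1) + ... + C(N,k) for k = (1 - eps) n, n large ---
n_items, eps = 30, 0.5
N_pairs = n_items * (n_items - 1) // 2
k = int((1 - eps) * n_items)
bound = sum(comb(N_pairs, i) for i in range(k + 1))
print(factorial(n_items) > bound)   # True: by (3), 30 items cannot be sorted with 15 queries
```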
# FERMILAB-PUB-99/002 hep-ph/9901202 Can a light technipion be discovered at the Tevatron if it decays to two gluons? ## Abstract In multiscale and topcolor–assisted models of walking technicolor, light, spin–one technihadrons can exist with masses of a few hundred GeV; they are expected to decay as $`\rho _TW\pi _T`$. For $`M_{\rho _T}200\mathrm{GeV}`$ and $`M_{\pi _T}100\mathrm{GeV}`$, the process $`p\overline{p}\rho _TW\pi _T`$ has a cross section of about a picobarn at the Tevatron. We demonstrate the detectability of this process with simulations appropriate to Run II conditions, for the challenging case where the technipion decays dominantly into two gluons. Color–singlet technipions, $`\pi _T^\pm `$ and $`\pi _T^0`$, are the pseudo–Goldstone bosons of multiscale technicolor and topcolor–assisted technicolor , and are expected to be the lightest particles associated with the new physics. These technipions couple to fermions in a similar fashion as the standard model Higgs boson, with a magnitude set by the technipion decay constant $`F_T`$, but have no tree level couplings to $`W`$ or $`Z`$ gauge bosons. Consequently, technipions can be produced singly at hadron colliders through tree–level quark annihilation or one–loop gluon fusion processes. If the quark annihilation process is dominant, then the production rate is feeble, and the signal is a dijet peak, maybe with heavy flavor. There is little hope of distinguishing this from QCD heavy flavor backgrounds. If gluon fusion is dominant, then the production rate can be large, but the signal is a digluon peak, with daunting backgrounds, or a rare $`\gamma \gamma `$ peak. It may be possible to observe the latter at the Large Hadron Collider (LHC) or possibly even the Tevatron, but this is still an open question. On the other hand, technivector mesons also arise in technicolor models, and these can be produced at substantial rates through their mixing with gauge bosons . The technivector mesons in question are an isotriplet of color-singlet $`\rho _T`$ and the isoscalar partner $`\omega _T`$. Because techni-isospin is likely to be a good approximate symmetry, $`\rho _T`$ and $`\omega _T`$ should have equal masses. The enhancement of technipion masses due to walking technicolor suggests that the channels $`\rho _T\pi _T\pi _T`$ and $`\omega _T\pi _T\pi _T\pi _T`$ are kinematically closed. Thus, the decay modes $`\rho _TW_L\pi _T`$ and $`Z_L\pi _T`$, where $`W_L`$, $`Z_L`$ are longitudinal weak bosons, and $`\omega _T\gamma \pi _T`$ may dominate . In recent phenomenological analyses, it has been assumed that $`\pi _T^0`$ decays into $`b\overline{b}`$. The presence of heavy flavor, plus an isolated lepton or photon, provides clear signatures for these processes. It has been demonstrated that such signals can be easily detected in Run II of the Fermilab Tevatron , and experimental searches following this prescription have been carried out on the Run I dataset . However, it is quite possible that light technipions contain colored technifermions, in which case the decay $`\pi _T^0gg`$ may contribute significantly or dominate if the number of technicolors $`N_{TC}`$ is large . In this case, the signature of technivector production is $`\gamma `$ or $`W+2`$ jets, and the backgrounds are correspondingly more severe. We present simulations of $`\overline{p}p\rho _T^\pm W_L^\pm \pi _T^0`$ for the Tevatron collider with $`\sqrt{s}=2\mathrm{TeV}`$ and an integrated luminosity of $`2\mathrm{fb}^1`$, corresponding to that expected in Run II. 
We follow the previous analysis which studied $`\pi _T^0b\overline{b}`$ in using topological cuts to exploit the resonant production process and thus enhance the signal–to–background ratio. For the cross sections and cuts we use in this paper, the signal stands out well above the background. We would expect that the complementary channel $`\overline{p}p\rho _T^0Z_L\pi _T^0`$ with $`Z\mathrm{}^+\mathrm{}^{}`$ might add some sensitivity, but we have not considered this in detail. For the case of $`Z\nu \overline{\nu }`$, the fake $`\text{ / }E_T`$ background from QCD multijet production is likely to overwhelm the signal. We have used Pythia 6.1 to generate $`\overline{p}p\rho _T^\pm W^\pm \pi _T^0`$ and $`\pi _T^0gg`$ at the Tevatron Collider with $`\sqrt{s}=2\mathrm{TeV}`$. The cross section is calculated under the assumptions of . The total production cross section times the branching ratio for the decay of the $`W`$–boson into electrons, $`\sigma B(W^\pm e^\pm )`$, is in the range 0.1–0.7 pb for $`m_{\rho _T}>200`$ GeV and $`m_{\pi _T}>100`$ GeV. The parameter dependence is illustrated in Fig. 1. For detailed studies we have focused on a typical point, with $`m_{\rho _T}=220\mathrm{GeV}`$ and $`m_{\pi _T}=110\mathrm{GeV}`$, where $`\sigma B=0.45\mathrm{pb}`$. The partial width for the decay $`\pi _T^0gg`$ can be calculated using the (energy–dependent) expression $`\mathrm{\Gamma }(\pi _Tgg)={\displaystyle \frac{1}{128\pi ^3F_T^2}}\alpha _s^2C_{\pi _T}N_{TC}^2s^{3/2},`$ (1) while the competing decay has the width $`\mathrm{\Gamma }(\pi _Tb\overline{b})={\displaystyle \frac{1}{4\pi ^2F_T^2}}3p_fC_fm_b^2.`$ (2) In these expression, $`\sqrt{s}`$ is the resonance mass, $`p_f`$ is the $`b`$–quark momentum in the resonance rest frame, and the constants $`C_{\pi _T}`$ and $`C_f`$ account for the flavor content of the technipion wave function. By comparing these expressions, we find that the $`\pi _T^0gg`$ decay begins to dominate when $`N_{TC}>34`$. For the purposes of this study, we have forced the $`\pi _T^0`$ to decay always into the $`gg`$ final state. Our final results refer to the quantity $`\sigma B`$ denoted above times the branching ratio for $`\pi _T^0gg`$. The dominant background, $`W^\pm \mathrm{jet}\mathrm{jet}`$, was calculated in two ways: firstly using the standard implementation of Pythia for the processes $`qgWq^{}`$ and $`q\overline{q}^{}Wg`$ with a minimum $`W`$ transverse momentum of 15 GeV/$`c`$; and secondly by interfacing the explicit “2 to 3” processes (e.g. $`q\overline{q}^{}Wgg`$, $`qgWq^{}g`$ and others) with Pythia. In the latter case, the showering scale for initial state radiation is the same as the factorization scale $`Q_{ISR}^2=m_W^2`$, while for final state radiation it is the same as the invariant mass of the two final state partons, $`Q_{FSR}^2=m_{jj}^2`$. Representative color flows are used to connect the initial state to the final state. The two separate calculations are found to be in excellent agreement. Jets were found using the clustering code provided in Pythia with a cell size of $`\mathrm{\Delta }\eta \times \mathrm{\Delta }\varphi =0.1\times 0.1`$, a cone radius $`R=0.7`$ and a minimum jet $`E_T`$ of 5 GeV. Cell energies were smeared using a calorimeter resolution $`\sigma _E`$ of $`0.5\sqrt{E(\mathrm{GeV})}`$. Missing transverse energy $`\text{ / }E_T`$ was estimated by taking the vector sum of the momenta of all clusters of energy found in the calorimeter with $`E_T>5`$GeV. 
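As an illustration of the detector response just described, the toy sketch below smears cluster energies with the quoted resolution $`\sigma _E=0.5\sqrt{E(\mathrm{GeV})}`$ and forms $`\text{ / }E_T`$ from the vector sum of clusters above 5 GeV; the cluster kinematics are invented for the example and are not taken from the simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def smear(E):
    """Calorimeter smearing with sigma_E = 0.5 * sqrt(E [GeV])."""
    return max(rng.normal(E, 0.5 * np.sqrt(E)), 0.0)

# Toy clusters: (E_T [GeV], phi [rad]); purely illustrative values.
clusters = [(52.0, 0.1), (41.0, 2.8), (18.0, 2.2), (4.0, -1.0)]

met_x = met_y = 0.0
for et, phi in clusters:
    et_s = smear(et)
    if et_s > 5.0:                      # only clusters above 5 GeV enter the sum
        met_x -= et_s * np.cos(phi)
        met_y -= et_s * np.sin(phi)

print("missing E_T = %.1f GeV" % np.hypot(met_x, met_y))
```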
In a previous study this was found to give a reasonable representation of the $`\text{ / }E_T`$ resolution of the DØ detector. Selected events were required to have an isolated electron, large missing energy, and two or more jets. The kinematic selections applied were: * Electron: $`E_T>20\mathrm{GeV}`$; pseudorapidity $`|\eta |<1.0`$; * Missing energy: $`\text{ / }E_T>20\mathrm{GeV}`$; * Two or more jets with $`E_T>20\mathrm{GeV}`$; and $`|\eta |<1.25`$, separated from the lepton by at least $`\mathrm{\Delta }R=0.7`$ * Leading jet $`E_T>40\mathrm{GeV}`$. Since the technipion was forced to decay into gluons, whose large color-charge leads to a high probability for final state radiation, the technipion mass was estimated by: $$m_{jj(j)}=\{\begin{array}{cc}\text{invariant mass}(jet_1,jet_2,jet_3)\hfill & \text{if }E_T(jet_3)>15\mathrm{GeV}\hfill \\ \text{invariant mass}(jet_1,jet_2)\hfill & \text{otherwise}\hfill \end{array}$$ This algorithm seeks to recombine some of the final state radiation. We have not worked extensively to optimize our algorithm, since the experimental situation will undoubtedly be more complicated than we have simulated. Our goal is to demonstrate that it is possible to achieve a better resolution than the naive dijet mass. Figure 2 shows that it gives a significantly improved technipion mass resolution compared with naively taking the invariant mass of the leading two jets, at the cost of introducing a high-side tail from initial-state radiation. The peak of the reconstructed mass is shifted downwards from its “true” value of 110 GeV by the cumulative effects of final state gluon radiation, fragmentation (particles emitted outside the cone), and muons and neutrinos within the jets. These effects also make the mass resolution much broader than the calorimetric energy resolution alone would imply. A similar mass estimation technique might be applicable to the $`b\overline{b}`$ invariant mass in such cases as $`Hb\overline{b}`$ searches. We note, however, that the requirement that there be displaced vertex tags within each of the candidate $`b`$ jets already rejects much of the radiative contamination in the $`b\overline{b}`$ case. This is not, of course, possible for an object decaying to two gluon jets. Requiring that the lepton and jets be central in pseudorapidity exploits the fact that the signal events will tend to be produced with larger center-of-mass scattering angles than the background. Figure 3(a) shows the distribution of $`m_{jj(j)}`$ (as defined above) jets for signal events (dotted), background (dashed) and their sum (solid) passing these criteria for a luminosity of $`2\mathrm{fb}^1`$. The particular kinematics of $`\rho _TW_L\pi _T`$ suggests other cuts that can discriminate signal from the $`W+\mathrm{jets}`$ background. The small $`Q`$-value for the $`\rho _T`$ decay causes the $`\pi _T`$ (and the $`W_L`$) to have low longitudinal and transverse momenta, and the jets from the technipion decay to be similar in energy and to have a large opening azimuthal angle $`\mathrm{\Delta }\varphi (jj)`$. These expectations were borne out by simulated distributions in these variables. Cutting on these variables then helps to suppress the $`Wjj`$ background to $`\rho _TW_L\pi _T`$. Consequently, we have taken the selected events in Fig. 3(a) and applied additional topological cuts: * $`35<p_T^{jj(j)}<65\mathrm{GeV}`$; * $`|p_L^{jj(j)}|<55\mathrm{GeV}`$; * $`(E_T(jet_1)E_T(jet_2))/(E_T(jet_1)+E_T(jet_2))<0.5`$; * $`\mathrm{\Delta }\varphi (jj)>90^{}`$. 
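A compact sketch of the $`m_{jj(j)}`$ estimator and of the topological selection listed above is given below. The jets are toy $`(E_T,\eta ,\varphi )`$ triples and the helper functions are ours, written only to make the prescription explicit; they are not the actual analysis code:

```python
import math

def four_vec(et, eta, phi):
    """Massless four-vector (E, px, py, pz) from E_T, eta, phi."""
    px, py, pz = et * math.cos(phi), et * math.sin(phi), et * math.sinh(eta)
    return (math.sqrt(px * px + py * py + pz * pz), px, py, pz)

def jjj_vectors(jets):
    """Leading three jets if the third has E_T > 15 GeV, otherwise two."""
    sel = jets[:3] if len(jets) >= 3 and jets[2][0] > 15.0 else jets[:2]
    return [four_vec(*j) for j in sel]

def m_jjj(jets):
    """Technipion mass proxy m_jj(j)."""
    E, px, py, pz = (sum(v[i] for v in jjj_vectors(jets)) for i in range(4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def passes_topology(jets):
    """Topological cuts listed above, applied to the same jj(j) system."""
    vecs = jjj_vectors(jets)
    px, py, pz = (sum(v[i] for v in vecs) for i in range(1, 4))
    et1, et2 = jets[0][0], jets[1][0]
    dphi = abs(jets[0][2] - jets[1][2])
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return (35.0 < math.hypot(px, py) < 65.0 and abs(pz) < 55.0
            and (et1 - et2) / (et1 + et2) < 0.5
            and dphi > 0.5 * math.pi)

# Jets given as (E_T [GeV], eta, phi), ordered by decreasing E_T (toy values).
jets = [(52.0, 0.4, 0.1), (41.0, -0.3, 2.8), (18.0, 0.9, 2.2)]
print(round(m_jjj(jets), 1), passes_topology(jets))
```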
The dijet transverse momentum $`p_T^{jj(j)}`$ and longitudinal momentum $`p_L^{jj(j)}`$ are calculated from the leading three (two) jets if the third jet has $`E_T`$ greater (less) than $`15\mathrm{GeV}`$, just as is the invariant mass $`m_{jj(j)}`$. The precise numerical values in these cuts of course depend on the technirho and technipion masses and their difference. An experimental search would need to re-optimize the cut values for each point in parameter space, as was in fact done in . The effects of the topological cuts are shown in Fig. 3(b). The signal-to-background at $`100\mathrm{GeV}`$ is improved from $`1:12`$ to $`1:5`$ by these cuts. A visible excess is apparent in this distribution, which corresponds to the $`\pi _T^0`$ mass. Also, the signal distribution peaks in a different mass bin than is expected for the background. Fig. 3(c) shows the corresponding invariant mass distribution after topological cuts for the $`Wjj`$ system, which corresponds roughly to the $`\rho _T`$ mass. Here the $`W`$ four-momentum was reconstructed from the lepton and $`\text{ / }E_T`$, taking the lower-rapidity solution in each case. Unfortunately, while the signal-to-background is also good in the peak region of this plot, the cuts have resulted in the signal and background shapes being similar. Unlike the previous figure a deviation from the expected background shape is not visible. To estimate the significance of the signal, we counted signal $`S`$ and background $`B`$ events within $`\pm 16`$ GeV of the peak. We find $`S=85`$ and $`B=1716`$ from Fig. 3(a), yielding $`S/B=.05`$ and $`S/\sqrt{B}=2.04`$. From Fig. 3(b), we find $`S=54`$ and $`B=467`$, with $`S/B=.12`$ and $`S/\sqrt{B}=2.51`$. Including the decays of the $`W`$ into muons, and assuming the same efficiency as for electrons, the significance of the signal distribution in Fig. 3(b) is increased to 3.55. Therefore, if two experiments collect $`2\mathrm{fb}^1`$ of data in Run II, then a 5 sigma deviation might be observed in the combined data sets. Additionally, the nearly degenerate $`\omega _T`$ may produce a $`\gamma gg`$ signal of comparable significance. In conclusion, we have shown that the low-scale technicolor signature $`\rho _TW\pi _T`$ can be discovered in Run II of the Tevatron for production rates as low as a few picobarns, even if the decay mode $`\pi _T^0gg`$ dominates. The research of JW is supported by the Fermi National Accelerator Laboratory, which is operated by Universities Research Association, Inc., under Contract No. DE–AC02–76CHO3000. We thank the Aspen Center for Physics for its hospitality while this work was carried out.
# An Efficient Molecular Dynamics Scheme for the Calculation of Dopant Profiles due to Ion Implantation ## I Introduction The principal reason for implanting ions<sup>*</sup><sup>*</sup>*Within this paper, *ion* is used to refer to the implanted species, and *atom* to refer to a particle of the target material; this has no implication to the charge state of either atom type. into silicon wafers is to dope regions within the substrate, and hence modify their electrical properties in order to create electronic devices. The quest for ever increasing processor performance demands smaller device sizes. The measurement and modeling of dopant profiles within these ultra shallow junction devices is challenging, as effects that were negligible at high implant energies become increasingly important as the implant energy is lowered. The experimental measurement of dopant profiles by secondary ion mass spectrometry (SIMS) becomes problematic for very low energy (less than 10 keV) implants. There is a limited depth resolution of measured profiles due to profile broadening, as the SIMS ion-beam produces ‘knock-on’s, and so leads to effects such as diffusion of dopants and mixing. The roughness and disorder of the sample surface can also convolute the profile, although this can be avoided to a large extent by careful sample preparation. The use of computer simulation as a method for studying the effects of ion bombardment of solids is well established. Binary collision approximation (BCA), ‘event-driven’ codes have traditionally been used to calculate such properties as ranges of implanted species and the damage distributions resulting from the collision cascade. In this model, each ion trajectory is constructed as a series of repulsive two-body encounters with initially stationary target atoms, and with straight line motion between collisions. Hence the algorithm consists of finding the next collision partner, and then calculating the asymptotic motion of the ion after the collision. This allows for efficient simulation, but leads to failure of the method at low ion energies. The BCA approach breaks down when multiple collisions (where the ion has simultaneous interactions with more than one target atom) or collisions between moving atoms become significant, when the crystal binding energy is of the same order as the energy of the ion, or when the time spent within a collision is too long for the calculation of asymptotic trajectories to be valid. Such problems are clearly evident when one attempts to use the BCA to simulate channeling in semiconductors; here the interactions between the ion and the target are neither binary nor collisional in nature, rather they occur as many simultaneous soft interactions which steer the ion down the channel. An alternative to the BCA is to use molecular dynamics (MD) simulation, which has long been applied to the investigation of ion bombardment of materials, to calculate the ion trajectories. The usefulness of this approach was once limited by its computational cost and the lack of realistic models to describe materials. With the increase in computational power, the development of efficient algorithms, and the production of accurate empirical potentials, it is now feasible to conduct realistic MD simulations. 
In the classical MD model, atoms are represented by point masses that interact via an empirical potential function that is typically a function of bond lengths and angles; in the case of Si a three-body or many-body potential, rather than a pair potential is required to model the stable diamond lattice and to account for the bulk crystal properties. The trajectories of atoms are obtained by numerical integration of Newton’s laws, where the forces are obtained from the analytical derivative of the potential function. Thus, MD provides a far more realistic description of the collision processes than the BCA, but at the expense of a greater computational requirement. Here we present a highly efficient MD scheme that is optimized to calculate the concentration profiles of ions implanted into crystalline silicon. The algorithms are incorporated into our implant modeling molecular dynamics code, REEDNamed for ‘Rare Event Enhanced Domain following’ molecular dynamics., which runs on many architectures either as a serial, or as a trivially parallel program. ## II Molecular Dynamics Model The basis of the molecular dynamics model is a collection of empirical potential functions that describe interactions between atoms and give rise to forces between them. In addition to the classical interactions described by the potential functions, the interaction of the ion with the electrons within the target is required for ion implant simulations, as this is the principle way in which the ion loses energy. This is accomplished via a phenomenological electronic stopping-power model. Other ingredients necessary to the computation are a description of the target material structure and thermal vibration within the solid. It is also necessary to define a criterion to decide when the ion has come to rest in the substrate. We terminate a trajectory when the *total* energy of the ion falls below 5 eV. This was chosen to be well below the displacement threshold energy of Si (around 20 eV). ### A Empirical Potential Functions Interactions between Si atoms are modeled by a many-body potential developed by Tersoff. This consists of Morse-like repulsive and attractive pair functions of interatomic separation, where the attractive component is modified by a many-body function that has the role of an effective Pauling bond order. The many-body term incorporates information about the local environment of a bond; due to this formalism the potential can describe features such as defects and surfaces, which are very different to the tetrahedral diamond structure. ZBL ‘pair specific’ screened Coulomb potentials are used to model the ion-Si interactions for As, B, and P ions. Where no ‘pair specific’ potential was available, the ZBL ‘universal’ potential has been used. This is smoothly truncated with a cosine cutoff between 107% and 147% of the sum of the covalent radii of the atoms involved; the cutoff distances were chosen as they give a screening function that approximates the ‘pair specific’ potentials for the examples available to us. The ZBL ‘universal’ potential is also used to describe the close-range repulsive part of the Tersoff Si-Si potential, as the standard form is not sufficiently strong for small atomic separations. The repulsive Morse term is splined to a shifted ZBL potential, by joining the two functions at the point where they are co-tangent. In the case of Si-Si interactions, the join is at an atomic separation of 0.69 Å, and requires the ZBL function to be shifted by 148.7 eV. 
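The co-tangent construction amounts to finding the separation at which the two potentials have equal slope and then shifting one of them so that their values also agree there. A minimal sketch is given below; the two stand-in functions are deliberately schematic and do not use the actual ZBL or Tersoff parameters:

```python
import numpy as np
from scipy.optimize import brentq

def cotangent_join(V_a, V_b, r_lo, r_hi, h=1.0e-5):
    """Return the separation where V_a and V_b have equal derivative, and
    the constant offset between them at that point (the shift to apply)."""
    dV = lambda V, r: (V(r + h) - V(r - h)) / (2.0 * h)
    r_join = brentq(lambda r: dV(V_a, r) - dV(V_b, r), r_lo, r_hi)
    return r_join, V_b(r_join) - V_a(r_join)

# Schematic stand-ins (eV, Angstrom) for a screened-Coulomb and a Morse-like
# repulsive term; the real code uses the actual ZBL and Tersoff forms.
V_screened = lambda r: 400.0 * np.exp(-4.0 * r) / r
V_morse    = lambda r: 1800.0 * np.exp(-2.5 * r)

print(cotangent_join(V_screened, V_morse, 0.3, 1.5))
```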
The increase in the value of the short-range repulsive potential compensates for the attractive part of the Tersoff potential, which is present even at short-range. ### B Inelastic Energy Loss The Firsov model is used to describe the loss of kinetic energy from the ion due to inelastic collisions with target atoms. We implement this using a velocity dependent pair potential, as derived by Kishinevskii. This gives the force between atoms $`i`$ and $`j`$ as: $$𝐅_{ij}=\frac{2^{1/3}\mathrm{}}{2\pi a_\text{B}}(𝐯_j𝐯_i)\left(Z_1^2\times I\left(\frac{Z_1^{1/3}\alpha R}{a}\right)+Z_2^2\times I\left(\frac{Z_2^{1/3}(1\alpha )R}{a}\right)\right)$$ (1) where: $`I(X)={\displaystyle _X^{\mathrm{}}}{\displaystyle \frac{\chi ^2(x)}{x}}𝑑x\text{, and }\alpha =\left(1+\left({\displaystyle \frac{Z_2}{Z_1}}\right)^{1/6}\right)^1`$ (2) and $`\chi (x)`$ is a screening function, $`Z`$ is atomic number ($`Z_1>Z_2`$), $`R`$ is atomic separation, and $`a=(9\pi ^2/128)^{1/3}a_\text{B}`$. For consistency with the ion-Si interactions, we use the ZBL ‘universal’ screening function within the integral; there are no fitted parameters in this model. We have found that it is necessary to include energy loss due to inelastic collisions, and energy loss due to electronic stopping (described below) as two distinct mechanisms. It is not possible to assume that one, or other, of these processes is dominant and *fit* it to model all energy loss for varying energies and directions. ### C Electronic Stopping Model A new model that involves both global and local contributions to the electronic stopping is used for the electronic energy loss. This modified Brandt-Kitagawa model was developed for semi-conductors and contains only one fitted parameter per ion species, for all energies and incident directions. We believe that by using a realistic stopping model, with the minimum of fitted parameters, we obtain a greater transferability to the modeling of implants outside the fitting set. This should be contrasted to many BCA models which require completely different models for different ion species or even for different implant angles for the same ion species, and that contain several fitted parameters per species. Our model has been successfully used to describe the implant of As, B, P, and Al ions with energies in the sub MeV range into crystalline Si in the $``$100$``$, $``$110$``$, and non-channeling directions, and also into amorphous Si. While initially developed for use in BCA simulations, the only modification required to the model for its use in MD is to allow for the superposition of overlapping charge distributions, due to the fact that the ion is usually interacting with more than one atom at a time. The one fitting parameter is $`r_s^0`$, the ‘average’ one electron radius of the target material, which is adjusted to account for oscillations in the $`Z_1`$ dependence of the electronic stopping cross-section. ### D Structure of the Target Material For the calculations presented here, the target is crystalline Si with a surface amorphous layer. The amorphous structure was obtained from a simulation of repeated radiation damage and annealing, of an initially crystalline section of material. Thermal vibrations of atoms are modeled by displacing atoms from their lattice sites using a Debye model. We use a Debye temperature of 519.0 K for Si obtained by recent electron channeling measurements. This gives an rms thermal vibrational amplitude in one dimension of 0.0790 Å at 300.0 K. 
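The quoted one-dimensional rms amplitude follows from the standard Debye-model expression for the mean-square displacement; the formula below is not spelled out in the paper, but with a Debye temperature of 519 K it reproduces the 0.0790 Å figure at 300 K:

```python
import numpy as np
from scipy.integrate import quad

hbar, k_B, amu = 1.054572e-34, 1.380649e-23, 1.660539e-27   # SI units

def rms_1d(theta_D, T, mass_amu):
    """One-dimensional rms thermal displacement (Angstrom) in the Debye model."""
    M, x = mass_amu * amu, theta_D / T
    phi, _ = quad(lambda t: t / np.expm1(t) if t > 0 else 1.0, 0.0, x)
    u2 = 3.0 * hbar ** 2 / (M * k_B * theta_D) * (0.25 + phi / x ** 2)
    return np.sqrt(u2) * 1.0e10

print(round(rms_1d(519.0, 300.0, 28.0855), 4))   # ~0.079 Angstrom for Si at 300 K
```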
Note, we do not use the Debye temperature as a fitting parameter in our model, as is often done in BCA models. The thermal velocity of the atoms is unimportant as it is so small compared to the ion velocity, and is set to zero. At present there is no accumulation of damage within our simulations, as we wish to verify the fundamental model with the absolute minimum of parameters that can be fit. At a later date we will incorporate a statistical damage model into our simulations in a manner similar to that used in BCA codes. We also intend to include the capability of using amorphous, or polycrystalline targets in our simulations. ## III Efficient Molecular Dynamics Algorithms During the time that MD has been in use, many algorithms have been developed to enhance the efficiency of simulations. Here we apply a combination of methods to increase the efficiency of the type of simulation that we are interested in. We incorporate both widely used methods, which are briefly mentioned below, and new or lesser known algorithms for this specific type of simulation which we describe in greater detail. ### A Basic Algorithms We employ neighbor lists to make the potential and force calculation O($`N`$), where $`N`$ is the number of particles. Coarse grained cells are used in the construction of the neighbor list; this is combined with a Verlet neighbor list algorithm to minimize the size of the list. Atoms within 125% of the largest interaction distance are stored in the neighbor list, which is updated only when the relative motion of atoms is sufficient for interacting neighbors to have changed. ### B Timestep selection The paths of the atoms are integrated using Verlet’s algorithm, with a variable timestep that is dependent upon both kinetic and potential energy of atoms. For high energy simulations the potential energy as well as the velocity of atoms is important, as atoms may be moving slowly but have high, and rapidly changing, potential energies during impacts. The timestep is selected using: $$\mathrm{\Delta }t_n=\frac{C_{DIS}}{\sqrt{\begin{array}{c}\mathrm{max}\\ 1iN\end{array}\left(\frac{2\times \left[KE_i+\mathrm{max}(0,PE_i)\right]}{M_i}\right)}}$$ (3) where $`KE_i`$, $`PE_i`$ and $`M_i`$ are the kinetic energy, potential energy and mass respectively of atom $`i`$, and $`C_{DIS}`$ is a constant with a value of 0.10 Å. Away from hard collisions, only the kinetic energy term is important, and the timestep is selected to give the fastest atom a movement of $`C_{DIS}`$ in a single timestep. When the timestep is increasing, it is limited by: $$\mathrm{\Delta }t_n^{}=\mathrm{min}(1.05\times \mathrm{\Delta }t_{n1},\frac{3}{4}\mathrm{\Delta }t_{n1}+\frac{1}{4}\mathrm{\Delta }t_n)$$ (4) to prevent rapid oscillations in the size of the timestep, and the maximum timestep is limited to 2.0 fs. The timestep selection scheme was checked to ensure that the total energy in a full (i.e., without the modifications described below) MD simulation was well conserved for any single ion implant with no electronic stopping; e.g. in the case of a non-channeling (10 tilt and 22 rotation) 5 keV As ion into a 21168 atom Si{100} target, the energy change was 3.6 eV (0.004%) during the 250 fs it took the ion to come to rest. ### C Domain following Even with the computation resources available today it is infeasible to calculate dopant profiles by full MD simulation. Although the method is O($`N`$) in the number of atoms involved, the computational requirements scale extremely quickly with the ion energy. 
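For reference, the timestep rule of Eqs. (3) and (4) above can be written in a few lines. The sketch below assumes a list of (kinetic energy, potential energy, mass) tuples in consistent internal units; the constants are the values quoted in the text and the demonstration numbers are arbitrary:

```python
C_DIS  = 0.10     # maximum displacement per step (0.10 Angstrom in the text)
DT_MAX = 2.0      # absolute cap on the timestep (2.0 fs in the text)

def next_timestep(atoms, dt_prev):
    """Timestep selection following Eqs. (3) and (4); `atoms` is a list of
    (kinetic_energy, potential_energy, mass) tuples in consistent units."""
    v_eff = max((2.0 * (ke + max(0.0, pe)) / m) ** 0.5 for ke, pe, m in atoms)
    dt = C_DIS / v_eff                            # Eq. (3)
    if dt > dt_prev:                              # growing step: apply Eq. (4)
        dt = min(1.05 * dt_prev, 0.75 * dt_prev + 0.25 * dt)
    return min(dt, DT_MAX)

# A fast ion plus two slow recoils (illustrative numbers only).
print(next_timestep([(5000.0, 10.0, 75.0), (3.0, -4.5, 28.0), (1.0, 0.2, 28.0)], 0.5))
```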
The cost of the simulation can be estimated as the number of atoms in the system multiplied by the number of timesteps required. Consider the case of an ion, subject to an energy loss proportional to its velocity, $`v(t)`$, which is then given by $`v(t)=u\mathrm{exp}(\alpha t)`$ where $`u`$ is its initial velocity and $`\alpha `$ is the loss coefficient. Each dimension of the system must scale approximately as the initial ion velocity, $`u`$, to fully contain an ion-path. If the timestep size is chosen so that the maximum distance moved by any particle in a single step is constant, the number of timesteps is approximately proportional to the ion distance. Hence the method is roughly O($`u^4`$). Although it is possible to compute a few trajectories at ion energies of up to 100s of keV, the calculation of the thousands necessary to produce statistically reliable dopant profiles is out of the question. Therefore, we have concentrated on developing a restricted MD scheme which is capable of producing accurate dopant profiles with a much smaller computational overhead. As we are only concerned with the path of the implanted ion, we only need to consider the region of silicon immediately surrounding the ion. We continually create and destroy silicon atoms, to follow the domain of the substrate that contains the ion. Material is built in slabs one unit cell thick to ensure that the ion is always surrounded by a given number of cells on each side. Material is destroyed if it is outside the domain defined by the ion position and the domain thickness. In this scenario, the ion sees the equivalent of a complete crystal, but primary knock-on atoms (PKAs) and material in the wake of the ion path behave unphysically, due to the small system dimensions. Hence we have reduced the cost of the algorithm to O($`u`$), at the expense of losing information on the final state of the Si substrate. This algorithm is similar to the ‘translation’ approach used in the MDRANGE computer code developed by Nordlund. The relationship between the full and restricted MD approaches is shown in Fig. 1. Fig. 2 illustrates a single domain following trajectory. The ion is initially above a semi-infinite volume that is the silicon target. As the ion approaches the surface, atoms begin to be created in front of it, and destroyed in its wake. This process is continued until the ion comes to rest at some depth in the silicon substrate. Several thousand of such trajectories are combined to produce the depth profile of implanted ions. ### D Moving Atom Approximation This was first introduced by Harrison to increase the efficiency of ion sputtering yield simulations. In this scheme atoms are divided into two sets; those that are ‘on’ have their positions integrated, and those that are ‘off’ are stationary. At the start of the simulation, only the ion is turned on, and is the only atom to have forces calculated and to be integrated. Some of the ‘off’ atoms will be used in the force calculations and will have forces assigned to them. If the resultant force exceeds a certain threshold, the atom is turned on and its motion is integrated. The simulation proceeds in this way with more and more atoms having their position integrated as energy becomes dispersed throughout the system. We use two thresholds in our simulation; one for atoms interacting directly with the ion, and one for atom-atom interactions. 
We are, of course, mostly concerned with generating the correct motion for the ion, so the ion-atom interactions are of the most critical and require a lower threshold than the atom-atom interactions. In fact, for any reasonable threshold value, almost any ion-atom interaction will result in the atom being turned on, due to the large ion energy. Hence the ion-atom threshold is set to zero in these simulations, as adjusting the value gives no increase in efficiency. In the case of the atom-atom threshold, we estimate a reasonable value by comparison to simulations without the moving atom approximation (MAA). Smith et al. found a force threshold of $`1.12\times 10^9`$ N for both atom-atom and ion-atom interactions gave the correct sputtering yield (when compared to simulations without the MAA) in the case of 1 keV Ar implant into Si. We have found a larger value ($`8.0\times 10^9`$ N) gives the correct dopant profile, when compared to a simulations without the approximation. Our ability to use a larger value is due to two reasons. The motion of atoms not directly interacting with the ion only has a secondary effect on its motion by influencing the position of directly interacting atoms, so small errors in the positions of these atoms has little consequence. Also, by dividing the interactions into two sets, we do not have to lower the threshold to give the correct ion-atom interactions. ### E Pair Potential Approximation and Recoil Interaction Approximation While we use a many-body potential to describe a stable silicon lattice for low energy implants, this introduces a significant overhead to our simulations. For higher ion velocities, we do not need to use such a level of detail. A pair potential is sufficient to model the Si-Si interactions, as only the repulsive interaction is significant. Also, as the lattice is built at a metastable point with respect to a pair potential, with atoms initially frozen due to the MAA, and the section of material is only simulated for a short period of time, stability is not important. Hence, at a certain ion velocity we switch from the complete many-body potential to a pair potential approximation (PPA) for the Si-Si interactions. This is achieved in our code by setting the many-body parameter within the Tersoff potential to its value for undistorted tetrahedral Si, and results in a Morse potential splined to a screened coulomb potential. We make a further approximation for still higher ion energies, where only the ion-Si interactions are significant in determining the ion path. For ion velocities above a set threshold we calculate only ion-Si interactions. This approximation, termed the recoil interaction approximation (RIA), brings the MD scheme close to many BCA implementations. The major difference that exists between the two approaches is that the ion path is obtained by integration, rather than by the calculation of asymptotes, and that multiple interactions are, by the nature of the method, handled in the correct manner. We have determined thresholds of 90.0 eV/$`m_\text{u}`$ and 270.0 eV/$`m_\text{u}`$ for the PPA, and RIA, respectively are sufficiently high that both low and high energy calculated profiles are unaffected by their use. As the thresholds are based on the ion velocity, a single high energy ion simulation will switch between levels of approximation as the ion slows down and will produce the correct end of range behavior. ## IV Rare Event Algorithm A typical dopant concentration profile in crystalline silicon, as illustrated in Fig. 
3, has a characteristic shape consisting of a near-surface peak followed by an almost exponential decay over some distance into the material, with a distinct end of range distance. The concentration of dopant in the tail of the profile is several orders of magnitude less than that at the peak. Hence if we wish to calculate a statistically significant concentration at all depths of the profile we will have to run many ions that are stopped near the peak for every one ion that stops in the tail, and most of the computational effort will not enhance the accuracy of the profile we are generating. In order to remove this redundancy from our calculations, we employ an ‘atom splitting’ scheme to increase the sampling in the deep component of the concentration profile. Every actual ion implanted is replaced by several virtual ions, each with an associated weighting. At certain *splitting depths* in the material, each ion is replaced by two ions, each with a weighting of half that prior to splitting. Each split ion trajectory is run separately, and the weighting of the ion is recorded along with its final depth. As the split ions see different environments (material is built in front of the ion, with random thermal displacements), the trajectories rapidly diverge from one another. Due to this scheme, we can maintain the same number of virtual ions at any depth, but their weights decrease with depth. Each ion could of course be split into more than two at each depth, with the inverse change in the weightings, but for simplicity and to keep the ion density as constant as possible we work with two. To maximize the advantages of this scheme, we dynamically update the splitting depths. The correct distribution of splitting depths is obtained from an approximate profile for the dopant concentration. The initial profile is either read in (e.g. from SIMS data), or estimated from the ion type, energy and incident direction using a crude interpolation scheme based on known depths and concentrations for the peak and tail. Once the simulation is running, the profile and the splitting depths are re-evaluated at intervals. The algorithm to determine the splitting depths from a given profile is illustrated in Fig. 3. At the start of the simulation, we specify the number of orders of magnitude, $`M`$, of change in the concentration of moving ions over which we wish to reliably calculate the profile. We split ions at depths where the total number of ions (ignoring weighting) becomes half of the number of actual implanted ions. Hence we will use $`N`$ splitting depths, where $`N`$ is the largest integer $`M\times \mathrm{log}_210`$. The splitting depths, $`d_i`$ ($`1iN`$), are then chosen such that: $$_0^{d_i}C(x)𝑑x=(1(\frac{1}{2})^i)\times _0^{\mathrm{}}C(x)𝑑x$$ (5) where $`C(x)`$ is the concentration of stopped ions (i.e., the dopant concentration) at depth $`x`$. Although we are using an approximate profile from few ions to generate the splitting depths, the integration is a smoothing operation and so gives good estimates of the splitting depths. To minimize the storage requirements due to ion splitting, each real ion is run until it comes to rest, and the state of the domain is recorded at each splitting depth passed. The deepest split ion is then run, and further split ions are stored if it passes any splitting depths. This is repeated until all split ions have been run, then the next real ion is started. 
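A minimal sketch of how the splitting depths of Eq. (5) can be obtained from an approximate stopped-ion profile is given below; the profile used here is a made-up near-surface peak plus an exponential tail, tabulated on a uniform depth grid:

```python
import numpy as np

def splitting_depths(depth, conc, n_split):
    """Depths d_i of Eq. (5): d_i encloses a fraction 1 - 2**(-i) of the
    integral of the approximate concentration profile C(x)."""
    cdf = np.cumsum(conc)          # proportional to the integral on a uniform grid
    cdf /= cdf[-1]
    targets = 1.0 - 0.5 ** np.arange(1, n_split + 1)
    return np.interp(targets, cdf, depth)

# Toy profile: near-surface peak plus a roughly exponential channeling tail.
x = np.linspace(0.0, 400.0, 801)                      # depth (arbitrary units)
c = np.exp(-0.5 * ((x - 20.0) / 10.0) ** 2) + 0.05 * np.exp(-x / 80.0)
print(splitting_depths(x, c, 16).round(1))            # 16 depths for 5 decades
```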
Hence the maximum we ever need to store is one domain per splitting depth (i.e., 16 domains when splitting over 5 orders of magnitude). ## V Simulation Details All simulations were run with a Si{100} target at a temperature of 300 K. A surface amorphous layer of one, or three unit cells thickness was used. Dopant profiles were calculated for As, B, P, and Al ions; in each case it was assumed that only the most abundant isotope was present in the ion beam. The direction of the incident ion beam is specified by the angle of tilt, $`\theta ^{}`$, from normal and the azimuthal angle $`\varphi ^{}`$, as ($`\theta `$,$`\varphi `$). The incident direction of the ions was either (0,0), i.e. normal to the surface ($``$100$``$ channeling case), (7–10,0–30) (non-channeling), or (45,45) ($``$110$``$ channeling), and a beam divergence of 1.0 was always assumed. Simulations were run for 1,000 ions, with the splitting depths updated every 100 ions. A domain of 3$`\times `$3$`\times `$3 unit cells was used and the profile was calculated over either 3, or 5 orders of magnitude change in concentration. The simulations were run on pentium pro workstations running the Red Hat Linux operating system with the GNU g77 Fortran compiler, or SUN Ultra-sparc workstations with the SUN Solaris operating system and SUN Fortran compiler. The running code typically requires about 750K of memory. ## VI Results and Discussion Two sets of results are presented; we first demonstrate the effectiveness and stability of the rare event enhancement scheme, and then give examples of data produced by the simulations and compare to SIMS data. Example timings from simulations are also given. A more extensive set of calculated profiles will be published separately . ### A Performance of the Rare Event Algorithm An example of the evolution of splitting depths during a simulation is shown in Fig. 4, for the case of non-channeling 5 keV As implanted into Si{100}. The positions of the splitting depths near the peak stabilize quickly. Splitting depths near the tail take far longer to stabilize, as these depend on ions that channel the maximum distance into the material. Although atom splitting enhances the number of (virtual) ions that penetrate deep into the material, the occurrence of an ion that will split to yield ions at these depths is still a relatively rare event. The fact that all splitting depths do stabilize is also an indication that we have run enough ions to generate good statistics for the entire profile. The paths of 5 keV As ions implanted at normal incidence into Si{100} are shown in Fig. 5, with the number of splittings shown by the line shading. The 1,000 implanted real ions were split to yield a total of 19,270 virtual ions. The paths taken by 27 split ions produced from the first real ion of this simulation, and the resulting distribution of the ion positions are shown in Fig. 6. The final ions are the result of between 3 and 6 splittings, depending upon the range of each trajectory. This is typical of the distribution of splittings for one real ion; the final depths of ions are not evenly distributed over the entire ion range, but are bunched around some point within this range. This reflects how the impact position of the ion and collisions during its passage through the amorphous layer affect its ability to drop into a channel once in the crystalline material. The weighting of the second 500 of the ions (after the splitting depths had stabilized) is plotted against final depth in Fig. 7 (note the log scale). 
We have estimated the uncertainty in the calculated dopant profiles in order to judge the increase in efficiency obtained through the use of the rare event enhancement scheme. The uncertainty was estimated by dividing the final ion depths into 10 sets. A depth profile was calculated from each set using a histogram of 100 bins, with constant bin size. A reasonable measure of the uncertainty is the standard deviation of the distribution of the 10 concentrations for each bin. Fig. 8 shows calculated dopant profiles from 1,000 real ions for the case of 2 keV As at (7,0) into Si{100}, obtained with and without atom splitting over five orders of magnitude. The profiles are plotted with the uncertainty represented by the size of error bars; the length of each error bar corresponds to half the standard deviation of concentrations in that bin. The uncertainty is constant in the case of the profile obtained with the rare event scheme, whereas the profile obtained without the scheme is only reliable over one order of magnitude. Timings from these simulations, and a simulation with splitting to three orders of magnitude are given in Table I. From these timings, we can estimate the efficiency gain due to the rare event algorithm. We decrease the time required by a factor of 89 in the case of calculating a profile to three orders of magnitude, and by a factor of 886 when calculating a profile over 5 orders of magnitude, compared to the estimated time requirements without rare event enhancement. The gain in efficiency increases exponentially with the number of orders of magnitude in concentration over which we wish to calculate the profile. ### B Comparison of Profiles to SIMS Data The remaining figures show the calculated concentration profile of B, As, and P ions for various incident energies and directions. Profiles were generated from a histogram of 100 bins, using adaptive bin sizes; the final ion depths were sorted and the same number of virtual ions assigned to each bin. No other processing, or smoothing of the profiles was done. Also shown are low dose ($``$ $`10^{13}`$ ions/cm<sup>2</sup>) SIMS data; for comparison, all profiles were scaled to an effective dose of $`10^{12}`$ ions/cm<sup>2</sup>. We have also examined Al ion implants, but were unable to match calculated profiles to the available SIMS data for a physically reasonable parameter value in our electronic stopping model. This may be due to one or more of the following reasons; Al is the only metal that we are implanting; the Al-Si interaction is the only interaction for which we do not have a pair specific ZBL potential; we only have a very limited set of SIMS data to compare to. In the case of the low energy ($``$ 10 keV) implants, we compare to SIMS data obtained with a thin and well controlled surface layer; here we assume one unit cell thickness of surface disorder in our simulations. For the other cases considered here, the surface was less well characterized; we assume three unit cells of disorder at the surface, as this is typical of implanted Si. For the low energy implants, we have calculated profiles over a change of five orders of magnitude in concentration; for the higher energy implants we calculate profiles over 3 orders of magnitude. The results of the REED calculations show good agreement with the experimental data. In the case of the low energy implants, the SIMS profile is only resolved over two orders of magnitude in some cases, while we can calculate the profile over five orders of magnitude. 
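The adaptive-bin profile described above (equal numbers of virtual ions per bin, with the concentration in a bin given by its summed weight divided by the bin width) can be sketched as follows; the input arrays here are randomly generated stand-ins for the recorded final depths and splitting weights:

```python
import numpy as np

def adaptive_profile(depths, weights, n_bins=100):
    """Depth profile with adaptive bins: same number of virtual ions per bin."""
    order = np.argsort(depths)
    d, w = np.asarray(depths)[order], np.asarray(weights)[order]
    edges = np.linspace(0, len(d), n_bins + 1).astype(int)
    centres, conc = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        width = max(d[hi - 1] - d[lo], 1.0e-12)
        centres.append(0.5 * (d[lo] + d[hi - 1]))
        conc.append(w[lo:hi].sum() / width)
    return np.array(centres), np.array(conc)

# Stand-in data: 20,000 virtual ions with weights 1, 1/2, ..., 1/32.
rng = np.random.default_rng(0)
depths  = rng.exponential(50.0, size=20000)
weights = 0.5 ** rng.integers(0, 6, size=20000)
centres, conc = adaptive_profile(depths, weights)
```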
We give timing results from several simulations, as examples of the cpu requirements of our implementation of the model. Note, the results presented here are from a functional version of REED, but the code has yet to be fully optimized to take advantage of the small system sizes (around 200 atoms). Timing data are given in Table II, for profiles calculated over five orders of magnitude on a single pentium pro. Run times are dependent on the ion type and its incident direction, but are most strongly linked to the ion velocity. We estimate a runtime of approximately 30 hours per $`\sqrt{\text{keV}/m_\text{u}}`$, for this version of our code. ## VII Conclusions In summary, we have developed a restricted molecular dynamics code to simulate the ion implant process and directly calculate ‘as implanted’ dopant profiles. This gives us the accuracy obtained by time integrating atom paths, whilst obtaining an efficiency far in excess of full MD simulation. There is very good agreement between the MD results and SIMS data for B, P, and As implants. We are unable to reproduce published SIMS data for Al implants with our current model. This discrepancy is currently being investigated; our findings will be published separately. We can calculate the dopant profile to concentrations one or two orders of magnitude below that measurable by SIMS for the channeling tail of low dose implants. The scheme described here gives a viable alternative to the BCA approach. Although it is still more expensive computationally, it is sufficiently efficient to be used on modern desktop computer workstations. The method has two major advantages over the BCA approach: (i) Our MD model consists only of standard empirical potentials developed for bulk Si and for ion-solid interactions. The only fitting is in the electronic stopping model, and this involves *only one* parameter per ion species. This should be contrasted to the many parameters that have to be fit in BCA models. We believe that by using physically based models for all aspects of the program, with the minimum of fitting parameters, we obtain good transferability to the modeling of implants outside of our fitting set. (ii) The method does not break down at the low ion energies necessary for production of the next generation of computer technology; it gives the correct description of multiple, soft interactions that occur both in low energy implants, and high energy channeling. We are currently working to fully optimize the code, in order to maximize its efficiency. The program is also being extended to include a model for ion induced damage, amorphous and polycrystalline targets, and to model cluster implants such as BF<sub>2</sub>. We also note that the scheme can be easily extended to include other ion species such as Ge, In and Sb, and substrates such as GaAs and SiC. ## VIII Acknowledgments We gratefully acknowledge David Cai and Charles Snell for providing us with their insight during many discussions, and Al Tasch and co-workers for providing preprints of their work and SIMS data. This work was performed under the auspices of the United States Department of Energy. ## IX References ## X Figures ## XI Tables
# New Generation Atmospheric Cherenkov Detectors ## 1 Introduction The EGRET detector on-board of CGRO has stimulated the field of high energy astrophysics by the detection of 271 $`\gamma `$-ray sources Hartmann99 in the energy range between 10 MeV - 20 GeV. In parallel, the success of the ground-based imaging atmospheric Cherenkov technique, operating between 250 GeV - 50 TeV, has demonstrated that $`\gamma `$-ray astronomy can be expanded well beyond GeV energies. A pioneering experiment, using the Whipple Observatory 10 m imaging telescope, achieved the first unequivocal detection of the Crab Nebula Weekes89 , a plerion or pulsar-powered synchrotron nebula. More of a surprise was the discovery of TeV emission from a subclass of active galactic nuclei (AGNs), the so-called blazars: Mrk 421 Punch92 , Mrk 501 Quinn96 and 1ES 2344+514 Catanese98 . Although EGRET has reported 66 high-confidence (and 27 low-confidence) blazar identifications with redshifts between z = 0.0018 - 2.286, the blazars at TeV energies are all nearby with redshifts between z = 0.031 - 0.044. It is tempting to speculate Stecker97 that TeV $`\gamma `$-rays are absorbed by the extragalactic IR background when traveling distances comparable to most EGRET blazars. However, whether or not the non-detection of most EGRET blazars at TeV energies can be attributed to their interaction with IR background photons ($`\gamma \gamma e^+e^{}`$), or is simply due to a spectral break at the source, remains an open question, because of the lack of data above 20 GeV. Contrary to earlier predictions Stecker93 for the IR background density, recent TeV $`\gamma `$-ray observations show that the energy spectra of Mrk 421 and Mrk 501 extend beyond 10 TeV Aharon97 ; Samuelson98 ; Krennrich99 , the highest energies detected from any AGN. This is still consistent with detailed models Primack98 for the IR density. Apart from constraining IR background models, the high energy $`\gamma `$-rays from nearby blazars can also provide important data to test $`\gamma `$-ray production models. In particular the possibility of acceleration of hadronic cosmic rays in jets of blazars can be studied. Extending the measurements to 100 TeV and down to 20 GeV could provide the crucial evidence for the understanding of particle acceleration in jets. Also the fact that the $`\gamma `$-ray peak emission occurs at vastly different energies, e.g., at a few GeV for 3C 279 Wehrle98 and at a few hundred GeV for Mrk 501 Catanese97 (a weak detection by EGRET has been reported just recently Kataoka99 ) emphasizes the importance of a big energy coverage in blazar studies. A wide range of energies from 20 GeV up to 100 TeV with a good energy resolution is desirable to study spectral features. The interest in galactic $`\gamma `$-ray astronomy was ignited by the detection of two additional plerions PSR 1706-44 Kifune95 and Vela Yoshikoshi98 and even more so, by a detection of a shell-type supernova remnant, SN 1006 Tanimori98a by the CANGAROO telescope. One of the primary motivations for galactic $`\gamma `$-ray astronomy is the understanding of supernova shock acceleration and the origin of the galactic cosmic rays. The search for their most promising acceleration sites, in shell-type supernova remnants Drury94 , requires a good sensitivity for extended Buckley98 $`\gamma `$-ray emission. Future atmospheric Cherenkov detectors will also address the sensitivity for extended sources like the galactic plane and shell-type supernova remnants. 
GeV and TeV $`\gamma `$-ray measurements have caused a wide scientific interest. It is crucial for the understanding of the $`\gamma `$-ray emission processes to close the energy gap between 20 GeV - 250 GeV and to advance the sensitivity between 250 GeV - 100 TeV. The gap between 20 GeV and 250 GeV can be explored with future satellite and ground-based instruments. Both techniques are currently beeing investigated: the satellite-based GLAST Glast96 detector with a relatively small collection area but a wide field of view, and the ground-based atmospheric Cherenkov technique with a small field of view but a large collection area. The energies above 250 GeV will remain the domain of the ground-based detectors. In this overview of atmospheric Cherenkov detectors, the detection principle and their anticipated sensitivity, energy range and angular resolution are discussed. In Section 2 important design considerations for atmospheric Cherenkov detectors are briefly outlined. In Section 3 the individual detectors are described. Section 4 gives a summary of the anticipated performance of the various instruments emphasizing their individual strengths. ## 2 Design Considerations The design of a new instrument is driven by science. From the previous section it is evident that the need for a wide range in energy (20 GeV - 100 TeV), wide field-of-view and spectroscopic capabilities puts many requirements on future detectors. The detection principle of $`\gamma `$-rays above 20 GeV from the ground is based on the measurement of Cherenkov light from the electromagnetic atmospheric cascade initiated by the $`\gamma `$-ray primary. The Cherenkov light emitted from the secondary $`\mathrm{e}^\pm `$ over a range of altitudes (6 - 20 km atmospheric height) is focused onto an area of 200 - 300 m diameter at ground defining the light pool. The collection area, the area from which a shower can be detected, is in the order of 70,000 $`\mathrm{m}^2`$, although the area also depends on the energy and detailed detector design. The intrinsically large collection area of atmospheric Cherenkov detectors (order of $`10^5`$ larger than EGRET’s collection area) would ideally extend the sensitivity above 20 GeV, where EGRET’s measurements are limited by statistics. The large collection area also provides the means for the detection of low fluxes in the 1 TeV - 100 TeV regime with a good sensitivity. In particular large zenith angle observations, which provide an even larger collection area (several 100,000 $`\mathrm{m}^2`$), is efficient for the detection of the highest energies above 10 TeV Sommers87 ; Krennrich97 ; Tanimori98b . The following issues need to be addressed by future detectors to reach the objectives outlined in the previous section: a low energy threshold of 20 GeV, improved sensitivity between 200 GeV - 100 TeV, high angular resolution (few arcminutes) and good energy resolution (10%). ### 2.1 Low Energy Threshold The detection of low energy $`\gamma `$-ray air showers requires triggering on faint Cherenkov light flashes, e.g., 2.5 photons per $`\mathrm{m}^2`$ from a 50 GeV $`\gamma `$-ray shower<sup>1</sup><sup>1</sup>1This value comes from Paré 1996 and is valid for the Thémis site. The absolute value depends on the altitude and atmospheric conditions at the observational site.. 
A limitation arises from the night sky background light (NSB $``$ 2–4 $`\times 10^{12}\mathrm{photons}/\mathrm{m}^2/\mathrm{s}/\mathrm{sr}`$) through its fluctuations $`\sqrt{\mathrm{NSB}}`$: however, since the Cherenkov pulses are extremely short (5 - 10 ns), the $`\mathrm{NSB}`$ can be greatly reduced (e.g., 0.8–1.6 $`\mathrm{photons}/\mathrm{m}^2`$ within 5 ns for a photosensitive detector with $`0.6^{}`$ sensitive diameter<sup>2</sup><sup>2</sup>2The minimum solid angle acceptance required for an atmospheric Cherenkov detector is $`0.6^{}`$ to cover the angular extent over which the Cherenkov light from a sub-TeV $`\gamma `$-ray shower from a point source occurs, determined by its intrinsic angular size and shower height fluctuations. However, the light distribution of a $`\gamma `$-ray shower image is elliptical and peaked and can be described by its width and length with a scale of $`0.15^{}\times 0.3^{}`$. With imaging telescopes the aperture for the triggering pixels can be reduced through the use of a fine pixellation camera, effectively reducing the $`\mathrm{NSB}`$.). Air showers can only be detected if the Cherenkov signal exceeds several times $`\sqrt{\mathrm{NSB}}`$. This can be quantified by the signal to noise ratio, defined as the number of Cherenkov photons over the night sky fluctuation. A low energy threshold detector requires the optimization of the signal to noise ratio, e.g., by minimizing the solid angle acceptance $`\mathrm{\Omega }`$ of the triggering photodetectors or by shortening the coincidence time window for the Cherenkov pulses in different photodetectors, effectively reducing the $`\sqrt{\mathrm{NSB}}`$ contribution. For imaging telescopes the solid angle acceptance covered by the minimum required photodetectors (typically photomultipliers) should be comparable to the $`\gamma `$-ray image size scale. The time window $`\tau `$ should be close to the intrinsic pulse width of the Cherenkov flash. In order to lower the energy threshold of atmospheric Cherenkov detectors (presently E $`>250\mathrm{GeV}`$) to 50 GeV (20 GeV), the number of Cherenkov photons collected has to be substantially increased, e.g. through increasing the mirror area $`\mathrm{A}_{\mathrm{mirror}}`$. The energy threshold scales as $`\mathrm{E}_{\mathrm{thres}}\propto \sqrt{\mathrm{NSB}\mathrm{\Omega }\tau /\mathrm{A}_{\mathrm{mirror}}}`$. Simple extrapolation from existing instruments, for example the Whipple 10 m telescope which operates at a high signal to noise ratio, suggests a mirror area of $`1800\mathrm{m}^2`$ and $`11,700\mathrm{m}^2`$ to reach 50 GeV and 20 GeV respectively. However, it is important to note that the overall quantum efficiency of existing detectors (the convolution of mirror reflectivity and photomultiplier quantum efficiency) does not exceed $``$ 10% Mirzoyan94 , mainly limited by the low quantum efficiency of photomultiplier tubes. Improving the light collection efficiency also lowers the energy threshold. Future atmospheric Cherenkov detectors described in Section 3 use various means to achieve a lower energy threshold. Arrays of 10 m imaging telescopes, proposed by the VERITAS Weekes97 and HESS Hofmann97 collaborations and aiming for an energy threshold of $``$ 100 GeV, are based on a moderate increase of mirror area, combined with fast electronics and fine pixellation imaging cameras.
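The mirror areas quoted above follow from this scaling if the Whipple 10 m telescope (roughly 75 $`\mathrm{m}^2`$ of mirror and a $``$ 250 GeV threshold) is taken as the reference point and NSB, $`\mathrm{\Omega }`$ and $`\tau `$ are held fixed; a quick check of this assumption:

```python
def mirror_area(E_thresh, A_ref=75.0, E_ref=250.0):
    """Mirror area for a given threshold, from E_thres ~ sqrt(NSB*Omega*tau/A):
    A = A_ref * (E_ref / E)**2, with NSB, Omega and tau unchanged."""
    return A_ref * (E_ref / E_thresh) ** 2

for E in (100.0, 50.0, 20.0):
    print("%5.0f GeV  ->  %6.0f m^2" % (E, mirror_area(E)))
# ~1900 m^2 at 50 GeV and ~11700 m^2 at 20 GeV, close to the figures quoted above.
```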
The MAGIC Barrio98 collaboration proposes a single 17 m imaging telescope aiming for a low energy threshold through a modestly increased mirror area together with high quantum efficiency phototubes. A dramatic increase of mirror area is currently explored by the STACEE Ong97 and CELESTE Pare97 collaborations, where 48 - 160 heliostats of solar power plants give a gigantic mirror area for detection of $`\gamma `$-rays down to $`\sim `$ 30 GeV. ### 2.2 Sensitivity, Angular and Energy Resolution The rejection of the much more numerous cosmic-ray induced showers has been pivotal in establishing the sensitivity of ground-based TeV astronomy Weekes89 . In contrast to satellite instruments, which utilize anti-coincidence scintillator shields inhibiting the detector from triggering on charged cosmic-ray particles, ground-based instruments rely mostly<sup>3</sup><sup>3</sup>3However, at the hardware trigger level some background is rejected, because at E $`\sim `$ 100 GeV (300 GeV) a $`\gamma `$-ray induced shower produces 10 times (4 times) more light than a proton induced shower of the same energy. Also, the field-of-view determines the background level. Therefore, a comparison of instruments should include both the off-line and the hardware rejection capability. on background rejection in off-line analysis. The imaging technique has provided an efficient means to separate $`\gamma `$-ray from hadronic initiated showers which can be expressed by a quality factor Q defined as $`\mathrm{Q}=\epsilon /\sqrt{\kappa }`$ with $`\epsilon `$ = efficiency for $`\gamma `$-rays and $`\kappa `$ = efficiency for cosmic-ray events. Because the detection of $`\gamma `$-rays is background dominated (roughly $`10^3`$ more cosmic rays than $`\gamma `$-rays), the sensitivity of any atmospheric Cherenkov detector depends critically on its background rejection capability. The measurement of the arrival direction of $`\gamma `$-rays depends on the detection technique: either an arrival time measurement of the wavefront of Cherenkov photons (as used in STACEE and CELESTE) or the analysis of the shower image orientation in imaging telescopes (VERITAS, HESS and MAGIC). A good angular resolution is particularly important for point-source sensitivity at energies below 100 GeV, where isotropically arriving cosmic electrons constitute an additional, non-hadronic background, which cannot be rejected through $`\gamma `$/hadron separation. The angular resolution naturally improves with the $`\gamma `$-ray primary energy from $`0.1^{\circ }`$ below 100 GeV up to the $`0.02^{\circ }`$ range at TeV energies. The energy resolution is important for extracting the physics of the $`\gamma `$-ray source. For example, the sharp spectral breaks in pulsars suggested by the polar cap model Daugherty96 and, more generally, the measurement of the spectral breaks of most EGRET sources between 20 GeV and 200 GeV provide motivation for an energy resolution in the range of 10%. ## 3 Future Projects ### 3.1 Imaging Telescopes The imaging technique uses optical reflectors (e.g., the Whipple 10 m telescope) with a tessellated mirror structure and a matrix of fast photomultipliers in the focal plane. With this configuration an image of the Cherenkov light of an air shower is measured and analyzed. Weekes et al. Weekes89 have demonstrated that the analysis of image shape and orientation is very efficient in distinguishing $`\gamma `$-ray from cosmic-ray initiated air showers.
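Since the $`\gamma `$-ray signal sits on a large cosmic-ray background, the point-source significance after cuts improves linearly with Q; a minimal sketch of this bookkeeping, using for illustration a 60% $`\gamma `$-ray efficiency and 99.7% cosmic-ray rejection (the Whipple values quoted just below):

```python
import math

# Quality factor of a gamma/hadron separation cut: Q = eps_gamma / sqrt(kappa_cr).
# For a background-dominated source the post-cut significance is
#   S = eps * N_gamma / sqrt(kappa * N_cr) = Q * (N_gamma / sqrt(N_cr)),
# i.e. the cuts improve the significance by the factor Q.

eps_gamma = 0.60    # fraction of gamma rays kept (illustrative value)
kappa_cr = 0.003    # fraction of cosmic rays surviving the cuts (99.7% rejected)

Q = eps_gamma / math.sqrt(kappa_cr)
print(f"Q = {Q:.1f}")   # ~11, consistent with the Q of about 10 quoted below

def significance(n_gamma, n_cr):
    """Post-cut significance for raw (pre-cut) gamma-ray and cosmic-ray counts."""
    return eps_gamma * n_gamma / math.sqrt(kappa_cr * n_cr)
```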
A Q-factor of $``$ 10 (99.7% of cosmic rays are rejected while keeping 60% $`\gamma `$-rays) with the Whipple 10 m telescope has been pivotal in establishing the imaging technique, which is to date the only technique which has detected $`\gamma `$-ray sources above 250 GeV at a level of $`10\sigma `$. The Crab Nebula is detected at a rate of 2 $`\gamma `$-rays per minute with a sensitivity of about 7$`\sigma `$ per hour. Based on this success, two different concepts have been proposed to increase the sensitivity further: the development of an optimized single large telescope (MAGIC) and stereoscopic imaging using multiple telescopes (VERITAS and HESS). #### 3.1.1 Single Telescope Imaging: MAGIC The potential of improving the single telescope imaging method has been recognized by several groups: CAT, MAGIC and the Whipple collaboration. Although the CAT imaging telescope employs a relatively small 18 $`\mathrm{m}^2`$ mirror (75 $`\mathrm{m}^2`$ for the Whipple 10 m) an energy threshold of 200 GeV has been reached Goret97 . This is due to fast electronics and a fine pixellation camera ($`0.12^{}`$ vs. $`0.25^{}`$ for the Whipple 10 m) optimizing the signal to noise ratio at the trigger level. The combination of a 10 m telescope with a pixellation of $`0.12^{}`$ is currently pursued by the Whipple collaboration (GRANITE III) by upgrading the 10 m telescope Lamb95 . An energy threshold of 120 GeV is anticipated. To push this strategy further, the MAGIC collaboration pursues the design of a 17 m diameter telescope. The concept is based on increasing the mirror area, a better quantum efficiency (45% GaAsP photocathode) of the photon detection devices and fast speed electronics. Estimates given by the MAGIC collaboration quote an energy threshold of 30 - 40 GeV using standard photomultipliers and 15 GeV using photodetectors with GaAsP photocathodes Barrio98 . Simulations by the MAGIC group show, that 20 GeV $`\gamma `$-ray showers produce images which contain good information about the arrival direction suggesting an angular resolution of $`0.2^{}`$ near threshold<sup>4</sup><sup>4</sup>4Note that the angular resolution is a function of the primary energy and improves substantially at higher energies., and a good background rejection<sup>5</sup><sup>5</sup>5Cosmic-ray background from hadrons, muons and electrons has been considered. with a Q-factor of $`6`$. An energy resolution of 50% at the threshold energy and 20% at 100 GeV is quoted. #### 3.1.2 Stereoscopic Imaging: VERITAS and HESS A logical extension of the single telescope imaging technique is the stereoscopic detection of air showers with multiple instruments which has been first demonstrated by Grindlay Grindlay72 . The detection of a $`\gamma `$-ray signal with multiple imaging telescopes was demonstrated by Daum et al. Daum97 and Krennrich et al. Krennrich98 . Impressive results have come from the HEGRA telescope array (4 telescopes of 8.5 $`\mathrm{m}^2`$ mirror area each) using relatively small reflectors and a pixellation of $`0.25^{}`$, showing a good angular resolution and excellent background rejection at 1 TeV Konopelko98 . Two next generation multi-telescope projects are under development; the VERITAS array (7 $`\times `$ 10 m telescopes) in the northern hemisphere (Arizona) and the HESS project ($`16\times 10\mathrm{m}`$ telescopes) in the southern hemisphere (Namibia). 
The major objective of those multi-telescope installations is the stereoscopic detection of $`\gamma `$-ray sources above 100 GeV with a high sensitivity, angular resolution and energy resolution Aharonian97b ; Aharonian97c . The multi-telescope imaging technique is based on the stereoscopic view of $`\gamma `$-ray showers (Figure 1, 2). This provides an angular resolution of $`0.08^{}`$ at 100 GeV and $`0.02^{}`$ for the highest energies for a VERITAS type detector Vassiliev98 , which by itself improves the background suppression of the cosmic-ray induced showers in comparison to a single telescope. In addition, the image shapes of air showers can be better constrained with several telescopes and be reconstructed in 3-dimensional space providing a measurement of the height of shower maximum, shower impact point on the ground (Figure 2) and the light density at different locations within the Cherenkov light pool. As a result, the $`\gamma `$-ray energy Carterlewis98 can be better measured with a resolution of 13% - 18% (corresponding to 10 TeV and 100 GeV). Also the Q-factor improves through a better classification of $`\gamma `$-ray, cosmic-ray or muon images and excellent angular resolution<sup>6</sup><sup>6</sup>6Note for a point source the background rejection is due to two different factors, the angular resolution and the distinction of $`\gamma `$-rays from hadronic showers and single muons.. Also, the collection area for the stereoscopic operation of a 7-telescope array requiring a 3-telescope coincidence is increased to 200,000 $`\mathrm{m}^2`$. Monte Carlo simulations suggest that the point-source sensitivity of the VERITAS array, e.g., at 300 GeV Vassiliev98 , is a factor of 10 better than with the currently operating Whipple 10 m telescope. The energy threshold (analysis threshold) of arrays can be lower than for an individual telescope. First, the rejection of local muons (a muon can be detected up to 80 m distant from telescope) through an array trigger will be important at $`\gamma `$-ray energies below 300 GeV, where they constitute a major background. Remaining muons falling between the telescopes can be rejected by their parallactic displacement<sup>7</sup><sup>7</sup>7As opposed to distant $`\gamma `$-ray showers, local muons show a parallactic offset when comparing images in different telescopes.. Lastly, faint Cherenkov flashes (barely triggering) do not produce a well defined image shape and hence not much information about the nature of the primary is available. Those images are usually rejected in the single telescope analysis. However, using the stereoscopic view Krennrich95 ; Hillas96 $`\gamma `$-ray showers differ from hadronic showers: images from hadronic primaries show a more irregular parallactic displacement than $`\gamma `$-ray induced showers. This method can be used to provide hadronic background rejection at energies close to the trigger threshold of telescope arrays. For VERITAS a trigger and analysis threshold of 50 GeV seems possible using stereoscopic reconstruction methods, and the limit arises mostly from the night sky background fluctuations. ### 3.2 Light Pool Sampling with Heliostats The potential of utilizing solar power plants for $`\gamma `$-ray astronomy has been recognized Tumer91 , because mirror areas of several thousand square meters provide the necessary signal to noise ratio to trigger on E $`>`$ 20 - 300 GeV $`\gamma `$-ray primaries. 
The exploration of the lowest energies E $``$ 20 GeV with a low cost device is the primary objective of the STACEE, CELESTE and GRAAL Plaga95 projects. The principle of detecting $`\gamma `$-rays with heliostats is shown in Figure 3. The Cherenkov light from an extensive air shower is collected with steerable mirrors and reflected onto a stationary secondary mirror located on the central tower. Because the secondary mirror forms an image of the locations of the heliostats it projects the light from each individual heliostat onto a different position in the focal plane. Photomultipliers are used to detect and sample the light distribution. Due to the different times of flight between different heliostats and the secondary mirror it is necessary to delay the signals with respect to each other and combine them afterwards into one trigger. By forming an analog sum of the signals between several phototubes, the total amount of light collected by all mirrors can be combined almost as if it were detected by a single large mirror<sup>8</sup><sup>8</sup>8Alternatively, they can be combined in a digital sum after they have individually passed a discriminator., therefore providing a low energy threshold. Because of the diameter of the Cherenkov light-pool of $`\gamma `$-ray induced showers at 20 GeV (200 m diameter), the number of heliostats which participate in the trigger have to be limited to mirrors which fall into an area of that size. The Cherenkov light distribution is sampled at different positions on the ground. This provides information about the Cherenkov light intensity as a function of the position within the light pool. Hadronic showers show more irregular azimuthal variations in the light density within the light pool than $`\gamma `$-ray showers, and this property can be utilized to reject hadronic cosmic-ray showers. The arrival direction is somewhat preset by a fairly narrow field of view or solid angle acceptance of the configuration (angular extend $`1.2^{}`$)<sup>9</sup><sup>9</sup>9The solid angle acceptance of individual heliostats is $`0.7^{}`$. The arrival direction can be measured by deriving the orientation of the shower wavefront from the delays between the signals from the different heliostats. The arrival direction reconstruction also requires information about the shower core location which can be achieved by sampling the light density on the ground Pare96 providing an angular resolution of $`0.2^{}`$ with 40 heliostats (CELESTE). First light by the CELESTE collaboration has resulted in a tentative detection of the Crab Nebula at an analysis threshold of 80 GeV Smith98 . ## 4 Discussion The different techniques to detect $`\gamma `$-rays from the ground are largely complementary as emphasized in Figure 4. The lowest energies are explored by solar power plants starting at 20 GeV with a sensitivity for point sources. Those instruments will operate up to a few hundred GeV where their narrow aperture limits the detection of higher energy $`\gamma `$-rays. The MAGIC project targets a 15 - 40 GeV energy threshold exploring the imaging technique at the lowest energies. Because of its $`2.5^{}3.5^{}`$ field of view, MAGIC could also provide a sensitivity for extended sources and $`\gamma `$-ray burst counterpart searches. At higher energies of E $`>`$ 100 GeV (possibly 50 GeV), VERITAS and HESS can detect $`\gamma `$-rays over a big dynamic range of up to 100 TeV, whereas their primary sensitivity is between 100 GeV - 10 TeV. 
The high angular resolution and strong background suppression provide excellent sensitivity of the order of a few milli-Crab. Also, the combined field-of-view of VERITAS (or HESS) can be used to create maps of extended regions in the sky covering $`10^{\circ }`$ in diameter with a single exposure. An all-sky survey of the TeV sky will be carried out with MILAGRO Yodh96 , also providing potentially important information on where to point atmospheric Cherenkov detectors. From Figure 4 it becomes clear that the 20 - 200 GeV window will be opened up for high energy $`\gamma `$-ray astronomy by the alliance of space-based (GLAST) and ground-based Cherenkov detectors. ## Acknowledgements This work is supported by a grant from the U.S. Department of Energy. I am grateful to my colleagues who have provided me with detailed information about the status of experiments, in particular W. Hofmann, E. Lorenz, R. Ong, M. Panter, G. Sinnis, D. Smith, S. Westerhoff.
# DYNAMICAL BREAKING OF CPT AND BARYOGENESIS ## 1 Introduction The mass in the visible Universe appears to be made up exclusively of matter. There is no evidence for stable antimatter up to close to the present Hubble radius. Based on the observed matter energy density (from which the number density $`n_B`$ of baryons can be determined) and on the measured temperature of the cosmic microwave background (which yields the entropy density $`s`$), it follows that the baryon to entropy ratio $`n_B/s`$ is $$\frac{n_B}{s}\simeq 10^{-10}.$$ (1) A second argument in favor of this value of $`n_B/s`$ comes from the theory of big bang nucleosynthesis. The predicted and observed abundances of the light elements agree precisely if $`n_B/s`$ is in the range given by (1). The goal of the theory of baryogenesis is to explain the origin of (1) starting with symmetric initial conditions at very early times. Sakharov realized that in order to obtain a model of baryogenesis, three criteria must be satisfied: 1. $`n_B`$ violating processes must exist; 2. these processes must involve C and CP violation; 3. they must occur out of thermal equilibrium. Another way to state these criteria is that, in addition to the existence of baryon number violating processes, there needs to be a period in the early Universe in which the CPT symmetry is broken. In an expanding Universe, this condition is not hard to achieve since the expansion determines a preferred direction of time. The first theory of baryogenesis (there are several good recent reviews on this topic) was in the context of Grand Unified Theories, theories in which baryon number violating processes occur at the perturbative level since there are particles (the superheavy Higgs and gauge particles $`X`$ and $`A_\mu `$) which couple baryons and leptons. Baryons are generated at a temperature $`T_{out}\simeq T_{GUT}\sim 10^{16}\mathrm{GeV}`$ by the out-of-equilibrium decay of the superheavy $`X`$ and $`A_\mu `$ particles. These particles were in thermal equilibrium for $`T\gtrsim T_{GUT}`$ but fall out of equilibrium at a temperature $`T_{out}`$ close to the GUT symmetry breaking scale $`T_{GUT}`$ as the Universe expands and cools. Obviously, GUT baryogenesis makes use of new physics beyond the standard model. It also requires a new source of CP violation (perturbative CP violation in the standard model sector is too weak to account for the observed value of $`n_B/s`$), but such new CP violation is rather naturally present in the extended Higgs sector of a GUT model. A potentially fatal problem for GUT baryogenesis was pointed out by Kuzmin, Rubakov and Shaposhnikov: there are nonperturbative processes in the standard model which violate baryon number and are unsuppressed for $`T\gtrsim T_{EW}`$, where $`T_{EW}`$ is the electroweak symmetry breaking scale, and hence can erase any primordial baryon asymmetry generated at $`T\gg T_{EW}`$, for example $`T=T_{GUT}`$. One way to protect GUT baryogenesis from this washout is to generate during the GUT phase transition an asymmetry in a quantum number like $`B-L`$ (where $`B`$ and $`L`$ denote baryon and lepton number, respectively) which is not violated by nonperturbative electroweak processes. The nonperturbative baryon number violating processes in the electroweak theory are related to the nontrivial gauge vacuum structure. The configuration $`A_\mu =0`$ is not the only vacuum state. There are energetically degenerate states with nontrivial gauge field configurations $`A_\mu \ne 0`$.
A gauge-invariant way to distinguish between these states is in terms of a topological invariant, the Chern-Simons number $`N_{CS}`$. The transitions between configurations of different $`N_{CS}`$ are called sphaleron transitions. They are exponentially suppressed at zero temperature $`T=0`$. However, at temperatures $`T\gtrsim T_{EW}`$, they are unsuppressed. In a theory in which $`N_f`$ fermion SU(2) doublets couple to the gauge fields, there is a change in baryon number $`\mathrm{\Delta }N_B`$ associated with a sphaleron transition: $$\mathrm{\Delta }N_B=N_f\mathrm{\Delta }N_{CS}.$$ (2) Hence, for $`T\gtrsim T_{EW}`$, baryon number violating processes are in equilibrium. Note, however, that sphalerons preserve $`B-L`$. An alternative to trying to protect a primordial matter asymmetry generated at some temperature $`T\gg T_{EW}`$ from sphaleron washout is to make use of out-of-equilibrium sphaleron processes at $`T\simeq T_{EW}`$ to re-generate a new baryon number below the electroweak phase transition. This is the goal of electroweak baryogenesis. Following early work by Shaposhnikov and Arnold and McLerran, concrete models of electroweak baryogenesis were suggested by Turok and Zadrozny and by Cohen, Kaplan and Nelson. These mechanisms were based on sphaleron processes inside or in the vicinity of bubble walls nucleated at the electroweak phase transition. These mechanisms require the electroweak phase transition to be strongly first order and nucleation-driven. In this case, below the critical temperature $`T_{EW}`$, bubbles of the low temperature vacuum are nucleated in the surrounding sea of the false (i.e. high temperature) vacuum and then expand until they percolate. Detailed studies (see recent review articles for details) indicate that physics beyond the standard model is needed in order to implement the mechanism, specifically in order for the phase transition to be strongly first order and to obtain sufficient CP violation. In this light, defect-mediated electroweak baryogenesis may be a promising alternative, since many theories beyond the standard model predict topological defects. In this case, the baryogenesis mechanism involves sphaleron processes inside the topological defects. In the following sections, I will review the defect-mediated electroweak baryogenesis mechanism and discuss how the dynamical breaking of CPT symmetry in defect networks leads to a nonvanishing net baryon number. These sections are based on and , respectively. In Section 4 I will mention recent ideas on QCD-scale “baryogenesis”, a charge separation mechanism which also makes crucial use of the effective T violation in the defect dynamics in an expanding Universe. ## 2 Defect-Mediated Electroweak Baryogenesis Before discussing the role of defects in electroweak baryogenesis I will review the main points of the “standard” or “first-order” mechanism. It is based on two key assumptions: 1. The electroweak phase transition is first order. 2. The transition is nucleation-driven (rather than fluctuation-driven, see e.g. the article by Goldenfeld for critical comments on transition dynamics from the point of view of condensed matter physics). If these assumptions are satisfied, then the electroweak phase transition proceeds by the nucleation of bubbles of the low temperature vacuum in a surrounding sea of the high temperature, symmetric vacuum. Inside the bubbles, the electroweak symmetry is broken and sphalerons are suppressed; outside the bubbles the symmetry is restored and the sphaleron rate is not suppressed.
The bubbles are nucleated with microscopic radius and then expand monotonically until they percolate. Let us briefly consider the way in which the Sakharov criteria are satisfied: The standard electroweak theory contains C and CP violating interactions which couple to the fields excited in the bubble walls (second criterion). The bubbles are out of equilibrium field configurations (third condition). Baryogenesis occurs via sphaleron processes near the bubble walls (first criterion). Note that the bubble dynamics (expansion into the false vacuum) represents the effective dynamical breaking of CPT. The master equation for electroweak baryogenesis is $$\frac{dn_B}{dt}=3\mathrm{\Gamma }\mu ,$$ (3) where $$\mathrm{\Gamma }=\kappa (\alpha _wT)^4$$ (4) is the sphaleron rate in the false vacuum ($`\alpha _w`$ is the electroweak fine structure constant and $`\kappa `$ is a constant which must be determined in numerical simulations), and $`\mu `$ is the chemical potential for baryon number which is determined by the interplay between defect dynamics and CP violating interactions of the bubble wall, a complicated issue which is still not fully understood quantitatively. In qualitative terms, fermions scatter off the wall, generating a nonvanishing lepton number in front of the bubble (let us say at point $`x`$) which yields $`\mu (x)\ne 0`$ and biases sphaleron processes in front of the wall, yielding $`n_B(x)\ne 0`$. This value of $`n_B(x)`$ is then preserved as the wall passes by and the point $`x`$ becomes part of the true vacuum domain. The chemical potential $`\mu `$ is proportional to the constant $`ϵ`$ describing the strength of CP violation. In the standard electroweak theory, $`ϵ`$ is much too small to account for the observed $`n_B/s`$. Thus, extra CP violation beyond the standard model is required for successful electroweak baryogenesis. Another reason why physics beyond the standard model is required is that in the context of the basic electroweak model, sphaleron processes are still in equilibrium below $`T_{EW}`$ if the Higgs mass $`m_H`$ is larger than $`90`$ GeV, which experimental bounds now indicate must be the case. In addition, for large $`m_H`$, the phase transition is no longer strongly first order, eliminating the first order baryogenesis mechanism altogether. Even in the MSSM (the minimal supersymmetric standard model), the window for successful first order electroweak baryogenesis is very small. Hence, extensions of the standard model are required in order to realize baryogenesis at the electroweak scale. Many extensions of the standard model, e.g. theories with additional U(1) gauge symmetries which are broken at or above $`T_{EW}`$, admit topological defects. In this case, there is an alternative way to implement electroweak baryogenesis which does not make use of bubbles created at a first order transition. Topological defects may replace bubble walls as the out-of-equilibrium field configurations needed to satisfy the third Sakharov criterion. To be specific, we make the following assumptions: 1. New physics at a scale $`\eta >T_{EW}`$ generates topological defects. 2. The electroweak symmetry is unbroken inside the defects and the defects are sufficiently thick such that sphalerons fit inside them. Given these assumptions, the scenario for baryogenesis is as follows: At the critical temperature $`\eta `$, a network of defects is produced by the usual Kibble mechanism.
The initial separation of the defects is microscopic, and hence a substantial fraction of space lies inside the defects. As the Universe expands, the defect network coarsens. As long as $`T>T_{EW}`$, all baryon number violating processes are in equilibrium and hence $`n_B=0`$. Once $`T`$ drops below $`T_{EW}`$ (more precisely, when $`T`$ falls below the temperature $`T_{out}`$ at which sphalerons fall out of equilibrium), baryon number generation sets in, triggered by the defects, in a manner analogous to how bubble walls trigger baryogenesis in the first order mechanism described earlier. The mechanism can be described with the help of Figure 1. The defect is moving with velocity $`v_D`$ through the primordial plasma. At the leading edge, a baryon excess of negative sign builds up due to CP violating scatterings from the defect wall. Consider now a point $`x`$ in space which the defect crosses. When $`x`$ is hit by the leading defect edge, a value $`n_B(x)=\mathrm{\Delta }n_B<0`$ is generated. While $`x`$ is inside the defect core, this baryon asymmetry relaxes (at least partially) by sphaleron processes. When the trailing edge of the defect passes by, an asymmetry $`-\mathrm{\Delta }n_B`$ of equal magnitude but opposite sign to what is produced at the leading edge is generated. Due to the partial washout in the defect core, the net effect is to produce a positive baryon number density. The same master equation (3) as for first order electroweak baryogenesis also applies to defect-mediated electroweak baryogenesis. However, the maximal $`n_B/s`$ which can be generated from defects is suppressed compared to what could be obtained in successful first order electroweak baryogenesis for several reasons. Most importantly, not all points in space are passed by defects after $`T_{EW}`$, and hence there is an important geometrical suppression factor $`SF`$. The value of $`SF`$ is the fraction of space which will be in a defect core at some time after $`t_{EW}`$. The value of $`SF`$ depends sensitively on the type of defect, and on the defect formation scale $`\eta `$. For non-superconducting cosmic strings $$SF\sim \lambda v_D\left(\frac{T_{EW}}{\eta }\right)^{3/2},$$ (5) where $`\lambda `$ is the coupling constant of the string sector which determines the string width and string separation $`\xi (t)`$ at the time $`t_c`$ of string formation: $$\xi (t_c)\sim \lambda ^{-1}\eta ^{-1}.$$ (6) The factor $`(T_{EW}/\eta )^{3/2}`$ in Equation (5) for the suppression factor comes from the coarsening of the defect network after formation and the resulting growth of $`\xi (t)`$. Therefore, the fraction of space covered by defects at $`T_{EW}`$ decreases as the string formation scale $`\eta `$ increases. For domain walls, there is much less suppression, because of the higher dimensionality of the defects. We find $`SF\sim v_D`$. For monopoles, on the other hand, the suppression factor renders defect-mediated baryogenesis completely ineffective. A further suppression factor comes from having only partial relaxation of $`n_B`$ inside the defects. A calculation without taking this latter factor into account yields (for non-superconducting cosmic strings) $$\frac{n_B}{s}\sim \lambda \kappa \alpha _w^2g_{*}^{-1}\left(\frac{m_t}{T_{EW}}\right)^2ϵ\left(\frac{T_{EW}}{\eta }\right)^{3/2},$$ (7) where $`g_{*}`$ gives the number of spin degrees of freedom in the radiation bath, $`ϵ`$ is the CP violating phase, and $`m_t`$ is the top quark mass.
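To illustrate how strongly Equation (7) penalizes high string formation scales, one can evaluate it numerically; the sketch below is only indicative, and the values adopted for $`\kappa `$, $`\alpha _w`$, $`g_{*}`$, $`\lambda `$ and $`ϵ`$ are typical order-of-magnitude choices rather than numbers taken from this paper:

```python
# Order-of-magnitude evaluation of Eq. (7):
#   n_B/s ~ lambda * kappa * alpha_w^2 * g_*^(-1) * (m_t/T_EW)^2 * eps * (T_EW/eta)^(3/2)
# All prefactor values below are illustrative assumptions.

kappa = 1.0        # sphaleron rate prefactor (order unity)
alpha_w = 1.0 / 30  # electroweak fine structure constant
g_star = 100.0      # relativistic degrees of freedom at T_EW
m_t = 175.0         # top quark mass [GeV]
T_EW = 100.0        # electroweak scale [GeV]
lam = 1.0           # string-sector coupling
eps = 1.0           # CP-violating phase (optimistic choice)

def nb_over_s(eta_gev):
    return (lam * kappa * alpha_w**2 / g_star
            * (m_t / T_EW)**2 * eps * (T_EW / eta_gev)**1.5)

for eta in (1e2, 1e3, 1e4):   # string formation scale in GeV
    print(f"eta = {eta:8.0e} GeV :  n_B/s ~ {nb_over_s(eta):.1e}")

# The (T_EW/eta)^(3/2) factor is the geometrical suppression discussed above;
# the overall normalization is only indicative.
```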
Efficient defect-mediated electroweak baryogenesis thus requires either cosmic strings with $`\eta `$ close to $`T_{EW}`$ (plus other optimistic assumptions about the parameters such as $`ϵ\sim 1`$ - although according to Cline et al. even this may not be sufficient), or domain walls (which in turn must decay at late times in order to avoid the domain wall over-abundance problem). Defect-mediated electroweak baryogenesis carries the advantage of being independent of the order of the electroweak phase transition and of the Higgs mass. In addition, whereas the efficiency of first-order baryogenesis is exponentially suppressed if $`T_{out}<T_{EW}`$ (since bubbles are only present at $`T_{EW}`$), defect-mediated baryogenesis is only suppressed by a power of $`T_{out}/T_{EW}`$ since defects are present at all times after $`T_{EW}`$. The power is determined by the coarsening dynamics of the defect network. Note that defect-mediated baryogenesis is not tied to the electroweak scale. Any defects which arise in the early Universe can potentially play a role in baryogenesis, as long as they couple to baryon number violating processes. This applies in particular to defects formed at the GUT scale. GUT defect-mediated baryogenesis is a mechanism which competes with the usual GUT baryogenesis channel based on the out-of-equilibrium decay of the superheavy $`A_\mu `$ and $`X`$ particles. If $`T_{out}\ll T_{GUT}`$, then defect-mediated GUT baryogenesis is in fact the dominant mechanism. ## 3 Dynamical Breaking of CPT and Defect-Mediated Baryogenesis Let us in the following consider explicitly how dynamical CPT violation is crucial for defect-mediated baryogenesis. To be specific, we consider extensions of the standard model with CP violation in an extended Higgs sector in the form of a CP violating phase $`ϵ`$ (e.g. the relative phase between the two doublets in the two Higgs doublet model). This phase has the following transformation properties under CP and T: $`CP:`$ $`ϵ(x,t)\to ϵ(-x,t)`$ $`T:`$ $`ϵ(x,t)\to ϵ(x,-t)`$ (8) $`CPT:`$ $`ϵ(x,t)\to ϵ(-x,-t).`$ Hence, $`\partial _\mu ϵ`$ is odd under CPT. How the defect (which we take to be a string) interacts with the plasma can be modelled by a term in the Lagrangian of the form $$\mathcal{L}_ϵ\sim \partial _\mu ϵj_5^\mu ,$$ (9) where $`j_5^\mu `$ is the axial current. The axial current transforms under CPT as $$CPT:j_5^\mu \to -j_5^\mu ,$$ (10) so that the interaction Lagrangian $`\mathcal{L}_ϵ`$ is invariant under CPT as it must be. It follows that a static string is its own antiparticle, and hence cannot generate any net baryon number. For a moving string, an apparent CPT paradox arises: The CPT conjugate of a defect with a value $`ϵ>0`$ inside the core moving with velocity $`\vec{v}`$ is a defect with the same value of $`ϵ`$ in the core moving with the same velocity vector $`\vec{v}`$. Hence, if CPT were a symmetry, then it would not be possible for the string to generate a net baryon number. The resolution of this apparent CPT paradox starts with the observation that the master equation (3) for baryogenesis is a dissipative equation which explicitly violates T symmetry, and, since it conserves CP, also violates CPT. Like the Ilion field of Cohen and Kaplan, the defect network evolution drives the system out of thermal equilibrium, acting as an external source of T violation, and the dissipative processes tend to restore the thermal equilibrium. In turn, dissipation leads to a damping of the defect motion.
If dissipation were the only force, then the defects would come to rest and $`n_B`$ violation would stop. However, the expansion of the Universe induces a counterforce on the defects which keeps the defect network out of equilibrium and allows $`n_B`$ generation to continue. The lesson we draw from this study is that the expansion of the Universe is the source of explicit external T violation which keeps the defect network out of equilibirium. The ordering dynamics of the defect network fueled by the cosmological expansion then leads to dynamical CPT violation and to the biasing of baryon number production. ## 4 Defect-Mediated QCD Scale Baryogenesis As mentioned above, defect-mediated baryogenesis can be effective not only at the electroweak scale, but at any scale when defects are produced. Recent work has shown that as a consequence of the nontrivial vacuum structure of low energy QCD, domain walls form at the QCD phase transition. These domain walls separate regions of space in which the effective strong CP parameter $`\theta `$ has very different values. Hence, the domain walls automatically are associated with maximal CP violation. Recently, a new baryogenesis (more precisely charge separation) scenario was proposed based on these QCD domain walls. Since this mechanism will be reviewed in a separate conference proceedings article , I will here only highlight the main points. The starting point of the QCD baryogenesis scenario is a new nonperturbative analysis of the vacuum structure of low energy QCD . Considering the vacuum energy $`E`$ of pure gluodynamics as a function of $`\theta `$, it was realized (see also ) that $`E(\theta )`$ must have a multi-branch structure $$E(\theta )=\mathrm{min}_kE_k(\theta )=\mathrm{min}_kE_0(\theta +2\pi k),$$ (11) and hence must in general have several isolated degenerate minima. When fermionic matter is introduced, at low energies represented by a chiral condensate matrix $`U`$ which contains the pion and sigma prime fields, then the potential energy $`W(U,\theta )`$ depends only on the combination $`\theta iTr\mathrm{ln}U`$ (by the anomalous Ward Identities). Hence, from the multi-branch structure of $`E(\theta )`$ it immediately follows that for fixed value of $`\theta `$, the potential $`V(U)=W(U,\theta )`$ has several isolated minima. These vacua differ in terms of the effective strong CP parameter $`\theta `$. Since there are several discrete minima of the potential, domain walls separating the different phases exist. In fact, by the Kibble mechanism , during the QCD phase transition at $`T=T_{QCD}`$, inevitably a network of domain walls will form. The second crucial ingredient of the new scenario is charge separation. In analogy to how in $`1+1`$ dimensional physics solitons acquire a fractional charge , in a $`3+1`$ dimensional theory domain walls will also acquire a fractional baryonic charge. In the chiral limit, the different vacuum states would be energetically degenerate. In the presence of a nonvanishing quark mass $`m_q`$, the energy of states increases as a function of $`|\theta |`$. Hence, the different phases of the theory, which immediately after the phase transition are equidistributed, will no longer be so below a temperature $`T_d`$ at which the energy difference between the minima becomes thermodynamically important. At this time, the domain wall network will break up into a set of B-shells, domains of states of large $`|\theta |`$ in a surrounding sea of the phase with the lowest value of $`|\theta |`$. 
In the absence of explicit strong CP violation, i.e. when the lowest energy vacuum is $`\theta =0`$, then there are an equal number of B-shells with $`\theta >0`$ and $`\theta <0`$. A B-shell with $`\theta >0`$ has negative baryon number. In order to generate a net baryon number in B-shells, (a small amount of) explicit CP violation is required such that the only B-shells which form have the same sign of $`\theta `$ (which we take to be positive). In this case, the total baryon number trapped in the walls is negative. Since there is no overall baryon number violation in QCD, the compensating positive baryon number must be in the bulk. This is the way in which domain walls in QCD lead to an effective baryogenesis mechanism by means of charge separation. Note that, in analogy to electroweak baryogenesis, the explicit T violation due to the expansion of the Universe which leads to the coarsening and eventual fragmentation of the defect network is crucial for the mechanism. As our estimates indicate, it appears possible to generate a baryon to entropy ratio comparable to what observations require. ## Acknowledgments I wish to thank Alan Kostelecky for the invitation to speak at this meeting, and for his hospitality. I thank my collaborators Anne-Christine Davis, Igor Halperin, Tomislav Prokopec, Mark Trodden and Erik Zhitnitsky, for enjoyable and stimulating collaborations. This work was supported in part by the US Department of Energy under Contract DE-FG02-91ER40688. ## References
# The Synchrotron Peak Shift during High-Energy Flares of Blazars ## 1 Introduction 66 blazar-type AGNs have been detected by the EGRET instrument on board the Compton Gamma-Ray Observatory as sources of $`\gamma `$-rays above 100 MeV (Hartman et al. 1999). These objects are identified with flat-spectrum radio sources classified as BL Lac objects or flat-spectrum radio quasars (FSRQs). Many of these objects exhibit variability at all wavelengths, generally with the most rapid variability, on time scales of hours to days, observed at the highest $`\gamma `$-ray energies (e.g., Bloom et al. 1997; Wagner et al. 1995; Mukherjee et al. 1997). The broadband (radio–$`\gamma `$-ray) emission from blazars is most probably emitted via nonthermal synchrotron radiation and Comptonization of soft photons by energetic particles in relativistic outflows powered by accreting supermassive black holes. Potential sources of soft photons which are Compton-scattered to produce the $`\gamma `$-ray emission include internal synchrotron photons (e.g., Marscher & Gear 1985, Maraschi et al. 1992, Bloom & Marscher 1996), jet synchrotron radiation rescattered by circumnuclear material (Ghisellini & Madau 1996, Bednarek 1998, Böttcher & Dermer 1998), and accretion-disk radiation which enters the jet directly (Dermer & Schlickeiser 1993) and/or after being scattered by surrounding BLR clouds and circumnuclear debris (e.g., Sikora, Begelman & Rees 1994; Blandford & Levinson 1995; Dermer, Sturner, & Schlickeiser 1997; Protheroe & Biermann 1997). In recent years, extensive simultaneous broadband observations of blazars have enabled detailed modeling of their broadband spectra. A general result of these modeling efforts appears to be that the spectra of high-frequency peaked BL Lac objects (HBLs) are well reproduced using a synchrotron self-Compton (SSC) model, where the radio through X-ray radiation is produced by nonthermal synchrotron radiation of ultrarelativistic electrons and the $`\gamma `$-ray emission, extending in some cases up to TeV energies, results from Compton scattering of the synchrotron radiation by the same population of electrons (e. g., Mastichiadis & Kirk 1997 for Mrk 421, Pian et al. 1998 for Mrk 501). However, a critical question in this case is whether the primary acceleration mechanism is able to accelerate electrons up to the required ultrarelativistic energies, which are $`\gg 1`$ TeV if the recent HEGRA detection of $`>25`$ TeV photons from Mrk 501 (Aharonian et al. 1999) is real. On the other hand, FSRQs are more successfully modeled assuming that the soft seed photons for Comptonization are external to the jet (EC for external Comptonization; e. g., Dermer et al. 1997 for 3C 273, Sambruna et al. 1997 and Mukherjee et al. 1999 for PKS 0528+134, Böttcher et al. 1997 for 3C 279). There appears to be a more or less continuous sequence of spectral properties of different objects, HBL $`\to `$ LBL $`\to `$ FSRQ, characterized by increasing $`\gamma `$-ray luminosity, increasing dominance of the energy output in the $`\gamma `$-ray component over the synchrotron component, and a shift of the peaks in the $`\nu F_\nu `$ spectra of both components towards lower energies. This sequence can be understood in terms of increasing dominance of the EC over the SSC mechanism (Fossati et al. 1997, Ghisellini et al. 1998). Recently, Böttcher & Collmar (1998) and Mukherjee et al.
(1999) have suggested that in the case of FSRQs a similar sequence might also occur between different intensity states of the same object, and have applied this idea to the various states of PKS 0528+134. There, it was assumed that the high states of PKS 0528+134 are characterized by a high bulk Lorentz factor of the ultrarelativistic material in the jet, implying that the external radiation field is more strongly boosted into the comoving rest frame than during the quiescent state. This effect, leading to a stronger dependence of the EC radiation on the Doppler factor than of the SSC radiation, was first pointed out by Dermer (1995). In Fig. 1, the fit results to two extreme states of PKS 0528+134 from Mukherjee et al. (1999) are compared. The figure reveals an important prediction of the model adopted in that paper: During the $`\gamma `$-ray high state we expect that the synchrotron spectrum peaks at lower frequencies than in the low state. Unfortunately, the peaks of the $`\nu F_\nu `$ synchrotron spectra of most FSRQs are in the infrared and thus very hard to observe. For this reason, the results of detailed modeling of the synchrotron component of an FSRQ have to be regarded with caution since in most cases the shape of the synchrotron spectrum is not well enough constrained to allow an exact determination of jet parameters. Note, for example, that in Ghisellini et al. (1998) in many cases the 9-parameter model adopted there is used to effectively fit fewer than 10 data points, and that the model IR – optical spectra of PKS 0528+134 calculated in Mukherjee et al. (1999) for several observing periods are extremely poorly constrained. In the very low $`\gamma `$-ray states of PKS 0528+134 (VPs 39, 337, and 616), even the $`\gamma `$-ray spectrum is very poorly constrained. Therefore it is important to demonstrate that the predicted synchrotron peak shift does not depend on the details of the adopted jet model and is not a consequence of fine-tuning of parameters. In Section 2, I will present an analytical estimate of the synchrotron peak shift on the basis of very simple, general arguments. In Section 3, several FSRQs are suggested as promising candidates to test the prediction made in this Letter. I summarize in Section 4. ## 2 Estimate of the synchrotron peak shift The shift of the $`\nu F_\nu `$ peak of the synchrotron spectrum during a $`\gamma `$-ray flare of an FSRQ will be determined on the basis of the following general assumptions: (a) The $`\gamma `$-ray flare is predominantly caused by an enhancement of the energy density $`u_{ext}^{\prime }`$ of external photons in the rest frame comoving with a component (blob) of ultrarelativistic material moving along the jet. This enhancement can be caused by an increasing bulk Lorentz factor $`\mathrm{\Gamma }`$ of the jet material. We have $`u_{ext}^{\prime }=\mathrm{\Gamma }^2u_{ext}`$. (b) The electrons in the blob have a non-thermal distribution with a peak at the energy $`\gamma _b`$ which is determined by the balance of an energy-independent acceleration rate $`\dot{\gamma }_{acc}`$ with the radiative energy loss rate (cf. Ghisellini et al. 1998). (c) During a flare, the radiative energy loss rate of relativistic electrons in the blob is dominated by Compton scattering of external photons in the Thomson regime. The assumptions (b) and (c) imply that $`\gamma _b\propto \dot{\gamma }_{acc}^{1/2}u_{ext}^{-1/2}\mathrm{\Gamma }^{-1}`$. The jets of blazars are directed at a small angle $`\theta \lesssim \mathrm{\Gamma }^{-1}`$ with respect to our line of sight.
Thus, to a good approximation, the Doppler factor $`\delta =\left(\mathrm{\Gamma }[1-\beta _\mathrm{\Gamma }\mathrm{cos}\theta ]\right)^{-1}\simeq \mathrm{\Gamma }`$. Then, the observed peak of the synchrotron component depends on the bulk Lorentz factor and the external photon density as $$ϵ_{sy}\propto B^{\prime }\gamma _b^2\mathrm{\Gamma }\propto \dot{\gamma }_{acc}B^{\prime }u_{ext}^{-1}\mathrm{\Gamma }^{-1},$$ (1) where $`B^{\prime }`$ is the magnetic field strength in the comoving frame. The apparent bolometric luminosity in the synchrotron component varies according to $$L_{sy}\propto B^{\prime 2}\mathrm{\Gamma }^4\gamma _b^2\propto \dot{\gamma }_{acc}B^{\prime 2}\mathrm{\Gamma }^2u_{ext}^{-1}.$$ (2) A plausible way to estimate the magnetic field strength might be the assumption of equipartition of magnetic field energy density to the energy density of ultrarelativistic electrons in the jet. However, our conclusions do not depend on the particular choice of $`B^{\prime }`$. If equipartition applies, then $`B^{\prime }\propto \dot{\gamma }_{acc}^{1/4}u_{ext}^{-1/4}\mathrm{\Gamma }^{-1/2}`$, and Eqs.(1) and (2) become (the superscript $`ep`$ denotes the equipartition case) $$ϵ_{sy}^{ep}\propto \dot{\gamma }_{acc}^{5/4}u_{ext}^{-5/4}\mathrm{\Gamma }^{-3/2}$$ (3) and $$L_{sy}^{ep}\propto \dot{\gamma }_{acc}^{3/2}u_{ext}^{-3/2}\mathrm{\Gamma }.$$ (4) Assuming Thomson scattering at the $`\nu F_\nu `$ peak of the Compton component (this is a reasonable assumption for the peak at several MeV – 10 GeV, while beyond that energy Klein-Nishina effects may become important), the $`\gamma `$-ray spectrum peaks at $$ϵ_C\propto ϵ_{ext}\gamma _b^2\mathrm{\Gamma }^2\propto ϵ_{ext}\dot{\gamma }_{acc}u_{ext}^{-1}$$ (5) where $`ϵ_{ext}`$ is the mean photon energy of the external photon field in the stationary frame of the AGN. The apparent bolometric luminosity in the Compton component depends on $`\mathrm{\Gamma }`$ and $`u_{ext}`$ through $$L_C\propto u_{ext}\mathrm{\Gamma }^6\gamma _b^2\propto \dot{\gamma }_{acc}\mathrm{\Gamma }^4,$$ (6) independent of the external photon density. Remarkably, this implies that an enhancement of the external photon density $`u_{ext}`$, e. g., due to structural changes of the circumnuclear material does not produce a $`\gamma `$-ray flare, although it leads to spectral variability, shifting both spectral components to lower frequencies. Now let us consider the effect of a variation of the bulk Lorentz factor as a possible cause of a $`\gamma `$-ray flare. Eqs. (1), (3), and (5) predict a shift of the synchrotron peak to lower frequencies, while the $`\gamma `$-ray peak should remain at basically the same photon energy. The ratio of the luminosities in both components varies according to $`L_C/L_{sy}\propto \mathrm{\Gamma }^2u_{ext}B^{\prime -2}`$ or $`L_C/L_{sy}^{ep}\propto \mathrm{\Gamma }^3u_{ext}^{3/2}\dot{\gamma }_{acc}^{-1/2}`$, respectively. If $`u_{ext}`$ and $`\dot{\gamma }_{acc}`$ remain constant and equipartition applies, this yields a variation $`L_C\propto (L_{sy}^{ep})^4`$, thus predicting a much stronger relative variation of the two spectral components than predicted by the SSC model which was ruled out for 3C 279 for this reason by the observation of a variation of the $`\gamma `$-ray component by amplitudes greater than the square of the amplitudes of variation of the synchrotron component (Wehrle et al. 1998).
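As a concrete illustration of these scalings, suppose that the only change during a flare is an increase of the bulk Lorentz factor, with $`u_{ext}`$, $`\dot{\gamma }_{acc}`$ and equipartition held fixed; a minimal numerical sketch of Eqs. (3)-(6), where the factor-of-two change in $`\mathrm{\Gamma }`$ is an arbitrary example value:

```python
# Relative changes predicted by Eqs. (3)-(6) when only the bulk Lorentz factor
# Gamma changes (u_ext, the acceleration rate and equipartition held fixed):
#   eps_sy^ep ~ Gamma^(-3/2),  L_sy^ep ~ Gamma,  eps_C ~ const,  L_C ~ Gamma^4

gamma_ratio = 2.0   # illustrative: Gamma doubles during the flare

eps_sy = gamma_ratio ** (-1.5)   # synchrotron peak frequency
L_sy = gamma_ratio ** 1.0        # synchrotron luminosity
eps_C = 1.0                      # Compton peak energy (unchanged)
L_C = gamma_ratio ** 4.0         # Compton (gamma-ray) luminosity

print(f"eps_sy changes by x{eps_sy:.2f}  (shift to lower frequencies)")
print(f"L_sy   changes by x{L_sy:.1f}")
print(f"eps_C  changes by x{eps_C:.1f}  (gamma-ray peak stays put)")
print(f"L_C    changes by x{L_C:.1f}  (equal to the L_sy ratio to the fourth power)")
```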
The synchrotron peak is still shifted to lower energies even if the increase of the bulk Lorentz factor is physically related to a more efficient particle acceleration in the comoving frame, as long as during the flare the product $`\dot{\gamma }_{acc}\mathrm{\Gamma }^{-1}`$ is lower than during the quiescent state. In this case, Eqs. (1), (3), and (5) predict that the $`\gamma `$-ray spectrum becomes spectrally harder, while at the same time the synchrotron peak shifts to lower energies. In fact, spectral hardening of the $`\gamma `$-ray spectrum during flares is a common feature in EGRET-detected FSRQs (e. g., Collmar et al. 1997, Hartman et al. 1996, Wehrle et al. 1998). This flaring behavior is in qualitative contrast to the flares observed in HBLs where a shift of both the synchrotron and the Compton peaks to higher frequencies is observed (e. g., Catanese et al. 1997, Pian et al. 1998, Kataoka et al. 1999). These objects are believed to be qualitatively different from FSRQs because (a) the broad-line regions surrounding the central engine are weak or absent and the isotropic luminosity of the central accretion disk is generally weaker than in quasars, implying that the external soft photon field is much weaker than in the FSRQ case and hence negligible compared to the intrinsic synchrotron radiation field, and (b) Compton scattering events near the $`\nu F_\nu `$ peak of the $`\gamma `$-ray spectrum most probably occur in the extreme Klein-Nishina regime, thus rendering Compton cooling rather inefficient. The flaring behavior of these sources seems to be dominated by an enhanced efficiency of electron acceleration in the jet which shifts both spectral components towards higher photon energies (Mastichiadis & Kirk 1997, Pian et al. 1998). Therefore, I propose that the qualitatively different behavior of the synchrotron peak during a $`\gamma `$-ray flare may serve as a diagnostic to determine the dominant electron cooling and radiation mechanism at $`\gamma `$-ray energies. ## 3 Candidate sources As mentioned earlier, most of the bright and well-observed FSRQs (e. g., PKS 0528+134: Mukherjee et al. 1996, or 3C 279: Wehrle et al. 1998) have their synchrotron peak in the infrared where it is very hard to observe. Fig. 3 of Wehrle et al. (1998) seems to indicate a shift of the synchrotron peak of 3C 279 to lower frequencies during the 1996 flare state. However, the peak frequency range is not covered in any of the observing periods presented there. Thus, the determination of the peak frequency from the existing data on 3C 279 would necessarily be model dependent. A much more promising candidate to test the predictions presented in this Letter is its “sister” source 3C 273 (von Montigny et al. 1997). This source appears to be ideal for such a study for several reasons. (1) It is a very bright radio and IR source, persistently detectable from radio through IR and optical frequencies. (2) During flares it is a strong EGRET source, allowing an easy detection of a $`\gamma `$-ray flare even with the degraded sensitivity of EGRET. (3) The strong big blue bump yields a very exact determination of the luminosity ($`3\times 10^{46}`$ erg s<sup>-1</sup>) and spectrum of the underlying accretion disc of the AGN which facilitates the normalization of model calculations. In Fig. 2, the simultaneous radio spectra of three epochs before, during, and after the prominent $`\gamma `$-ray flare of 3C 273 in 1993 November (von Montigny et al. 1997) are presented.
A comparison of the flare-state radio spectrum (open circles connected by solid lines) to the pre-flare and post-flare spectra seems to indicate a softening of the synchrotron spectrum during the flare, in perfect agreement with the expectation if the $`\gamma `$-rays are produced by the EC mechanism. At frequencies above 100 GHz, the radio spectra shown in Fig. 2 are simultaneous to within 1 day. However, the radio spectrum shortly (i. e. a few days) prior to the 1993 $`\gamma `$-ray flare was not monitored simultaneously with reasonable spectral coverage at $`>100`$ GHz. The pre-flare period closest to the 1993 $`\gamma `$-ray flare for which such a simultaneous radio spectrum was available was about 100 days before the flare. At that time, 3C273 was not in the field of view of EGRET so that we cannot be sure that the source was in its quiescent state. Furthermore, the broadband spectrum of 3C273 shown in von Montigny et al. (1997) and in Kubo et al. (1998) suggests that the actual synchrotron peak is located at higher frequencies, $`\nu _{sy}\sim 10^{13}`$ Hz, which have not been monitored in regular, short time intervals in the von Montigny et al. (1997) campaign. Therefore, although this result seems very promising, it needs more solid confirmation by future broadband campaigns with good spectral and temporal coverage of the $`100`$ GHz – $`10^{14}`$ Hz frequency range in order to allow a reliable, model-independent determination of the synchrotron peak of precisely simultaneous snapshot spectra in different $`\gamma `$-ray states of the source. In order to test the predictions presented here, it is particularly important that the synchrotron spectrum prior to a $`\gamma `$-ray flare is known. An apparent shift of the synchrotron peak towards higher frequencies following a $`\gamma `$-ray flare is also predicted by alternative models, for example by the Marscher & Gear (1985) SSC model where a radio flare is delayed with respect to an outburst at $`\gamma `$-rays due to the finite synchrotron cooling time of electrons in the jet. A few other FSRQs also emit a high flux at multi-GHz frequencies and might therefore allow the precise determination of the synchrotron peak and its shift during $`\gamma `$-ray flares in future campaigns. In particular, PKS 0420-014 (Radecke et al. 1995, von Montigny et al. 1995), PKS 0521-365 (Pian et al. 1996), and 3C454.3 (= PKS 2251+158; von Montigny et al. 1995) have successfully been observed with near-complete frequency coverage of their synchrotron peak, located at $`10^{12}`$ - $`10^{13}`$ Hz. These sources may serve as test cases for the predictions presented in this Letter. ## 4 Summary On the basis of very general arguments I have shown that the synchrotron $`\nu F_\nu `$ peak in the broadband spectra of EGRET-detected FSRQs is expected to shift towards lower frequencies during $`\gamma `$-ray flares, if Comptonization of external radiation is the dominant electron cooling and radiation mechanism at $`\gamma `$-ray energies. This behavior is qualitatively different from most probably SSC-dominated HBLs where both the synchrotron and the $`\gamma `$-ray component shift towards higher frequencies during the flare state. I propose this qualitative difference as a new diagnostic tool to distinguish between the two competing radiation mechanisms potentially responsible for the production of $`\gamma `$-rays in the jets of blazars.
Results of a broadband campaign on 3C 273 have been used to support the prediction of the external-Comptonization model, but future campaigns with more continuous frequency and temporal coverage in the radio – IR spectral range are needed to draw solid conclusions. Other promising candidates for such campaigns include the FSRQs PKS 0420-014, PKS 0521-365, and 3C454.3. I thank R. Mukherjee and P. Sreekumar for valuable discussions, drawing my attention to this problem, and for useful comments on the manuscript. I also thank the referee, S. D. Bloom, for very useful suggestions. This work was supported by NASA grant NAG 5-4055.
# Quantum slow motion ## Abstract We simulate the center of mass motion of cold atoms in a standing, amplitude modulated, laser field as an example of a system that has a classical mixed phase-space. We present a simple model to explain the momentum distribution of the atoms taken after any given number of modulation cycles. The peaks corresponding to a classical resonance move towards smaller velocities in comparison to the velocities of the classical resonances. We explain this by showing that, for a wave packet on the classical resonances, we can replace the complicated dynamics in the quantum Liouville equation in phase-space by the classical dynamics in a modified potential. Therefore we can describe the quantum mechanical motion of a wave packet on a classical resonance by a purely classical motion. To have an intuitive picture of the quantum mechanical dynamics of a wave packet we are usually confined to the semi-classical regime, that is, to orbits with action large compared to Planck’s constant, or to special systems like the harmonic oscillator, where the quantum evolution equations in phase-space are identical to the classical ones. In this letter we propose a scheme which enables us to describe a wave packet, localized near a resonance of a classical mixed phase-space, by classical dynamics in a modified potential. To do so, we replace the potential in the high order quantum Liouville equation by an effective potential in such a way that we obtain a classical Liouville equation. Then we describe the quantum motion as classical motion in this modified potential. We are then able to characterize the quantum effect by comparing the modified dynamics with the dynamics in the original potential. This method is applicable well beyond the semi-classical regime for many different potentials. Usually quantum effects on wave packets express themselves in the revival and fractional revival properties or in the occurrence of tunneling phenomena. Both take place on a comparatively long time scale so that we intuitively don’t expect quantum effects to be visible on a short time scale. We disprove this intuitive assumption in our model where we use the center of mass motion of cold atoms in a standing, amplitude modulated laser field. Here we demonstrate that the momentum distribution after each cycle of the modulation is peaked at smaller momenta than we would expect classically. This shows that the atoms are traveling more slowly than we would expect from classical simulations and we can give a very simple explanation of this “quantum slow motion” phenomenon. Since we include stimulated and spontaneous transitions, we expect this quantum mechanical effect to be of a realistic order of magnitude for experimental observation. We investigate a cloud of two-level atoms situated in a standing laser field with a periodically modulated amplitude. In this system the Hamiltonian of the center-of-mass motion in the limit of large detuning is $$H(t)=\frac{p^2}{2}-\kappa (1-2ϵ\mathrm{cos}t)\mathrm{cos}q,$$ (1) where $`p`$ and $`q`$ denote scaled dimensionless momentum and position, $`t`$ time, and $`\kappa `$ and $`ϵ`$ are the parameters defining the depth of the standing wave and the strength of the amplitude modulation, respectively.
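The classical resonance structure referred to below can be generated directly from Eq. (1); a minimal numerical sketch of the stroboscopic (period-one Poincaré) map, shown only for illustration and using the parameter values $`\kappa =1.2`$, $`ϵ=0.2`$ adopted later in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

KAPPA, EPS = 1.2, 0.2   # potential depth and modulation strength used in the text

def hamilton(t, y):
    """Classical equations of motion for H = p^2/2 - kappa*(1 - 2*eps*cos t)*cos q."""
    q, p = y
    dq = p
    dp = -KAPPA * (1.0 - 2.0 * EPS * np.cos(t)) * np.sin(q)
    return [dq, dp]

def stroboscopic_map(q0, p0, n_periods=200):
    """Return (q, p) sampled once per modulation period, t = 2*pi*n."""
    t_samples = 2.0 * np.pi * np.arange(n_periods + 1)
    sol = solve_ivp(hamilton, (0.0, t_samples[-1]), [q0, p0],
                    t_eval=t_samples, rtol=1e-9, atol=1e-9)
    return sol.y[0], sol.y[1]

# Example orbit started near the period-one resonance at p of about +1.
q_strobe, p_strobe = stroboscopic_map(0.0, 1.0)
```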
Note that $`p`$ and $`q`$ fulfill the commutator relation $`[p,q]=-\mathrm{i}\bar{k}`$, where $`\bar{k}`$ is a scaled Planck's constant that is in some sense a measure of the “quantum mechanicality” of the problem, since it defines the size of a minimum uncertainty wave packet relative to the resonances . In Fig. 1 (left) we show as an example the classical stroboscopic phase-space portrait for $`ϵ=0.2`$ with $`\kappa =1.2`$. For this choice of parameters classically stable period-one resonances appear after each modulation period, symmetrically situated along the momentum axis. This specific phase-space structure allows a quantum mechanical wave packet, situated initially near one of these resonances, to coherently tunnel to the other resonance. This takes place on a long time scale in terms of cycles of the modulation. Note that this tunneling cannot be understood in terms of the presence of a potential barrier, as is the case in several publications regarding tunneling in mixed systems. We simulate the tunneling dynamics by starting each realization with a minimum uncertainty wave packet, which may be squeezed , centered on the classical resonance. We then simulate the full quantum mechanical dynamics by applying a split-operator algorithm with adapted time step size in the context of a standard quantum Monte Carlo integration scheme to include stimulated and spontaneous transitions . We calculate the mean momenta and the corresponding variance from the Poincaré section of the momentum distribution taken after each cycle of the modulation at $`t=2n\pi `$. In Fig. 2 (full line) we show the result of this simulation for $`ϵ=0.2`$, $`\kappa =1.2`$, and $`\bar{k}=0.25`$. Guided by recent experiments we used the parameters for rubidium to obtain a realistic scenario. We plot the mean momentum after each cycle of the modulation of the standing wave against the number of cycles. As expected, and clearly indicated by the drop of the variance, we observe coherent tunneling of the mean momentum from the location of the resonance at approximately $`p=1`$ to the corresponding resonance at $`p=-1`$. However, there are additional oscillations that might lead to the conclusion that the wave packet is not sitting precisely on the classical period-one fixed point but is indeed circulating around an alternative stable point in phase-space. It seems as if the wave packet, centered on the classical resonance, is not appropriately centered on the “true” resonance but sitting beside it. Therefore the mean momentum at each kick strongly oscillates around its mean motion. This led us to the conclusion that if we moved the initial wave packet onto this alternative stable point and started the simulation of the dynamics from there, the oscillations should vanish. This is exactly what we see in Fig. 2 (dashed line). The oscillations are strongly compressed and we face essentially the situation of a well localized wave packet which undergoes coherent tunneling on the longer time scale. For the dynamics of the wave packet it is obviously not the classical resonance that is important but a modified resonance, shifted towards slower momentum. How can we explain this effect? To give an explanation we first recall that a wave packet localized near a classical resonance has been shown to remain localized without changing its shape, at least for a long time.
Therefore we may assume that a minimum uncertainty wave packet sitting near a classical resonance will remain unchanged in shape for several cycles. This is the main assumption we need in order to apply the theory of Henriksen et al. , where the effect of quantum mechanics on a wave packet is described as classical motion, that is, as motion following the classical Liouville equations in phase-space, but in a modified potential. The convenient quantum mechanical phase-space representation is the Wigner function $`W(q,p,t)`$, because it has the correct quantum mechanical marginal distributions. Since the experiments we seek to describe measure the momentum and position distributions of the center-of-mass motion, this property of the Wigner function allows us to compare the marginals directly with the measured distributions. The phase-space dynamics of the Wigner function is given by
$$\frac{\partial W}{\partial t}=-p\frac{\partial W}{\partial q}+\frac{\mathrm{i}}{\bar{k}}\left(\sum _{\nu =0}^{\mathrm{\infty }}\frac{1}{\nu !}\left(\frac{\bar{k}}{2\mathrm{i}}\right)^\nu \frac{\partial ^\nu V(q,t)}{\partial q^\nu }\frac{\partial ^\nu W}{\partial p^\nu }-\sum _{\nu =0}^{\mathrm{\infty }}\frac{1}{\nu !}\left(-\frac{\bar{k}}{2\mathrm{i}}\right)^\nu \frac{\partial ^\nu V(q,t)}{\partial q^\nu }\frac{\partial ^\nu W}{\partial p^\nu }\right)$$ (2)
where $`V(q,t)=-\kappa (1-ϵ\mathrm{cos}t)\mathrm{cos}q`$ denotes the potential. This representation, convenient for what follows, corresponds to the well-known one given by Wigner, in which only a single sum over odd derivatives occurs. Due to the special spatial dependence of our cosine potential, whose odd derivatives reproduce themselves, we can replace the infinite sum by defining an effective potential $`V_{eff}`$ through
$$\frac{\partial V_{eff}}{\partial q}=\frac{\mathrm{i}}{\bar{k}}\left(\sum _{\nu =0}^{\mathrm{\infty }}\frac{1}{\nu !}\left(\frac{\bar{k}}{2\mathrm{i}}\right)^\nu \frac{\partial ^\nu V(q,t)}{\partial q^\nu }\frac{\partial ^\nu W}{\partial p^\nu }-\sum _{\nu =0}^{\mathrm{\infty }}\frac{1}{\nu !}\left(-\frac{\bar{k}}{2\mathrm{i}}\right)^\nu \frac{\partial ^\nu V(q,t)}{\partial q^\nu }\frac{\partial ^\nu W}{\partial p^\nu }\right)/\frac{\partial W}{\partial p}.$$ (3)
Then Eq. 2 is replaced by the first order equation
$$\frac{\partial W}{\partial t}=-p\frac{\partial W}{\partial q}+\frac{\partial V_{eff}}{\partial q}\frac{\partial W}{\partial p}$$ (4)
which is identical to the classical Liouville equation describing the classical dynamics in the modified potential $`V_{eff}`$. In this sense the action of quantum mechanics can be described by the classical motion in a modified potential. Assuming a Gaussian squeezed minimum uncertainty wave packet with time dependent squeeze parameter $`\xi (t)`$, we take the Wigner function to be of the form
$$W(q,p,t)=\frac{1}{\pi \bar{k}}\mathrm{exp}\left(-\frac{\xi }{\bar{k}}(q-\langle q\rangle )^2-\frac{1}{\bar{k}\xi }(p-\langle p\rangle )^2\right)$$ (5)
with the mean time dependent momentum and position, $`\langle p\rangle (t)`$ and $`\langle q\rangle (t)`$, respectively, chosen in such a way that the wave packet always stays centered on the resonance, so that the assumption of an unchanged shape remains valid. It is not important to know the explicit time dependence of these parameters. Then the effective potential is
$$V_{eff}(q,t)=V(q,t)\mathrm{exp}\left(-\frac{\bar{k}}{4}\right)\frac{\mathrm{sinh}(p-\langle p\rangle )}{p-\langle p\rangle }$$ (6)
That means the motion of the wave packet is locally described by the original potential compressed by a factor of $`\mathrm{exp}(-\bar{k}/4)`$, since the sinh-factor can, for the sake of qualitative discussion, locally be approximated by $`1`$.
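To make this statement concrete, the sketch below evaluates the force that follows from Eq. (6); treating the quantum correction as a rescaling of the bare classical force by exp(-kbar/4) times the sinh factor is my own illustrative reduction, and all names and parameter values are hypothetical rather than taken from the paper's code.

```python
# Sketch of the "modified classical" force implied by Eq. (6): near the wave-packet
# center <p> the quantum correction multiplies the bare force by exp(-kbar/4) (the sinh
# factor is ~1 close to <p>).
import numpy as np

def effective_force(q, p, t, p_mean, kappa=1.2, eps=0.2, kbar=0.25):
    bare = -kappa * (1 - 2*eps*np.cos(t)) * np.sin(q)    # -dV/dq of the bare potential, Eq. (1)
    dp = p - p_mean
    sinh_factor = 1.0 if abs(dp) < 1e-12 else np.sinh(dp)/dp
    return bare * np.exp(-kbar/4) * sinh_factor

# Integrating Hamilton's equations with this force instead of the bare one reproduces the
# shift of the period-one islands towards smaller momenta discussed next.
```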
In Fig. 1 (middle and right) we show, for $`\bar{k}=0.25`$ and $`\bar{k}=0.35`$, classical stroboscopic phase-space portraits for the effective potential and compare them to the phase-space portrait of the original potential. Note that our approximation is only valid in the vicinity of the period-one resonances. However, since we are interested in exactly these regions of phase space, this kind of representation gives an idea of what is going on, although the other phase-space regions are not to be taken as a valid description of the dynamics there. The main conclusion regarding the resonances is that the central resonance at $`(q,p)=(0,0)`$ becomes smaller and the second order resonances we are interested in are pushed towards smaller momenta $`p`$, which exactly corresponds to the observation made in Fig. 2, where we could simulate the tunneling phenomenon best by initially situating the wave packet at the shifted resonance. Eq. 6 indicates that the effect scales with $`\bar{k}`$, which identifies it as a purely quantum mechanical effect. We can clearly see this property by comparing Fig. 1 (middle) and (right), where we can directly see the relocation of the classical resonance for two values of $`\bar{k}`$. In Fig. 3 we simulate wave packets for different values of $`\bar{k}`$ for only a few cycles. We start each simulation with a minimum uncertainty wave packet placed in such a way that the oscillations in the evolution are most suppressed, and observe that, in correspondence with the modified potential, the mean momenta and therefore the wave packets themselves are relocated towards smaller velocities with increasing $`\bar{k}`$. Note that the curve corresponding to $`\bar{k}=0.25`$ corresponds to a situation where the conditions for tunneling are fulfilled; therefore the mean momentum starts to decrease. There is a second important consequence of this phenomenon in the scenario of present experiments investigating the short-time behavior of loading all the resonances from a spatially uniformly distributed cloud of atoms. In order to load the resonances effectively we start with a phase shift of $`\pi /2`$, that is to say, we now investigate the Hamiltonian
$$H(t)=\frac{p^2}{2}-\kappa (1-2ϵ\mathrm{sin}t)\mathrm{cos}q,$$ (7)
and take the snapshots at $`t=\pi /2+2n\pi `$. Then the resonances are initially aligned on the $`q`$-axis and are therefore covered best by the cloud of atoms. A classical picture of the dynamics suggests that as time goes by only those atoms initially sitting close to a resonance remain trapped, whereas all the other atoms perform a nonlinear motion corresponding to the fact that they are sitting in a chaotic region of phase-space. Therefore we expect to observe after some time only the three peaks of loaded resonances. Since the assumption of a durable wave packet is only valid for a wave packet initially situated on a resonance, and not for all the other wave packets, we expect the local relocation of the resonance described above to happen only to those atoms trapped at the resonance. This should change the overall momentum distribution in comparison to a purely classical simulation. In Fig. 4 (left) we compare the momentum distributions of snapshots at $`t=9\pi /2`$, that is after only 2.25 modulation cycles, for three different simulations: a quantum mechanical (top), a modified classical (middle), and a purely classical simulation (bottom).
Note that this is a very short time compared to tunneling and revival experiments. For the quantum simulation (top) we start with a large number of wave packets whose momentum width equals that of the momentum distribution of the atom cloud. The width in position is chosen so as to have minimum uncertainty wave packets. We distribute them uniformly on the $`q`$-axis, apply the Monte Carlo integration scheme, and finally add up the contributions of the individual wave packets to obtain the full momentum distribution. In the purely classical simulation (bottom) we simply take a cloud of point particles uniformly distributed in the $`q`$-direction and Gaussian distributed in the $`p`$-direction. The individual motion of the atoms is then treated classically by letting the atoms evolve following the classical Liouville dynamics, but we still include stimulated and spontaneous transitions in a Monte Carlo integration scheme. Note that the quantum peaks are shifted towards smaller momenta. This shift becomes larger with the scaled Planck's constant $`\bar{k}`$, which is a further indication that this feature can be explained by the quantum mechanical effect described above. To show that the occurrence of the effective potential may in principle be sufficient to explain this feature, we simulate it by applying the classical simulation again (middle), where we now change the trajectory according to the effective potential once it starts on a resonance. This is a very simple approach which is certainly only useful to show qualitatively that our explanation is suitable to describe the quantum dynamics. But we note that this modified classical simulation indeed shows the essential features of the pure quantum simulations. In Fig. 4 (right) we show the same simulations but without any modulation. Here the differences corresponding to the quantum mechanical effect vanish, and now more or less all three simulations show the same structure. This structure is due to classical transient effects, which appear in the first few cycles and are closely related to the motion in the standing wave, since they are independent of the modulation. This transient is always there and interferes with the quantum mechanical effect investigated in this paper. However, the quantum mechanical effect is easy to identify since it vanishes for $`ϵ=0`$. This effect is therefore clearly related to the modulation and thus shows a quantum feature of the classical mixed phase space. Note that the vanishing of the effect for $`ϵ=0`$ is consistent with our theory, since in this case we face classically integrable motion. A wave packet in such a system is not stabilized but spreads and changes its shape, and therefore the assumption needed to apply the theory of Henriksen et al. is no longer valid. To conclude, we have shown that we can use the property of wave packets staying localized on resonances of a classical mixed phase space to simplify the complicated quantum dynamics in phase space. In this case we can describe the quantum dynamics of the wave packet by the classical motion in a modified potential. This is not only valid for the cosine potential investigated here, but also, as already mentioned in , for polynomial potentials of arbitrarily high order and for other systems that have been the topic of investigations of the relationship between classical chaotic motion and the corresponding quantum dynamics.
First there is the atomic bouncer in an evanescent field $$V(q,t)=\lambda q+\kappa (1+ϵ\mathrm{cos}t)\mathrm{exp}(q)$$ (8) This setup of evanescent light waves can be modified to get a Morse potential which serves as an atomic trap. In these two cases it is also very straightforward to find the modified potential and to come to similar conclusions to those in this paper.
## 1 Introduction

The detailed structure of bulk amorphous polymers is a topic of scientific interest because it is necessary for the microscopic understanding of their properties. However, because of the amorphous nature of polymer melts and glasses, structural information is difficult to obtain experimentally. In particular, it is interesting to know how amorphous a polymer melt is on a local scale, i.e. how much residual order is left locally, how far local order extends before it disappears into the long-range disorder of amorphous systems , and how the local order depends on the molecular architecture. The interest has recently been revived by solid-state NMR studies of Graf et al. , from which it was inferred that a melt of polybutadiene is far more ordered than hitherto expected. The alignment of polymer chains is restricted to a local scale; there is no sign of nematic ordering. In order to get a better understanding of local packing and ordering effects, computer simulations are very helpful, because the system is precisely known and because one has access to all data, including positions and velocities of all particles at all times. Atomistic simulations may be useful in order to get an understanding of a specific system, whereas simplified models yield the properties of generic polymer melts. Additionally, they need much less simulation time, which allows one to tackle relatively big systems for long times . Therefore, a simple bead-spring model may be a good starting point to elaborate generic packing effects. Some work has been done for semiflexible chains, both by Monte Carlo and molecular dynamics, mostly to study liquid crystals or focusing on confined systems . The influence of chain stiffness on the dynamic structure factors of polymer melts was also investigated by analytical theory by Harnau et al. , who found major discrepancies with the fully flexible system for large scattering vectors, i.e. on short distances. In a previous article , we showed that there is considerable local chain alignment even in melts of fully flexible chains (persistence length: 1 monomer diameter). This persistence length originates from the excluded volume interaction. If there were no interaction at all (except for connectivity) the persistence length would be zero (e.g. polycatenanes). In the present contribution, this model is extended to include some more information about the chemical architecture of the polymer. We first introduce bending potentials of different strength, in order to study the effect of semiflexibility on single chain structure as well as on the mutual local orientation of neighboring chains. Secondly, we study models with alternating stiffness in an attempt to mimic, in a simplistic way, polymers with rigid subunits connected by more flexible links, like polybutadiene and polyisoprene with their alternating single and double bonds, which are currently under investigation experimentally .

## 2 Simulated System

We performed molecular dynamics simulations (for details of the parallel program POLY, see ref. ) of melts of polymer chains at a density $`\rho ^{}=0.85`$ and temperature $`T^{}=1`$ with a timestep $`\delta t^{}=0.01`$, using a truncated and shifted Lennard-Jones potential (Weeks-Chandler-Andersen potential) for the excluded-volume interaction between all beads. Lennard-Jones reduced units are used throughout this paper, where the mass $`m`$, the potential well depth $`ϵ`$ and the length parameter $`\sigma `$ define the unit system.
$$V_{LJ}(r)=4ϵ\left[\left(\frac{\sigma }{r}\right)^{12}-\left(\frac{\sigma }{r}\right)^6\right]+ϵ,r<r_{cutoff}=\sqrt[6]{2}\sigma $$ (1)
and a finitely extendable non-linear elastic (FENE) potential
$$V_{FENE}(r)=-\frac{\alpha }{2}\frac{R^2}{\sigma ^2}\mathrm{ln}\left(1-\frac{r^2}{R^2}\right),r<R=1.5\sigma ,\alpha =30$$ (2)
for the connection of neighboring beads. Additionally, a bond angle potential
$$V_{angle}=x\left(1-\frac{𝐫_{i-1,i}\cdot 𝐫_{i,i+1}}{r_{i-1,i}r_{i,i+1}}\right)$$ (3)
is used. This model system has already been widely studied, both for flexible and for semiflexible or liquid crystalline polymer systems . To a first approximation, $`\frac{l_p}{l_b}=\frac{x}{k_BT}`$, where $`l_p`$ is the persistence length (see section 3) and $`l_b`$ the bond length. In our units, the numerical values for $`x`$ and $`l_p`$ therefore coincide. This potential is applied to every bead, to every 2nd bead, every 3rd bead etc. The latter is a useful model for polymers with alternating stiffness, such as single-bond/double-bond sequences, or for copolymers with different persistence lengths of the constituents. In the following, we refer to a system with angular potential strength $`x`$ and a (topological) distance of $`y`$ monomers between two applications of the bond angle potential as an $`x`$-$`y`$ system. In this sense, a fully flexible chain is referred to as a 0-1 chain. A 5-2 chain, for example, has $`x=5`$ applied to every second bond angle. All simulated systems contained 500 chains of 50, 100 or 200 monomers each, so the overall number of particles was between 25,000 and 100,000 in a cubic periodic box. The short-chain systems (50 monomers) could be observed until the auto-correlation function of the end-to-end vector $`𝐑_s`$ had decayed. Figure 1a shows the reorientation in the case of the 5-1 system, which has the longest relaxation time. The second Legendre polynomial $`P_2(z)=(3z^2-1)/2`$ is used for consistency with the analyses further below. At this time, approximately, the mean square displacement of a single monomer begins to coincide with that of the center of mass. For the longer chains, we first waited until the radius of gyration and the end-to-end distance did not change systematically any more but only fluctuated around their mean values. The loss of local orientation of shorter chain segments is shown in figure 1b. One sees that there are two regimes. On short time scales ($`t^{}<5000`$), there is a fast decay due to local processes. On long time scales, however, there is a long tail which is determined by the overall motion of the whole chain. For the investigations in the following this overall motion is not important. All systems were simulated at least for $`t^{}=20000`$. We trust that the static chain properties were well equilibrated, because the overall properties like the gyration radius settled and at least the local orientation decayed. Moreover, the error estimation for $`𝐑_s`$ and $`𝐑_g`$ was performed according to a binning analysis . The correlation times for the observed properties resulting from this analysis were also exceeded substantially. In the case of 5-1 with 200 monomers, which has the longest equilibration times, this “binning time” is about $`t_b^{}=8000`$. Such systems were then simulated for $`t^{}=40000`$ to 80000.

## 3 Chain Structure in the Melt

In this section, we investigate the effect of the melt environment on single chains.
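For concreteness, here is a minimal transcription of the three interaction terms, Eqs. (1)-(3), in LJ reduced units (epsilon = sigma = m = 1); this is a sketch with illustrative function names, not the POLY code itself.

```python
# Bead-spring model potentials of Eqs. (1)-(3) in reduced units.
import numpy as np

R_FENE, ALPHA = 1.5, 30.0
R_CUT = 2.0**(1.0/6.0)

def wca(r):
    """Truncated and shifted (WCA) Lennard-Jones repulsion, Eq. (1)."""
    if r >= R_CUT:
        return 0.0
    sr6 = (1.0/r)**6
    return 4.0*(sr6*sr6 - sr6) + 1.0

def fene(r):
    """FENE bond between neighbouring beads, Eq. (2)."""
    return -0.5*ALPHA*R_FENE**2*np.log(1.0 - (r/R_FENE)**2)

def bond_angle(r_prev, r_mid, r_next, x):
    """Bending energy x*(1 - cos(theta)) of Eq. (3) for three consecutive beads."""
    b1, b2 = r_mid - r_prev, r_next - r_mid
    cos_t = np.dot(b1, b2)/(np.linalg.norm(b1)*np.linalg.norm(b2))
    return x*(1.0 - cos_t)

# In an "x-y" system the bond_angle term is applied only to every y-th angle along a chain.
```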
The presence of the other chains screens out the excluded volume interaction and the chain statistics of a self-avoiding walk, appropriate for chains in good solvent, turns into that of a simple random walk . This is, for example, evident in the single chain structure functions (see figure 8 in section 5). In the semiflexible case, one expects that, on large scales, Gaussian statistics (random walk) is fulfilled, whereas on short scales the local stiffness is relevant. Two concepts can be used for the analysis. One is the idea of a Kuhn length $`l_K`$, which is defined via
$$l_K=\frac{R_s^2}{l_b(N-1)}.$$ (4)
This assumes that the melt consists of “blobs” of length $`l_K`$ which contain all the local information that is not relevant on the long scales. The second is the persistence length $`l_p`$, which derives from the worm-like chain model . It corresponds to the decay length of the correlation of bond orientations (the tangent vector) along the chain,
$$\langle \mathrm{cos}\alpha (s)\rangle =\langle 𝐮(s)\cdot 𝐮(0)\rangle ,s:\text{ monomer index}$$ (5)
which can be shown to decay exponentially in this model,
$$\langle \mathrm{cos}\alpha (s)\rangle =e^{-sl_b/l_p}.$$ (6)
Since we do not only apply the bond angle potential to every bond, but also investigate systems with alternating stiff and flexible bonds (e.g. 5-2), the persistence lengths of these systems are not a priori known (at least in the $`x`$-$`y`$ case with $`y\ne 1`$). In order to determine $`l_p`$, the bond correlation function (eq. 5) was determined in 100 configurations after the equilibration, and the initial decay was fitted with an exponential $`e^{-l/l_p}`$ (see table 1). If the bending potential was applied to every monomer, the decay was well approximated by an exponential and the decay length $`l_p`$ was not too far from the value $`x`$ expected from the bond angle potential. This is in agreement with Monte Carlo results for stronger stiffness . In the case of alternating stiffness, minor deviations from exponential decay were observed (see figure 2a). The error in the bond correlation is about 0.03. Hence, the systems with very short persistence lengths were difficult to determine, because only very few points for fitting the decay were available and, therefore, the resulting error bars are not negligible. However, in all cases a fit over more than one order of magnitude was possible. It is not clear whether in the $`x`$-$`2`$ case the bond correlation has to follow an exponential law. However, we found this always to be the case. From a simple argument, the effective persistence length in the case of persistence lengths $`l_{p1}`$ and $`l_{p2}`$ for alternating angles is
$$\frac{1}{l_p}=\frac{1}{2}\left(\frac{1}{l_{p1}}+\frac{1}{l_{p2}}\right),$$ (7)
because
$$e^{-2l/l_p}=e^{-l/l_{p1}}e^{-l/l_{p2}}.$$ (8)
This is exactly true for all points with even monomer distances in the bond correlation function. This result may be generalized to a repetitive sequence of $`n`$ different bond angle potentials:
$$\frac{1}{l_p}=\frac{1}{n}\sum _{j=1}^{n}\frac{1}{l_{pj}}.$$ (9)
A more elaborate calculation in the framework of a generalized wormlike chain model with varying stiffness yields the same result. The persistence length of the fully flexible model is found to be exactly one monomer distance, which is on average $`l_b=0.97`$. The bond correlation functions (of inner monomers) show in the very beginning a decay with a persistence length which is close to the expected value (on the length scale of about 5 monomers).
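The following sketch illustrates how the persistence length can be extracted from a set of chain configurations along the lines of Eqs. (5)-(8); the array layout, fit range and function names are assumptions of this sketch.

```python
# Estimate l_p from <cos alpha(s)> = exp(-s*l_b/l_p) by fitting the initial decay.
import numpy as np

def bond_correlation(chains):
    """chains: list of (N, 3) arrays of bead positions; returns <cos alpha(s)> over s."""
    n_bonds = chains[0].shape[0] - 1
    corr, counts = np.zeros(n_bonds), np.zeros(n_bonds)
    for conf in chains:
        bonds = np.diff(conf, axis=0)
        u = bonds/np.linalg.norm(bonds, axis=1, keepdims=True)
        for s in range(n_bonds):
            c = np.einsum('ij,ij->i', u[:n_bonds-s], u[s:])
            corr[s] += c.sum()
            counts[s] += c.size
    return corr/counts

def persistence_length(corr, l_b=0.97, s_max=10):
    """Fit ln<cos alpha(s)> = -s*l_b/l_p over the first s_max points."""
    s = np.arange(1, s_max)
    slope = np.polyfit(s, np.log(corr[1:s_max]), 1)[0]
    return -l_b/slope

# For alternating stiffness the fitted l_p should obey 1/l_p = (1/l_p1 + 1/l_p2)/2, Eq. (7).
```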
This suggests that the very local orientation correlation is determined by the “true” potential strength, whereas on longer scales finite size or many-chain effects contribute considerably. This is especially reflected in the persistence length values for the 5-1 chains (see figure 2b), where the bond correlation function of the shorter chain shows, at distances $`s>5`$, substantial differences to that of the longer chain. They may be attributed to finite chain length effects. All bond correlation functions were determined starting from the innermost monomers in order to avoid end effects as much as possible. Also the end-to-end distances and radii of gyration of the corresponding chains were calculated. They are also presented in table 1. Upon increasing $`x`$, the $`x`$-1 systems stretch the chains considerably (see figure 3). A much larger bending force constant is needed for the $`x`$-2 systems than for $`x`$-1 if one wants the same $`R_g`$. Therefore the $`x`$-2 systems behave more like fully flexible chains with a bigger monomer, which is most strongly seen in the persistence length. Note that even in the 100-2 case, where an almost rigid and a fully flexible bond alternate, the chain stretching is not as strong as in the 2-1 case. So there is a fundamental difference between these two scenarios. At least in the $`x`$-1 cases we find $`2l_p\simeq l_K`$, as expected from the wormlike chain model . The relation $`R_s^2\simeq 6R_g^2`$ for the Gaussian chain is well fulfilled in most of our cases. The larger deviations, e.g. in the 5-1 case with 50 monomers, may be attributed to finite chain length effects. Therefore, we do not see substantial deviations from Gaussian behavior.

## 4 Melt structure

Local orientation of neighboring chains may be measured by the spatial orientation correlation function (OCF). To this end, we define unit vectors between adjacent monomers, $`𝐮=\frac{𝐫_i-𝐫_{i-1}}{|𝐫_i-𝐫_{i-1}|}`$. The scalar product between two such unit vectors describes the angle between chain tangent vectors,
$$\mathrm{cos}\alpha (r)=𝐮_{chain1}\cdot 𝐮_{chain2}.$$ (10)
The distance $`r`$ denotes the distance between the centers of mass of the respective chain segments. In order to compare better to NMR experiments, as well as to avoid the distinction between head and tail of the chain, we use the second Legendre polynomial $`P_2(r)=\frac{1}{2}(3\mathrm{cos}^2\alpha (r)-1)`$. Figure 4a shows inter-chain orientation correlation functions of different systems. The first minimum $`(r<1)`$ is close to $`P_2=-\frac{1}{2}`$, which would indicate perfect perpendicular ordering. Two chains which come so close can only pack perpendicularly because of the excluded volume interaction. The radial distribution function (RDF, see below) shows that there are very few such contacts. The first peak $`(r\approx 1.2)`$ shows a preferred parallel alignment at the distance of the first neighbor. A second parallel peak follows at $`r\approx 2`$. The intervening minima $`(r\approx 1.6)`$ get weaker for stronger orientation, which indicates a stronger local parallel ordering. The OCF decays to zero with $`r`$ because the system is globally isotropic, not nematic. The local ordering is only slightly different for the systems 0-1, 2-1, and 5-2, whereas the 5-1 chain shows a more pronounced local parallel orientation. For the 5-1 chains, there is residual parallel ordering even at the intermediate minimum $`(r\approx 1.6)`$, where the other systems show some perpendicular ordering. Except for the very few direct contacts, there is parallel orientation between neighboring chains.
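A brute-force sketch of the inter-chain orientation correlation function of Eq. (10) is given below: $`P_2`$ of the angle between bond vectors of different chains, binned by the distance of the bond centers. It ignores periodic boundary conditions and uses illustrative names; it is not the analysis code of the paper.

```python
# Inter-chain orientation correlation function P2(r), Eq. (10), for a list of chains.
import numpy as np

def orientation_correlation(chains, r_max=5.0, n_bins=50):
    centers, units, chain_id = [], [], []
    for cid, conf in enumerate(chains):
        bonds = np.diff(conf, axis=0)
        u = bonds/np.linalg.norm(bonds, axis=1, keepdims=True)
        centers.append(0.5*(conf[:-1] + conf[1:]))
        units.append(u)
        chain_id.append(np.full(len(u), cid))
    centers, units = np.vstack(centers), np.vstack(units)
    chain_id = np.concatenate(chain_id)

    hist, norm = np.zeros(n_bins), np.zeros(n_bins)
    for i in range(len(centers)):
        mask = chain_id != chain_id[i]                      # inter-chain pairs only
        r = np.linalg.norm(centers[mask] - centers[i], axis=1)
        keep = r < r_max
        cos_a = units[mask][keep] @ units[i]
        p2 = 0.5*(3*cos_a**2 - 1)
        bins = (r[keep]/r_max*n_bins).astype(int)
        np.add.at(hist, bins, p2)
        np.add.at(norm, bins, 1)
    return np.linspace(0, r_max, n_bins, endpoint=False), hist/np.maximum(norm, 1)

# Positive values at r ~ 1.2 signal preferred parallel alignment of neighbouring chains.
```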
This ordering is visible up to about three monomer diameters. The more flexible systems (0-1, 2-1, 5-2) show qualitatively a similar ordering, but it is less pronounced and there is an intermediate preferred perpendicular orientation at a distance of about the first minimum of the radial distribution function $`(r\approx 1.6)`$. The orientation depends only weakly on the chain length (figure 4b). This demonstrates that the effect is strictly local. This holds even though the global dynamics of the chains of different lengths is strongly different: the chains of length 50 are not yet entangled, whereas the longer chains are already influenced strongly by entanglements (entanglement length in the 0-1 case $`\approx `$ 60 monomers ). Orientation correlation functions of longer chain segments are also investigated. In this case, not only vectors connecting nearest neighbors but vectors connecting next-to-nearest neighbor beads, or beads farther apart, are taken into account (see figure 5):
$$𝐮_d:=\frac{𝐫_i-𝐫_{i-d}}{|𝐫_i-𝐫_{i-d}|}$$ (11)
It is clear (figure 6a) that the effect of local parallel chain orientation is not restricted to segments of 2 monomers only. It persists when larger chain fragments are analyzed. On the other hand, the degree of ordering decreases with the segment size considered. Figure 6b shows again the more pronounced local ordering in the 5-1 case compared to the more flexible chains. The 2-1 and the 13-2 systems coincide. Their persistence lengths are quite similar, and for the bigger segment sizes the exact local realization of this persistence length seems to average out. Our results are also in qualitative agreement with an early lattice Monte Carlo investigation of shorter chains . Lattice models, however, are biased in favor of orientation correlation. The inter-chain radial distribution function $`g(r)`$ (RDF) on large scales does not change much when stiffness is added (figure 7a). However, there are some differences on very local scales. Both the second and third neighbor peaks are farther apart for stronger stiffness. Furthermore, the minimum between the first and second neighbor shell is not as pronounced as in the more flexible cases. The local stretching allows a closer approach of chains. This leads to a reduction of the expected correlation hole. In fully flexible systems the number of neighbors of one monomer belonging to the same chain increases with increasing chain length. No effect of chain length is seen here (figure 7b), which again reflects the strict locality of the structure formation in the melt.

## 5 Structure functions

The structure of single chains and of the overall melt may additionally be characterized by static structure functions. Figure 8 shows the single-chain and melt structure functions of our systems. The (isotropically averaged) melt structure function is defined as
$$S_{melt}(k)=\frac{1}{N}|\sum _{m=1}^{N_C}\sum _{j=1}^{n_b}\mathrm{exp}(ikr_j^m)|^2=S_{SC}(k)S_{inter}(k)$$ (12)
where $`S_{SC}`$ denotes the single chain structure function
$$S_{SC}(k)=\frac{1}{N}\sum _{m=1}^{N_C}|\sum _{j=1}^{n_b}\mathrm{exp}(ikr_j^m)|^2.$$ (13)
The first sums run over all chains ($`N_C`$: number of chains, $`m`$: chain index), the second along the chains ($`n_b=N/N_C`$: number of beads per chain, $`j`$: monomer index along the chain). In the limit $`k\to 0`$ the chain structure is no longer visible; we just see a massive object, which is related to the first plateau in $`S_{SC}`$.
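A sketch of the isotropically averaged single-chain structure function, Eq. (13), follows; the melt function of Eq. (12) is obtained analogously by keeping the sum over all chains inside the modulus. The random-direction averaging scheme and all names are assumptions of this sketch.

```python
# Brute-force single-chain structure function S_SC(k), Eq. (13), without periodic images.
import numpy as np

def single_chain_Sk(chains, k_values, n_dirs=20, seed=0):
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)    # random k-directions for averaging
    n_b = chains[0].shape[0]
    S = np.zeros(len(k_values))
    for ik, k in enumerate(k_values):
        acc = 0.0
        for conf in chains:
            phases = np.exp(1j*k*(conf @ dirs.T))           # (n_b, n_dirs)
            acc += np.mean(np.abs(phases.sum(axis=0))**2)   # |sum_j exp(i k.r_j)|^2, averaged
        S[ik] = acc/(len(chains)*n_b)
    return S

# In the scaling regime S_SC(k) ~ k^(-1/nu); a log-log slope of -2 indicates Gaussian chains.
```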
The next “scaling” regime is connected to the fractal nature of the chains. The self-similarity yields a decay with $`k^{-\frac{1}{\nu }}`$, where $`d=\frac{1}{\nu }`$ is the fractal dimension of the chain. For a Gaussian chain we have $`\nu =\frac{1}{2}`$. Stretching leads to a smaller fractal dimension, resulting in a less steep decay. In the large-$`k`$ range we deal with structure on the size of one monomer. A bead-spring model has no structure on a shorter scale. The melt structure function is the Fourier transform of the density-density correlation function. It therefore shows some additional peaks which correspond to peaks in the RDF. Hence, it contains not only information about the single-chain structure but also about the overall structure of the whole system. The single-chain structure functions $`S_{SC}`$ of the $`x`$-1 systems look very similar, whereas there are strong differences between the 3-1 and the 3-2 system, the latter behaving very much like a fully flexible system. The crossover to the scaling regime is shifted to smaller $`k`$-vectors for stiffer chains. There is additionally a crossover ($`k=0.8`$) between two regimes in the decay, which means that there is a different fractal chain structure on different length scales. On larger scales (small-$`k`$ regime) the slope does not differ much from the fully flexible case, whereas on intermediate scales larger deviations occur, which indicate local chain stretching. The 3-2 system, however, is close to the fully flexible (Gaussian) system. But its slope is not as steep, which hints at a slight stretching of the chains compared to the Gaussian chain. This minor difference between the alternating-stiffness and the fully flexible chains supports our earlier suspicion that the alternating chains behave like renormalized bead-spring chains with larger effective monomers. On the other hand, the stiffened chains with true semiflexibility are strongly different on intermediate scales. The melt structure functions $`S_{melt}(k)`$ also differ between the $`x`$-1 case and the $`x`$-2 case. The latter is again very similar to the 0-1 system. The slopes in the regimes around $`k\approx 1`$ are clearly different, whereas the fine structure revealing peaks connected to the neighboring shells is quite similar. The exact positions of these peaks differ, however, which shows again that the distance to the nearest neighbors is slightly altered with stiffness. Figure 8b shows that all differences are on local scales $`(k>0.1)`$. The static structure functions coincide for $`k\to 0`$, where the overall structure on large scales becomes important. So the melt structure function shows that all systems behave similarly on large scales but the systems with homogeneous and alternating stiffness differ on local scales.

## 6 Conclusions

The static structure of semiflexible polymers in the melt was determined. The stiffness strongly affects the persistence length, end-to-end distance and radius of gyration. Stiff chains are more stretched than chains interacting only via excluded volume. This also affects the local mutual ordering of the chains. Stiff chains pack more parallel on local scales, whereas the overall structure remains isotropic. The chain length does not influence this strictly local phenomenon. Systems consisting of alternating stiff and flexible links behave similarly to systems with much weaker overall persistence lengths. Their structure is similar to the structure of fully flexible chains with larger monomers.
Their overall persistence length is smaller than expected from an analytical calculation using the persistence lengths of the respective potentials. Finally, the overall local structure of the melt differs considerably for alternating and homogeneous stiffness. The static data presented here already show ordering effects, but their effect on dynamical properties relevant to NMR experiments cannot be inferred. In order to compare directly to NMR experiments on real polymer melts, dynamical investigations are needed. Such simulations and analyses are presently being performed with the mesoscopic model of this article. Moreover, detailed atomistic simulations are underway for melts of specific polymers. They directly provide the time evolution of the atom-atom vectors monitored in the experiments, at least for short times. These simulations will be mapped onto more coarse-grained simulations like the one presented here. This mapping will allow us to make the connection between the $`(x,y)`$ parameters of our model and real polymers. The presented static properties are a first step towards understanding ordering phenomena as examined in NMR experiments.

## Acknowledgements

Valuable discussions with Andreas Heuer, Mathias Pütz, and Heiko Schmitz are gratefully acknowledged.
# Bosons in a Toroidal Trap: Ground State and Vortices

## I Introduction

Recent spectacular experiments with alkali vapors $`{}_{}{}^{87}Rb`$, $`{}_{}{}^{23}Na`$ and $`{}_{}{}^{7}Li`$ confined in magnetic traps and cooled down to temperatures of the order of $`100`$ nK have renewed the interest in Bose-Einstein condensation. Theoretical studies of the Bose-Einstein condensate (BEC) in harmonic traps have been performed for the ground state , collective low-energy surface excitations and vortex states . The presence of vortex states is a signature of the macroscopic phase coherence of the system (the existence of a macroscopic quantum phase has been recently demonstrated ). Moreover, vortices are important to characterize the superfluid properties of Bose systems . It has been found that the BEC in monotonically increasing potentials cannot support stable vortices in the absence of an externally imposed rotation . Instead, stable vortices of Bose condensates can be obtained in 1-D and quasi-2D toroidal traps: such Bose condensates are superfluid . In this paper we study a 3-D toroidal trap given by a quartic Mexican hat potential along the cylindrical radius and a harmonic potential along the $`z`$ axis. The resulting trapping potential is very flexible, and it is possible to modify considerably the density profile of the BEC by changing the parameters of the potential or the number of bosons. We analyze the ground state properties and the vortex stability of the condensate for both positive and negative scattering length and also calculate the spectrum of the Bogoliubov elementary excitations. In particular, we consider $`{}_{}{}^{87}Rb`$ and $`{}_{}{}^{7}Li`$ atoms. The Gross-Pitaevskii energy functional of the Bose-Einstein condensate (BEC) reads
$$\frac{E}{N}=\int d^3𝐫\left[\frac{\hbar ^2}{2m}|\nabla \mathrm{\Psi }(𝐫)|^2+V_0(𝐫)|\mathrm{\Psi }(𝐫)|^2+\frac{gN}{2}|\mathrm{\Psi }(𝐫)|^4\right],$$ (1)
where $`\mathrm{\Psi }(𝐫)`$ is the wave function of the condensate normalized to unity, $`V_0(𝐫)`$ is the external potential of the trap, and the interatomic potential is represented by a local pseudopotential so that $`g=4\pi \hbar ^2a_s/m`$ is the scattering amplitude ($`a_s`$ is the s-wave scattering length). $`N`$ is the number of bosons of the condensate and $`m`$ is the atomic mass. The scattering length $`a_s`$ is positive for $`{}_{}{}^{87}Rb`$ and $`{}_{}{}^{23}Na`$, but negative for $`{}_{}{}^{7}Li`$. It means that for $`{}_{}{}^{87}Rb`$ and $`{}_{}{}^{23}Na`$ the interatomic interaction is repulsive, while for $`{}_{}{}^{7}Li`$ the atom-atom interaction is effectively attractive. The extremum condition for the energy functional gives the Gross-Pitaevskii (GP) equation
$$\left[-\frac{\hbar ^2}{2m}\nabla ^2+V_0(𝐫)+gN|\mathrm{\Psi }(𝐫)|^2\right]\mathrm{\Psi }(𝐫)=\mu \mathrm{\Psi }(𝐫),$$ (2)
where $`\mu `$ is the chemical potential. This equation has the form of a nonlinear stationary Schrödinger equation. We study the BEC in an external Mexican hat potential with cylindrical symmetry, which is given by
$$V_0(𝐫)=\frac{\lambda }{4}(\rho ^2-\rho _0^2)^2+\frac{m\omega _z^2}{2}z^2,$$ (3)
where $`\rho =\sqrt{x^2+y^2}`$ and $`z`$ are the cylindrical coordinates. This potential is harmonic along the $`z`$ axis and quartic along the cylindrical radius $`\rho `$. $`V_0(𝐫)`$ is minimum along the circle of radius $`\rho =\rho _0`$ at $`z=0`$, and $`V_0(𝐫)`$ has a local maximum at the origin in the $`(x,y)`$ plane.
Small oscillations in the $`(x,y)`$ plane around $`\rho _0`$ have a frequency $`\omega _{}=\rho _0(2\lambda /m)^{1/2}`$. First, let us consider the Thomas-Fermi (TF) approximation, i.e. neglect the kinetic energy. It is easy to show that the kinetic energy is negligible if $`N>>(\hbar ^2/2m)(\lambda \rho _0^2+m\omega _z^2/2)/\mu _0^2`$, where $`\mu _0=(2/\pi ^2)(\lambda /4)^{1/4}(m\omega _z^2/2)^{1/4}g^{1/2}`$ is the bare chemical potential. This condition is satisfied for $`\lambda ^2\rho _0^8>>16\hbar ^2(\lambda \rho _0^2+m\omega _z^2/2)/(2m)`$. In the TF approximation we have
$$\mathrm{\Psi }(𝐫)=\left[\frac{1}{gN}(\mu -V_0(𝐫))\right]^{1/2}\mathrm{\Theta }(\mu -V_0(𝐫)),$$ (4)
where $`\mathrm{\Theta }(x)`$ is the step function. For our system we obtain that: a) the wave function has its maximum value at $`\rho =\rho _0`$ and $`z=0`$; b) for $`\mu <\lambda \rho _0^4/4`$ the wave function has a toroidal shape; c) for $`\mu >\lambda \rho _0^4/4`$ the wave function has a local minimum at $`\rho =z=0`$; d) the chemical potential scales as $`\mu \simeq \mu _0N^{1/2}`$. It is important to note that the TF approximation neglects tunneling effects: to include these processes, it is necessary to analyze the full GP problem.

## II Ground state properties and elementary excitations

We perform the numerical minimization of the GP functional by using the steepest descent method . It consists of projecting an initial trial state onto the minimum of the functional by propagating it in imaginary time. In practice one chooses a time step $`\mathrm{\Delta }\tau `$ and iterates the equation
$$\mathrm{\Psi }(𝐫,\tau +\mathrm{\Delta }\tau )=\mathrm{\Psi }(𝐫,\tau )-\mathrm{\Delta }\tau \widehat{H}\mathrm{\Psi }(𝐫,\tau ),$$ (5)
normalizing $`\mathrm{\Psi }`$ to $`1`$ at each iteration. We discretize the space with a grid of points, taking advantage of the cylindrical symmetry of the problem. At each time step the matrix elements entering the Hamiltonian are evaluated by means of finite-difference approximants. We use grids of up to $`200\times 200`$ points, verifying that the results do not depend on the discretization parameters. The number of iterations in imaginary time depends on the degree of convergence required and the goodness of the initial trial wave function. We found that strict convergence criteria have to be imposed on the wave function in order to obtain accurate estimates. In our calculations we use the $`z`$-harmonic oscillator units. We write $`\rho _0`$ in units of $`a_z=(\hbar /(m\omega _z))^{1/2}=1\mu `$m, $`\lambda `$ in units of $`(\hbar \omega _z)/a_z^4=0.477(5.92)`$ peV/$`\mu `$m⁴ and the energy in units of $`\hbar \omega _z=0.477(5.92)`$ peV for $`{}_{}{}^{87}Rb`$ ($`{}_{}{}^{7}Li`$). Moreover, we use the following values for the scattering length: $`a_s=50(-13)\AA `$ for $`{}_{}{}^{87}Rb`$ ($`{}_{}{}^{7}Li`$) . We have to distinguish two possibilities: positive or negative scattering length. In the case of positive scattering length we can control the density profile of the BEC by modifying the parameters of the potential and also the number of particles. In Figure 1 we show the ground state density profile of the $`{}_{}{}^{87}Rb`$ condensate for several numbers of atoms. For small numbers of particles the condensate is essentially confined along the minimum of $`V_0(𝐫)`$; there is a very small probability of finding particles in the center of the trap, so that the system is effectively multiply connected.
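A minimal sketch of the steepest-descent iteration of Eq. (5) on a cylindrical (rho, z) grid in z-oscillator units is given below; the grid sizes, time step, trial state and interaction strength gN are illustrative choices of this sketch, not the parameters actually used in the paper.

```python
# Imaginary-time (steepest-descent) minimization of the GP functional, Eq. (5),
# for the Mexican-hat trap of Eq. (3), in units hbar = m = omega_z = 1.
import numpy as np

n_rho, n_z = 80, 80
rho = np.linspace(1e-3, 6.0, n_rho)
z = np.linspace(-6.0, 6.0, n_z)
d_rho, d_z = rho[1] - rho[0], z[1] - z[0]
R, Z = np.meshgrid(rho, z, indexing='ij')

rho0, lam, gN = 2.0, 4.0, 100.0                   # trap parameters and interaction strength
V0 = 0.25*lam*(R**2 - rho0**2)**2 + 0.5*Z**2      # Mexican hat + harmonic confinement

def apply_H(psi):
    lap = np.zeros_like(psi)
    lap[1:-1, :] += (psi[2:, :] - 2*psi[1:-1, :] + psi[:-2, :])/d_rho**2
    lap[1:-1, :] += ((psi[2:, :] - psi[:-2, :])/(2*d_rho))/R[1:-1, :]   # (1/rho) d/d rho
    lap[:, 1:-1] += (psi[:, 2:] - 2*psi[:, 1:-1] + psi[:, :-2])/d_z**2
    return -0.5*lap + (V0 + gN*np.abs(psi)**2)*psi

def normalize(psi):
    norm = 2*np.pi*np.sum(np.abs(psi)**2*R)*d_rho*d_z       # cylindrical volume element
    return psi/np.sqrt(norm)

psi = normalize(np.exp(-0.5*((R - rho0)**2 + Z**2)))        # trial state on the torus minimum
dtau = 1e-4
for _ in range(20000):
    psi = normalize(psi - dtau*apply_H(psi))
```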
As $`N`$ increases the center of the trap starts to fill up and the system becomes simply connected. The value of $`N`$ at which there is a crossover between the two regimes increases with the value of $`\lambda `$ and of $`\rho _0`$ and, within the Thomas-Fermi approximation, scales like $`\lambda ^{3/2}\rho _0^8`$. In Table 1 we show the energy per particle, the chemical potential and the average transverse and vertical sizes for the trapping potential characterized by the parameters $`\rho _0=2`$ and $`\lambda =4`$ in the $`z`$-harmonic oscillator units. As expected, the energy per particle and the chemical potential grow as the number of particles increases, but they do not scale as $`N^{1/2}`$ because, with this trapping potential, the TF approximation is valid for $`N>>10^4`$. It is instead interesting to observe that $`\sqrt{<x^2>}=\sqrt{<y^2>}`$ grows less than $`\sqrt{<z^2>}`$, due to the presence of a steep (quartic) potential along the transverse direction $`\rho =\sqrt{x^2+y^2}`$ and a softer (quadratic) barrier along the vertical direction $`z`$. In the case of negative scattering length, it is well known that for the BEC in a harmonic potential there is a critical number of bosons $`N_c`$ beyond which the wave function collapses . We obtain the same qualitative behavior for the $`{}_{}{}^{7}Li`$ condensate in our Mexican hat potential. However, in cylindrical symmetry, the collapse occurs along the circle which characterizes the minima of the external potential, i.e. at $`\rho =\rho _0`$ and $`z=0`$. The numerical results are shown in Figure 2. We notice that, for a fixed $`\rho _0`$, the critical number of bosons $`N_c`$ is only weakly dependent on the height of the barrier of the Mexican hat potential. These results suggest that we cannot use toroidal traps to significantly enhance the metastability of the BEC with negative scattering length. To calculate the energy and wavefunction of the elementary excitations, one must solve the so-called Bogoliubov-de Gennes (BdG) equations . The BdG equations can be obtained from the linearized time-dependent GP equation. Namely, one can look for zero angular momentum solutions of the form
$$\mathrm{\Psi }(𝐫,t)=e^{-\frac{i}{\hbar }\mu t}\left[\psi (\rho ,z)+u(\rho ,z)e^{-i\omega t}+v^{*}(\rho ,z)e^{i\omega t}\right],$$ (6)
corresponding to small oscillations of the wavefunction around the ground state solution $`\psi `$. By keeping terms linear in the complex functions $`u`$ and $`v`$, one finds the following BdG equations:
$$\left[-\frac{\hbar ^2}{2m}\left(\frac{\partial ^2}{\partial \rho ^2}+\frac{1}{\rho }\frac{\partial }{\partial \rho }+\frac{\partial ^2}{\partial z^2}\right)+V_0(\rho ,z)-\mu +2gN|\psi (\rho ,z)|^2\right]u(\rho ,z)+gN|\psi (\rho ,z)|^2v(\rho ,z)=\hbar \omega u(\rho ,z),$$ (7)
$$\left[-\frac{\hbar ^2}{2m}\left(\frac{\partial ^2}{\partial \rho ^2}+\frac{1}{\rho }\frac{\partial }{\partial \rho }+\frac{\partial ^2}{\partial z^2}\right)+V_0(\rho ,z)-\mu +2gN|\psi (\rho ,z)|^2\right]v(\rho ,z)+gN|\psi (\rho ,z)|^2u(\rho ,z)=-\hbar \omega v(\rho ,z).$$ (8)
The BdG equations allow one to calculate the eigenfrequencies $`\omega `$ and hence the energies $`\hbar \omega `$ of the elementary excitations. This procedure is equivalent to the diagonalization of the N-body Hamiltonian of the system in the Bogoliubov approximation . The excitations can be classified according to their parity with respect to the symmetry $`z\to -z`$. We have solved the two BdG eigenvalue equations by finite-difference discretization with a lattice of $`40\times 40`$ points in the $`(\rho ,z)`$ plane.
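Schematically, once the operator L = -0.5∇² + V₀ - μ + 2gN|ψ|² and the coupling gN|ψ|² have been discretized on the (ρ, z) grid, Eqs. (7)-(8) become the non-Hermitian block eigenproblem sketched below; `L_matrix` and `coupling` are assumed to be supplied by such a discretization and are not part of the paper's code.

```python
# Sketch of the Bogoliubov-de Gennes eigenproblem of Eqs. (7)-(8) in matrix form.
import numpy as np

def bdg_spectrum(L_matrix, coupling):
    """L_matrix: discretized L operator; coupling: diag(gN*|psi|^2) on the same grid."""
    M = np.block([[ L_matrix,  coupling],
                  [-coupling, -L_matrix]])
    omega = np.linalg.eigvals(M)
    omega = np.real(omega[np.abs(np.imag(omega)) < 1e-8])
    return np.sort(omega[omega > 1e-8])          # keep the physical, positive branch

# For the ground state the lowest odd mode should come out at hbar*omega_z (= 1 in our
# units), the exact center-of-mass mode quoted in Table 3.
```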
In this way, the eigenvalue problem reduces to the diagonalization of a $`3200\times 3200`$ real matrix. We have tested our program on simple models by comparing numerical results with the analytical solution and verified that a $`40\times 40`$ mesh already gives reliable results for the lowest part of the spectrum. In Table 3 we show the lowest elementary excitations of the Bogoliubov spectrum for the ground state of the system. One observes the presence of an odd collective excitation at an energy quite close to $`\hbar \omega =1`$ (in units of $`\hbar \omega _z`$). This mode is related to the oscillations of the center of mass of the condensate, which, due to the harmonic confinement along the $`z`$-axis, is an exact eigenmode of the problem characterized by the frequency $`\omega _z`$, independently of the strength of the interaction. For large $`N`$ the lowest elementary excitations saturate, suggesting that the Thomas-Fermi asymptotic limit is reached. In the case of negative scattering length we verified that, quite close to the critical number of bosons $`N_c`$, an even mode softens, driving the transition towards a collapsed state.

## III Vortices and their metastability

Let us consider states having a vortex line along the $`z`$ axis and all bosons flowing around it with quantized circulation. The observation of these states would be a signature of macroscopic phase coherence of trapped BECs. The axially symmetric condensate wave function can be written as
$$\mathrm{\Psi }_k(𝐫)=\psi _k(\rho ,z)e^{ik\theta },$$ (9)
where $`\theta `$ is the angle around the $`z`$ axis and $`k`$ is the integer quantum number of circulation. The resulting GP functional (1), representing the energy per particle, can be written in terms of $`\psi _k(𝐫)`$ by taking advantage of the cylindrical symmetry of the problem:
$$\frac{E}{N}=\int \rho d\rho dzd\theta \left\{\frac{\hbar ^2}{2m}\left[|\frac{\partial \psi _k(\rho ,z)}{\partial \rho }|^2+|\frac{\partial \psi _k(\rho ,z)}{\partial z}|^2\right]+\left[\frac{\hbar ^2k^2}{2m\rho ^2}+V_0(\rho ,z)\right]\left|\psi _k(\rho ,z)\right|^2+\frac{gN}{2}|\psi _k(\rho ,z)|^4\right\}.$$ (10)
Due to the presence of the centrifugal term, the solution of this equation for $`k\ne 0`$ has to vanish on the $`z`$ axis, providing a signature of the vortex state. Vortex states are important to characterize the macroscopic quantum phase coherence and also the superfluid properties of Bose systems . It is easy to calculate the critical frequency $`\mathrm{\Omega }_c`$ at which a vortex can be produced. One has to compare the energy of a vortex state in a frame rotating with angular frequency $`\mathrm{\Omega }`$, that is $`E-\mathrm{\Omega }L_z`$, with the energy of the ground state with no vortices. Since the angular momentum per particle is $`\hbar k`$, the critical frequency is given by $`\hbar \mathrm{\Omega }_c=(E_k/N-E_0/N)/k`$, where $`E_k/N`$ is the energy per particle of the vortex with quantum number $`k`$. In Table 2 we show some results for vortices of $`{}_{}{}^{87}Rb`$. The critical frequency turns out to increase slightly with the number of atoms. This corresponds to a moderate lowering of the moment of inertia per unit mass of the condensate as $`N`$ grows. For $`{}_{}{}^{7}Li`$ we calculate the critical number $`N_c`$ of bosons at which the vortex wave function collapses. We find that $`N_c`$ has a rather weak dependence on the quantum number of circulation $`k`$.
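As a quick numerical consistency check, the critical-frequency formula and the single-particle gap can be evaluated directly from the k = 1 entries of Tables 1 and 2; the few values hard-coded below are copied from those tables.

```python
# Check hbar*Omega_c = (E_1/N - E_0/N)/k and the Hartree-Fock gap eps - mu_1 (see Table 4),
# using Table 1 (ground state) and Table 2 (k = 1 vortex); units of hbar*omega_z.
E0  = {5000: 5.85, 10000: 7.45,  50000: 14.81}   # E/N, Table 1
E1  = {5000: 6.00, 10000: 7.61,  50000: 15.04}   # E_1/N, Table 2
mu1 = {5000: 7.87, 10000: 10.44, 50000: 22.04}   # mu_1, Table 2
eps = {5000: 9.56, 10000: 12.46, 50000: 25.09}   # Hartree-Fock energy eps, Table 2

for N in (5000, 10000, 50000):
    print(N, round(E1[N] - E0[N], 2), round(eps[N] - mu1[N], 2))
# -> 0.15/0.16/0.23 for hbar*Omega_c and 1.69/2.02/3.05 for eps - mu_1, matching Tables 2 and 4.
```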
Note that, in the case of a harmonic external potential, there is an enhancement of $`N_c`$ with increasing $`k`$, because in that case rotation strongly reduces the density in the neighborhood of the origin, where the external potential has its minimum . Once a vortex has been produced, the BEC is superfluid if the circulating flow persists, in a metastable state, in the absence of an externally imposed rotation . As discussed previously, vortex solutions centered in harmonic traps have been found , but such states turn out to be unstable against single particle excitations out of the condensate. To study the metastability of the vortex we first analyze the following Hartree-Fock equation:
$$\left[-\frac{\hbar ^2}{2m}\nabla ^2+V_0(𝐫)+2gN|\psi _k(𝐫)|^2\right]\varphi (𝐫)=ϵ\varphi (𝐫),$$ (11)
which describes, in the weak coupling limit, one particle transferred from the vortex state $`\mathrm{\Psi }_k(𝐫)`$ to an orthogonal single-particle state $`\varphi (𝐫)`$. Quasiparticle motion is governed by an effective Hartree potential $`v_{eff}(𝐫)=V_0(𝐫)+2gN|\psi _k(𝐫)|^2`$, which combines the effects of the trap with a mean repulsion by the condensate. Figure 3 shows $`v_{eff}(\rho ,z)`$ for $`N=5000`$ and $`50000`$. The repulsion induced by the underlying condensate is quite evident near $`\rho =\rho _0`$. Let $`\mu _k`$ be the chemical potential of the vortex state characterized by a circulation quantum number $`k`$; then the vortex is metastable if $`ϵ>\mu _k`$ and unstable if $`ϵ<\mu _k`$ . As shown in Table 2, for our 3-D system all the studied vortices are metastable, so the BEC can support persistent currents and is thus superfluid. Contrary to what may be inferred by means of semiclassical arguments, the wave function describing the excitation $`\varphi (𝐫)`$ is not localized near the symmetry axis, even for rather large numbers of atoms. A bound state at $`\rho =z=0`$ would pay a large kinetic energy cost due to the strong localization of the particle induced by the effective potential. Instead, it is more convenient to place the excited particle on top of the Bose condensate, i.e. at $`\rho =\rho _0`$ and $`z\simeq 0`$, as shown in Figure 4. It is well known that the Hartree-Fock approximation describes only single particle excitations . To have the complete spectrum, including collective excitations, one must solve the BdG equations . One must look for solutions of the form
$$\mathrm{\Psi }_k(𝐫,t)=e^{-\frac{i}{\hbar }\mu _kt}\left[\psi _k(\rho ,z)e^{ik\theta }+u(\rho ,z)e^{i(k+q)\theta }e^{-i\omega t}+v^{*}(\rho ,z)e^{i(k-q)\theta }e^{i\omega t}\right].$$ (12)
Here $`q`$ represents the quantum number of circulation of the elementary excitation. We have solved the two BdG eigenvalue equations by finite-difference discretization using the same method described in Section II. We have checked that a $`40\times 40`$ mesh gives the correct excitation energies within the Hartree-Fock approximation. Therefore, for the purpose of determining the stability of the vortex state, this rather coarse mesh is sufficiently accurate. The results are shown in Table 4: the lowest Bogoliubov excitation is positive and always lower than the lowest Hartree-Fock one. Moreover, by increasing the number of particles their difference increases, as expected for collective excitations. We have also verified that vortex states become unstable when either the density (down to about one hundred bosons in our model trap) or the scattering length is strongly reduced.
Therefore, the behavior of the 3-D trap we have analyzed closely resembles the simplified 1-D model studied in Ref. which represents the limit of deep trapping potential. Also in that case Bogoliubov approximation has been used to evaluate the spectrum of elementary excitations showing that vortices are stabilized by strong repulsive interparticle interactions (or equivalently by high density). The 1-D model, however, should be taken with caution because other branches of low energy collective excitations are present in such low-dimensional systems . ## IV Conclusions We have studied the Bose-Einstein condensate in a 3-D toroidal trap given by a quartic Mexican hat potential along the cylindrical radius and a harmonic potential along the $`z`$ axis. We have shown that it is possible to modify strongly the density profile of the condensate by changing the parameters of the potential or the number of bosons. The properties of the condensate and its elementary excitations have been analyzed for both positive and negative scattering length by considering $`{}_{}{}^{87}Rb`$ and $`{}_{}{}^{7}Li`$ atoms. For $`{}_{}{}^{7}Li`$, which has negative scattering length, we have calculated the critical number of atoms for which there is the collapse of the wave function. The results have shown that a toroidal trap does not enhance the metastability of the ground state of the condensate. On the other hand, in the case of a harmonic external potential, we have recently shown that, when a realistic non local (finite range) effective interaction is taken into account, a new stable branch of Bose condensate appears for $`{}_{}{}^{7}Li`$ at higher density. Presumably a similar state can be found also in presence of a toroidal external trap for a sufficiently large number of particles when non locality effects are included. A superfluid is characterized by the presence of persistent currents in the absence of an externally imposed rotation. In order to investigate this peculiar sign of the macroscopic phase coherence of the condensate, we have also studied vortex states. Our results suggest that vortices can support persistent currents in 3-D toroidal traps with fairly large numbers of atoms. This feature essentially depends on the toroidal geometry of the trap and should be independent on other details of the confining potential. ## Acknowledgements This work has been supported by INFM under the Research Advanced Project (PRA) on ”Bose-Einstein Condensation”. ## References M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, and E.A. Cornell, Science 269, 189 (1995). K.B. Davis, M.O. Mewes, M.R. Andrews, N.J. van Druten, D.S. Drufee, D.M. Kurn, and W. Ketterle, Phys. Rev. Lett. 75, 3969 (1995). C.C. Bradley, C.A. Sackett, J.J. Tollet, and R.G. Hulet, Phys. Rev. Lett. 75, 1687 (1995). M. Edwards and K. Burnett, Phys. Rev. A 51, 1382 (1995). M. Lewenstein and L. You, Phys. Rev. A 53, 909 (1996). F. Dalfovo and S. Stringari, Phys. Rev. A 53, 2477 (1996). R.J. Dodd et al, Phys. Rev. A 54, 661 (1996); G. Baym and C.J. Pethick, Phys. Rev. Lett. 76, 6 (1996). S. Stringari, Phys. Rev. Lett. 77, 2360 (1996). A. Smerzi and S. Fantoni, Phys. Rev. Lett. 78, 3589 (1997). M.R. Andrews, C.G. Townsend, H.J. Miesner, D.S. Drufee, D.M. Kurn, and W. Ketterle, Science 275, 637 (1997). A.J. Leggett, in Low Temperature Physics, Lecture Notes in Physics, vol. 394, pp. 1-91, Ed. M.J.R. Hoch and R.H. Lemmer (Springer, Berlin, 1991). D.S. Rokhsar, Phys. Rev. Lett. 79, 2164 (1997). D.S. 
D.S. Rokhsar, “Dilute Bose gas in a torus: vortices and persistent currents”, cond-mat/9709212.
M. Benakli, S. Raghavan, A. Smerzi, S. Fantoni and R. Shenoy, “Macroscopic Angular Momentum States of Bose-Einstein Condensates in Toroidal Traps”, cond-mat/9711295.
E.P. Gross, Nuovo Cimento 20, 454 (1961); L.P. Pitaevskii, Sov. Phys. JETP 13, 451 (1961).
S. Koonin and C.D. Meredith, Computational Physics (New York, 1990).
F. Dalfovo, S. Giorgini, M. Guilleumas, L.P. Pitaevskii and S. Stringari, Phys. Rev. A 56, 3804 (1997).
A.L. Fetter, Phys. Rev. A 53, 4245 (1996).
D.A.W. Hutchinson, E. Zaremba, A. Griffin, Phys. Rev. Lett. 78, 1842 (1997).
E.H. Lieb and W. Liniger, Phys. Rev. 130, 1616 (1963).
A. Parola, L. Salasnich and L. Reatto, Phys. Rev. A 57, 3180 (1998).
L. Reatto, A. Parola and L. Salasnich, J. Low Temp. Phys. 113, N. 3 (1998).

## Figure Captions

Figure 1. Particle probability density in the ground state of $`{}_{}{}^{87}Rb`$ atoms as a function of the cylindrical radius at $`z=0`$ (symmetry plane). The curves correspond to different numbers of atoms: from $`5000`$ to $`50000`$. Parameters of the external potential: $`\rho _0=2`$ and $`\lambda =4`$. Lengths are in units of $`a_z=1\mu `$m and $`\lambda `$ is in units of $`(\hbar \omega _z)/a_z^4=0.477`$ peV/$`\mu \mathrm{m}^4`$.

Figure 2. Critical number $`N_c`$ of $`{}_{}{}^{7}Li`$ atoms versus the potential barrier at the origin: $`V_0(0)=\lambda \rho _0^4/4`$. Open squares: $`\rho _0=2`$; full squares: $`\rho _0=3`$; open circles: $`\rho _0=4`$. Energy is in units of $`\hbar \omega _z=5.92`$ peV ($`\omega _z=9.03`$ kHz) and length in units of $`a_z=1\mu `$m.

Figure 3. Effective potential $`v_{eff}(\rho ,z)`$ appearing in the eigenvalue equation for the single-particle excitation, Eq. (11). Two sections at $`z=0`$ and $`z=3`$ ($`z=6`$) are shown for $`N=5000`$ ($`N=50000`$) atoms in panel a (b). $`z=0`$ corresponds to the symmetry plane. The dotted line represents the external potential. The chemical potential of the vortex state is marked by a short dashed line, the excitation energy by a long dashed line. Parameters of the external potential: $`\rho _0=2`$ and $`\lambda =4`$. Units as in Fig. 1 with $`\hbar \omega _z=0.477`$ peV ($`\omega _z=0.729`$ kHz).

Figure 4. Particle probability density of the $`k=1`$ vortex state (solid line) and square of the excitation wave function (dashed line) at the radial distance $`\rho =\rho _0=2`$, i.e. where both wave functions peak. Curves are for $`N=5000`$ ($`N=50000`$) atoms in panel a (b). Units and parameters as in Fig. 3.

| $`N`$ | $`E/N`$ | $`\mu `$ | $`\sqrt{\langle \rho ^2\rangle }`$ | $`\sqrt{\langle z^2\rangle }`$ |
| --- | --- | --- | --- | --- |
| $`5000`$ | $`5.85`$ | $`7.71`$ | $`1.96`$ | $`1.41`$ |
| $`10000`$ | $`7.45`$ | $`10.26`$ | $`1.97`$ | $`1.70`$ |
| $`20000`$ | $`9.84`$ | $`14.00`$ | $`1.97`$ | $`2.05`$ |
| $`30000`$ | $`11.73`$ | $`16.94`$ | $`1.97`$ | $`2.29`$ |
| $`40000`$ | $`13.36`$ | $`19.47`$ | $`1.98`$ | $`2.48`$ |
| $`50000`$ | $`14.81`$ | $`21.74`$ | $`1.99`$ | $`2.63`$ |

Table 1. Ground state of $`{}_{}{}^{87}Rb`$ atoms in the toroidal trap with $`\rho _0=2`$ and $`\lambda =4`$. Chemical potential and energy are in units of $`\hbar \omega _z=0.477`$ peV ($`\omega _z=0.729`$ kHz). Lengths are in units of $`a_z=1\mu `$m.
| $`N`$ | $`E_1/N`$ | $`\mu _1`$ | $`ϵ`$ | $`\hbar \mathrm{\Omega }_c`$ |
| --- | --- | --- | --- | --- |
| $`5000`$ | $`6.00`$ | $`7.87`$ | $`9.56`$ | $`0.15`$ |
| $`10000`$ | $`7.61`$ | $`10.44`$ | $`12.46`$ | $`0.16`$ |
| $`20000`$ | $`10.02`$ | $`14.22`$ | $`16.60`$ | $`0.18`$ |
| $`30000`$ | $`11.93`$ | $`17.20`$ | $`19.80`$ | $`0.20`$ |
| $`40000`$ | $`13.57`$ | $`19.75`$ | $`22.54`$ | $`0.21`$ |
| $`50000`$ | $`15.04`$ | $`22.04`$ | $`25.09`$ | $`0.23`$ |

Table 2. Vortex states and excitation energies of $`{}_{}{}^{87}Rb`$ atoms with $`k=1`$ in the toroidal trap with $`\rho _0=2`$ and $`\lambda =4`$ within Hartree-Fock approximation. Units as in Tab. 1.

| $`N`$ | $`\hbar \omega _1`$ | $`\hbar \omega _2`$ | $`\hbar \omega _3`$ | $`\hbar \omega _4`$ |
| --- | --- | --- | --- | --- |
| $`1`$ | $`1.00`$ | $`1.98`$ | $`2.97`$ | $`3.96`$ |
| $`5000`$ | $`1.00`$ | $`1.70`$ | $`2.43`$ | $`3.19`$ |
| $`10000`$ | $`1.00`$ | $`1.68`$ | $`2.37`$ | $`3.08`$ |
| $`20000`$ | $`1.00`$ | $`1.66`$ | $`2.32`$ | $`3.00`$ |
| $`30000`$ | $`1.00`$ | $`1.66`$ | $`2.30`$ | $`2.96`$ |
| $`40000`$ | $`1.00`$ | $`1.66`$ | $`2.30`$ | $`2.95`$ |
| $`50000`$ | $`1.00`$ | $`1.66`$ | $`2.30`$ | $`2.95`$ |

Table 3. Lowest elementary excitations of the Bogoliubov spectrum for the ground state of $`{}_{}{}^{87}Rb`$ atoms in the toroidal trap with $`\rho _0=2`$ and $`\lambda =4`$. Units as in Tab. 1.

| $`N`$ | $`\hbar \omega `$ | $`ϵ-\mu _1`$ |
| --- | --- | --- |
| $`5000`$ | $`1.22`$ | $`1.69`$ |
| $`10000`$ | $`1.48`$ | $`2.02`$ |
| $`20000`$ | $`1.73`$ | $`2.38`$ |
| $`30000`$ | $`1.88`$ | $`2.60`$ |
| $`40000`$ | $`1.99`$ | $`2.79`$ |
| $`50000`$ | $`2.08`$ | $`3.05`$ |

Table 4. Bogoliubov vs Hartree-Fock lowest elementary excitation for a vortex state of $`{}_{}{}^{87}Rb`$ atoms with $`k=1`$ in the toroidal trap with $`\rho _0=2`$ and $`\lambda =4`$. Units as in Tab. 1.
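As a consistency check on the tabulated values, the last column of Table 2 can be recovered from the energies per particle of Tables 1 and 2; the relation $`\hbar \mathrm{\Omega }_c=(E_1-E_0)/(Nk)`$ used below is the standard thermodynamic estimate of the critical angular velocity and is our own reading of the table entries, not a formula quoted in this section:

```python
# Energies per particle in units of hbar*omega_z (Tables 1 and 2), k = 1 vortex.
E0_per_N = {5000: 5.85, 10000: 7.45, 20000: 9.84, 30000: 11.73, 40000: 13.36, 50000: 14.81}
E1_per_N = {5000: 6.00, 10000: 7.61, 20000: 10.02, 30000: 11.93, 40000: 13.57, 50000: 15.04}

k = 1
for N in E0_per_N:
    # hbar*Omega_c = (E_1 - E_0)/(N*k) = (E_1/N - E_0/N)/k, in units of hbar*omega_z
    print(N, round((E1_per_N[N] - E0_per_N[N]) / k, 2))
# -> 0.15, 0.16, 0.18, 0.20, 0.21, 0.23, matching the last column of Table 2.
```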
# Hall potentiometer in the ballistic regime

## Abstract

We demonstrate theoretically how a two-dimensional electron gas can be used to probe local potential profiles using the Hall effect. For small magnetic fields, the Hall resistance is inversely proportional to the average potential profile in the Hall cross and is independent of the shape and the position of this profile in the junction. The bend resistance, on the other hand, is much more sensitive to the exact details of the local potential profile in the cross junction.

A conductive atomic force microscope (AFM) tip has been used, in contact and non-contact mode, as a local voltage probe in order to measure the distribution of the electrical potential on a surface. This technique is called nanopotentiometry and allows two-dimensional potential mapping . The sensitivity and the spatial resolution are limited by the finite size of the conductive probe and by the quality of the surface preparation. On the other hand, such tips can also be used to induce potential variations in the sample in order to influence the conduction locally . Measuring the change in the resistance of the device gives information on the local transport properties. Due to the complexity of the problem, i.e. the different dielectric layers, interfaces and screening of the two-dimensional electron gas (2DEG), it is rather difficult to calculate theoretically the induced potential of the tip in the 2DEG. The aim of the present work is to propose a new technique which makes it possible to measure such local potential profiles inside a 2DEG. Using the Hall effect we will demonstrate theoretically that the 2DEG can be used as a probe to measure the inhomogeneous induced potential profile. The system we consider is depicted in the inset of Fig. 1(a) and consists of a mesoscopic Hall bar placed in a homogeneous magnetic field, containing a local inhomogeneous potential profile in the cross junction which is induced, e.g., by an STM tip. To describe the transport properties in the Hall cross we will use the billiard model . In this model the electrons are considered as point particles, which is justified when the Fermi wavelength $`\lambda _F\ll W,d`$, where $`2W`$ is the width of the Hall probes and $`d`$ the radius of the potential profile which acts as a scatterer for the electrons. The electron motion is taken to be ballistic and governed by Newton's law, which is justified at low temperatures and in the case of high-mobility samples where the mean free path $`l_e\gg W,d`$. We assume that the temperature is not extremely low, so that interference effects are averaged out due to thermal smearing. In a typical GaAs heterostructure the electron density is $`n\simeq 3\times 10^{11}\mathrm{cm}^{-2}`$ with a typical low-temperature mobility $`\mu \simeq 10^6\mathrm{cm}^2/\mathrm{Vs}`$, which gives $`\lambda _F=450`$Å and $`l_e=5.4\mu m`$. This billiard model has been used successfully to describe e.g. the experiments of Ref. and to explain the working of a ballistic magnetometer .
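The billiard picture described above can be sketched numerically as follows (this is our own illustrative code, not the authors'): a point electron is propagated with Newton's law under the Lorentz force of the perpendicular field plus the force of a smoothed circular barrier; repeating the integration over injection positions and angles and recording the exit lead yields the probabilities $`T_R`$, $`T_L`$ and $`T_F`$ used below. The smoothing width of the barrier and the sense of the cyclotron rotation are assumptions needed to make the sketch concrete.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ballistic electron in the Hall cross: Newton's law with the Lorentz force of the
# perpendicular field and the force of a smoothed circular barrier of radius d.
# Dimensionless units: W = v_F = m = 1, so E_F = 1/2 and omega_c = (B/B0)/2.
def rhs(t, y, b, V0, d, w=0.05):            # b = B/B0; w = smoothing width (assumed)
    x, yy, vx, vy = y
    r = np.hypot(x, yy) + 1e-12
    s = np.exp((r - d) / w)
    dVdr = -V0 * s / (w * (1.0 + s) ** 2)   # derivative of V(r) = V0 / (1 + exp((r-d)/w))
    wc = 0.5 * b                            # cyclotron frequency in these units
    return [vx, vy, -dVdr * x / r + wc * vy, -dVdr * yy / r - wc * vx]

# One electron entering from the injecting lead at x = -W with height y_in and angle theta:
y_in, theta, b = 0.2, 0.1, 1.0
sol = solve_ivp(rhs, (0.0, 40.0), [-1.0, y_in, np.cos(theta), np.sin(theta)],
                args=(b, 0.2 * 0.5, 0.5), max_step=0.02, rtol=1e-8)
# Repeating this over injection points and angles and recording through which lead
# (|x| > 1 or |y| > 1) the trajectory leaves the junction gives T_R, T_L and T_F.
```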
Using the Landauer-Büttiker formalism for the Hall geometry with identical leads, the Hall $`R_H`$ and the bend $`R_B`$ resistances are given by
$$R_H=\frac{\mu _4-\mu _2}{eI_{13}}=\frac{h}{2e^2}\frac{T_R^2-T_L^2}{Z},$$ (1)
$$R_B=\frac{\mu _2-\mu _3}{eI_{14}}=\frac{h}{2e^2}\frac{T_F^2-T_LT_R}{Z},$$ (2)
where $`Z=\left[T_R^2+T_L^2+2T_F\left(T_R+T_F+T_L\right)\right]\left(T_R+T_L\right)`$ and $`T_R`$, $`T_L`$, $`T_F`$ are the probabilities for an electron to end up in the right probe, the left probe and the forward probe, respectively. These probabilities will be calculated using the ballistic billiard model . In the following we will express the magnetic field in units of $`B_0=mv_F/(2eW)`$, and the resistance in $`R_0=\left(h/2e^2\right)\pi /(2k_FW)`$, where $`W`$ is the half width of the leads, $`m`$ is the mass of the electron, $`k_F=\sqrt{2mE_F/\hbar ^2}`$, and $`v_F=\hbar k_F/m`$ the Fermi velocity. For electrons moving in GaAs ($`m=0.067m_e`$) and for a typical channel width of $`2W=1\mu m`$ and a Fermi energy of $`E_F=10meV`$ $`\left(n_e=2.8\times 10^{11}\mathrm{cm}^{-2}\right)`$, we obtain $`B_0=0.087T`$ and $`R_0=0.308k\mathrm{\Omega }`$. In order to demonstrate the main physics involved, we consider first a mathematically simple potential profile, namely a rectangular potential barrier with radius $`d`$ and height $`V_0`$ placed in the center of the cross junction: $`V\left(r\right)=V_0`$ if $`r<d`$ and $`V\left(r\right)=0`$ if $`r>d`$. This potential barrier is schematically shown in the inset of Fig. 1(a) by the shaded circular area. Inside the potential barrier the kinetic energy of the electrons is reduced to $`E_F-V_0`$, with $`E_F`$ the kinetic energy of the electrons outside this region. Hence, the electron velocity $`v`$, the density $`n`$ of the electrons and also the radius $`R_c`$ of the cyclotron orbit are reduced inside the potential region. For $`V_0<0`$ the opposite occurs. This will result in changes in the transmission probabilities $`T_R`$, $`T_F`$, $`T_L`$ and consequently it will alter the Hall and bend resistances. In Fig. 1 we show the Hall resistance $`R_H`$ and the bend resistance $`R_B`$ as a function of the externally applied magnetic field for different sizes and heights of the rectangular potential profiles. Fig. 1(a) shows the Hall resistance and Fig. 1(b) the bend resistance for different potential heights $`V_0`$ but fixed radius $`d=0.5W`$, while in Fig. 1(c) and 1(d) the radius $`d`$ is varied and the potential height $`V_0=0.2E_F`$ is kept fixed. Notice that there exists a critical magnetic field $`B_c=B_c(d)`$, such that for $`B>B_c`$ no electrons enter the area of the potential barrier, because their cyclotron radius is so small that they skip along the edge of the probe without ’feeling’ the potential barrier. For $`B>B_c`$ the diameter of the cyclotron orbit, $`2R_c=2v_F/\omega _c`$, is less than the distance between the edge of the rectangular potential barrier and the corner of the cross junction. Therefore, the electrons do not feel the presence of the $`V\ne 0`$ region in the cross junction and the Hall and bend resistances equal the classical $`2D`$ values: $`R_H/R_0=2B/\pi B_0`$ and $`R_B=0`$. A simple calculation of this critical field gives $`B_c/B_0=4/(\sqrt{2}-d/W)`$, which results in $`B_c/B_0=4.4,7.8,18.7`$ for $`d/W=0.5,0.9,1.2`$, respectively, and which agrees with the results of Figs. 1(a,c).
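Given the turning probabilities obtained from the billiard simulation, Eqs. (1) and (2) are straightforward to evaluate; the short sketch below (our own, with purely illustrative placeholder values for the probabilities) returns the resistances in units of $`h/2e^2`$:

```python
def hall_and_bend(T_R, T_L, T_F):
    """Hall and bend resistances of the symmetric cross, Eqs. (1)-(2),
    in units of h/(2 e^2)."""
    Z = (T_R**2 + T_L**2 + 2.0 * T_F * (T_R + T_F + T_L)) * (T_R + T_L)
    return (T_R**2 - T_L**2) / Z, (T_F**2 - T_L * T_R) / Z

# Illustrative values only: at weak field T_R is only slightly larger than T_L, so
# R_H is small and controlled by T_R - T_L, while R_B is set by T_F^2 - T_R*T_L.
R_H, R_B = hall_and_bend(T_R=0.55, T_L=0.45, T_F=1.2)
```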
For $`B<B_c`$ the electron trajectories are only weakly bent and consequently they sample the $`V\ne 0`$ region. The latter region increases (decreases) the turning probability $`T_R`$ and hence reduces (enhances) $`R_B`$ and enhances (reduces) $`R_H`$, as is clearly observed in Fig. 1 for $`V_0>0`$ ($`V_0<0`$). For low potential barriers, i.e. $`|V_0|\ll E_F`$, we have $`R_B\simeq 0`$ for $`B/B_0>2`$, which is substantially below $`B_c`$. The reason is that almost no electrons finish in probes 2 and 3, i.e. $`T_L\simeq 0\simeq T_F`$, and consequently $`R_B=0`$. At the magnetic field $`B=2B_0`$ the cyclotron diameter, $`2R_c`$, equals the probe width, $`2W`$. For higher potential barriers some of the electrons are deflected by the potential barrier into probe 2 and hence $`R_B<0`$. This is clearly observed in Fig. 1(b). We found that, even in the presence of an inhomogeneous potential profile, the Hall resistance is linear for small magnetic fields (i.e. $`B\ll B_c`$). The slope increases with the radius and with the height of the rectangular potential barrier. We analyzed the Hall factor $`\alpha =R_H/B`$ for $`B\ll B_c`$ in Fig. 2(a) as a function of the radius $`d`$ for $`V_0=0.2E_F`$ and in Fig. 2(b) as a function of the height $`V_0`$ for $`d=0.4W`$. Notice that the Hall factor increases with $`d`$ and $`V_0`$. Increasing $`d`$ or $`V_0`$ results in an increase of the average potential $`\langle V\rangle `$ or in a reduction of the average electron density $`\langle n_e\rangle `$ in the cross junction. For a rectangular potential profile we find for the average electron density in the cross junction $`\langle n_e\rangle =n_e\left[1-\pi \left(d/2W\right)^2V_0/E_F\right]`$, from which we define an effective Hall coefficient $`\alpha ^{\ast }=\left(R_H/B\right)\langle n_e\rangle /n_e`$, which is shown by the dashed curve in Fig. 2. Notice that $`\alpha ^{\ast }`$ is almost independent (within 2%) of $`d`$ for $`d/W<1.0`$ if $`V_0`$ is constant and practically independent of $`V_0`$ for $`V_0/E_F<0.5`$ if $`d`$ is constant. For $`V_0/E_F>0.5`$ the potential profile is no longer a weak perturbation on the electron motion and consequently the Hall resistance can no longer be described in terms of an average electron density in the cross junction. The fact that $`\alpha ^{\ast }`$ is practically independent of $`V_0`$ and $`d`$ indicates that for low magnetic fields the Hall resistance is completely determined by the average potential in the cross region, and is independent of the detailed potential profile as long as $`\langle V\rangle /E_F<0.5`$. This is the major conclusion of this work, which will be confirmed further. The bend resistance $`R_B`$, on the other hand, is much more sensitive to the exact form of the potential barrier. For example: the bend resistance for a wide, but low barrier ($`d=0.9W`$, $`V_0=0.2E_F`$) is not equal to the bend resistance for a narrow, but high barrier ($`d=0.5W`$, $`V_0=0.648E_F`$), even though the average potential in the junction region is the same (see Figs. 1(b,d)). Next, we investigate the effect of the functional form of the potential by considering, as an example, a gaussian potential profile in the center of the junction: $`V_g\left(r\right)=V_{g,0}\mathrm{exp}\left(-r^2/d_g^2\right)`$ with $`V_{g,0}`$ the height and $`2d_g`$ the width at half height. In Fig. 3 we compare the Hall factors $`\alpha `$ and $`\alpha ^{\ast }`$ of the rectangular potential (solid curves) with height $`V_0=0.2E_F`$ and radius $`d`$, and the gaussian potential (symbols) with width $`d_g=d`$ and height $`V_{g,0}`$ as a function of the radius $`d`$.
$`V_{g,0}`$ is varied with $`d`$ such that the average potential inside the cross junction is the same for the two potential profiles $`\left(\langle V\rangle =\langle V_g\rangle \right)`$. The difference between the effective Hall factors $`\alpha ^{\ast }`$ in the two cases is negligible, which illustrates again that the Hall resistance is completely determined by the average potential in the cross region, and is independent of the detailed potential profile. In the inset of Fig. 3 we show, as an example, the rectangular potential with $`d=0.5W`$ and $`V_0=0.2E_F`$ and the corresponding gaussian potential with $`d_g=0.5W`$ and $`V_{g,0}\simeq 0.202E_F`$, which has the same $`\langle V\rangle `$. Finally, we investigate the effect of displacing the rectangular potential barrier away from the center of the junction. As an example, we consider a rectangular potential barrier with height $`V_0=0.2E_F`$ and radius $`d=0.5W`$ which is displaced by different distances $`\rho /W=0,\pm 0.1,\pm 0.2,\pm 0.3,\pm 0.4,\pm 0.5`$ from the center of the cross junction in different directions $`\phi =0,\pi /4,\pi /2,3\pi /4`$ with respect to the x-axis. Notice that in all these cases the entire potential barrier is inside the cross junction and hence the average potential in the cross junction is the same. Because the problem is no longer symmetric, we are not allowed to use the reduced Eqs. (1) and (2) but have to retain the original Landauer-Büttiker formula . In Fig. 4 we show that the change in the effective Hall factor $`\alpha ^{\ast }`$ as a function of the distance $`\rho `$ for the different directions $`\phi `$ is less than 1% for $`d=0.5W`$ and $`V_0=0.2E_F`$. Only when the circular potential barrier is very close to one of the probes does the deviation become of the order of 1%. This result illustrates that for low magnetic fields the Hall resistance is completely determined by the average potential in the cross region, and is independent of the detailed position of the potential barrier in the cross junction. In conclusion, we investigated the Hall and the bend resistances in the presence of an inhomogeneous potential profile in the junction of a mesoscopic Hall bar. We found that in the low magnetic field regime the Hall resistance is linear in the magnetic field and is determined by the average potential in the cross junction, independent of the shape and the position of the potential barrier as long as $`\langle V\rangle /E_F<0.5`$. This general result makes such a Hall device a powerful experimental tool for non-invasive investigations of induced potential profiles. The bend resistance depends much more sensitively on the detailed shape and position of the potential profile. The magnetic field at which the classical 2D Hall resistance is recovered gives us information on the size of the potential barrier. The present results are valid in the ballistic regime and are expected, as in the case of the Hall magnetometer , to be modified in the diffusive regime. Acknowledgments: This work is supported by IUAP-IV, the Flemish Science Foundation (FWO-Vl) and the Inter-University Micro-Electronics Center (IMEC, Leuven).
# Microwave transport approach to the coherence of interchain hopping in (TMTSF)₂PF₆

## 1 Introduction

Due to a pronounced chain structure, the (TMTSF)₂X [X = PF₆, AsF₆, ClO₄, …] Bechgaard salts have become the prototypical examples of quasi-one-dimensional (Q1D) conductors, with the highest conductivity direction parallel to the stacking axis (a) of the TMTSF molecules. Their low temperature properties have attracted much attention since various transitions such as incommensurate spin-density-wave (SDW), superconductivity, field-induced SDW, quantum Hall effect, etc., have been observed Ishiguro90 . Recently, the normal phase (i.e. above the transition temperature of the broken symmetry ground states) has attracted much interest. Since the tunneling integral along the chain direction ($`t_a\simeq 0.25`$ eV) is at least one order of magnitude larger than the transfer integrals $`t_b`$ and $`t_c`$ in the transverse directions ($`t_b\simeq 200`$ K and $`t_c\simeq 10`$ K), the organic metallic chains are usually considered as weakly coupled. Although the coupling along c is likely irrelevant over a large temperature domain, the effective value of $`t_b`$ has to be considered to specify the dimensionality of the electron gas in the normal phase. At sufficiently high temperatures, however, the physical properties are expected to be essentially governed by 1D phenomena. It is well known that, in a strictly 1D interacting electron gas, the Fermi liquid (FL) picture breaks down and must be replaced by the so-called Luttinger liquid (LL) description Schulz91 . Since non-negligible interchain couplings along the b’ direction exist in the Bechgaard salts, departures from the LL model Boies95 might be induced. Indeed, when the temperature is progressively lowered, transverse b’ interactions are expected to become more effective, so that a crossover from a Q1D to a two-dimensional (2D) electron gas picture should occur: the FL behavior might be recovered provided that the Coulomb interactions are not too strong. However, the actual value of the crossover temperature $`T_x`$ is highly debated. According to simple band calculations, the dimensional crossover for the single particle motion is expected to occur at $`T_x\simeq 150`$–$`200`$ K. This is in agreement with the temperature dependence of the longitudinal DC resistivity, which shows a transition regime from a roughly linear behavior to a $`T^2`$ profile (indicative of a FL behavior dominated by electron-electron scattering effects) over that temperature range Jerome94 . However, photoemission spectra Dardel93 ; Zwick97 are incompatible with a FL picture over that temperature range, and early optical experiments with the light polarized along the transverse b’ direction failed to show evidence of a coherent plasma edge above 50 K Jacobsen83 . Moreover, deviations from the FL picture have also been observed down to 50 K in NMR experiments, suggesting an upper bound for the crossover temperature Bourbonnais93 ; Wzietek93 . Furthermore, the frequency dependence of the conductivity is well known to display unusual features Schwartz98 ; Vescoli98 : for frequencies above the effective interchain transfer integral, the electrodynamics is consistent with the prediction of the LL picture, while at low frequencies, pronounced deviations with respect to the Drude picture are present Timusk96 .
Among all the experimental approaches used to study the dimensionality of the electron gas, transverse transport measurements are particularly relevant to directly probe the interchain couplings. It was further realized that since the transverse transfer integrals are small, an electric field applied along the b’ or c\* directions could also act as a probe of the physical properties in the plane perpendicular to that direction. Early resistivity measurements Jacobsen81b along the hard axis (c\*) have shown a non-monotonic behavior of the temperature profile: a maximum of $`\rho _c`$ was observed near 80 K. More recently, a strong pressure dependence of this unusual feature was evidenced Moser98 and a typical 1D power-law profile was found above the characteristic $`\rho _c`$ maximum. This maximum was then ascribed to a broad crossover regime indicative of a deconfinement of the carriers from the chain axis; this results in a gradual onset of coherent transport along b’ below 80 K, suggesting a FL behavior in the a-b’ plane. However, the anisotropy ratio $`\rho _c/\rho _a`$ was not found to be temperature independent, as expected from FL arguments, and an incipient Fermi liquid was therefore invoked. These observations contrast with the work of Gor’kov et al. Gorkov98 ; Gorkov96 ; Gorkov95 who recently argued that the longitudinal transport properties, below 60 K and down to the SDW transition temperature, can be well accounted for in terms of a weakly interacting Fermi liquid. However, such a quasi-particle like signature (if any) should also be detected in the b’ transverse direction. Reliable measurements of the transverse transport properties along b’ are highly needed to clarify the present controversy. Unfortunately, since the Bechgaard salts have a pronounced needle shape whose axis is parallel to the chains, transverse transport measurements along b’ are particularly difficult to perform with usual DC methods. Owing to non-uniform current distributions between contacts, parasitic contributions from other directions are likely introduced. These problems can be avoided by using a contactless microwave technique which allows better control of the orientation of the current lines in these organic needles. In this paper, we report microwave resistivity data obtained along the transverse directions in (TMTSF)₂PF₆ crystals. These data clearly indicate the temperature range over which the coherence of interchain hopping sets in, and they confirm strong deviations from a Fermi liquid description.

## 2 Experiment

High quality single crystals of (TMTSF)₂PF₆ have been synthesized by the standard electrochemical method, with typical dimensions ($`8\times 0.25\times 0.1`$ mm³) along the a, b’ and c\* axes, respectively. Such a needle geometry is not suitable for precise measurement of the transverse transport properties; this is particularly true for our microwave technique, which yields very accurate data only when the electric field is oriented along the needle’s axis. Each needle was therefore cut into three pieces in order to perform the measurements along the a, b’ and c\* axes (natural faces of the single crystals) on the same single crystal. Each piece was then cut into small blocks so that it could be reconstructed in the shape of a needle having one of the crystal directions as its axis.
We used a conventional cavity perturbation technique at 16.5 GHz Fertey97 to obtain the electrical resistivity along each crystal axis as a function of the temperature (2-300 K) and of a magnetic field (0-14 Tesla) applied along the c\* direction. Unfortunately, due to the microwave resonator design, no data could be collected with both the magnetic field and the electric field along the c\* axis. To prevent microcracks the samples were slowly cooled to the lowest temperature at 0.6 K/min. The temperature was monitored either with a Si diode (zero field) or with a capacitance sensor (fields up to 14 T).

## 3 Results and discussion

Along the high conductivity axis, the microwave resistivity has been determined by using the Hagen-Rubens limit (skin depth regime). Since the conductivity is much lower along the b’ and c\* axes, the microwave data were instead analyzed in the framework of the metallic limit of the quasistatic approximation Musfeldt95 . We report in Figure 1 the temperature profile of the resistivity along the (a), (b’) and (c\*) axes of (TMTSF)₂PF₆ in the normal phase at zero magnetic field. The orders of magnitude are in good agreement with published DC results Jacobsen81b ; Moser98 and the SDW phase is evidenced for each crystal direction by an abrupt resistivity increase below 12 K. Along the chain axis (a), the usual metallic behavior is observed down to 13.6 K, where the resistivity reaches a minimum. Interestingly, the microwave resistivity profile clearly displays, near 45 K, a change of slope, whereas the DC one usually shows a single quadratic behavior below 100 K (dashed-dotted line in Figure 1). Along the least conducting direction (c\*), the microwave resistivity profile is consistent with the DC curve Moser98 : it first increases when the temperature is decreased from 300 K (not shown on the figure), reaches a maximum near 90 K and recovers a metallic behavior on further cooling. The minimum value is obtained near 15.3 K. The b’ resistivity presents a definitely different profile, being almost flat (below 120 K) on the logarithmic scale compared to the other crystal directions. This profile is shown in more detail in Figure 2. On lowering the temperature, the resistivity $`\rho _b^{\prime }`$ first decreases monotonically down to a local minimum around 75 K, increases slightly to reach a local maximum near 40 K and decreases again down to 15 K before entering the SDW phase below 12 K. This peculiar profile observed between 12 and 80 K is weakly sample dependent: this is illustrated in Figure 3, where we compare the normalized resistivity (relative to the value just above the SDW transition) obtained on three different samples (different batches). Such a dependency can be explained by two factors: i) a slight misalignment relative to one another of the small crystals used in the needle’s construction; ii) a different impurity content in crystals of different batches. A correlation with the latter factor is difficult to evaluate for the moment. However, it seems clear from Figure 3 that the local maximum of $`\rho _b^{\prime }`$ around 40 K is intrinsic to this crystal direction. Its absence from the DC resistivity profile Jacobsen81b could signify that the latter is significantly polluted by different components of the resistivity tensor, as previously mentioned. The emergence of an insulating behavior below 70 K ($`d\rho _b^{\prime }/dT<0`$ in Figure 2) refutes the possible existence of quasi-particle states down to 40 K.
A Fermi liquid description of interacting electrons above 40 K would, indeed, have required a quadratic temperature profile of both the $`\rho _a`$ and $`\rho _b^{\prime }`$ components. Between 50 K and 70 K, $`\rho _b^{\prime }`$ is better understood by assuming a Luttinger liquid behavior along the stacks: this yields a power-law increase $`\rho _b^{\prime }(T)\propto T^{2\alpha }`$, where $`\alpha `$ is the exponent of the single-particle density of states of the LL Moser98 . However, due to the reduced temperature domain used to fit the power law and the slight sample dependency of $`\rho _b^{\prime }`$ in this temperature range, the exponent $`\alpha `$ might only be approximate ($`2\alpha \simeq 0.3`$), which in turn prevents a reliable determination of the exponent $`K_\rho `$ characterizing the charge degrees of freedom of a LL. The resistivity maximum observed around 40 K mimics the temperature profile of $`\rho _c`$, which also displays a resistivity maximum at higher temperature, near 80 K Jacobsen81b ; Moser98 . Therefore, it could be attributed to the onset of a deconfinement of the carriers and the restoration of a 2D conductivity regime in the a-b plane below 40 K. This is supported by the observation of a marked increase of the resistivity, only for temperatures below 40 K, when a magnetic field is applied perpendicular to the 2D plane of motion (Figure 2). This is exemplified in the inset of Figure 2, which displays the variation of the resistivity relative to its zero-field value as a function of the magnetic field for temperatures between 16 and 80 K. It is well known that in the Bechgaard salts, a magnetic field applied along the hard axis c\* confines the motion of the carriers to the chain axis. Therefore, a reduction of the metallic behavior parallel to b’ is expected if there is some coherent motion along that direction. Below 40 K, the progressively increasing magnetoresistivity can then be interpreted as the signature of the 2D motion. On the contrary, since no magnetoresistivity is observed above 40 K, transverse coherent hopping is apparently absent and coherent motion is confined to the organic stacks. This observation of a dimensional crossover around 40 K supports the predictions of the renormalization group theory: a Luttinger liquid picture may persist down to the low temperature region when many-body effects on interchain hopping are considered Bourbonnais85 . Indeed, one-dimensional many-body effects are expected to lower the efficiency of interchain tunneling, thereby decreasing its amplitude. However, transient effects are likely associated with such a crossover, since the deconfinement region is not sharply defined but spread out in temperature: in the coherent regime (16-30 K), $`\rho _b^{\prime }`$ does not show a quadratic temperature profile (the power law exponent ranges from 0.37 to 0.075 for the three samples studied). This signifies that important deviations from a FL quasi-particle transport exist along the b’ axis, as exemplified by the temperature profile of the microwave resistivity anisotropy ratios $`\rho _a/\rho _b^{\prime }`$ and $`\rho _a/\rho _c`$ presented in Figure 4. Below 70 K, $`\rho _a/\rho _c`$ is practically constant: it can be reasonably fitted by a $`T^{0.15}`$ law, in clear contrast with the results of Moser et al. Moser98 ($`T^{0.5}`$) over the same temperature range. On the contrary, $`\rho _a/\rho _b^{\prime }`$ continuously decreases from 300 K down to the SDW transition temperature.
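The power-law exponents quoted in the preceding paragraphs (the $`T^{2\alpha }`$ regime of $`\rho _b^{\prime }`$ and the near-constant $`T^{0.15}`$ behavior of $`\rho _a/\rho _c`$) follow from a straight-line fit in log-log coordinates; a minimal sketch of such an extraction is given below, with hypothetical array names standing in for the measured temperature and resistivity points (this is our illustration, not the fitting code actually used):

```python
import numpy as np

def power_law_exponent(T, rho):
    """Fit rho ~ A * T^p over the supplied temperature window and return p.
    T, rho: 1-D arrays of temperatures and (microwave) resistivities."""
    p, log_A = np.polyfit(np.log(np.asarray(T)), np.log(np.asarray(rho)), 1)
    return p

# e.g. restricting T and rho to the 50-70 K window for rho_b' should return the
# exponent 2*alpha ~ 0.3 quoted in the text; the arrays are placeholders for data.
```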
In $`\rho _a/\rho _b^{\prime }`$, only a change of curvature is observed around the dimensional crossover temperature near 40 K, seemingly in contradiction with the constant ratio expected for a true FL behavior. The onset of coherent transport along the transverse direction is also supported by features observed in the microwave resistivity along the chain axis. We show, in Figure 5, a significant increase of the resistivity for a magnetic field of 14 Tesla applied along c\*. As seen in the inset, a significant magnetoresistance is again observed only for temperatures below 40 K, consistent with a dimensional crossover in that range. The change of slope previously identified in $`\rho _a`$ around 45 K in zero field thus signals the onset of transverse coherent transport.

## 4 Conclusion

The microwave resistivity data reported in this paper for (TMTSF)₂PF₆ crystals clearly show the onset of coherent transport properties along the intermediate conductivity direction b’. This temperature scale is evidenced by a resistivity maximum around 40 K along the b’ axis and supported by a progressively increasing magnetoresistance below 40 K when a magnetic field is applied along c\*. Similar effects observed in the temperature profile of the longitudinal resistivity $`\rho _a`$ confirm the dimensional crossover. Furthermore, a temperature profile analysis of $`\rho _b^{\prime }`$ has failed to detect any clear-cut Fermi liquid component in the whole normal phase domain. Our results seem to be in good agreement with previous results such as NMR Bourbonnais93 ; Wzietek93 , photoemission Dardel93 or optical data Timusk96 , which claim the absence of any quasi-particle features. These results, however, contrast with reflectance Schwartz98 ; Vescoli98 or very low temperature magnetotransport Danner94 data, which seem to be well described by bare band parameters. A detailed theoretical framework that would clarify why some experimental probes are apparently strongly sensitive to many-body effects while others are not is so far missing.

## Acknowledgements

The authors are grateful to J. Beerens for giving access to his 14 Tesla experimental set-up, to C. Bourbonnais for critical reading of the manuscript and useful suggestions during the course of this work, and to M. Castonguay and J. Corbin for technical assistance. This work was supported by grants from the Fonds pour la Formation de Chercheurs et l’Aide à la Recherche of the Government of Québec (FCAR) and from the Natural Sciences and Engineering Research Council of Canada (NSERC).
# A Search for Aperiodic Millisecond Variability in Cygnus X-1

## 1 Introduction

Identifying and understanding short timescale variability in cosmic sources has repeatedly led to a better understanding of their fundamental nature and of the important physical processes present. For example, during the last decade, studies of fast time variability in low-mass X-ray binaries concentrated on quasi-periodic oscillations (QPOs) and various noise components at frequencies up to 2 kHz. The study of QPOs and associated noise components led to a qualitatively more complete understanding of the accretion processes and the various omnipresent instabilities (van der Klis (1997)). Similar breakthroughs in our understanding of black hole candidates have yet to occur. The dynamical and radiative timescales in the inner disk of accreting black hole candidates are predicted to be in the millisecond range. The thin disk models of Wallinder, Kato & Abramowicz (1992), scaled to stellar mass black holes, have local thermal and acoustic timescales $`<`$ 1 ms, and quasiperiodic variability in the X-ray emission is predicted at these timescales. Bao & Østgaard (1995) have numerically modeled orbiting shots in a geometrically thin accretion disk around a black hole including all relativistic effects. The shots, or “hot spots”, were simulated to radiate photons isotropically in their proper rest frames. For various shot distributions and different inclination angles, they find that the power density spectrum (PDS) exhibits a “cutoff” at the Keplerian frequency corresponding to the inner edge of the accretion disk. This cutoff is present for both optically thick and thin disks. The accretion model of Nowak & Wagoner (1995) also predicts a sharp cutoff in the PDS, falling as $`f^{-5}`$ for frequencies greater than the Keplerian frequency of the inner edge of the disk. The $`f^{-5}`$ dependence arises from the three-dimensional hydrodynamic turbulent flow interior to the edge of the disk. Advection-dominated disk models for black holes (Chakrabarti & Titarchuk (1996); Narayan (1996)) provide a self-consistent explanation for the energy spectra of hard and soft states of black hole candidate binary systems. In these types of models a standing shock can develop in the accretion flow at about 10–30 Schwarzschild radii ($`R_{Sch}`$). The location of the shock defines an effective inner edge for both the disk and the halo components that can lead to abrupt changes in the PDS. Depending on the mass and angular momentum of the black hole, these effects are predicted to be in the 3–100 Hz range. There are also interesting theoretical predictions of QPOs at a few hundred Hz arising from the special character of BH accretion (Perez et al. (1997)). Millisecond variability in Cyg X-1 has been reported twice. Rothschild et al. (1974) reported millisecond bursts in an observation of Cyg X-1 obtained with a rocket experiment. These bursts appeared as excess counts over that expected from Poisson statistics, assuming that the Poisson expectation remains constant. However, the leakage of variability at lower frequencies ($`\sim 10`$ Hz) into the higher frequencies of interest ($`\sim 1000`$ Hz) invalidates this assumption (Press & Schechter (1974); Weisskopf & Sutherland (1978)). Indeed, when the pre-1978 literature is carefully reviewed, the analysis of Cyg X-1 timing spectra from a number of experiments shows no conclusive evidence for millisecond variability (Weisskopf & Sutherland (1978)).
More recent results show no model-independent evidence for millisecond variability (Lochner, Swank & Szymkowiak (1989)), except in the context of the shot model (Lochner, Swank & Szymkowiak ; Negoro et al. (1995)). Lochner et al. (1991) used the phase portrait idea to determine parameters of a shot noise model. Using data from HEAO A-2 and EXOSAT, they find evidence for characteristic shot durations lasting from milliseconds to a few seconds. However, this analysis is model-dependent. Meekins et al. (1984) (hereafter M84 ) worked to untangle leakage effects from slower time scales and developed a $`\chi ^2`$ method to claim detected variability at timescales of 0.3–3.0 ms in a HEAO A-1 observation of Cyg X-1 with 8 $`\mu `$s resolution. This result has been prominently quoted as the only strong evidence for variability in Cyg X-1 at millisecond time scales; for example, see van der Klis (1995) and Liang (1998). In this paper we present a reanalysis of HEAO A-1 data and an analysis of Rossi X-ray Timing Explorer (RXTE) data that contradict the apparently clear detection by M84 of millisecond power in the PDS of Cyg X-1. From our analyses, we conclude that the observed millisecond power in the M84 PDS is due to either the known HEAO A-1 reset problem (Wood et al. (1984)) or a previously unknown instrumental effect.

## 2 Analyses

### 2.1 Observations

We have analyzed archival observations of Cyg X-1 and the supernova remnant Cas A from the High Energy Astrophysics Observatory A-1 experiment (HEAO A-1). We have also analyzed new observations of Cyg X-1 made by the Rossi X-ray Timing Explorer (RXTE). The HEAO A-1 observations were made on 1978 May 7 while Cyg X-1 was in the hard (low) state. Cas A was observed on 1978 August 2. Data from this presumably Poisson source were used to model the response of the detector and to search for instrumental effects. Both sets of A-1 data were recorded using the high bit rate mode described below. The RXTE observations occurred on 1996 June 8, 1996 June 17, 1996 June 27 and 1996 July 12 as part of our approved RXTE Guest Investigator program. Similar observations have been previously published (Cui et al. (1997); Belloni et al. (1996)). Cyg X-1 was in its soft (high) state at the time of the RXTE observations. Table 1 shows the observation dates and times for all of the data used in our analyses. We have used two techniques to analyze the HEAO A-1 data: the relative integral power method of M84 and the standard Fast Fourier transform (FFT) power spectrum method. The RXTE observations were analyzed using the FFT power spectrum method only. For the HEAO A-1 analyses, we have derived new methods to correct the data for both dead time and instrumental effects. For the RXTE analysis we use the standard RXTE dead time correction (Zhang et al. (1995)).

### 2.2 Analysis of the HEAO A-1 Data of Cas A and Cyg X-1

The HEAO A-1 data were recorded in the high bit rate (HBR) mode and consisted of a series of zeros and ones. A zero indicated no photons in the previous 8 $`\mu `$s and a one indicated that at least one photon was detected in the 8 $`\mu `$s interval. There was no energy information in this mode. The energy range covered by these observations is about 1 to 25 keV. The data for Cas A were Fourier transformed to search for deviations from the expected Poisson source spectrum. For a Poisson source measured by a detector with no dead time, the expected spectrum is flat with a value of 2, using the Leahy normalization (Leahy et al. (1983)).
Introducing dead time into the system slightly reduces the normalization value, but the shape remains relatively flat in the region in which we are interested. The observed Fourier power for Cas A was not flat, but instead showed a broad “knee” in the spectrum, as shown in Figure 1. The distribution of differences in photon arrival times ($`\mathrm{\Delta }t_\gamma `$) showed a kink in the expected offset exponential distribution. This effect was modeled under the assumption that it was a previously uncorrected instrumental effect. The effect may have been unique to the HBR data or general to the HEAO A-1 data, but it would have been difficult to observe in the well-studied 5 and 320 ms binned data modes. In the HBR PDS it was apparent only at frequencies above about 100 Hz, which is the Nyquist frequency for the 5 ms data. It was not possible to determine the times between individual events for the 5 and 320 ms modes since the data were binned; therefore the kink in the $`\mathrm{\Delta }t_\gamma `$ distribution is not likely to be observable. We searched the 5 ms data for this effect and did not find any indications of it. Eadie et al. (1971) note that the hyperexponential function is applicable in situations where there is a mixture of exponential processes. We find that an offset hyperexponential function is a good representation of the HEAO A-1 HBR $`\mathrm{\Delta }t_\gamma `$ distribution:
$$f_H(t)=U(t-\tau )\left(p_1\rho _1e^{-\rho _1(t-\tau )}+(1-p_1)\rho _2e^{-\rho _2(t-\tau )}\right)$$ (1)
where $`U(t-\tau )`$ is the Heaviside step function, $`\tau `$ is the dead time, $`\rho _1`$ and $`\rho _2`$ are the count rates for two Poisson processes and $`p_1`$ is the probability of generating a $`\mathrm{\Delta }t_\gamma `$ from the first Poisson process. The model was fitted to the Cas A data and the results are shown in Figure 2.

### 2.3 Power Spectrum Analysis of HEAO A-1 Data

We analyzed the Cyg X-1 data using a Fourier transform and observed a spectrum with a shape similar to that of the Cas A PDS. We again interpreted this as a manifestation of either the instrument reset problem or of a previously unreported instrumental effect. We determined the effective Poisson noise floor in the presence of the instrumental effect for a non-Poisson source, Cyg X-1. Using the following procedure, we fit Equation 1 to the Cyg X-1 $`\mathrm{\Delta }t_\gamma `$ distribution. Random $`\mathrm{\Delta }t_\gamma `$ were drawn from Equation 1 as defined by the above fit parameters and accumulated to generate absolute times. This Monte Carlo time series was Fourier transformed in the same manner as the data were. The $`\chi ^2`$ of the Monte Carlo PDS with respect to the data PDS was calculated for frequencies above 100 Hz. This procedure was repeated for a grid of parameter values whose origin was defined by the initial fit to the $`\mathrm{\Delta }t_\gamma `$ distribution. The resulting best fit PDS defined the effective Poisson noise floor. The resultant noise-corrected PDS for the HEAO data is shown in Fig. 3. Fig. 4 shows the region of the PDS above 10 Hz to better examine the power at high frequencies. No statistically significant power above the noise floor is observed above 25 Hz. This is consistent with our assumption that power above 100 Hz is attributable to Poisson noise.

### 2.4 Relative integral power analysis

M84 derived a statistic, which they called the relative integral power, to quantify aperiodic variability. The details of their approach are described in Section III of their paper.
The new statistic, $`P_{rel}`$, of M84 is defined as the total discrete Fourier transform power of the mean-subtracted time series divided by the square of the total number of x-ray counts, $`N^2`$, for a time series of length $`T`$ divided into $`m`$ equal-length bins.
$$P_{rel}\equiv \frac{1}{N^2}\left[\sum _{j=-(m/2-1)}^{m/2}|a_j|^2-a_0^2\right]=\frac{\chi ^2}{N}$$ (2)
where the $`a_j`$ are the standard Fourier coefficients
$$a_j=\sum _{k=0}^{m-1}x_ke^{2\pi ijk/m},$$ (3)
$`x_k`$ is the number of events in the $`k`$th time bin, and $`a_j`$ is the Fourier coefficient at frequency $`f_j(=j/T)`$. The distribution of power variability over all possible frequencies forms the Fourier power spectrum. With the Leahy normalization, this power spectrum is given by Leahy et al. (1983):
$$P_j=\frac{2|a_j|^2}{N},j=1,\dots ,m/2-1$$ (4)
where $`N`$ is the total number of x-ray counts observed in the time interval 0$``$T. The M84 analysis did not include corrections for dead time or instrumental effects. We have derived an approximation to the relative integral power that allows for simple corrections to equation 1 of M84 for these effects. We define $`\chi ^2`$ as
$$\chi ^2=\sum _{j=1}^{m/2-1}P_j+\frac{1}{2}P_{m/2}$$ (5)
where $`m`$ is the number of time bins in the data segment under consideration. As in M84, the 9 minutes of HEAO A-1 data are divided into $`L`$ contiguous data segments of width $`\mathrm{\Delta }t_{seg}`$, each containing $`m`$ (= 10) time bins, and the various quantities are calculated. This is repeated with $`T\equiv \mathrm{\Delta }t_{seg}`$ = 0.3 ms, 1 ms, 3 ms, 10 ms, 30 ms and 100 ms. The average of the $`\chi ^2`$ in Equation 5 over the entire ensemble of 10-bin data segments for a given $`\mathrm{\Delta }t_{seg}`$ can be approximated by
$$\langle \chi ^2\rangle \simeq \frac{m-1}{2}\langle P\rangle $$ (6)
where $`\langle P\rangle `$ is the average Leahy-normalized power over the entire ensemble of 10-bin data segments ($`L\simeq `$ 9 minutes$`/\mathrm{\Delta }t_{seg}`$) and over the set of frequencies $`f_j=1/\mathrm{\Delta }t_{seg},2/\mathrm{\Delta }t_{seg},\dots ,m/(2\mathrm{\Delta }t_{seg})`$. Using equation 6 for the average $`\chi ^2`$ in equation 1 of M84 yields
$$P_{rel}\simeq \frac{\frac{m-1}{2}\left[\langle P\rangle -P_{noise}\right]}{(N-1)}$$ (7)
for each $`\mathrm{\Delta }t_{seg}`$. The expected noise is simply proportional to the Poisson floor. The M84 analysis of the HEAO A-1 HBR observation of Cyg X-1 found an excess of variability at time scales above 1 ms and a sharp cutoff at about 1 ms . We have reanalyzed the same observation using their method without dead time and instrumental corrections and have found excellent consistency with their results. There are some minor discrepancies which can be attributed to differences in the bin offsets and the use of a different digitization of the original analog tape. The comparison of our analysis to that of M84 is shown in Figure 5. By ignoring instrumental effects, M84 chose a value of 2 for the noise floor, where 2 is the value at all frequencies of the Fourier transform of a Poisson source in the Leahy normalization. Using Equation 1 as the probability distribution for the noise floor, we have calculated the expected noise floor in Equation 7. We binned the data in a different manner than the original M84 analysis. The uncorrected results with the new binning are shown in Figure 6. The shape and normalization are in good agreement with those of the original M84 work, shown in Figure 5. Figure 7 shows the results of correcting for standard Poisson dead time.
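For concreteness, Eqs. (2)-(5) can be transcribed numerically as follows (a minimal sketch of our own; `counts` is a hypothetical array holding one m-bin segment of the binned light curve, and only the magnitudes $`|a_j|^2`$ matter, so the FFT sign convention is irrelevant):

```python
import numpy as np

def leahy_powers(counts):
    """Leahy-normalized powers P_j = 2|a_j|^2 / N, Eq. (4), for one segment."""
    counts = np.asarray(counts, dtype=float)
    a = np.fft.fft(counts)                 # Fourier coefficients a_j of Eq. (3)
    return 2.0 * np.abs(a) ** 2 / counts.sum()

def relative_integral_power(counts):
    """chi^2 of Eq. (5) and P_rel = chi^2 / N of Eq. (2) for one m-bin segment."""
    counts = np.asarray(counts, dtype=float)
    m = len(counts)
    P = leahy_powers(counts)
    chi2 = P[1:m // 2].sum() + 0.5 * P[m // 2]
    return chi2 / counts.sum()

# Averaging chi^2 over all contiguous m = 10 bin segments of width Delta t_seg and
# subtracting the expected noise term, as in Eq. (7), reproduces the
# relative-integral-power curves of Figures 5-8.
```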
Note that, with the standard Poisson dead time correction (Figure 7), the normalization is increased and the peak is broader, enhancing the effect observed by M84. The results of our analysis of the HEAO A-1 observation of Cyg X-1, corrected for dead time and instrumental effects, are shown in Figure 8. We see no evidence for the previously reported rise in the relative integral power once the corrections for the previously uncorrected instrumental effects, or the manifestation of the known reset problem, are applied.

### 2.5 Power Spectrum Analysis of RXTE Data

We have analyzed four RXTE/PCA observations of Cyg X-1 (see Table 1). The RXTE data were recorded with 4 $`\mu `$s time resolution. During all four observations the source was in the high (soft) state. Figure 9 shows the RXTE All Sky Monitor (ASM) light curve for Cyg X-1 around the time of our observations. We binned the data into 50 microsecond bins. The light curves were divided into equal segments of 26 seconds. An FFT was performed on each data segment. The results were averaged over all segments and over equal logarithmic frequency intervals. We used the Leahy normalization for the PDS. The dead time corrected Poisson noise power was then subtracted from the PDS obtained to yield the remaining signal above the noise. To determine the Poisson noise floor, we calculated the Poisson power spectrum, correcting for nonparalyzable dead time using Equation 44 in Zhang et al. (1995) with a dead time of 10 microseconds (Zhang & Jahoda (1998)). Corrections were not made for the energy dependent dead time or for very large events. However, below 30 Hz, these corrections are not significant and can be ignored (Cui et al. (1997)). Figure 10 shows the Poisson noise subtracted PDS for these observations. Figure 11 shows the 10–30 Hz region on a linear scale to show the behavior of the PDS as it approaches the limit imposed by the corrections. Figure 12 shows the PDS’s for both RXTE and HEAO A-1 on the same axes.

## 3 Results

### 3.1 HEAO A-1

As reported above, we have discovered either an unknown instrumental effect or a manifestation of the known reset problem in the HEAO A-1 high bit rate data. This effect could not have been discovered using the binned 5 ms and 320 ms timing resolution modes of HEAO A-1 because their Nyquist frequencies are too low. Either the effect is observable only with the higher 8 $`\mu `$s time resolution of the HBR data, or it is a mode-dependent problem that only occurs in the HBR data mode. Once this effect is taken into account, we observe no evidence for the rise or the sharp cutoff in the relative integral power for Cyg X-1 reported by M84. We observe excess power with a 95% confidence level at frequencies below 25 Hz in the noise subtracted PDS from Cyg X-1 in its hard state. Above 30–40 Hz, the noise subtracted PDS is consistent with the null hypothesis. In the region where excess power is significant, we find that the spectral shape can be described by a power-law spectrum with a break at 3 Hz. From 0.1 to 3 Hz the spectral index is $`1.20\pm 0.08`$ and above 3 Hz the spectrum steepens to $`1.7\pm 0.2`$.

### 3.2 RXTE

Our results are consistent with previously published results (Cui et al. (1997); Belloni et al. (1996)). Below 30 Hz we find that the spectral shape can be described by a broken power law with the break occurring at about 10 Hz. Below 10 Hz the spectral index is 1.05$`\pm `$0.01 and between 10 and 30 Hz the spectral index steepens to 1.75$`\pm `$0.03.
The lack of corrections for very large events and energy dependent dead time in the standard RXTE corrections makes it impossible to extend the search for excess power beyond about 30 Hz at this time. There are adequate data in the sample to extend the search to higher frequencies once these additional corrections are developed. Additional work on these corrections is necessary to exploit the full timing capabilities of RXTE to search for excess power using this method.

## 4 Conclusions

In light of our discovery of either a new instrumental effect or a correction for the known reset problem that accounts for the observed relative integral power of M84, there is no longer any evidence for model-independent aperiodic variability on millisecond timescales from Cyg X-1. This lack of observed variability does not rule out its existence. RXTE should have the capability to make measurements of excess power at millisecond time scales once the appropriate corrections are available. Our results for the Cyg X-1 PDS’s are consistent with previous measurements in the hard and soft states. Previous measurements of Cyg X-1 often show a flat PDS below about 0.1 Hz, although the location of the maximum frequency varies from about 0.04 to 0.4 Hz. The minimum frequency studied here is only 0.1 Hz, and we see no indication of a flat PDS in our data. Our RXTE spectral shape determination is consistent with other RXTE observations (Cui et al. (1997); Belloni et al. (1996)) made around the same time. In both sets of observations the spectral shape is similar to that observed for most black-hole candidates in the same state (van der Klis (1995)). In order to extend the range of these searches into the regime where they can start to impact the models discussed above, we must make several improvements to the data and the techniques. At present, the limiting factor for the RXTE data is the lack of adequate background subtraction at high frequencies. This is being worked on (Zhang & Jahoda (1998)) but is not yet available. Cross checks on Poisson sources, as presented here using Cas A, are invaluable in searching for uncorrected dead time and instrument difficulties. To maximize the utility of the cross checks, the cross checking observations are best done in the same data taking modes and at around the same times as the observations of the source under study. Unfortunately, the currently available instrument data sets do not have these properties. Work supported by Department of Energy contract DE-AC03-76SF00515, a NASA RXTE Guest Investigator grant, the Office of Naval Research, and Stanford University.
# Non-thermal Production of Neutralino Cold Dark Matter from Cosmic String Decays

## I Introduction

In spite of the increasing evidence that cold matter (matter with pressure $`p=0`$) makes up less than the critical density $`\rho _c`$ for a spatially flat Universe, equally strong evidence for the existence of a substantial amount of cold dark matter (CDM) remains. The best current estimates give $`\mathrm{\Omega }_{CDM}\simeq 0.3`$ whereas $`\mathrm{\Omega }_B<0.1`$ (here, $`\mathrm{\Omega }_X=\rho _X/\rho _c`$ denotes the fractional contribution of $`X`$ matter to $`\rho _c`$, and $`B`$ stands for the contribution of baryons). The leading candidates for cold dark matter are the axion and the neutralino. The axion is a neutral spin-zero pseudo-Goldstone boson associated with the spontaneous breaking of the global $`U_{PQ}(1)`$ symmetry, which was introduced by Peccei and Quinn as a solution to the strong CP problem. At zero temperature the axion mass is given by
$$m_a\simeq 6\times 10^{-6}\mathrm{eV}N\left(\frac{10^{12}\mathrm{GeV}}{f_a}\right)$$
where $`f_a`$ is the Peccei-Quinn symmetry breaking scale and N is a positive integer which describes the color anomaly of $`U_{PQ}(1)`$. Axions can be produced by three different mechanisms: vacuum alignment, axion string decay and axion domain wall decay . Cosmology yields an upper limit on $`f_a`$ of $`f_a\lesssim 10^{12}`$ GeV. The neutralino is a hypothetical electrically neutral particle which arises in supersymmetric models. In many such models, e.g. in the MSSM (the minimal supersymmetric standard model), the lightest supersymmetric particle (LSP) is stable, unless R-parity violating interactions are included. The LSP is generally thought to be the lightest neutralino $`\chi `$. The neutralinos in the Universe today are in general assumed to be a relic of an initially thermal neutralino distribution in the hot early Universe. Based on this thermal production mechanism, there have been many calculations of the LSP abundance (for a review, see e.g. ) as a function of the MSSM parameters. These studies show that there exists a domain of parameter space in the MSSM which is consistent with all of the present experimental constraints and for which the $`\chi `$ has a relic mass density $`\mathrm{\Omega }_\chi \simeq 1`$. However, cosmology also imposes limits on the LSP mass. In the case of a Bino-like LSP, the calculation of Refs. yields $`M_{\stackrel{~}{B}}\lesssim 300`$ GeV. A recent study relaxes this upper bound to about 600 GeV by including the $`\stackrel{~}{B}`$ coannihilations with the $`\stackrel{~}{e}`$ and $`\stackrel{~}{\mu }`$. In this paper, we propose a new non-thermal production mechanism for the LSP. We consider models with an extra $`U(1)`$ gauge symmetry in extensions of the MSSM. This $`U(1)`$ symmetry could be $`U_{B-L}(1)`$, where $`B`$ and $`L`$ are respectively baryon and lepton numbers. Such models explain the neutrino masses via the see-saw mechanism. Another possibility is that the new $`U(1)`$ corresponds to a $`U(1)^{\prime }`$ from string theory or grand unified theories . The basic idea of our mechanism is as follows. When the extra $`U(1)`$ symmetry which we have introduced gets broken at a scale $`\eta `$, a network of strings is produced by the usual Kibble mechanism . The initial separation of the strings is microscopic, of the order $`\lambda ^{-1}\eta ^{-1}`$ (where $`\lambda `$ is a typical Higgs self-coupling constant of the $`U(1)`$ sector of the theory), which implies that a substantial fraction of the energy density of the Universe is trapped in strings.
After the symmetry breaking phase transition, the defect network coarsens. In the process, string loops decay. If, as we assume, the fields excited in the strings couple to the neutralino $`\chi `$, then a non-thermal distribution of $`\chi `$ particles will be generated during the process of string decay. The total energy density in $`\chi `$ particles will depend on the scale $`\eta `$ of $`U(1)`$ symmetry breaking. The presence of our alternative generation mechanism for $`\chi `$ particles relaxes the constraints on the mass of the $`\chi `$. Even if the usual thermal generation mechanism is too weak to generate $`\mathrm{\Omega }_\chi \simeq 1`$, our new non-thermal mechanism may, for appropriate values of $`\eta `$, be able to lead to $`\mathrm{\Omega }_\chi \simeq 1`$. In fact, we find that if $`\eta <10^8\mathrm{GeV}`$ and $`M_\chi \simeq 100\mathrm{GeV}`$, then our mechanism will lead to $`\mathrm{\Omega }_\chi >1`$, unless the couplings of the $`U(1)`$ sector to $`\chi `$ are small. Note that there are similarities between our non-thermal production and the mechanism based on preheating proposed in . To begin with, we consider a general case and calculate the relic mass density of the LSP, and then we move on to a discussion of some implications. ## II LSP Production via String Decay Local cosmic strings form at phase transitions associated with the spontaneous symmetry breaking of a gauge group $`G`$ down to a subgroup $`H`$ of $`G`$ if the first homotopy group of the vacuum manifold $`\pi _1(\frac{G}{H})`$ is nontrivial. We suppose the existence of such a phase transition, which is induced by the vacuum expectation value (vev) of some Higgs field $`\mathrm{\Phi }`$, $`|\mathrm{\Phi }|=\eta `$, and takes place at a temperature $`T_c`$ with $`T_c\simeq \eta `$. The strings are formed by the Higgs field $`\mathrm{\Phi }`$ and some gauge field $`A`$ of $`G`$ whose generator is broken by the vev of $`\mathrm{\Phi }`$. We assume that the generator of $`G`$ associated with $`A`$ is diagonal so that the strings are abelian. The mass per unit length of the strings is given by $`\mu =\eta ^2`$. During the phase transition, a network of strings forms, consisting of both infinite strings and cosmic string loops. After the transition, the infinite string network coarsens and more loops form from the intercommuting of infinite strings. Cosmic string loops lose their energy by emitting gravitational radiation. When the radius of a loop becomes of the order of the string width, the loop releases its final energy into a burst of $`\mathrm{\Phi }`$ and $`A`$ particles. Those particles subsequently decay into LSPs, which we denote by $`\chi `$, with branching ratios $`ϵ`$ and $`ϵ^{}`$. For simplicity we now assume that all the final string energy goes into $`\mathrm{\Phi }`$ particles. A single decaying cosmic string loop thus releases $`N\simeq 2\pi \lambda ^{-1}ϵ`$ LSPs, which we take to have a monochromatic distribution with energy $`E\simeq \frac{T_c}{2}`$. In such scenarios, we thus have two sources of cold dark matter which will contribute to the matter density of the universe. We have CDM which comes from the standard scenario of thermal production; it gives a contribution to the matter density $`\mathrm{\Omega }_{therm}`$. And we also have non-thermal production of CDM which comes from the decay of cosmic string loops and gives a contribution $`\mathrm{\Omega }_{nonth}`$. The total CDM density is $`\mathrm{\Omega }_{CDM}=\mathrm{\Omega }_{therm}+\mathrm{\Omega }_{nonth}`$. 
During the temperature interval between $`T_c`$ and the LSP freezeout temperature $`T_\chi `$, LSPs released by decaying comic string loops will thermalise very quickly with the surrounding plasma, and hence will contribute to $`\mathrm{\Omega }_{therm}`$, which should not sensitively deviate from the value calculated by the standard method . However, below the LSP freezeout temperature, since the annihilation of the LSP is by definition negligible, each CDM particle released by cosmic string decays will contribute to $`\mathrm{\Omega }_{nonth}`$. We obviously must have $$\mathrm{\Omega }_{nonth}<1.$$ (1) This will lead us to a constraint (a lower bound) on the cosmic string forming scale. We now calculate $`\mathrm{\Omega }_{nonth}`$. We assume that the strings evolve in the friction dominated regime so that the very small scale structure on the strings has not formed yet. The network of strings can then be described by a single length scale $`\xi (t)`$ <sup>*</sup><sup>*</sup>*The friction dominated regime lasts from the time $`t_c`$ at which the strings network forms until a time $`t_{}(G\mu )^1t_c`$, where $`G`$ is Newton’s constant . In our scenario, the CDM is produced at and below the LSP freezeout temperature $`T_\chi 10^210^3`$ GeV. Hence for $`T_c(10^{10.5}10^{11})\mathrm{GeV}=T_c^{}`$, when the temperature of the Universe reaches $`T_\chi `$, the strings are still in the friction dominated regime. Since we are looking for a lower bound $`T_c^l`$ on the scale $`\eta `$ of the strings, and since as we will show below $`T_c^lT_c^{}`$, the time interval of interest in our scenario is in the friction dominated regime.. In the friction dominated period, the length scale $`\xi (t)`$ has been shown to scale as : $$\xi (t)=\xi (t_c)\left(\frac{t}{t_c}\right)^{\frac{3}{2}}$$ (2) where $`\xi (t_c)(\lambda \eta )^1`$ and $`\lambda `$ is the Higgs self quartic coupling constant. The number density of cosmic string loops created per unit of time is given by : $$\frac{dn}{dt}=\nu \xi ^4\frac{d\xi }{dt}$$ (3) where $`\nu `$ is a constant of order 1. We are interested in loops decaying below $`T_\chi `$. The number density of LSP released from $`t_{lsp}`$ till today is given by: $$n_{lsp}^{nonth}(t_0)=N\nu _{\xi _F}^{\xi _0}\left(\frac{t}{t_0}\right)^{\frac{3}{2}}\xi ^4𝑑\xi $$ (4) where the subscript $`0`$ refers to parameters which are evaluated today. $`\xi _F=\xi (t_F)`$ where $`t_F`$ is the time at which cosmic string loops which are decaying at time $`t_\chi `$ (associated with the LSP freezeout temperature $`T_\chi `$) have formed. Now the loop’s average radius shrinks at a rate $`\frac{dR}{dt}=\mathrm{\Gamma }_{loops}G\mu `$, where $`\mathrm{\Gamma }_{loops}`$ is a numerical factor $`1020`$. Since loops form at time $`t_F`$ with an average radius $`R(t_F)\lambda ^1G\mu M_{pl}^{\frac{1}{2}}t_F^{\frac{3}{2}}`$, they have shrunk to a point at the time $`t\lambda ^1\mathrm{\Gamma }_{loops}^1M_{pl}^{\frac{1}{2}}t_F^{\frac{3}{2}}`$. Thus $`t_F(\lambda \mathrm{\Gamma })_{loops}^{\frac{2}{3}}M_{pl}^{\frac{1}{3}}t_\chi ^{\frac{2}{3}}`$. Now the entropy density is $`s=\frac{2\pi ^2}{45}g_{}T^3`$ where $`g_{}`$ counts the number of massless degrees of freedom in the corresponding phase. The time $`t`$ and temperature $`T`$ are related by $`t=0.3g_{}^{\frac{1}{2}}(T)\frac{M_{pl}}{T^2}`$ where $`M_{pl}`$ is the Planck mass. 
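The two background relations just quoted, the entropy density and the radiation-era time-temperature relation, are used repeatedly below, so a small helper may be useful. The following Python sketch is an illustration only, not part of the original paper; the sign of the exponent on the effective number of relativistic degrees of freedom is not preserved in the extracted text and is restored here from the standard radiation-dominated form.

```python
import math

M_PL = 1.22e19  # Planck mass in GeV

def t_of_T(T, g_star):
    """Cosmic time (GeV^-1) at temperature T (GeV): t ~ 0.3 g*^(-1/2) M_pl / T^2."""
    return 0.3 * g_star ** -0.5 * M_PL / T ** 2

def entropy_density(T, g_star):
    """Entropy density s = (2 pi^2 / 45) g* T^3, in GeV^3."""
    return (2.0 * math.pi ** 2 / 45.0) * g_star * T ** 3

# e.g. at the freezeout temperature T_chi ~ M_chi / 20 for M_chi = 100 GeV
print(t_of_T(5.0, 90.0), entropy_density(5.0, 90.0))
```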
Thus using Eqs.(2) and (4), we find that the LSP number density today released by decaying cosmic string loops is given by: $$Y_{LSP}^{nonth}=\frac{n_{lsp}^{nonth}}{s}=\frac{6.75}{\pi }ϵ\nu \lambda ^2\mathrm{\Gamma }_{loops}^2g_{_{T_c}}^{\frac{9}{4}}g_{_{T_\chi }}^{\frac{3}{4}}M_{pl}^2\frac{T_\chi ^4}{T_c^6},$$ (5) where the subscript on $`g^{}`$ refers to the time when $`g^{}`$ is evaluated. The LSP relic abundance is related to $`Y_\chi `$ by: $`\mathrm{\Omega }_\chi h^2`$ $``$ $`M_\chi Y_\chi s(t_0)\rho _c(t_0)^1h^2`$ (6) $``$ $`2.82\times 10^8Y_\chi ^{tot}(M_\chi /\mathrm{GeV})`$ (7) where $`h`$ is the Hubble parameter and $`M_\chi `$ is the LSP mass. Now $`Y_{LSP}^{tot}=Y_\chi ^{therm}+Y_\chi ^{nonth}`$; hence by setting $`h=0.70`$, Eqs. (7) and (1) lead to the following constraint: $$5.75\times 10^8Y_\chi ^{nonth}(M_\chi /\mathrm{GeV})<1.$$ (8) We thus see that Eqs. (5) and (8) lead to a lower bound on the cosmic string forming temperature $`T_c`$. Recent measurements of cosmological parameters from the cosmic microwave background radiation combined with Type IA supernovae show evidence for a cosmological constant. In such a scenario, the relic matter density satisfies $`\mathrm{\Omega }_Mh^20.35`$. In Fig. 1, we have plotted the bound on $`T_c`$ as a function of $`ϵ^{\frac{1}{5}}M_\chi `$ for both $`\mathrm{\Omega }_\chi h^2=1`$ and $`\mathrm{\Omega }_\chi h^2=0.35`$. We have set $`g_{_{T_c}}=250`$, $`g_{_{T_\chi }}=90`$, $`T_\chi =\frac{m_\chi }{20}`$, $`M_{pl}=1.22\times 10^{19}`$ GeV, and the cosmic string parameters $`\nu =1`$, $`\lambda =0.5`$ and $`\mathrm{\Gamma }=10`$. The region above each curves corresponds to $`\mathrm{\Omega }_\chi h^2<1`$ ($`\mathrm{\Omega }_\chi h^2<0.35`$ respectively), and the region below to $`\mathrm{\Omega }_\chi h^2>1`$ ($`\mathrm{\Omega }_\chi h^2>0.35`$ respectively); this region is excluded by observations. We see that if there is a cosmological constant, a slightly stronger bound on $`T_c`$ is obtained. ## III Implications for Phenomenology Our results have important implications for supersymmetric extensions of the standard model with extra $`U(1)`$’s (or grand unified models with an intermediate $`SU(3)_c\times SU(2)_L\times U(1)_Y\times U(1)^{}`$ gauge symmetry). Most importantly, the requirement $`\mathrm{\Omega }_{nonth}<1`$ imposes a new constraint on supersymmetric model building and rules out many models with a low scale of a new symmetry breaking which produces defects such as cosmic strings. Consider, for example, the model with an extra $`U_{BL}(1)`$ gauge symmetry. In this model, the spectrum of the standard model is extended to include right-handed neutrinos $`N_i`$. The light neutrinos receive masses via the see-saw mechanism and the matter-antimatter asymmetry of the universe is generated by the out-of-equilibrium decay of these right-handed neutrinos. In the latter case, leptogenesis can occur by the decay of cosmic strings associated with the spontaneous breaking of the $`U_{BL}(1)`$ gauge symmetry . In the supersymmetric version of this model, the strings will release not only right-handed neutrinos $`N_i`$, but also their superpartners $`\stackrel{~}{N}_i`$. The heavy neutrinos $`N_i`$ and their scalar partners $`\stackrel{~}{N}_i`$ can decay into various final states including the LSP. 
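Before turning to the decay channels in detail, note that Eqs. (5) and (8) can be combined into a direct numerical estimate of the bound plotted in Fig. 1. The Python sketch below is hedged: the signs of the exponents on the degrees-of-freedom factors and on the Planck mass are not preserved in the extracted Eq. (5), so the combination used here is an assumption fixed by dimensional analysis and by requiring the roughly 1e8 GeV scale quoted in the next section; the parameter values follow the ones listed above.

```python
import math

M_PL = 1.22e19  # Planck mass, GeV

def Tc_lower_bound(M_chi, eps=0.2, nu=1.0, lam=0.5, Gamma=10.0,
                   g_Tc=250.0, g_Tchi=90.0):
    """Smallest string scale T_c (GeV) allowed by Eq. (8), using Eq. (5).
    The exponent signs on the g factors and M_pl are assumptions (see lead-in)."""
    T_chi = M_chi / 20.0                               # freezeout temperature
    Y_times_Tc6 = ((6.75 / math.pi) * eps * nu * lam**2 * Gamma**2
                   * g_Tc**(-9.0 / 4.0) * g_Tchi**(3.0 / 4.0) * M_PL**2 * T_chi**4)
    # Eq. (8): 5.75e8 * (Y_times_Tc6 / T_c**6) * (M_chi / GeV) < 1
    return (5.75e8 * Y_times_Tc6 * M_chi) ** (1.0 / 6.0)

for M in (50.0, 100.0, 300.0):
    print(f"M_chi = {M:5.0f} GeV  ->  T_c >~ {Tc_lower_bound(M):.1e} GeV")
```

With the quoted parameters this reproduces a lower bound of order 1e8 GeV for a 100 GeV neutralino, consistent with the number used in the phenomenological discussion.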
The superpotential relevant to the decays is $$W=H_1ϵLy_lE^c+H_2ϵLy_\nu N^c,$$ where $`H_1,H_2,L,E^c`$ and $`N^c`$ are the chiral superfields and $`y_l,y_\nu `$ are Yukawa couplings for the lepton and neutrino Dirac masses, $`m_l=y_lv_1,m_D=y_\nu v_2`$, with $`v_{1,2}`$ being the vacuum expectation values of the Higgs fields. At tree level, the decay rates of $`N_i`$ into s-lepton plus Higgsino and lepton plus Higgs are the same, and they are smaller than the rate of $`\stackrel{~}{N}_i`$ decaying into s-lepton plus Higgs and Higgsino plus lepton by a factor of 2. If the neutralino is higgsino-like, the LSP arises directly from the decays of the $`N_i`$ and $`\stackrel{~}{N}_i`$. If the neutralino is bino- or photino-like, subsequent decays of s-leptons into binos or photinos plus leptons will produce the LSP. For reasonable values of the parameters, we estimate the branching ratio $`ϵ`$ of the heavy particle decay into the LSP to be between 0.1 and 0.5. From Eq. (5) it follows that string decays can easily produce the required amount of LSPs. However, too many LSPs will be generated unless the $`BL`$ breaking scale, $`\mathrm{\Lambda }_{BL}`$, is higher than about $`10^8`$ GeV . In turn, this will set an upper limit on the neutrino masses generated by the see-saw mechanism, $`m_\nu \simeq m_D^2/\mathrm{\Lambda }_{BL}`$. Inserting numbers and taking $`m_D\simeq m_\tau \simeq 1.8`$ GeV, one obtains $`m_\nu \lesssim 30`$ eV. In models with spontaneous breaking of a $`U_{BL}(1)`$ gauge symmetry, upper and lower bounds on the $`BL`$ breaking scale have already been derived from considerations of cosmic rays from string decay and from leptogenesis , respectively. Our lower bound on the $`BL`$ breaking scale is independent of leptogenesis. Our lower limit on the $`BL`$ symmetry breaking scale in gauged $`BL`$ models, and in general models with an extra $`U(1)`$, pushes the mass of the new gauge boson far above the Fermi scale, rendering it impossible to test the new physics signals from the extra $`Z^{}`$ in accelerators. To summarize, we have pointed out a new production mechanism for neutralino dark matter which can be effective in many models beyond the MSSM, namely models with extra gauge symmetries which admit topological defects. The decay of these defects gives rise to a nonthermal contribution to the neutralino density. We have focused on the nonthermal production of LSPs from string decays; similarly, one could consider LSP production from other topological defects. We have calculated the relic LSP mass density $`\mathrm{\Omega }_{nonth}`$ as a function of the string scale, the freezeout temperature and the mass of the LSP. The LSP mass density has two contributions: one from thermal production, which has been calculated by many authors in the literature before, and the other from the non-thermal production calculated in this paper. Our results indicate that if the scale $`\eta `$ of string production is about $`10^8`$ GeV, then our nonthermal mechanism can produce the required closure density of LSPs. For values of $`\eta `$ smaller than the above bound, the model is in conflict with observations since the LSPs would overclose the Universe. One important caveat must be made concerning our calculations. Cosmic strings arising in supersymmetric models are generically superconducting . In this case, the string dynamics may be very different from that of ordinary strings, the dynamics assumed in this paper, and thus the corresponding constraints on particle physics model building would be quite different. 
Nevertheless, the main point that cosmic string decay in extensions of the MSSM can yield a new production mechanism for dark matter remains unaffected. Acknowledgments We are grateful to A.-C. Davis and G. Senjanović for useful discussions. This work is supported in part by the National Natural Science Foundation of China and by the U.S. Department of Energy under Contract DE-FG02-91ER40688, TASK A.
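The see-saw estimate quoted in Section III is simple arithmetic; as a quick check, the snippet below (an illustration only) evaluates the light neutrino mass from the quoted Dirac mass and B-L breaking scale.

```python
# Back-of-envelope see-saw check: m_nu ~ m_D**2 / Lambda_BL.
# With m_D ~ m_tau ~ 1.8 GeV and Lambda_BL at its lower bound of 1e8 GeV this
# gives the ~30 eV figure; raising Lambda_BL above the bound lowers m_nu further.
def m_nu_eV(m_D_GeV=1.8, Lambda_BL_GeV=1e8):
    return m_D_GeV ** 2 / Lambda_BL_GeV * 1e9   # convert GeV -> eV

print(f"m_nu ~ {m_nu_eV():.0f} eV at Lambda_BL = 1e8 GeV")
```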
no-problem/9901/math9901057.html
ar5iv
text
# Multiplicities of Points on Schubert Varieties in Grassmannians ## 1. Main result An important invariant of a singular point on an algebraic variety $`X`$ is its *multiplicity*: the normalized leading coefficient of the Hilbert polynomial of the local ring. The main result of the present note is an explicit determinantal formula for the multiplicities of points on Schubert varieties in Grassmannians. This is a simplification of a formula obtained in . More recently, the recurrence relations for multiplicities of points on more general (partial) flag varieties were obtained in . However, to the best of our knowledge the case of Grassmannians remains the only case for which an explicit formula for multiplicities is available. Fix positive integers $`d`$ and $`n`$ with $`0dn`$, and consider the Grassmannian $`Gr_d(V)`$ of $`d`$-dimensional subspaces in a $`n`$-dimensional vector space $`V`$ (over an algebraically closed field of arbitrary characteristic). Recall that Schubert varieties in $`Gr_d(V)`$ are parameterized by the set $`I_{d,n}`$ of integer vectors $`𝐢=(i_1,\mathrm{},i_d)`$ such that $`1i_1<\mathrm{}<i_dn`$. For a given complete flag $`\{0\}=V_0V_1\mathrm{}V_n=V`$, the Schubert variety $`X_𝐢`$ is defined as follows: $$X_𝐢:=\{WGr_d(V)dim(WV_{i_k})k\text{ for }k=1,\mathrm{},d\}.$$ The Schubert cell $`X_𝐢^0`$ is an open subset in $`X_𝐢`$ given by $$X_𝐢^0:=\{WX_𝐢dim(WV_{i_k1})=k1\text{ for }k=1,\mathrm{},d\}.$$ It is well known that the Schubert variety $`X_𝐢`$ is the disjoint union of Schubert cells $`X_𝐣^0`$ for all $`𝐣𝐢`$ in the componentwise partial order on $`I_{d,n}`$. The multiplicity of a point $`x`$ in $`X_𝐢`$ is constant on each Schubert cell $`X_𝐣^0X_𝐢`$, and we denote this multiplicity by $`M_𝐣(𝐢)`$. Our main result is the following explicit formula for $`M_𝐣(𝐢)`$ (where the binomial coefficients $`\left(\genfrac{}{}{0pt}{}{a}{b}\right)`$ are subject to the condition that $`\left(\genfrac{}{}{0pt}{}{a}{b}\right)=0`$ for $`b<0`$): ###### Theorem 1. The multiplicity $`M_𝐣(𝐢)`$ of a point $`xX_𝐣^0X_𝐢`$ is given by (1) $$M_𝐣(𝐢)=(1)^{s_1+\mathrm{}+s_d}det\left[\begin{array}{cccc}\left(\genfrac{}{}{0pt}{}{i_1}{s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{i_d}{s_d}\right)\\ \left(\genfrac{}{}{0pt}{}{i_1}{1s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{i_d}{1s_d}\right)\\ \mathrm{}& & & \mathrm{}\\ \left(\genfrac{}{}{0pt}{}{i_1}{d1s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{i_d}{d1s_d}\right)\end{array}\right],$$ where (2) $$s_q:=\mathrm{\#}\{j_pi_q<j_p\}.$$ The proof of Theorem 1 will be given in the next section. Although determinants of matrices formed by binomial coefficients were extensively studied by combinatorialists (see, e.g., ), the experts whom we consulted did not recognize the determinant in (1). We conclude this section by an example illustrating Theorem 1. ###### Example 2. Assume the indices $`𝐢,𝐣`$ satisfy $`j_di_1`$. In this situation the numbers $`s_1,\mathrm{},s_d`$ attain the smallest possible value: $`s_1=\mathrm{}=s_d=0`$. Then the $`(p,q)`$-entry of the determinant in (1) has the form $`P_p(i_q)`$, where $`P_p(t)`$ is a polynomial with the leading term $`t^{p1}/(p1)!`$. It follows that (3) $$M_𝐣(𝐢)=\frac{1}{1!\mathrm{}(d1)!}V(𝐢)=\frac{1}{1!\mathrm{}(d1)!}\underset{p>q}{}(i_pi_q),$$ where $`V(𝐢)`$ is the Vandermonde determinant $`det((i_q^{p1}))`$. ## 2. 
Proof of Theorem 1 Fix two vectors $`𝐣𝐢`$ from $`I_{d,n}`$, and let $$\mathrm{deg}(𝐣,𝐢):=d\mathrm{\#}\{i_qi_q\{j_1,\mathrm{},j_d\}\}.$$ For a nonnegative integer vector $`𝐬=(s_1,\mathrm{},s_d)`$, we set $$|𝐬|:=s_1+\mathrm{}+s_d.$$ As shown in and \[3, page 202\], the multiplicity $`M_𝐣(𝐢)`$ satisfies the initial condition $`M_𝐣(𝐣)=1`$ and the partial difference equation (4) $$M_𝐣(𝐢)=\frac{1}{\mathrm{deg}(𝐣,𝐢)}\underset{𝐤}{}M_𝐣(𝐤),$$ where the sum is over all $`𝐤I_{d,n}`$ such that $`𝐣𝐤<𝐢`$, and $`|𝐤|=|𝐢|1`$. To prove $`(`$1$`)`$, we proceed by induction on $`|𝐢|`$. The initial step is to verify $`(`$1$`)`$ for $`𝐢=𝐣`$. In this case the numbers $`s_1,\mathrm{},s_d`$ attain their maximum possible value: $`s_q=dq`$. It follows that (5) $$(1)^{|𝐬|}det\left[\begin{array}{cccc}0& \mathrm{}& 0& 1\\ \mathrm{}& & 1& \\ 0& \text{.}.\text{.}& \text{.}.\text{.}& \mathrm{}\\ 1& & \mathrm{}& \end{array}\right]=1=M_𝐣(𝐣),$$ as required. For the inductive step, we introduce some notation. To any nonnegative integer vector $`𝐬=(s_1,\mathrm{},s_d)`$ we associate a polynomial $`P_𝐬(𝐭)[𝐭]=[t_1,\mathrm{},t_d]`$ defined by (6) $$P_𝐬(𝐭)=(1)^{|𝐬|}det\left[\begin{array}{cccc}\left(\genfrac{}{}{0pt}{}{t_1}{s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d}{s_d}\right)\\ \left(\genfrac{}{}{0pt}{}{t_1}{1s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d}{1s_d}\right)\\ \mathrm{}& & & \mathrm{}\\ \left(\genfrac{}{}{0pt}{}{t_1}{d1s_1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d}{d1s_d}\right)\end{array}\right];$$ here $`\left(\genfrac{}{}{0pt}{}{t}{s}\right)`$ is the polynomial $`t(t1)\mathrm{}(ts+1)/s!`$ for $`s0`$, and $`\left(\genfrac{}{}{0pt}{}{t}{s}\right)=0`$ for $`s<0`$. Thus our goal is to show that $`M_𝐣(𝐢)=P_𝐬(𝐢)`$ with $`𝐬`$ given by (2). For $`q=1,\mathrm{},d`$, let $`\mathrm{\Delta }_q:[𝐭][𝐭]`$ denote the partial difference operator $`\mathrm{\Delta }_qP(𝐭)=P(𝐭)P(𝐭e_q)`$, where $`e_1,\mathrm{},e_d`$ are the unit vectors in $`^d`$. Here is the key lemma. ###### Lemma 3. For any nonnegative integer vector $`𝐬`$, the corresponding polynomial $`P_𝐬(𝐭)`$ satisfies the partial difference equation (7) $$(\mathrm{\Delta }_1+\mathrm{}+\mathrm{\Delta }_d)P=0.$$ ###### Proof. First notice that the Vandermonde determinant $`V(𝐭)=_{p>q}(t_pt_q)`$ satisfies (7) since it is a non-zero skew-symmetric polynomial of minimal possible degree, and the operator $`\mathrm{\Delta }_1+\mathrm{}+\mathrm{\Delta }_d`$ preserves the space of skew-symmetric polynomials. The vector space of solutions of (7) is also invariant under translations $`𝐭𝐭+𝐤`$ so it is enough to show that each $`P_𝐬(𝐭)`$ is a linear combination of polynomials $`V(𝐭+𝐤)`$. Here is the desired expression: (8) $$P_𝐬(𝐭)=\frac{1}{1!\mathrm{}(d1)!}\underset{0𝐤𝐬}{}(1)^{|𝐤|}\left(\genfrac{}{}{0pt}{}{s_1}{k_1}\right)\mathrm{}\left(\genfrac{}{}{0pt}{}{s_d}{k_d}\right)V(𝐭+𝐤).$$ Let us prove (8). 
The same argument as in Example 2 above shows that (9) $$\frac{1}{1!\mathrm{}(d1)!}V(𝐭+𝐤)=det\left[\begin{array}{cccc}\left(\genfrac{}{}{0pt}{}{t_1+k_1}{0}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d+k_d}{0}\right)\\ \left(\genfrac{}{}{0pt}{}{t_1+k_1}{1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d+k_d}{1}\right)\\ \mathrm{}& & & \mathrm{}\\ \left(\genfrac{}{}{0pt}{}{t_1+k_1}{d1}\right)& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{t_d+k_d}{d1}\right)\end{array}\right].$$ Substituting this expression into (8) and performing the multiple summation, we see that the right hand side becomes the determinant of the $`d\times d`$ matrix whose $`(p,q)`$-entry is $$\underset{k_q=0}{\overset{s_q}{}}(1)^{k_q}\left(\genfrac{}{}{0pt}{}{s_q}{k_q}\right)\left(\genfrac{}{}{0pt}{}{t_q+k_q}{p1}\right)=(1)^{s_q}\left(\genfrac{}{}{0pt}{}{t_q}{p1s_q}\right)$$ (the last equality is a standard binomial identity). This completes the proof of (8) and Lemma 3. ∎ One last piece of preparation before performing the inductive step: the Pascal binomial identity $`\left(\genfrac{}{}{0pt}{}{t}{s}\right)=\left(\genfrac{}{}{0pt}{}{t1}{s}\right)+\left(\genfrac{}{}{0pt}{}{t1}{s1}\right)`$ implies that (10) $$\mathrm{\Delta }_qP_𝐬(𝐭)=P_{𝐬+e_q}(𝐭e_q)$$ for any nonnegative integer vector $`𝐬`$ and any $`q=1,\mathrm{},d`$. To conclude the proof of Theorem 1, suppose that $`𝐣<𝐢`$ and assume by induction that $`M_𝐣(𝐤)`$ is given by $`(`$1$`)`$ for any $`𝐤I_{d,n}`$ such that $`𝐣𝐤<𝐢`$. Let $`𝐬`$ be the vector given by (2). In view of $`(`$4$`)`$, the desired equality $`M_𝐣(𝐢)=P_𝐬(𝐢)`$ is a consequence of the following: (11) $$\mathrm{deg}(𝐣,𝐢)P_𝐬(𝐢)\underset{𝐤}{}M_𝐣(𝐤)=0,$$ where the sum is over all $`𝐤I_{d,n}`$ such that $`𝐣𝐤<𝐢`$, and $`|𝐤|=|𝐢|1`$. We shall deduce $`(`$11$`)`$ from the equality $$\underset{q=1}{\overset{d}{}}\mathrm{\Delta }_qP_𝐬(𝐢)=0$$ provided by Lemma 3. To do this, we compute $`\mathrm{\Delta }_qP_𝐬(𝐢)`$ in each of the following mutually exclusive cases (we use the conventions $`i_0=0`$ and $`s_0=d`$): Case 1: $`i_q\{j_1,\mathrm{},j_d\}`$, $`i_q1>i_{q1}`$. Then $`𝐤:=𝐢e_q`$ belongs to $`I_{d,n}`$, and we have $`𝐣𝐤`$. Replacing $`𝐢`$ by $`𝐤`$ in (2) does not change the vector $`𝐬`$. By our inductive assumption, $`P_𝐬(𝐤)=M_𝐣(𝐤)`$, and so $`\mathrm{\Delta }_qP_𝐬(𝐢)=P_𝐬(𝐢)M_𝐣(𝐤)`$. Case 2: $`i_q\{j_1,\mathrm{},j_d\}`$, $`i_q1=i_{q1}`$. For such $`q`$, we have $`P_𝐬(𝐢e_q)=0`$ since the corresponding determinant has the $`(q1)`$th and $`q`$th columns equal to each other. Thus $`\mathrm{\Delta }_qP_𝐬(𝐢)=P_𝐬(𝐢)`$. Case 3: $`i_q\{j_{q+1},\mathrm{},j_d\}`$, $`i_q1>i_{q1}`$. As in Case 1, we have $`𝐤:=𝐢e_qI_{d,n}`$, and $`𝐣𝐤`$. However now replacing $`𝐢`$ by $`𝐤`$ in (2) changes $`𝐬`$ to $`𝐬+e_q`$. Combining the inductive assumption with (10), we conclude that $`\mathrm{\Delta }_qP_𝐬(𝐢)=P_{𝐬+e_q}(𝐤)=M_𝐣(𝐤)`$. Case 4: $`i_q\{j_{q+1},\mathrm{},j_d\}`$, $`i_q1=i_{q1}`$. In this case, the $`d\times d`$ matrix whose determinant is $`P_{𝐬+e_q}(𝐢e_q)`$ has the $`(q1)`$th and $`q`$th columns equal to each other, hence $`\mathrm{\Delta }_qP_𝐬(𝐢)=P_{𝐬+e_q}(𝐤)=0`$. Case 5: $`i_q=j_q`$. Then we have $$s_1s_2\mathrm{}s_{q1}s_q+1=d+1q,$$ and so the $`d\times d`$ matrix whose determinant is $`P_{𝐬+e_q}(𝐢e_q)`$ has a zero $`(d+1q)\times q`$ submatrix. As in Case 4, this implies $`\mathrm{\Delta }_qP_𝐬(𝐢)=P_{𝐬+e_q}(𝐤)=0`$. Adding up the contributions $`\mathrm{\Delta }_qP_𝐬(𝐢)`$ from all these cases, we obtain $`(`$11$`)`$; this completes the proof of Theorem 1. ###### Remark 4. 
In , the multiplicity $`M_𝐣(𝐢)`$ was expressed as a multiple sum given by $`(`$8$`)`$. ###### Remark 5. The multiplicity $`M_𝐣(𝐢)`$ is by definition a positive integer. The partial difference equation $`(`$4$`)`$ (combined with the initial condition $`M_𝐣(𝐣)=1`$) makes the positivity of $`M_𝐣(𝐢)`$ obvious but the fact that $`M_𝐣(𝐢)`$ is an integer becomes rather mysterious. On the other hand, Theorem 1 makes it clear that $`M_𝐣(𝐢)`$ is an integer but not that $`M_𝐣(𝐢)>0`$. It would be interesting to find an expression for $`M_𝐣(𝐢)`$ that makes obvious both properties. ###### Remark 6. The space of all polynomial solutions of the partial difference equation $`(`$7$`)`$ can be described as follows. Let $`𝐲=(y_1,\mathrm{},y_d)`$ be an auxiliary set of variables, and let $`\phi :[𝐲][𝐭]`$ be the isomorphism of vectors spaces that sends each monomial $`_{q=1}^dy_q^{n_q}`$ to $`_{q=1}^dt_q(t_q+1)\mathrm{}(t_q+n_q1)`$. The map $`\phi `$ intertwines each $`\mathrm{\Delta }_q`$ with the partial derivative $`\frac{}{y_q}`$. It follows that the space of solutions of $`(`$7$`)`$ is the image under $`\phi `$ of the $``$-subalgebra in $`[𝐲]`$ generated by all differences $`y_py_q`$. ###### Remark 7. Jerzy Weyman informed us about the following determinantal formula (unpublished) for the multiplicity $`M_𝐣(𝐢)`$ in the special case when $`𝐣=(1,2,\mathrm{},d)`$. Let $`\lambda `$ be the partition $`(i_dd,\mathrm{},i_22,i_11)`$, and let $`\lambda =(\alpha _1,\mathrm{},\alpha _r|\beta _1,\mathrm{},\beta _r)`$ be the Frobenius notation of $`\lambda `$ (see ). According to J. Weyman, $`M_𝐣(𝐢)`$ is equal to the determinant of the $`r\times r`$ matrix whose $`(p,q)`$-entry is $`\left(\genfrac{}{}{0pt}{}{\alpha _p+\beta _q}{\alpha _p}\right)`$. It is not immediately clear why this determinantal expression agrees with the one given by $`(`$1$`)`$. ## Acknowledgements We are grateful to V. Lakshmibai who initiated this project by suggesting to one of us (J. R.) to publish the results of his thesis . We thank Sergey Fomin, Ira Gessel and Jerzy Weyman for helpful conversations.
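Finally, as a concrete companion to Theorem 1, the determinant (1) with the integers of (2) is straightforward to evaluate exactly. The short Python sketch below is an illustration only, not part of the paper; it reproduces the check that the multiplicity equals 1 when the two index vectors coincide, and the Vandermonde value of Example 2.

```python
from math import comb

def _det(M):
    """Exact integer determinant by cofactor expansion (d is small)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * _det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def schubert_multiplicity(i, j):
    """Multiplicity M_j(i) of Theorem 1; i, j are strictly increasing tuples, j <= i."""
    d = len(i)
    s = [sum(1 for jp in j if jp > iq) for iq in i]            # Eq. (2)
    M = [[comb(i[q], p - s[q]) if p - s[q] >= 0 else 0         # Eq. (1), row index p
          for q in range(d)] for p in range(d)]
    return (-1) ** sum(s) * _det(M)

print(schubert_multiplicity((2, 5, 7), (2, 5, 7)))  # 1, the initial case i = j
print(schubert_multiplicity((2, 4), (1, 2)))        # 2 = (4 - 2)/1!, as in Example 2
```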
no-problem/9901/cond-mat9901049.html
ar5iv
text
# Slow dynamics of water under pressure ## Abstract We perform lengthy molecular dynamics simulations of the SPC/E model of water to investigate the dynamics under pressure at many temperatures and compare with experimental measurements. We calculate the isochrones of the diffusion constant $`D`$ and observe power-law behavior of $`D`$ on lowering temperature with an apparent singularity at a temperature $`T_c(P)`$, as observed for water. Additional calculations show that the dynamics of the SPC/E model are consistent with slowing down due to the transient caging of molecules, as described by the mode-coupling theory (MCT). This supports the hypothesis that the apparent divergences of dynamic quantities along $`T_c(P)`$ in water may be associated with “slowing down” as described by MCT. On supercooling water at atmospheric pressure, many thermodynamic and dynamic quantities show power-law growth . This power law behavior also appears under pressure, which allows measurement of the locus of apparent power-law singularities in water \[Fig. 1(a)\]. The possible explanations of this behavior have generated a great deal of interest. In particular, three scenarios have been considered: (i) the existence of a spinodal bounding the stability of the liquid in the superheated, stretched, and supercooled states ; (ii) the existence of a liquid-liquid transition line between two liquid phases differing in density ; (iii) a singularity-free scenario in which the thermodynamic anomalies are related to the presence of low-density and low-entropy structural heterogeneities . Based on both experiments and recent simulations , several authors have suggested that the power-law behavior of dynamic quantities might be explained by the transient caging of molecules by neighboring molecules, as described by the mode-coupling theory (MCT) , which we address here. This explanation would indicate that the dynamics of water are explainable in the same framework developed for other fragile liquids , at least for temperatures above the homogeneous nucleation temperature $`T_H`$. Moreover, this explanation of the dynamic behavior on supercooling may be independent of the above scenarios suggested for thermodynamic behavior \[Fig. 1(a)\]. Here we focus on the behavior of the diffusion constant $`D`$ under pressure, which has been studied experimentally . We perform molecular dynamics simulations in the temperature range 210 K – 350 K for densities ranging from 0.95 g/cm<sup>3</sup> – 1.40 g/cm<sup>3</sup> \[Table I\] using the extended simple point charge potential (SPC/E) . We select the SPC/E potential because it has been previously shown to display power-law behavior of dynamic quantities, as observed in supercooled water at ambient pressure . In Fig. 2, we compare the behavior of $`D`$ under pressure at several temperatures for our simulations and the experiments of ref. . The anomalous increase in $`D`$ is qualitatively reproduced by SPC/E, but the quantitative increase of $`D`$ is significantly larger than that observed experimentally. This discrepancy may arise form the fact that the SPC/E potential is under-structured relative to water , so applying pressure allows for more bond breaking and thus greater diffusivity than observed experimentally. We also find that the pressure where $`D`$ begins to decrease with pressure – normal behavior for a liquid – is larger than that observed experimentally. 
This simple comparison of $`D`$ leads us to expect that the qualitative dynamic features we observe in the SPC/E potential will aid in the understanding of the dynamics of water under pressure, but will likely not be quantitatively accurate. We next determine the approximate form of the lines of constant $`D`$ (isochrones) by interpolating our data over the region of the phase diagram studied \[Fig. 1(b)\]. We note that the locus of points where the slope of the isochrones changes sign (i.e. the locus of points where $`D`$ obtains a maximum value) is close to the $`T_{\text{MD}}`$ locus . At each density studied, we fit $`D`$ to a power law $`D(T/T_c1)^\gamma `$. The shape of the locus of $`T_c`$ values compares well with that observed experimentally , and changes slope at the same pressure \[Figs. 1(a) and (b)\]. We find the striking feature that $`\gamma `$ decreases under pressure for the SPC/E model, while $`\gamma `$ increases experimentally \[Fig. 3\]. This disagreement underscores the need to improve the dynamic properties of water models, most of which already provide an adequate account of static properties . We next consider interpretation of our results using MCT, which has been used to quantitatively describe the weak supercooling regime – i.e., the temperature range where the characteristic times become three or four orders of magnitude larger than those of the normal liquid . The region where experimental data are available in supercooled water is exactly the region where MCT holds. MCT provides a theoretical framework in which the slowing down of the dynamics arises from caging effects, related to the coupling between density modes, mainly over length scales on the order of the nearest neighbors. In this respect, MCT does not require the presence of a thermodynamic instability to explain the power-law behavior of the characteristic times. MCT predicts power-law behavior of $`D`$, and also that the Fourier transform of the density-density correlation function $`F(q,t)`$, typically referred to as the intermediate scattering function, decays via a two-step process. $`F(q,t)`$ can be measured by neutron scattering experiments and is calculated via $$F(q,t)\frac{1}{S(q)}\underset{j,k=1}{\overset{N}{}}e^{i𝐪[𝐫_k(t)𝐫_j(0)]},$$ (1) where $`S(q)`$ is the structure factor . In the first relaxation step, $`F(q,t)`$ approaches a plateau value $`F_{\text{plateau}}(q)`$; the decay from the plateau has the form $`F_{\text{plateau}}(q)F(q,t)t^b`$, where $`b`$ is known as the von Schweidler exponent. According to MCT, the value $`b`$ is completely determined by the value of $`\gamma `$ , so calculation of these exponents for SPC/E determines if MCT is consistent with our results. The range of validity of the power-law $`t^b`$ is strongly $`q`$-dependent , making unambiguous calculation of $`b`$ difficult. Fortunately, the same exponent $`b`$ controls the long-time behavior of $`F(q,t)`$ at large $`q`$. Indeed, MCT predicts that at long time, $`F(q,t)`$ decays according to a Kohlrausch-Williams-Watts stretched exponential $$F(q,t)=A(q)\mathrm{exp}\left[\left(\frac{t}{\tau (q)}\right)^{\beta (q)}\right],$$ (2) with $`lim_q\mathrm{}\beta (q)=b`$ . We show the $`q`$-dependence of $`\beta `$ for each density studied at $`T=210`$ K \[Fig. 4\]. We also calculate $`\beta `$ for the “self-part” of $`F(q,t)`$, denoted $`F_{\text{self}}(q,t)`$ . In addition, we show the expected value of $`b`$ according to MCT, using the values of $`\gamma `$ extrapolated from Fig. 3. 
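The power-law fits described above are easy to reproduce. The sketch below is a Python illustration only, with synthetic data standing in for the simulated diffusion constants (which are not tabulated here); it scans the singular temperature and fits the exponent by linear regression in log-log space.

```python
import numpy as np

def fit_power_law(T, D, Tc_grid):
    """Fit D = A * (T/T_c - 1)**gamma by scanning T_c; returns (T_c, gamma, A)."""
    best = None
    for Tc in Tc_grid:
        x = T / Tc - 1.0
        if np.any(x <= 0):
            continue
        gamma, lnA = np.polyfit(np.log(x), np.log(D), 1)
        resid = np.sum((np.log(D) - (lnA + gamma * np.log(x))) ** 2)
        if best is None or resid < best[0]:
            best = (resid, Tc, gamma, np.exp(lnA))
    return best[1:]

# synthetic stand-in data with T_c = 215 K and gamma = 2.2 (illustrative values only)
T = np.array([220.0, 230.0, 240.0, 260.0, 280.0, 300.0])
D = 1e-5 * (T / 215.0 - 1.0) ** 2.2
Tc, gamma, A = fit_power_law(T, D, np.arange(200.0, 219.5, 0.5))
print(f"T_c ~ {Tc:.1f} K, gamma ~ {gamma:.2f}")
```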
The large-$`q`$ limit of $`\beta `$ appears to approach the value predicted by MCT . Hence we conclude that the dynamic behavior of the SPC/E potential in the pressure range we study is consistent with slowing down as described by MCT \[Fig. 5\]. We also note that on increasing pressure, the values of the exponents become closer to those for hard-sphere ($`\gamma =2.58`$ and $`b=0.545`$) and Lennard-Jones ($`\gamma =2.37`$ and $`b=0.617`$) systems . This confirms that the hydrogen-bond network is destroyed under pressure and that the water dynamics become closer to that of normal liquids, where core repulsion is the dominant mechanism. A significant result of our analysis is the demonstration that MCT is able to rationalize the dynamic behavior of the SPC/E model of water at all pressures. In doing so, MCT encompasses both the behavior at low pressures, where the mobility is essentially controlled by the presence of strong energetic cages of hydrogen bonds, and at high pressures, where the dynamics are dominated by excluded volume effects. We wish to thank A. Rinaldi, S. Sastry, and A. Scala for their assistance. FWS is supported by an NSF graduate fellowship. The Center for Polymer Studies is supported by NSF grant CH9728854 and British Petroleum.
no-problem/9901/astro-ph9901416.html
ar5iv
text
# Order reduction in semiclassical cosmology. ## I Introduction Controversial fourth order differential equations, which govern the semiclassical cosmology can be reduced to second-order , , and in this way, exempted from quantum-originated instabilities . The reduction is based on the self-consistence condition, i.e. the assumption that both equations and solutions are perturbatively expandable in $`\mathrm{}`$. Under this condition the universe becomes an ordinary mechanical system with a two-dimensional phase-space corresponding to the single degree of mechanical freedom – the scale factor $`a(t,\mathrm{})`$. Self-consistent theory is still renormalizable , Minkowski space-time regains stability in the class of homogeneous and isotropic models, quasi-inflationary phenomena disappear . Similar reduction techniques are being applied to gravity with higher than fourth-order derivatives and also in other branches of physics . However, imposing the self-consistence condition on the cosmological scale $`a(t)`$ encounters some difficulties. In a universe with vanishing spatial curvature still remains a freedom to multiply metrics by an arbitrary constant factor, therefore the scale $`a(t)`$ is not a measurable quantity. Requirement for $`a(t,\mathrm{})`$ to be $`\mathrm{}`$-expandable is physically unclear. In open or closed universes this freedom is reduced by the choice $`k=\pm 1`$, and the scale factors are uniquely determined by cosmological observables: the Hubble parameter $`H`$ and the energy density $`ϵ`$. Yet arbitrarily small changes in any of these observables in the vicinity of critical density $`ϵ\mathrm{\Lambda }=3H^2`$ may result in indefinite changes of $`a(t)`$. The perturbative character of the energy-momentum tensor (and consequently the equations) with respect to an arbitrary chosen parameter, in general, does not imply the same property of the metrics. Finally, expanding $`a(t)`$ in the equations, which contain fixed curvature index $`k`$ , limits quantum corrections to only those, which preserve the same sign of the space curvature. This limitation is particularly severe for a flat universe, where generic quantum corrections would contribute to the space curvature unless the $`k=0`$ condition prevents that. This limitation cannot be derived directly from the Lagrangian and, in fact, it forms an additional constraint imposed on the theory (which is not even true in classical gravity<sup>*</sup><sup>*</sup>*Note that the Lemaıtre universes, which are of positive space curvature, are obtained from flat universes (not closed!) when $`\mathrm{\Lambda }`$ diverges from zero.). Not arguing with the very idea of self-consistence, we draw attention to some circumstances which are important for semiclassical cosmology: 1. Without harm to the reduction procedure, one can release the consistence condition for the scale factor, demanding instead the same property for cosmological observables (the Hubble parameter, etc.) 2. For a radiation filled universe with vanishing cosmological constant $`\mathrm{\Lambda }=0`$ the self-consistence condition is superfluous, since the original equation is of second (!) order. Terms with higher derivatives cited by classical papers contain an additional hidden factor $`\mathrm{}`$ and are eventually eliminated in the first order expansion. 3. We show that quantum corrections form the equation of state of a barotropic fluid, and discuss the stability of the Minkowski space-time on the ground of dynamic systems theory. 
## II Condition of self-consistence for Hubble’s expansion rate. We consider semiclassical gravity theory with the Lagrangian $`R+\alpha _1\mathrm{}R^2+\alpha _2\mathrm{}R^{\mu \nu }R_{\mu \nu }+L_{rad}`$, where $`L_{rad}`$ represents classical radiation or another thermalized field of massless particles. Typically, cosmologies containing the $`R^2`$ and the $`R^{\mu \nu }R_{\mu \nu }`$ terms lead to 4 order equations and violate the stability of empty space. We write quantum terms on the right-hand side of the field equations and treat them as corrections to the energy-momentum tensor. We think of $`\mathrm{}`$ as the theory parameter, which can take arbitrary values, so the limit transition $`\mathrm{}0`$ defines the classical limit of the theory. The field equations we write in the Einsteinian form $`R_{\mu \nu }\frac{1}{2}Rg_{\mu \nu }+\mathrm{\Lambda }g_{\mu \nu }=T_{\mu \nu }`$, but with the modified, effective energy-momentum tensor $$T_{\mu \nu }=T_{\mu \nu }^{(rad)}\mathrm{}\alpha _1^{(1)}H_{\mu \nu }\mathrm{}\alpha _3^{(3)}H_{\mu \nu },$$ (1) where $`{}_{}{}^{(1)}H_{\mu \nu }^{}`$ $`=`$ $`\frac{1}{2}R^2g_{\mu \nu }2RR_{\mu \nu }2\mathrm{}Rg_{\mu \nu }+2_\mu _\nu R`$ (2) $`{}_{}{}^{(3)}H_{\mu \nu }^{}`$ $`=`$ $`R_\mu ^\sigma R_{\sigma \nu }+\frac{2}{3}RR_{\mu \nu }+\frac{1}{2}R_{\sigma \rho }R^{\sigma \rho }g_{\mu \nu }\frac{1}{4}R^2g_{\mu \nu }`$ (3) and the constant $`\alpha _3`$ is some combination of $`\alpha _1`$ and $`\alpha _2`$ (Robertson-Walker symmetry have been partially exploited to derive formula (1). For more precise explanation see .) Derived in this way the (0,0)-equation $`0`$ $`=`$ $`\mathrm{\Lambda }{\displaystyle \frac{\kappa \mu }{a^4}}+{\displaystyle \frac{1}{a^2}}\left[3k+3\left({\displaystyle \frac{da}{dt}}\right)^2\right]`$ (4) $`+`$ $`{\displaystyle \frac{\alpha _1\mathrm{}}{a^4}}\left[18k^2+36k\left({\displaystyle \frac{da}{dt}}\right)^2+54\left({\displaystyle \frac{da}{dt}}\right)^436a\left({\displaystyle \frac{da}{dt}}\right)^2{\displaystyle \frac{d^2a}{dt^2}}+18a^2\left({\displaystyle \frac{d^2a}{dt^2}}\right)^236a^2{\displaystyle \frac{da}{dt}}{\displaystyle \frac{da^3}{dt^3}}\right]`$ (5) $`+`$ $`{\displaystyle \frac{\alpha _3\mathrm{}}{a^4}}\left[3\left({\displaystyle \frac{da}{dt}}\right)^4+6k\left({\displaystyle \frac{da}{dt}}\right)^2+3k^2\right]`$ (6) contains four fundamental constants, two of them classical - the gravitation constant $`\kappa `$ (further on we put $`8\pi \kappa =1`$), the cosmological constant $`\mathrm{\Lambda }`$, and two quantum ones - $`\alpha _1`$ and $`\alpha _3`$. There are also two quantities which define a particular solution: the constant of motion $`\mu =ϵ_0a_0^4`$, and the index of space curvature $`k`$. Therefore, the transition from classical to quantum theory with the self-consistence $`a=a_0+\mathrm{}a_1`$ imposed on (6) preserves the type of space curvature, including the strong $`k=0`$ limitation for the flat universe. One can get rid of the last two constants, and consequently, of the constraints they bring, by introducing the Hubble expansion parameter $`H=\frac{1}{a}\frac{da}{dt}`$. Differentiating (6) twice, we obtain the fourth order equation for $`H`$, which contains only fundamental constantsThe reverse procedure would give the equation with two parameters of continuous values. Consequently, the equation (10) formally has a broader class of solutions than (6). 
However, the freedom to choose $`k`$ as different from $`0`$, $`\pm 1`$ is a trivial one, and resolves itself to rescaling the metrics by a constant factor. $`0`$ $`=`$ $`3{\displaystyle \frac{d^2H}{dt^2}}18H{\displaystyle \frac{dH}{dt}}4H\left(3H^2\mathrm{\Lambda }\right)`$ (7) $`+`$ $`18\mathrm{}\alpha _1{\displaystyle \frac{d^4H}{dt^4}}+162\mathrm{}\alpha _1H{\displaystyle \frac{d^3H}{dt^3}}`$ (8) $`+`$ $`{\displaystyle \frac{d^2H}{dt^2}}\left[6\left(51\mathrm{}\alpha _1+\mathrm{}\alpha _3\right){\displaystyle \frac{dH}{dt}}+6\left(90\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)H^24\mathrm{\Lambda }\left(6\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)\right]`$ (9) $`+`$ $`4H\left[162\mathrm{}\alpha _1\left({\displaystyle \frac{dH}{dt}}\right)^2+{\displaystyle \frac{dH}{dt}}\left(3\left(48\mathrm{}\alpha _1\mathrm{}\alpha _3\right)H^22\mathrm{\Lambda }\left(6\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)\right)3\mathrm{}\alpha _3H^4\right].`$ (10) This equation describes the dynamics of Robertson-Walker models with arbitrary space curvature, and what is equally important, it is expressed in terms of observable quantities. A self-consistence condition imposed on measurable quantities has well defined physical meaning. We adopt Simon’s ansatz to $`H`$, namely we state that $`H(t)=H_{class}(t)+\mathrm{}H_{quant}(t)`$ is perturbative in $`\mathrm{}`$. Now, the procedure of the order reduction can be done in two ways: 1) one can differentiate twice the zeroth-order expansion (equation (10) with $`\alpha _1=\alpha _3=0`$) to find the third and fourth derivatives and eliminate them from the full equation (10) - this is equivalent to what is done in , 2) substitute the expansion $`H(t)=H_{class}(t)+\mathrm{}H_{quant}(t)`$ directly into (10) and abandon terms second order in $`\mathrm{}`$ or higher . In both cases we obtain the second order equation $`0`$ $`=`$ $`3{\displaystyle \frac{d^2H}{dt^2}}18H{\displaystyle \frac{dH}{dt}}4H\left(3H^2\mathrm{\Lambda }\right)`$ (11) $`+`$ $`2{\displaystyle \frac{d^2H}{dt^2}}\left[3\left(51\mathrm{}\alpha _1+\mathrm{}\alpha _3\right){\displaystyle \frac{dH}{dt}}+3\left(90\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)H^22\mathrm{\Lambda }\left(6\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)\right]`$ (12) $`+`$ $`4H\left[459\mathrm{}\alpha _1\left({\displaystyle \frac{dH}{dt}}\right)^2+{\displaystyle \frac{dH}{dt}}\left(3\left(372\mathrm{}\alpha _1\mathrm{}\alpha _3\right)H^22\mathrm{\Lambda }\left(69\mathrm{}\alpha _1+\mathrm{}\alpha _3\right)\right)\right]`$ (13) $`+`$ $`4\left(3\left(180\mathrm{}\alpha _1\mathrm{}\alpha _3\right)H^4204\mathrm{\Lambda }\mathrm{}\alpha _1H^2+8\mathrm{\Lambda }^2\mathrm{}\alpha _1\right),`$ (14) which is nonlinear both in $`H`$ and its derivatives. So strong nonlinearity allows one to find exact solutions only in some particular situations. The is not the case in equation (14). However, this equation becomes much more transparent after one rewrites the quantum corrections as contributions to energy density and pressure. Qualitative analysis is then enabled. Let $`ϵ`$ and $`P`$ denote respectively effective energy density and effective pressure, i.e. each of these quantities is supplemented by quantum corrections. 
The universe dynamics is determined by the system of the Raychaudhuri (15) and the continuity (16) equations $`{\displaystyle \frac{dH}{dt}}`$ $`=`$ $`H^2{\displaystyle \frac{1}{6}}\left(3P+ϵ2\mathrm{\Lambda }\right)`$ (15) $`{\displaystyle \frac{dϵ}{dt}}`$ $`=`$ $`3H\left(P+ϵ\right)`$ (16) We differentiate (15), substitute into (14) and apply the continuity equation (16) to get the relation between pressure, energy and the cosmological constant in differential form $$\frac{dP}{dϵ}=\frac{P+ϵ/9}{P+ϵ}\frac{2}{9}\alpha _3\mathrm{}\frac{\left(3P+ϵ\right)^2}{P+ϵ}\frac{\alpha _3\mathrm{}}{27}\frac{8\mathrm{\Lambda }^2}{P+ϵ}$$ (17) As a matter of fact, one can solve equation (17) analytically, however the solution takes unclear implicit form. This is much simpler to follow the other way. The solution of (17) must be a function of the energy density and cosmological constant solely, hence $`P(ϵ,\mathrm{\Lambda })`$ is independent of the expansion rate $`H`$. Therefore the limit transition $`H^23(ϵ+\mathrm{\Lambda })`$ does not affect its values, and the general solution is identical with the integral found for the flat universe. In the last case the equation $$\frac{1}{18}\left[3\frac{dH}{dt}+2\left(3H^2\mathrm{\Lambda }\right)\right]+\mathrm{}\alpha _1\frac{d^3H}{dt^3}+7\mathrm{}\alpha _1H\frac{d^2H}{dt^2}+\frac{\mathrm{}}{3}\left[12\alpha _1\left[\frac{dH}{dt}\right]^2+\left(36\alpha _1\alpha _3\right)H^2\frac{dH}{dt}\alpha _3H^4\right]=0$$ (18) is an analogue to equation (10). Its order reduces by two, and finally the equation takes a particularly simple form $$\frac{dH}{dt}=\frac{2}{3}ϵ+\frac{2\alpha _3\mathrm{}}{9}\left(ϵ^2\mathrm{\Lambda }^2\right)$$ (19) Now, comparing (19) with the Raychaudhuri equation (15) we obtain the equation of state of cosmological substratum in the form of the algebraic relation $$P=\frac{1}{3}ϵ\frac{4\alpha _3\mathrm{}}{9}\left(ϵ^2\mathrm{\Lambda }^2\right)$$ (20) Function $`P(ϵ,\mathrm{\Lambda })`$, defined by (20), fulfills the differential equation(17) with an accuracy to terms $`o(\mathrm{})`$. By simple calculation , one can confirm that the exact solutions found by Parker and Simon also obey (20). As we have already mentioned, the equation of state (20) is barotropic, i.e. effective pressure is solely the function of the effective energy density (including the energy of vacuum $`\mathrm{\Lambda }`$). While reducing the equations order we eliminate contributions to the energy-momentum tensor coming from the expansion rate ; therefore the universe evolution becomes a reversible process (equations (15)-(16) are invariant under the time reflection $`tt`$). Quantum corrections contained in (20), and consequently the dynamical system (15-16) are free of the $`\alpha _1`$ constant. The only term multiplied by $`\alpha _1`$ which survives the reduction procedure , has been assimilated here by the effective energy densityIn this approach the quantum corrections modify effective energy density and pressure, not the fundamental constants like in . ## III The $`\mathrm{\Lambda }=0`$ case. Its worth noticing that in some physically interesting situations the reduction procedure eliminating higher order derivatives is redundant. In the radiation filled universe with null cosmological constant the correction $`\mathrm{}\alpha _1H_{\mu \nu }^{(1)}`$, which formally appears as linear in $`\mathrm{}`$, actually is quadratic, and consequently should be abandoned as the $`o(\mathrm{})`$ term. 
To show this let us express the traceless tensor $`{}_{}{}^{(1)}H_{\nu }^{\mu }`$ in terms of the Ricci scalar and the effective energy density $`{}_{}{}^{(1)}H={\displaystyle \frac{R}{2}}(4ϵR)\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 1/3& 0& 0\\ 0& 0& 1/3& 0\\ 0& 0& 0& 1/3\end{array}\right]`$ The field equations with the energy-momentum tensor (1) show that the scalar $`R`$ involves the trace of the tensor $`{}_{}{}^{(3)}H_{\nu }^{\mu }`$, namely $`R=\alpha _3\mathrm{}^{(3)}H_\mu ^\mu `$, so it is a quantity linearly dependent on $`\mathrm{}`$. Writing $`{}_{}{}^{(3)}H_{\mu }^{\mu }`$ in terms of the effective energy density $`ϵ`$ with the accuracy to terms $`o(\mathrm{})`$ we get $`R=\frac{4}{3}\alpha _3\mathrm{}ϵ^2`$. Tensors $`{}_{}{}^{(1)}H_{\nu }^{\mu }`$ and $`{}_{}{}^{(3)}H_{\nu }^{\mu }`$ can be rewritten as $`{}_{}{}^{(1)}H_{\nu }^{\mu }={\displaystyle \frac{8}{3}}\alpha _3\mathrm{}ϵ^3\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 1/3& 0& 0\\ 0& 0& 1/3& 0\\ 0& 0& 0& 1/3\end{array}\right]`$ $`{}_{}{}^{(3)}H_{\nu }^{\mu }={\displaystyle \frac{1}{3}}ϵ^2\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 5/3& 0& 0\\ 0& 0& 5/3& 0\\ 0& 0& 0& 5/3\end{array}\right]{\displaystyle \frac{8}{27}}\alpha _3\mathrm{}ϵ^3\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\end{array}\right]`$ Now it is clear that only the second of the expressions $`\alpha _1\mathrm{}^{(1)}H_\nu ^\mu `$ and $`\alpha _3\mathrm{}^{(3)}H_\nu ^\mu `$ is essentially linear in $`\mathrm{}`$ and forms the first-order quantum contribution to the energy-momentum tensor. The first one $`\alpha _1\mathrm{}^{(1)}H_\nu ^\mu `$, which carries all higher derivatives is actually square in $`\mathrm{}`$. This is closely related to the absence of particle creation in the radiation-filled Robertson-Walker universe (see and papers cited there.) The theory with the energy-momentum tensor $`T_{\mu \nu }=T_{\mu \nu }^{(rad)}\mathrm{}\alpha _3H_{\mu \nu }^{(3)}`$ leads to the effective equation of state $`P=\frac{1}{3}ϵ\frac{4\alpha _3\mathrm{}}{9}ϵ^2`$, which is perfectly consistent with (20). ## IV Stability of the empty space - dynamical systems approach. The equation of state of the form $`P=P(ϵ,\mathrm{\Lambda })`$ (or more generally $`P=P(ϵ,\mathrm{\Lambda },H)`$ see ) uniquely determines cosmological evolution. The system (15-16), which defines the universe dynamics in the $`\{H,ϵ\}`$-phase space is autonomous. Choosing a point in the $`\{H,ϵ\}`$-phase space, one determines uniquely the metrics in the initial moment as well as the metrics’ evolution in time.<sup>§</sup><sup>§</sup>§ We abandon here a trivial freedom to multiply the flat universe metrics by a factor constant in time. The stability of the Minkowski space-time is defined by the stability of the $`(H,ϵ)=(0,0)`$ point in the $`\{H,ϵ\}`$-phase space under the condition $`\mathrm{\Lambda }=0`$. For the equation of state (20) discussed in the preceding section the autonomous system (15-16) reads: $`{\displaystyle \frac{dH}{dt}}`$ $`=`$ $`H^2{\displaystyle \frac{1}{3}}\left[ϵ{\displaystyle \frac{2}{3}}\alpha _3\mathrm{}ϵ^2\right]`$ (21) $`{\displaystyle \frac{dϵ}{dt}}`$ $`=`$ $`4H\left[ϵ{\displaystyle \frac{1}{3}}\alpha _3\mathrm{}ϵ^2\right]`$ (22) and its trajectories form levels of the integral $$H^2=\frac{ϵ}{3}K\sqrt{\frac{ϵ}{Gϵ_0a_0^4}}\frac{\alpha _3\mathrm{}}{6}K\sqrt{\frac{ϵ^3}{Gϵ_0a_0^4}}$$ (23) The phase portrait of the system (21-22) is shown on Fig. 1. For completeness and also for readers convenience, we attach Fig. 2. 
showing classical Friedmanian dynamics in the same representation. The phase structure of classical radiation-filled universes and the phase structure defined by (21-22) are topologically equivalent in the low energy limit. This is so because one cannot enrich the structure of the $`\{H,ϵ\}`$-phase plane without violating the standard energy conditions. On the other hand, according to (20) these conditions are well fulfilled for low and positive energy densities. The equation of state (20) formally admits violation of the energy conditions but these states appears already in the Planckian regime and hence, far beyond the region where semiclassical approximation is valid. (The dotted region in the upper part of Fig. 1, which contains three ’additional’ critical points must be recognized as nonphysical). An essential property of the system (21-22) is the absence of solutions that change the energy density from positive to negative, or the reverse. (Such behaviour was possible in the original semiclassical theory and disqualified the empty space as a ground state.) Indeed, on the strength of (20), the initial condition $`ϵ=0`$ results in $`ϵ+P=0`$, and consequently the right-hand side of equation (22) vanishes. Both conditions $`ϵ=0`$ and $`dϵ/dt=0`$ ensure that the state of the zero energy density is ’persistent’. This is consistent with the results based on the functional integral formalism , where all higher derivative terms responsible for instability are eliminated by regularisation of the energy-momentum tensor. The stability of Minkowski space-time is the same as in the classical theory. In both cases, the classical or the quantum, the $`(H,ϵ)=(0,0)`$ point is a three-fold point with elliptical sector and its type does not depend on the value of $`\mathrm{}`$. This means that the phase space is structuraly stable against quantum corrections in the low energy density limit. This nontrivial property does not follow from the solutions analicity in $`\mathrm{}`$, but from the form of the energy density tensor (1) In general, a three-fold point may bifurcate into simple critical points under smooth changes of the equation coefficients. This is what occurs when cosmological constant appears. Solutions are analytical in $`\mathrm{\Lambda }`$, though the critical point corresponding to empty space bifurcates into three simple points. Two of them represent de Sitter space time, the third one – the Einstein static universe . However, no bifurcation, results from quantum corrections.. ## V Summary and conclusions In the reduced Simon-Parker theory the energy-momentum tensor is renormalized to take the hydrodynamic form with a simple, barotropic equation of state. The self-consistence conditions for semiclassical cosmology can be imposed on observable quantities and weakened. By demanding the Hubble expansion rate to be perturbative in $`\mathrm{}`$ we allow the space curvature to alter from 0 while quantum corrections to the flat universes occur. In the particular case of the radiation-filled universe and vanishing cosmological constant, the dynamics of the Robertson-Walker universe in the (original) semiclassical theory is described by a second order equation, therefore it does not need either the reduction or additional conditions of self-consistence. The reason lies in the absence of particle creation in the radiation filled universe, which manifests itself as an additional factor $`\mathrm{}`$ ’hidden ’ in the tensor $`{}_{}{}^{(1)}H_{\mu \nu }^{}`$. 
This eventually eliminates all the higher derivative terms. Minkowski space-time has the same stability character as in Einsteinian gravity, which is consistent with results based on the functional integral formalism . The stability of Minkowski space-time is independent of the numerical value of the Planck constant. In the language of dynamical systems theory, this property is called the structural stability of the $`\{H,ϵ\}`$-phase space against changes of $`\mathrm{}`$. It is worth noticing that the Liapunov stability of the environment with equation of state (20) with respect to position-dependent perturbations is also the same as for the classical radiation-filled universe, in contrast to the original semiclassical theory, where quantum corrections let inhomogeneities grow. This suggests an insignificant role for semiclassical corrections in the processes of structure formation in the early universe. ## Acknowledgements We would like to acknowledge Prof. Marek Demiański and Prof. Lech Sokołowski for helpful discussions. This work was partially supported by Polish research project KBN Nr 2 P03D 02210.
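The reduced system (21)-(22) is simple enough to integrate directly. The sketch below is a minimal Python illustration, not part of the paper: the minus signs, which are lost in the extracted equations, are restored from the standard Raychaudhuri and continuity forms, the combination of the quantum coupling and the Planck constant is treated as a small dimensionless parameter, and the units are arbitrary. It shows the two features discussed above: zero energy density is preserved exactly, and an expanding radiation-like solution relaxes toward the Minkowski point.

```python
# RK4 integration of the reduced system (21)-(22); units are arbitrary and
# A3H stands for the combination alpha_3 * hbar (assumed small).
A3H = 1e-3

def rhs(H, eps):
    dH = -H ** 2 - (eps - (2.0 / 3.0) * A3H * eps ** 2) / 3.0   # Eq. (21)
    deps = -4.0 * H * (eps - A3H * eps ** 2 / 3.0)              # Eq. (22)
    return dH, deps

def evolve(H, eps, dt=1e-3, steps=20000):
    for _ in range(steps):
        k1 = rhs(H, eps)
        k2 = rhs(H + 0.5 * dt * k1[0], eps + 0.5 * dt * k1[1])
        k3 = rhs(H + 0.5 * dt * k2[0], eps + 0.5 * dt * k2[1])
        k4 = rhs(H + dt * k3[0], eps + dt * k3[1])
        H += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        eps += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return H, eps

print(evolve(1.0, 3.0))   # expanding, radiation-like start: decays toward (0, 0)
print(evolve(0.5, 0.0))   # empty space: eps stays exactly zero
```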
no-problem/9901/astro-ph9901054.html
ar5iv
text
# A HIGH RESOLUTION STUDY OF THE SLOWLY CONTRACTING, STARLESS CORE L1544 ## 1 Introduction The very first stage in models of the star formation process is the gravitational collapse of a starless dense molecular core (e.g., Shu, Adams, & Lizano 1987) but this stage is relatively short-lived, and therefore rare. One of the few known examples is the dense core, L1544, in Taurus which shows no evidence of an embedded young stellar object but appears to have extended inward motions (Tafalla et al. 1998; hereafter T98). This core is therefore an excellent testing ground for theories of low mass, isolated, star formation. T98 presented single-dish molecular line observations and analyzed the large scale structure and dynamics of L1544. They find that lines of CO and its isotopes are single peaked and trace an outer envelope, but higher-dipole-moment lines of H<sub>2</sub>CO, CS, and N<sub>2</sub>H<sup>+</sup> are strongly self-absorbed and trace a central core. The self-absorption is predominantly red-shifted as expected for inward motions (e.g., Myers et al. 1996; hereafter M96) although there is also a small region of blue-shifted reversals toward the south. The inferred inward speeds range from $`0.10`$ $`\mathrm{km}\mathrm{s}^1`$ in the northeast to $`0.03`$ $`\mathrm{km}\mathrm{s}^1`$ (relative outward motions) in the region of reversals in the south. T98 note that the dynamics differ from several theoretical predictions for core collapse: the infall speeds are too large for ambipolar diffusion (Ciolek & Mouschovias 1995); the spatial extent of infall is too large for inside-out collapse (Shu 1977) at such an early stage; and the density is too highly centrally condensed to be due to collapse from a uniform density configuration (Larson 1969). In order to measure the structure and dynamics in greater detail, we made high resolution observations of N<sub>2</sub>H<sup>+</sup>(1–0). This line was chosen because the T98 observations show a compact, centrally condensed core with double-peaked spectra. These observations are the first kinematic measurements of a starless core on the size scale $`2000`$ AU ($`0.01`$ pc). Our main finding is that the maximum infall speed is (surprisingly) essentially the same, $`0.1`$ km s<sup>-1</sup>, as on much larger scales. ## 2 Observations L1544 was observed with the ten-antenna Berkeley-Illinois-Maryland array <sup>1</sup><sup>1</sup>1Operated by the University of California at Berkeley, the University of Illinois, and the University of Maryland, with support from the National Science Foundation (BIMA) in its compact C configuration for two 8 hour tracks on November 4 and 5, 1997. Projected baselines ranged from 1.9 to 25.1 k$`\lambda `$. The phase center was $`\alpha (2000)=5^\mathrm{h}04^\mathrm{m}16.^\mathrm{s}62,\delta (2000)=25^{}10^{}47\stackrel{}{\mathrm{.}}8`$. Amplitude and phase were calibrated using 3 minute observations of 0530+135 interleaved with each 25 minute integration on source. The passband shape was determined from observing the bright quasar 3C 454.3. No suitable planets were available for flux calibration and we therefore assumed a flux density of 2.4 Jy for 0530+135 based on flux density measurements taken within a month of these observations. From the scatter in these measurements, the flux density scale is accurate to 30%. 
The digital correlator was configured with 512 channel windows at a bandwidth of 6.25 MHz ($`0.04`$ $`\mathrm{km}\mathrm{s}^1`$ velocity resolution per channel) centered on the 7 hyperfine components of N<sub>2</sub>H<sup>+</sup>(1–0) (Caselli, Myers, & Thaddeus 1995) in the lower sideband and C<sup>34</sup>S(2–1) in the upper sideband. Eight 32 channel windows at a bandwidth of 100 MHz. were used to measure continuum radiation. Data were reduced with the MIRIAD package using standard procedures. The data sets from each day were calibrated separately and then transformed together using natural weighting. The resulting “dirty” map was cleaned and restored with a gaussian beam of FWHM size $`14\stackrel{}{\mathrm{.}}8\times 6\stackrel{}{\mathrm{.}}6`$ at position angle $`1^{}`$. The continuum map shows a 5 mJy peak, corresponding to $`5\sigma `$, at offset position, $`\mathrm{\Delta }\alpha =4^{\prime \prime },\mathrm{\Delta }\delta =12^{\prime \prime }`$, but this has not been verified by the IRAM Plateau de Bure interferometer (J.-F. Panis, personal communication) and may be due to contamination of low level line emission from species such as CH<sub>3</sub>OH in the wide windows. We also searched for continuum emission at 3.6 cm from a very young protostellar wind using the VLA in B-configuration on July 5, 1998. No emission was detected to a $`3\sigma `$ level of 39 $`\mu `$Jy which is considerably less than the outflows detected by Anglada (1995) and, extrapolating his data, implies an upper limit to the bolometric luminosity of any protostar in L1544 of $`0.03L_{}`$. The N<sub>2</sub>H<sup>+</sup> lines were detected with high signal-to-noise but no emission was detected in the other spectral window implying a $`3\sigma `$ upper limit of 0.25 $`\mathrm{K}\mathrm{km}\mathrm{s}^1`$ for the C<sup>34</sup>S emission integrated over velocities 6.8 to 7.6 $`\mathrm{km}\mathrm{s}^1`$. A map of the velocity integrated N<sub>2</sub>H<sup>+</sup> emission for the isolated hyperfine component F<sub>1</sub>F$`=0112`$ is displayed in Figure 1. There is a single, compact, elongated structure with FWHM size 7000 AU$`\times `$3000 AU. Double-peaked spectra are found toward the map center as in T98, but before analyzing these profiles it is necessary to account for the effect of the missing zero-spacing flux (see Gueth et al. 1997). Therefore, we combined the single-dish IRAM 30 m data of T98 with the BIMA map: the single-dish data were shifted by $`13^{\prime \prime }`$ to the south and then scaled to match the interferometer data in the region of visibility overlap (6 m to 30 m) using a best fit gain $`4.8`$ Jy K<sup>-1</sup>. The resolution of the resulting data set is the same as for the interferometer data alone but a circular restoring beam of FWHM size $`10^{\prime \prime }`$ (having approximately the same beam area) was used to create a regular grid of spectra for analysis. ## 3 Analysis The combined IRAM-BIMA spectra are also double-peaked toward the map center and we attribute this to self-absorption for the reasons outlined in T98. The red peak is stronger than the blue as expected for a contracting core but the contrast between the two is not as great as in the interferometer map alone which implies that the absorbing region is more spatially extended than the emitting region. To examine the structure and dynamics of the core in further detail, we fit the data with a simple model of a collapsing core. 
The self-absorption implies a decreasing excitation temperature gradient away from the core center toward the observer. Many previous models of core collapse (e.g., Zhou 1995) have assumed a spherical gas distribution, but Figure 1 shows that the core is highly elongated with an axial ratio $`\sim 0.4`$, so such models are inappropriate in this case. Here, we allow for variations in parameters parallel and perpendicular to a major axis in the plane of the sky, and consider two isothermal layers, each of constant thickness, but at different densities so as to provide the excitation temperature inversion. We were motivated to consider this heuristic model because it greatly simplifies the treatment of the radiative transfer yet fits individual spectra quite well. It allows us to infer the variation of density in the plane of the sky and the magnitude and spatial distribution of the relative motion between the two layers (the infall speed). The line brightness temperature is given by M96 equation (2) and depends on the excitation temperature and optical depth of each layer. The excitation temperatures are derived using a two level approximation as in M96 equation (8a,b) and the peak optical depth of the 101-012 transition is $$\tau _{\mathrm{pk},i}=\left(\frac{Xn_i\mathrm{\Delta }z_i}{4.9\times 10^{12}\mathrm{cm}^{-2}}\right)\left(\frac{1-e^{-4.5/T_{\mathrm{ex},i}}}{\sigma T_{\mathrm{ex},i}}\right),$$ $`(1)`$ for $`\tau _i=\tau _{\mathrm{pk},i}\mathrm{exp}[-(v-v_i)^2/2\sigma ^2]`$, where $`v_i`$ is the systemic velocity and $`\sigma `$ is the velocity dispersion. Here, the index $`i=f`$ or $`r`$ for the front or rear layer respectively, and $`X`$ is the abundance of N<sub>2</sub>H<sup>+</sup> relative to H<sub>2</sub>, $`n_i`$ is the density of H<sub>2</sub>, $`\mathrm{\Delta }z_i`$ the layer thickness, and $`T_{\mathrm{ex},i}`$ the excitation temperature. This formalism allows each spectrum to be fit very well but requires at least 6 independent parameters for each spectrum (M96). Instead, we present a global fit to the data that describes the core structure and dynamics with the addition of fewer parameters. The route to a global fit was guided by the individual fits; these showed that the velocity dispersion did not vary greatly but the excitation temperature and optical depth both increased sharply toward the map peak. The kinetic temperature is constrained to the range $`T_\mathrm{k}=11`$–$`13.5`$ K by the CO observations of T98 and is insufficient to account for the range of excitation temperature, implying that the density must increase toward the map center. We therefore fixed $`T_\mathrm{K}`$ and $`\sigma `$ and chose the functional form, $$f(x,y)=1+\left(\frac{x-x_{\mathrm{pk}}}{\mathrm{\Delta }x}\right)^2+\left(\frac{y-y_{\mathrm{pk}}}{\mathrm{\Delta }y}\right)^2,$$ $`(2)`$ to describe the spatial variation of the density, $`n_i(x,y)=n_{\mathrm{pk},i}/f(x,y)`$, where $`x`$–$`y`$ defines an orthogonal coordinate system rotated with respect to the $`\alpha `$–$`\delta `$ observational frame by an angle $`\theta `$ (measured anti-clockwise from north), chosen so that $`\mathrm{\Delta }x\geq \mathrm{\Delta }y`$ (i.e., $`x`$ is the major axis). The kinetic temperature and velocity dispersion are then held constant, both spatially within a layer and from layer to layer.
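For readers who wish to experiment with this parameterization, the short sketch below evaluates the peak optical depth of Eq. (1) and the Gaussian profile for each layer. The layer densities, thicknesses, excitation temperatures, abundance, and dispersion used here are illustrative assumptions chosen only to give plausible optical depths; they are not the best-fit values listed in Table 1.

```python
import numpy as np

# Illustrative (not fitted) two-layer parameters; Table 1 holds the real best fit.
X     = 1.0e-9                      # N2H+ abundance relative to H2 (assumed)
sigma = 0.10                        # velocity dispersion [km/s] (assumed)
pc    = 3.086e18                    # cm
layers = {"front": dict(n=1.0e4, dz=0.20 * pc, Tex=4.0),
          "rear":  dict(n=1.0e5, dz=0.02 * pc, Tex=7.0)}

def tau_peak(n, dz, Tex):
    """Peak optical depth of the 101-012 hyperfine component, Eq. (1)."""
    return (X * n * dz / 4.9e12) * (1.0 - np.exp(-4.5 / Tex)) / (sigma * Tex)

def tau_profile(v, v0, tau_pk):
    """Gaussian optical-depth profile about the systemic velocity v0 [km/s]."""
    return tau_pk * np.exp(-(v - v0) ** 2 / (2.0 * sigma ** 2))

for name, p in layers.items():
    tp = tau_peak(p["n"], p["dz"], p["Tex"])
    print(f"{name}: tau_pk = {tp:.2f}, tau at v0 + sigma = {tau_profile(sigma, 0.0, tp):.2f}")
```

With these assumed numbers the front layer, despite its lower excitation temperature, is moderately optically thick because of its large line-of-sight thickness, which is the combination the fits require for it to absorb strongly.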
The individual fits (and also channel maps) show that there is a significant velocity gradient that is approximately uniform and along the major axis and therefore we set $$v_r(x)=v_0+(dv/dx)(xx_{\mathrm{pk}}),v_f(x,y)=v_r(x)+v_{\mathrm{in}}(x,y),$$ $`(3)`$ where $`v_0`$ is a constant velocity offset, $`dv/dx`$ is a constant velocity gradient ($`dv/dy0`$), and $`v_{\mathrm{in}}=v_fv_r`$ is the velocity difference between the front and rear layers (the infall speed). In addition to $`v_{\mathrm{in}}`$, there are, formally, a total of 14 parameters in the final model: $`x_{\mathrm{pk}},y_{\mathrm{pk}},\theta `$ to define the coordinate system; $`n_{\mathrm{pk},\mathrm{f}},\mathrm{\Delta }z_f,n_{\mathrm{pk},\mathrm{r}},\mathrm{\Delta }z_r,\mathrm{\Delta }x,\mathrm{\Delta }y`$ to describe the densities in each layer; the velocity parameters, $`v_0`$ and $`dv/dx`$; and the constants, $`T_\mathrm{k},\sigma `$, and $`X`$. This is far fewer than the $`>`$200 parameters needed to fit the spectra in Figure 2 using 6 parameters per spectrum. Furthermore, the parameter space is quite tightly constrained by the maps showing the approximate center, angle, and velocity gradient of the core, and the estimates of kinetic temperature, sizes, and linewidths that were determined by T98. Also, not all the parameters are independent; all values of $`\mathrm{\Delta }z_i`$ and $`X`$ with the same product result in the same model output. The principal parameters that were varied were the six density parameters and the infall speed. We proceeded by assuming a constant value for $`v_{\mathrm{in}}(x,y)`$, and searched for the best global fit by least squares minimization of the difference between the model and observed spectra for a grid of $`5\times 7`$ spectra at $`10^{\prime \prime }`$ spacing about the core center (Figure 2). This showed that any viable fit must satisfy the following conditions: (1) $`n_{\mathrm{pk},\mathrm{r}}n_{\mathrm{cr}}n_{\mathrm{pk},\mathrm{f}}`$ and $`\mathrm{\Delta }z_r\mathrm{\Delta }z_f`$ where $`n_{\mathrm{cr}}`$ is the critical density; the excitation temperature must be high in the rear which emits strongly, and low in the front which emits weakly but has a large line-of-sight thickness and absorbs strongly. (2) $`\mathrm{\Delta }x,\mathrm{\Delta }y`$ $`<`$ (core size); the excitation temperature and optical depth must decrease rapidly away from the map center. (3) $`dv/dx0`$; a constant systemic velocity is a very poor fit and a velocity gradient is required. (4) $`v_{\mathrm{in}}>0`$; the front layer must be moving toward the rear layer to account for the asymmetry of the self-absorption. In fact, many of the spectra do not show two distinct peaks but rather a bright red peak and a blue shoulder indicating that the velocity of the absorption must be shifted relative to the emission by an amount approximately equal to the velocity dispersion, $`v_{\mathrm{in}}\sigma `$. Model spectra are plotted with the observations in Figure 2 for the best fit parameter values listed in Table 1. We chose a high value for the abundance, $`X=1\times 10^9`$, so that the front layer (with peak density $`n_{\mathrm{pk},\mathrm{f}}=10^4`$ $`\mathrm{cm}^3`$) has a symmetrical size, $`2\mathrm{\Delta }z_f=0.4`$ pc, that approximately matches the region of C<sup>18</sup>O emission in T98. The infall speed was determined for each spectrum individually by least squares minimization after the other parameters were fixed. 
This hybrid global-individual approach was chosen to emphasize the spatial variation of $`v_{\mathrm{in}}`$. However, its determination requires high signal-to-noise ratios and optical depth, and it could only be reliably measured around the map center where there are bright lines and strong self-absorption. We find that the best fit has $`v_{\mathrm{in}}`$ increasing toward the core center and along the major axis, but because the optical depth falls off rapidly, the significance of this fit is only marginally better than a fit with constant $`v_{\mathrm{in}}=0.075`$ $`\mathrm{km}\mathrm{s}^1`$. The difference between the model spectra and the data has a mean of zero and standard deviation $`0.12`$ K per 0.04 $`\mathrm{km}\mathrm{s}^1`$ channel, about 50% greater than the rms noise in the spectra. The spatial variation of column density and infall speed are displayed along with the spectra in Figure 2. ## 4 Discussion The observations reported here are the first to define the inward motions in a starless core on the size scale of star formation, i.e. on a size scale which encloses about a stellar mass. We therefore comment here on the model parameters, the inferred core motions and their physical basis, and the evolutionary status of L1544. Our model properties agree with four independent constraints. We find agreement between the size scale, $`\mathrm{\Delta }z_f`$, as discussed above, and the kinetic temperature, $`T_\mathrm{k}=12`$ K, from the T98 CO observations. The core mass can be derived by integrating the density profile out to a distance equal to the spatial thickness of each layer: $`M_f=0.2M_{},M_r=0.4M_{}`$ implying a total symmetric mass $`2(M_f+M_r)=1.2M_{}`$, which is comparable to estimates of the dense gas mass discussed in T98. The density profile and peak column density are consistent with dust continuum measurements of the structure of a similar prestellar core, L1689B, in the $`\rho `$ Ophiuchus cloud (André, Ward-Thompson, & Motte 1996). We have also fit the data with the exponent in equation (2) varying from 1.8 to 2.5 (with corresponding changes in $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }y`$) without significantly increasing the least squares difference. The core dynamics are of greatest interest. The dispersion, $`\sigma =(\sigma _\mathrm{T}^2+\sigma _{\mathrm{NT}}^2)^{1/2}`$, is the quadrature sum of a thermal component, $`\sigma _\mathrm{T}=(kT_\mathrm{k}/\mu )^{1/2}`$, where $`k`$ is the Boltzmann constant and $`\mu `$ is the molecular mass, and a non-thermal component, $`\sigma _{\mathrm{NT}}`$, which has a best fit value of $`0.085`$ $`\mathrm{km}\mathrm{s}^1`$, similar to that measured for C<sup>34</sup>S by T98. For $`T_\mathrm{k}=12`$ K, $`\sigma _\mathrm{T}(\mathrm{H}_2)=0.22`$ $`\mathrm{km}\mathrm{s}^1`$, much greater than $`\sigma _{\mathrm{NT}}`$ and comparable to gravitational speeds $`(GM_r/\mathrm{\Delta }z_r)^{1/2}=0.27`$ $`\mathrm{km}\mathrm{s}^1`$. Therefore, the turbulent pressure support is negligible. However, the non-spherical core shape implies that it cannot be supported entirely by thermal pressure and there must also be an anisotropic force resisting self-gravity. The velocity gradient along the major axis, $`dv/dx=3.8`$ $`\mathrm{km}\mathrm{s}^1`$ pc<sup>-1</sup>, implies a change in velocity $`\mathrm{\Delta }v_{\mathrm{rot}}=0.13/\mathrm{sin}i`$ $`\mathrm{km}\mathrm{s}^1`$ over the 7000 AU major axis FWHM core diameter, where $`i`$ is the inclination of the gradient to our line of sight. 
If the non-sphericity is due to rotation, however, the aspect ratio constrains $`\mathrm{cos}i<0.4`$, so $`\mathrm{\Delta }v_{\mathrm{rot}}<0.14\mathrm{km}\mathrm{s}^1\sigma _\mathrm{T}(\mathrm{H}_2)`$ and we conclude that rotation is dynamically insignificant. This leaves magnetic fields as the most likely explanation for the extended core shape. The observed aspect ratio implies approximate equipartition between thermal and static magnetic pressure support (Li & Shu 1996), corresponding to a field strength $`B30\mu `$G at number densities $`10^5`$ $`\mathrm{cm}^3`$. The thermal, gravitational, and rotational speeds are all greater than the maximum infall speed. We have determined a range, $`v_{\mathrm{in}}0.02`$ to $`0.09`$ $`\mathrm{km}\mathrm{s}^1`$, that is similar to the values derived from the CS/C<sup>34</sup>S analysis by T98 even though the maps are quite different in spatial scale. The interferometer map presented here has finer resolution, but covers a much smaller spatial extent than the T98 CS map. Comparing the two maps, the high resolution observations indicate a sharp increase in density, consistent with free-fall collapse, but no corresponding increase in infall speed. A possible explanation is that starless cores, such as L1544, are predicted to evolve through a series of quasi-static equilibria (see discussion in Li & Shu 1996). For example, neutrals in an outer envelope diffuse through magnetic field lines onto a central core which is supported by thermal pressure. The piling up of material on the central core causes its density to increase and its (Jeans) size to decrease. The mass-to-flux ratio increases until the collapse becomes supercritical and magnetic field lines are dragged into the core which remains (quasi-)static. The infall velocities remain sub-sonic except for a small region at the envelope-static core boundary at very late times (Li 1998; Basu & Mouschovias 1994). As with T98, however, the quantitative comparison shows discrepancies. For example, both Ciolek & Mouschovias (1995) and Basu & Mouschovias (1994) predict smaller infall velocities than measured here, i.e., $`0.02`$ $`\mathrm{km}\mathrm{s}^1`$, at size scales of 0.02 pc and densities of $`4\times 10^5`$ $`\mathrm{cm}^3`$. The large scale ($`0.1`$ pc) inward motions observed by T98, possibly driven by turbulent pressure gradients (Myers & Lazarian 1998) and not included in the above models, probably increase the infall speed at all size and density scales. N<sub>2</sub>H<sup>+</sup> is an ion and therefore moves only along, or with, the magnetic field lines. Since the ions and neutrals are moving together at a speed much greater than their relative drift speed, either we are looking down along the field lines, or the magnetic field lines are being dragged in along with the gas. The geometry suggests that the field lines lie more closely perpendicular than parallel to our line of sight, and therefore the second possibility, supercritical collapse, seems more reasonable. In this case, L1544 may be very close to forming a star: Li (1998) shows that, for a spherical cloud, both the ion and neutral infall speeds are very small for times $`t15\tau _{\mathrm{ff}}`$, where $`\tau _{\mathrm{ff}}0.4`$ Myr is the free-fall timescale, but then rapidly increase for $`t19\tau _{\mathrm{ff}}`$ and the core forms a star by $`20\tau _{\mathrm{ff}}`$. 
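The characteristic speeds quoted in this section follow directly from the tabulated quantities; a short numerical check, assuming $`T_\mathrm{k}=12`$ K, a mean molecular mass of 2m<sub>H</sub> for $`\sigma _\mathrm{T}(\mathrm{H}_2)`$, $`\mathrm{sin}i=1`$, and the 7000 AU major-axis diameter, is given below.

```python
import numpy as np

k_B, m_H = 1.381e-16, 1.673e-24          # cgs units
AU, pc   = 1.496e13, 3.086e18            # cm

T_k  = 12.0                              # K, from the T98 CO constraint
dvdx = 3.8                               # km/s per pc, fitted velocity gradient
diam = 7000 * AU / pc                    # major-axis FWHM diameter in pc

# Thermal dispersion for molecular hydrogen (mu = 2 m_H assumed).
sigma_T_H2 = np.sqrt(k_B * T_k / (2.0 * m_H)) / 1.0e5     # km/s
# Velocity change across the core implied by the uniform gradient (sin i = 1).
dv_rot = dvdx * diam                                       # km/s

print(f"sigma_T(H2) = {sigma_T_H2:.2f} km/s")   # ~0.22 km/s
print(f"Delta v_rot = {dv_rot:.2f} km/s")       # ~0.13 km/s
```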
There may be many cores that have begun collapsing in Taurus, but the calculations of Li (1998) suggest that only those that are more than 95% of the way to forming a star, as L1544 appears to be, may show detectable inward motions. Observational verification, however, awaits the statistics from surveys for inward motions in starless cores. Discussions with Zhi-Yun Li and Shantanu Basu are gratefully acknowledged. This research was partially supported by NASA Origins grant NAGW-3401.
# Untitled Document Concave Accretion Discs and X-ray Reprocessing Eric G. Blackman, Theoretical Astrophysics, Caltech 130-33, Pasadena CA, 91125, USA (submitted to MNRAS) ABSTRACT Spectra of Seyfert Is are commonly modelled as emission from an X-ray illuminated flat accretion disc orbiting a central black hole. This provides both a reprocessed and direct component of the X-ray emission as required by observations of individual objects and possibly a fraction of the cosmological X-ray background. There is some observational motivation to at least consider the role that an effectively concave disc surface might play: (1) a reprocessed fraction $`\text{ }>1/2`$ in some Seyferts and possibly in the X-ray background, and (2) the commonality of a sharp iron line peak for Seyferts at 6.4KeV despite a dependence of peak location on inclination angle for flat disc models. Here it is shown that a concave disc may not only provide a larger total fraction of reprocessed photons, but can also reprocess a much larger fraction of photons in its outer regions when compared to a flat disc. This reduces the sensitivity of the 6.4KeV peak location to the inner disc inclination angle because the outer regions are less affected by Doppler and gravitational effects. If the X-ray source is isotropic, the reprocessed fraction is directly determined by the concavity. If the X-ray source is anisotropic, the location of iron line peak can still be determined by concavity but the total reflected fraction need not be as large as for the isotropic emitter case. The geometric calculations herein are applicable to general accretion disc systems illuminated from the center. Key Words: accretion, accretion discs; line:profiles; galaxies: active; X-rays: galaxies; Xrays: stars 1. Introduction Accretion discs are the standard paradigm to explain a wide variety of luminous galactic and extra-galactic accreting sources. Steady accretion requires dissipation of gravitational energy which produces the observed luminosity. Emission from X-ray binaries and active galactic nuclei (AGN) is thought to result from accretion onto a central massive black hole (e.g. Pringle 1981; Rees, 1984). The X-rays originate from the inner-most regions of the accretion flow and probe the associated dynamics and geometry. X-ray spectra of Seyfert I AGN have been modeled by a combination of direct and reprocessed emission (see Mushotzsky et al. (1993) for a review). The direct component is from hot $`10^9`$K electrons of optical depth $`1`$. The reprocessing component (e.g. Guilbert & Rees 1988; George & Fabian 1991) is composed of: 1) a Compton thick material in a moderate state of ionisation believed to be the thin accretion disc at $`10^5`$K,which produces iron fluorescence features and 2) possibly a Compton thin, highly ionised “warm absorber” which produces absorption features below 3 keV. If the disc extends to the inner stable orbit, a broad gravitationally red-shifted $`6.4`$ KeV iron K $`\alpha `$ fluorescence line can be produced. Material further inside can also contribute to the line (Reynolds & Begelman 1997) making a distinction between Kerr or Schwarzchild holes difficult (though see Young et al. (1998)). Generally, the iron line shape provides a diagnostic for strong gravity (Fabian et al., 1995; Tanaka et al 1996). Similar lines have also been seen in galactic black hole candidates. (Fabian et al. 1989; Done et al. 1992; Ebisawa 1996). The best studied (post ASCA) iron line is that of MCG-6-30-15 (Tanaka et al 1995; Iwasawa et al. 
1996). Its profile varies with the continuum, but has been successfully modeled by reflection off of a flat Keplerian disc inclined $``$ 30 degrees to the line of sight. However, ASCA has observed the $`6.4`$ KeV Iron lines in 22 Seyfert Is (Nandra et al. 1997) and as a population, they have a dispersion of only $`\pm 3`$ degrees around $`29`$ degrees in the predicted inclination angle when modelled with a flat disc. The peak of the line always appears at or near the rest frame $`6.4`$KeV. Because the peak location is sensitive to the disc inclination angle in flat disc models (Laor 1991), the low dispersion is not expected for a distribution of randomly oriented discs—even in the presence of a dusty torus at large radii which narrows the angular range over which Seyfert Is, by class, are selected. The narrow iron line core near 6.4 KeV correlates with the intensity of the continuum flux when averaged over $`few\times 10^4`$ seconds (Iwasawa et al. 1996). Perhaps this indicates a corresponding distance between the reprocessing region and the direct X-ray source. A second issue is the ratio of total reprocessed to direct emission. In some Seyferts like MCG-6-30-15 this may slightly exceed 1 (Lee et al. 1998; Guainazzi et al. 1999), although for a population of 11 objects, the ratio seems to hover around 1 (Matt 1998). Some models of the cosmological X-ray background (c.f. Fabian 1992) also suggest that the reprocessed X-ray component may exceed the direct component by a ratio $`\text{ }>5`$. If true, this might be explained by flat disc geometries by employing an anisotropic direct X-ray source through the inverse Compton process, (Ghisellini et al 1990; Rogers 1991), direct acceleration of electrons toward the disc (Field & Rogers 1993), source motion (Reynolds & Fabian 1997), or general relativistic (GR) effects (Martocchia & Matt 1996). But other possibilities, such as concavity, deserve investigation. It important to note however, that recent models of the X-ray background (Comastri et al. 1995) do not require the reprocessed to direct emission ratio much different from 1. A disc whose outer parts are thicker or concave with increasing height at larger radii may play a role in explaining the ubiquity of the iron line peak at 6.4KeV even if the total reprocessed to direct emission ratio inferred for a given object is of order 1. The concavity allows a larger fraction of the reprocessed emission to be reprocessed in the outer parts of the disc, away from the influence of the doppler and gravitational effects. The total reprocessed to direct fraction depends not only on the concavity but also on how isotropic the X-ray source is. For an isotropic X-ray source, the total reprocessed fraction will be large if concavity plays a strong role in determing where the iron line peak is. For an anistropic souce however, the concavity may play a strong role in determining where the iron line source is even if the total reprocessed fraction is modest. Additional “concavity” beyond that of the simplest Shakura-Sunayev (Shakura 1973) discs can reflect a thickening disc with a flare due to the particular vertical structure and temperature profile (e.g. Keynon & Hartmann 1987 in the context of stellar discs.) Also, azimuthally dependent concavity can result from warping which may be tidally or radiatively driven (Terquem & Bernout 1993,1996; Pringle 1996,1997), or induced by a wind. Alternatively, the discs may incur a transition from a thick torus to a thin disc inward. 
Here I do not consider the dynamics in detail and just parameterize the concavity to highlight some simple effects related to the above observations. Concavity was considered by Matt et al. (1991) primarily for its effect on the iron line equivalent width, but this may not be its most important role. In section 2, I explicitly derive the ratio of reprocessed to direct emission as a function of the curvature. Splitting the reprocessed contribution into components emanating from inside and outside a critical radius $`r_c`$, I then derive the ratio of contributions to the iron line from the outer (taken to be the “narrow component” around 6.4KeV) and inner regions (taken to be the “broad component”). The calculation results and line profile examples are discussed in section 3 and section 4 is the conclusion. 2. Concavity and the Reprocessed Emission 2.1 Basic Considerations Here I take he direct X-ray source is taken to be an isotropic point emitter located above a Schwarzchild hole. (Later I will comment on the possibility of an anisotropic source.) The exact location of the X-ray source(s) is unknown. To marginally avoid considering reflection from material free-falling inside of radii $`r=6`$ (Reynolds & Begelman 1997), I take the source height to be $`H_e=10`$, where the $`r`$ and $`H_e`$ are in gravitational units of $`R_gGM/c^2`$, and $`M`$ is the hole mass. Incident hard X-rays impinge onto the accretion disc and are scattered off an optically thin outer layer (see Matt et al. 1996). The X-ray photons also excite fluorescent photons from iron atoms. Below, the $`6.4`$ KeV iron line flux is taken to be proportional to the incident flux. This assumes that the ionisation parameter is low enough over all $`r\text{ }>6`$ to ensure cold iron line emission (Matt et al. 1996). The calculations then reduce to the geometry of Fig 1. 2.2 Total Reprocessed vs. Direct Flux Ratios for Concave discs First consider the total reprocessed flux. The ratio of the observed flux in the reprocessed component to that in the direct component is the ratio of the number of photons which intercept the disc over the number of which escape directly. For the parameters chosen above, the GR effects are sub-dominant in all regions of the disc and comprise a maximum 30% correction. Note however, that GR leads to more flux impinging on the disc, so ignoring GR at first highlights a different way to achieve a a high reprocessed fraction, and thus an overall lower limit. In the Euclidean regime, the flux ratio of reprocessed to direct emission is given by the ratio of solid angle subtended by the disc to that subtended by the free space to the observer. The azimuthal angle drops out by symmetry. Take the disc’s reprocessing layer to have a height $$h(r)=a(r/6)^b[\mathrm{for}a(r/6)^b<r];r[\mathrm{for}a(r/6)^b>r],$$ $`(1)`$ where the curvature index $`b`$ implies concavity for $`b>1`$, and $`ah(r_{in})/r_{in}`$ at the inner disc radius $`r_{in}=6`$. The two regimes of (1) are imposed to enforce $`hr`$ at all radii. The dashed curves of Fig. 2 show the $`b`$ for which $`h(r_{out})=r`$ for different outer radii $`r_{out}`$. The ratio of reprocessed to direct flux, using the angles shown in Fig. 1, is then $$F_{rep}/F_{dir}=\frac{_{\pi /2Tan^1[(H_eh(r_{out}))/r_{out}]}^{\pi Tan^1[r_{in}/H_e]}Sin\theta 𝑑\theta }{_0^{\pi /2Tan^1[(H_eh(r_{out}))/r_{out}]}Sin\theta 𝑑\theta }.$$ $`(2)`$ For a strictly flat disc, the ratio is given by (2) with $`h(r)=0`$, and then $`F_{rep}/F_{dir}<1`$. 
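As an illustration of the Euclidean estimate in Eq. (2), the sketch below evaluates the reprocessed-to-direct ratio for a flat disc and for two curvature indices. The rim angle is computed as the polar angle of the direction from the source to the outer edge of the reprocessing surface, a convention chosen so that it reproduces the flat-disc value ($`0.87`$) and the capped maximum ($`5.4`$) quoted in Section 3; the parameters $`a=1/50`$, $`H_e=10`$ and $`r_{in}=6`$ follow the text, and the sampled $`b`$ values are merely illustrative.

```python
import numpy as np

def h(r, a=1/50, b=1.5):
    """Reprocessing-layer height of Eq. (1), with the cap h(r) <= r imposed."""
    return np.minimum(a * (r / 6.0) ** b, r)

def f_rep_over_f_dir(b, a=1/50, H_e=10.0, r_in=6.0, r_out=1.0e5):
    """Eq. (2): ratio of the solid angles subtended by the disc and by free space
    for an isotropic point source at height H_e (Euclidean geometry)."""
    # Polar angles, measured from the upward vertical at the source, of the outer
    # rim and of the inner disc edge; the disc covers theta_rim .. theta_inner.
    theta_rim   = np.arctan2(r_out, h(r_out, a, b) - H_e)
    theta_inner = np.arctan2(r_in, -H_e)
    reprocessed = np.cos(theta_rim) - np.cos(theta_inner)   # integral of sin(theta)
    direct      = 1.0 - np.cos(theta_rim)
    return reprocessed / direct

for b in (0.0, 1.3, 1.7):
    a = 0.0 if b == 0.0 else 1 / 50
    print(f"b = {b}: F_rep/F_dir = {f_rep_over_f_dir(b, a=a):.2f}")
```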
2.3 Concavity and the Reprocessed Iron Line Consider two components to the reprocessed iron line, motivated partly by the approach employed for the best-studied MCG-6-30-15 example (e.g. Iwasawa 1996). Take the first component to peak at the rest frame frequency and the second to be the remaining broad line. After choosing a core width, one can derive a corresponding critical radius, $`r_c`$, outside of which all the narrow core emission emanates. The role of concavity becomes apparent by comparing the flux ratios from the two disc regions for flat vs. curved discs. Iwasawa et al (1996) and Nandra et al. (1997) consider a narrow component width $`\pm 2.5`$% around the 6.4 KeV peak rest energy in MCG-6-30-15 and for the population of 22 objects respectively. We can estimate $`r_c`$ by assuming that the spread corresponds to a Doppler width enveloping the largest and smallest frequencies at $`r_c`$. The Doppler shift at large $`r`$ is $$\nu /\nu _e(1\pm \mathrm{\Delta }v/c),$$ $`(3)`$ where $`\nu `$ is the frequency, $`\nu _e`$ is the rest frequency $`=6.4`$keV, and $`\mathrm{\Delta }v`$ is the maximum spread in velocity. For $`\mathrm{\Delta }v`$ we take the Keplerian speed $`c(1/r)^{1/2}`$, Thus $$r_c(\mathrm{\Delta }\nu /\nu _e)^2,$$ $`(4)`$ where $`\mathrm{\Delta }\nu `$ is the frequency half-width of the core. For $`\mathrm{\Delta }\nu =0.025`$, $`r_c=1600`$, while for $`\mathrm{\Delta }\nu =0.05`$, $`r_c=400`$. Using Fig. 1 to calculate the ratio of flux emanating from outside $`r_c`$ to that from within $`r_c`$ we obtain $$F_{out}/F_{in}=\frac{_{\pi /2Tan^1[(H_eh(r_{out}))/r_{out}]}^{\pi /2Tan^1[(H_eh(r_c))/r_c]}Sin\theta 𝑑\theta }{_{\pi /2Tan^1[(H_eh(r_c))/r_c]}^{\pi Tan^1[r_{in}/H_e]}Sin\theta 𝑑\theta }.$$ $`(5)`$ Notice from Fig. 1 that the angle bounds in the integral are the same regardless of whether $`h(r_c)>H_e`$ or $`h(r_c)<H_e`$. For a flat disc, (5) is found from taking $`h=0`$ for all $`r`$. 2.3 Local Approach The above approach is the simplest for the Euclidean case, but to include GR and actually compute line profiles, GR corrections are more easily incorporated in a formalism which integrates over $`r`$. An equivalent way to compute (5) is to note that the reprocessed line flux from an element of area $`dA`$ is proportional to the impinging flux $`f(r,H_e,h)`$ at that radius projected onto the area $`dA`$ (see Fig. 1) $$dFg_r(r)f(r,H_e,h)Cos\lambda r(dr^2+dh^2)^{1/2}=g_r(r)f(r,H_e,h)Cos\lambda (1+dh^2/dr^2)^{1/2}rdr$$ $$g_r(r)(r^2+(H_eh)^2)^1(1+dh^2/dr^2)^{1/2}rdr,$$ $`(6)`$ where $`f(r,H_e,h)(r^2+(H_eh)^2)^1`$. The angle $`\lambda `$ is between the normal to the area element and the X-ray source direction (Fig. 1) so that $$Cos\lambda =Cos(\pi /2Tan^1(dh/dr)+Tan^1[(h(r)H_e)/r]).$$ $`(7)`$ The quantity $`g_r(r)=(12/r)^4(12/r)^{1/2}(1+z)^4`$ is the product of three correction factors to an otherwise Euclidean formula: The first is the Doppler + GR correction to the Euclidean illumination function as approximated by Reynolds & Begelman (1996). The second is the correction to the area measure in the integrand. The third factor $`(1+z)^4`$ is the red-shift correction. This is given by $`(1+z)=(p^\mu u_\mu )_{em}/E_{obs}`$ where subscript $`em`$ stands for emitted location (i.e. disc), $`E_{obs}`$ is the photon energy measured by the observer, and $`p^\mu u_\mu `$ is the product of the photon 4-momentum and bulk motion 4-velocity along the sight line. At finite inclination angle, some of the photons from the inner regions populate the narrow core. 
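In the same spirit, Eqs. (4) and (5) can be evaluated with a few lines of code in the Euclidean limit; the geometric convention is the same as in the previous sketch, the parameters again follow the text, and the sampled values of $`b`$ and $`r_c`$ are illustrative.

```python
import numpy as np

def theta(r, a, b, H_e=10.0):
    """Polar angle from the upward vertical at the source toward the point
    (r, h(r)) on the reprocessing surface; h(r) from Eq. (1), capped at h <= r."""
    height = min(a * (r / 6.0) ** b, r)
    return np.arctan2(r, height - H_e)

def r_c(dnu_over_nu):
    """Eq. (4): radius outside which the Keplerian spread stays inside the core."""
    return dnu_over_nu ** -2

def f_out_over_f_in(rc, b, a=1/50, H_e=10.0, r_in=6.0, r_out=1.0e5):
    """Euclidean form of Eq. (5): flux from r > rc over flux from r < rc."""
    c_out, c_rc, c_in = (np.cos(theta(r, a, b, H_e)) for r in (r_out, rc, r_in))
    return (c_out - c_rc) / (c_rc - c_in)

print("r_c for a fractional half-width of 2.5%:", r_c(0.025))   # 1600 gravitational radii
print("r_c for a fractional half-width of 5%:  ", r_c(0.05))    # 400
print("flat disc, r_c = 400 :", round(f_out_over_f_in(400, b=0, a=0.0), 3))
print("b = 1.7,   r_c = 400 :", round(f_out_over_f_in(400, b=1.7), 2))
```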
In addition, some inner region photons may be lost as they impinge on the outer disc and incur a second reprocessing. The formulae for a face on inner disc thus provide a lower limit to the ratio $`F_{out}/F_{in}`$ for all inclination angles. From Reynolds & Begelman (1996), we then obtain $`(1+z)^4=(13/r)^2.`$ Notice that the illumination function correction and the red-shift correction compete. The former enhances, while the latter decreases the flux. The net effect is an increase in the redshifted component and an increase flux from the inner regions, and thus a decrease in (5). I ignore the tiny corrections to $`g_r(r)`$ which result from a finite $`h`$. Using (7) in (6) gives $$F_{out}/F_{in}=\frac{[_{r_c}^{r_{out}}g_r(r^{})𝑑r^{}r^{}(1+(\frac{dh}{dr^{}})^2)^{1/2}(r^2+(H_eh)^2)^1Cos\lambda (r^{})]}{[_{r_{in}}^{r_c}g_r(r^{})𝑑r^{}r^{}(1+(\frac{dh}{dr^{}})^2)^{1/2}(r^2+(H_eh)^2)^1Cos\lambda (r^{})]}.$$ $`(8)`$ For a flat disc, $`dh/dr=0`$ and then $`Cos\lambda =(H_eh)/(r^2+(H_eh)^2)^{1/2}`$. In the limit $`g(r)=1`$ for all $`r`$, the results from using (8) are identical to those from (5). In the next section I discuss the results of the above ratios and show some line profiles. 3. Results, Line Profiles, and Discussion The solid curves of Fig. 2 show $`F_{rep}/F_{dir}`$ as a function of $`b`$ for three values of $`r_{out}`$, using $`a=1/50`$, $`H_e=10`$, and $`b1`$. The restriction to concave curvature (i.e. $`b>1`$) means that the entire disc surface sees the X-rays. The imposed restriction that $`hr`$ leads to a maximum ratio of $`5.4`$ in Fig 2. For the above parameters, a Euclidean flat disc would give only $`F_{rep}/F_{dir}0.87`$ for an isotropic source. Note that if the X-ray source were suitably anisotropic, the ratio $`F_{rep}/F_{dir}`$ need not be significantly greater than 1 even for the optimal $`b`$. In that case, the Euclidean flat disc ratio for the same anisotropy would still be of order 5 times less for the optimal $`b`$. Fig. 3 shows $`F_{out}/F_{in}`$, for different values of $`r_{out}`$ and $`r_c`$. (To be conservative, Fig. 3 employs (8) in order to include the rough GR corrections which $`lower`$ the curves relative to the Euclidean case, making the effect $`30\%`$ less pronounced. By contrast, for Fig 2, the conservative lower limit is given by the Euclidean equation (2)). There are two regions for each curve of Fig. 3. For low $`b`$, the gain from orientation toward the source wins over the decrease in flux from the extra distance to the disc at a given $`r`$ and the curves rise. However since $`H_e<<r_c`$, above the $`b`$ at which $`h(r)/r=1`$, the increasing distance between the X-ray point source and the disc for larger $`r`$ wins and the curves then decline. A Shakura-Sunaeyev (Shakura & Sunaeyev 1973) disc corresponds to $`b=9/8`$. For a flat disc the ratios are low: for $`r_c=400`$, $`F_{out}/F_{in}0.03`$ and for $`r_c=1600`$, $`F_{out}/F_{in}0.007`$. The $`r_c=400`$ solid curve lies completely above the $`r_c=1600`$ solid curve in Fig. 3 since $`r_{out}`$ is the same in both cases. This contrasts the dashed curves which address a different issue. The approximate $`24\times 10^4`$ sec delay between an increase in continuum emission and the response of the narrow core line for e.g. MCG-6-30-15, (Iwasawa et al. 1996) motivates considering that some reprocessing material resides at the associated distance from the X-ray source. For a $`10^7M_{}`$ hole, this corresponds to a distance of $`400800R_g`$. 
The Keplerian velocities at this $`r_c`$ are consistent with a few percent frequency width of the core. This motivates determining the amount of reprocessed emission coming from $`r_c\text{ }<r\text{ }<10r_c`$ and the results are the dashed curves of Fig 3. Because of the narrow range in $`r`$, the curves are lower than for the solid curves but there is still a range of $`b`$ for which such an outer “ring” can produce $`F_{out}/F_{in}\text{ }>1/3`$, even for $`r_{out}=4000`$, which is of order that required for MCG-6-30-15 (Iwasawa et al. 1996). By contrast, for flat discs from (2), even the more favorable case ($`r_{out}=10r_c`$, $`r_c=400`$) gives only $`F_{out}/F_{in}=0.03`$. The dashed curves in Fig. 3 cross because a decade in $`r_c`$ for higher $`r_c`$ is larger, but farther out, than a decade in $`r_c`$ for lower $`r_c`$. The fact that emission in the narrow peak could originate from large radii means that the ubiquity of observed peaks near $`6.4`$keV in the 22 Seyferts studied by Nandra et al (1997) would not be as sensitive to the inner disc inclination angle. The maximum velocity dispersions at such large radii are small (e.g. $`\mathrm{\Delta }\nu 5\%`$ at $`r=400`$). For a strictly flat disc, some tuning in inclination angle is required to produce the location of the observed peak because gravitational+transverse Doppler red, and blue shifts conspire to produce the particular peak values. Figure 4 shows some line profiles using the formulae of Fabian et al. (1989) (Esin 1998) but with a modified emissivity function to include the flat and concave disc cases. The emissivity function of Fabian et al. (1989) was $`ϵ(r_{in}/r)^2`$ and I replace this by the factor in (8): $`(1+(\frac{dh}{dr})^2)^{1/2}(r^2+(H_eh)^2)^1Cos\lambda (r)`$. This simplifies for a flat disc as discussed below (8). As expected from Figs. 2 and 3, Fig 4 shows the narrow peak at 6.4KeV for the value of $`b=1.7`$ even at an inclination angle of $`40`$deg. The effect weakens significantly for $`b=1.3`$ as expected. Secondary reprocessing (Matt et al. 1991) of inner disc radiation by outer disc radiation is not considered. This would be less likely to effect the peak because the peak is produced from emission at large radii. Also, the constraint $`hr`$ means angles of inclination $`<45`$deg are less affected by shadowing. The above results show that for a range of $`b`$ and $`r_{out}`$, the ratios $`F_{rep}/F_{dir}`$ and $`F_{out}/F_{in}`$ can be more than an order of magnitude larger than for flat discs. The concavity may thus predominantly affect the total reprocessed fraction or the line shape rather than the line equivalent width (Matt et al. 1991). The results are not strongly sensitive to the parameters $`a`$, $`H_e`$, or $`r_{in}`$ for $`H_e,r_e\text{ }>6`$, but for $`r_{in},H_e<6`$ a more detailed inclusion of the ionisation fraction and shadowing is required. The value of $`b`$ which provides both the largest $`F_{out}/F_{in}`$ and $`F_{rep}/F_{dir}`$ for the range of $`r_{out}`$ considered is $`1.5\text{ }<b\text{ }<1.9`$ out to the radius for which $`h(r)/r=1`$. (The value $`b1.5`$ is that of an isothermal disc.) A disc which changes from a thin to thick disc/torus at $`r\text{ }>r_c`$ and has reprocessng material in the torus, may be approximated by (1). Some excess in reprocessed emission can also result from a warped disc, possibly tidally or radiatively driven (e.g. Terquem & Bertout 1993,1996; Pringle 1996,1997) or induced by wind torques, but the azimuthal dependence must then be considered (e.g. 
Terquem & Bertout 1993). The role of an anisotropic X-ray source would change the total reprocessed fraction, but not the influence on the line peak location, or the comparisons to a flat disc. 4. Conclusions Some simple but pronounced effects of a concavely curved accretion disc are captured by estimating the total reprocessed vs. direct flux and the relative flux emanating from inside and outside of a critical radius $`r_c`$. For a range of $`b`$, the reprocessed flux from $`r>400`$ and even $`r>1600`$ can contribute significantly to a rest frame iron line peak even when the (X-ray) radiation source is at located at $`H_e10`$. This alleviates some sensitivity of disc model predictions (e.g. Laor 1991) to the disc inclination angle, as seen in Fig 4. Reprocessing at large distances also predicts a time delay between changes in the direct continuum emission and the reprocessed emission which may be observed in some Seyferts (Iwasawa et al. 1996). In addition, concave discs may produce $`F_{rep}/F_{dir}\text{ }>5`$ for an isotropic X-ray source which provides another means for some AGN models of the X-ray background to account for the high required reprocessed fraction (c.f. Fabian 1992). However, if the X-ray backround and Seyferts do not generically require a high reprocessed fraction (Comastri et al 1995; Matt 1998), the ubiquity of the iron line peak at 6.4 KeV could still be influenced by concavity when compared to a flat disc: The concavity affects not only the total reprocessed fraction for a given X-ray source anisotropy, but the relative reprocessed fraction from different parts of the disc. Finally, note that the disc could be curved or could change from a thin to a thick disc with entrained reprocessing material, such that the effective global reprocessing geometry is approximated by a concavity. Acknowledgements: Many thanks to E. Chiang, C. Peres, and S. Phinney for discussions, to A. Esin for a subroutine, and to the referee for comments. Comastri A., Setti G., Zamorani G. Hasinger G., 1995, A&A, 296, 1 Dermer C.D., 1986, ApJ, 307, 47 Dermer C.D., Liang E. P., Canfield E., 1991, ApJ, 369, 410 Done C. Mulcahaey J.S., Mushotzky R.F., Arnaud K., ApJ 395, 275. Ebisawa K., 1996, in X-ray Imaging and Spectroscopy of Hot Plasmas, ed. F.Makino & K. Matsuda, (Tokyo: Universal Academy Press) p427. Fabian A.C., Rees, M.J., Stella L., White N.E., 1989, MNRAS, 238, 729. Esin A., personal communication. Fabian A.C., et al., 1995, MNRAS, 277, L11. Fabian, A.C., 1992, in The X-ray Background Barcons, X. and Fabian, A.C. eds., (Cambridge: Cambridge Univ. Press), p305. Fabian A.C., George I.M., Miyoshi, S., Rees, M.J., 1990, MNRAS, 240, 14P. Field G.B. & Rogers, R.D., 1993, ApJ 403, 94. Galeev A.A., Rosner R., Vaiana G.S., 1979, ApJ, 229, 318 Ghisellini G., George I.M., Fabian A.C., Done C., MNRAS, 1990, 248, 14. Guilbert P.W. & Rees. M.J., 1988, MNRAS 233, 475. Guainazzi M. et al., 1999, A&A, 341, L27. Haardt F., Maraschi L., 1993, 413, 507 Iwasawa K. et al., 1996, MNRAS, 282, 1038 Kenyon S.J. & Hartmann L., 1987, ApJ, 323, 714. Laor A., 1991, ApJ, 376, 90 Livio M. & Pringle, J.E, 1996, 278, L35. Lee J.C., Fabian, A.C., Reynolds, C.S., Iwasawa, K. & Brandt, W.N., 1998, MNRAS, 300, 583. Martocchia A., Matt G., 1996, MNRAS, 282, L53. Matt G., 1998, astro-ph/9811053. Matt G., Fabian A.C., Ross R.R, 1996, MNRAS, 278, 111. Matt G., Perola G.C., Piro, L., 1991, A& A, 247, 25. 
Mushotzky R.F., Done C.; Pounds K.A., 1997, ARA&A, 31, 717 Nandra K., George I.M., Mushotzky R.F., Turner T.J., Yaqoob T., 1993, ApJ, 477, 602 Pringle J.E., 1996, MNRAS, 281, 357. Pringle J.E., 1997, MNRAS, 292, 136. Rees M.J., 1984, ARAA, 22, 471. Reynolds, C.S. & Begelman, M.C., 1997, ApJ, 488, 109. Reynolds, C.S. & Fabian, A.C., 1997, MNRAS, 290, L1. Rogers, R.D. & Field G.B., 1991, ApJ, 370, L57. Rogers R.D. 1991, ApJ, 383, 550. Shakura, N.I. & Sunyaev R.A., 1973, A& A, 24, 337. Tanaka, Y. et al., 1995, Nature 375, 659. Terquem C. & Bertout B., 1993, A&A, 274, 291. Terquem C. & Bertout B., 1996, MNRAS, 279, 415 Young, A.J., Ross, R.R. & Fabian, A.C., 1998, MNRAS, 300, 11. Zdziarski A.A., Fabian A.C., Nandra K., Celotti A., Rees M.J., Done C., Coppi P.S., Madejski G.M., 1994, MNRAS, 269, L55 Zdziarski A.A., Johnson W.N., Done C., Smith D., McNaron-Brown K., 1995, ApJ, 438, L63 Figure 1 Caption: Schematic of X-ray source (small circle) above disc surface (thick black curve). Two values of $`\theta (r)`$ and $`\lambda (r)`$ angles are shown for illustration as used in (2), (5) and (8). Regime (a) $`H_e<h`$, and (b) $`H_e>h`$. Figure 2 Caption: Plot of $`h(r)/r`$ (dashed curves) and $`F_{rep}/F_{dir}`$ (solid curves) vs. $`b`$ for $`r_{out}=10^5,16000,4000`$ from left to right in each curve group. Values of $`H_e=10`$, $`r_{in}=6`$, and $`a=1/50`$ are used in all curves. The flattening is the result of imposing $`h(r)r`$ for all $`r`$ in (1). The X-ray source is assumed to be isotropic for this graph. Simple anisotropies could be included by multiplying the y-axis by a constant fraction. Figure 3 Caption: $`F_{out}/F_{in}`$ vs. $`b`$ from (8) for $`r_{out}=10^5`$ (solid curves) and $`r_{out}=10r_c`$ (dashed curves). The top solid and dashed curves as measured at $`b=1`$ have $`r_c=400`$, and the bottom curves have $`r_c=1600`$. The dashed curves cross and merge with the solid curves above the $`b`$ for which $`h(r)/r=1`$. The condition $`h(r)r`$ is imposed, but the presence of the down-turns only requires $`r_c>>r_{in}`$. Figure 4 Caption: Line profiles for flat and concave models based on Fabian et al. (1989) with $`r_{out}=10^4`$ with emissivity function modified as discussed in text. Solid lines are for flat discs at inclination angles of 40 deg (broader curve) and 30 deg (narrower curve) respectively. The concave disc curves match onto the wings of the flat curves. There are 4 curved disc profiles signatured by the height of their peaks. The largest has $`b=1.7`$ and $`30`$ deg inclination. The next largest is $`b=1.7`$ and $`40`$ deg. The next is $`b=1.3`$ and $`30`$ deg, and the lowest is $`b=1.3`$ and $`40`$ deg.
# Using Numerical Simulations to Gain Insight into the Structure of Superbubbles ## 1 Introduction In a recent paper, Basu, Johnstone, & Martin (1999) have modeled the shape and ionization structure of the W4 superbubble (see Normandeau & Basu, these proceedings) using the semianalytic Kompaneets (1960) model for blast wave propagation in a stratified exponential atmosphere. Our motivation in this paper is to compare the simplified Kompaneets model with more sophisticated models for superbubble expansion in a stratified medium. Our study reveals the differences between the various models, but also shows that the more detailed hydrodynamic models face difficulties in properly accounting for the shape of the W4 superbubble. However, the highly collimated W4 superbubble may be fit by numerical models which include a magnetic field with a significant vertical (perpendicular to the Galactic plane) component. ## 2 Comparison of Hydrodynamic Models We compare three classes of models for superbubble expansion in stratified media. The earliest one is due to Kompaneets (1960), and describes the propagation of a strong shock wave in an exponentially stratified medium. An analytic solution exists for the bubble shape at different times, and the time evolution is obtained by solution of an ordinary differential equation. The thin shell approximation (MacLow & McCray 1988) assumes a geometrically thin shell and determines its motion by direct integration of the momentum equation for various segments of the shell. Finally, numerical simulations (e.g., MacLow, McCray, & Norman 1989) provide the most complete hydrodynamic solutions. We reproduce all three calculations for an adiabatic bubble in an exponential atmosphere. The numerical simulations are performed with the ZEUS-2D code. Figure 1 compares the shape of the bubble at four different times for the three cases. The solid density contours represent the numerical solution. The Kompaneets model (dash-dotted line) expands most rapidly and assumes a more elongated shape than the other models. Its more rapid evolution means that it has already blown out of the atmosphere in the two lower panels. The thin shell approximation (dashed line) remains close to the inner boundary of the shell of swept-up mass in the numerical solution. The closeness of the thin shell and numerical results occurs since the thin shell model tracks the motion of the swept up mass. The numerical solution is also prone to a Rayleigh-Taylor instability at late times, when the upper shell is accelerating rapidly. We note that the Rayleigh-Taylor instability would be more pronounced if cooling was allowed in the shell (MacLow et al. 1989). An important difference between the models is that the Kompaneets model is more highly collimated before blowout than the other two models, since the vertical acceleration is unhindered by inertial effects, and occurs very rapidly. ## 3 Magnetohydrodynamic Model Due to the inertial effects mentioned above, the numerical hydrodynamic and thin shell results cannot produce highly collimated bubbles that meet the aspect ratio of the W4 superbubble, even though the less realistic Kompaneets model can do so. From H$`\alpha `$ observations of the ionized shell (Dennison, Topasna, & Simonetti 1997), we measure an aspect ratio $`\frac{z_{\mathrm{top}}}{r_{\mathrm{max}}}3.3`$, where $`z_{\mathrm{top}}`$ is the distance from the star cluster to the top of the bubble, and $`r_{\mathrm{max}}`$ is the maximum half width of the bubble. 
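The Kompaneets model referred to above has a closed-form shock shape, which makes the comparison with the measured aspect ratio easy to explore. The sketch below uses a standard form of the solution for an exponential atmosphere (as in Basu, Johnstone, & Martin 1999), with the scale height set to H = 1 and ỹ denoting the transformed time variable; the sampled values of ỹ/2H are illustrative.

```python
import numpy as np

def kompaneets_r(z, y, H=1.0):
    """Cylindrical radius r(z) of the Kompaneets shock in an atmosphere
    rho ~ exp(-z/H); y is the transformed time variable (blowout at y -> 2H)."""
    arg = 0.5 * np.exp(z / (2 * H)) * (1 - (y / (2 * H)) ** 2 + np.exp(-z / H))
    return 2 * H * np.arccos(np.clip(arg, -1.0, 1.0))

def aspect_ratio(y, H=1.0):
    """z_top / r_max for a source at z = 0: the top of the bubble lies at
    z_top = -2H ln(1 - y/2H), the maximum half-width is r_max = 2H arcsin(y/2H)."""
    z_top = -2 * H * np.log(1 - y / (2 * H))
    r_max = 2 * H * np.arcsin(y / (2 * H))
    return z_top / r_max

for frac in (0.90, 0.97, 0.99, 0.995):
    print(f"y/2H = {frac}: z_top/r_max = {aspect_ratio(2 * frac):.2f}")
print(f"r(z=0) at y/2H = 0.99: {kompaneets_r(0.0, 1.98):.2f} H")
```

With these expressions the observed ratio of ≈3.3 is reached only for ỹ/2H ≳ 0.99, i.e. essentially at blowout, consistent with the rapid late-time elongation of the Kompaneets bubble seen in Figure 1.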
We have attempted a variety of atmospheric models in the numerical and thin shell models, and find that neither a steeper nor shallower profile than the exponential stratification can explain the high collimation of the W4 superbubble. We address this issue by carrying out magnetohydrodynamic (MHD) simulations. A vertical (along $`z`$) magnetic field provides the necessary external pressure at large heights to confine the bubble to a narrow width. Figure 2 shows the evolution of the bubble in an exponential atmosphere with an initial vertical magnetic field $`B_z=3`$ $`\mu `$G. At the latter time, the aspect ratio of the bubble matches the observed aspect ratio of the H$`\alpha `$ shell. ## 4 Discussion Although the Kompaneets model can provide a highly collimated bubble that matches the aspect ratio of the W4 superbubble, more realistic hydrodynamic models predict wider bubbles than observed in W4, due to their proper accounting of inertial effects. However, a numerical model which includes a vertical magnetic field of a few $`\mu `$G strength can achieve the required collimation. We believe that such a field can also suppress the potential Rayleigh-Taylor instability in the accelerating upper shell, thereby explaining why the observed H$`\alpha `$ shell of the W4 superbubble does not appear to be breaking up. Although a purely vertical field is an idealization and is not supported by Faraday rotation data, we point out that even an initially horizontal magnetic field can produce a significant vertical component through the action of the Parker instability (e.g., Basu, Mouschovias, & Paleologou 1997). More realistic magnetic field geometries such as these remain to be explored.
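As a rough indication of why a few-microgauss vertical field is dynamically important, its magnetic pressure can be compared with a typical interstellar thermal pressure; the field strength below is the initial value used in the simulation, and the conversion constants are standard.

```python
import math

k_B = 1.381e-16                    # erg/K
B   = 3.0e-6                       # G, initial vertical field in the MHD run

P_mag = B ** 2 / (8.0 * math.pi)   # magnetic pressure in erg/cm^3
print(f"P_mag     = {P_mag:.2e} erg/cm^3")
print(f"P_mag/k_B = {P_mag / k_B:.0f} cm^-3 K")
```

The resulting P/k of a few thousand cm⁻³ K is comparable to typical interstellar thermal pressures, so once the shell accelerates into the low-density gas at large heights it is the field, rather than the ambient thermal pressure, that controls the lateral confinement.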
# Rotational quantum friction in superfluids: Radiation from object rotating in superfluid vacuum. ## A Introduction. The body moving in the vacuum with linear acceleration $`a`$ is believed to radiate the thermal spectrum with the Unruh temperature $`T_U=\hbar a/2\pi c`$ . The comoving observer sees the vacuum as a thermal bath with $`T=T_U`$, so that the matter of the body gets heated to $`T_U`$ (see references in ). Linear motion at constant proper acceleration (hyperbolic motion) leads to arbitrarily high velocity. On the other hand uniform circular motion features constant centripetal acceleration while being free of the pathology of infinite velocity (see the latest references in ). The latter motion is stationary in the rotating frame, which is thus a convenient frame for study of the radiation and thermalization effects for a uniformly rotating body. Zel’dovich was the first to predict that a rotating body (say, a dielectric cylinder) amplifies those electromagnetic modes which satisfy the condition $$\omega -L\mathrm{\Omega }<0.$$ (1) Here $`\omega `$ is the frequency of the mode, $`L`$ is its azimuthal quantum number, and $`\mathrm{\Omega }`$ is the angular velocity of the rotating cylinder. This amplification of the incoming radiation is referred to as superradiance . The other aspect of this phenomenon is that due to quantum effects, the cylinder rotating in quantum vacuum spontaneously emits the electromagnetic modes satisfying Eq.(1) . The same occurs for any rotating body, including the rotating black hole , if the above condition is satisfied. Distinct from the linearly accelerated body, the radiation by a rotating body does not look thermal. Also, the rotating observer does not see the Minkowski vacuum as a thermal bath. This means that the matter of the body, though excited by interaction with the quantum fluctuations of the Minkowski vacuum, does not necessarily acquire an intrinsic temperature depending only on the angular velocity of rotation. Moreover, the vacuum of the rotating frame is not well defined because of the ergoregion, which exists at the distance $`r_e=c/\mathrm{\Omega }`$ from the axis of rotation. The problems related to the response of the quantum system in its ground state to rotation, such as radiation by the object rotating in vacuum and the vacuum instability caused by the existence of the ergoregion , etc., can be simulated in superfluids, where the superfluid ground state plays the part of the quantum vacuum. We discuss the quantum friction due to spontaneous emission of phonons and rotons in superfluid <sup>4</sup>He and Bogoliubov fermions in superfluid <sup>3</sup>He. ## B Rotating frame. Let us consider a cylinder of radius $`R`$ rotating with angular velocity $`\mathrm{\Omega }`$ in the (infinite) superfluid liquid. In bosonic superfluids the quasiparticles are phonons and rotons; in Fermi superfluids these are the Bogoliubov fermions. The phonons are “relativistic” quasiparticles: Their energy spectrum is $`E(p)=cp+\stackrel{}{p}\cdot \stackrel{}{v}_s`$, where $`c`$ is the speed of sound and $`\stackrel{}{v}_s`$ is the superfluid velocity, the velocity of the superfluid vacuum; and this phonon dispersion is represented by the Lorentzian metric (the so-called acoustic metric ): $$g^{\mu \nu }p_\mu p_\nu =0,g^{00}=-1,g^{0i}=-v_s^i,g^{ik}=c^2\delta ^{ik}-v_s^iv_s^k.$$ (2) When the body rotates, the energy of quasiparticles is not well determined in the laboratory frame due to the time dependence of the potential, caused by the rotation of the body.
But it is determined in the rotating frame, where the potential is stationary. Hence it is simpler to work in the rotating frame. If the body is rotating surrounded by the stationary superfluid, i.e. $`\stackrel{}{v}_s=0`$ in the laboratory frame, then in the rotating frame one has $`\stackrel{}{v}_s=\stackrel{}{\mathrm{\Omega }}\times \stackrel{}{r}`$. Substituting this $`\stackrel{}{v}_s`$ in Eq.(2) we get the interval $`ds^2=g_{\mu \nu }dx^\mu dx^\nu `$, which determines the propagation of phonons in the rotating frame: $$ds^2=-(c^2-\mathrm{\Omega }^2r^2)dt^2-2\mathrm{\Omega }r^2d\varphi dt+dz^2+r^2d\varphi ^2+dr^2.$$ (3) The azimuthal motion of the quasiparticles in the rotating frame can be quantized in terms of the angular momentum $`L`$, while the radial motion can be treated in the quasiclassical approximation. Then the energy spectrum of the phonons in the rotating frame is $$E=c\sqrt{\frac{L^2}{r^2}+p_z^2+p_r^2}-\mathrm{\Omega }L.$$ (4) ## C Ergoregion in superfluids. The radius $`r_e=c/\mathrm{\Omega }`$, where $`g_{00}=0`$, marks the position of the ergoplane. In the ergoregion, i.e. at $`r>r_e=c/\mathrm{\Omega }`$, the energy of a quasiparticle in Eq.(4) can become negative for any rotation velocity, provided $`\mathrm{\Omega }L>0`$. We assume that the angular velocity of rotation $`\mathrm{\Omega }`$ is small enough, so that the linear velocity on the surface of the cylinder $`\mathrm{\Omega }R`$ is less than $`v_L=c`$ (the Landau velocity for nucleation of phonons). Thus phonons cannot be nucleated at the surface of the cylinder. However, at the ergoplane the velocity $`v_s=\mathrm{\Omega }r`$ in the rotating frame reaches $`c`$, so that quasiparticles can be created in the ergoregion $`r>r_e`$. The process of creation is, however, determined by the dynamics, i.e. by the interaction with the rotating body; there is no radiation in the absence of the body. If $`\mathrm{\Omega }R\ll v_L=c`$ one has $`r_e\gg R`$, i.e. the ergoregion is situated far from the cylinder; thus the interaction of the phonon states in the ergoregion with the rotating body is small. This results in a small emission rate and thus in a small value of quantum friction, as will be discussed below. Let us now consider other excitations: rotons and Bogoliubov fermions. Their spectra in the rotating frame are $`E(p)=\mathrm{\Delta }+{\displaystyle \frac{(p-p_0)^2}{2m_0}}-\mathrm{\Omega }L,`$ (5) $`E(p)=\sqrt{\mathrm{\Delta }^2+v_F^2(p-p_0)^2}-\mathrm{\Omega }L.`$ (6) Here $`p_0`$ marks the roton minimum in superfluid <sup>4</sup>He and the Fermi momentum in Fermi liquid, while $`\mathrm{\Delta }`$ is either a roton gap or the gap in superfluid <sup>3</sup>He-B. The Landau critical velocity for the emission of these quasiparticles is $`v_L=\mathrm{min}\frac{E(p)}{p}\approx \mathrm{\Delta }/p_0`$. In <sup>4</sup>He the Landau velocity for emission of rotons is smaller than that for the emission of phonons, $`v_L=c`$. That is why the ergoplane for rotons, $`r_e=v_L/\mathrm{\Omega }`$, is closer to the cylinder. However, for the rotating body the emission of the rotons is exponentially suppressed due to the big value of the allowed angular momentum for emitted rotons: the Zel’dovich condition Eq.(1) for the roton spectrum is satisfied only for $`L>\mathrm{\Delta }/\mathrm{\Omega }\gg 1`$ (see Fig. 1b). ## D Rotating detector. Let us consider the system, which is rigidly connected to the rotating body and thus comprises the comoving detector.
In superfluids the simplest model for such a detector consists of a layer near the surface of the cylinder where the superfluid velocity follows the rotation of the cylinder, i.e. $`\stackrel{}{v}_s=\stackrel{}{\mathrm{\Omega }}\times \stackrel{}{r}`$ in the laboratory frame and thus $`\stackrel{}{v}_s=0`$ in the rotating frame. This means that, as distinct from the superfluid outside the cylinder, in such a layer the quasiparticle spectrum has no $`\mathrm{\Omega }L`$ shift of the energy levels. Since in the detector matter, i.e. in the surface layer, the vorticity in the laboratory frame is nonzero, $`\stackrel{}{\nabla }\times \stackrel{}{v}_s=2\stackrel{}{\mathrm{\Omega }}\ne 0`$, this layer either contains vortices or is represented by the normal (nonsuperfluid) liquid, which rigidly rotates with the body. Actually the whole rotating cylinder can be represented by the rotating normal liquid. The equilibrium state of the rotating normal liquid, viewed in the rotating frame, is the same as that of the equilibrium stationary normal liquid, viewed in the laboratory frame. The rotating cylinder can also be represented by a cluster of quantized vortices. Such rigidly rotating clusters of vortices are experimentally investigated in superfluid <sup>3</sup>He (see e.g. ). We can discuss the complete system as consisting of two parts, each in its own ground state (see Figs. 1(a-b) for the case of the Fermi liquid): (1) the matter of the detector in its ground state as seen in the rotating frame; (2) the superfluid outside the cylinder in its ground state (the “Minkowski” vacuum) in the laboratory frame. The radiation of fermions by the rotating cylinder is described by the rotating observer as a tunneling process (Fig. 1c): fermions tunnel from the occupied negative energy levels in the detector to the unoccupied negative energy states in the ergoregion. The same process can be considered as the spontaneous nucleation of pairs: the particle is nucleated in the ergoregion and its partner hole is nucleated in the comoving detector. This process causes the radiation from the rotating body and also the excitation of the detector. From the point of view of the Minkowski (stationary) observer this is described as the excitation of the superfluid system by time dependent perturbations. ## E Radiation of phonons to the ergoregion. For the Bose case the radiation of phonons can also be considered as a process in which a particle in the normal Bose liquid in the detector tunnels to the scattering state at the ergoplane, where the energy is also $`E=0`$. In the quasiclassical approximation the tunneling probability is $`e^{-2S}`$, where at $`p_z=0`$: $$S=\mathrm{Im}\int dr\,p_r=L\int _R^{r_e}dr\sqrt{\frac{1}{r^2}-\frac{1}{r_e^2}}\approx L\mathrm{ln}\frac{r_e}{R}.$$ (7) Thus all the particles with $`L>0`$ are radiated, but the radiation probability decreases at higher $`L`$. If the linear velocity at the surface is much less than the Landau critical velocity, $`\mathrm{\Omega }R\ll c`$, the probability of radiation of phonons with the energy (frequency) $`\omega =\mathrm{\Omega }L`$ is $$w\sim e^{-2S}=\left(\frac{R}{r_e}\right)^{2L}=\left(\frac{\mathrm{\Omega }R}{c}\right)^{2L}=\left(\frac{\omega R}{cL}\right)^{2L},\mathrm{\Omega }R\ll c.$$ (8) If $`c`$ is substituted by the speed of light, Eq.(8) is proportional to the superradiant amplification of the electromagnetic waves by a rotating dielectric cylinder derived by Zel'dovich .
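The quasiclassical exponent in Eq.(7) and its logarithmic approximation are easy to compare. The following sketch (assumed geometry, not from the paper) evaluates the radial integral numerically and the resulting suppression factor of Eq.(8).

```python
# Sketch of the tunneling exponent of Eq.(7):
# S = L * int_R^{r_e} dr sqrt(1/r^2 - 1/r_e^2) ~ L * ln(r_e/R) for R << r_e.
import numpy as np
from scipy.integrate import quad

R, c, Omega, L = 1.0e-3, 240.0, 1.0e3, 3      # assumed values (SI units)
r_e = c / Omega

S_exact, _ = quad(lambda r: np.sqrt(1.0 / r**2 - 1.0 / r_e**2), R, r_e)
S_exact *= L
S_log = L * np.log(r_e / R)

w = np.exp(-2.0 * S_exact)                    # emission probability, cf. Eq.(8)
print(S_exact, S_log, w, (Omega * R / c)**(2 * L))
```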
The number of phonons with frequency $`\omega =\mathrm{\Omega }L`$ emitted per unit time can be estimated as $`\dot{N}=We^{-2S}`$, where $`W`$ is the attempt frequency $`\hbar /ma^2`$ multiplied by the number of localized modes $`RZ/a^2`$, where $`Z`$ is the height of the cylinder. Since each phonon carries the angular momentum $`L`$, the cylinder rotating in the superfluid vacuum (at $`T=0`$) is losing its angular momentum, which means quantum rotational friction. ## F Radiation of rotons and Bogoliubov quasiparticles. The minimal $`L`$ value of the radiated quasiparticles which have a gap $`\mathrm{\Delta }`$ is determined by this gap: $`L_{min}=\mathrm{\Delta }/\mathrm{\Omega }=p_0v_L/\mathrm{\Omega }`$, where $`v_L=\mathrm{\Delta }/p_0`$ is the Landau critical velocity. Since the tunneling rate decreases exponentially with $`L`$, only the lowest possible $`L`$ must be considered. In this case the tunneling trajectory with $`E=0`$ is determined by the equation $`p=p_0`$ both for rotons and for Bogoliubov quasiparticles. For $`p_z=0`$ the classical tunneling trajectory is thus given by $`p_r=i\sqrt{|p_0^2-L^2/r^2|}`$. This gives for the tunneling exponent $`e^{-2S}`$ the equation $$S=\mathrm{Im}\int dr\,p_r=L\int _R^{r_e}dr\sqrt{\frac{1}{r^2}-\frac{1}{r_e^2}}\approx L\mathrm{ln}\frac{r_e}{R}.$$ (9) Here the position of the ergoplane is $`r_e=L/p_0=v_L/\mathrm{\Omega }`$. Since the rotation velocity $`\mathrm{\Omega }`$ is always much smaller than the gap, $`L`$ is very large. That is why the radiation of rotons and of Bogoliubov quasiparticles with a gap is exponentially suppressed. ## G Friction due to transitions in the ”Minkowski vacuum”. Radiation can occur without excitation of the detector vacuum, via direct interaction of the particles in the Minkowski vacuum with the rotating body. In the rotating frame the states in the occupied band and in the conducting band have the same energy if they have opposite momenta $`\pm L`$. Then a transition between the two levels is energetically allowed and will occur if the Hamiltonian has a nonzero matrix element between the states $`L`$ and $`-L`$. The necessary interaction is provided by any violation of the axial symmetry of the rotating body, e.g. by roughness on the surface (thus the interaction is localized at $`r\approx R`$). A wire moving along a circular orbit is another practical example. In the case of a rotating vortex cluster the axial symmetry is always violated. In the quasiclassical approximation the process of radiation is as follows. A particle from the occupied band in the ergoregion tunnels to the surface of the rotating body, where after interaction with the nonaxisymmetric disturbance it changes its angular momentum. After that it tunnels back to the ergoregion, into the conducting band. In this process both a particle and a hole are produced in the Minkowski vacuum, and as a result the tunneling exponent is twice as large as in Eqs.(8) and (9). ## H Discussion. The rotational friction experienced by a body rotating in the superfluid vacuum at $`T=0`$ is caused by the spontaneous quantum emission of quasiparticles from the rotating object to the ”Minkowski” vacuum in the ergoregion. The emission is not thermal and depends on the details of the interaction of the radiation with the rotating body. In the quasiclassical approximation it is mainly determined by the tunneling exponent, which can be approximately characterized by the effective temperature $`T_{\mathrm{eff}}\sim \hbar \mathrm{\Omega }(2/\mathrm{ln}(v_L/\mathrm{\Omega }R))`$.
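To get a feeling for the orders of magnitude involved, the sketch below evaluates the emission-rate estimate and the effective temperature for one assumed set of numbers; none of the inputs come from the paper, and the result is only meant to illustrate the scaling.

```python
# Order-of-magnitude sketch (all numbers are assumptions):
# attempt frequency W ~ (hbar/(m a^2)) * (R Z / a^2), rate ~ W e^{-2S},
# and T_eff ~ hbar*Omega*(2/ln(v_L/(Omega R))) as quoted above.
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
m = 6.6e-27            # kg, helium atom mass (assumption for the estimate)
a = 3.0e-10            # m, interatomic spacing (assumption)
R, Z = 1.0e-3, 1.0e-2  # m, cylinder radius and height (assumptions)
Omega, v_L = 1.0e3, 240.0    # rad/s and m/s (assumptions)

W = (hbar / (m * a**2)) * (R * Z / a**2)       # attempt frequency x mode number
L = 1
S = L * np.log(v_L / (Omega * R))              # S ~ L ln(r_e/R) for the lowest mode
print("emission rate ~", W * np.exp(-2 * S), "phonons/s for L = 1")

T_eff = hbar * Omega * 2.0 / np.log(v_L / (Omega * R)) / kB
print("T_eff ~", T_eff, "K   (hbar*Omega/k_B =", hbar * Omega / kB, "K)")
```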
The vacuum friction of the rotating body can be observed only if the effective temperature exceeds the temperature of the bulk superfluid, $`T_{\mathrm{eff}}>T`$. For a body rotating with $`\mathrm{\Omega }=10^3`$rad/s, $`T`$ must be below $`10^{-8}`$K. However, a high rotation velocity can be obtained in a system of two like vortices, which rotate around their center of mass with $`\mathrm{\Omega }=\kappa /\pi R^2`$ ($`\kappa `$ is the circulation around each vortex, $`R`$ is the radius of the circular orbit). The process discussed in the paper occurs only if there is an ergoplane in the rotating frame. For the superfluid confined within an external cylinder of radius $`R_{\mathrm{ext}}`$, this process occurs at high enough rotation velocity, $`r_e(\mathrm{\Omega })=v_L/\mathrm{\Omega }<R_{\mathrm{ext}}`$, when the ergoplane is within the superfluid. On the instability of the ergoregion in the quantum vacuum towards emission see e.g. Ref. . If $`r_e(\mathrm{\Omega })>R_{\mathrm{ext}}`$ and the ergoregion is not present, then the interaction between the coaxial cylinders via the vacuum fluctuations becomes the main mechanism for dissipation. This causes the dynamic Casimir forces between the walls moving laterally (see Review ). As in , the nonideality of the cylinders is the necessary condition for quantum friction. The case of the rotating body is not the only one in superfluids where the ergoregion is important. The ergoregion also appears for linearly moving textures, where the speed of the order parameter texture exceeds the local ”speed of light” . One of us (AC) wishes to thank the Low Temperature Laboratory of Helsinki University of Technology for its hospitality and the EU Training and Mobility of Researchers Programme Contract N<sup>o</sup> ERBFMGECT980122 for its support.
# CMB in Open Inflation ## I Introduction Inflationary theory has a robust prediction: Our universe must be almost exactly flat, $`\mathrm{\Omega }_0=\mathrm{\Omega }_{\mathrm{matter}}+\mathrm{\Omega }_\mathrm{\Lambda }=1\pm O(10^{-4})`$. If this result is confirmed by observational data, we will have a decisive confirmation of inflationary cosmology. However, what if observational data show that the universe is open? Until very recently, we did not have any consistent cosmological models, inflationary or not, describing a homogeneous open universe. The assumption that all parts of an infinite universe can be created simultaneously and have the same value of the energy density everywhere did not have any justification. This problem was solved only after the invention of inflationary cosmology. It was found that each bubble of a new phase formed during the false vacuum decay in the inflationary universe looks from inside like an infinite open universe . The process of bubble formation in the false vacuum is described by the Coleman-De Luccia (CDL) instantons . If this universe continues inflating inside the bubble, then we obtain an open inflationary universe. Then by a certain fine-tuning of parameters one can get any value of $`\mathrm{\Omega }_0`$ in the range $`0<\mathrm{\Omega }_0<1`$ . Even though the basic idea of this scenario is quite simple, it was very difficult to find a realistic open inflation model. The general scenario proposed in was based on an investigation of chaotic inflation and tunneling in theories of a single scalar field $`\varphi `$. However, no models where this scenario could be successfully realized have been proposed so far. As was shown in , in the simplest models with polynomial potentials of the type $`\frac{m^2}{2}\varphi ^2-\frac{\delta }{3}\varphi ^3+\frac{\lambda }{4}\varphi ^4`$ the tunneling occurs not by bubble formation, but by jumping onto the top of the potential barrier, described by the Hawking-Moss instanton . This process leads to the formation of inhomogeneous domains of the new phase, and the whole scenario fails. The main reason for this failure is rather generic . Typically, CDL instantons exist only if $`|\partial ^2V|>H^2`$ during the tunneling (here and in the rest of the paper $`\partial ^2V`$ stands for $`\partial ^2V/\partial \varphi ^2`$). Meanwhile, inflation, which, according to , begins immediately after the tunneling, typically requires $`|\partial ^2V|\ll H^2`$. These two conditions are almost incompatible. This problem can be avoided in models of two scalar fields . However, in this paper we will concentrate on the one-field open inflation. We will recall why it was so difficult to realize this scenario. Then we will describe two models where this can be accomplished; one of these models was proposed recently in . The main purpose of this paper is to investigate the CMB anisotropy in these models. As we will see, the CMB anisotropy in these models has some distinguishing features, which may serve as a signature of the one-field open inflation models. ## II Toy models of one-field open inflation To explain the main features of the one-field open inflation models, let us consider an effective potential $`V(\varphi )`$ with a local minimum at $`\varphi _0`$, and a global minimum at $`\varphi =0`$, where $`V=0`$.
In an $`O(4)`$-invariant Euclidean spacetime with the metric $$ds^2=d\tau ^2+a^2(\tau )(d\chi _E^2+\mathrm{sin}^2\chi _Ed\mathrm{\Omega }_2^2),$$ (1) the scalar field $`\varphi `$ and the three-sphere radius $`a`$ obey the equations of motion $$\ddot{\varphi }+3\frac{\dot{a}}{a}\dot{\varphi }=\frac{dV}{d\varphi },\ddot{a}=-\frac{8\pi }{3}a(\dot{\varphi }^2+V),$$ (2) where dots denote derivatives with respect to $`\tau `$. Here and in what follows we will use the units where $`M_p=G^{-1/2}=1`$. An instanton which describes the creation of an open universe was first found by Coleman and De Luccia . It is given by a slightly distorted de Sitter four-sphere of radius $`H^{-1}(\varphi _0)`$, with $`a\approx H^{-1}\mathrm{sin}H\tau `$. The field $`\varphi `$ lies on the ‘true vacuum’ side of the maximum of $`V`$ in a region near $`\tau =0`$, and it is very close to the false vacuum, $`\varphi _0`$, in the opposite part of the four-sphere near $`\tau _i\approx \pi /H`$. The scale factor $`a(\tau )`$ vanishes at the points $`\tau =0`$ and $`\tau =\tau _\mathrm{i}`$. In order to get a singularity-free solution, one must have $`\dot{\varphi }=0`$ and $`\dot{a}=\pm 1`$ at $`\tau =0`$ and $`\tau =\tau _\mathrm{i}`$. This configuration interpolates between some initial point $`\varphi _i\ne \varphi _0`$ and the final point $`\varphi _f`$. After an analytic continuation to the Lorentzian regime, it describes an expanding bubble which contains an open universe . Solutions of this type can exist only if the bubble can fit into the de Sitter sphere of radius $`H^{-1}(\varphi _0)`$. To understand whether this can happen, remember that at small $`\tau `$ one has $`a\approx \tau `$, and Eq. (2) coincides with the equation describing the creation of a bubble in Minkowski space, with $`\tau `$ being replaced by the bubble radius $`r`$: $`\ddot{\varphi }+\frac{3}{r}\dot{\varphi }=\frac{dV}{d\varphi }`$ . Here the radius of the bubble can run from $`0`$ to $`\infty `$. Typically the bubbles have a size greater than the Compton wavelength of the scalar field, $`r\gtrsim m^{-1}\sim (\partial ^2V)^{-1/2}`$ . In de Sitter space $`\tau `$ cannot be greater than $`\frac{\pi }{H}`$, and in fact the main part of the evolution of the field $`\varphi `$ must end at $`\tau \lesssim \frac{\pi }{2H}`$. Indeed, once the scale factor reaches its maximum at $`\tau \approx \frac{\pi }{2H}`$, the coefficient $`\frac{\dot{a}}{a}`$ in Eq. (2) becomes negative, which corresponds to anti-friction. Therefore if the field $`\varphi `$ still changes rapidly at $`\tau >\frac{\pi }{2H}`$, it experiences ever growing acceleration near $`\tau _\mathrm{f}`$, and typically the solution becomes singular . Thus the Coleman-De Luccia (CDL) instantons exist only if $`\frac{\pi }{2H}>(\partial ^2V)^{-1/2}`$, i.e. if $`\partial ^2V>H^2`$. This condition must be satisfied at small $`\tau `$, which corresponds to the endpoint of the tunneling, where inflation should begin in accordance with the scenario of Ref. . But this condition is opposite to the standard inflationary condition $`\partial ^2V\ll H^2`$. This means that immediately after the tunneling the field begins rolling much faster than was anticipated in . As a result, in many models, such as the models with the effective potential $`V(\varphi )=\frac{m^2}{2}\varphi ^2-\frac{\delta }{3}\varphi ^3+\frac{\lambda }{4}\varphi ^4`$, the open inflation scenario simply does not work . This problem is very general, and for a long time we did not have any model where this scenario could be realized. We will describe two of these models here, one of which was proposed recently in .
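The incompatibility just described is easy to quantify for the simplest chaotic-inflation potential. The short sketch below (Planck units; the field value is illustrative) shows that for $`V=m^2\varphi ^2/2`$ the ratio $`\partial ^2V/H^2`$ is tiny at $`\varphi `$ of a few, so a CDL instanton cannot form there.

```python
# Quick check of the incompatibility discussed above, for V = m^2 phi^2 / 2
# (Planck units, M_p = 1): d2V/H^2 = 3/(4 pi phi^2) << 1 at phi of a few.
import numpy as np

m, phi = 1.5e-6, 3.5
V = 0.5 * m**2 * phi**2
d2V = m**2
H2 = 8.0 * np.pi * V / 3.0
print("d2V / H^2 =", d2V / H2)     # ~ 0.02, far below the CDL requirement
```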
We do not know as yet whether it is possible to derive these models from some realistic theory of elementary particles, so for the moment we consider them simply as toy models of open inflation. Still we believe that these models deserve investigation because they share the generic property of all models of this class: as we expected, immediately after the tunneling one has $`\partial ^2V>H^2`$. As we will see, this condition suppresses the scalar perturbations of the metric produced soon after the tunneling. The supercurvature perturbations are also suppressed, whereas the tensor perturbations in these models may be quite strong. These features may help us to distinguish one-field models of open inflation based on the Coleman-De Luccia tunneling from other models of open inflation. The first model which we are going to consider has an effective potential of the following type: $$V(\varphi )=\frac{m^2\varphi ^2}{2}\left(1+\frac{\alpha ^2}{\beta ^2+(\varphi -v)^2}\right).$$ (3) Here $`\alpha `$, $`\beta `$ and $`v`$ are some constants; we will assume that $`\beta \ll v`$. The first term in this equation is the potential of the simplest chaotic inflation model $`\frac{m^2\varphi ^2}{2}`$. The second term represents a peak of width $`\beta `$ with a maximum near $`\varphi =v`$. The relative height of this peak with respect to the potential $`\frac{m^2\varphi ^2}{2}`$ is determined by the ratio $`\frac{\alpha ^2}{\beta ^2}`$. As an example, we will consider the theory with $`m=1.5\times 10^{-6}`$, which is necessary to have a proper amplitude of density perturbations during inflation in our model. We will take $`v=3.5`$, which, as we will see, will provide about 65 e-folds of inflation after the tunneling. By changing this parameter by a few percent one can get any value of $`\mathrm{\Omega }_0`$ from $`0`$ to $`1`$. For definiteness, in this section we will take $`\beta ^2=2\alpha ^2`$, $`\beta =0.1`$. This is certainly not a unique choice; other values of these parameters, to be considered in the next section, can also lead to a successful open inflation scenario. The shape of the effective potential in this model is shown in Fig. 1. As we see, this potential coincides with $`\frac{m^2\varphi ^2}{2}`$ everywhere except a small vicinity of the point $`\varphi =3.5`$, but one cannot roll from $`\varphi >3.5`$ to $`\varphi <3.5`$ without tunneling through a sharp barrier. We have solved Eq. (2) for this model numerically and found that the Coleman-De Luccia instanton in this model does exist. It is shown in Fig. 2. The upper panel of Fig. 2 shows the CDL instanton $`\varphi (\tau )`$. Tunneling occurs from $`\varphi _i\approx 3.6`$ to $`\varphi _f\approx 3.4`$. The energy density decreases in this process, $`V(\varphi _f)<V(\varphi _i)`$. The lower panel of Fig. 2 shows the ratio $`\partial ^2V/H^2`$. Almost everywhere along the instanton trajectory $`\varphi (\tau )`$ one has $`|\partial ^2V|>H^2`$. That is exactly what we expected on the basis of our general arguments concerning CDL instantons. An interesting feature of the CDL instantons is that the evolution of the field $`\varphi `$ does not begin exactly at the local minimum of the effective potential. This is similar to what happens in the Hawking-Moss case , where the tunneling begins and ends not at the local minimum but at the top of the effective potential; see for a recent discussion of this issue. This unconventional feature of the CDL instantons was not emphasized in because the authors concentrated on the thin wall approximation, where this effect disappears.
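For readers who wish to reproduce the qualitative behavior, a minimal sketch of how Eq. (2) can be integrated for the Model 1 potential is given below. It is not the authors' code; the trial starting value of $`\varphi (0)`$ and the integration span are assumptions, and a genuine CDL instanton would be obtained by shooting on $`\varphi (0)`$ until the solution also ends regularly at the second zero of $`a(\tau )`$.

```python
# Minimal sketch: integrate the Euclidean equations (2) for a trial phi(0)
# with the regularity conditions dphi/dtau = 0, a ~ tau near tau = 0.
import numpy as np
from scipy.integrate import solve_ivp

m, alpha2, beta2, v = 1.5e-6, 0.005, 0.01, 3.5      # Model 1 parameters (Eq. 3)

def V(phi):
    return 0.5 * m**2 * phi**2 * (1.0 + alpha2 / (beta2 + (phi - v)**2))

def dV(phi, eps=1e-6):
    return (V(phi + eps) - V(phi - eps)) / (2 * eps)   # numerical dV/dphi

def rhs(tau, y):
    phi, dphi, a, da = y
    return [dphi, dV(phi) - 3.0 * (da / a) * dphi,
            da, -(8.0 * np.pi / 3.0) * a * (dphi**2 + V(phi))]

phi0 = 3.42                       # trial value on the true-vacuum side (assumption)
H = np.sqrt(8.0 * np.pi * V(phi0) / 3.0)
tau0 = 1e-3                       # start slightly off tau = 0 to avoid a = 0
sol = solve_ivp(rhs, [tau0, 3.0 / H], [phi0, 0.0, tau0, 1.0],
                rtol=1e-8, atol=1e-12)
print(sol.y[0, -1], sol.y[2, -1])   # phi and a at the end of the trial integration
```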
For a proper interpretation of these instantons, just as in the Hawking-Moss case, one may either glue to the point $`\tau _f`$ a de Sitter hemisphere corresponding to the local minimum of the effective potential , or use a construction proposed in . It would be very desirable to verify the Coleman-De Luccia approach by a complete Hamiltonian analysis of the tunneling in the inflationary universe. The second model has an effective potential of the following type: $$V(\varphi )=\frac{m^2}{2}\left(\varphi ^2-B^2\frac{\mathrm{sinh}A(\varphi -v)}{\mathrm{cosh}^2A(\varphi -v)}\right)$$ (4) Here $`A`$, $`B`$ and $`v`$ are some constants. As an example, we will consider the theory with $`m=1.0\times 10^{-6}`$, $`v=3.5`$, $`A=20`$, and $`B=4`$. The shape of the effective potential in this model is shown in Fig. 3. The Coleman-De Luccia instanton in this model is shown in Fig. 4. The upper panel of Fig. 4 shows the instanton $`\varphi (\tau )`$. Tunneling occurs from $`\varphi _i\approx 3.54`$, which almost exactly coincides with the position of the local minimum of $`V(\varphi )`$, to $`\varphi _f\approx 3.30`$. The energy density increases in this process, $`V(\varphi _f)>V(\varphi _i)`$. This may seem unphysical, but in fact such jumps are possible because of gravitational effects. A similar effect occurs during the Hawking-Moss tunneling to the local maximum of the effective potential . The lower panel of Fig. 4 shows that almost everywhere along the trajectory $`\varphi (\tau )`$ one has $`|\partial ^2V|\gg H^2`$. After the tunneling the scalar field slowly rolls down and then oscillates near the minimum of the effective potential at $`\varphi =0`$. During the stage of slow rolling, the scale factor in the models which we investigated expands approximately $`e^{65}`$ times. ## III CMB anisotropy in the open inflation models Just as we expected, in both models the tunneling brings the field to the region where $`|\partial ^2V|>H^2`$. Therefore the usual scalar perturbations of density are not produced in these models immediately after the open universe formation. As we will see now, this leads to a suppression of the contribution of these perturbations to the CMB anisotropy at $`\ell \lesssim 10`$. In addition to these perturbations, we could encounter supercurvature perturbations which are produced in the false vacuum outside the bubble and may later penetrate into its interior during the bubble expansion. However, we did not find any supercurvature perturbations in these models. The reason why there are no supercurvature perturbations in the second model is simple: the curvature of the effective potential in the false vacuum is much greater than $`H^2`$, so these perturbations are not produced outside the bubble. For the first model the reason for the absence of the supercurvature modes is less obvious, because in the false vacuum there one has $`\partial ^2V\ll H^2`$. However, all information about the interior of the bubble can be obtained by the analytical continuation of the CDL instanton, which begins away from the false vacuum, in a state with $`\partial ^2V>H^2`$. The fact that there is no region where $`\partial ^2V\ll H^2`$ in the CDL instanton implies that the initial distance from the center of the bubble to the place where $`\partial ^2V`$ becomes smaller than $`H^2`$ (in the false vacuum outside of the CDL instanton) is greater than $`2H^{-1}`$, i.e., it is greater than twice the size of the event horizon in de Sitter space. As a result, the fluctuations produced in the false vacuum do not penetrate into the bubble.
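The statement that the curvature of the potential near the false vacuum of Model 2 is large compared with $`H^2`$ can be checked directly from Eq. (4). The sketch below (assumed grid; it uses the Model 2 parameters quoted above and the false-vacuum location $`\varphi \approx 3.54`$ from the text) is only an illustration, not the authors' analysis.

```python
# Sketch: curvature of the Model 2 potential, Eq.(4), near the false vacuum,
# compared with H^2 = (8 pi/3) V.  The grid is an assumption.
import numpy as np

m, A, B, v = 1.0e-6, 20.0, 4.0, 3.5

def V(phi):
    x = A * (phi - v)
    return 0.5 * m**2 * (phi**2 - B**2 * np.sinh(x) / np.cosh(x)**2)

phi = np.linspace(3.3, 3.7, 4001)
Vp = V(phi)
d2V = np.gradient(np.gradient(Vp, phi), phi)   # numerical second derivative
H2 = 8.0 * np.pi * Vp / 3.0
i = np.argmin(np.abs(phi - 3.54))              # false-vacuum location quoted above
print("d2V/H^2 near the false vacuum:", d2V[i] / H2[i])   # much larger than 1
```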
In addition to the scalar perturbations, there also exist tensor perturbations. Unlike the standard inflation scenario, it is known that the fluctuations of the bubble wall contribute to the low frequency spectrum of tensor perturbations and the contribution can dominate over the scalar spectrum . In fact, we shall see that they can be quite significant and dominate the CMB anisotropy spectrum for small $`l`$. Below we present the scalar and tensor spectra for three models: Two of them are those discussed in the previous section. The third model is the one with the same potential form as the first model but with a different value of $`\beta `$; $`\beta ^2=\alpha ^2/2=0.0025`$. To compute the spectra, we adopt a gauge-invariant method developed by Garriga, Montes, Sasaki and Tanaka . Then we show the resulting CMB anisotropy spectra on large angular scales. ### A Scalar and tensor perturbation spectra Let us first summarize the procedure to obtain the scalar and tensor spectra. The metric describing the Lorentzian bubble configuration is given by the analytic continuation of (1) with $`\chi _E=i\chi _C+\pi /2`$: $`ds^2=d\tau ^2+a^2(\tau )(d\chi _C^2+\mathrm{cosh}^2\chi _Cd\mathrm{\Omega }_2^2).`$ (5) The scalar field configuration is still given by $`\varphi =\varphi (\tau )`$. In the one-field models of one-bubble open inflation, the scalar perturbation is conveniently described by a variable $`𝒒`$, which is essentially equivalent to the gravitational potential perturbation $`\mathrm{\Psi }_N`$ in the Newton gauge, $`𝒒={\displaystyle \frac{\mathrm{\Psi }_N}{4\pi G\dot{\varphi }}}.`$ (6) Here and below we recover $`G`$ in equations. The (even parity) tensor perturbation is described by a variable $`𝒘`$, whose relation to the transverse-traceless metric perturbation in the open universe will be given later. There are also odd parity modes for the tensor perturbation. But since the odd parity modes do not contribute to the CMB anisotropy, we shall not discuss them. Here we just mention that the form of the Lagrangians for both $`𝒒`$ and $`𝒘`$ is that for a scalar field with $`\tau `$-dependent mass . We quantize the variables $`𝒒`$ and $`𝒘`$ on the $`\chi _C=\mathrm{const}.`$ hypersurface which is a Cauchy surface and which contains all the information of the bubble configuration. We expand them in terms of the spherical harmonics $`Y_\mathrm{}m`$ and spatial eigenfunctions $`𝒒^p`$ and $`𝒘^p`$ with eigenvalue $`p^2`$: $`𝒒`$ $`=`$ $`{\displaystyle \widehat{a}_{p\mathrm{}m}f^p\mathrm{}(\chi _C)𝒒^p(\tau )Y_\mathrm{}m(\mathrm{\Omega }_2)}+\mathrm{h}.\mathrm{c}.,`$ (7) $`𝒘`$ $`=`$ $`{\displaystyle \widehat{b}_{p\mathrm{}m}f^p\mathrm{}(\chi _C)𝒘^p(\tau )Y_\mathrm{}m(\mathrm{\Omega }_2)}+\mathrm{h}.\mathrm{c}.,`$ (8) where $`\widehat{a}_{p\mathrm{}m}`$ and $`\widehat{b}_{p\mathrm{}m}`$ are the annihilation operators. The spatial eigenfunctions $`𝒒^p`$ and $`𝒘^p`$ satisfy, respectively, $`\left[{\displaystyle \frac{d^2}{d\eta _C^2}}+U_S(\eta _C)\right]𝒒^p`$ $`=p^2𝒒^p;`$ (9) $`U_S=`$ $`4\pi G\varphi _{}^{}{}_{}{}^{2}+\varphi ^{}\left({\displaystyle \frac{1}{\varphi ^{}}}\right)^{\prime \prime }4,`$ (10) $`\left[{\displaystyle \frac{d^2}{d\eta _C^2}}+U_T(\eta _C)\right]𝒘^p`$ $`=p^2𝒘^p;`$ (11) $`U_T=`$ $`4\pi G\varphi _{}^{}{}_{}{}^{2},`$ (12) where $`d\eta _C=d\tau /a(\tau )`$ and primes denote derivatives with respect to $`\eta _C`$. The potentials $`U_S`$ and $`U_T`$ both vanish for $`\eta _C\pm \mathrm{}`$, but $`U_S`$ is not necessarily positive definite. 
It then follows that if there exists a bound state for this eigenvalue equation, it exists discretely at some $`p^2<0`$ and corresponds to a supercurvature mode of the scalar spectrum. On the other hand, $`U_T`$ is manifestly positive definite and there is no supercurvature mode in the tensor spectrum. For both scalar and tensor perturbations, the spectrum is continuous for $`p^2>0`$. As noted before, we found no supercurvature mode in all of the three models. The equation for $`f^p\mathrm{}`$ turns out to be model-independent and is given by $`\left[{\displaystyle \frac{1}{\mathrm{cosh}^2\chi _C}}{\displaystyle \frac{}{\chi _C}}\mathrm{cosh}^2\chi _C{\displaystyle \frac{}{\chi _C}}{\displaystyle \frac{\mathrm{}(\mathrm{}+1)}{\mathrm{cosh}^2\chi _C}}\right]f^p\mathrm{}`$ (13) $`=(p^2+1)f^p\mathrm{}.`$ (14) In accordance with the Euclidean approach to the tunneling, we take the quantum states of $`𝒒`$ and $`𝒘`$ to be the Euclidean vacua. This implies that the positive frequency function $`f^p\mathrm{}`$ is regular at $`\chi _E=\pi /2`$ ($`\chi _C=0`$). Apart from the normalization, the solution is $$f^p\mathrm{}(\chi _C)\frac{1}{\sqrt{\mathrm{cosh}\chi _C}}P_{ip1/2}^{\mathrm{}1/2}(i\mathrm{sinh}\chi _C),$$ (15) where $`P_\nu ^\mu `$ is the associated Legendre function of the first kind. The normalizations of the mode functions $`f^p\mathrm{}𝒒^p`$ and $`f^p\mathrm{}𝒘^p`$ are determined by the standard Klein-Gordon normalization of a scalar field. We then analytically continue $`𝒒`$ and $`𝒘`$ to the region just inside the lightcone emanating from the center of the bubble, i.e., to the region of the open universe, by $`\chi =\chi _C+i\pi /2`$ and $`t=i\tau `$ (or $`\eta =\eta _Ci\pi /2`$). The metric there is given by $`ds^2`$ $`=`$ $`dt^2+a^2(t)(d\chi ^2+\mathrm{sinh}^2\chi d\mathrm{\Omega }_2^2)`$ (16) $`=`$ $`a^2(\eta )(d\eta ^2+d\chi ^2+\mathrm{sinh}^2\chi d\mathrm{\Omega }_2^2).`$ (17) Then the function $`f^p\mathrm{}`$ just becomes the radial function of a spatial harmonic function on a unit spatial 3-hyperboloid, $$\left(\stackrel{(3)}{\Delta }+p^2+1\right)Y^{p\mathrm{}m}=0;Y^{p\mathrm{}m}=f^p\mathrm{}(\chi )Y_\mathrm{}m(\mathrm{\Omega }_2).$$ (18) On the other hand, the spatial eigenfunctions $`𝒒^p`$ and $`𝒘^p`$ become the temporal mode functions for the scalar and tensor perturbations, respectively, in the open universe. Note that $`p1`$ corresponds to the comoving spatial curvature scale. The evolution equations for $`𝒒^p`$ and $`𝒘^p`$ take the same forms as Eqs. (10) and (12), respectively, with the replacement $`\eta _C\eta `$. We solve Eqs. (10) and (12) until the scale of the perturbation is well outside the Hubble horizon scale, i.e., until $`a^2H^2p^2+1.`$ (19) Here and in what follows, $`H`$ is not the inverse of the de Sitter radius but $`H=\dot{a}/a`$. The important quantity that determines the primordial density perturbation spectrum as well as the large angle scalar CMB anisotropies is the curvature perturbation on the comoving hypersurface, $`_c`$. The comoving hypersurface is the one on which the scalar field fluctuation $`\delta \varphi `$ vanishes. It is related to $`𝒒`$ as $$_c^p=4\pi G\dot{\varphi }𝒒^p+\frac{H}{a\dot{\varphi }^2}\frac{d}{dt}\left(a\dot{\varphi }𝒒^p\right).$$ (20) Just as in the case of the flat universe inflation, $`_c`$ remains constant in time until the perturbation scale re-enters the Hubble horizon . 
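As a side remark on the supercurvature criterion discussed above, once $`U_S(\eta _C)`$ has been tabulated on the instanton background, the bound-state search is a standard one-dimensional Schrödinger problem. The schematic sketch below is not the authors' code: a toy attractive well stands in for the actual potential, and it simply asks whether any eigenvalue lies below the continuum threshold $`p^2=0`$.

```python
# Schematic bound-state check: diagonalize -d^2/deta^2 + U on a grid and look
# for an eigenvalue below p^2 = 0 (a supercurvature mode).  The Gaussian well
# here is only a stand-in for a tabulated U_S(eta).
import numpy as np

eta = np.linspace(-30.0, 30.0, 1200)
h = eta[1] - eta[0]
U = -0.05 * np.exp(-eta**2)            # toy attractive region (assumption)

main = 2.0 / h**2 + U                  # finite-difference Laplacian + potential
off = -np.ones(len(eta) - 1) / h**2
Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
eigs = np.linalg.eigvalsh(Hmat)
print("lowest eigenvalue p^2 =", eigs[0],
      "-> supercurvature mode" if eigs[0] < 0 else "-> none")
```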
On the other hand, the even parity tensor perturbation in the open universe is described as $`\delta g_{ij}=a^2t_{ij};t_{ij}={\displaystyle \widehat{b}_{p\mathrm{}m}U_p(\eta )Y_{ij}^{(+)p\mathrm{}m}}+\text{h.c.},`$ (21) where $`Y_{ij}^{(+)p\mathrm{}m}`$ are the even parity tensor harmonics on the unit 3-hyperboloid . After an appropriate choice of the normalization factor, $`U_p`$ is given in terms of $`𝒘^p`$ as $`U_p={\displaystyle \frac{8\pi G}{a(p^2+1)}}{\displaystyle \frac{d}{dt}}(a𝒘^p).`$ (22) Similar to the case of the scalar perturbation, $`U_p`$ is known to remain constant in time on superhorizon scales. In Fig. 5, the scalar and tensor perturbation spectra for the first, second and third models (which we call Models 1, 2 and 3, respectively) are shown. Let us recall their model parameters: Model 1: $`\text{Eq. (}\text{3}\text{) with}\alpha ^2=0.005,\beta ^2=2\alpha ^2,`$ $`v=3.5,m=1.5\times 10^6.`$ Model 2: $`\text{Eq. (}\text{4}\text{) with}A=20,B=4,`$ $`v=3.5,m=1.0\times 10^6.`$ Model 3: $`\text{Eq. (}\text{3}\text{) with}\alpha ^2=0.005,\beta ^2=\alpha ^2/2,`$ $`v=3.5,m=1.5\times 10^6.`$ First let us consider the scalar spectra. As mentioned previously, there are no supercurvature modes in the present models. So the scalar perturbations are completely described by the continuous spectra shown in Fig 5. As seen from the figure, the scalar spectra for the three models are all alike: On the low frequency end, they decrease sharply as $`p`$ decreases, while they gradually increase for $`p10`$. As discussed in the previous section, one can interpret this feature as due to the common evolutionary behavior of any successful one-field model with the CDL tunneling. The scalar field evolves rapidly for the first few expansion times when $`^2V>H^2`$ and eventually decelerates as the slope of the effective potential becomes flatter. For $`p1`$, the spectrum approaches the one given by the standard formula for the flat universe inflation models. As shown in , the gradual increase gives rise to a peak in the spectrum at $`p10^4`$, which may have significant implications to the structure formation in the universe. To understand the shape of the scalar spectrum more quantitatively, it is useful to compare the computed spectrum with the following analytic formula , $`|_c^p|^2{\displaystyle \frac{p^3}{2\pi ^2}}=\left({\displaystyle \frac{H^2}{2\pi \dot{\varphi }}}\right)_{t=t_p}^2{\displaystyle \frac{\mathrm{cosh}\pi p+\mathrm{cos}\delta _p}{\mathrm{sinh}\pi p}}{\displaystyle \frac{p^2}{1+p^2}},`$ (23) where $`t_p`$ is an epoch slightly after the perturbation scale goes out of the Hubble horizon. This formula assumes $`^2VH^2`$ and the slow time variation of $`^2V`$. The angle $`\delta _p`$ describes the effect of the bubble wall, which is known to behave as $`\delta _p\pi p`$ for $`p0`$. The low frequency part of the spectrum is most suppressed when $`\delta _p=\pi `$. This case corresponds to the case when $`^2VH^2`$ on the false vacuum side of the instanton . In our case the condition $`^2VH^2`$ is violated at the first stages after the bubble formation. Therefore Eq. (23) should be somewhat modified for small $`p`$. Indeed, fluctuations with small $`p`$ are produced soon after the tunneling. But immediately after the tunneling one has $`^2V>H^2`$ in all models where the Coleman-De Luccia instantons exist. Therefore the perturbations with the wavelength greater than $`H^1`$ will not become “frozen” immediately after the tunneling. 
They will freeze somewhat later, when the field $`\varphi `$ will roll to the area with $`^2VH^2`$. But at that time their wavelength increases and their amplitude becomes smaller. As a result, Eq. (23) provides a good description of the spectrum at large $`p`$, but at small $`p`$ the amplitude of perturbations will be somewhat smaller than that given by Eq. (23). This expectation is confirmed by the results of our numerical investigation. In Fig. 6, this comparison is made for Model 1. In the figure, the upper dotted line shows the formula (23) and the solid line that approaches it for $`p1`$ is the computed one. We choose $`t_p`$ to be the time when $`a^2(2H^2^2V)=2(1+p^2)`$ and $`\delta _p=\pi `$. As one can see, the computed spectrum at small $`p`$ is significantly more suppressed than the most suppressed case of the analytic formula. As we shall see below, this large suppression relative to the analytic formula causes a large suppression of the CMB anisotropy at small $`\mathrm{}`$. The suppression of scalar perturbations with small $`p`$ and $`\mathrm{}`$ and the absence of supercurvature perturbations seem to be a generic property of the models of one-field open inflation based on the CDL tunneling. On the other hand, the spectrum become almost indistinguishable from the one given by Eq. (23) for $`p1`$. Thus the tilt of the spectrum (with a positive power-law index) is due to the slowing down of the evolution of $`\varphi `$. Now let us consider the tensor spectra. The spectra for Models 1 and 3 are indistinguishable at $`p1`$, while the spectrum for Model 2 is about a factor of 2 smaller. This difference is due to the difference in the choice of the mass parameter: The mass square for Models 1 and 3 is $`1.5^2=2.25`$ greater than that for Model 2. This results in the difference in $`H^2`$. In fact, if we multiply the spectrum of Model 2 by 2.25, it becomes almost indistinguishable from the spectrum of Model 3 for the whole range of $`p`$. Turning to the low frequency behavior, the spectrum of Model 1 at $`p1`$ differs considerably from that of Model 3: The former is larger by an order of magnitude relative to the latter at small $`p`$. This enhancement is due to the wall fluctuation modes. Recall that the parameter $`\beta `$ for Model 1 is larger than that for Model 3. Since a larger $`\beta `$ means a lower potential barrier, the wall tension is smaller for Model 1 than for Model 3. This makes the wall of Model 1 easier to vibrate. A non-dimensional quantity that represents the strength of the wall tension is given by the following integral over the instanton background : $`\mathrm{\Delta }s=4\pi G{\displaystyle \varphi ^{}{}_{}{}^{2}d\eta _C}.`$ (24) For Models 1, 2 and 3, the values of $`\mathrm{\Delta }s`$ are found as Model 1: $`\mathrm{\Delta }s=0.1681,`$ (25) Model 2: $`\mathrm{\Delta }s=0.6614,`$ (26) Model 3: $`\mathrm{\Delta }s=0.6640.`$ (27) In the thin-wall limit, $`\mathrm{\Delta }s=4\pi GR_WS_1`$, where $`R_W`$ is the wall radius and $`S_1`$ is the surface tension. Further, in this limit, $`\mathrm{\Delta }s`$ is always smaller than unity and the low frequency spectrum is enhanced by a factor $`1/\mathrm{\Delta }s^2`$ for the width $`\mathrm{\Delta }p\mathrm{\Delta }s`$ . In the present case, as we have seen in section II, the bubble walls are not at all thin. Nevertheless, this qualitative feature expected from the thin-wall limit is in good agreement with the computed tensor spectra. To see the effect of wall fluctuations more clearly, in Fig. 
6, the tensor spectrum for Model 1 is compared with that given by the following approximate analytic formula derived in : $`|U_p|^2{\displaystyle \frac{p^3}{2\pi ^2}}=32\pi G\left({\displaystyle \frac{H}{2\pi }}\right)_{t=t_p}^2{\displaystyle \frac{\mathrm{cosh}\pi p1}{\mathrm{sinh}\pi p}}{\displaystyle \frac{p^2}{1+p^2}},`$ (28) where we took the large tension limit, which makes the wall fluctuations least effective. As seen from Fig. 6, the analytic formula agrees very well with the computed spectrum for $`p1`$. Hence the difference at $`p1`$ is totally due to the wall fluctuation modes. If one compares the analytic tensor spectrum in Fig. 6 with the tensor spectrum of Model 3 in Fig. 5, one sees they almost coincide with each other. This is in accordance with the fact that $`\mathrm{\Delta }s`$ of Model 3 is large, as shown in Eq. (25). Thus the bubble wall fluctuations are highly suppressed in Model 3 (and in Model 2) due to the large wall tension. ### B Large angle CMB spectra We now discuss the CMB anisotropies for Models 1, 2 and 3. We focus on the CMB anisotropy spectrum for $`\mathrm{}20`$. Since the contribution of scalar perturbations is dominated by the effect of gravitational potential perturbations, we take account of only the so-called Sach-Wolfe and integrated Sach-Wolfe effects. Although there is a possibility that $`\mathrm{\Omega }_0`$ is dominated by $`\mathrm{\Omega }_\mathrm{\Lambda }`$, here we assume the present universe is matter-dominated; $`\mathrm{\Omega }_0=\mathrm{\Omega }_{\mathrm{matter}}`$. Before going into discussion, we note one subtlety. In the one-bubble open universe scenario, the duration of inflation inside the bubble is directly related to the value of $`\mathrm{\Omega }_0`$ today. In other words, once the model parameters are fixed, the duration of inflation is fixed and consequently so the value of $`\mathrm{\Omega }_0`$. However, $`\mathrm{\Omega }_0`$ depends rather sensitively on the values of the model parameters. In particular, it takes a very small change in $`v`$ to give a different $`\mathrm{\Omega }_0`$. But such a change will not cause a change in the shape of perturbation spectra. Furthermore, the efficiency of reheating (or preheating) at the end of inflation will also affect the value of $`\mathrm{\Omega }_0`$. So, depending on a grand scenario one has in mind, the resulting $`\mathrm{\Omega }_0`$ will be different. Because of these reasons, below we present the CMB anisotropies of Models 1, 2 and 3 for several different values of $`\mathrm{\Omega }_0`$ by artificially varying it. The computed CMB spectra $`\mathrm{}(\mathrm{}+1)C_{\mathrm{}}`$ for Models 1, 2 and 3 are shown in Figs. 7, 8 and 9, respectively. The amplitudes shown there are the absolute amplitudes of the spectra for the given parameter values. It should be noted, however, that the amplitude can be tuned to fit the observed value (at certain $`\mathrm{}`$) by changing the value of $`m`$ if necessary. So, the important point is the relative amplitudes of the scalar and tensor contributions and their spectral shapes. The scalar CMB anisotropies show similar spectral behavior for all the models. Namely, their amplitudes are suppressed at small $`\mathrm{}`$. This behavior is due to the large suppression of the scalar spectra at $`p10`$ mentioned in the previous subsection. If one compares the present results with the ones shown in Figs. 
4, 5 and 6 of , one sees that the tendency is opposite: The scalar spectra obtained in have a feature that they gradually decrease as $`\mathrm{}`$ increases. This is due to the integrated Sach-Wolfe effect and it is usually what one expects for open universe models. On the contrary, in the present case, because of the large suppression of the scalar spectra at $`p10`$, the corresponding CMB spectra increase for increasing $`\mathrm{}`$ and level off around $`\mathrm{}10`$. As expected from the tensor perturbation spectra shown in Fig. 5, the tensor CMB anisotropies at $`\mathrm{}510`$ are large in Model 1 due to large wall fluctuations, while they are small in Models 2 and 3. For Model 1, this enhancement causes a rise in the total spectra for $`\mathrm{}5`$, which does not seem to fit with the observed spectrum by COBE-DMR . On the other hand, the tensor contribution to the CMB anisotropies of Models 2 and 3 is small. As a result, the total spectra of Models 2 and 3 turn out to be rather flat, which is consistent with the COBE spectrum. ## IV Conclusions Despite a lot of progress in our understanding of various versions of open inflation, until now we did not know how the spectrum of CMB may look in the simplest one-field open inflation models. Previous calculations have been based on the assumption that the usual inflationary perturbations are produced inside the bubble immediately after it is formed. However, as we have argued (see also ), bubbles appear only if $`^2V>H^2`$ at the moment of their formation in the one-field models. This means that the usual inflationary perturbations are not produced at that time. In this paper we have studied the spectrum of CMB in several different models of one-field open inflation. At $`\mathrm{}10`$ the spectrum coincides with the spectrum obtained in the earlier papers on open inflation, since the mechanism of the bubble production is not very important for the behavior of the perturbations on scale much smaller than the size of the bubble. The main difference in the spectrum of CMB occurs at $`\mathrm{}O(10)`$. We have found that the spectrum of scalar CMB anisotropies has a minimum at small $`\mathrm{}`$, and reaches a plateau at $`\mathrm{}=O(10)`$. The existence of this minimum is a model-independent feature of the spectrum related to the fact that $`^2V>H^2`$ at the moment of the bubble formation in the one-field models. In all models which we have studied there are no supercurvature perturbations. Tensor CMB anisotropies are peaked at $`\mathrm{}=2`$. Relative magnitude of the scalar CMB spectra versus tensor CMB spectra at small $`\mathrm{}`$ depends on the parameters of the models, and in particular on the value of $`\mathrm{\Omega }_0`$. In some of the models, tensor perturbations are too large, which rules these models out. This effect is especially pronounced in the models with $`\mathrm{\Omega }_01`$. In some other models the tensor perturbations are very small even for $`\mathrm{\Omega }_01`$, and the combined spectrum of perturbations has a minimum at small $`\mathrm{}`$. In future satellite missions one could measure the tensor spectrum via polarization. This would make it possible to identify the scalar and tensor contributions to the CMB anisotropy and to compare them with the predictions of the one-field models of open inflation. 
We conclude that the spectrum of CMB in one-field models of open inflation has certain features which will help us to verify these models and to distinguish them from other versions of inflationary theory. ### Acknowledgments It is a pleasure to thank J. García–Bellido and R. Bousso for useful and stimulating discussions. The work of A.L. was supported in part by NSF grant PHY-9870115, and the work of M.S. and T.T. was supported in part by Monbusho Grant-in-Aid for Scientific Research No. 09640355.
# IMSc/98/07/37 A New Mechanism for Neutrino Mass P.P.Divakaran SPIC Mathematical Institute 92, G.N.Chetty Road, Madras-600 017. and G.Rajasekaran Institute of Mathematical Sciences CIT Campus, Madras-600 113. Abstract A mechanism for generating massive but naturally light Dirac neutrinos is proposed. It involves a composite Higgs within the standard model as well as some new interaction beyond the standard model. According to this scenario, a neutrino mass of 0.1 eV or higher signals new physics at energies of 10–100 TeV or lower. The recent announcement of a depletion in the number of $`\nu _\mu `$'s expected at the earth's surface, originating from cosmic ray interactions in the atmosphere, has once again focussed attention on the fundamental properties of neutrinos. The favoured explanation for this effect is that a $`\nu _\mu `$ oscillates into a neutrino of another family, most likely a $`\nu _\tau `$, implying that at least one neutrino has a nonzero mass. From the reported value of $`\delta m^2\approx 10^{-3}`$ to $`10^{-2}eV^2`$, we can conclude that the average mass of the two neutrinos involved is bounded approximately by $`m>\frac{1}{2}\sqrt{\delta m^2}\approx 10^{-1}eV`$. Massive but light neutrinos have intrigued model-makers for quite some time now. The most widely discussed possibility is to assume that neutrinos are Majorana particles, in which case they can be driven to a small mass by the see-saw mechanism . If, however, neutrinos turn out to be Dirac particles, we would require an alternative scenario. In this brief note, we propose such an alternative. We suggest a simple, qualitative, model-independent line of reasoning that naturally accommodates light, massive, Dirac neutrinos and draw from it information regarding the scale at which new physics beyond the standard model can be expected to come into play. Neutrinos are unique in the standard model. They are the only fermions of which a part, namely the right-handed part $`\nu _R`$, has zero quantum numbers under $`SU(3)\times SU(2)\times U(1)`$ and as a consequence has no gauge interaction. Thus, if there are no elementary Higgs bosons and if the W, Z, the charged leptons and the quarks get their masses by dynamical breaking of symmetry induced by the $`SU(3)\times SU(2)\times U(1)`$ gauge interactions alone, then neutrinos will remain massless. In such a case, new interactions going beyond the standard model will be required for giving mass to the neutrino. If the mass scale of the new physics beyond the standard model is large enough, the mass of the neutrino will remain small. This would provide a natural mechanism for small neutrino masses. In contrast, totally arbitrary neutrino masses would result from the introduction of an elementary Higgs boson, which we discard. To make our suggestion a little more concrete, let us envisage a picture in which the Higgs boson $`H`$ is a composite of fermions and antifermions bound by the $`SU(3)\times SU(2)\times U(1)`$ gauge forces through some nonperturbative mechanism . In principle, $`H`$ can be a combination of $`\overline{t}_Lt_R,\overline{b}_Lb_R,\mathrm{\dots },\overline{d}_Ld_R,\overline{\tau }_L\tau _R,\mathrm{\dots },\overline{e}_Le_R`$, but it cannot contain $`\overline{\nu }_L\nu _R`$ of any family, since $`\nu _R`$ does not have any gauge interaction. In other words, the effective Yukawa coupling $`H\overline{\nu }_L\nu _R`$ vanishes exactly, to all orders in the $`SU(3)\times SU(2)\times U(1)`$ gauge coupling constants.
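For orientation, the bound quoted above is a one-line computation; the value of $`\delta m^2`$ used below is the upper end of the quoted range and is only illustrative.

```python
# Numerical restatement of the mass bound quoted above.
import math
dm2 = 1e-2                                   # eV^2, upper end of the quoted range
print("m >", 0.5 * math.sqrt(dm2), "eV")     # ~ 0.05 eV, i.e. of order 10^-1 eV
```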
On the other hand, the effective Yukawa vertex $`H\overline{e}_Le_R`$ for the electron (or for any other charged fermion) exists and it has a form factor characterized by a momentum scale $`\mathrm{\Lambda }_H`$ which we take to be the electroweak scale $`\sim `$ 100 GeV, as that is the only relevant scale. We may call the standard model interactions “allowed” interactions. In this sense, masses of the charged fermions are allowed, via the Yukawa interaction, if $`H`$ has a nonvanishing vacuum expectation value, while neutrino masses are forbidden in the regime of validity of the standard model - the only way to make neutrinos massive is to invoke forces beyond the standard model. We next go to the “first-forbidden” approximation, i.e. we include the nonstandard effects in the lowest nontrivial order. Without being committed to a specific model, we parametrize the required new physics beyond the standard model by effective four-fermion couplings with a Fermi-type coupling constant generically denoted as $`G_X`$. The corresponding mass scale $`G_X^{-1/2}`$ must be substantially higher than the mass scale ($`\sim `$ 100 GeV) of the standard model. This will generate the first-forbidden coupling $`H\overline{\nu }_L\nu _R`$ through the graphs shown in Fig.1, where the shaded vertex is the “allowed” Yukawa vertex with form factor, for a charged fermion which we may take to be a charged lepton $`\mathrm{\ell }`$, so as not to violate $`B`$ and $`L`$ at the $`X`$ vertex. The corresponding effective Yukawa coupling constant $`f_\nu `$ for $`\nu `$ can be estimated : $$f_\nu \approx f_{\mathrm{\ell }}G_X\int ^{\mathrm{\Lambda }_H}\frac{d^4p}{\not{p}\not{p}}\approx f_{\mathrm{\ell }}G_X\mathrm{\Lambda }_H^2$$ $`(1)`$ where $`f_{\mathrm{\ell }}`$ is the Yukawa coupling constant for $`\mathrm{\ell }`$ and the integral is cut off at $`\mathrm{\Lambda }_H`$ because of the form factor of the composite Higgs. If the Higgs has a nonvanishing vacuum expectation value, then we get for the neutrino mass $$m_\nu \approx m_{\mathrm{\ell }}G_X\mathrm{\Lambda }_H^2$$ $`(2)`$ where $`m_{\mathrm{\ell }}`$ is the mass of the charged lepton. For $`\mathrm{\Lambda }_H\approx 100GeV`$, we arrive at $$G_X^{-1/2}\approx 100\sqrt{m_{\mathrm{\ell }}/m_\nu }GeV.$$ $`(3)`$ A lower bound on $`m_\nu `$ thus results in an upper bound on the scale of new physics $`G_X^{-1/2}`$. Also, the lower the mass of $`\mathrm{\ell }`$ to which $`\nu `$ couples at the $`\overline{\mathrm{\ell }}_R\nu _RX`$ vertex, the lower is the bound on $`G_X^{-1/2}`$, and the best bound is obtained for the charged lepton of lowest mass $`\mathrm{\ell }`$ to which the neutrino of mass $`m_\nu `$ couples. If the dominant mixing of the atmospheric $`\nu _\mu `$ is with $`\nu _\tau `$, the best bound on $`G_X^{-1/2}`$ is realised for $`\mathrm{\ell }=\mu `$ in the above formula : $$G_X^{-1/2}\lesssim 10^5-10^6GeV.$$ $`(4)`$ If the massive neutrino couples also to $`e_R`$ (for which there is no clear evidence), this bound will be reduced by a factor of about 10. In contrast to the see-saw mechanism where $`m_\nu `$ depends linearly on the mass of the heavy right-handed Majorana neutrino, in our formula (2), $`m_\nu `$ depends on the square of the mass-scale of new physics. As a result, in the scenario envisaged here, new physics would occur at much lower energies than with the see-saw mechanism, and so our proposal can be confronted with experiment much earlier and either confirmed or ruled out. We discuss briefly two illustrative possibilities of new physics beyond the standard model (SM) that would lead to the two types of effective four-fermion couplings introduced in Fig.1.
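Before turning to those possibilities, the scale implied by Eqs. (2)–(3) can be made explicit numerically. In the sketch below the neutrino mass is an assumed illustrative value; the output is of the same order as the bound in Eq. (4).

```python
# Numerical sketch of the estimate (3): G_X^{-1/2} ~ Lambda_H * sqrt(m_l / m_nu).
# The neutrino mass below is an assumed illustrative value.
import math

Lambda_H = 100e9            # eV, electroweak scale ~ 100 GeV
m_mu = 105.7e6              # eV, muon mass
m_nu = 0.1                  # eV, assumed lower bound suggested by atmospheric data

scale = Lambda_H * math.sqrt(m_mu / m_nu)
print("G_X^{-1/2} ~ %.1e GeV" % (scale / 1e9))   # a few times 10^6 GeV here
```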
We must note that the type (a) coupling (Fig.1(a)) is consistent with SM symmetry, but the type (b) coupling (Fig.1(b)) violates $`SU(2)\times U(1)`$. Type (a) can be obtained by the exchange of a charged or neutral scalar boson $`S`$ (as shown in Fig.(2)), with coupling constant $`h`$ and mass $`m_S\gg `$ 100 GeV. In this case, $`G_X`$ can be replaced by $$G_X\approx \frac{h^2}{m_S^2},$$ $`(5)`$ and hence the upper bound on $`m_S`$ would be smaller than $`10^5-10^6`$ GeV, if $`h`$ is less than unity. If all elementary scalars are forbidden (which is not necessary for our argument on the neutrino mass), these $`S`$ bosons also could be composite, but formed by forces beyond the standard model. (For obvious reasons $`S^0`$ must have zero vacuum expectation value). The SM-symmetry violating four-fermion coupling (type (b)) occurs in a large class of models in which the $`W`$ boson of the SM mixes with a heavier $`W`$ boson that couples to right-handed fermions. The best model of this kind is the one in which $`SU(2)_L\times U(1)`$ of the SM is extended to $`SU(2)_L\times SU(2)_R\times U(1)`$. This has two pairs of charged $`W`$ bosons, $`W_L^\pm `$ and $`W_R^\pm `$. The mass eigenstates $`W_1^\pm `$ and $`W_2^\pm `$ can be expressed through the mixing angle $`\zeta `$ : $$W_1=W_L\mathrm{cos}\zeta +W_R\mathrm{sin}\zeta $$ $`(6)`$ $$W_2=-W_L\mathrm{sin}\zeta +W_R\mathrm{cos}\zeta .$$ $`(7)`$ One identifies $`W_1`$ with the known $`W`$ boson and $`W_2`$ is presumed to be heavy. The current experimental limits are $$|\zeta |<10^{-2}-10^{-3}$$ $`(8)`$ $$\beta \equiv \frac{m_{W_1}^2}{m_{W_2}^2}<0.02.$$ $`(9)`$ We also have a theoretical bound $$|\zeta |<\beta .$$ $`(10)`$ Fig.3 shows the $`W`$-exchange graphs that generate the required coupling of Fig.1b. The effective Fermi-coupling constant arising from the sum of these two graphs can be estimated to be $$G_X\approx g^2\mathrm{cos}\zeta \mathrm{sin}\zeta \left(\frac{1}{m_{W_1}^2}-\frac{1}{m_{W_2}^2}\right)\approx \left(\frac{\zeta }{\beta }\right)\frac{g^2}{m_{W_2}^2}\lesssim \frac{g^2}{m_{W_2}^2}.$$ $`(11)`$ Combining (4) and (11) and using the value of the $`SU(2)_L`$ gauge coupling constant $`g`$, we get the bound $$m_{W_2}\lesssim 10^4-10^5GeV.$$ $`(12)`$ Our conclusions are : (i) The physics of a composite Higgs or, more generally, dynamical symmetry breaking within the standard model followed by new interactions beyond the standard model provides a natural mechanism for generating very small neutrino masses. (ii) Within this scenario, a finite but small Dirac mass for neutrinos may be regarded as a signal that interesting new physics can be expected at an energy scale of 10–100 TeV or lower. Acknowledgements We thank Ramesh Anishetty, Anjan Joshipura, Sandip Pakvasa and Xerxes Tata for discussions and criticisms. PPD would like to acknowledge the use of the facilities of the Institute of Mathematical Sciences, Madras and the Tata Institute of Fundamental Research, Bombay. References and Footnotes 1. Y.Fukuda et al. (Super-Kamiokande Collaboration), hep-ex/9803006 and hep-ex/9805006 ; T.Kajita, Talk at ‘Neutrino 98’, Takayama, Japan. 2. M.Gell-Mann, P.Ramond and R.Slansky, in Supergravity (Ed. P. Van Nieuwenhuizen and D.Z.Freedman, North Holland, Amsterdam, 1979), p.315 ; T.Yanagida, in Proc. of the Workshop on the Unified Theory and Baryon Number in the Universe (Ed. O.Sawada and A.Sugamoto) KEK Report No.79-18, Tsukuba, Japan, 1979. 3. 
This nonperturbative mechanism may have its origin in the as-yet-unsolved problem of infra-red divergences that afflict the unbroken phase of the nonabelian gauge theory. 4. J.C.Pati and A.Salam, Phys. Rev. D10, 275 (1974) ; R.N.Mohapatra and J.C.Pati, Ibid. 11, 566 (1975) ; 11, 2558 (1975) ; R.N.Mohapatra and G.Senjanovic, Ibid. 12, 1502, (1975) ; P.Langacker and S.Uma Sankar, Ibid. 40, 1569 (1989). 5. Review of Particle Physics : C.Caso et al. (Particle Data Group), European Physical Journal C3, 1 (1998). 6. E.Masso, Phys. Rev. Lett. 52, 1956 (1984). Figure Captions Generation of the Yukawa coupling of the neutrino, from that of the charged lepton with form factor denoted by the shaded vertex, through new physics represented by the four-fermion coupling of two types (a) and (b). (To each diagram, one must add a corresponding diagram with all fermion lines reversed). Type (a) coupling illustrated by scalar boson exchanges. Type (b) coupling illustrated by exchanges of $`W_1`$ and $`W_2`$ gauge bosons which are mixtures of $`W_L`$ and $`W_R`$.
# ITP-Budapest 547, UTCCP-P-60, UTHEP-397, hep-lat/9901021, Jan 1999. The endpoint of the first-order phase transition of the SU(2) gauge-Higgs model on a 4-dimensional isotropic lattice ## I Introduction The Minimal Standard Model predicts that the electroweak interaction undergoes a first-order phase transition at a finite temperature for light Higgs boson masses. A focus of recent studies has been whether the first-order phase transition survives with sufficient strength for a realistically heavy Higgs boson mass , since the feasibility of electroweak baryogenesis depends crucially on it. The first-order nature of the electroweak transition for light Higgs bosons can be shown within perturbation theory. However, perturbation theory breaks down for Higgs boson masses larger than about $`M_W`$ due to the bad infrared behavior of the gauge-Higgs part of the electroweak theory . Hence numerical simulation techniques are needed to analyze the nature of the transition for heavy Higgs bosons. Extensive studies in this direction have already been performed within the effective 3-dimensional theory approach, in which all non-static modes of the system are integrated out perturbatively. This approach has the advantage that the full Standard Model including fermions can be mapped onto a 3-dimensional SU(2) (or $`\text{SU(2)}\times \text{U(1)}`$) gauge-Higgs model, as there are no fermionic static modes at finite temperature. In addition, thinning out the degrees of freedom to those of a 3-dimensional theory significantly reduces the computational requirement. Results from simulations in this approach show that the first-order electroweak transition weakens as the Higgs boson mass increases , and that it turns into a continuous crossover for heavy Higgs bosons with a mass $`M_H\gtrsim M_W`$. Detailed studies of the endpoint of the first-order transition including its universality class have also been made. A potential problem with the 3-dimensional approach is that it relies on perturbation theory to derive the 3-dimensional action, so that numerical predictions may involve systematic errors due to the truncation of perturbative series. From this point of view a direct simulation of the 4-dimensional system is preferred. Results from 4-dimensional simulations provide a check on those of the 3-dimensional method. Early studies of the 4-dimensional SU(2) gauge-Higgs system were carried out in Refs. . More recently advances have been made with the use of space-time anisotropic lattices . This approach alleviates the double-scale problem that there are light modes with long wave length, $`\xi >>1/T`$, near the endpoint where the transition is of second order. In this article we report on a study of the endpoint of the SU(2) gauge-Higgs model employing 4-dimensional space-time symmetric lattices with the temporal lattice size $`N_t=2`$, building upon a previous work . Simulations have been carried out for a wide range of spatial lattice sizes, and a finite-size scaling study of Lee-Yang zeros is used to find the location of the endpoint. We measure the Higgs and W boson masses around the endpoint and estimate the value of the Higgs boson mass at the endpoint. This paper is organized as follows. In Section II we present the SU(2) gauge-Higgs model lattice action and outline our strategy for finding the endpoint through Lee-Yang zeros. In Section III, following a brief discussion of susceptibility analysis, Lee-Yang zeros are examined. Another approach to find the endpoint using the Binder cumulant is also described.
In Section IV we present results of the zero-temperature mass measurement. Together with our result for the scalar self-coupling constant at the endpoint obtained through Lee-Yang zero analysis, this leads to the value of the Higgs boson mass at the endpoint. Sec. V is devoted to conclusions. ## II Theory and Simulation We work with the standard SU(2) gauge-Higgs model action given by $$S=\underset{x}{}\left[\underset{\mu >\nu }{}\frac{\beta }{2}\mathrm{Tr}U_{x,\mu \nu }+\underset{\mu }{}2\kappa L_{x,\mu }\rho _x^2\lambda (\rho _x^21)^2\right],$$ (1) $$L_{x,\mu }\frac{1}{2}\mathrm{Tr}(\mathrm{\Phi }_x^{}U_{x,\mu }\mathrm{\Phi }_{x+\widehat{\mu }}),\rho _x^2\frac{1}{2}\mathrm{Tr}(\mathrm{\Phi }_x^{}\mathrm{\Phi }_x),$$ (2) where $`U_{x,\mu \nu }`$ is the product of link operators around a plaquette, $`\beta `$ is related to the tree-level gauge coupling as $`\beta =4/g^2`$, $`\kappa `$ represents the Higgs field hopping parameter and $`\lambda `$ is the scalar self-coupling. We put the system on a space-time isotropic lattice of a size $`N_t\times N_s^3`$. Finding the endpoint of the first-order finite-temperature phase transition of the model requires finite-size scaling analyses to quantitatively distinguish the case of a first-order transition from that of a crossover as the coupling parameters of the model are varied. As the main tool, we employ finite-size scaling analysis of Lee-Yang zeros on the complex $`\kappa `$ plane for fixed $`\beta `$ and $`\lambda `$ . For a first-order phase transition, the infinite volume limit of the zeros pinches the real $`\kappa `$ axis, while they stay away from it if there is no phase transition. We also supplement this method with analyses of susceptibility and Binder cumulant. Our finite-temperature simulations are carried out for the temporal lattice size $`N_t=2`$. For the spatial lattice size we take $`N_s^3=20^3,24^3,32^3,40^3,50^3`$ and $`60^3`$. The gauge coupling is fixed at $`\beta =8`$. For the scalar self-coupling we choose five values, $`\lambda =0.00075,0.001,0.00135,0.00145`$ and $`0.0017235`$, which covers the range of zero-temperature Higgs boson mass $`57M_H85`$GeV . For each value of $`\lambda `$ the scalar hopping parameter $`\kappa `$ is tuned to the vicinity of the pseudo critical point estimated by the peak position of the susceptibility of the Higgs field length squared $`\rho ^2`$. The updating algorithm is a combination of over-relaxation and heatbath methods , with the ratio of the two for the scalar part and the gauge part as specified in Ref. . We make at least $`10^5`$ iterations of this hybrid over-relaxation algorithm at each coupling parameter point for each lattice size. The list of coupling values and statistics we use in our finite-temperature simulations are listed in Table I. We also carry out zero-temperature simulations to measure the masses of Higgs and W bosons around the endpoint of the first-order phase transition. For these runs an improved algorithm of Ref. is employed. Details of the runs and results are discussed in Sec. IV. ## III Finite-temperature Results ### A Susceptibility Let us first look at the susceptibility of squared Higgs length, $$\chi _{\rho ^2}V\left(\rho ^2\rho ^2\right),$$ (3) where $`VN_s^3`$. The maximum value of the susceptibility at its peak, calculated by the standard reweighting technique as a function of $`\kappa `$, is plotted in Figure 1 against the spatial volume normalized by the critical temperature $`VT_c^3=N_s^3/N_t^3`$. 
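For readers who want to reproduce this step numerically, the following Python sketch illustrates a single-histogram reweighting of $`\chi _{\rho ^2}=V\left(\left(\rho ^2\right)^2\rho ^2^2\right)`$ in $`\kappa `$ and a scan for its peak. It is not the code used for this work; the arrays `rho2` and `hop` (the lattice-averaged $`\rho ^2`$ and the $`\kappa `$-conjugate hopping sum per configuration), the sign convention of the reweighting exponent, and the toy numbers in the example are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' code): single-histogram reweighting of the
# rho^2 susceptibility in kappa, chi = V( <(rho^2)^2> - <rho^2>^2 ), and a scan
# for its peak.  `rho2` and `hop` are hypothetical per-configuration series.
import numpy as np

def reweight_susceptibility(rho2, hop, V, kappa_sim, kappa_new):
    """Susceptibility at kappa_new, reweighted from an ensemble generated at kappa_sim."""
    logw = (kappa_new - kappa_sim) * hop
    w = np.exp(logw - logw.max())          # stabilize the exponentials
    w /= w.sum()
    m1 = np.sum(w * rho2)
    m2 = np.sum(w * rho2**2)
    return V * (m2 - m1**2)

def peak_of_susceptibility(rho2, hop, V, kappa_sim, half_width=2e-4, n=401):
    """Scan kappa around the simulation point; return (kappa_peak, chi_max)."""
    kappas = kappa_sim + np.linspace(-half_width, half_width, n)
    chis = np.array([reweight_susceptibility(rho2, hop, V, kappa_sim, k) for k in kappas])
    i = np.argmax(chis)
    return kappas[i], chis[i]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = 2 * 32**3                          # e.g. N_t = 2, N_s = 32
    hop = rng.normal(0.0, 50.0, size=20000)          # toy stand-in for real data
    rho2 = 1.0 + 1e-3 * hop + rng.normal(0.0, 0.01, size=20000)
    print(peak_of_susceptibility(rho2, hop, V, kappa_sim=0.1288))  # arbitrary toy kappa
```

The peak value returned by `peak_of_susceptibility` is the quantity that would be plotted against $`VT_c^3`$ in a figure such as Figure 1.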
Errors are estimated by the jackknife procedure with the bin size of $`10^3`$$`10^4`$ sweeps, which is listed in Table I. The slope for the smallest scalar coupling $`\lambda =0.00075`$ approaches unity for large volumes, which is consistent with a first-order transition, while that for the largest coupling $`\lambda =0.0017235`$ tends to a constant, showing an absence of a phase transition. A continuous decrease of the slope for the intermediate values of $`\lambda `$ indicates that the endpoint of the first-order transition is located in between the two extreme values. Our range of spatial volumes, unfortunately, is not sufficient to pin down the critical value of $`\lambda `$ from the susceptibility data. ### B Lee-Yang Zeros The determination of the endpoint of the finite temperature phase transition of the model, thus a characteristic feature of the phase diagram, is made by the use of the Lee-Yang zeros of the partition function $`Z`$ . Near the first-order phase transition point the partition function reads $`Z=Z_s+Z_b\mathrm{exp}(Vf_s)+\mathrm{exp}(Vf_b),`$ (4) where the indices s(b) refer to the symmetric (Higgs) phase and $`f`$ stands for the free-energy densities. Near the phase transition point we also have $`f_b=f_s+\alpha (\kappa \kappa _c),`$ (5) since the free-energy density is continuous. One then obtains $`Z\mathrm{exp}[V(f_s+f_b)/2]\mathrm{cosh}[V\alpha (\kappa \kappa _c)/2]`$ (6) which shows that for complex $`\kappa `$ $`Z`$ vanishes at $`\mathrm{Im}(\kappa )=2\pi (n1/2)/(V\alpha )`$ (7) for integer $`n`$. In case a first-order phase transition is present, these Lee-Yang zeros move to the real axis as the volume goes to infinity. If a phase transition is absent the Lee-Yang zeros stay away from the real $`\kappa `$ axis. Thus the way the Lee-Yang zeros move in this limit is a good indicator for the presence or absence of a first-order phase transition. Calculation of the partition function for complex values of $`\kappa `$ is made with the reweighting method in both imaginary and real directions of $`\kappa `$. In those cases where we have two ensembles with the same value of $`\lambda `$ and $`N_s`$, but different $`\kappa `$, we combine the two runs by setting the magnitude of the two partition functions to be equal at the midpoint between the two $`\kappa `$’s. In Fig. 2 we show the absolute value of the partition function normalized by its value at the real axis on the complex $`\kappa `$ plane, $$Z_{norm}(\kappa )\left|\frac{Z(\mathrm{Re}\kappa ,\mathrm{Im}\kappa )}{Z(\mathrm{Re}\kappa ,0)}\right|$$ (8) for $`\lambda =0.00075`$ and $`N_s=60`$. The contour line of this figure is shown in Figure 3. We observe three zeros in this case, whose distance from the real axis is roughly in the ratio $`1:3:5`$ as expected from (7) for a first-order transition. Let us call the zero nearest to the real axis as first zero, and denote its location by $`\kappa _0`$. We search for the first zero by the Newton-Raphson method applied to the equation $$Z(\mathrm{Re}\kappa ,\mathrm{Im}\kappa )=0,$$ (9) starting with an initial guess for $`\kappa _0`$ obtained from the contour plot of $`Z_{norm}(\kappa )`$. The error of $`\kappa _0`$ is estimated by the jackknife method with a bin size given in Table I, i.e., the zero search is repeated for the set of partition functions calculated from each jackknife sample of configurations, and the jackknife formula is applied to the set of $`\kappa _0`$. The results for $`\kappa _0`$ are given in Table I. We show in Fig. 
4 values of the imaginary part of the first zero $`\mathrm{Im}\kappa _0(V)`$ as a function of inverse volume. Finite-size scaling theory predicts that the volume dependence of the imaginary part of the first zero is given by a scaling form, $$\mathrm{Im}\kappa _0(V)=\kappa _0^c+CV^\nu .$$ (10) For a first-order phase transition, the infinite volume limit vanishes, $`\kappa _0^c=0`$, and the exponent takes the value $`\nu =1`$. In the absence of a phase transition, $`\kappa _0^c0`$ and the value of the exponent is generally unknown. In Fig. 5 we plot results for $`\kappa _0^c`$ as a function of $`\lambda `$ obtained by fitting the volume dependence of the first zero by the form (10) (see Fig. 4 for fit lines). Both $`\kappa _0^c`$ and $`\nu `$ are taken as fit parameters, and the entire set of volume $`N_s^3=20^360^3`$ is employed. Filled symbols mean that they are directly obtained from the simulations carried out at the corresponding values of $`\lambda `$. The points plotted with open symbols are obtained from the first zero of the partition function calculated by reweighting the partition function measured at the point where $`\kappa _0^c`$ with the filled symbol of the same shape is shown. The agreement of open symbols of different shapes within errors shows that reweighting from different values of $`\lambda `$ gives consistent results between the measured points. At small couplings $`\lambda 0.001`$, $`\kappa _0^c`$ is consistent with zero, which agrees with the result of Ref. that the transition is of first order in this region. At large couplings $`\lambda 0.0013`$, $`\kappa _0^c`$ no longer vanishes, and hence there is no phase transition. In order to determine the endpoint of the phase transition, we take the three filled points at $`\lambda =0.00135,0.00145`$ and $`0.0017235`$ directly obtained from independent simulations without $`\lambda `$-reweighting, and make a fit with a function linear in $`\lambda `$. This gives the position of the endpoint to be $$\lambda _c=0.00116(16).$$ (11) In Figure 6 we show the exponent of scaling function (10). The meanings of symbols are the same as in Figure 5. For $`\lambda >\lambda _c`$, where there is no phase transition, the exponent takes a value $`\nu 0.75`$. Below the endpoint $`\lambda <\lambda _c`$, the exponent shows some trend of increase, but not quite to the value $`\nu =1`$ expected for a first-order transition. We think that this is due to insufficient volume sizes used in our simulation, for which corrections to the leading $`1/V`$ behavior are not negligible. To check this point we make an alternative fit of results for the first zero adopting a quadratic ansatz in volume given by $$\mathrm{Im}\kappa _0(V)=\kappa _0^c+CV^1+DV^2,$$ (12) and show the results for $`\kappa _0^c`$ in Figure 7. Clearly the infinite volume limit $`\kappa _0^c`$ starts to deviate from zero around $`\lambda 0.001`$, which is consistent with the estimate of $`\lambda _c`$ above, albeit located at the lower end of the one standard deviation error band. We note that the quadratic ansatz (12), formally the first three terms of a Laurent series, is expected to be correct in case of a first-order phase transition, for which (7) describes the thermodynamic limit. However, it is not a valid assumption in the region of $`\lambda `$ where there is no phase transition. Therefore, unlike the case of Fig. 5, extrapolating the results of Fig. 7 from large to small values of $`\lambda `$ to estimate the location of the endpoint $`\lambda _c`$ is not justified. 
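A schematic implementation of the zero search and the finite-size scaling fit described above might look as follows. This is a hedged sketch, not the analysis code actually used; `hop` is the same assumed per-configuration $`\kappa `$-conjugate sum as in the earlier example, and the starting guess is taken from a contour plot of $`|Z|`$ as described in the text.

```python
# Schematic sketch: locate the Lee-Yang zero nearest to the real kappa axis from
# a reweighted partition function, then fit Im kappa_0(V) = kappa_0^c + C V^(-nu).
import numpy as np
from scipy.optimize import curve_fit

def z_and_derivative(hop, dkappa, h=1e-7):
    """Z(kappa_sim+dkappa)/Z(kappa_sim) and its kappa-derivative, both up to one
    common positive factor used only to stabilize the exponentials (it drops out
    of the Newton step and does not move the zeros)."""
    shift = np.real(dkappa * hop).max()
    f  = np.mean(np.exp(dkappa * hop - shift))
    fp = np.mean(np.exp((dkappa + h) * hop - shift))
    fm = np.mean(np.exp((dkappa - h) * hop - shift))
    return f, (fp - fm) / (2.0 * h)

def first_zero(hop, guess, tol=1e-12, maxit=100):
    """Complex Newton iteration for Z(kappa) = 0, started from a guess read off a
    contour plot of the normalized partition function."""
    z = complex(guess)
    for _ in range(maxit):
        f, df = z_and_derivative(hop, z)
        step = f / df
        z -= step
        if abs(step) < tol:
            break
    return z            # offset from kappa_sim; its imaginary part is Im(kappa_0)

def fss_fit(volumes, im_kappa0, errors):
    """Fit Im kappa_0(V) = kappa0c + C * V**(-nu)."""
    x = 1.0 / np.asarray(volumes, dtype=float)
    model = lambda x, kappa0c, C, nu: kappa0c + C * x**nu
    p0 = [0.0, im_kappa0[0] * volumes[0], 1.0]
    popt, pcov = curve_fit(model, x, im_kappa0, sigma=errors, p0=p0, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```

In this setup one would call `first_zero` once per ensemble and volume, collect the imaginary parts, and pass them to `fss_fit`; a fitted `kappa0c` compatible with zero signals a first-order transition, while a clearly nonzero value signals a crossover, exactly as in the discussion of Fig. 5.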
### C Binder Cumulant Let us consider the Binder cumulant (cf. ) of the space-like link operator, $$B_{L_s}(\kappa )1\frac{L_s^4}{3L_s^2^2};L_s=\frac{1}{3N_s^3N_t}\underset{x,\mu =1,2,3}{}L_{x,\mu }$$ (13) The infinite volume limit of the minimum of this quantity should deviate from 2/3 for a first-order phase transition, while it should converge to 2/3 beyond the endpoint. We evaluate the minimum of the cumulant as a function of $`\kappa `$ for a given $`\lambda `$ and volume using reweighting. We then use a scaling ansatz, $$B_{L_s}^{\mathrm{min}}=B_{L_s}^c+CV^\nu ,$$ (14) to extract the infinite-volume value $`B_{L_s}^c`$. In Fig. 8 we show $`(B_{L_s}^c2/3)`$ as a function of $`\lambda `$, where the meanings of symbols are the same as in Fig. 5. A change of behavior from non-vanishing values to those consistent with zero at $`\lambda 0.001`$ shows that the first-order phase transition terminates around this value. Linearly extrapolating the two independent data at $`\lambda =0.00075`$ and $`0.001`$ yields $`\lambda _c=0.00102(3)`$ for the endpoint, which is consistent with the result (11) from our study of Lee-Yang zeros. Note, however, that only two measured points are available for the linear extrapolation. Therefore we can not make a statement on the goodness of the fit. For this reason, we conservatively take the Lee-Yang value (11) as our best estimate of the endpoint. ## IV Critical Higgs Boson Mass To determine the physical parameters characterizing the endpoint, namely the ratio of the Higgs boson mass to the W boson mass and the renormalized gauge coupling $`g_R`$, we have to perform zero-temperature simulations. As in Refs. , we extract the Higgs boson mass $`m_H`$ in lattice units from correlators of $`\rho _x^2`$ and $`L_{x,\mu }`$. The W boson mass in lattice units $`m_W`$ is obtained from the correlator of the composite link fields $$W_x\underset{r,k=1}{\overset{3}{}}\frac{1}{2}\mathrm{Tr}(\tau _r\alpha _{x+\widehat{k}}^+U_{xk}\alpha _x),$$ (15) where $`\tau _r`$ is the Pauli matrix and $`\alpha _x`$ is the angle part of $`\mathrm{\Phi }_x`$ such that $`\mathrm{\Phi }_x\rho _x\alpha _x`$ with $`\alpha _x\mathrm{SU}(2)`$. Masses are extracted from the correlators fitting to a hyperbolic cosine plus a constant function. Simple uncorrelated least-square fits and correlated fits with eigenvalue smoothing proposed by Michael and McKerrell are used. The application of this method is discussed in detail in Ref. . The actual procedure of extracting the mass parameters is the following. First we determine the reasonable time intervals for fitting the correlator data. The guideline is to choose as large an interval as possible with reasonable $`\chi ^2`$/d.o.f. value. For this purpose correlated fits with eigenvalue smearing are used. We find this to be necessary since the data are strongly correlated for different time distances. Having fixed the fitting time interval, we next carry out uncorrelated fits. To perform this fit, we divide the data sample into subsamples, and estimate the errors of correlators from the statistical fluctuations of subsample averages. The best fit value of the masses is taken to be the number given by the uncorrelated fit. The value of the Higgs boson mass is obtained by fitting to a linear combination of the two different correlators for $`\rho _x^2`$ and $`L_{x,\mu }`$. The errors on the masses are determined by jackknife analyses over subsamples. 
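The following sketch illustrates the uncorrelated hyperbolic-cosine-plus-constant fit and the jackknife error over subsamples described above. It is only an illustration: variable names and the toy data are hypothetical, and the correlated fit with eigenvalue smoothing is not reproduced here.

```python
# Minimal sketch: uncorrelated fit of a correlator to A*cosh(m*(t - T/2)) + c on a
# fixed time interval, with the mass error from a jackknife over subsample averages.
# `corr_subsamples` is a hypothetical array of shape (n_subsamples, T).
import numpy as np
from scipy.optimize import curve_fit

def fit_mass(corr, T, tmin, tmax, p0=(1.0, 0.5, 0.0)):
    t = np.arange(tmin, tmax + 1)
    model = lambda t, A, m, c: A * np.cosh(m * (t - T / 2.0)) + c
    popt, _ = curve_fit(model, t, corr[tmin:tmax + 1], p0=p0, maxfev=20000)
    return popt[1]                         # the mass in lattice units

def jackknife_mass(corr_subsamples, T, tmin, tmax):
    n = corr_subsamples.shape[0]
    full = fit_mass(corr_subsamples.mean(axis=0), T, tmin, tmax)
    jk = np.array([fit_mass(np.delete(corr_subsamples, i, axis=0).mean(axis=0),
                            T, tmin, tmax) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((jk - jk.mean())**2))
    return full, err

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, m_true = 32, 0.45
    t = np.arange(T)
    clean = 0.8 * np.cosh(m_true * (t - T / 2)) + 0.05
    data = clean + rng.normal(0.0, 0.02, size=(40, T)) * np.sqrt(clean)   # toy subsamples
    print(jackknife_mass(data, T, tmin=5, tmax=27))
```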
The masses obtained by the correlated fits with eigenvalue smearing are in all cases well within the error bars of the uncorrelated fits. Our zero-temperature simulations are carried out at two points given by $`(\lambda ,\kappa =\kappa _c(\lambda ,N_t=2))`$ for $`\lambda =0.0011`$ and $`0.00125`$ employing several lattice sizes to examine finite-volume effects. The run parameters and results for masses are collected in Table II. The size of subsamples is typically 500 sweeps. Our results do not show significant volume dependence (see Fig. 9), except for the two smallest spatial volumes $`N_s=8^3`$, $`10^3`$ for which somewhat different values are obtained compared to those of other volumes. We then discard those results and take an average over the rest of the volumes. This yields the values given in Table III. Setting $`M_W=80`$ GeV, we obtain $`M_H`$ $`=`$ $`70.9\pm 1.1\text{GeV}(\lambda =0.0011)`$ (16) $`M_H`$ $`=`$ $`76.8\pm 1.1\text{GeV}(\lambda =0.00125).`$ (17) Making a linear interpolation to the critical value $`\lambda _c=0.00116(16)`$ from the Lee-Yang zero analysis, we find $$M_{H,c}=73.3\pm 6.4\text{GeV},$$ (18) where the error is dominated by that of $`\lambda _c`$. From measurements of Wilson loops we also determine the values of the renormalized gauge coupling $`g_R`$ using the method described in Refs. . The potential as a function of the distance $`R`$ is fitted by $$V(R)=\frac{A}{B}e^{MR}+C+DG(M,R,L_s),$$ (19) where $`G(M,R,L_s)`$ stands for lattice artifacts (cf. ). The potential is determined from the rectangular Wilson loops by fitting the time dependence with three exponentials. A stable fit is obtained in all cases. The potential is then fitted by (19) using all R values. Our results for the fit parameters and $`g_R^2`$ for various spatial size lattices are shown in Table IV. We see that $`g_R`$ is constant within errors. The averaged values are given in Table III. The values do agree within errors, showing that our simulations for the two $`\lambda `$ values correspond to the same renormalized gauge coupling. Therefore the linear extrapolation to $`\lambda _c`$ mentioned above is justified, since we use Higgs masses at equal renormalized gauge couplings. Finally, let us try to estimate the effect of fermions and the U(1) gauge boson on our result. We make this estimation through the perturbative expression for the parameter $`x=\lambda _3/g_3^2`$ of the dimensionally reduced model in terms of the physical parameters of the Standard Model . Using our results for the Higgs boson mass and the renormalized gauge coupling, we find $`x_c=0.121\pm 0.020`$ for the endpoint. Including the effect of fermions and the U(1) gauge boson, this value corresponds to $`M_{H,c}=80\pm 7`$ GeV. ## V Conclusions We have studied the endpoint of the finite-temperature first-order transition of the SU(2) gauge-Higgs model on a space-time isotropic lattice of a temporal extension $`N_t=2`$. The results from Lee-Yang zero and Binder cumulant analyses show that the first-order phase transition terminates at $`\lambda _c=0.00116(16)`$ and turns into a smooth crossover for $`\lambda >\lambda _c`$. Setting $`M_W=80`$ GeV our result for the critical Higgs boson mass is $`M_{H,c}=73.3\pm 6.4`$ GeV. This is consistent within error with the value $`M_{H,c}=74.6\pm 0.9`$ GeV obtained in a 4-dimensional anisotropic lattice simulation for the same temporal size. 
The same work also reported that the critical mass decreases for larger temporal size, and extrapolates to $`M_{H,c}=66.5\pm 1.4`$ GeV in the continuum limit. This value is consistent with the 3-dimensional result of $`66.2`$ GeV. Thus results from various methods, in three and four dimensions, agree well. For a comparison with the experimental lower bound $`M_H>87.9`$ GeV for the Higgs boson mass, we need to include the effects of fermions and the U(1) gauge boson. The good agreement of the critical mass between the four- and three-dimensional simulations noted above implies that this correction may be made perturbatively; doing so, we find $`M_{H,c}=80\pm 7`$ GeV for our $`N_t=2`$ simulation. This value is about 10% larger, albeit with a comparable error, than the result $`M_{H,c}=72.4\pm 1.7`$ GeV in the continuum limit obtained from a 4-dimensional anisotropic lattice study, possibly due to scaling violations. We also note that the 3-dimensional approach reported the values $`M_{H,c}=72.4\pm 0.9`$ GeV and $`M_{H,c}=72\pm 2`$ GeV. Combining all the available results, we conclude that electroweak baryogenesis within the Minimal Standard Model is excluded. ## Acknowledgements Part of this work was carried out while Z.F. was visiting KEK under the Foreign Researcher Program of the Ministry of Education. Part of the numerical calculations was performed on the VPP-500/30 at the Information Processing Center of the University of Tsukuba and on the PMS-11G PC-farm in Budapest. This work is supported in part by Grants-in-Aid of the Ministry of Education of Japan (Nos. 09304029, 10640246), Hungarian Science Foundation grants (No. OTKA-T016240/T022929) and Hungarian Ministry of Education grant (No. FKP-0128/1997).
# KEK-CP-082KEK Preprint 98-217January 1999 Non-perturbative determination of quark masses in quenched lattice QCD with the Kogut-Susskind fermion action ## Abstract We report results of quark masses in quenched lattice QCD with the Kogut-Susskind fermion action, employing the Reguralization Independent scheme (RI) of Martinelli et al. to non-perturbatively evaluate the renormalization factor relating the bare quark mass on the lattice to that in the continuum. Calculations are carried out at $`\beta =6.0`$, $`6.2`$, and $`6.4`$, from which we find $`m_{ud}^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=4.23(29)\mathrm{MeV}`$ for the average up and down quark mass and, with the $`\varphi `$ meson mass as input, $`m_s^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=129(12)\mathrm{MeV}`$ for the strange mass in the continuum limit. These values are about 20% larger than those obtained with the one-loop perturbative renormalization factor. The values of quark masses are fundamental parameters of the Standard Model which are not directly accessible through experimental measurements. Lattice QCD allows their determination through a calculation of the functional relation between quark masses and hadron masses. For this reason a number of lattice QCD calculations have been carried out to evaluate quark masses, employing the Wilson, clover or Kogut-Susskind (KS) fermion action . An important ingredient in these calculations is the renormalization factor relating the bare lattice quark mass to that in the continuum. While perturbation theory is often used to evaluate this factor, uncertainties due to higher order terms are quite significant in the range of the QCD coupling constant accessible in today’s numerical simulations. A non-perturbative determination of the renormalization factor is therefore necessary for a reliable calculation of quark masses, and effort in this direction has recently been pursued for the Wilson and clover fermion actions . The need for a non-perturbative determination of the renormalization factor is even more urgent for the KS action since the one-loop correction is as large as 50% in present simulations. In this article we report a study to meet this need : we calculate the renormalization factor of bi-linear quark operators for the KS action non-perturbatively using the Reguralization Independent scheme (RI) of Ref. developed for the Wilson/clover actions. The results for the scalar operator, combined with our previous calculation of bare quark masses , lead to a non-perturbative determination of the quark masses in the continuum limit. In the RI sheme, the renormalization factor of a bi-linear operator $`𝒪`$ is obtained from the amputated Green function, $$\mathrm{\Gamma }_𝒪(p)=S(p)^10|\varphi (p)𝒪\overline{\varphi }(p)|0S(p)^1$$ (1) where the quark two-point function is defined by $`S(p)=0|\varphi (p)\overline{\varphi }(p)|0`$. The quark field $`\varphi (p)`$ with momentum $`p`$ is related to the one-component KS field $`\chi (x)`$ by $`\varphi _A(p)=_y\mathrm{exp}(ipy)\chi (y+aA)`$, where $`y_\mu =2an_\mu `$, $`p_\mu =2\pi n_\mu /(aL)`$ with $`L/4n_\mu <L/4`$ and $`A_\mu =0,1`$. Bi-linear operators have a form $$𝒪=\underset{yABab}{}\overline{\varphi }_A^a(y)\overline{(\gamma _S\xi _F)}_{AB}U_{AB}^{ab}(y)\varphi _B^b(y)$$ (2) where $`\overline{(\gamma _S\xi _F)}`$ refers to Dirac ($`\gamma _S`$) and KS flavor ($`\xi _F`$) structure , and the indices $`a`$ and $`b`$ refer to color. 
The factor $`U_{AB}^{ab}(y)`$ is the product of gauge link variables along a minimum path from $`y+aA`$ to $`y+aB`$. We note that $`U_{AB}(y)`$ is absent for scalar and pseudo scalar operators as these operators are local. The renormalization condition imposed on $`\mathrm{\Gamma }_𝒪(p)`$ is given by $$Z_𝒪^{\mathrm{RI}}(p)Z_\varphi (p)=\mathrm{Tr}[P_𝒪^{}\mathrm{\Gamma }_𝒪(p)]$$ (3) where $`P_𝒪^{}=\overline{(\gamma _S^{}\xi _F^{})}`$ is the projector onto the tree-level amputated Green function. The wave function renormalization factor $`Z_\varphi (p)`$ can be calculated by the condition $`Z_V(p)=1`$ for the conserved vector current corresponding to $`\overline{(\gamma _\mu I)}`$. Since the RI scheme explicitly uses the quarks in external states, gauge fixing is necessary. We employ the Landau gauge throughout the present work. The relation between the bare operator on the lattice and the renormalized operator in the continuum takes the form, $$𝒪_{\overline{\mathrm{MS}}}(\mu )=U_{\overline{\mathrm{MS}}}(\mu ,p)Z_{\mathrm{RI}}^{\overline{\mathrm{MS}}}(p)/Z_𝒪^{\mathrm{RI}}(p)𝒪$$ (4) where $`U_{\overline{\mathrm{MS}}}(\mu ,p)`$ is the renormalization-group running factor in the continuum from momentum scale $`p`$ to $`\mu `$. We adopt the naive dimensional regularization (NDR) with the modified minimum subtraction scheme ($`\overline{\mathrm{MS}}`$) in the continuum. The factor $`Z_{\mathrm{RI}}^{\overline{\mathrm{MS}}}(p)`$ provides matching from the $`\mathrm{RI}`$ scheme to the $`\overline{\mathrm{MS}}`$ scheme. These two factors are calculated perturbatively in the continuum. For our calculation of the quark mass we apply the relation (4) in the scalar channel in the chiral limit, i.e, $`1/Z_m^{\mathrm{RI}}=Z_S^{\mathrm{RI}}`$. Our calculations are carried out in quenched QCD. Gauge configurations are generated with the standard plaquette action at $`\beta =6.0`$, $`6.2`$, and $`6.4`$ on an $`32^4`$ lattice. For each $`\beta `$ we choose three bare quark masses tabulated in Table I where the inverse lattice spacing $`1/a`$ is taken from our previous work . We calculate Green function for $`15`$ momentum in the range $`0.038553(ap)^21.9277`$. Quark propagators are evaluated with a source in momentum eigenstate. We find that the use of such a source results in very small statistical errors of $`O(0.1\%)`$ in Green functions. The RI method completely avoids the use of lattice perturbation theory. We do not have to introduce any ambiguous scale, such as $`q^{}`$ , to improve on one-loop results. An important practical issue, however, is whether the renormalization factor can be extracted from a momentum range $`\mathrm{\Lambda }_{\mathrm{QCD}}pO(1/a)`$ keeping under control the higher order effects in continuum perturbation theory, non-perturbative hadronization effects, and the discretization error on the lattice. These effects appear as $`p`$ dependence of the renormalization factor in (4), which should be absent if these effects are negligible. In Fig. 1 we compare the scalar renormalization factor $`Z_S^{\mathrm{RI}}(p)`$ with that for pseudo scalar $`Z_P^{\mathrm{RI}}(p)`$ for three values of bare quark mass $`am`$ at $`\beta =6.0`$. From chiral symmetry of the KS fermion action, we naively expect a relation $`Z_S^{\mathrm{RI}}(p)=Z_P^{\mathrm{RI}}(p)`$ for all momenta $`p`$ in the chiral limit. Clearly this does not hold with our result toward small momenta, where $`Z_P^{\mathrm{RI}}(p)`$ rapidly increases as $`m0`$, while $`Z_S^{\mathrm{RI}}(p)`$ does not show such a trend. 
To understand this result, we note that chiral symmetry of KS fermion leads to the following identities between the amputated Green function of the scalar $`\mathrm{\Gamma }_S(p)`$, pseudo scalar $`\mathrm{\Gamma }_P(p)`$, and the quark two-point function $`S(p)^1`$ : $`\mathrm{\Gamma }_S(p)={\displaystyle \frac{}{m}}S(p)^1`$ (5) $`\mathrm{\Gamma }_P(p)={\displaystyle \frac{1}{2m}}\left[\overline{(\gamma _5\xi _5)}S(p)^1+S(p)^1\overline{(\gamma _5\xi _5)}\right]`$ (6) We also find numerically that the quark two-point function can be well represented by $$S(p)^1\underset{\mu }{}\overline{(\gamma _\mu I)}\mathrm{\Sigma }_\mu ^{}(p)iC_\mu (p)+M(p)$$ (7) with two real functions $`C_\mu (p)`$ and $`M(p)`$, where $`\mathrm{\Sigma }_\mu ^{}(p)=\mathrm{cos}(ap_\mu )i\overline{(\gamma _\mu \gamma _5\xi _\mu \xi _5)}\mathrm{sin}(ap_\mu )`$. From (6), (7), and (3) we obtain the relations between the renormalization factors and $`M(p)`$, $`Z_S^{\mathrm{RI}}(p)Z_\varphi (p)=M(p)/m`$ (8) $`Z_P^{\mathrm{RI}}(p)Z_\varphi (p)=M(p)/m`$ (9) In Fig. 2 $`M(p)`$ in the chiral limit obtained by a linear extrapolation in $`m`$ is plotted. It rapidly dumps for large momenta, but largely increases toward small momenta. Combined with (9) this implies that $`Z_P^{\mathrm{RI}}(p)`$ diverges in the chiral limit for small momenta, which is consistent with the result in Fig. 1. The function $`M(p)`$ is related to the chiral condensate as follows : $$\varphi \overline{\varphi }=\underset{p}{}\mathrm{Tr}[S(p)]=\underset{p}{}\frac{M(p)}{_\mu C_\mu (p)^2+M(p)^2}$$ (10) A non-vanishing value of $`M(p)`$ for small momenta would lead to a non-zero value of the condensate. Therefore the divergence of $`Z_P^{\mathrm{RI}}(p)`$ near the chiral limit is a manifestation of spontaneous breakdown of chiral symmetry; it is a non-perturbative hadronization effect arising from the presence of massless Nambu-Goldstone boson in the pseudo scalar channel. While we do not expect the pseudo scalar meson to affect the scalar renormalization factor $`Z_S^{\mathrm{RI}}(p)`$, as indeed observed in the small quark mass dependence seen in Fig. 1, the above result raises a warning that $`Z_S^{\mathrm{RI}}(p)`$ may still be contaminated by hadronization effects for small momenta. In Fig. 3 we show the momentum dependence of $`Z_m(\mu ,p,1/a)U_{\overline{\mathrm{MS}}}(\mu ,p)Z_{\mathrm{RI}}^{\overline{\mathrm{MS}}}(p)Z_S^{\mathrm{RI}}(p)`$ which is the renormalization factor from the bare quark mass on the lattice to the renormalized quark mass at scale $`\mu `$ in the continuum. Here we set $`\mu =2\mathrm{G}\mathrm{e}\mathrm{V}`$ and use the three-loop formula for $`U_{\overline{\mathrm{MS}}}`$ and $`Z_{\mathrm{RI}}^{\overline{\mathrm{MS}}}`$. While $`Z_m(\mu ,p,1/a)`$ should be independent of the quark momentum $`p`$, our results show a sizable momentum dependence which is almost linear in $`(ap)^2`$ for large momenta (filled symbols in Fig. 3). For small momenta we consider that the momentum dependence arises from non-perturbative hadronization effects on the lattice and the higher order effects in continuum perturbation theory. It is very difficult to remove these effects from our results. Toward large momenta, however, these effects are expected to disappear. 
The linear dependence on $`(ap)^2`$, which still remains, should arise from the discretization error on the lattice, i.e., $$Z_m(\mu ,p,1/a)=m^{\overline{\mathrm{MS}}}(\mu )/m+(ap)^2Z_\mathrm{H}+O(a^4)$$ (11) with the constant $`Z_\mathrm{H}`$ corresponding to the mixing to dimension 5 operators on the lattice. This relation implies that, if we take a continuum extrapolation of $`Z_m(\mu ,p,1/a)m`$ at a fixed physical momentum $`p`$, the discretization error in $`Z_m`$ is removed. This procedure also removes the $`a^2`$ discretization error in the lattice bare quark mass $`m`$ itself reflecting that in hadron masses. The momentum $`p`$ should be chosen in the region where the linear dependence on $`(ap)^2`$ is confirmed in our results. This region starts from a similar value of $`p^23\mathrm{G}\mathrm{e}\mathrm{V}^2`$ for the three $`\beta `$ values, and extends to $`p^21.9/a^2`$, the highest momentum measured. Hence we are able to use only a rather narrow range $`3\mathrm{G}\mathrm{e}\mathrm{V}^2<p^2<6.6\mathrm{GeV}^2`$, the upper bound dictated by the value of $`1.9/a^2`$ for the largest lattice spacing at $`\beta =6.0`$. In Fig. 4 we show the continuum extrapolation for the averaged up and down quark mass at $`\mu =2\mathrm{G}\mathrm{e}\mathrm{V}`$. Filled circles are obtained for $`p=1.8\mathrm{GeV}`$ and squares for $`p=2.6\mathrm{GeV}`$ for which the value of $`Z_m`$ is obtained by a linear fit in $`(ap)^2`$ employing the filled points in Fig. 3. The bare quark mass is determined by a linear extrapolation of pseudo scalar meson mass squared in the Nambu-Goldstone channel $`\overline{(\gamma _5\xi _5)}`$ and that of vector meson mass in the $`VT`$ channel $`\overline{(\gamma _i\xi _i)}`$ to the physical point of $`\pi `$ and $`\rho `$ meson masses. We observe that the continuum extrapolation completely removes the momentum dependence of the quark mass at finite lattice spacings. Furthermore the values are substantially larger than those obtained with one-loop perturbation theory (open circles for $`q^{}=1/a`$ and squares for $`q^{}=\pi /a`$ ). Making a linear extrapolation in $`a^2`$, our final result in the continuum limit is $$m_{ud}^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=4.23(29)\mathrm{MeV}.$$ (12) where we adopt the value for $`p=2.6\mathrm{GeV}`$ since this is the largest momentum accessible and the momentum dependence is negligible. This value is about $`20\%`$ larger than the perturbative estimates : $`3.46(23)\mathrm{MeV}`$ for $`q^{}=1/a`$ and $`3.36(22)\mathrm{MeV}`$ for $`q^{}=\pi /a`$. We collect the values of renormalization factor and quark masses in Table II and III. Applying our renormalization factor to the strange quark mass, we obtain $`m_s^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=`$ $`106.0(7.1)\mathrm{MeV}`$ for $`m_K`$ (13) $`=`$ $`129(12)\mathrm{MeV}`$ for $`m_\varphi `$ (14) where we use $`K`$ or $`\varphi `$ meson mass to determine the bare strange mass. Results from perturbative estimation are given in Table IV. The CP-PACS Collaboration recently reported the results $`m_{ud}^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=4.6(2)\mathrm{MeV}`$, $`m_s^{\overline{\mathrm{MS}}}(2\mathrm{G}\mathrm{e}\mathrm{V})=115(2)\mathrm{MeV}(m_K)`$ and $`143(6)\mathrm{MeV}(m_\varphi )`$ from a large-scale precision simulation of hadron masses with the Wilson action. Our values are 10% smaller, which may be due to the use of one-loop perturbative renormalization factor in the CP-PACS analysis. 
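A minimal sketch of the two-step extrapolation just described is given below: a linear fit of $`Z_m`$ in $`(ap)^2`$ over the large-momentum points, evaluated at a fixed physical momentum $`p^{}`$, followed by a linear continuum extrapolation of the renormalized mass in $`a^2`$. The $`\overline{\mathrm{MS}}`$ matching and running are assumed to have been applied already in producing the $`Z_m`$ inputs; all numbers in the example are illustrative placeholders, not the values of Tables I–III.

```python
# Minimal sketch with hypothetical inputs: (i) at each lattice spacing, fit the
# large-momentum Z_m data linearly in (a p)^2 and evaluate at a fixed physical
# momentum p*, (ii) extrapolate m_MSbar(a) = Z_m(a, p*) * m_bare(a) linearly in a^2.
import numpy as np

def zm_at_fixed_p(ap2_values, zm_values, a_inv_GeV, p_star_GeV):
    """Linear fit Z_m = c0 + c1*(ap)^2 over the supplied points, evaluated at (a p*)^2."""
    c1, c0 = np.polyfit(ap2_values, zm_values, 1)
    return c0 + c1 * (p_star_GeV / a_inv_GeV) ** 2

def continuum_limit(a_inv_GeV, m_msbar_GeV):
    """Linear extrapolation of m_MSbar(a) in a^2 to a -> 0."""
    a2 = 1.0 / np.asarray(a_inv_GeV) ** 2
    c1, c0 = np.polyfit(a2, m_msbar_GeV, 1)
    return c0                               # value at a^2 = 0

if __name__ == "__main__":
    # illustrative numbers only (three lattice spacings, a few large momenta each)
    a_inv = [1.9, 2.6, 3.4]                 # GeV; stand-ins, not the measured scales
    m_bare = [2.4e-3, 1.7e-3, 1.3e-3]       # hypothetical bare masses in lattice units
    m_msbar = []
    for ainv, mb in zip(a_inv, m_bare):
        ap2 = np.array([0.9, 1.2, 1.5, 1.9])
        zm = 1.4 + 0.12 * ap2               # fake Z_m(p) data, linear in (ap)^2
        m_msbar.append(zm_at_fixed_p(ap2, zm, ainv, p_star_GeV=2.6) * mb * ainv)
    print(continuum_limit(a_inv, m_msbar))  # renormalized mass in GeV at a = 0
```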
This work is supported by the Supercomputer Project No.32 (FY1998) of High Energy Accelerator Research Organization (KEK), and also in part by the Grants-in-Aid of the Ministry of Education (Nos. 08640404, 09304029, 10640246, 10640248, 10740107, 10740125). S.K. and S.T. are supported by the JSPS Research Fellowship.
# Asymptotic temperature dependence of the superfluid density in liquid 4He ## I Introduction The behavior of <sup>3</sup>He and <sup>4</sup>He liquid on one hand, and of the ideal Fermi and Bose gas on the other hand, strongly suggests that there is an intimate relation between the $`\lambda `$ transition in <sup>4</sup>He and the Bose-Einstein condensation (BEC) of the ideal Bose gas (IBG). Because of the neglect of the interactions one will not expect that the IBG reproduces all properties of liquid <sup>4</sup>He, in particular not those properties that are directly related to the interactions (like the specific heat or the compressibility). There are, however, some basic properties of liquid <sup>4</sup>He that may be explained by the IBG (like the irrotational superfluid flow). We start by discussing a discrepancy between the IBG and liquid <sup>4</sup>He that is —as we will point out— disturbing in view of the suggested intimate relation between the BEC and the $`\lambda `$ transition. The critical behavior of the condensate fraction of the IBG is $$\frac{\rho _0}{\rho }|t|^{2\beta },\beta =\frac{1}{2},$$ (1) where $`t=(TT_\lambda )/T_\lambda `$ is the relative temperature; the IBG transition temperature is equated with that of the $`\lambda `$ transition. The condensate fraction is commonly identified with the superfluid fraction; this identification explains a number of experimental findings of which the most important one is that a superfluid current has no vortices. In contrast to Eq. (1), the experimental superfluid fraction behaves like $$\frac{\rho _\mathrm{s}}{\rho }|t|^{2\nu },\nu \frac{1}{3}.$$ (2) The suggested intimate connection between the BEC and the $`\lambda `$ transition is in conflict with $`\beta \nu `$. The values $`\beta =1/2`$ and $`\nu 1/3`$ imply $`\rho _0\rho _\mathrm{s}`$ just below the transition. The standard solution of the conflict between Eqs. (1) and (2) appears to be the renormalization-group method. In this approach one starts from a Ginzburg-Landau ansatz for the free energy (or enthalpy) that leads to the critical exponent $`1/2`$ for the order parameter. For the considered universality class the renormalization procedure yields then values near to $`1/3`$ for the critical exponent of the order parameter. This may well serve as an explanation of the experimental value of $`\nu 1/3`$ in Eq. (2) but it does not resolve the conflict between Eq. (1) and Eq. (2): A renormalization is appropriate for the Landau value $`\beta =1/2`$ but not for the IBG value $`\beta =1/2`$. The reason is that the IBG value is obtained by an exact evaluation of the partition sum. The exact evaluation of the partition sum implies a summation over arbitrarily small momenta (or, correspondingly, arbitrarily large distances). Therefore, the reasoning behind the renormalization procedure (analytic Ginzburg-Landau ansatz for finite regions or blocks, and subsequent transformation to larger and larger blocks) cannot be applied to the IBG free energy. Moreover, the critical exponent $`\beta =1/2`$ of the IBG cannot be changed within the IBG frame without destroying the mechanism leading to the BEC. The exponent $`\beta =1/2`$ is characteristic for the IBG with the BEC phase transition. We have now discussed two points: (i) We expect an intimate relation between the Bose-Einstein condensation and the $`\lambda `$ transition. 
(ii) The IBG value $`\beta =1/2`$ should be taken seriously (because it is a result of an exact evaluation of a partition sum, and because $`\beta 1/2`$ is not compatible with the BEC mechanism). From these two points we conclude the following: The theoretical $`\beta =1/2`$ in Eq. (1) and the experimental $`\nu 1/3`$ in Eq. (2) are in conflict with each other. Within the frame of the ideal Bose gas model we propose to resolve this conflict by the assumption that noncondensed particles move coherently with the condensate. This means that we no longer identify the condensate with the superfluid fraction; the condensate is only part of the superfluid phase. A coherent motion can be described by multiplying the real single particle functions of noncondensed particles by the complex phase factor of the condensate. The superfluid density $`\rho _\mathrm{s}`$ is then made up by the condensate density $`\rho _0`$ plus the density $`\rho _{\mathrm{coh}}`$ of the coherently comoving, low momentum noncondensed particles. This concept leads to an expression and eventually to a fit formula for the temperature dependence of the superfluid density. We will stick to the essential characteristic of the IBG (in particular the BEC) but introduce some modifications (for example, Jastrow factors) that are necessary for a realistic approach to liquid <sup>4</sup>He. This modified IBG is called almost ideal Bose gas model (AIBG). The AIBG has been introduced some years ago as an attempt to explain the (nearly) logarithmic singularity of the specific heat. Some consequences of the decomposition $`\rho _\mathrm{s}=\rho _0+\rho _{\mathrm{coh}}`$ have been discussed in Refs. and . The present paper is devoted to the investigation of the temperature dependence of the superfluid density in this model. The necessary details of the underlying model, the AIBG, will be given below. The form of the temperature dependence of the superfluid fraction is derived in Sec. II. This leads to a fit formula for the temperature dependence of $`\rho _s`$ that is applied to experimental data and compared to other fit formulas (Sec. III). Sec. IV discusses the temperature dependence of the condensate density. Section V presents scaling arguments on the basis of an effective Ginzburg-Landau model; this includes a qualitative explanation of the coherent comotion of noncondensed particles and leads to restrictions for some of the parameters of the fit formula. ## II AIBG form of the superfluid fraction ### A Many-body wave function Following Chester we multiply the IBG wave function $`\mathrm{\Psi }_{\mathrm{IBG}}`$ by Jastrow factors $`F=f_{ij}`$, $$\mathrm{\Psi }=F\mathrm{\Psi }_{\mathrm{IBG}}=\underset{i<j}{\overset{N}{}}f_{ij}(r_{ij})\mathrm{\Psi }_{\mathrm{IBG}}(𝐫_1,\mathrm{},𝐫_N;n_𝐤).$$ (3) We consider $`i=1,2,\mathrm{},N`$ atoms in a volume $`V`$. The occupation numbers $`n_𝐤`$ are parameters of the wave function; in physical quantities they are eventually replaced by their statistical expectation values $`n_𝐤`$. The Jastrow factors take into account the most important effects of the realistic interactions; with a suitable choice for the $`f_{ij}`$ (for example, $`f_{ij}(r)=\mathrm{exp}[(a/r)^b]`$ with $`a`$ and $`b`$ determined by a variational principle) the wave function (3) leads to a realistic pair-correlation function. The IBG wave function $`\mathrm{\Psi }_{\mathrm{IBG}}`$ in Eq. (3) is the symmetrized product of single-particle functions. 
We display this structure admitting at the same time a phase field $`\mathrm{\Phi }`$ of the condensate: $$\mathrm{\Psi }=𝒮F\left[\mathrm{exp}(\mathrm{i}\mathrm{\Phi })\right]^{n_0}\underset{𝐤\mathrm{\hspace{0.17em}0}}{}\left[\phi _𝐤\right]^{n_𝐤}.$$ (4) Here $`𝒮`$ denotes the symmetrization operator. The $`\phi _𝐤`$ are the real single-particle functions of the noncondensed particles. The schematic notation $`[\phi _𝐤]^{n_k}`$ stands for the product $`\phi _𝐤(𝐫_{\nu +1})\phi _𝐤(𝐫_{\nu +2})\mathrm{}\phi _𝐤(𝐫_{\nu +n_k})`$; this notation applies also to $`[\mathrm{exp}(\mathrm{i}\mathrm{\Phi })]^{n_0}`$. All $`n_0`$ condensed particles adopt the same phase factor $`\mathrm{exp}(\mathrm{i}\mathrm{\Phi }(𝐫))`$ forming the macroscopic wave function $$\psi (𝐫)=\sqrt{\frac{n_0}{V}}\mathrm{exp}\left[\mathrm{i}\mathrm{\Phi }(𝐫)\right].$$ (5) The phase field $`\mathrm{\Phi }`$ describes the coherent motion of the condensate particles. (Actually, one has to construct a suitable coherent state. This point is, however, not essential for the following discussion.) This motion is superfluid if the velocity $`𝐮_\mathrm{s}=\mathrm{}\mathrm{\Phi }/m`$ is sufficiently small. Equations (4) and (5) are a well-known description for a superfluid motion in the IBG. In this description the superfluid fraction $`\rho _\mathrm{s}/\rho `$ equals the condensate fraction $`n_0/N=\rho _0/\rho `$. The role of the Jastrow factors in this context will be discussed in Sec. II B. In order to dissolve the discrepancy between Eqs. (1) and (2) we assume that noncondensed particles move coherently with the condensate. This is possible if noncondensed particles adopt the macroscopic phase of the condensate: $$\mathrm{\Psi }=𝒮F\left[\mathrm{exp}(\mathrm{i}\mathrm{\Phi })\right]^{n_0}\underset{0<kk_{\mathrm{coh}}}{}\left[\phi _𝐤\mathrm{exp}(\mathrm{i}\mathrm{\Phi })\right]^{n_k}\underset{k>k_{\mathrm{coh}}}{}\left[\phi _𝐤\right]^{n_k}.$$ (6) We assume the phase ordering for all states with momenta below a certain coherence limit $`k_{\mathrm{coh}}`$. For the low lying states with $`n_k1`$ such phase ordering is relatively easy because it requires only a small entropy decrease. At this stage, $`k_{\mathrm{coh}}`$ should be considered as a model parameter. In Sec. V A the existence and the size of this coherence limit will be made plausible. We evaluate particle current for the wave function (6): $$𝐣_\mathrm{s}(𝐫,n_k)=\mathrm{\Psi }|\underset{n=1}{\overset{N}{}}\widehat{𝐣}_n+\mathrm{c}.\mathrm{c}.|\mathrm{\Psi }=\frac{\rho }{N}\frac{\mathrm{}}{m}\left(n_0+\underset{k<k_{\mathrm{coh}}}{}^{}n_k\right)\mathrm{\Phi }.$$ (7) The prime at the sum over the momenta $`k<k_{\mathrm{coh}}`$ means that the $`k=0`$ contribution is excluded. In coordinate space, the current operator reads $`𝐣_n=\mathrm{i}\mathrm{}_n/(2m)+\mathrm{c}.\mathrm{c}`$. It acts on all $`𝐫_i`$-dependences. Because of the added conjugate complex term all contributions from the real functions (the $`f_{ij}`$ in $`F`$ or the $`\phi _k`$) cancel. The only surviving terms are those where $`𝐣_n`$ acts on the phase $`\mathrm{\Phi }`$. For a superfluid motion with $`𝐮_\mathrm{s}=\mathrm{}\mathrm{\Phi }/m`$ and in the statistical average, $`𝐣_\mathrm{s}`$ of Eq. (7) equals $`\rho _\mathrm{s}𝐮_\mathrm{s}`$. We may then read off the superfluid fraction, $$\frac{\rho _s}{\rho }=\frac{1}{N}\left(n_0+\underset{k<k_{\mathrm{coh}}}{}^{}n_k\right)=\frac{\rho _0+\rho _{\mathrm{coh}}}{\rho }.$$ (8) This expression will be evaluated in Sec. II C. The ansatz (6) leading to Eq. 
(8) shows in which way noncondensed particles may contribute to the superfluid density. ### B Condensate density We discuss in some detail what is meant by the terminus “condensate density”, in particular with respect to the Jastrow factors in Eqs. (4) or (6). The exact condensate density may be defined by $$\mathrm{\Psi }|\widehat{\varphi }^+(𝐫)\widehat{\varphi }(𝐫^{})|\mathrm{\Psi }\stackrel{|𝐫𝐫^{}|\mathrm{}}{}\rho _0^{\mathrm{exact}},$$ (9) where $`\mathrm{\Psi }`$ is the exact many-body state and the $`\widehat{\varphi }^+`$ and $`\widehat{\varphi }`$ are single-particle creation and annihilation operators. For finite temperatures one has to take the statistical expectation value of $`\rho _0^{\mathrm{exact}}`$ (we do not introduce a different symbol). For an IBG wave function $`\mathrm{\Psi }_{\mathrm{IBG}}`$ the condensate density is given by $`\rho _0^{\mathrm{model}}=n_0/V`$. In the statistical average this this model condensate density becomes $$\rho _0^{\mathrm{model}}=\frac{n_0}{V}.$$ (10) The exact many-body state in Eq. (9) may be approximated by Eq. (3), or by $`\mathrm{\Psi }F`$ for the ground state. In this case the relation between both condensate densities is well known: The model condensate fraction is depleted by the Jastrow factors $`F`$, for example, from $`\rho _0^{\mathrm{model}}/\rho =1`$ to $`\rho _0^{\mathrm{exact}}/\rho 0.1`$ for $`T=0`$. The above calculation leading to Eq. (8) demonstrates the following point: In contrast to the density $`\rho _0^{\mathrm{model}}`$, the current density $`\rho _0^{\mathrm{model}}𝐮_\mathrm{s}`$ is not depleted. The reason is that in Eq. (7) all derivatives of the real Jastrow factors cancel (because of the added conjugate complex term). On the basis of this point we arrive at the following statements about the role of the densities $`\rho _0^{\mathrm{model}}`$, $`\rho _0^{\mathrm{exact}}`$, and $`\rho _\mathrm{s}`$. 1. Since the current density $`\rho _0^{\mathrm{model}}𝐮_\mathrm{s}`$ is not depleted we may identify $`\rho _0^{\mathrm{model}}`$ (and not $`\rho _0^{\mathrm{exact}}`$) with the square $`|\psi |^2`$ of the macroscopic wave function. Irrespective of the Jastrow factors we may use Eq. (5) as it stands. For a superfluid flow, the phase $`\mathrm{\Phi }(𝐫)`$ of the macroscopic wave function (5) fixes the velocity field $`𝐮_\mathrm{s}=\mathrm{}\mathrm{\Phi }/m`$. The basic relations for the superfluidity (like $`\mathrm{curl}𝐮_\mathrm{s}=0`$ and the Feynman-Onsager quantization rule) are not affected by the Jastrow factors in the many-body wave function. 2. The exact condensate density is a quantity of its own right. It is the density of the zero momentum particles in the liquid helium. Recently Wyatt reported about a rather clear experimental evidence for this condensate. For a review about the attempts to determine $`\rho _0^{\mathrm{exact}}`$ experimentally we refer to Sokol. 3. The assumption that noncondensed particles move coherently with the condensate is introduced by the step from Eq. (4) to Eq. (6). Again, this step does not alter the basic relations following from Eq. (5) (like $`𝐮_\mathrm{s}=\mathrm{}\mathrm{\Phi }/m`$, $`\mathrm{curl}𝐮_\mathrm{s}=0`$ and the Feynman-Onsager quantization rule). 4. For $`T=0`$ the value $`\rho _0^{\mathrm{model}}/\rho =1`$ yields $`\rho _\mathrm{s}/\rho =1`$ \[as we will see, $`\rho _{\mathrm{coh}}`$ in Eq. (8) contributes only in the vicinity of $`T_\lambda `$\]. 
In contrast to this the connection between $`\rho _0^{\mathrm{exact}}/\rho 0.1`$ with $`\rho _\mathrm{s}/\rho =1`$ is less obvious. For $`T0`$ the value $`\rho _0^{\mathrm{model}}/\rho 1`$ implies $`\rho _\mathrm{s}/\rho 1`$. For describing $`1\rho _\mathrm{s}/\rho `$ quantitatively one must however include phonons. This is not done in Eqs. (4) or (6) because our primary object is the asymptotic temperature region. We summarize this subsection: As far as the superfluid current is concerned the model condensate density is not depleted. The model condensate density is the fundamental constituent of the superfluid density. In the following the model condensate density $`\rho _0^{\mathrm{model}}`$ will again be denoted by $`\rho _0`$ and called condensate density. ### C Superfluid density We evaluate the expression (8) for the superfluid density. Our model assumes expectation values $`n_k`$ that are of the IBG form, $$n_k=\frac{1}{\mathrm{exp}[(ϵ_k\mu )/k_\mathrm{B}T]1}=\frac{1}{\mathrm{exp}(x^2+\tau ^2)1}.$$ (11) Here $`\mu `$ is the chemical potential, $`ϵ_k=\mathrm{}^2k^2/2m`$ are the single-particle energies, and $`k_\mathrm{B}`$ is Boltzmann’s constant. In the last expression we introduced the dimensionless quantities $`\tau ^2=\mu /k_\mathrm{B}T`$ and $$x=\frac{\lambda |𝐤|}{\sqrt{4\pi }},\text{with}\lambda =\frac{2\pi \mathrm{}}{\sqrt{2\pi mk_\mathrm{B}T}}.$$ (12) The transition temperature of the IBG is given by the following condition for the thermal wave length $`\lambda `$: $$\lambda (T_\lambda )=\left[v\zeta (3/2)\right]^{1/3},$$ (13) where $`\zeta (3/2)=2.6124`$ denotes Riemann’s zeta function. In applying our almost ideal Bose gas model (AIBG) to the real system we identify $`T_\lambda `$ with the actual transition temperature. In the following we use the relative temperature $$t=\frac{TT_\lambda }{T_\lambda }.$$ (14) We evaluate the condensate density: $$\frac{\rho _0}{\rho }=1\stackrel{}{}\frac{n_𝐤}{N}=1(1+t)^{3/2}\frac{g_{3/2}(\tau )}{\zeta (3/2)}.$$ (15) Riemann’s generalized zeta function is given by $`g_p(\tau )=_1^{\mathrm{}}\mathrm{exp}(n\tau ^2)/n^p`$, and $`\zeta (p)=g_p(0)`$. The chemical potential $`\mu `$ or, equivalently, $`\tau `$ may be expanded for $`|t|1`$: $$\tau (t)=\sqrt{\frac{\mu }{k_\mathrm{B}T}}=\{\begin{array}{ccc}at+bt^2+\mathrm{}\hfill & & (t>0)\hfill \\ a^{}|t|+b^{}t^2+\mathrm{}\hfill & & (t<0)\hfill \end{array}.$$ (16) For $`t>0`$ Eq. (15) with $`\rho _0/\rho =0`$ yields $`(1+t)^{3/2}g_{3/2}(\tau )=\zeta (3/2)`$. This condition determines the temperature dependence of $`\tau (t)`$ and in particular the coefficients $`a`$, $`b`$, …, for example $`a=3\zeta (3/2)/(4\pi ^{1/2})`$. For $`t<0`$ the IBG yields $`\tau =0`$. In the AIBG we admit nonvanishing coefficients $`a^{}`$, $`b^{}`$,… in Eq. (16). This makes the expansion (16) more symmetric; it corresponds to a phenonemological gap between the condensate level and the noncondensed particles. A coefficient $`a^{}0`$ does not affect the BEC as the most important feature of IBG. It avoids, however, the divergence of the static structure factor $`S(k)`$ for $`k0`$ and greatly improves the unrealistic ($`T^{3/2}`$) behavior of the specific heat. In view of the successful roton picture it is not too surprising that a gap is necessary for a quantitative description of the superfluid density (or of the specific heat). As we will see, a realistic description of liquid helium requires $`a^{}3`$; the next coefficient $`b^{}`$ will not be needed. From Eq. (15) and with Eq. 
(16) we obtain $$\frac{\rho _0}{\rho }=f|t|+gt^2+\mathrm{}(t<0)$$ (17) where $$f=\frac{3}{2}+\frac{2\sqrt{\pi }a^{}}{\zeta (3/2)}.$$ (18) We evaluate now the density of the comoving particles $$\frac{\rho _{\mathrm{coh}}}{\rho }=\underset{k<k_{\mathrm{coh}}}{}^{}\frac{n_k}{N}=\frac{4(1+t)^{3/2}}{\sqrt{\pi }\zeta (3/2)}_0^{x_{\mathrm{coh}}}\frac{x^2dx}{\mathrm{exp}(x^2+\tau ^2)1}.$$ (19) Using $`y/[\mathrm{exp}(y)1]=1y/2+y^2/12\mathrm{}`$ we obtain $$\frac{\rho _{\mathrm{coh}}}{\rho }=\frac{4(1+t)^{3/2}}{\sqrt{\pi }\zeta (3/2)}\left(x_{\mathrm{coh}}\tau \mathrm{arctan}\frac{x_{\mathrm{coh}}}{\tau }\frac{x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}}{6}+\frac{x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}5}}}{60}+\frac{x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}\tau ^2}{36}\pm \mathrm{}\right).$$ (20) The convergence of this expression is excellent; for the actual parameter values and for $`|t|0.1`$ the terms not shown are of the order $`10^8`$. We have not yet specified the coherence limit $`k_{\mathrm{coh}}`$. For $`|t|1`$ we will find $`\rho _\mathrm{s}\rho _{\mathrm{coh}}k_{\mathrm{coh}}`$ for the superfluid density and $`\rho _\mathrm{s}k_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}k_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}`$ for the kinetic energy of the fluctuations. Requiring that this kinetic energy scales with the free energy $`Fn_0^2t^2`$ yields $$k_{\mathrm{coh}}|t|^{2/3}.$$ (21) This scaling argument will be presented in more detail in Sec. V. Inserting Eq. (21) in Eq. (20) and using Eq. (17), the superfluid fraction contains the powers $`|t|^{2/3}`$, $`|t|`$, $`|t|^{4/3}`$, and so on: $$\frac{\rho _s}{\rho }=\frac{\rho _0+\rho _{\mathrm{coh}}}{\rho }=a_1|t|^{2/3}+a_2|t|+a_3|t|^{4/3}+\mathrm{}.$$ (22) ### D AIBG assumptions We summarize in which points the AIBG, the almost ideal Bose gas model, deviates from the IBG: 1. The IBG wave function $`\mathrm{\Psi }_{\mathrm{IBG}}`$ is multiplied by Jastrow factors, $`\mathrm{\Psi }=F\mathrm{\Psi }_{\mathrm{IBG}}`$. This is a well-known approach. 2. By the symmetric expansion (16) we admit a gap between the condensed and the noncondensed particles. This modification preserves the most basic features of the IBG, in particular the BEC mechanism and the critical exponent $`\beta =1/2`$. 3. The noncondensed single-particle states below the coherence limit $`k_{\mathrm{coh}}`$ adopt the macroscopic phase of the condensate. The leading exponent for the coherence limit $`k_{\mathrm{coh}}`$ is determined from a scaling argument. ## III Fit to experimental data We compare the temperature dependence of our model expression for the superfluid density with experimental data. The model expression contains unknown parameters; it provides a fit formula for the data. It will turn out that this fit formula is significantly better than comparable fit formulas. ### A Asymptotic temperature range #### 1973 data by Greywall and Ahlers A restriction to the first three terms in the expansion (22) yields a three-parameter model fit (MF) $$\frac{\rho _s}{\rho }=a_1|t|^{2/3}+a_2|t|+a_3|t|^{4/3}\text{(MF)}.$$ (23) Figure 1 shows that the MF yields an excellent reproduction of the data by Greywall and Ahlers for saturated vapor pressure. We used all data points with temperatures $`|t|0.03`$. For a minor improvement of the fit we shifted the temperature values by $`0.5\times 10^7`$; this is well below the experimental uncertainty of $`\delta t=2\times 10^7`$. The parameters of the fit shown in Fig. 
1 are: $$a_1=2.3233,a_2=1.0258,a_3=2.0065.$$ (24) As an alternative we consider the standard fit (SF) $$\frac{\rho _s}{\rho }=k|t|^\xi \left(1+D|t|^\mathrm{\Delta }\right)\text{(SF)},$$ (25) which is used by Greywall and Ahlers, and that is motivated by the renormalization-group theory. The fourth parameter $`\mathrm{\Delta }`$ is often set equal to 1/2 because the fit is not very sensitive to it. We will use the SF with $`\mathrm{\Delta }=0.5`$ as a three-parameter ansatz. The fit parameters are found by minimizing the sum $`\chi ^2`$ of the quadratic deviations, $$\chi ^2=\underset{i=1}{\overset{N_\mathrm{d}}{}}W\left[\left(\frac{\rho _\mathrm{s}}{\rho }\right)_{\mathrm{fit}}\left(\frac{\rho _\mathrm{s}}{\rho }\right)_{\mathrm{exp}}\right]^\text{2}=\underset{i=1}{\overset{N_\mathrm{d}}{}}\frac{1}{\sigma _{\mathrm{rel}}^2}\left[\frac{(\rho _\mathrm{s}/\rho )_{\mathrm{fit}}}{(\rho _\mathrm{s}/\rho )_{\mathrm{exp}}}1\right]^\text{2}.$$ (26) Here $`N_\mathrm{d}`$ is the number of data points and $`\sigma _{\mathrm{rel}}`$ is the relative standard deviation. The standard deviation $`\sigma `$ for $`(\rho _\mathrm{s}/\rho )_{\mathrm{fit}}(\rho _\mathrm{s}/\rho )_{\mathrm{exp}}`$ is given by $`W=1/\sigma ^2`$. The dominant experimental error is that in the temperature. This leads to the weight $`W=|t|^{2/3}/\delta |t|^2`$, where $`\delta |t|=\mathrm{max}(2\times 10^7,10^3|t|)`$ is the temperature uncertainty. The corresponding $`2\sigma _{\mathrm{rel}}`$ line is shown in Fig. 1. In the given form both MF and SF (with $`\mathrm{\Delta }=0.5`$) are three-parameter fits. We compare both fits by calculating their $`\chi ^2`$ ratio: $$\frac{\chi _{\mathrm{SF}}^2}{\chi _{\mathrm{MF}}^2}8.8(\text{data for }|t|0.03).$$ (27) As seen from Fig. 1 the MF reproduces the experimental data ($`\chi ^2/N_\mathrm{d}1.10`$). The large ratio (27) means that the SF does not reproduce the data in the considered temperature range. We remark that the SF fits the data in the considerably smaller range $`|t|0.004`$. This smaller range is used in Ref. , presumably because it was realized that the SF does not fit the data in the larger range. For a three-parameter fit the range $`|t|0.004`$ appears to be rather small; we note that already a one-parameter fit ($`a_1|t|^{2/3}`$) reproduces the data within 2% in the relatively large range $`|t|0.08`$. We considered also the data at higher pressures by Greywall and Ahlers. Here we found ratios $`\chi _{\mathrm{SF}}^2/\chi _{\mathrm{MF}}^2`$ between 1 and 2, and values of $`\chi _{\mathrm{MF}}^2/N_\mathrm{d}`$ in the range between 3.6 and 15. This means that the MF is only slightly better than the SF without yielding satisfactory fits. This is (at least partly) caused by jumps in the experimental data points. For example, compared to a smooth fit curve (SF or MF or any reasonable fit formula) there is a jump of more than ten standard deviations between the data points $`(|t|,\rho _\mathrm{s}/\rho )=(\mathrm{0.001\hspace{0.17em}439\hspace{0.17em}1},\mathrm{\hspace{0.17em}0.028\hspace{0.17em}144})`$ and $`(\mathrm{0.001\hspace{0.17em}263\hspace{0.17em}1},\mathrm{\hspace{0.17em}0.025\hspace{0.17em}624})`$ for $`P=7.27`$bar. #### 1993 data by Goldner, Mulders and Ahlers Newer measurements of the superfluid density are reported by Goldner et al. and by Marek et al.. We consider the data by Goldner et al. because these authors published an explicit data list. The data extend to about $`|t|=0.01`$; all these data are used for the fits. Fig. 
2 shows how the three-parameter MF reproduces these data. We discuss this result in a number of points: 1. Obviously the scatter of the data is generally larger than the estimated error (listed as $`\delta \rho _\mathrm{s}/\rho `$ in Ref. , and called $`\sigma `$ in Fig. 2). There are several jumps of the size of ten standard deviations; the most dominant jump (between the values for $`|t|=0.00031910`$ and $`|t|=0.00039793`$) is about 30 times larger than the estimated error. This statement is basically independent of the fit formula used (see also Fig. 3). It is extremely unlikely that the actual superfluid fraction contains such jumps. The different sizes of the jumps restrict the possibility to discriminate between various fit formulas. This is also the reason why we considered first the older 1973 data by Greywall and Ahlers. 2. The three-parameter SF yields a significantly larger $`\chi ^2`$ value: $$\frac{\chi _{\mathrm{SF}}^2}{\chi _{\mathrm{MF}}^2}2.7.$$ (28) 3. Goldner et al. used the following extended standard fit (ESF) $$\frac{\rho _s}{\rho }=k_0|t|^\xi \left(1+D|t|^\mathrm{\Delta }\right)\left(1+k_1|t|\right)\text{(ESF)}$$ (29) with $`\mathrm{\Delta }=1/2`$. Using the same parameters as in Ref. we obtained $`\chi _{\mathrm{ESF}}^2/\chi _{\mathrm{MF}}^21.4`$. This might appear as a small difference between MF and ESF. A comparison between Figs. 2 (MF) and 3 (ESF) shows, however, that the MF does a better job although it has one parameter less. Goldner et al. noted that there is a serious discrepancy between the ESF and the data, in particular in the range $`|t|10^5`$ to $`10^6`$ (their Fig. 17). The comparison between Fig. 2 and 3 shows that this discrepancy is significantly smaller for our model fit. This improvement is not so evident in the $`\chi ^2`$ ratio because the $`\chi ^2`$ values are on a high level for any fit formula (due to the jumps). 4. Looking at the scatter of the data one might tentatively assume a standard deviation that is five times larger than the one assumed. Drawing then a new $`2\sigma `$ line the discrepancies in the range $`|t|=10^5`$ to $`10^6`$ in Fig. 2 may be judged as not very significant. They may, however, hint at an unexplained structure in the temperature dependence of the superfluid fraction. ### B Extension to lower temperatures We apply the model expression for the superfluid fraction in the temperature range $`1.2\mathrm{K}<T<T_\lambda `$, where $`|t|`$ is no longer much smaller than 1. For this purpose we use the model expressions (15) and (19) for $`\rho _0`$ and $`\rho _{\mathrm{coh}}`$, respectively, and expand $`\tau `$ and $`x_{\mathrm{coh}}`$ (rather than $`\rho _\mathrm{s}`$ itself) into the relevant powers of $`|t|`$. The expansion $`\tau =a^{}|t|+b^{}t^2+\mathrm{}`$ may be broken off after the first term because $`\tau 0`$ corresponds to a gap and leads to an exponential decrease \[$`\mathrm{exp}(\tau ^2)`$\] of the noncondensed contribution in Eqs. (15) and (19). Therefore, the noncondensed contributions become rather small before the next terms in the expansion for $`\tau `$ contribute significantly. As far as the coherence limit $`k_{\mathrm{coh}}`$ is concerned we have no information about the continuation of Eq. (21) into an expansion. In view of the success of Eq. (23) we will certainly not admit exponents that would violate the form (22). In accordance with Eq. (22) we may admit the form $`x_{\mathrm{coh}}=x_1|t|^{2/3}+x_2|t|+x_3|t|^{4/3}+\mathrm{}`$. 
This expansion may be broken off, too, because $`\rho _{\mathrm{coh}}`$ of Eq. (19) is damped exponentially \[$`\mathrm{exp}(\tau ^2)`$\] for increasing $`|t|`$. Including the terms with the parameters $`x_1`$, $`x_2`$, and $`x_3`$ preserves the variability for the parameters $`a_1`$, $`a_2`$, and $`a_3`$ in Eq. (22). Due to the exponential damping of the noncondensed contributions a cut in the expansions for $`\tau `$ and $`x_{\mathrm{coh}}`$ leads much further than a cut in the expansion for $`\rho _\mathrm{s}/\rho `$ itself. In this way we arrive at the following unified model fit (UMF) formula: $$\frac{\rho _\mathrm{s}}{\rho }=1(1+t)^{3/2}\frac{g_{3/2}(\tau )}{\zeta (3/2)}+\frac{4(1+t)^{3/2}}{\sqrt{\pi }\zeta (3/2)}_0^{x_{\mathrm{coh}}}\frac{x^2dx}{\mathrm{exp}(x^2+\tau ^2)1}\text{(UMF)}$$ (30) with $$\tau =a^{}|t|,x_{\mathrm{coh}}=\mathrm{max}(0,x_1|t|^{2/3}+x_2|t|+x_3|t|^{4/3}).$$ (31) As we will see, this formula provides a unified description of the asymptotic region as well as of the less asymptotic (the “roton”) region. The parameters $`x_1`$, $`x_2`$, and $`x_3`$ are related to the $`a_1`$, $`a_2`$, and $`a_3`$ in Eq. (22) and essentially fixed by the asymptotic region. We have restricted $`x_{\mathrm{coh}}`$ explicitly to non-negative values because the expression $`x_1|t|^{2/3}+x_2|t|+x_3|t|^{4/3}`$ might become negative for larger $`|t|`$ values \[where, however, the density $`\rho _{\mathrm{coh}}`$ tends to zero anyway because the exponential decrease $`\mathrm{exp}(\tau ^2)`$; see also Fig. 5\]. For a fit in the range $`1.2\mathrm{K}<T<T_\lambda `$ we combined the data by Greywall and Ahlers for $`|t|<0.04`$ and that by Clow and Reppy (run IV) for $`|t|>0.04`$. At $`|t|=0.04`$ both data sets are compatible with each other. The systematic errors and the deviation due to slightly different pressures (roughly 1% between saturated vapor or normal pressure) just happen to cancel each other. Clow and Reppy remark that their “values of $`\rho _\mathrm{s}/\rho `$ have a scatter of about 1/2%”; we interpreted this as $`\sigma _{\mathrm{rel}}=0.005`$ for our fit \[i.e. for the minimization of Eq. (26)\]. A fit of the combined data leads to a result that is quite similar to Fig. 1 for $`|t|<0.04`$ and that is shown in Fig. 4 for $`|t|>0.04`$. The fit parameters are: $$a^{}=3.0380,x_1=2.6998,x_2=0.8063,x_3=3.9631\text{(UMF)}.$$ (32) Alternatively we may use the parameter $`f`$, Eq. (18), and calculate the parameters $`a_1`$, $`a_2`$, and $`a_3`$ following from the asymptotic expansion of Eq. (30): $$f=5.6225,a_1=2.3323,a_2=0.8035,a_3=0.4704.$$ (33) If an expansion is broken off as in Eq. (23) the last term tries effectively to simulate the missing terms. Since the UMF supplies higher-order terms it is not surprising that the last coefficients in Eqs. (24) and (33) are quite different. Alternatively we used the data by Tam and Ahlers that extend, however, only down to 1.5 K. This yields similar parameter values. The standard fit for temperatures above $`1\mathrm{K}`$ but excluding the asymptotic region is the two-parameter roton fit (RF), $$\frac{\rho _\mathrm{s}}{\rho }=\frac{A}{\sqrt{T}}\mathrm{exp}\left(\frac{\mathrm{\Delta }}{k_\mathrm{B}T}\right)\text{(RF)}.$$ (34) Using the data of Ref. (run IV) we obtain $$\frac{\chi _{\mathrm{RF}}^2}{\chi _{\mathrm{UMF}}^2}4\text{for }1.2\mathrm{K}<T<2.07\mathrm{K}.$$ (35) This ratio is reduced to 2 if we restrict the temperature by $`T<2\mathrm{K}`$. 
These ratios imply that the unified model expression is quite good for intermediate temperatures, too. The RF is based on Landau’s quasiparticle model that cannot be extended to $`T_\lambda `$ without loosing its physical basis. The standard description for the range $`1.2\mathrm{K}<T<T_\lambda `$ would be a combination of the SF (25) and the RF (34). In contrast to this, our model provides a unified fit (30) in this range. Although containing one parameter less (than the combination of SF and RF) this unified fit is superior to the standard description. As already mentioned, the expansion (16) implies a gap between the condensed and noncondensed particles. This gap appears to be essential for the reproduction of the data in the intermediate range $`T1\mathrm{K}`$. This gap should in some way be related to the roton gap $`\mathrm{\Delta }`$. This relation cannot be expected to be simple and obvious because one gap belongs to a model (Landau) for $`TT_\lambda `$ and the other to a model (AIBG) for $`TT_\lambda `$. We note that our gap vanishes for $`TT_\lambda `$, and that the roton concept becomes less sharp for increasing temperature (for $`T=1\mathrm{K}`$ the widths of roton states are already comparable to their energies). For $`TT_\lambda `$ Landau’s quasiparticle model is, of course, the right model. The model fit (30) yields still reasonable values for $`\rho _\mathrm{s}/\rho `$ but it must fail in the quantitative reproduction of $`1\rho _\mathrm{s}/\rho `$ because the phonons are not described by the wave function (6). ## IV Condensate density The unified model fit, Eq. (30) with Eq. (32), defines the decomposition of the superfluid density into the condensate density and the coherently comoving density. The temperature dependence of this decomposition is displayed in Fig. 5. In this section we discuss in particular the temperature dependence of the condensate density. The contribution of $`\rho _{\mathrm{coh}}`$ is decisive near $`T_\lambda `$ but negligible for lower temperatures. The comoving density $`\rho _{\mathrm{coh}}`$ carries some entropy because it does not correspond to a single quantum state. This entropy content is quite small because it is due to the lowest single-particle states with $`n_𝐤1`$. It is below the present experimental limits but should be detectable; for these points we refer to Refs. and . As shown in Sec. III B, the expression $`\tau =a^{}|t|`$ works quite well for fitting the data down to about 1.2 K. Inserting $`\tau =a^{}|t|`$ in Eq. (15) yields $$\frac{\rho _0}{\rho }=1(1+t)^{3/2}\frac{g_{3/2}(a^{}|t|)}{\zeta (3/2)}.$$ (36) Using $`a^{}`$ of Eq. (32), this temperature dependence is shown by the dashed line in Fig. 5. The asymptotic expansion of Eq. (36) reads $`\rho _0/\rho f|t|`$, Eq. (17), where $$f=\frac{3}{2}+\frac{2\sqrt{\pi }a^{}}{\zeta (3/2)}5.6.$$ (37) The numerical value is taken from Eq. (33). We found that $$\frac{\rho _0}{\rho }1\left(\frac{T}{T_\lambda }\right)^f.$$ (38) may be used as an approximation for Eq. (36). The maximum relative difference between Eqs. (38) and (36) is about 2%. For $`TT_\lambda `$ both expressions, (38) and (36), yield $`\rho _0/\rho f|t|`$. The right-hand side of Eq. (38) is an old fit formula for the superfluid fraction $`\rho _\mathrm{s}/\rho `$ (for example, Fig. 27 of Ref. ). In the framework of our model, this historic fit formula may be interpreted as the approximation $`\rho _\mathrm{s}\rho _0`$. The obvious shortcomings of Eq. 
(38) as an approximation for $`\rho _\mathrm{s}/\rho `$ are the following: (i) The neglect $`\rho _{\mathrm{coh}}`$ leads to a qualitatively wrong asymptotic behavior (difference between the full and the dashed line in Fig. 5). (ii) The step from Eq. (36) to Eq. (38) as well as the use of $`\tau =a^{}|t|`$ make the expression an approximate one already for $`\rho _0/\rho `$. (iii) For small temperatures $`1\rho _\mathrm{s}/\rho =(T/T_\lambda )^f`$ is quantitatively wrong (because the phonons have not been taken into account). We consider once more the exact condensate fraction $`\rho _0^{\mathrm{exact}}/\rho `$ introduced in Sec. II B. We denote its value at $`T=0`$ by $`n_\mathrm{c}`$. Assuming that the depletion of the condensate (from 1 to $`n_\mathrm{c}0.1`$) is temperature independent we obtain $$\frac{\rho _0^{\mathrm{exact}}}{\rho }n_\mathrm{c}\frac{\rho _0}{\rho }n_\mathrm{c}\left(1\frac{T^f}{T_\lambda ^f}\right).$$ (39) as an approximate expression for the temperature dependence of the exact condensate fraction. The experimental temperature dependence is given in Fig. 2 of Snow et al.. Within the relatively large experimental uncertainties the expression (39) agrees with the data. ## V Effective Ginzburg-Landau model In our approach, the macroscopic wave function (5), $$\psi (𝐫)=\sqrt{\frac{n_0}{V}}\mathrm{exp}\left[\mathrm{i}\mathrm{\Phi }(𝐫)\right]=\sqrt{\rho _0}\mathrm{exp}\left[\mathrm{i}\mathrm{\Phi }(𝐫)\right],$$ (40) plays the role of the order parameter. We investigate the free energy as a function of this order parameter. ### A Coherence limit We start by presenting a qualitative argument for the existence and the meaning of the coherence limit $`k_{\mathrm{coh}}`$. The macroscopic wave function $`\psi `$ may contain equilibrium and nonequilibrium excitations. A superfluid motion with $`𝐮_\mathrm{s}=(\mathrm{}/m)\mathrm{\Phi }`$ is a nonequilibrium excitation. At finite temperatures, there are thermal fluctuations of the order parameter, i.e., equilibrium excitations. We consider the average momentum of these fluctuations, $$k_{\mathrm{fluct}}=\overline{|\mathrm{\Phi }|}.$$ (41) The bar denotes the statistical average. The momentum $`k_{\mathrm{fluct}}`$ will be a function of the temperature. It is related to the correlation length $`\xi 1/k_{\mathrm{fluct}}`$. The single-particle states are described by real functions $`\phi _𝐤`$ in Eq. (4). We consider the possibility of phase fluctuations for a low-lying state with $`n_𝐤1`$, too. After the replacement $`\phi _𝐤\phi _𝐤\mathrm{exp}(\mathrm{i}\mathrm{\Phi }_𝐤)`$ in Eq. (4) these fluctuations may be described by the fields $`\mathrm{\Phi }_𝐤(𝐫)`$. Let us first assume that the additional phases vanish, $`\mathrm{\Phi }_𝐤=0`$. In this case, the average kinetic energy $`\mathrm{}^2k_{\mathrm{fluct}}^{\mathrm{\hspace{0.17em}2}}/2m`$ of a condensed particle would exceed that of a noncondensed particle with $`k<k_{\mathrm{fluct}}`$. The energy sequence of the single-particle states is, however, a prerequisite of the BEC; the condensate must be formed by the particles with the lowest energy. In order to preserve the energy sequence of the low-lying states we require the phase ordering $$\mathrm{\Phi }_𝐤(𝐫)=\mathrm{\Phi }(𝐫)\text{for}kk_{\mathrm{fluct}}.$$ (42) This argument does not apply to the states with higher momenta. 
By this qualitative argument we obtain the many-body wave function (6) with $$k_{\mathrm{coh}}=k_{\mathrm{fluct}}.$$ (43) ### B Free energy The statistical expectation value $`\rho _0|t|`$ can be obtained by minimizing the common Landau energy $`F_\mathrm{L}/V=Rt|\psi |^2+U|\psi |^4`$ (with regular coefficients $`R`$ and $`U`$). The fluctuation term $`F_{\mathrm{fluct}}/V=(\mathrm{}^2/2m)|\psi |^2`$ equals the kinetic energy density $`\rho _0𝐮^2/2`$ of the condensate only; here $`𝐮=(\mathrm{}/m)\mathrm{\Phi }`$. The phase coherence assumed in Eq. (6) implies that $`\rho _0𝐮^2`$ must be replaced by $`\rho _\mathrm{s}𝐮^2`$. This leads to the following effective Ginzburg-Landau ansatz $$\frac{F_{\mathrm{GL}}}{V}=\frac{F_{\mathrm{fluct}}+F_\mathrm{L}}{V}=\frac{\mathrm{}^2}{2m}\frac{\rho _s}{\rho _0}|\psi |^2+Rt|\psi |^2+U|\psi |^4.$$ (44) Assuming that the leading exponent of $`x_{\mathrm{coh}}`$ is not greater than 1, Eq. (20) yields $$\rho _{\mathrm{coh}}x_{\mathrm{coh}}.$$ (45) The equilibrium fluctuation term becomes then $$F_{\mathrm{fluct}}\rho _sk_{\mathrm{fluct}}^{\mathrm{\hspace{0.17em}2}}=(\rho _0+\rho _{\mathrm{coh}})k_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}\rho _0x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}+x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}.$$ (46) The asymptotic form of the Landau part of the free energy behaves like $$F_\mathrm{L}Rt\rho _0+U\rho _0^{\mathrm{\hspace{0.17em}2}}t^2.$$ (47) We require now scaling invariance. This means that $`F_{\mathrm{fluct}}`$ must have the same leading $`|t|`$ dependence as $`F_\mathrm{L}`$, i.e., $`\rho _0x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}+x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}t^2`$. From $`\rho _0x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}t^2`$ we would obtain $`x_{\mathrm{coh}}|t|^{1/2}`$ and $`x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}|t|^{3/2}`$ in contradiction to the scaling assumption. Therefore, scaling requires $`x_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}3}}|t|^2`$ or $$k_{\mathrm{coh}}|t|^{2/3}.$$ (48) This implies $`\rho _s\rho _{\mathrm{coh}}|t|^{2/3}`$ for the superfluid density and $`\xi 1/k_{\mathrm{fluct}}|t|^{2/3}`$ for the correlation length. The mass coefficient $`\rho _s/\rho _0|t|^{1/3}`$ in Eq. (44) is singular. Ginzburg and Sobyanin have introduced a comparable effective Ginzburg-Landau model with nonanalytic coefficients, too. In Ref. the nonanalytic coefficients (like $`|t|^{4/3}`$ for the $`|\psi |^2`$ term) are phenomenologically introduced in order to reproduce the right critical exponents. The divergent mass coefficient $`\rho _s/\rho _0|t|^{1/3}`$ damps the critical fluctuations such that Eq. (44) becomes scaling invariant. In this sense, the model (44) has properties similar to the common Ginzburg-Landau ansatz in $`d=4`$ dimensions. This means that Eq. (44) might be used down to $`|t|=0`$ and that the critical exponent of $`\rho _s`$ might be indeed exactly $`2/3`$. This possibility is supported by the excellent fit obtained for Eq. (23). ### C Further scaling restrictions The equilibrium Landau free energy contains integer powers of $`t`$ only: $$F_\mathrm{L}\mathrm{}t^2+\mathrm{}t^3+\mathrm{}t^4+\mathrm{}.$$ (49) The asymptotic form of the superfluid density (22) is compatible with the expansion $`x_{\mathrm{coh}}x_1|t|^{2/3}+x_2|t|+x_3|t|^{4/3}+\mathrm{}`$. 
In the fluctuation term this expansion will, however, in general lead to noninteger exponents: $$F_{\mathrm{fluct}}\rho _\mathrm{s}k_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}=(\rho _0+\rho _{\mathrm{coh}})k_{\mathrm{coh}}^{\mathrm{\hspace{0.17em}2}}\mathrm{}t^2+\mathrm{}|t|^{7/3}+\mathrm{}|t|^{8/3}+\mathrm{}t^3+\mathrm{}.$$ (50) Scaling for Eq. (44) implies also that the amplitudes of the nonanalytic terms vanish. This condition yields relations between the expansion parameters $`a^{}`$, $`b^{}`$, … and $`x_1`$, $`x_2`$, $`x_3`$, … that may also be expressed by the coefficients $`a_i`$ in Eq. (22). The condition of a vanishing amplitude of the $`|t|^{7/3}`$ term can be evaluated straightforwardly and yields $$x_2=\frac{\sqrt{\pi }\zeta (3/2)}{8}0.58\text{ or }a_2=1.$$ (51) These theoretical values compare well with the fitted values given in Eqs. (24) or (33). The condition of a vanishing amplitude of the $`|t|^{8/3}`$ term yields $$x_3=\frac{\pi [\zeta (3/2)]^2}{64x_1}\frac{a^2}{3x_1}$$ (52) and a corresponding expression for $`a_3`$. These relations are not fulfilled by the parameter values found in the fits. The reason is probably the following: The expansions (22) and (31) are cut after the $`|t|^{4/3}`$ term. In a fit it is then in particular the last term that tries to simulate the neglected terms. ## VI Concluding remarks We have modified the IBG in such a way that it might be applied to liquid helium. We summarize the novel views and main results of our approach. 1. The IBG value $`\beta =1/2`$ for the critical exponent of the condensate should be taken seriously. It is not subject to renormalization because it results from a calculation that already includes a summation over arbitrarily large lengths, and it is essential for the BEC mechanism. 2. The model condensate contributes fully to the superfluid density; it is not depleted by the Jastrow factors. 3. In order to reproduce the critical exponent $`\nu 1/3`$ of the superfluid density we have assumed that noncondensed particles below a certain momentum $`k_{\mathrm{coh}}`$ move coherently with the condensate. The coherence limit $`k_{\mathrm{coh}}`$ has been made plausible in Sec. V A. The contribution of noncondensed particles to the superfluid density offers a solution of the so-called macroscopic problem of liquid helium. This problem reads as follows: If the superfluid density corresponds to single quantum state ($`\rho _\mathrm{s}|\psi |^2`$) then the approach to an equilibrium state \[with $`\rho _\mathrm{s}=\rho _\mathrm{s}(T)`$\] cannot be understood. 4. We have derived a fit formula for the temperature dependence of the superfluid density. This fit formula reproduces the data significantly better than comparable expressions. This feature as well as qualitative scaling arguments suggest that the critical exponent $`\nu `$ of the superfluid density might be exactly equal to 2/3. 5. The temperature dependence of the decomposition of superfluid density into the model condensate density and the coherently comoving density is given. A simple formula for the temperature dependence of the depleted condensate density is presented.
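For readers who want to reproduce the superfluid-fraction curves, the unified model fit of Eqs. (30)–(31) can be evaluated directly. The script below is a minimal numerical sketch and is not part of the analysis above: the identification used for $`g_{3/2}(\tau )`$ is stated in the comments (an assumption about the notation of Eq. (36)), and the parameter values are entered as quoted in Eq. (32).

```python
# Minimal numerical sketch (not part of the paper): evaluation of the unified
# model fit (UMF), Eqs. (30)-(31).
# Assumption about notation: g_{3/2}(tau) is identified with
#   (4/sqrt(pi)) * integral_0^infinity  x^2 dx / (exp(x^2 + tau^2) - 1),
# which reduces to zeta(3/2) at tau = 0.
import numpy as np
from scipy.integrate import quad

ZETA32 = 2.612375348685488          # Riemann zeta(3/2)

def bose_integral(a, b, tau):
    """Integral of x^2 / (exp(x^2 + tau^2) - 1) over [a, b]."""
    return quad(lambda x: x**2 / np.expm1(x**2 + tau**2), a, b)[0]

def rho_s_over_rho(t, a_prime, x1, x2, x3):
    """Superfluid fraction from the UMF, Eq. (30), for t = T/T_lambda - 1 < 0."""
    at = abs(t)
    tau = a_prime * at                                           # Eq. (31)
    x_coh = max(0.0, x1 * at**(2/3) + x2 * at + x3 * at**(4/3))  # Eq. (31)
    pref = 4 * (1 + t)**1.5 / (np.sqrt(np.pi) * ZETA32)
    rho_0 = 1 - pref * bose_integral(0.0, np.inf, tau)   # model condensate, cf. Eq. (36)
    rho_coh = pref * bose_integral(0.0, x_coh, tau)      # comoving part, cf. Eq. (19)
    return rho_0 + rho_coh

# Parameter values as quoted in Eq. (32).
umf = dict(a_prime=3.0380, x1=2.6998, x2=0.8063, x3=3.9631)
for t in (-1e-4, -1e-3, -1e-2, -1e-1):
    print(f"t = {t:8.1e}   rho_s/rho = {rho_s_over_rho(t, **umf):.5f}")
```

The max(0, …) cut in $`x_{\mathrm{coh}}`$ mirrors Eq. (31), and by construction the expression interpolates between $`\rho _\mathrm{s}/\rho =1`$ at $`t=1`$ and $`\rho _\mathrm{s}/\rho =0`$ at the transition.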
# The symplectic Floer homology of the figure eight knot ## 1. Introduction In , we generalized the Casson-Lin invariant to the symplectic theory point of view. Our symplectic Floer homology of knots serves a new invariant for knots, and its Euler characteristic is half of the signature of knots. We showed that the symplectic Floer homology of the unknotted knot is trivial in . The natural question arises as whether there is a nontrivial knot with trivial symplectic Floer homology. We answer this question in this paper by computing the symplectic Floer homology of the figure eight knot. Although we know that the signature of the figure eight knot $`4_1=\overline{\sigma _1\sigma _2^1\sigma _1\sigma _2^1}`$ is zero, the signature does not suffice to give the information of our finer invariant - the symplectic Floer homology. For the square knot, we computed in that the symplectic Floer homology is nontrivial even though its signature is zero. Our main result is the following. Theorem The symplectic Floer homology of the figure eight knot $`4_1=\overline{\sigma _1\sigma _2^1\sigma _1\sigma _2^1}`$ is $$HF_i^{\text{sym}}(4_1)=CF_i^{\text{sym}}(4_1)=0,\text{for all }i𝐙_4.$$ To our knowledge, this is the first trivial symplectic Floer homology involving nontrivial information. It is still an open question about if there is a non-homotopy 3-sphere with trivial instanton Floer homology. We wish to build the relation between our symplectic Floer homology of knots and the instanton Floer homology of homology 3-spheres through the Dehn surgery technique. Using the calculation of the figure eight knot, we hope to find an example of non-homotopy 3-sphere with trivial instanton Floer homology. ## 2. The symplectic Floer homology ### 2.1. The symplectic Floer homology of braids We briefly recall our definition of the Floer homology of braids in this subsection. See for more details. For any knot $`K=\overline{\beta }`$ with $`\beta B_n`$, the braid group, the space $`(S^2K)^{[i]}`$ can be identified with the space of $`2n`$ matrices $`X_1\mathrm{},X_n`$, $`Y_1,\mathrm{},Y_n`$ in $`SU(2)`$ satisfying (1) $$\text{tr}(X_i)=\text{tr}(Y_i)=0,\text{for }i=1,\mathrm{},n,$$ (2) $$X_1X_2\mathrm{}X_n=Y_1Y_2\mathrm{}Y_n.$$ Note that $`\pi _1(S^2K)`$ is generated by $`m_{x_i},m_{y_i}(i=1,2,\mathrm{},n)`$ with one relation $`_{i=1}^nm_{x_i}=_{i=1}^nm_{y_i}`$. There is a unique reducible conjugacy class of representations $`s_K:\pi _1(S^3K)U(1)`$ such that $$s_K([m_{x_i}])=s_K([m_{y_i}])=\left[\begin{array}{cc}i& 0\\ 0& i\end{array}\right].$$ Let $`^{}(S^2K)^{[i]}`$ be the subset of $`(S^2K)^{[i]}`$ consisting of irreducible representations. Then $`^{}(S^2K)^{[i]}`$ is a monotone symplectic manifold of dimension $`4n6`$ by Lemma 2.3 in . The symplectic manifold $`(M,\omega )`$ is called monotone if $`\pi _2(M)=0`$ or if there exists a nonnegative $`\alpha 0`$ such that $`I_\omega =\alpha I_{c_1}`$ on $`\pi _2(M)`$, where $`I_\omega (u)=_{S^2}u^{}(\omega )𝐑`$ and $`I_{c_1}(u)=_{S^2}u^{}(c_1)𝐙`$ for $`u\pi _2(M)`$. The braid $`\beta `$ induces a diffeomorphism $`\varphi _\beta :^{}(S^2K)^{[i]}^{}(S^2K)^{[i]}`$. The induced diffeomorphism $`\varphi _\beta `$ is symplectic, and the fixed point set of $`\varphi _\beta `$ is $`^{}(S^3K)^{[i]}`$ (see Lemma 2.4 in ). Let $`H:^{}(S^2K)^{[i]}\times 𝐑𝐑`$ be a $`C^{\mathrm{}}`$ time-dependent Hamiltonian function with $`H(x,s)=H(\varphi _\beta (x),s+1)`$. 
Let $`X_s`$ be the corresponding vector field from $`\omega (X_s,)=dH_s(,s)`$, and $`\psi _s`$ be the corresponding flow $$\frac{d\psi _s}{ds}=X_s\psi _s,\psi _0=id.$$ Then we have $`\psi _{s+1}\varphi _\beta ^H=\varphi _\beta \psi _s`$, where $`\varphi _\beta ^H=\psi _1^1\varphi _\beta `$. Let $`\mathrm{\Omega }_{\varphi _\beta }`$ be the space of smooth paths $`\alpha `$ in $`^{}(S^2K)^{[i]}`$ such that $`\alpha (s+1)=\varphi _\beta (\alpha (s))`$. The symplectic action $`a_H:\mathrm{\Omega }_{\varphi _\beta }𝐑/2\alpha N𝐙`$ is given by $$da_H(\gamma )\xi =_0^1\omega (\dot{\gamma }X_s(\gamma ),\xi )𝑑s.$$ So the critical points of $`a_H`$ are the fixed points of $`\varphi _\beta ^H`$. For $`x\text{Fix}(\varphi _\beta ^H)`$, define $`\mu (x)=\mu _u(x,s)(mod2N)`$, where $`\mu _u`$ is the Maslov index and $`N=N(K)`$ is the minimal value of the first Chern number of the tangent bundle of $`^{}(S^2K)^{[i]}`$. The integer $`N(K)`$ is a knot invariant. Thus we have a $`𝐙_{2N}`$-graded symplectic Floer chain complex: $$CF_i^{\text{sym}}=\{x\text{Fix}(\varphi _\beta )^{}(S^2K)^{[i]}:\mu (x)=i\},i𝐙_{2N}.$$ The following is Proposition 4.1 and Theorem 4.2 of . ###### Theorem 2.1. For a knot $`K=\overline{\beta }`$ with the property that $`\pi _2(^{}(S^2K)^{[i]})=0`$ or $`\alpha N(K)=0`$, there is a well-defined $`𝐙`$-graded symplectic Floer homology $`HF_{}^{\text{sym}}(\varphi _\beta )`$. The symplectic Floer homology $`\{HF_i^{\text{sym}}(\varphi _\beta )\}_{i𝐙_{2N}}`$ is a knot invariant and its Euler number is half of the signature of the knot (see ). ### 2.2. The symplectic Floer homology of the figure eight knots The figure eight knot $`4_1`$ has the braid representative $`\sigma _1\sigma _2^1\sigma _1\sigma _2^1`$. The knot $`4_1`$ has signature zero since $`4_1`$ is equivalent (by an orientation preserving homeomorphism) to its mirror image $`\overline{4_1}`$. So the figure eight knot is amphicheiral. Also it is well-known that the figure eight knot is not a slice knot, and represents an element of order 2 in the knot cobordism group (see ). We calculate the symplectic Floer homology of the figure eight knot by identifying the fixed points of the induced symplectic diffeomorphism in §2.1. Let $`^{}(S^24_1)^{[i]}`$ be the subset of $`(S^24_1)^{[i]}`$ consisting of irreducible representations. Then $`^{}(S^24_1)^{[i]}`$ can be also identified with $`(H_3S_3)/SU(2)`$ in Lin’s notation , i.e., the set of 6-tuple $`(X_1,X_2,X_3,Y_1,Y_2,Y_3)SU(2)^6`$ satisfying $`\text{tr}(X_j)=\text{tr}(Y_j)=0(j=1,2,3)`$ and $$X_1X_2X_3=Y_1Y_2Y_3.$$ By operating the conjugation on $`X_3`$ and $`Y_3`$, we may assume that $$X_3=\left(\begin{array}{cc}i& 0\\ 0& i\end{array}\right),Y_3=\left(\begin{array}{cc}i\mathrm{cos}\theta & \mathrm{sin}\theta \\ \mathrm{sin}\theta & i\mathrm{cos}\theta \end{array}\right),0\theta \pi .$$ If $`\theta =0`$ and $`\pi `$, then we get two copies of $`(H_2S_2)/SU(2)`$ which is the pillow case (a 2-sphere with four cone points deleted ). For $`0<\theta <\pi `$, the identification reduces down to the following $$X_1X_2\left(\begin{array}{cc}\mathrm{cos}\theta & i\mathrm{sin}\theta \\ i\mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right)=Y_1Y_2.$$ Let $`R_\theta `$ be the representations in $`^{}(S^24_1)^{[i]}`$ satisfying the above equation. So the space $`R_\theta `$ is the non-singular piece in $`^{}(S^2K)^{[i]}`$. For $`0<\theta ,\theta ^{^{}}<\pi `$, the space $`R_\theta `$ is diffeomorphic to the space $`R_\theta ^{^{}}`$. 
In particular, they are all diffeomorphic to $`R_{\pi /2}`$. In this case, we see that $`^{}(S^24_1)^{[i]}`$ is a generalized pillow case: $$^{}(S^24_1)^{[i]}=\underset{0\theta \pi }{}R_\theta .$$ The fixed point set of $`\varphi _{4_1}`$ is $`^{}(S^34_1)^{[i]}`$ by Lemma 2.4 in . So we have, for $`\sigma =\sigma _1\sigma _2^1\sigma _1\sigma _2^1`$, $`\text{Fix}(\varphi _{4_1})=\{(X_1,X_2,X_3)SU(2)^3|\sigma (X_j)=X_j,j=1,2,3\}`$ up to conjugation. Let $`B_n`$ be the braid group of rank $`n`$ with the standard generators $`\sigma _1,\mathrm{},\sigma _{n1}`$, and $`F_n`$ be the free group of rank $`n`$ generated by $`x_1,\mathrm{},x_n`$. Then the automorphism of $`F_n`$ representing $`\sigma _k`$ is given by (still denote it by $`\sigma _k`$) (3) $`\sigma _k:`$ $`x_kx_kx_{k+1}x_k^1`$ $`x_{k+1}x_k`$ $`x_lx_l,lk,k+1.`$ By (3), we compute the followings. $`\sigma _1\sigma _2^1\sigma _1\sigma _2^1(x_1)`$ $`=\sigma _1\sigma _2^1\sigma _1(x_1^1)=\sigma _1\sigma _2^1(x_1x_2^1x_1^1)`$ $`=\sigma _1(x_1x_2x_3x_2^1x_1^1)`$ $`=(x_1x_2x_1^1)x_1x_3x_1^1(x_1x_2x_1^1)^1`$ $`=x_1x_2x_3x_2^1x_1^1.`$ $`\sigma _1\sigma _2^1\sigma _1\sigma _2^1(x_2)`$ $`=\sigma _1\sigma _2^1\sigma _1(x_2x_3^1x_2^1)`$ $`=\sigma _1\sigma _2^1(x_1x_3^1x_1^1)=\sigma _1(x_1^1x_2x_1)`$ $`=x_1x_2^1x_1x_2x_1^1.`$ $`\sigma _1\sigma _2^1\sigma _1\sigma _2^1(x_3)`$ $`=\sigma _1\sigma _2^1\sigma _1(x_2^1)=\sigma _1\sigma _2^1(x_1^1)`$ $`=\sigma _1(x_1)=x_1x_2x_1^1.`$ Therefore the fixed point set of $`\varphi _{4_1}`$ is the set of points $`(X_1,X_2,X_3)SU(2)^3`$ such that $`\text{tr}(X_j)`$ $`=0,j=1,2,3,`$ $`X_1X_2X_3X_2^1X_1^1`$ $`=X_1,`$ $`X_1X_2^1X_1X_2X_1^1`$ $`=X_2,`$ $`X_1X_2X_1^1`$ $`=X_3,`$ up to conjugation. Up to conjugation, we can assume that $$X_2=\left(\begin{array}{cc}i& 0\\ 0& i\end{array}\right),X_1=\left(\begin{array}{cc}i\mathrm{cos}\theta & \mathrm{sin}\theta \\ \mathrm{sin}\theta & i\mathrm{cos}\theta \end{array}\right),0\theta \pi .$$ From the last equation in the above, we obtain $`X_1X_2X_1^1`$ $`=\left(\begin{array}{cc}i\mathrm{cos}\theta & \mathrm{sin}\theta \\ \mathrm{sin}\theta & i\mathrm{cos}\theta \end{array}\right)\left(\begin{array}{cc}i& 0\\ 0& i\end{array}\right)\left(\begin{array}{cc}i\mathrm{cos}\theta & \mathrm{sin}\theta \\ \mathrm{sin}\theta & i\mathrm{cos}\theta \end{array}\right)`$ $`=\left(\begin{array}{cc}i\mathrm{cos}2\theta & \mathrm{sin}2\theta \\ \mathrm{sin}2\theta & i\mathrm{cos}2\theta \end{array}\right)=X_3.`$ So the matrix $`X_3`$ is completely determined by the parameter $`\theta [0,\pi ]`$. This is, in fact, a key to complete the calculation. 
Now substituting $`X_3`$ into the relation $`\sigma _1\sigma _2^1\sigma _1\sigma _2^1(X_1)=X_1`$, we have $`X_1X_2X_3^1X_2^1X_1^1`$ $`=\left(\begin{array}{cc}\mathrm{cos}\theta & i\mathrm{sin}\theta \\ i\mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right)\left(\begin{array}{cc}i\mathrm{cos}2\theta & \mathrm{sin}2\theta \\ \mathrm{sin}2\theta & i\mathrm{cos}2\theta \end{array}\right)\left(\begin{array}{cc}\mathrm{cos}\theta & i\mathrm{sin}\theta \\ i\mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right)`$ $`=\left(\begin{array}{cc}i\mathrm{cos}4\theta & \mathrm{sin}4\theta \\ \mathrm{sin}4\theta & i\mathrm{cos}4\theta \end{array}\right)=X_1.`$ This reduces to the equations (4) $$\mathrm{cos}4\theta =\mathrm{cos}\theta ,\mathrm{sin}4\theta =\mathrm{sin}\theta .$$ Similarly, we compute $$\sigma _1\sigma _2^1\sigma _1\sigma _2^1(X_2)=\left(\begin{array}{cc}i\mathrm{cos}3\theta & \mathrm{sin}3\theta \\ \mathrm{sin}3\theta & i\mathrm{cos}3\theta \end{array}\right)=X_2=\left(\begin{array}{cc}i& 0\\ 0& i\end{array}\right),$$ to get the equations (5) $$\mathrm{cos}3\theta =1,\mathrm{sin}3\theta =0.$$ Thus the fixed point of $`\varphi _{4_1}`$ can be identified with $$X_1=\left(\begin{array}{cc}i\mathrm{cos}\theta & \mathrm{sin}\theta \\ \mathrm{sin}\theta & i\mathrm{cos}\theta \end{array}\right),X_2=\left(\begin{array}{cc}i& 0\\ 0& i\end{array}\right),X_3=\left(\begin{array}{cc}i\mathrm{cos}2\theta & \mathrm{sin}2\theta \\ \mathrm{sin}2\theta & i\mathrm{cos}2\theta \end{array}\right),0\theta \pi ,$$ subject to equations (4) and (5). Using the equations (5) and the angle addition formulae for sine and cosine functions with $`4\theta =3\theta +\theta `$, (4) becomes (6) $$\mathrm{sin}\theta =0,\mathrm{cos}\theta =0.$$ There is no solution for (6). Hence (7) $$\text{Fix}(\varphi _{4_1})=\mathrm{}\text{(empty set)}.$$ ###### Theorem 2.2. The symplectic Floer homology of the figure eight knot $`4_1=\overline{\sigma _1\sigma _2^1\sigma _1\sigma _2^1}`$ is $$HF_i^{\text{sym}}(4_1)=CF_i^{\text{sym}}(4_1)=0,\text{for all }i𝐙_{2N}.$$ Proof: Since the $`𝐙_{2N}`$-graded symplectic Floer chain complex $`CF_i^{\text{sym}}(4_1)`$ is generated by $`\text{Fix}(\varphi _{4_1})`$, the result follows from (7). ∎ ### 2.3. The symplectic Floer homology of knots with braid representatives in $`B_3`$ It seems that the method in §2.2 can be adapted to knots with braid representatives in $`B_3`$. We are going to illustrate another example to show that the computation for the figure eight knot in §2.2 is quite lucky. Let $`K=5_2`$ be the knot with 5-crossings. We have the braid representative $`\sigma _1^2\sigma _2^2\sigma _1^1\sigma _2`$ for the knot $`5_2`$ (see ). Thus the fixed points of $`\varphi _{5_2}`$ can be identified, by the same method in §2.2, with the set of points $`(X_1,X_2,X_3)SU(2)^3`$ such that $`\text{tr}(X_j)`$ $`=0,j=1,2,3,`$ $`X_1X_2X_3X_1X_2^1X_1^1X_3^1X_2^1X_1^1`$ $`=X_1,`$ $`X_1X_2X_3^1X_1^2X_2^1X_1^1`$ $`=X_2,`$ $`X_1X_2X_1^1X_2^1X_1^1`$ $`=X_3,`$ up to conjugation. This follows a straightforward calculation of $`\sigma _1^2\sigma _2^2\sigma _1^1\sigma _2(x_j)(j=1,2,3)`$. Again we can compute $`X_3`$ from the last equation in the above. 
$$X_1X_2X_1^1X_2^1X_1^1=\left(\begin{array}{cc}i\mathrm{cos}3\theta & \mathrm{sin}3\theta \\ \mathrm{sin}3\theta & i\mathrm{cos}3\theta \end{array}\right)=X_3.$$ Then $`\sigma _1^2\sigma _2^2\sigma _1^1\sigma _2(X_j)=X_j(j=1,2)`$ gives us $`\sigma _1^2\sigma _2^2\sigma _1^1\sigma _2(X_1)`$ $`=\left(\begin{array}{cc}i\mathrm{cos}6\theta & \mathrm{sin}6\theta \\ \mathrm{sin}6\theta & i\mathrm{cos}6\theta \end{array}\right)=X_1`$ $`\sigma _1^2\sigma _2^2\sigma _1^1\sigma _2(X_2)`$ $`=\left(\begin{array}{cc}i\mathrm{cos}5\theta & \mathrm{sin}5\theta \\ \mathrm{sin}5\theta & i\mathrm{cos}5\theta \end{array}\right)=X_2.`$ Thus we need to solve the equations (8) $$\mathrm{cos}6\theta =\mathrm{cos}\theta ,\mathrm{sin}6\theta =\mathrm{sin}\theta ,\mathrm{cos}5\theta =1,\mathrm{sin}5\theta =0.$$ There are three solutions of (8) with $`\theta =\frac{\pi }{5},\frac{3\pi }{5},\pi `$. Let $`\rho _j(j=1,2,3)`$ be the corresponding fixed points of $`\varphi _{5_2}`$ in $`^{}(S^25_2)^{[i]}`$. By following the method in , for $`K=5_2`$, we have all type I double points so that the correction term $`\mu =0`$. Using the definition of Goeritz matrix in §1 of , we get the Goeritz matrix of $`5_2`$: $$G(5_2)=\left(\begin{array}{ccc}4& 3& 1\\ 3& 4& 1\\ 1& 1& 2\end{array}\right).$$ By the theorem 6 of , we have $$\text{Signature}(5_2)=\text{Signature}(G(5_2))\mu =2.$$ By Theorem 2.1, the Euler characteristic of the symplectic Floer homology of $`5_2`$ is one. ###### Proposition 2.3. The symplectic Floer chain complex of $`5_2`$ is given by: one of the odd chain groups is generated by one of $`\rho _j(j=1,2,3)`$; even chain groups are generated by the rest two fixed points of $`\varphi _{5_2}`$. It is nontrivial to determine the Maslov index of $`\rho _j`$ and the possible Floer boundary map in order to complete the calculation.
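The braid-word computations that feed into these fixed-point equations are easy to automate. The following is a minimal sketch, not part of the paper, of the action (3) of the braid generators on the free group $`F_3`$, applied to the figure eight braid $`\sigma _1\sigma _2^{-1}\sigma _1\sigma _2^{-1}`$; the convention assumed here is that the rightmost braid letter acts first and that $`\sigma _k^{-1}`$ acts by the inverse of the automorphism in (3).

```python
# Minimal sketch (not from the paper): the braid-group action (3) on the free
# group F_n, applied to the figure eight braid sigma_1 sigma_2^{-1} sigma_1 sigma_2^{-1}.
# Words are lists of (generator index, exponent +/-1); the rightmost braid
# letter is assumed to act first, and sigma_k^{-1} is the inverse of (3).

def reduce_word(word):
    """Free reduction: cancel adjacent x_i^{+1} x_i^{-1} pairs."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def invert(word):
    return [(g, -e) for g, e in reversed(word)]

def apply_sigma(word, k, inverse=False):
    """Image of a word under sigma_k (or sigma_k^{-1}), following Eq. (3)."""
    xk, xk1 = [(k, +1)], [(k + 1, +1)]
    if not inverse:
        images = {k: xk + xk1 + invert(xk),          # x_k -> x_k x_{k+1} x_k^{-1}
                  k + 1: xk}                         # x_{k+1} -> x_k
    else:
        images = {k: xk1,                            # x_k -> x_{k+1}
                  k + 1: invert(xk1) + xk + xk1}     # x_{k+1} -> x_{k+1}^{-1} x_k x_{k+1}
    out = []
    for g, e in word:
        img = images.get(g, [(g, +1)])               # generators not touched by sigma_k
        out += img if e == +1 else invert(img)
    return reduce_word(out)

def apply_braid(word, braid):
    """braid = [(k, +1 or -1), ...] read left to right; rightmost letter acts first."""
    for k, s in reversed(braid):
        word = apply_sigma(word, k, inverse=(s == -1))
    return word

def show(word):
    return " ".join(f"x{g}" if e == 1 else f"x{g}^-1" for g, e in word) or "1"

figure_eight = [(1, +1), (2, -1), (1, +1), (2, -1)]  # sigma_1 sigma_2^{-1} sigma_1 sigma_2^{-1}
for i in (1, 2, 3):
    print(f"beta(x{i}) =", show(apply_braid([(i, +1)], figure_eight)))
```

Evaluating the resulting words on the trace-free $`SU(2)`$ matrices parametrized by $`\theta `$ gives the fixed-point conditions discussed above; replacing the braid word accordingly handles the $`5_2`$ computation of Sec. 2.3 as well.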
# Untitled Document Definition of the Model: The NMSSM (Next-to-minimal SSM, or (M+1)SSM) is defined by the addition of a gauge singlet superfield $`S`$ to the MSSM. The superpotential $`W`$ is scale invariant, i.e. there is no $`\mu `$-term. Instead, two Yukawa couplings $`\lambda `$ and $`\kappa `$ appear in $`W`$. Apart from the standard quark and lepton Yukawa couplings, $`W`$ is given by $`W=\lambda H_1H_2S+{\displaystyle \frac{1}{3}}\kappa S^3+\mathrm{}`$ (1) and the corresponding trilinear couplings $`A_\lambda `$ and $`A_\kappa `$ are added to the soft susy breaking terms. The vev of $`S`$ generates an effective $`\mu `$-term with $`\mu =\lambda S`$. The constraint NMSSM (CNMSSM) is defined by universal soft susy breaking gaugino masses $`M_0`$, scalar masses $`m_0^2`$ and trilinear couplings $`A_0`$ at the GUT scale, and a number of phenomenological constraints: \- Consistency of the low energy spectrum and couplings with negative Higgs and sparticle searches. \- In the Higgs sector, the minimum of the effective potential with $`H_1`$ and $`H_20`$ has to be deeper than any minimum with $`H_1`$ and/or $`H_2=0`$. Charge and colour breaking minima induced by trilinear couplings have to be absent. (However, deeper charge and colour breaking minima in ”UFB” directions are allowed, since the decay rate of the physical vacuum into these minima is usually large compared to the age of the universe .) Cosmological constraints as the correct amount of dark matter are not imposed at present. (A possible domain wall problem due to the discrete $`Z_3`$ symmetry of the model is assumed to be solved by, e.g., embedding the $`Z_3`$ symmetry into a $`U(1)`$ gauge symmetry at $`M_{GUT}`$, or by adding non-renormalisable interactions which break the $`Z_3`$ symmetry without spoiling the quantum stability .) The number of free parameters of the CNMSSM, ($`M_{1/2}`$, $`m_0`$, $`A_0`$, $`\lambda `$, $`\kappa `$ \+ standard Yukawa couplings), is the same as in the CMSSM ($`M_{1/2}`$, $`m_0`$, $`A_0`$, $`\mu `$, $`B+idem`$). The new physical states in the CNMSSM are one additional neutral Higgs scalar and Higgs pseudoscalar, respectively, and one additional neutralino. In general these states mix with the corresponding ones of the MSSM with a mixing angle proportional to the Yukawa coupling $`\lambda `$. However, in the CNMSSM $`\lambda `$ turns out to be quite small, $`\lambda <\mathrm{\hspace{0.33em}0.1}`$ (and $`\lambda 1`$ for most allowed points in the parameter space) . Thus the new physical states are generally almost pure gauge singlets with very small couplings to the standard sector. Phenomenology of the CNMSSM: The new states in the Higgs sector can be very light, a few GeV or less, depending on $`\lambda `$ . Due to their small couplings to the $`Z`$ boson they will escape detection at LEP and elsewhere, i.e. the lightest “visible” Higgs boson is possibly the next-to-lightest Higgs of the NMSSM. The upper limits on the mass of this visible Higgs boson (and its couplings) are, on the other hand, very close to the ones of the MSSM, i.e. $`<\mathrm{\hspace{0.33em}140}`$ GeV depending on the stop masses . The phenomenology of sparticle production in the CNMSSM can differ considerably from the MSSM, depending on the mass of the additional state $`\stackrel{~}{S}`$ in the neutralino sector: If the $`\stackrel{~}{S}`$ is not the LSP, it will hardly be produced, and all sparticle decays proceed as in the MSSM with a LSP in the final state. 
If, on the other hand, the $`\stackrel{~}{S}`$ is the LSP, the sparticle decays will proceed differently: First, the sparticles will decay into the NLSP, because the couplings to the $`\stackrel{~}{S}`$ are too small. Only then the NLSP will realize that it is not the true LSP, and decay into the $`\stackrel{~}{S}`$ plus an additional cascade. The condition for a singlino LSP scenario can be expressed relatively easily in terms of the bare parameters of the CNMSSM: Within the allowed parameter space of the CNMSSM, the lightest non-singlet neutralino is essentially a bino $`\stackrel{~}{B}`$. Since the masses of $`\stackrel{~}{S}`$ and $`\stackrel{~}{B}`$ are proportional to $`A_0`$ and $`M_{1/2}`$, respectively, one finds, to a good approximation, that the $`\stackrel{~}{S}`$ is the true LSP if the bare susy breaking parameters satisfy $`|A_0|<\mathrm{\hspace{0.33em}0.4}M_{1/2}`$. Since $`A_0^2>\mathrm{\hspace{0.33em}9}m_0^2`$ is also a necessary condition within the CNMSSM, the singlino LSP scenario corresponds essentially to the case where the gaugino masses are the dominant soft susy breaking terms. Note, however, that the $`\stackrel{~}{B}`$ is not necessarily the NLSP in this case: Possibly the lightest stau $`\stackrel{~}{\tau }_1`$ is lighter than the $`\stackrel{~}{B}`$, since the lightest stau can be considerably lighter than the sleptons of the first two generations. Nevertheless, most sparticle decays will proceed via the $`\stackrel{~}{B}\stackrel{~}{S}+\mathrm{}`$ transition, which will give rise to additional cascades with respect to decays in the MSSM. The properties of this cascade have been analysed in , and in the following we will briefly discuss the branching ratios and the $`\stackrel{~}{B}`$ life times in the different parameter regimes: a) $`\stackrel{~}{B}\stackrel{~}{S}\nu \overline{\nu }`$: This invisible process is mediated dominantly by sneutrino exchange. Since the sneutrino mass, as the mass of $`\stackrel{~}{B}`$, is essentially fixed by $`M_{1/2}`$ , the associated branching ratio varies in a predictable way with $`M_{\stackrel{~}{B}}`$: It can become up to 90% for $`M_{\stackrel{~}{B}}30`$ GeV, but decreases with $`M_{\stackrel{~}{B}}`$ and is maximally 10% for $`M_{\stackrel{~}{B}}>\mathrm{\hspace{0.33em}65}`$ GeV. b) $`\stackrel{~}{B}\stackrel{~}{S}l^+l^{}`$: This process is mediated dominantly by the exchange of a charged slepton in the s-channel. If the lightest stau $`\stackrel{~}{\tau }_1`$ is considerably lighter than the sleptons of the first two generations, the percentage of taus among the charged leptons can well exceed $`\frac{1}{3}`$. If $`\stackrel{~}{\tau }_1`$ is lighter than $`\stackrel{~}{B}`$, it is produced on-shell, and the process becomes $`\stackrel{~}{B}\stackrel{~}{\tau }_1\tau \stackrel{~}{S}\tau ^+\tau ^{}`$. Hence we can have up to 100% taus among the charged leptons and the branching ratio of this channel can become up to 100%. c) $`\stackrel{~}{B}\stackrel{~}{S}S`$: This two-body decay is kinematically allowed if both $`\stackrel{~}{S}`$ and $`S`$ are sufficiently light. (A light $`S`$ is not excluded by Higgs searches at LEP1, if its coupling to the $`Z`$ is too small .) However, the coupling $`\stackrel{~}{B}\stackrel{~}{S}S`$ is proportional to $`\lambda ^2`$, whereas the couplings appearing in the decays a) and b) are only of $`O(\lambda )`$. Thus this decay can only be important for $`\lambda `$ not too small. 
In , we found that its branching ratio can become up to 100% in a window $`10^3<\lambda <\mathrm{\hspace{0.33em}10}^2`$. Of course, $`S`$ will decay immediately into $`b\overline{b}`$ or $`\tau ^+\tau ^{}`$, depending on its mass. (If the branching ratio $`Br(\stackrel{~}{B}\stackrel{~}{S}S)`$ is substantial, $`S`$ is never lighter than $`5`$ GeV.) If the singlet is heavy enough, its $`b\overline{b}`$ decay gives rise to 2 jets with $`B`$ mesons, which are easily detected with $`b`$-tagging. In any case, the invariant mass of the $`b\overline{b}`$ or the $`\tau ^+\tau ^{}`$ system would be peaked at $`M_S`$, making this signature easy to search for. d) $`\stackrel{~}{B}\stackrel{~}{S}\gamma `$: This branching ratio can be important if the mass difference $`\mathrm{\Delta }M=M_{\stackrel{~}{B}}M_{\stackrel{~}{S}}`$ is small ($`<\mathrm{\hspace{0.33em}5}`$ GeV). Further possible final states like $`\stackrel{~}{B}\stackrel{~}{S}q\overline{q}`$ via $`Z`$ exchange have always branching ratios below 10%. (The two-body decay $`\stackrel{~}{B}\stackrel{~}{S}Z`$ is never important, even if $`\mathrm{\Delta }M`$ is larger than $`M_Z`$: In this region of the parameter space $`\stackrel{~}{\tau }_1`$ is always the NLSP, and thus the channel $`\stackrel{~}{B}\stackrel{~}{\tau }_1\tau `$ is always prefered.) The $`\stackrel{~}{B}`$ life time depends strongly on the Yukawa coupling $`\lambda `$, since the mixing of the singlino $`\stackrel{~}{S}`$ with gauginos and higgsinos is proportional to $`\lambda `$. Hence, for small $`\lambda `$ (or a small mass difference $`\mathrm{\Delta }M`$) the $`\stackrel{~}{B}`$ can be so long lived that it decays only after a macroscopic lenght of flight $`l_{\stackrel{~}{B}}`$. An approximate formula for $`l_{\stackrel{~}{B}}`$ (in meters) is given by $`l_{\stackrel{~}{B}}[m]210^{10}{\displaystyle \frac{1}{\lambda ^2M_{\stackrel{~}{B}}[GeV]}},`$ (2) and $`l_{\stackrel{~}{B}}`$ becomes $`>1`$ mm for $`\lambda <\mathrm{\hspace{0.33em}6}10^5`$. To summarize, the following unconventional signatures are possible within the CNMSSM, compared to the MSSM: a) additional cascades attached to the original vertex (but still missing energy and momentum): one or two additional $`l^+l^{}`$, $`\tau ^+\tau ^{}`$ or $`b\overline{b}`$ pairs or photons, with the corresponding branching ratios depending on the parameters of the model. b) one or two additional $`l^+l^{}`$ or $`\tau ^+\tau ^{}`$ pairs or photons with macroscopically displaced vertices, with distances varying from millimeters to several meters. These displaced vertices do not point towards the interaction point, since an additional invisible particle is produced. More details on the allowed branching ratios and life times can be found in , applications to sparticle production processes et LEP 2 are published in , and differential (spin averaged) cross sections of the $`\stackrel{~}{B}\stackrel{~}{S}`$ decay are available upon request. References U. Ellwanger, M. Rausch de Traubenberg, C. Savoy, Nucl. Phys. B 492 (1997) 21 U. Ellwanger, C. Hugonie, ”Constraints from Charge and Colour Breaking Minima in the (M+1)SSM”, in preparation C. Panagiotakopoulos, K. Tamvakis, hep-ph/9809475 S.A. Abel, Nucl. Phys. B 480 (1996) 55 U. Ellwanger, M. Rausch de Traubenberg, C. Savoy, Z. Phys. C 67 (1995) 665 U. Ellwanger, C. Hugonie, Eur. Phys. J. C5 (1998) 723 U. Ellwanger, C. Hugonie, hep-ph/9812427
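The flight-length formula (2) is easily turned into numbers. The sketch below is not part of the original note; the bino mass used is an illustrative assumption rather than a value taken from the text.

```python
# Minimal numerical sketch (not part of the original note): the bino flight
# length of Eq. (2), l[m] ~ 2e-10 / (lambda^2 * M_bino[GeV]).
M_BINO_GEV = 60.0    # illustrative assumption, not a value quoted in the text

def flight_length_m(lam, m_bino_gev=M_BINO_GEV):
    return 2e-10 / (lam**2 * m_bino_gev)

for lam in (1e-3, 1e-4, 6e-5, 1e-5, 1e-6):
    print(f"lambda = {lam:7.0e}   l ~ {flight_length_m(lam):9.2e} m")
```

For $`\lambda `$ near $`6\times 10^{-5}`$ and a bino mass of a few tens of GeV this reproduces the millimetre scale quoted above, while couplings of order $`10^{-6}`$ push the length of flight into the metre range of the displaced-vertex signatures.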
# Cherenkov radiation by neutrinos ## Cherenkov radiation by neutrinos Ara N. Ioannisian<sup>1,2</sup>, Georg G. Raffelt<sup>2</sup> $`1`$ Yerevan Physics Institute, Yerevan 375036, Armenia $`2`$ Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München, Germany ### Abstract We discuss the Cherenkov process $`\nu \nu \gamma `$ in the presence of a homogeneous magnetic field. The neutrinos are taken to be massless with only standard-model couplings. The magnetic field fulfills the dual purpose of inducing an effective neutrino-photon vertex and of modifying the photon dispersion relation such that the Cherenkov condition $`\omega <|𝐤|`$ is fulfilled. For a field strength $`B_{\mathrm{crit}}=m_e^2/e=4.41\times 10^{13}\mathrm{Gauss}`$ and for $`E=2m_e`$ the Cherenkov rate is about $`6\times 10^{11}\mathrm{s}^1`$. In many astrophysical environments the absorption, emission, or scattering of neutrinos occurs in dense media or in the presence of strong magnetic fields . Of particular conceptual interest are those reactions which have no counterpart in vacuum, notably the decay $`\gamma \overline{\nu }\nu `$ and the Cherenkov process $`\nu \nu \gamma `$. These reactions do not occur in vacuum because they are kinematically forbidden and because neutrinos do not couple to photons. In the presence of a medium or $`B`$-field, neutrinos acquire an effective coupling to photons by virtue of intermediate charged particles. In addition, media or external fields modify the dispersion relations of all particles so that phase space is opened for neutrino-photon reactions of the type $`12+3`$. If neutrinos are exactly massless as we will always assume, and if medium-induced modifications of their dispersion relation can be neglected, the Cherenkov decay $`\nu \nu \gamma `$ is kinematically possible whenever the photon four momentum $`k=(\omega ,𝐤)`$ is space-like, i.e. $`𝐤^2\omega ^2>0`$. Often the dispersion relation is expressed by $`|𝐤|=n\omega `$ in terms of the refractive index $`n`$. In this language the Cherenkov decay is kinematically possible whenever $`n>1`$. Around pulsars field strengths around the critical value $`B_{\mathrm{crit}}=m_e^2/e=4.41\times 10^{13}\mathrm{Gauss}`$. The Cherenkov condition is satisfied for significant ranges of photon frequencies. In addition, the magnetic field itself causes an effective $`\nu `$-$`\gamma `$-vertex by standard-model neutrino couplings to virtual electrons and positrons. Therefore, we study the Cherenkov effect entirely within the particle-physics standard model. This process has been calculated earlier in . However, we do not agree with their results. Our work is closely related to a recent series of papers who studied the neutrino radiative decay $`\nu \nu ^{}\gamma `$ in the presence of magnetic fields. Our work is also related to the process of photon splitting that may occur in magnetic fields as discussed, for example, in Refs. . Photons couple to neutrinos by the amplitudes shown in Figs. 1(a) and (b). We limit our discussion to field strengths not very much larger than $`B_{\mathrm{crit}}=m_e^2/e`$. Therefore, we keep only electron in the loop. Moreover, we are interested in neutrino energies very much smaller than the $`W`$\- and $`Z`$-boson masses, allowing us to use the limit of infinitely heavy gauge bosons and thus an effective four-fermion interaction (Fig. 1(c)). 
The matrix element has the form $$=\frac{G_F}{\sqrt{2}e}Z\epsilon _\mu \overline{\nu }\gamma _\nu (1\gamma _5)\nu (g_V\mathrm{\Pi }^{\mu \nu }g_A\mathrm{\Pi }_5^{\mu \nu }),$$ (1) where $`\epsilon `$ is the photon polarization vector and $`Z`$ its wave-function renormalization factor. For the physical circumstances of interest to us, the photon refractive index will be very close to unity so that we will be able to use the vacuum approximation $`Z=1`$. $`g_V=2\mathrm{sin}^2\theta _W+\frac{1}{2}`$ and $`g_A=\frac{1}{2}`$ for $`\nu _e`$, and $`g_V=2\mathrm{sin}^2\theta _W\frac{1}{2}`$ and $`g_A=\frac{1}{2}`$ for $`\nu _{\mu ,\tau }`$. Following Refs. $`\mathrm{\Pi }^{\mu \nu }`$ and $`\mathrm{\Pi }_5^{\mu \nu }`$ are $`\mathrm{\Pi }^{\mu \nu }(k)`$ $`=`$ $`{\displaystyle \frac{e^3B}{(4\pi )^2}}\left[(g^{\mu \nu }k^2k^\mu k^\nu )N_0(g_{}^{\mu \nu }k_{}^2k_{}^\mu k_{}^\nu )N_{}+(g_{}^{\mu \nu }k_{}^2k_{}^\mu k_{}^\nu )N_{}\right],`$ $`\mathrm{\Pi }_5^{\mu \nu }(k)`$ $`=`$ $`{\displaystyle \frac{e^3}{(4\pi )^2m_e^2}}\left\{C_{}k_{}^\nu (\stackrel{~}{F}k)^\mu +C_{}\left[k_{}^\nu (k\stackrel{~}{F})^\mu +k_{}^\mu (k\stackrel{~}{F})^\nu k_{}^2\stackrel{~}{F}^{\mu \nu }\right]\right\},`$ (2) here $`\stackrel{~}{F}^{\mu \nu }=\frac{1}{2}ϵ^{\mu \nu \rho \sigma }F_{\rho \sigma }`$, where $`F_{12}=F_{21}=B`$. The $``$ and $``$ decomposition of the metric is $`g_{}=\mathrm{diag}(,0,0,+)`$ and $`g_{}=gg_{}=\mathrm{diag}(0,+,+,0)`$. $`k`$ is the four momentum of the photon. $`N_0`$, $`N_{}`$,$`N_{}`$, $`C_{}`$ and $`C_{}`$ are functions on $`B`$,$`k_{}^2`$ and $`k_{}^2`$. They are real for $`\omega <2m_e`$, i.e. below the pair-production threshold. The four-momenta conservation constrains the photon emission angle to have the value $$\mathrm{cos}\theta =\frac{1}{n}\left[1+(n^21)\frac{\omega }{2E}\right],$$ (3) where $`\theta `$ is the angle between the emitted photon and incoming neutrino. It turns out that for all situations of practical interest we have $`|n1|1`$ . This reveals that the outgoing photon propagates parallel to the original neutrino direction. It is easy to see that the parity-conserving part of the effective vertex ($`\mathrm{\Pi }^{\mu \nu }`$) is proportional to the small parameter $`(n1)^21`$ and the parity-violating part ($`\mathrm{\Pi }_5^{\mu \nu }`$) is not. It is interesting to compare this finding with the standard plasma decay process $`\gamma \overline{\nu }\nu `$ which is dominated by the $`\mathrm{\Pi }^{\mu \nu }`$. Therefore, in the approximation $`\mathrm{sin}^2\theta _W=\frac{1}{4}`$ only the electron flavor contributes to plasmon decay. Here the Cherenkov rate is equal for (anti)neutrinos of all flavors. We consider at first neutrino energies below the pair-production threshold $`E<2m_e`$. For $`\omega <2m_e`$ the photon refractive index always obeys the Cherenkov condition $`n>1`$ . Further, it turns out that in the range $`0<\omega <2m_e`$ $`C_{}`$,$`C_{}`$ depend only weakly on $`\omega `$ so that it is well approximated by its value at $`\omega =0`$. 
For neutrinos which propagate perpendicular to the magnetic field, a Cherenkov emission rate can be written in the form $`\mathrm{\Gamma }{\displaystyle \frac{4\alpha G_F^2E^5}{135(4\pi )^4}}\left({\displaystyle \frac{B}{B_{\mathrm{crit}}}}\right)^2h(B)=2.0\times 10^9\mathrm{s}^1\left({\displaystyle \frac{E}{2m_e}}\right)^5\left({\displaystyle \frac{B}{B_{\mathrm{crit}}}}\right)^2h(B),`$ (4) where $$h(B)=\{\begin{array}{cc}(4/25)(B/B_{\mathrm{crit}})^4\hfill & \text{for }BB_{\mathrm{crit}}\text{,}\hfill \\ 1\hfill & \text{for }BB_{\mathrm{crit}}\text{.}\hfill \end{array}$$ (5) Turning next to the case $`E>2m_e`$ we note that in the presence of a magnetic field the electron and positron wavefunctions are Landau states so that the process $`\nu \nu e^+e^{}`$ becomes kinematically allowed. Therefore, neutrinos with such large energies will lose energy primarily by pair production rather than by Cherenkov radiation (for recent calculations see ). The strongest magnetic fields known in nature are near pulsars. However, they have a spatial extent of only tens of kilometers. Therefore, even if the field strength is as large as the critical one, most neutrinos escaping from the pulsar or passing through its magnetosphere will not emit Cherenkov photons. Thus, the magnetosphere of a pulsar is quite transparent to neutrinos as one might have expected. ### Acknowledgments It is pleasure to thanks the organizers of the Neutrino Workshop at the Ringberg Castle for organizing a very interesting and enjoyable workshop.
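To put the transparency statement on a quantitative footing, the numerical form of Eq. (4) can be converted into a mean path length $`c/\mathrm{\Gamma }`$. The sketch below is not part of the original text; the interpolation used for $`h(B)`$ between the two limits of Eq. (5) is an assumption made only for this illustration.

```python
# Minimal numerical sketch (not part of the paper): the Cherenkov rate in the
# numerical form of Eq. (4), Gamma ~ 2.0e-9 s^-1 (E/2m_e)^5 (B/B_crit)^2 h(B),
# and the corresponding mean path length c/Gamma.  h(B) is interpolated here
# as min(1, (4/25)(B/B_crit)^4), an assumption for illustration only.
C_M_PER_S = 2.998e8

def h(b):                       # b = B / B_crit
    return min(1.0, 0.16 * b**4)

def cherenkov_rate(e_over_2me, b):
    return 2.0e-9 * e_over_2me**5 * b**2 * h(b)

for b in (0.1, 1.0, 10.0):
    gamma = cherenkov_rate(1.0, b)          # E = 2 m_e
    print(f"B/B_crit = {b:5.1f}   Gamma ~ {gamma:8.2e} /s   c/Gamma ~ {C_M_PER_S / gamma:8.2e} m")
```

Even for fields of order $`B_{\mathrm{crit}}`$ the resulting path length exceeds the tens-of-kilometers extent of a pulsar magnetosphere by many orders of magnitude, in line with the conclusion above.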
# DIQUARKS AS EFFECTIVE PARTICLES IN HARD EXCLUSIVE SCATTERING 11footnote 1Talk given by W. Schweiger at the “International Conference on Nuclear and Particle Physics with CEBAF at Jefferson Lab”, Dubrovnik, Croatia, Nov. 1998. ## DIQUARKS AS EFFECTIVE PARTICLES IN HARD EXCLUSIVE SCATTERING <sup>1</sup><sup>1</sup>1Talk given by W. Schweiger at the “International Conference on Nuclear and Particle Physics with CEBAF at Jefferson Lab”, Dubrovnik, Croatia, Nov. 1998. CAROLA F. BERGER, BERNHARD LECHNER, and WOLFGANG SCHWEIGER Institute of Theoretical Physics, University of Graz A-8010 Graz, Universitätsplatz 5, AUSTRIA email: wolfgang.schweiger@kfunigraz.ac.at In the context of hard hadronic reactions diquarks are a useful phenomenological device to model non-perturbative effects still observable in the kinematic range accessible by present-day experiments. In the following we present diquark-model predictions for $`\gamma \gamma p\overline{p}`$ and $`\mathrm{\Lambda }\overline{\mathrm{\Lambda }}`$. We also sketch how the (pure quark) hard-scattering formalism for exclusive reactions involving baryons can be reformulated in terms of quarks and diquarks. As an application of these considerations we analyze the magnetic proton form factor with regard to its quark-diquark content. Keywords: perturbative QCD, diquarks, hard hadronic processes, two-gamma reactions, proton magnetic form factor In a series of papers (and references therein) a systematic study of hard exclusive reactions has been attempted within a model based on perturbative QCD in which baryons, however, are treated as quark-diquark rather than three-quark systems. The processes which have been treated in a consistent way as yet include baryon form factors in the space- and time-like region , real and virtual Compton scattering , two-photon annihilation into proton-antiproton , the charmonium decay $`\eta _\mathrm{c}p\overline{p}`$ and photoproduction of the $`K^+`$-$`\mathrm{\Lambda }`$ final state . Like the usual hard-scattering formalism (HSF) for exclusive hadronic reactions the diquark model is based on factorization of short- and long-distance dynamics; a hadronic amplitude is expressed as a convolution of a hard-scattering amplitude, calculable within perturbative QCD, with distribution amplitudes (DAs) which contain the (non-perturbative) bound-state dynamics of the hadronic constituents. The introduction of diquarks is, above all, motivated by the requirement to extend the HSF from (asymptotically) large down to intermediate momentum transfers ($`p_{}^2\stackrel{>}{}\mathrm{\hspace{0.17em}4}\text{GeV}^2`$). This is the momentum-transfer region where some experimental data exist, but where still persisting non-perturbative effects, observable, e.g., as scaling violations or violation of hadronic helicity conservation, prevent the pure quark HSF to become fully operational. Diquarks may thus be considered as an effective way to cope with such effects. The model, as applied in Refs. , comprises scalar (S) as well as axial-vector (V) diquarks. V-diquarks are important if one wants to describe spin observables which require the flip of baryonic helicities. For the Feynman rules of electromagnetically and strongly interacting diquarks, as well as for the choice of the quark-diquark distribution amplitudes of octet baryons we refer to Ref. . 
Here it is only important to mention that the composite nature of diquarks is taken into account by multiplying each of the Feynman diagrams entering the hard scattering amplitude with diquark form factors. These are parameterized by multipole functions with the power chosen in such a way that in the limit $`p_{}\mathrm{}`$ the scaling behavior of the pure quark HSF is recovered. We want to present here a very recent application of the diquark model concerning the class of reactions $`\gamma \gamma B\overline{B}`$, where $`B`$ represents an octet baryon. In contrast to foregoing work, we have now considered these processes within the full model including also vector-diquarks. Furthermore, baryon-mass effects are taken into account in a rigorous way by means of a systematic expansion in the parameter (baryon mass/photon energy). With the same set of model parameters as in Refs. we find that the integrated cross-section data (available only) for the $`p`$-$`\overline{p}`$ and the $`\mathrm{\Lambda }`$-$`\overline{\mathrm{\Lambda }}`$ channel are very well reproduced (cf. Fig. 1). By comparing the solid and the dash-dotted line it can also be observed that in the few-GeV range baryon-mass effects are still sizable. For details of the calculation and results for other octet-baryon channels we refer to Ref. . As the applications mentioned above demonstrate, diquarks are obviously a very useful phenomenological concept (not only) in the field of hard hadronic processes. Physically speaking, diquarks represent effective particles which describe strong quark-quark correlations in baryonic wave functions. Within the pure quark HSF such correlations seem indeed necessary to obtain reasonable results, even for the simplest exclusive observables such as the nucleon magnetic form factors . A more formal justification of diquarks can be obtained by observing that the diquark model should evolve into the pure quark HSF in the limit of asymptotically large momentum transfers. This suggests a reformulation of the pure quark HSF in terms of quark and diquark degrees of freedom. Two obvious constraints for this reformulation are that the leading order hard-scattering amplitude on the quark-diquark level should also consist only of tree graphs (like in the pure quark HSF) and that the result of this reformulation should be independent of the choice of the two quarks which are grouped to a diquark. It has been proved in Ref. that a reformulation of the pure quark HSF fulfilling both constraints is indeed possible. If we employ this reformulation to analyze the proton magnetic form factor with respect to its diquark content, we find the isospin $`0`$ scalar $`S[ud]`$ diquark to provide the by far most important contribution (cf. Tab. 1). This is not only the case for the proton DA proposed by Chernyak et al. but holds also for other DA models. The reformulation of the pure quark HSF in terms of quarks and diquarks requires to study the general Lorentz structure of two-quark subgraphs to obtain the Lorentz covariants and corresponding (Lorentz-invariant) vertex functions of the various gauge-boson diquark vertices. This gives valuable clues how gauge-boson diquark vertices and corresponding form factors could be improved in the naive diquark model. 
However, in order to arrive at an effective model in the sense that it reproduces the results of the pure quark HSF (and not only its scaling behavior) in the limit of asymptotically large momentum transfers, one should take these vertices literally and use the vertex-function results as asymptotic constraints on the parameterization of the diquark form factors. A corresponding program is presently being carried out.
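As a concrete illustration of the form-factor parameterization referred to above (the expressions and powers below are generic multipole forms quoted only for orientation; the actual functions, prefactors and parameter values are those of the references cited in the text), one may write

$`F_S(Q^2)=\frac{Q_S^2}{Q_S^2+Q^2},\qquad F_V(Q^2)=\left(\frac{Q_V^2}{Q_V^2+Q^2}\right)^2,`$

with $`Q_S`$ and $`Q_V`$ free scale parameters. The powers are chosen such that for $`Q^2\rightarrow \infty `$ each diquark effectively resolves into two quarks and the dimensional-counting behavior of the pure quark HSF is recovered; this is exactly the asymptotic constraint that the vertex-function analysis described above is meant to sharpen.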
no-problem/9901/astro-ph9901356.html
ar5iv
text
# Magnetic CVs in Globular Clusters ## 1. Introduction With the advent of high resolution imaging telescopes in space, the cores of globular clusters have become available for studies of compact binaries (CBs), defined (here) as short period ($`<`$$``$ 1d) systems containing a compact object (white dwarf (WD), neutron star (NS) or black hole). CBs are observable as cataclysmic variables (CVs), low mass x-ray binaries (LMXBs), and millisecond pulsars (MSPs) and play a particularly important role in globulars: as the most compact (hard) binaries, they are not only the survivors of “binary burning” which dynamically heat the cluster cores while destroying wide binaries, but are the saviors of the cluster against complete core collapse (cf. Hut et al 1992). CVs are likely to be dominant, since white dwarfs (WDs) should vastly outnumber neutron stars (NSs) in clusters: for a Salpeter IMF with differential mass index $``$2.4, the WD progenitors ($`<`$$``$ 8 $`M_{}`$) must exceed those for NSs by a factor $``$(8/0.8)<sup>1.4</sup> $``$25. Conversely, the measured fraction, $``$, of CBs containing WDs (CVs) vs. NSs (LMXBs + MSPs) then limits the primordial IMF of the cluster and subsequent dynamical evolution of its stellar remnants. A reduction in $``$ is expected, for example, by considerations of stable mass transfer and mass segregation (e.g. Bailyn, Garcia and Grindlay 1990). Thus the study of CVs in globulars constrains both stellar and dynamical evolution. Not only the number and spatial distribution, but the very nature of CVs in globulars is emerging as a clue to their origin. On the basis of HST/FOS spectra (Grindlay et al 1995; GC95) showing moderately strong HeII ($`\lambda `$4686) emission for the three H$`\alpha `$ emission objects discovered (Cool et al 1995; CG95) in the core of the nearest core-collapsed globular, NGC 6397, we have suggested they may be magnetic CVs (MCVs) of the DQ Her (hereafter intermediate polar, IP) type in which the accreting WD has a magnetic field B<sub>WD</sub> $`>`$$``$ 0.1- 1 MG sufficient to truncate the inner edge of the accretion disk. More detailed analysis of NGC 6397, including a fourth CV candidate found in deep HST imaging (Cool et al 1998; CG98) and spectroscopy (Edmonds et al 1999; EG99), provides additional evidence that these first 4 spectroscopically confirmed CVs in a globular cluster core may be IPs (EG99), whereas only $``$10% of CVs in the field are of the DQ Her type (cf. Patterson 1994). If confirmed with both more detailed optical (and x-ray studies) and larger samples, this would suggest MCVs and magnetic WDs are somehow enhanced in globular cluster cores, providing yet more evidence that stellar evolution in globulars is affected by close encounters. One possibility (Grindlay 1996) is that since rotation of stellar cores increases in encounters or mergers, dynamo production of magnetic fields may result and the resulting WDs may be preferentially magnetic. In this paper, we briefly discuss the various search techniques for finding (and then studying) CVs and MCVs in globulars. We summarize the H$`\alpha `$-imaging technique conducted with HST which has yielded 3 CV candidates in NGC 6397 (CG95) and 2 each in NGC 6752 (Bailyn et al 1996; BR96) and $`\omega `$-Cen (Carson et al 1999). We focus on NGC 6397 for which the initial spectroscopic followup studies (GC95) as well as UBVI photometry revealed a fourth CV (CG98). 
We compare the optical emission line and continuum spectra of the 4 CVs in NGC 6397 with correlations found for field CVs and find they are consistent with those for IPs. Using the new spectrophotometry of EG99, and the preliminary results of our follow-up deep (75ksec) ROSAT observation (Grindlay, Metchev and Cool 1999; GMC99), we compare the x-ray vs. optical properties of the brightest 4 spectroscopically confirmed CVs and find they are consistent with the optical vs. x-ray correlations displayed by IPs in the disk. We report a fifth CVcandidate in the cluster core from the deep ROSAT study and find a likely optical counterpart as a near uv-excess star measured by CG98 that would also be consistent in its x-ray/optical flux ratio. Although the dim x-ray sources in NGC 6397 resemble IPs, we also briefly reconsider whether they might instead be very low accretion rate and optically thin disks as expected for either quiescent dwarf novae containing WDs or quiescent LMXBs containing NSs. We conclude with a brief comparison of our current results for NGC 6397 with those for 47 Tuc, for which Verbunt and Hasinger (1998; VH98) have a moderate ROSAT exposure. Interesting differences and similarities in the CV and CB populations may already be apparent. Upcoming HST and AXAF observations will provide much more sensitive tests of the possible MCV excess in globulars. ## 2. CV Searches in Globulars CVs have long been sought in globular clusters for a variety of (good) reasons: known distances to then fix mass transfer rates $`\dot{\mathrm{m}}`$ ; Pop II environments to test halo models; formation histories likely to include stellar encounters to contrast with primordial binary evolution (only) for field objects; and many others. Here we briefly review the search techniques and their completeness for finding MCVs. ### 2.1. UV-excess and Variability Since most CVs have been discovered in the field as blue variables, with dwarf novae (DNe) being the most common and novae the most extreme examples, initial searches in globulars have emphasized these properties. Indeed the only pre-HST spectroscopically confirmed CV in a globular (V101 in M5; Margon et al 1981) is a DN as originally suggested by Oosterhoff on the basis of outbursts. We note below that the total CV (and perhaps CB) population in globulars is strongly centrally concentrated so that V101, which could never be detected (from the ground) in the core, requires either formation from a primordial cluster binary or ejection from the core. The cores of two clusters have been moderately well searched with HST imaging for blue variables: NGC 6752 (Shara et al 1996) has yielded only upper limits for DNe, whereas 47 Tuc has produced only one confirmed and one possible DN system (Paresce and de Marchi 1994). Although these searches have certainly been sensitive to DN outbursts, the ability to detect $``$0.1-0.2 mag flickering typical of quiescent DNe (with apparent magnitudes $`>`$$``$ 21 for either cluster) is questionable for blind searches (though less so for identified objects, where neighbor subtraction can be accomplished more reliably). The fact that the CV candidates in NGC 6397 and NGC 6752 have now been found to be so red that in V-I they are nearly on the cluster main sequence (CG98), though still with moderate U-B excess, also suggests that uv-excess alone is not a requirement for cluster CVs. This is reinforced by the relatively uv-deficient spectral distribution of the one cluster CV studied now in the far-uv, CV1 in NGC 6397 (EG99). 
If the cluster CVs are dominated by MCVs then both uv-excess and dwarf nova type variability are expected to be suppressed due to the truncation of the inner portion of the accretion disk. Thus CV searches should be constructed to be independent of these criteria. ### 2.2. H$`\alpha `$ Emission Imaging All CVs except some DNe in outburst show emission lines, with the Balmer lines and H$`\alpha `$ most intense. This motivated our search strategy, developed initially in relatively shallow ground-based searches (e.g. Cool 1993) and culminating with HST and the initial results for NGC 6397 (CG95). An emission line search using narrow-band imaging would ideally use three filters: one centered on the emission line and two flanking continuum filters to measure both local continuum and slope while averaging over adjacent absorption line features (as used in our CTIO search of NGC 6752 (cf. Cool 1993)). Given the WFPC filter set, we have chosen the available H$`\alpha `$ filter F656N (with width W<sub>L</sub> $``$20 $`\mathrm{\AA }`$) and a single broad continuum filter (F675W with width W<sub>L</sub> $``$913 $`\mathrm{\AA }`$; yielding $``$ magnitudes). H$`\alpha `$ emission candidates are then identified in the color magnitude diagram formed by $``$ vs. H$`\alpha `$ \- $``$ as “blue” objects. Calibration of this photometry is self contained by the measure of blue stragglers and horizontal branch stars, with their stronger H$`\alpha `$ absorption, which show up as a separate track of “red” objects offset by $``$0.15mag in the CMD (cf. CG95). If MCVs dominate, the H$`\alpha `$ searches should be relatively sensitive. However, the searches are per force limited by the narrow-band throughput of the F656N filter and consequent long total effective integrations to achieve interestingly deep limiting magnitudes. In NGC 6397, the closest globular with a high density core, we shall obtain (in cycle 7) 15 HST orbits to reach an effective limiting magnitude of $``$24 (M<sub>V</sub> $``$11.5;$``$10$`\sigma `$). This would span $`>`$$``$ 3-4 binary orbits of the $`<`$$``$ 6h expected orbital periods (given the limits on secondary mass as well as disk absolute magnitudes derived by EG99) of the cluster CVs. However, once identified in H$`\alpha `$, variability studies can be conducted and the likely orbital periods P<sub>b</sub> (e.g. $``$3.5 - 5h for CVs 1-4 in NGC 6397; cf. EG99) can be measured from the accompanying short R exposures as done successfully for the 2 CV candidates in NGC 6752 (BR96). Detection of pulsation periods P<sub>p</sub>$`<`$$``$ P<sub>b</sub> would provide direct confirmation that they are indeed IPs and may be possible for the brightest candidates by searching for modulations in the B continuum by temporal analysis of STIS spectra as we have proposed. ### 2.3. Dim X-ray Sources A new population of dim x-ray sources was discovered in globular cluster cores and proposed as most likely to be CVs (Hertz and Grindlay 1983; HG83). Given the usual advantage of known distances to globular clusters, the luminosities of these dim sources could be derived with greater accuracy than for field CVs (although the fainter source fluxes precluded spectral determinations). 
Since the initial (relatively shallow) Einstein surveys yielded luminosities L<sub>x</sub> (0.2-4keV) $``$10<sup>32.5-34.5</sup> erg/s, and since at least one of the dim sources (in NGC 6440) was regarded as the likely detection of a quiescent NS transient, HG83 concluded the dim sources were likely a mixture of both CVs and quiescent LMXBs (qLMXBs). Verbunt et al (1984) argued that all the dim sources discovered with the Einstein survey were most likely qLMXBs although it was already evident from studies of field CVs (e.g. Patterson and Raymond, 1985a,b; PRa,b) that the luminosities of field CVs extended above $``$10<sup>32.5</sup> erg/s in the 0.5-4.5 keV Einstein band. Much greater sensitivity surveys have now been conducted with ROSAT. Here we only consider the initial (shallow) survey of NGC 6397 (Cool et al 1993; CG93) and the current (deepest) results on NGC 6397 (GMC99) and 47 Tuc (VH98). CG93 discovered three sources with L<sub>x</sub> $``$10<sup>31.5-32</sup> erg/s within $``$10$`^{\prime \prime }`$ of the center of NGC 6397. These comfortably overlap typical CV luminosities, and indeed our initial H$`\alpha `$ imaging survey of NGC 6397 with HST (CG95) revealed three optical candidates, with spectra showing them to be most likely IPs (GC95). The deeper survey of GMC99 shows at least a 4th source (and probably several more) in the central core, as discussed below. MCVs might be expected to dominate the ROSAT survey since they typically have F<sub>x</sub>/F<sub>opt</sub> values greater than non-magnetic CVs (PRa, Patterson 1994, Beuermann 1998). However, the H$`\alpha `$ survey found (blindly) the same sources within the expected sensitivity limits suggesting that any possible MCV excess in globulars is not a result of x-ray selection alone. ## 3. Evidence for MCVs MCVs in globulars were suggested as a possible observable class by Chanmugam, Ray and Singh (1991). The first evidence for their detection was contained within the HST/FOS spectra of CV candidates 1-3 in NGC 6397 which showed all to have moderately strong HeII emission (GC95). Although HeII ($`\lambda `$4686) emission is not unique to MCVs, and in fact PRb show that it is correlated with accretion rate $`\dot{\mathrm{m}}`$ and present with EW(HeII) $``$3$`\mathrm{\AA }`$ in most CVs, Silber (1992) has shown that the apparent excitation as measured by the ratio of equivalent widths $`𝒳`$ = EW(HeII)/EW(H$`\beta `$) correlates with magnetic nature, with $`𝒳`$ $`>`$$``$ 0.3 for intermediate polar (IP; or DQ Her type) or polar (or AM Her type) systems. Although CVs 1 - 3 in fact have $`𝒳`$ = 0.32, 0.34, and 0.25, and CV 4 has $`𝒳`$ = 0.07, EG99 show that their spectra (cf. Figure 1) and continuum properties are in fact most consistent with IPs. Why should the MCVs have enhanced HeII emission ? The likely reason is the larger optically thin coronal region inside the inner edge of a magnetically truncated accretion disk, which is then more readily photoionized by soft x-ray emission from the accretion column onto the WD. Doppler imaging maps of HeII vs. H$`\beta `$ support this general picture. Fig. 1. Mean spectra of CVs 1-3 (GC95) and CV4 in NGC 6397 (from EG99) showing the higher excitation HeI and HeII emission (particularly CVs 1-3) expected for IP type MCVs. (the feature at $``$$`\lambda `$5560 is instrumental). ### 3.1. 
Colors and Spectrophotometry with HST EG99 have investigated the question of how the four CV candidates in NGC 6397 (the only spectra available for CVs in the cores of globulars) compare in both their continuum and line ratio properties with CVs generally. Since the V-I colors of these objects are nearly coincident with the main sequence for the cluster (CG98), reasonably accurate magnitudes and masses for the secondaries can be derived (CG98, EG99) and the disk absolute magnitudes determined. From plotting these disk magnitudes and the continuum ratios at H$`\beta `$/H$`\alpha `$ vs. the excitation parameter, $`𝒳`$, the objects closely resemble correlations in these quantities obeyed by MCVs, as seen in Figure 2. Fig. 2. Correlations of continuum fluxes at H$`\beta `$ vs. H$`\alpha `$ and derived disk absolute magnitude vs. the excitation ratio, $`𝒳`$, which support the hypothesis that CVs 1-4 in NGC 6397 are DQ Her types (from EG99). In Figure 3 we explore the apparent relationship between $`𝒳`$ and EW(H$`\beta `$), which itself is proportional to the relative x-ray to optical flux (see below), for the 4 CVs in NGC 6397 as well as 7 IPs in the field. The emission line data (EW values for HeII and H$`\beta `$) for the disk IPs are taken from Williams (1983) (for GK Per, EX Hya, YY Dra, FO Aqr and AO Psc), Steiner et al (1981) (V1223 Sgr) and Motch et al (1996) (V709 Cas) and from EG99 for the four CVs in NGC 6397. This emission line “CMD” shows that $`𝒳`$ is (weakly) anti-correlated with EW(H$`\beta `$). This could reflect a variation in inner disk radius (from either WD magnetic field, B<sub>WD</sub>, or accretion rate, $`\dot{\mathrm{m}}`$ ) if the HeII emission from within the inner disk increases less with increasing accretion rate $`\dot{\mathrm{m}}`$ than does H$`\beta `$ from the outer disk. Although not noted, a similar correlation may be inferred from the data plotted by Echevarria (1988). The cluster CVs could define the lower B field end of the sequence, since the alternative of higher $`\dot{\mathrm{m}}`$ (alone) is not consistent with their relatively faint disk absolute magnitudes (cf. Figure 2). Comparison with P<sub>b</sub> and P<sub>p</sub> values given by Hellier (1996) reveals no correlation. Fig. 3: Excitation ratio, $`𝒳`$, vs. EW(H$`\beta `$) for CVs 1-4 in NGC 6397 vs. disk IPs. ### 3.2. X-ray vs. Optical Properties of CVs in NGC 6397 Additional tests for the MCV nature of the globular cluster CVs are possible by comparing their x-ray vs. optical properties with those of IPs in the disk. Using the initial ROSAT x-ray fluxes of the three dim sources, C1-C3, in the central core of NGC 6397 (CG93) and their probable optical counterpart CVs 1-3 (cf. CG95 for associations), CG95 found these three objects were consistent with the distance-independent correlation between F<sub>x</sub>/F<sub>opt</sub> and EW(H$`\beta `$) found by PRa for disk CVs. Using the actual measured EW(H$`\beta `$) values (rather than inferred from H$`\alpha `$ magnitudes, as in CG95), Grindlay and Cool (1996) refined this correlation and compared CVs 1-3 with the qLMXB Cen X-4 (cf. discussion below). Here we use our more accurate spectrophotometry given in EG99 and preliminary results from our deep (75ksec) ROSAT/HRI observation (GMC99). We find at least one additional dim source, C4, in the core of NGC 6397 as well as additional fainter sources near the core. C4 is $``$8$`^{\prime \prime }`$ due west of C3 (=CV2) and in the 75ksec ROSAT/HRI observation had total flux $``$80 counts vs. 
$``$160, 70 and 150 counts for dim sources C1-C3, respectively. Since the positions for CV1 and CV4 (CG98) are only $``$3$`^{\prime \prime }`$ apart, they are not resolved by the HRI (with $``$5$`^{\prime \prime }`$ resolution) and we divide the detected counts for the dim source C2 between them. The derived positions for C1-C3 are each within $``$2$`^{\prime \prime }`$ of the positions for CVs 1-4, lending confidence to the optical identifications and allowing us to search for the possible counterpart of the new source, C4. Re-examination of the HST/UBVI images reported in CG98 in fact yields a very probable identification for a 5th(!) CV in the core of NGC 6397: candidate CV5, with apparent magnitude V = 21.7 and (U-B) = -0.8 (visible in Figure 3 of CG98) is within the $``$3$`^{\prime \prime }`$ error circle of dim source C4. This star was too faint to be detectable in our original H$`\alpha `$ search (CG95) but with F<sub>x</sub>/F<sub>opt</sub> $``$3.2, it should be easily detected in our forthcoming deep (HST cycle 7) H$`\alpha `$ survey of NGC 6397, given the strong correlation between F<sub>x</sub>/F<sub>opt</sub> and EW(H$`\beta `$) (PRa). In Figure 4 we show the derived relation between the ratio of x-ray (ROSAT band) to optical (V band) fluxes, F<sub>x</sub>/F<sub>opt</sub>, vs. EW(H$`\beta `$). Only CVs1-4 (with measured EW(H$`\beta `$) values) are shown, along with the same field IPs as plotted in Figure 3, so that their optical vs. x-ray properties may be compared. The flux ratio F<sub>x</sub>/F<sub>opt</sub> has been computed for each object as the measured flux in the V band (5000 - 6000$`\mathrm{\AA }`$) and the ROSAT band (0.5 - 2.5 keV) in order to use measured (not extrapolated) values. We use the visual magnitude without interstellar reddening so that the NGC 6397 CVs, with measured cluster extinction of A<sub>V</sub> = 0.58 (cf. CG98) may be more properly compared with the disk CVs which are all within (typically) 500 pc and only moderately reddened. If this correction is not made for the cluster CVs, (e.g. if some disk CVs are also reddened), their flux ratios would increase by $``$0.2 on the log scales plotted in the figures below. The x-ray fluxes have been computed for all objects assuming a relatively hard bremsstrahlung spectrum with kT = 10 keV since this is generally appropriate for disk IPs (cf. Patterson 1994), and with an absorption column of NH = 1. $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>. This NH is the interstellar value for NGC 6397 and thus a lower limit since disk IPs generally appear to be self-absorbed with NH values well in excess of interstellar values (Patterson 1994, Hellier 1996). A measure of the uncertainty in F<sub>X</sub> from both NH and spectral differences for the disk IPs can be obtained by comparing the ROSAT 0.5-2.5 keV fluxes (from PSPC survey fluxes given by Verbunt et al 1997 for all but YY Dra and V709 Cas, for which HRI fluxes from Norton et al 1998 are used) with extrapolating the 2-10 keV fluxes and spectral fits given by Patterson (1994) (for all but V709 Cas) into the 0.5-2.5 keV band. The error bars on the F<sub>x</sub>/F<sub>opt</sub> plots denote this hard vs. soft flux spectral uncertainty, and the line plotted is log(F<sub>x</sub>/F<sub>opt</sub>) = -2.21 + 1.45 log \[EW(H$`\beta `$)\] as found by PR85a for all CVs. Note that only 4/7 of the disk IPs are above the line and that the F<sub>x</sub>/F<sub>opt</sub> values for CV1 and CV4 are particularly uncertain. Fig. 4: X-ray/optical flux ratio vs. 
EW(H$`\beta `$) values for CVs 1-4 in NGC 6397 vs. disk IPs compared with PR relation for field CVs as well as Cen X-4.

For comparison we show in Figure 5 the same relation for HeII vs. F<sub>x</sub>/F<sub>opt</sub>.

Fig. 5: F<sub>x</sub>/F<sub>opt</sub> vs. EW(HeII) and approximate fit.

The approximate linear fit to the log-log data plotted is given by

log(F<sub>x</sub>/F<sub>opt</sub>) = -2.5 + 2.5 log[EW(HeII)].

Thus the x-ray/optical flux ratio is more strongly dependent on the HeII line strength than on H$`\beta `$. Finally, we investigate the possible relation between $`𝒳`$ and F<sub>x</sub>/F<sub>opt</sub> directly, since the correlations of F<sub>x</sub>/F<sub>opt</sub> with both EW(H$`\beta `$) and EW(HeII) might at first suggest a positive correlation with $`𝒳`$. However, algebraically, the approximate log-log relations given above would predict $`𝒳`$$``$(F<sub>x</sub>/F<sub>opt</sub>)<sup>-4/15</sup>, which is plotted in Figure 6 with the same data points.

Fig. 6: Excitation ratio, $`𝒳`$, vs. F<sub>x</sub>/F<sub>opt</sub> for CVs 1-4 in NGC 6397 vs. disk IPs.

We note that since both CV1 and GK Per have their F<sub>x</sub>/F<sub>opt</sub> ratios most strongly affected by their relatively massive secondary companions, their F<sub>x</sub>/F<sub>opt</sub> ratios in Figures 3-6 may be anomalously low.

### 3.3. Are the Objects in NGC 6397 Really MCVs?

The spectra, photometry and x-ray/optical properties of CVs 1-4 in NGC 6397 (cf. EG99) are consistent with them being MCVs (IPs), yet until pulsations or other signatures unique to IPs are detected there are still questions:

Are they quiescent Dwarf Novae? Their faint disks, and the apparent lack of DNe generally in globulars (Shara et al 1996), could both be indicative of WZ Sge type systems in which the recurrence time for DN outbursts has become very long at the low accretion rates possibly implied by the faint (optically thin) disks.

Are they quiescent LMXBs? In Figures 4 and 5 we have plotted the F<sub>x</sub>/F<sub>opt</sub> and EW(H$`\beta `$), EW(HeII) values, respectively, for the classic qLMXB Cen X-4. The x-ray flux is from the recent ASCA spectral measurement (Campagna et al 1997) and the optical magnitudes and EW values are from Chevalier et al (1989) and McClintock and Remillard (1990). It is clear that Cen X-4 is offset from the bulk of the IPs in both correlations, and even more so from CVs 1-4 (the forthcoming measurement of EW(H$`\alpha `$) $``$EW(H$`\beta `$) for CV5, with log(F<sub>x</sub>/F<sub>opt</sub>) $``$0.5, will provide an additional test). However, since the qLMXB x-ray luminosities (e.g. Cen X-4) are $`>`$$``$ 10$`\times `$ larger than the $``$1-4 $`\times `$ 10<sup>31</sup> erg/s values for CVs 1-4 (and also CV5), a qLMXB interpretation is less likely.
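For completeness, the algebra behind the $`𝒳`$ vs. F<sub>x</sub>/F<sub>opt</sub> scaling quoted above is worth spelling out (this is our rederivation from the two empirical fits taken at face value, not a relation from the cited papers). Inverting the fits gives EW(H$`\beta `$) $`\propto `$ (F<sub>x</sub>/F<sub>opt</sub>)<sup>1/1.45</sup> and EW(HeII) $`\propto `$ (F<sub>x</sub>/F<sub>opt</sub>)<sup>1/2.5</sup>, so that

$`𝒳=\frac{\mathrm{EW}(\mathrm{HeII})}{\mathrm{EW}(\mathrm{H}\beta )}\propto \left(\frac{F_x}{F_{opt}}\right)^{1/2.5-1/1.45}\approx \left(\frac{F_x}{F_{opt}}\right)^{-0.29}.`$

With the PR85a slope rounded to 1.5, the exponent becomes exactly $`2/5-2/3=-4/15`$, i.e. the weak anti-correlation plotted in Figure 6.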
If the core is in equipartition and BSs have masses $``$2$`\times `$ the turnoff value or $``$1.5$`M_{}`$, the implied WD masses in CVs1-4 are $``$1$`M_{}`$ given their $``$0.5$`M_{}`$ (EG99) secondaries. The distribution of dim sources in 47 Tuc reported by VH98 is similar: the central 5 are within $``$20$`^{\prime \prime }`$ of the cluster center, or again comparable to the cluster core radius and BS distribution of this non-PCC cluster, although the dim sources are each typically $``$10-50$`\times `$ more luminous (the brightest may be qLMXBs). The underlying extended emission in the core quoted by VH98, with total luminosity $``$4 $`\times `$ 10<sup>32</sup> erg/s, is about twice the total core luminosity of NGC 6397 and may reflect a similar distribution of ($``$10) fainter CVs. Since the core of 47 Tuc contains $`>`$$``$ 10$`\times `$ the mass of the NGC 6397 core, the nearly comparable numbers of CVs suggest the core collapse may have triggered a burst of CB production in NGC 6397. If so, it is remarkable that NGC 6397 as yet contains no compelling evidence for CBs containing NSs: no MSPs have yet been reported (despite at least two surveys) whereas at least 11 are known in 47 Tuc. This may reflect differences in the cluster IMFs or NS retention. Upcoming high resolution x-ray imaging and spectra with AXAF of both clusters, and the deep H$`\alpha `$ survey of NGC 6397 (with 47 Tuc still needed), will help measure the CV nature and content. AXAF, in particular, will resolve CVs 1 vs. 4 in NGC 6397 (thus removing the uncertainties in their F<sub>x</sub>/F<sub>opt</sub> values as plotted in Figures 4 and 5) as well as the fainter sources in both clusters. ACIS spectra can also test for whether the sources have the hard spectra typical of IPs. However, deep STIS spectra and temporal analysis for pulsations are also needed to clarify if the objects are indeed dominated by IPs. #### Acknowledgments. I thank A. Cool, P. Edmonds, and S. Metchev for assistance with analysis and HST grant GO-6742 for partial support. ## References Auriere, M., Ortolani, S. & Lauzeral, C. 1990, Nature, 344, 638. Bailyn, C., Garcia, M. and Grindlay, J. 1990, ApJ, 357, L35. Bailyn, C. et al 1996, ApJ, 473, L31 (BR96). Beuermann, K. 1998, in Perspectives in High Energy Astronomy & Astrophysics (P.C. Agrawal & P.K. Visvanathan, eds.), Universities Press, p. 100. Campagna, S. et al 1997, A&A, 324, 941. Carson, J., Cool, A., Grindlay, J. et al 1999, in preparation. Chanmugam, G., Ray, A. and Singh, K. 1991, ApJ, 375, 600. Chevalier, C. et al 1989, A&A, 210, 114. Cool, A. 1993, Ph.D. Thesis (Harvard). Cool, A. et al 1993, ApJ, 410, L103 (CG93). Cool, A. et al 1995, ApJ, 439, 695 (CG95). Cool, A. et al 1998, ApJ, 508, L75 (CG98). De Marchi, G. and Paresce, F. 1994, A&A, 281, L13. Echevarria, J. 1988, MNRAS, 233, 513. Edmonds, P. et al 1999, ApJ, in press (EG99). Grindlay, J. et al 1995, ApJ, 455, L47 (GC95). Grindlay, J. 1996, Proc. IAU Symp. 174 (J. Makino & P. Hut, eds.), 171. Grindlay, J. and Cool, A. 1996, Proc. IAU Symp. 174 (J. Makino & P. Hut, eds.), 349. Grindlay, J., Metchev, S. and Cool, A. 1999, in preparation. Hellier, C. 1996, Proc. IAU Colloq. 158 (A. Evans & J. Wood, eds.), 143. Hertz, P. and Grindlay, J. 1983, ApJ, 275. 105 (HG83). Hut, P. et al 1992, PASP, 104, 981. Margon, B., Downes, R. and Gunn, J. 1981, ApJ, 247, L89. McClintock, R. and Remillard, R. 1990, ApJ, 350, 386. Motch, C. et al 1996, A&A, 307, 459. Norton, A.J. et al 1998, A&A, in press (astro-ph/9811310). Paresce, F. and De Marchi, G. 
1994, ApJ, 427, L33. Patterson, J. 1994, PASP, 106, 209. Patterson, J. and Raymond, J. 1985a, ApJ, 292, 535 (PRa). Patterson, J. and Raymond, J. 1985b, ApJ, 292, 550 (PRb). Shara, M. et al 1996, ApJ, 471, 804. Silber, A. 1992, Ph.D. Thesis (MIT). Sosin, C. 1997, Ph.D. Thesis (U.C. Berkeley). Steiner, J.E. et al 1981, ApJ, 249, L21. Verbunt, F., van Paradijs, J. and Elson, R. 1984, MNRAS 210, 899. Verbunt, F. et al 1997, A&A, 327, 602. Verbunt, F. and Hasinger, G. 1998, A&A, 336, 895 (VH98). Williams, G. 1983, ApJS, 53, 523.
no-problem/9901/astro-ph9901174.html
ar5iv
text
# 1 Mission status ## 1 Mission status The Rossi X-ray Timing Explorer (Bradt, Rothschild & Swank 1993) was launched on 30 December 1995. Since then it has carried out a diverse observing program that has been open to the entire astronomical community since a month after launch. It carries an All-Sky Monitor (Levine et al. 1996; Levine 1998) which, together with a flexible spacecraft pointing capability, permits rapid (hours) acquisition of new or recurrent transient sources, sources entering new or interesting states, and gamma-ray burst afterglows. The pointed instruments are a large Proportional Counter Array (PCA; 2 – 60 keV; Jahoda et al. 1996) and a rocking High Energy X-ray Timing Experiment (HEXTE; 15 – 200 keV; Rothschild et al. 1998). All three instruments continue to operate close to their design state. It is hoped that operations can continue for several more years. There are no on-board expendables which would limit the spacecraft life. Information about the mission as well as data products may be accessed on the web through: http://heasarc.gsfc.nasa.gov/docs/xte/. The source intensities from the ASM are posted every few hours on: http://heasarc.gsfc.nasa.gov/xte\_weather/. ## 2 Scientific accomplishments: overview The RXTE was designed to study compact objects and the material in their environs with emphasis on temporal studies with high statistics together with broad band-band spectroscopy. Over 150 papers had been accepted in the refereed literature by the summer of 1998. RXTE has been highly influential in conduct of science in many wavebands. Over 120 IAU Circulars had announced RXTE discoveries of immediate interest and these have been followed by numerous reports from other observatories, gamma-ray, x-ray, radio and optical. At http://heasarc.gsfc.nasa.gov/whatsnew/xte/papers.html, circulars and papers may be found. The areas in which RXTE has made important contributions are listed here, with a few sample references. They are extracted from the Proposal to the 1998 Senior Review of NASA Astrophysics Missions Operations and Data Analysis authored by J. Swank, F. Marshall & the RXTE Users’ Group. Thereafter, I will give brief overviews, from my perspective, of two areas wherein RXTE has broken substantial new ground, namely, kiloHertz oscillations and microquasars. 1. Behavior of matter in regimes of strong gravity through the temporal and spectral signatures of kiloHertz pulsars and variability in microquasars (see refs. below). 2. Spinup evolution of neutron stars through the characteristics of kHz pulsars and the discovery of the first accretion powered millisecond pulsar (see refs. below). 3. Formation of relativistic astrophysical jets through the multiwavelength study of (galactic) microquasars (see refs. below). 4. AGN unified models and emission mechanisms through multiwavelength (esp. TeV) studies (e.g., Cantanese et al. 1997), detection of iron line and reflection components in individual Sy1 and Sy2 galaxies (e.g., Weaver, Krolik & Pier 1998, Nandra et al. 1998), and temporal studies with long term sampling with both the PCA and the ASM instruments. See for example the variability of the BL Lac objects Mkn 501 and Mkn 421 in Fig. 1. 5. High magnetic fields in neutron stars through the study of (1) the magnetosphere/disk boundary with low-frequency QPOs (e.g., Kommers, Chakrabarty & Lewin 1998), Type II bursts, the propeller effect (e.g., Cui 1997), and cyclotron lines (Kreykenbohm et al. 
1999); (2) the discovery of the fastest rotation powered pulsar ($`P=16`$ ms; Marshall et al. 1998), and (3) the discovery of x-ray pulsations supporting the identification of a magnetar , a neutron star with an extraordinarily high magnetic field ($`2\times 10^{10}`$ T), as a soft gamma-ray repeater (Kouveliotou et al. 1998). 6. Transient sources through PCA slews and ASM monitoring which have revealed $``$15 previously unknown sources and numerous recoveries of previously known sources, together with follow-on studies with the PCA and other observatories. Several new examples of radio-jet systems have been revealed (see below). Sample light curves from the RXTE/ASM extending over 2.5–yr are shown in Fig. 1. 7. State changes in binary systems through temporal/spectral tracking during major changes in x-ray flux and spectrum. In the case of Cyg X–1, a change of corona size is indicated (Cui et al. 1997). 8. Superorbital quasi periodicities in high and low mass binaries as well as in black-hole binaries through their discovery or confirmation with the ASM. (e.g., $`P`$ 60 d in SMC X1; See Fig. 1). Some are most likely due to precessing accretion disks similar to the 35–d period in the well known Her X–1. However, the evolution of wave forms and periods indicates relatively complex underlying physics (Levine 1998). 9. Gamma-ray burst afterglows through rapid position determinations with the ASM and PCA. Five burst positions with positions accurate to a few arcminutes in one or two dimensions have been reported to the community within hours of the event (Smith et al. 1999). One of the three known GRB with measured extragalactic red shifts (GRB 980703) was first located with the ASM on RXTE (Levine, Morgan & Muno 1998, Djorgovski et al. 1998). Another (GRB 970828) had a very bright afterglow in x rays but no discernable optical or radio afterglow (Remillard et al. 1997; Groot et al. 1998). 10. X-ray emission regions in cataclysmic variables through PCA tracking of eclipse transitions with precisions of tens of kilometers (e.g., Hellier 1997), and wind-wind collisions in Eta Carina through repeated PCA spectral observations (Corcoran et al. 1997). 11. Diffuse source spectra from the galactic plane (Valinia & Marshall 1998), supernova remnants (e.g., Allen et al. 1997), and clusters of galaxies (e.g., Rephaeli & Gruber 1999) to high energies ($`>`$ 10 keV) with PCA and HEXTE. ## 3 KiloHertz oscillations in low-mass x-ray binaries The most prominent area of RXTE accomplishment is that of kiloHertz oscillations. Relatively high-Q quasiperiodic oscillations (QPO) at kHz frequencies (up to 1230 Hz) in Low-Mass X-ray Binaries (LMXB) have been found in the persistent flux of 18 sources (as of this writing, Dec. 1998). Five of these, and one other, exhibit quite coherent, but transient, oscillations in the frequency range 290 – 590 Hz during Type I (thermonuclear) bursts. One additional source, a transient, exhibited sustained coherent pulsations at 401 Hz with Doppler shifts characteristic of a binary orbit. Reviews of the field may be found in van der Klis (1998, 1999). These new phenomena are probing the processes taking place close to the neutron stars where the effects of General Relativity are important. For example, if the highest-frequency kHz QPO observed in a given source is interpreted as the Kepler frequency of the inner accretion disk, it should be limited by the frequency of the innermost stable orbit allowed in general relativity ($`r=6GM/c^2`$ for Schwarzschild geometry). 
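To put a number on this limit (a back-of-the-envelope estimate in the non-rotating Schwarzschild approximation, not a value taken from the cited papers), the orbital frequency at $`r=6GM/c^2`$ is

$`\nu _{\mathrm{ISCO}}=\frac{1}{2\pi }\sqrt{\frac{GM}{r^3}}=\frac{c^3}{2\pi \,6^{3/2}\,GM}\approx 2.2\ \mathrm{kHz}\left(\frac{M_{\odot }}{M}\right),`$

so a maximum observed QPO frequency in the 1.0–1.2 kHz range corresponds to $`M\approx 1.8`$–$`2.2\ M_{\odot }`$; corrections for the neutron-star spin shift these numbers somewhat, which is why slightly different masses are quoted below.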
Since 17 of the sources exhibit maximum frequencies in the relatively narrow 1000–1200 Hz range, these frequencies may indeed represent the innermost stable orbits. The phenomenon is also placing constraints on the equations of state of neutron stars as we illustrate below. The first discovered examples of the quasi-periodic oscillations at kHz frequencies were in Sco X-1 (van der Klis et al. 1996) and 4U 1728–34 (Strohmayer et al. 1996). These QPO usually occur in pairs. As the source intensity increases, the two QPO generally increase in frequency with a frequency difference that remains approximately constant (Strohmayer et al. 1996); see Fig. 2. If the higher-frequency peak of the pair is the Kepler velocity of a blob of orbiting disk material, the lower-frequency peak could arise from the interaction of the blob with the magnetosphere which is co-rotating with the spinning neutron star. The observed lower frequency would thus be a beat frequency, such as that postulated for QPOs at much lower frequencies ($`60`$ Hz) in the 1980’s (Alpar and Shaham 1985, Lamb et al. 1985). The difference of the two frequencies would be the neutron-star spin frequency. For the 17 sources that exhibit two frequencies, the differences range from $``$250 to $``$350 Hz, indicating neutron-star spins in this range. (See Table 1 in van der Klis 1999.) The increase of frequency of the oscillations with intensity (Fig. 2) can be understood in this picture as being due to an increase in the Kepler frequency arising from a decrease in the size of the magnetosphere caused, in turn, by the increased ram pressure of the accreting material (Ghosh & Lamb 1992). In fact, the source 4U 1820–30 exhibits frequencies that increase with flux until they saturate at about 1060 Hz and 800 Hz (Fig. 3). This suggests that the innermost stable orbit has been reached (Zhang et al. 1998b). But since this plot is a compilation of data from different observations that might have differing intensity-frequency relations (see below), the effect could be an artifact (Mendez et al. 1999). If indeed the maximal frequencies represent the innermost stable orbits, the highest observed frequency seen to date (1228 Hz) yields a neutron star mass of $``$2.0 $`M_{}`$, which is significantly above the canonical 1.4 $`M_{}`$. Further, the radius of the neutron star must not exceed the radius of the Kepler orbit corresponding to the maximum observed frequency in a given source. This limit, together with the marginally stable orbit just discussed above, places constraints on allowed equations of state for neutron stars (Miller, Lamb & Psaltis 1998, see Fig. 4). The current limits do not yet distinguish among the plotted equations of state, but the potential for doing so is clearly there. The interpretation that the difference frequency is the neutron-star spin frequency gains credence from the discovery of nearly coherent pulsing during x-ray bursts in several sources at about, or at about twice, the difference frequency (e.g., Strohmayer et al. 1996). The frequency of the burst oscillations in 4U 1728–34 is stable from burst to burst within about 0.01% over a period of 1.6 years (Strohmayer et al. 1998; Fig. 5). This stability is a strong indicator that these pulsations directly represent the neutron-star spin. They could arise from a transient hot spot (or hot spots) in the runaway thermonuclear burning on the neutron-star surface (Bildsten 1995). 
If there were a hot spot at each of two opposed magnetic poles, the detected frequency would be twice the spin frequency. This picture needs refinement or modification for several reasons: (1) the frequency difference as a function of intensity (or of one of the two frequencies) is not constant in Sco X–1 (van der Klis et al. 1997), 4U 1608 (Mendez et al. 1998), 4U 1735–44 (Ford et al. 1998), and possibly in all sources (Psaltis et al. 1998), (2) the frequencies in bursts are not strictly 1.0 or 2.0 times the difference frequencies in all sources, especially in 4U 1636–536 (Mendez, van der Klis & van Paradijs 1998), (3) there are small frequency drifts during a single burst (Fig 5), and (4) although there are correlations between source intensity and QPO frequency on short time scales (hours), the correlations do not hold up over long periods (days); very different intensities can yield the same QPO frequency, e.g., in Aql X–1 (Zhang et al. 1998a). These problems are not necessarily fatal to the basic picture; there are various proposed scenarios to explain the discrepancies. Alternatively, the answers could lie in very different directions, see, e.g., Stella & Vietri (1999). The so-called discrepancies are actually excellent probes with which to verify or discard theoretical models. For example, the frequency difference as a function of intensity in Sco X–1 requires quantitative understanding; see, for example, Stella & Vietri 1999. Also, the frequency vs intensity dilemma (item 4 above) has been clarified by the discovery of monotonic mapping of frequency with position in the color-color diagram of 4U 1608–52 (Mendez et al. 1999). Finally, as noted above, the beat-frequency model was originally introduced to explain some of the lower-frequency quasi-periodic oscillations ($``$ 6 – 60 Hz) in LMXB in the 1980’s. One cannot explain both kinds of oscillations (low-frequency and kHz) in the same source with this one model. Attempts to rationalize these phenomena include (1) a sonic-point model to explain the kHz oscillations as arising from interactions between radiation and orbiting blobs at the sonic-point radius (Miller, Lamb & Psaltis 1998), (2) nodal precession of the inner disk, dominated by the Lense-Thirring effect, to explain the lower-frequency oscillations (Stella & Vietri 1998), and (3) periastron precession to explain the lower-frequency peak of the kHz twin peaks (Stella & Vietri 1999). The latter model can reproduce the changes in the frequency difference but does not attempt to explain the apparent coincidences with the frequencies of the kHz QPO during bursts in some sources. Strong-field GR is required to calculate the periastron-precession frequency. Thus, as pointed out by the authors, if their model is validated, the kHz QPO phenomenon provides an unprecedented testbed for strong-field General Relativity. These indicators of the neutron-star spin are only indirect (through the beat frequency) or fleeting (during bursts). The detection of coherent, persistent, accretion-powered pulsing at millisecond periods had so far eluded RXTE researchers. This elusive goal was reached with the recent (April 1998) RXTE discovery of highly coherent 401–Hz x-ray pulsing in the persistent flux of a transient source (Wijnands & van der Klis 1998). A binary orbit was easily tracked with the Doppler shifts of the 401–Hz pulsations (Chakrabarty & Morgan 1998, Fig. 6). The orbital period is 2.01 hr which indicates a companion mass less than 0.1 $`M_{}`$. 
The Doppler variation demonstrated without doubt that the pulsations arose from the neutron-star spin. The source thus became the first known accretion-powered millisecond pulsar.The source had been detected in a previous transient episode (September 1996) with the SAX Wide Field Camera (and also in RXTE/ASM data retrospectively) and two x-ray bursts were observed from it (in ’t Zand et al. 1998). It is known as SAX J1808.4–3658. The rapid pulsing discovery occurred during a later outburst (April 1998) that was revealed in RXTE/PCA data during a spacecraft slew (Marshall 1998). These discoveries of millisecond-period x-ray sources fill an important link in the spin evolution of neutron stars. It had long been postulated that millisecond radio pulsars are spun up in x-ray binaries (Radhakrishnan & Srinivasan 1982, Alpar et al. 1982), and LMXB were prime candidates because of their lack of coherent pulsations at lower frequencies. This long-sought evolutionary link has now been established. This is the successful attainment of one of RXTE’s major goals. ## 4 Microquasars Another area of major accomplishment by RXTE is that of “microquasars”. The discovery of transient galactic x-ray emitting objects with superluminal radio jets, GRO 1655–40 and GRS 1915+105 (e.g., Tingay et al. 1995, Mirabel & Rodriguez 1994), and the well-determined high mass ($`7.0\pm 0.2`$ $`M_{}`$) of the compact object in GRO 1655–40 (Orosz & Bailyn 1997) focused attention on the fact that counterparts of (black-hole) quasars are close by in the Galaxy. Their proximity allows studies with much higher statistics, and their lower (stellar) masses lead to much smaller time constants for motions of matter in the vicinity of the compact object. The time constants scale linearly with mass, e.g., the orbital period of Kepler matter in the innermost stable orbit. Thus a 1–year intensity variation in the vicinity of a $`10^8`$$`M_{}`$ quasar would occur in 3 s in a galactic 10–$`M_{}`$ microquasar. Other galactic x-ray sources are known to exhibit evidence for radio jets through episodic non-thermal radio emission and/or diffuse emission or resolved jets. These include the long-known Cyg X–3, GX 339–4, Cir X–1, SS433 and Cyg X–1 (see van Paradijs 1995 for references), and also the RXTE discovered or recovered transients XTE J1748–288 (Rupen & Hjellming 1998), GRS 1739–278 (Hjellming et al. 1996, Durouchoux et al 1996) and CI Cam (Hjellming & Mioduszcwski 1998). One of these sources, Cir X–1 most likely contains a neutron star (Tennant, Fabian & Shafer 1986, Shirey, Bradt & Levine 1999), and another, CI Cam, is a symbiotic system. Altogether these sources are a rich resource for the understanding of the role accretion disks play in jet formation. It is fortunate that the superluminal sources GRS 1915+105 and GRO 1655–40 have exhibited extensive activity during the RXTE mission. GRO 1655–40 was active for about 16 months beginning in April 1996, and GRS 1915+105 has been active since the beginning of the mission. The latter source exhibits a variety of states in its long-term variability as measured with the RXTE/ASM (Fig. 7). In the high-statistics data from the RXTE/PCA, its x-ray variability is dramatic and varied, including rapid oscillations, sudden dips, sharp spikes, etc., all accompanied with spectral changes (Fig. 8; Greiner, Morgan & Remillard 1996, Morgan, Remillard & Greiner 1997, Taam, Chen & Swank 1997). Next, I present briefly three areas of substantive progress in microquasar studies. 
### 4.1 Initiation of accretion The sudden turn-on of GRO 1655–40 in x rays was fortuitously monitored in BVRI during the week just prior to the x-ray turn on (Fig. 9; Orosz et al. 1997). A linear increase of flux was seen in all four bands. It began first in the I band, 6.1 d before the commencement of the linear x-ray rise. The increases in the R, V and B bands commenced systematically later with the latter occurring 5.0 days before the x-ray commencement. This sequence suggests that the initiating event of a transient outburst was a wave of instability propagating inward in the disk (Lasota, Narayan & Yi 1996). This is an important breakthrough in the determination of the causes of x-ray nova outbursts. ### 4.2 Accretion-jet correlations There have been clear coincidences between radio/infrared non-thermal flares and x-ray events, both on the longer time scales of the ASM data (Pooley & Fender 1997) and shorter-term events in the PCA data (Pooley & Fender 1997, Eikenberry et al. 1998, Mirabel et al. 1998). The latter type of x-ray event consists of a large x-ray dip ($``$15 minutes) that contains a pronounced spike (Fig. 8, bottom panel). Such an event is associated with an infrared flare and a delayed radio flare as shown in an event captured by Mirabel et al. (1998, Fig. 10). Five and possibly six IR/x-ray coincidences of this type were reported by Eikenberry et al. (1998) and in no case was the coincidence violated! These x-ray dips are repetitive, occurring irregularly at intervals of a half hour or so (Fig. 8). The source may reside in this state for hours to days. This is only one of several oscillatory states in which the source can find itself; see Fig. 8. The spectral evolution of an infrared/radio flare has been shown to represent a single relativistically expanding plasmoid (Eikenberry & Fazio 1997, Mirabel et al. 1998, Fender & Pooley 1998). These IR/radio events are small, i.e., mini flares. A series of them emitted when the source is in this state could give rise to a single large superluminal outburst. It thus appears that the jets are quantized, not continuous, and that RXTE is seeing the “pump” that creates them! X-ray spectral fits (Fig. 11, Swank et al. 1997) during these events show a softening of the disk-black body component, which can be interpreted as the disappearance of the inner part of the disk as proposed by Belloni et al. (1997a,b). Thereafter, the gradually increasing temperature and decreasing radius of the disk component would represent the refilling of the disk. The power-law component suddenly softens at a sharp x-ray spike near time 1600 s when the disk is nearly full. Mirabel et al. (1998) suggest that this spike is the initiating event of the flare (see the IR flare in Fig. 10). The frequency of the associated low- frequency QPO (Fig. 11) appears qualitatively to track the disk radius as if it were the Kepler frequency at this or an associated radius. But the situation is not this simple given the existence of other QPOs, e.g., 67 Hz, in the system (see Remillard et al. 1999). As noted, GRS 1915+105 exhibits some half dozen states with different temporal/spectral variability, not all of which fit this simple disk-depletion picture. Additional multifrequency studies are needed as are more comprehensive models. ### 4.3 High-frequency QPO in Microquasars The microquasars exhibit quasi periodic oscillations (QPO) that are quite variable in frequency and also some that are relatively stable. 
These QPO have a large potential for probing the physics of the systems. The highest frequencies (Fig. 12), namely 67 Hz in GRS 1915+105 (Morgan, Remillard & Greiner 1997) and 300 Hz in GRO 1655–40 (Remillard et al. 1999), do not drift in frequency. They have led to intriguing speculation about their origins. The high frequencies place them close to the central black hole, and models usually invoke General Relativity. Suggested origins include the innermost stable orbit (Morgan et al. 1997), Lense-Thirring precession (Cui, Zhang & Chen 1998), diskoseismic oscillations (Nowak et al. 1997), and oscillations in the centrifugal barrier (Titarchuk, Lapidus & Muslimov 1998). Some investigators are using these data and models to arrive at the angular momentum of the central black hole. The black-hole mass, $`7.0\pm 0.2`$ $`M_{}`$, of GRO 1655–40 and the 300–Hz oscillations in this source suggest negligible black-hole angular momentum if the oscillations are the Kepler frequency of the innermost stable orbit. On the other hand, if the 300 Hz oscillations are due to Lense-Thirring precession in the inner disk, they imply a maximally rotating black hole (Cui, Zhang & Chen 1998). The latter view gains some support from the measured high disk temperature, which is indicative of the small inner disk radius expected for prograde orbital motion around a maximally rotating black hole (Zhang, Cui & Chen 1997). These conclusions are highly model dependent and therefore uncertain. Nevertheless, it is impressive that the angular momentum of black holes is now being addressed by the community with data from RXTE. This was not dreamed of even a few years ago.

All in all, it is clear that the jet formation processes, the conditions of disk stability, and the formation of the power-law component are being explored with a powerful and effective tool, namely the temporal/spectral/statistical power of RXTE. The behavioral detail now being acquired from microquasars extends well beyond that which can be obtained from the much more distant extragalactic quasars.

## 5 Conclusions

The RXTE is making important strides in the study of compact objects, both galactic and extragalactic, in a wide variety of studies by a large international community of observers. The discovery of coherent 401–Hz pulsations in a low-mass x-ray binary has definitively established an important link in the evolution of neutron stars. The kHz QPO in 18 systems and coherent pulsations during bursts give additional strong indications of neutron-star spins at frequencies of a few hundred Hz. These QPO provide information about the behavior of matter in the immediate vicinity of the neutron star and are placing limits on the possible equations of state of neutron stars.

The temporal/spectral signatures of the various behaviors in microquasars are diverse, yet repeatable and well described with high statistics. They are powerful probes of these systems and should serve as powerful discriminators of models. At the same time, the complexity makes it difficult to construct a comprehensive model of the emission processes. The results currently point toward black-hole masses and angular momenta, the nature of disk instabilities, and the precise events that initiate the jets signified by radio/IR flares. These results clearly have applicability to extragalactic quasars. The temporal variability of x-ray spectra can, in principle, track the changing geometry of the several physical components of the system (e.g., disk and corona).
However this requires that these physical components be securely identified with the spectral components. This is a major challenge now confronting microquasar researchers. RXTE studies are probing phenomena where strong General Relativity is important because of the proximity of the emitting plasmas to the central gravitational object. For example, frame dragging has been invoked for some high frequency QPOs, and the orbital frequency at the innermost stable orbit may have been encountered in LMXB systems. Measurements of these and other GR effects are now within the realm of RXTE capabilities. ## Acknowledgments The author is grateful for the efforts of the entire RXTE team and the many observers whose work has contributed to the productivity of RXTE . He is especially grateful to the staff and students of the RXTE group at M.I.T. for many helpful and stimulating conversations. Helpful comments for this manuscript were provided by R. Remillard and L. Stella. This work was supported in part by NASA under contract NAS5–30612. The author further acknowledges with gratitude the support and hospitality provided to him during his sabbatical year at the Osservatorio Astronomico di Roma. This report was completed while overlooking the “Pines of Rome”. ## 6 References Allen, G. E. et al. 1997, ApJ, 487, L97 Alpar, M. & Shaham, J. 1985, Nature, 316, 239 Alpar, M. Cheng, A., Ruderman, M. & Shaham, J. 1982, Nature, 300, 728 Belloni, T., Mendez, M., King, A. R., van der Klis, M. & van Paradijs, J. 1997a, ApJ, 479, L145 Belloni, T., Mendez, M., King, A. R., van der Klis, M. & van Paradijs, J. 1997b, ApJ, 488, L109 Bildsten, L. 1995, ApJ, 438, 852 Bradt, H. V., Rothschild, R. E. & Swank, J. H. 1993, A&AS, 97, 355 Cantanese, M. et al. 1997, ApJ, 487, L143 Chakrabarty, D. & Morgan, E. 1998 Nature, 394, 346 Corcoran, M. F., Ishibashi, K., Swank, J., Davidson, K., Petre, R. & Schmitt, M. 1997, Nature, 390, 587 Cui, W. 1997, ApJ, 482, L163 Cui, W., Zhang, S. N., Focke, W. & Swank, J. H. 1997, ApJ, 484, 383 Cui, W., Zhang, S, N. & Chen, W. 1998, ApJ, 492, 53 Djorgovski, S. G. et al. 1998, ApJ, 508, L17 Durouchoux P. et al. 1996, IAU Circ. 6383 Eikenberry, S.S. & Fazio, G. G. 1997, ApJ, 475, L53 Eikenberry S. S., Matthews, K., Morgan, E. H., Remillard, R. A. & Nelson, R. W. 1998, ApJ, 494, L61 Fender, G. G. & Pooley, R. P. 1998, MNRAS 300, 573 Ghosh, P. & Lamb, F. K. 1992, in X-ray Binaries & Recycled Pulsars, ed. E. P. J. van den Heuvel & S. A. Rappaport (Dordrecht: Kluwer) 487 Ford, E. C., van der Klis, M., van Paradijs, J., Mendez, M., Wijands, R. & Kaaret, P. 1998, ApJ, 508, L155 Greiner, J., Morgan, E. H. & Remillard, R. A. 1996, ApJ, 473, L107 Groot, P. J., et al. 1998, ApJ, 493, L27 Hellier, C. 1997, MNRAS, 291, 71 Hjellming, R. M. 1996, IAU Circ. 6383 Hjellming, R. M. & Mioduszcwski, A. J. 1998, IAU Circs. 6857, 6862, 6872 in ’t Zand, J. J., Heise, J., Muller, J. M., Bazzano, A., Cocchi, M., Natalucci, L. & Ubertini, P. 1998, Astr. Astrophys., 331, L25 Jahoda, K. et al. 1996, in EUV, X-ray, and Gamma-ray Instrumentation for Space Astronomy VII, ed. O. H. W. Sigmund & M. A. Grummin, Proc. SPIE 2808, 59 Kommers, J. M., Chakrabarty, D. & Lewin, W. H. G. 1998, ApJ, 497, L33 Kouveliotou, C., et al. 1998, Nature, 393, 235 Kreykenbohm, I., et al. 1999, A&A, submitted (astro-ph 9810282) Lasota, J. P., Narayan, R. & Yi, I. 1996, A&A, 314, 813 Lamb, F. K., Shibazaki, N., Alpar, M. & Shaham, J. 1985, Nature, 317, 681 Levine, A. M. 1998, Nucl. Phys. B (Proc. Suppl.), 69/1–3, 196 \[Proc. 
of The Active X-ray Sky , eds. L. Scarsi, H. Bradt, P. Giommi & F. Fiore, North-Holland\] Levine, A. M. et al. 1996, ApJ, 469, L33 Levine, A. M., Morgan, E. & Muno, M. 1998, IAU Circ. 6966 Marshall, F. E. 1998, IAU Circ. 6876 Marshall, F. E., Gotthelf, E. V., Zhang, W., Middleditch, J. & Wang, Q. D. 1998, ApJ, 499, L179 Mendez, M., van der Klis, M., Ford, E. C., Wijnands, R. & van Paradijs, J. 1999 ApJ Letters (in press) Mendez, M., van der Klis, M. & van Paradijs 1998, ApJ, 506, L117 Mendez, M., van der Klis, M., Wijnands, R., Ford, E., van Paradijs, J. & Vaughan, B. 1998, ApJ. 505 , L23. Miller, M. C., Lamb, F. K. & Psaltis, D. 1998, ApJ, 508, 791 Mirabel, I. F. & Rodriguez, L.F. 1994, Nature, 371, 46 Mirabel, I. F., Dhawan, V., Chaty, S., Rodriguez, L. F., Marti, J., Robinson, C. R., Swank, J. H. & Geballe, T. 1998, A&A, 330, L9 Morgan, E. H., Remillard, R. A. & Greiner, J. 1997, ApJ, 482, 993 Nandra, K. et al. 1998, ApJ, 505, 594 Nowak, M. A., Wagoner, R. V., Begelman, M. C. & Lehr, D. E. 1997, ApJ, 477, L91 Orosz, J. A. & Bailyn, C. D. 1997, ApJ, 477, 876 Orosz, J. A., Remillard, R. A., Bailyn, C. D., McClintock, J. E. 1997, ApJ, 478, L83 Pooley, G. G. & Fender, R. P. 1997, MNRAS, 292, 925 Psaltis, D. et al. 1998, ApJ, 501, L95 Radhakrishnan, V. & Srinivasan, G. 1982, Curr. Sci. 51, 1096 Remillard, R. A., Wood, A., Smith, D. & Levine, A. 1997, IAU Circ. 6726; see also IAU Circ. 6728 Remillard, R. A., Morgan, E. M., McClintock, J. E., Bailyn, C. D. & Orosz, J. A. 1999, ApJ, in press (astro-ph 9806049) Rothschild, R. E. et al. 1998, ApJ, 496, 538 Rephaeli, Y. & Gruber, D. 1999, in preparation Rupen, M. P. & Hjellming, R. M. 1998, IAU Circ. 6938 Shirey, R. E., Bradt, H. V. & Levine, A. M. 1999, ApJ (in press) Smith, D. A. et al. 1999, ApJ, in preparation Stella, L. & Vietri, M. 1998, ApJ, 492, L59 Stella, L. & Vietri, M. 1999, PRL, 82, 17 Strohmayer, T. E., Zhang, W., Swank, J. H., Smale, A., Titarchuk, L., Day, C. & Lee, U. 1996, ApJ, 469, L9 Strohmayer, T., Zhang, W., Swank, J. & Lapidus, I. 1998, ApJ, 503, L147 Swank, J. H., Chen, X., Markwardt, C. & Taam, R. 1997, in Proceedings of Accretion Processes in Astrophysics: Some Like it Hot , U. of Md. Oct. 1997, eds. S. Holt & T. Kallman; astro-ph 9801220 Taam, R. E., Chen, X. & Swank, J. H. 1997, ApJ, 485, L83 Titarchuk, L., Lapidus, I., Muslimov, A. 1998, ApJ, submitted, astro-ph 9712348 Tennant, A. F., Fabian, A. C. & Shafer, R. A. 1986, MNRAS, 221, 27p Tingay, S. J. et al. 1995, Nature, 374, 141 Valinia, A. & Marshall, F. E. 1998, ApJ, 505, 134 van der Klis, M. et al. 1996, ApJ, 469, L1 van der Klis, M. et al. 1997, ApJ, 481, L97 van der Klis, M. 1998, Nucl. Phys. B (Proc. Suppl.) 69/1–3(1998)103. \[Proc. of The Active X-ray Sky , eds. L. Scarsi, H. Bradt, P. Giommi & F. Fiore, North-Holland\]. van der Klis, M. 1999, Proceedings of the Third William Fairbank, Rome, June 1998; astro-ph 9812395. van Paradijs, J. 1995, in X-ray Binaries, eds. W. H. G. Lewin, J. van Paradijs & E. P. J. van den Heuvel (Cambridge: Cambridge Univ. Press), 536 Weaver, K. A., Krolik, J. H. & Pier, E. A. 1998, ApJ, 498, 213 Wijnands, R. & van der Klis, M. 1998, Nature, 394, 346 Zhang, S. N., Cui, W., Chen, W. 1997, ApJ, 482, L155 Zhang, W. et al. 1998a, ApJ, 495, L9 Zhang, W., Smale, A., Strohmayer, T. & Swank, J. 1998b, ApJ. 500, L171 ## Figure Captions Fig. 1. Sample of All-Sky Monitor light curves from Mar. 
1996 to June 1998 showing, top to bottom, a microquasar, the flare star CI Cam, two black-hole binaries, probable disk precession in a neutron-star binary, and two faint BL Lac objects. The ordinate is count rate adjusted to the center of the field of view of a single ASM camera; the Crab nebula would yield $`\sim`$75 c/s. (A. Levine, pvt. comm.)
Fig. 2. Power density spectra of 4U 1728–34 in three intensity states. Low frequency QPOs are evident at 20–40 Hz, as are two peaks at $`\sim`$1 kHz which move to higher frequencies as the intensity increases. (From Strohmayer et al. 1996)
Fig. 3. Frequency of the two QPOs at kHz frequencies in 4U 1820–30 as a function of intensity. The saturation suggests that the innermost stable orbit has been reached, but this conclusion has been questioned (see text). (Zhang et al. 1998)
Fig. 4. Constraints on the mass and radius of the neutron star in the non-rotating approximation. The highest frequencies detected limit the neutron star mass to $`\sim`$1.8 $`M_{\odot }`$. If rotation is taken into account, the limit increases to at most 2.2 $`M_{\odot }`$. (Miller, Lamb & Psaltis 1998)
Fig. 5. Dynamic power spectra of two bursts from 4U 1728–34 separated in time by 1.6 y. In each case the frequency settles to a stable asymptotic value near 364.0 Hz, and the two asymptotic frequencies agree to within 0.03 Hz. (From Strohmayer et al. 1998)
Fig. 6. Doppler curve for the 401 Hz pulsar discovered with RXTE. The maximum delay is 63 ms, and the binary orbital period is 2.01 h. (From Chakrabarty & Morgan 1998)
Fig. 7. RXTE/ASM light curve of GRS 1915+105 with hardness ratio (5–12)/(3–5) from Mar. 1996 through Sept. 1998. The marks at the top indicate the times of PCA pointings. (R. Remillard, pvt. comm.)
Fig. 8. Three types of variability of GRS 1915+105 in RXTE/PCA data. (E. Morgan, pvt. comm.)
Fig. 9. Precursor outburst activity in GRO J1655–40. The source intensity is shown in the optical (BVRI bands) and in the delayed x-ray flux. The onset times become progressively later as the radiation band hardens. (Orosz et al. 1997)
Fig. 10. Large x-ray dip and spike with simultaneous radio and infrared flares in GRS 1915+105. Other x-ray dips of this type are shown in the bottom panel of Fig. 8. (Mirabel et al. 1998)
Fig. 11. X-ray character of GRS 1915+105 during and near a dip+spike event, from RXTE data. Top to bottom: x-ray light curve, inner-disk temperature, inner-disk radius, photon index of the power-law component, dynamic power spectrum. (Swank et al. 1998)
Fig. 12. Power density spectra of two microquasars showing the high-frequency and apparently stable QPOs at 67 Hz and 300 Hz. (Morgan et al. 1997; Remillard et al. 1999)
# Elementary mechanisms governing the dynamics of silica \[ ## Abstract A full understanding of glasses requires an accurate atomistic picture of the complex activated processes that constitute the low-temperature dynamics of these materials. To this end, we generate over five thousand activated events in silica glass, using the activation-relaxation technique; these atomistic mechanisms are analysed and classified according to their activation energies, their topological properties and their spatial extend. We find that these are collective processes, involving ten to hundreds of atoms with a continuous range of activation energies; that diffusion and relaxation occurs through the creation, annihilation and motion of single dangling bonds; and that silicon and oxygen have essentially the same diffusivity. \] Glassiness is a dramatic slowing down of the kinetics of a liquid as the temperature decreases below some typical value. Experiments have yielded considerable information about the macroscopic character of this phenomenon, but very few techniques provide the local probe needed to understand its microscopic origin . On the theoretical side, significant progress has been made recently in understanding the supercooled region, but little is known about the atomistic nature of the relaxation and diffusion dynamics taking place at temperatures below the glass transition . Using a new Monte Carlo technique, the activation-relaxation technique, we map in detail the activated processes of g-SiO<sub>2</sub> taking place at low temperatures. The activation-relaxation technique (ART) is a method that allows an efficient sampling of activated processes (events) in complex continuous systems . Moves are defined directly in the configurational energy landscape and can reach any level of complexity required by the dynamics; they can involve hundreds of atoms crossing barriers as high as 25 eV. In a two-step process, a configuration is first brought from a local minimum to an adjacent saddle point and then relaxed to a new minimum. Such an event is shown in figure 1. Each event is accepted or rejected following a standard Metropolis procedure. In this work, we study two independent runs on 1200-atom cells of SiO<sub>2</sub>, modeled with the screened-Coulomb potential of Nakano et al. which has been shown to give realistic structures and a good account of a number of dynamical properties. We prepare these runs starting from randomly packed unit cells, and relax them through 5000 ART iterations. This procedure ensures absence of correlation, both with the crystalline state as well as between runs. After relaxation, a further 5000 ART iterations are performed on each cell. Slightly more than half of these iterations show a clean convergence to a saddle point, providing a database of 5645 events. An analysis of these events can give us a unique glimpse at the basic nature of activated mechanisms in this material. We checked for systematic effects caused by the initialization procedure or by the potential used: a comparison with events from a shorter run, starting from an MD-prepared 576-atom sample, indicates that the nature of the events is independent of the preparation mode; a comparison with events from a shorter run in which the van Beest potential was used, with parameters as in , indicates that, unless stated otherwise, the results presented here are at least qualitatively similar between these potentials. 
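To illustrate the two-step activation–relaxation procedure and the Metropolis acceptance step described above, the following is a minimal Python sketch on a toy two-dimensional double-well potential. The toy potential, the force-inversion activation prescription, the convergence criteria and the fictitious temperature are all illustrative assumptions; they do not correspond to the actual implementation or to the SiO<sub>2</sub> interaction potentials used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy two-dimensional double well standing in for the SiO2 potential.
    return (x[0] ** 2 - 1.0) ** 2 + 0.5 * x[1] ** 2

def force(x, h=1.0e-5):
    # Numerical gradient; a real implementation would use analytic forces.
    f = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        f[i] = -(energy(xp) - energy(xm)) / (2.0 * h)
    return f

def relax(x, steps=5000, dt=1.0e-2, max_step=0.05):
    # Steepest-descent relaxation towards the nearest local minimum.
    for _ in range(steps):
        x = x + np.clip(dt * force(x), -max_step, max_step)
    return x

def activate(x_min, alpha=1.5, dt=1.0e-2, max_step=0.05, steps=5000):
    # Push the configuration out of the harmonic basin along a random
    # direction, then follow a modified force whose component parallel to
    # the displacement from the minimum is inverted, so the system climbs
    # towards a saddle point while relaxing in the transverse directions.
    d = rng.normal(size=x_min.shape)
    x = x_min + 0.1 * d / np.linalg.norm(d)
    for _ in range(steps):
        dx = x - x_min
        dx_hat = dx / np.linalg.norm(dx)
        f = force(x)
        f_mod = f - (1.0 + alpha) * np.dot(f, dx_hat) * dx_hat
        x = x + np.clip(dt * f_mod, -max_step, max_step)
        if np.linalg.norm(force(x)) < 1.0e-4:  # crude saddle criterion
            break
    return x

def art_event(x_min, kT=0.05):
    # One activation-relaxation event followed by a Metropolis test.
    x_new = relax(activate(x_min))
    dE = energy(x_new) - energy(x_min)
    if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
        return x_new, True
    return x_min, False

x = relax(np.array([0.9, 0.3]))
for _ in range(10):
    x, accepted = art_event(x)
print("final configuration:", x, "energy:", energy(x))
```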
The efficiency of ART does not depend directly on the height of the activation barrier or the complexity of the move. The likelihood for a particular event to be sampled by ART, however, is not clearly related to the preferences of nature; entropic considerations, for instance, are left out. This draw-back is not present in molecular dynamics (MD). However, the time scales accessible to MD are limited by the phonon time scale: events in g-SiO<sub>2</sub> can only be generated with some efficiency if the simulations are performed at elevated temperatures of 4000 K or more ; events sampled at these high temperatures are likely to provide an incomplete representation of those occurring below the glass transition. Our approach is therefore to generate a whole distribution of events with ART, and to obtain an overview of the possible types of activated mechanisms in g-SiO<sub>2</sub>, by classifying these in terms of energy, defects or topological changes. Further study of the details of the energy landscape will be necessary in order to simulate the dynamics of low temperature configurations. During the events acquisition, the configurational energy decreases by about 30 meV per atom. The density of coordination defects fluctuates but does not show a clear trend. With the Nakano potential, the samples have roughly the right density for g-SiO<sub>2</sub> and about one percent of the bonds between O and Si are missing, compared to perfect coordination. The defects produced are almost uniquely dangling bonds; we do not find any homopolar bond nor, consequently, any superoxide radical or Frenkel pair . The van Beest potential , that we used for comparison, tends to produce dense phases and its cut-off needs to be finely tuned to get the right density. Even in that case, it favors overcoordination . We first look at properties averaged over the whole database. Figure 2 shows broad and continuous distributions of the activation barriers and the energy asymmetries (initial to final minimum energy difference). We find that the height of the barrier and the asymmetry of the well are only weakly correlated. Besides the barrier, the spatial extend of events is another important quantity. For each event, we determine the number of atoms displaced by more than a threshold distance $`r_c`$. The number of atoms participating in an event depends of course on the value of this threshold. Typically, an event is accompanied by a local volume contraction or expansion. In elastic media, the displacement of the surrounding atoms decreases quadratically away from the center of the event. The number of atoms moving more than a cut-off distance $`r_c`$ will therefore decrease as $`r_{c}^{}{}_{}{}^{3/2}`$, as long as $`r_c`$ is in the elastic regime, and the number of atoms much smaller than the sample size; in our case, this scaling is obeyed between 0.05 and 1 Å. In figure 3 we plot this distribution for a threshold of $`r_c=0.1`$ Å, the typical vibrational amplitude of silicon at room temperature. As can be seen from this figure, events typically involve the motion of hundreds of atoms with simultaneous diffusion of both species to varying degrees: diffusion therefore should not be thought of in terms of elemental jumps but of complex rearrangements. In terms of correlation, larger events are not found to require a higher activation energy: size and energetics are almost entirely uncorrelated. 
Moreover, no correlation is found between the distance by which Si or O move and the corresponding activation barriers; Si and O have thus the same activation energy. These results provide a consistent picture for macroscopic diffusion. Theoretical work by Limoge suggests that the effective activation energy should be around the maximum of its distribution, i.e., here, about 5 eV . A similar activation energy was found by Litton and Garofalini for O and Si in MD simulations of molten silica, although the diffusion mechanism described is different from what we see, probably due to the high temperature of the simulations (from 4800 to 7200 K). Experiments report activation energies of 6.0 eV for Si , obtained in electrically fused quartz, and 4.7 eV for O, with a much smaller prefactor , obtained in vapor-phase deposited amorphous silica. The difference in experimental activation energies might be caused by the different sample preparation techniques, resulting in different types of impurities . More microscopic information on the nature of the events can be obtained by studying the topology of the network. For this purpose, we divide the events into three distinct categories: perfect events where only perfectly coordinated atoms change neighbors, conserved events that involve only diffusion of coordination defects (dangling and floating bonds) and events that create or anneal coordination defects. Amorphous Si and g-SiO<sub>2</sub> are thought to be conceptually similar, both described by Zachariasen’s continuous random networks. However, while perfect events play a central role for both relaxation and self-diffusion in a-Si , they are rare in g-SiO<sub>2</sub>: the strong ionicity of SiO<sub>2</sub> enforces chemical ordering, so that atomic exchanges have to occur at the second neighbor level, inducing more strain or larger topological rearrangements than in a-Si. Perfect events comprise only about one percent of the total number produced. A third of these events involves local topological rearrangements, mostly two Si exchanging a pair of neighboring O. Such moves can only happen with a relatively low energy barrier if the local rigidity of the network is reduced by nearby undercoordinated atoms. Two thirds of the perfect events do not involve a topological modification but simply some slight local rearrangement, with displacements on the order of $`0.1`$ Å and asymmetries of about $`10^4`$ eV. Such events could be candidates for tunneling states. We find 906 conserved events, i.e., events describing the diffusion of defects. Such defects are almost exclusively dangling bonds, on both Si ($`E^{}`$ centers) and O (non-bridging oxygens), although a few highly energetic floating bonds on O are also present. We see no sign of point defects, which would show themselves by a strong spatial correlation between dangling or floating bonds. Events describe overwhelmingly single-dangling-bond diffusion mechanisms. The simplest of these is a jump of a dangling bond from one atom to its neighbor, an example of which is given in Figure 1. More complex events are also seen, involving jumps to the second or third neighbor, or local rearrangements along a loop. All these mechanisms have relatively well defined barriers and asymmetries. From their statistics we can obtain structural information. For instance, a comparison of near-neighbor dangling bond diffusion involving different topological rearrangements shows that the average cost of creating a 3-fold ring in silica is $`1.5\pm 0.2`$ eV. 
This value is larger than the $`0.250.81`$ eV of ab-initio calculations on fully relaxed molecules , suggesting that the local strain on the network caused by topological disorder can affect significantly their effective energy. More than 80 percent of the events produced involve the creation or the annihilation of coordination defects, with a wide spectrum of energies and configurations. Events with a low barrier and asymmetry, the ones determining the dynamics, are often topologically simple, like the annihilation of one or two pairs of dangling bonds or their creation. In effect, the creation (or annihilation) of a pair of defect costs (saves) much less energy than would be naively thought by simply considering the breaking of a bond in a crystal or a molecule: the elastic energy stored in the network will often counter the bonding energy. Contrary to what is found in crystalline silica, the creation of a defect in the glass can have an activation energy and asymmetry that is comparable to those associated with their diffusion. For example, creating a pair of dangling bonds in order to remove a 3-fold ring costs only about 0.4 eV, much less than what would be expected in an unstrained environment. The above results provide the following picture regarding relaxation and diffusion in g-SiO<sub>2</sub>. Mechanisms responsible for relaxation and diffusion in g-SiO<sub>2</sub> are the creation, diffusion and annihilation of coordination defects, and can require the collective displacement of hundreds of atoms. The types of defects that dominate the dynamics are dangling bonds, either attached to a Si atom ($`E^{}`$ centers), or to an O atom (non-bridging oxygens); a pair of these defects can easily be created and annealed, with an activation energy that is often similar to what is required for the diffusion of these defects. Moreover, all these mechanisms involve O and Si with almost equal weight, indicating that the two species should diffuse with roughly the same activation barrier. These elementary mechanisms are fundamentally different from those found in amorphous silicon, which underlines the rich diversity in the microscopic dynamics of network glasses. Acknowledgements. Part of the calculations were carried out on the CRAY T3E of HPAC. This work is supported in part by the “Stichting voor Fundamenteel Onderzoek der Materie (FOM)”, which is financially supported by the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)”, and by the NSF under grant DMR 9805848.
# Variability in Blazars ## 1 Introduction Ejection of relativistic plasma from a compact central engine is thought to account for the appearance and observational properties of a number of fascinating systems in astronomy, including galactic black hole jet sources, radio galaxies, quasars, and gamma-ray bursts (GRBs). Several arguments have led to this conclusion, perhaps the most important being measurements of apparent transverse superluminal motion in multi-epoch VLBI observations of radio quasars at the sub-pc scale. Apparent transverse speeds exceeding $`10`$c are found in many sources (e.g., Vermeulen & Cohen 1994). The interpretation of these observations in terms of bulk plasma outflow is not conclusive, however, as this effect could be related to the pattern speed of the emitting regions rather than to bulk plasma ejection. Another argument for relativistic plasma outflow yields a lower limit to the Doppler factor $`𝒟`$ from the measured radio flux density, the angular diameter of the radio emission region, and the upper limit of the self-Compton X-ray flux (e.g., Marscher 1987; Ghisellini 1989). An accurate measurement of $`𝒟`$ through this method requires contemporaneous radio and X-ray measurements and, moreover, an accurate determination of the radio self-absorption frequency. These conditions are met only rarely, but do point to relativistic motions in some flat-spectrum radio quasars. Other tests for beaming try to establish conditions for the impossibility of intense, rapidly variable emission from stationary radiation sources. The argument of Elliot & Shapiro (1974) contrasts the range of allowed black hole masses for luminosities governed by Eddington-limited accretion, and variability time scales constrained by the light-travel time across a region with dimensions corresponding to the Schwarzschild radius of the black hole. A stationary, Eddington-limited emitting region is not possible if $`L_{48}/\mathrm{\Delta }t(\mathrm{days})1`$, where $`10^nL_n`$ ergs s<sup>-1</sup> and $`\mathrm{\Delta }t(\mathrm{days})`$ are the observed luminosities and variability time scales in days, respectively. Klein-Nishina corrections must be applied (Dermer & Gehrels 1995) for observations at $`\gamma `$-ray energies. Opaqueness of the emitting region to $`\gamma `$-$`\gamma `$ attenuation has also been used to argue in favor of beaming (Maraschi et al. 1992). In its simplest form, the constraint from the compactness parameter $`\mathrm{}=4\pi m_ec^3/\sigma _T=6.6\times 10^{29}`$ ergs s<sup>-1</sup> cm<sup>-1</sup> implies that if $`L_{45}/\mathrm{\Delta }t(\mathrm{days})1`$, then beaming is implied. Here the luminosity refers to emission near 1 MeV where the pair attenuation cross section is largest; consequently this test is most sensitive for observations near 1 MeV. Otherwise, assumptions about the cospatial origin of gamma rays and lower energy radiation must be justified by observations of correlated variability, since the cross section of $`\stackrel{>}{}\mathrm{\hspace{0.25em}100}`$ MeV photons with each other is negligible (Dermer & Gehrels 1995). Correlated X-ray and TeV observations of Mrk 421 (Macomb et al. 1995; Buckley et al. 1996) and Mrk 501 (Catanese et al. 1997) have demonstrated that 2-10 keV X-rays and $`\stackrel{>}{}\mathrm{\hspace{0.25em}300}`$ GeV $`\gamma `$ rays originate from the same region, verifying the cospatial assumption for these sources. 
Their luminosities are not large enough to establish beaming through $`\gamma `$-$`\gamma `$ transparency arguments, but can be used to determine the mean magnetic field $`H`$ in the emitting region and establish a lower limit to the Doppler factor $`𝒟`$ through a newly proposed beaming test. This test is described in more detail below. Because $`\gamma `$-ray observations probe the region nearest the black hole, it is important to critically examine these tests. An early hope was that such measurements could discriminate between accelerating and decelerating jet models (e.g., Marscher 1999) by charting the variation of $`𝒟`$, thereby revealing whether the evolution of a blazar flare is accompanied by a prompt phase of Doppler variation. The possibility that the Doppler factor of the emitting region can change, however, introduces intrinsic variability which must be distinguished from variability produced by radiative cooling of the emitting particles. It is therefore important to consider processes which change the bulk Lorentz factor of the radiating plasma. This is done in Section 2. In Section 3, we present numerical simulation results showing the effects of bulk plasma (or plasmoid) deceleration. Implications for beaming tests and blazar models are discussed in Section 4. ## 2 Blast-Wave Physics Crucial for understanding variability behavior in blazars is to treat the injection of relativistic nonthermal particles in the comoving plasma fluid frame properly. The blast-wave physics developed to model GRB afterglows (e.g., Vietri 1997; Waxman 1997; Wijers et al. 1997) offers a solution to the particle injection problem, and provides a method to deal with the deceleration of the emitting plasma. The basic idea is that the energy of the injected nonthermal particles comes at the expense of the directed bulk kinetic energy of the fluid. The variation of the bulk Lorentz factor $`\mathrm{\Gamma }`$ of the radiating fluid can be simply obtained in a one-zone approximation through a momentum conservation equation (Dermer & Chiang 1998). Suppose that the system produces an outflow with total energy $`E_0`$ and initial bulk Lorentz factor $`\mathrm{\Gamma }_0`$. Because most of the energy of the flow is bound up in the kinetic energy of baryons, assumed here to be protons, then $`E_0=\mathrm{\Gamma }_0N_{\mathrm{th}}m_pc^2`$, where $`N_{\mathrm{th}}`$ is the total number of protons. Thus $`\mathrm{\Gamma }_0`$ represents the baryon loading of the system. It is straightforward to write a conservation equation for the radial (or $`\widehat{x}`$) momentum component of the fluid, given by $$\mathrm{\Pi }_x(x)=m_pcP\{(1+a)N_{\mathrm{th}}+_0^{\mathrm{}}𝑑p\gamma [N_{\mathrm{pr}}(p;x)+aN_\mathrm{e}(p;x)]\},$$ $`(1)`$ where $`am_e/m_p`$, $`PB\mathrm{\Gamma }=\sqrt{\mathrm{\Gamma }^21}`$, and $`p=\beta \gamma =\sqrt{\gamma ^21}`$. The functions $`N_\mathrm{k}(p;x)dN_\mathrm{k}(p;x)/dp`$ represents the comoving distribution functions of particles of type k = pr (protons) or k = e (electrons) at location $`x`$. This expression assumes no particle escape. Eq. (1) indicates that the momentum of the bulk plasma consists of both the inertia from the thermal protons associated with the baryon loading of the explosion, and the inertia bound up in the nonthermal proton and electron distributions. The latter functions evolve when nonthermal particles are injected into the plasma and when the energy of the nonthermal particles is radiated. 
As a plasmoid or blast wave traverses the surrounding medium, it intercepts and sweeps up material. A proton and electron pair is captured by the plasmoid with a Lorentz factor $`\mathrm{\Gamma }`$ in the comoving plasma frame. The plasmoid captures protons and electrons from the surrounding medium at the rate $$dN_{\mathrm{pr},\mathrm{sw}}(p,x)/dx=dN_{\mathrm{e},\mathrm{sw}}(p,x)/dx=n_{\mathrm{ext}}(x)A(x)\delta (pP),$$ $`(2)`$ The quantity $`n_{\mathrm{ext}}(x)`$ is the density of particles in the surrounding medium, and $`A(x)`$ is the cross-sectional area of the plasmoid which is effective at sweeping up material from the external medium. The power of nonthermal particle kinetic energy injected into the comoving frame is simply $$\dot{E}_{\mathrm{ke}}=m_pc^2_0^{\mathrm{}}𝑑p(\gamma 1)[\dot{N}_{\mathrm{pr},\mathrm{sw}}(p,x)+a\dot{N}_{\mathrm{e},\mathrm{sw}}(p,x)],$$ $`(3)`$ where the time derivative refers to time in the comoving frame. The distance $`\delta x`$ traveled during the comoving time interval $`\delta t`$ is $`\delta x=\delta t/(B\mathrm{\Gamma }c)`$. Eqs. (2) and (3) therefore imply $$\dot{E}_{\mathrm{ke}}=m_pc^2B\mathrm{\Gamma }(\mathrm{\Gamma }1)(1+a)cn_{\mathrm{ext}}(x)A(x)$$ $`(4)`$ (Blandford & McKee 1976). It is important to note that the fraction $`1/(1+a)=`$ 99.95% of the energy initially injected into the comoving frame is carried by protons, though plasma processes can be effective at transforming this energy to electrons or magnetic field. By using eq. (1) to write $`\mathrm{\Pi }_x(x+\delta x)`$, to which is added a term $`(d\mathrm{\Pi }_x^{\mathrm{rad}}/dx)\delta x`$ to account for radiation losses, one obtains an equation of motion for the dynamics of the blast wave by expanding to first order in $`\delta x`$ and using momentum conservation, i.e., $`\mathrm{\Pi }_x(x+\delta x)=\mathrm{\Pi }_x(x)`$. It is $$\frac{dP(x)/dx}{P(x)}=\frac{n_{\mathrm{ext}}(x)A(x)\mathrm{\Gamma }(x)}{(1+a)N_{\mathrm{th}}+_0^{\mathrm{}}𝑑p\gamma [N_{\mathrm{pr}}(p,x)+aN_\mathrm{e}(p;x)]}.$$ $`(5)`$ If external Compton scattering processes operate, an additional term must be added to take into account the momentum impulse from the scattered external photons (Böttcher & Dermer 1999). Eq. (5) is the basic equation for calculating the dynamics of a plasmoid by sweeping up material from the surrounding medium, and can be solved in a number of limiting cases. In the relativistic limit ($`\mathrm{\Gamma }1`$) and the blast-wave case where the area $`A(x)x^2`$, there are two important regimes: the adiabatic (or non-radiative) and radiative regimes, where the swept-up particles retain all or none of their kinetic energy, respectively. Considering the simplest case where the density of the external medium can be parameterized by the expression $`n_{\mathrm{ext}}(x)=n_0(x/x_d)^\eta `$, one finds that $`\mathrm{\Gamma }(x)\mathrm{\Gamma }_0`$ for $`xx_d`$, and $`\mathrm{\Gamma }(x)x^g`$ for $`xx_d`$, where $`g=3\eta `$ and $`g=(3\eta )/2`$ in the adiabatic and radiative regimes, respectively. The deceleration radius $$x_d=[\frac{(3\eta )E_0}{4\pi f_bn_0\mathrm{\Gamma }_0^2m_pc^2}]^{1/3}$$ $`(6)`$ (e.g., Rees & Mészáros 1992), and represents the characteristic distance beyond which the behavior changes from a coasting solution to a decelerating solution. The term $`f_b`$ represents the fraction of the full sky into which the explosion energy is ejected. 
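As an illustration of the scales involved, the deceleration radius of eq. (6) can be evaluated directly. The sketch below (the function name is ours) uses the parameters adopted for the simulation in section 3: $`E_0=10^{48}`$ ergs, $`\mathrm{\Gamma }_0=40`$, $`f_b=0.1`$ and a uniform external density of 0.01 cm<sup>-3</sup>.

```python
import math

M_P_C2_ERG = 1.503e-3   # proton rest energy [erg]

def deceleration_radius_cm(E0_erg, gamma0, n0_cm3, f_b, eta=0.0):
    """Deceleration radius of eq. (6), in cm."""
    return ((3.0 - eta) * E0_erg /
            (4.0 * math.pi * f_b * n0_cm3 * gamma0 ** 2 * M_P_C2_ERG)) ** (1.0 / 3.0)

# Illustrative evaluation with the parameters of the simulation in section 3:
# E0 = 1e48 erg ejected into 10% of the sky (f_b = 0.1) with Gamma0 = 40
# into a uniform external medium of density 0.01 cm^-3.
x_d = deceleration_radius_cm(E0_erg=1.0e48, gamma0=40.0, n0_cm3=0.01, f_b=0.1)
print(f"x_d = {x_d:.2e} cm")   # a few times 1e16 cm for these parameters
```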
For intermediate radiative regimes where a fraction $`\zeta `$ of the swept-up energy is retained in the blast wave, so that a fraction $`(1\zeta )`$ is dissipated as radiation or lost through escape, Dermer et al. (1999) show that $`g=(3\eta )/(1+\zeta )`$. In general, one may describe the dynamics of a blast-wave which decelerates by sweeping up material from a surrounding medium which is distributed according to the relation $`n_{\mathrm{ext}}(x)x^\eta `$ by an expression of the form $$\mathrm{\Gamma }(x)\mathrm{\Gamma }_0/[1+(x/x_d)^g].$$ $`(7)`$ The relationship between the location $`x`$ of the blast wave and the observing time $`t_{\mathrm{obs}}`$ can be obtained by noting that the radiating element travels a distance $`\delta x=𝒟\mathrm{\Gamma }Bc\delta t_{\mathrm{obs}}/(1+z)`$ during the observing time interval $`\delta t_{\mathrm{obs}}`$, where $`z`$ is the redshift of the source, $`𝒟=[\mathrm{\Gamma }(1B\mathrm{cos}\theta )]^1`$ is the Doppler factor, and $`\theta `$ is the angle between the direction of motion of the radiating element (or the jet axis) and the observer’s direction. From this relation, one can show that $`xt_{\mathrm{obs}}`$ when $`xx_d`$, and $`xt_{\mathrm{obs}}^{1/(2g+1)}`$ when $`xx_d`$. The above expressions are sufficient to treat analytically the basic effects from blast wave deceleration and energization through the sweep-up process. Assuming that the fraction $`(1\zeta )`$ of the swept-up power is dissipated in the form of radiation, then the radiated power in the comoving frame at $`x`$ is $`\dot{E}(1\zeta )\mathrm{\Gamma }^2(x)n_{\mathrm{ext}}(x)A(x)`$. The received power from a portion of the blast wave directed along the line-of-sight to the observer is equal to $`\dot{E}`$ amplified by a factor $`\mathrm{\Gamma }^2(x)/(1+z)^2`$ due to the transformations of energy and time. If the observer is outside the Doppler cone of the plasmoid, the emission is weak until the radiating plasma has slowed down sufficiently so that the Doppler cone intercepts the line of sight. At later times, the emission approaches the behavior found in the case where the observer’s line-of-sight intercepts the radiating region. If $`\psi `$ denotes the opening angle of the plasmoid (or jet), then two limits are important when the area of the plasmoid increases $`x^2`$. From the above discussion, the received bolometric power $`P(t_{\mathrm{obs}})(1\zeta )\mathrm{\Gamma }^4(x)n_{\mathrm{ext}}(x)A(x)/(1+z)^2`$. The dynamics of the blast wave changes from a coasting solution to a decelerating solution when it passes the deceleration radius $`x_d`$, which occurs at the observing time $$t_d=\mathrm{\Gamma }_0(1B_0\mathrm{cos}\theta )(1+z)x_d/(c\mathrm{\Gamma }_0)(1+z)x_d/(2\mathrm{\Gamma }_0^2c);$$ $`(8)`$ here $`B_0=\sqrt{1\mathrm{\Gamma }_0^2}`$ and the right-hand expression of eq. (8) refers to the case when the plasmoid is directed along the line-of-sight. For the case $`\theta \stackrel{<}{}\psi `$, $`P_p(t_{\mathrm{obs}})t_{\mathrm{obs}}^{2\eta }`$ when $`t_{\mathrm{obs}}t_d`$, and $`P_p(t_{\mathrm{obs}})t_{\mathrm{obs}}^{(2\eta 4g)/(2g+1)}`$ when $`t_{\mathrm{obs}}t_d`$. For observations at $`\theta \stackrel{>}{}\psi `$, the emission from the blast wave begins to intercept the observer’s line-of-sight when $`\theta =1/\mathrm{\Gamma }(x)`$, which occurs when $`t_{\mathrm{obs}}t_d(\mathrm{\Gamma }_0\theta )^{(2g+1)/g}`$. At this time, the received power is nearly equal to the value supposing that $`\theta \stackrel{<}{}\psi `$. 
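These scalings are compact enough to evaluate directly. The short sketch below (function names are ours) implements the deceleration profile of eq. (7), the on-axis deceleration time of eq. (8), and the rise and decay indices of the received bolometric power; the decay index is written so that it reproduces the adiabatic and radiative slopes for a uniform medium quoted in the next paragraph.

```python
C_CM_S = 2.998e10  # speed of light [cm/s]

def bulk_lorentz_factor(x, gamma0, x_d, g):
    """Deceleration profile of eq. (7): Gamma(x) = Gamma0 / [1 + (x/x_d)^g]."""
    return gamma0 / (1.0 + (x / x_d) ** g)

def deceleration_time_s(x_d_cm, gamma0, z=0.0):
    """On-axis deceleration time of eq. (8): t_d ~ (1 + z) x_d / (2 Gamma0^2 c)."""
    return (1.0 + z) * x_d_cm / (2.0 * gamma0 ** 2 * C_CM_S)

def rise_index(eta):
    """Bolometric rise for t_obs << t_d: P(t_obs) ~ t_obs^(2 - eta)."""
    return 2.0 - eta

def decay_index(eta, g):
    """Bolometric decay for t_obs >> t_d: P(t_obs) ~ t_obs^[(2 - eta - 4g)/(2g + 1)],
    from P ~ Gamma^4 n_ext A with Gamma ~ x^-g and x ~ t_obs^(1/(2g+1))."""
    return (2.0 - eta - 4.0 * g) / (2.0 * g + 1.0)

# Uniform medium (eta = 0): adiabatic (g = 3/2) and fully radiative (g = 3)
# blast waves give decay indices of -1 and -10/7 respectively.
for label, g in [("adiabatic", 1.5), ("radiative", 3.0)]:
    print(label, "rise:", rise_index(0.0), "decay:", round(decay_index(0.0, g), 3))

# Deceleration profile and observer-frame deceleration time for the
# illustrative x_d of the previous sketch (about 4.6e16 cm, Gamma0 = 40).
print("Gamma(2 x_d) =", round(bulk_lorentz_factor(9.2e16, 40.0, 4.6e16, 1.5), 1))
print("t_d [s] =", round(deceleration_time_s(4.6e16, 40.0)))
```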
We therefore see that the simplest model employing blast-wave physics shows how a plasmoid is energized by sweeping up material and converting it into nonthermal particle energy in the comoving frame (see Dermer et al. 1999 for more details). For a jet directed along the observer’s line-of-sight (which is generally thought of as the standard model for blazars), the sweeping-up process produces a flare with bolometric flux rising $`t_{\mathrm{obs}}^{2\eta }`$. After a sufficient amount of material has been swept up to cause the blast wave or plasmoid to decelerate, the Doppler deboosting overpowers the additional energization to cause the received flux to decay $`t_{\mathrm{obs}}^{(2\eta 4g)/(2g+1)}`$. For adiabatic ($`g=3/2`$) and radiative ($`g=3`$) blast waves in a uniform surrounding medium with $`\eta =0`$, the light curves decay $`t_{\mathrm{obs}}^1`$ and $`t_{\mathrm{obs}}^{10/7}`$, respectively. The decaying flux is entirely a consequence of the decreasing Doppler factor. Thus it is not valid to interpret a decaying flux as evidence for cooling of the emitting particles without a discriminant between the effects of cooling and deceleration. ## 3 Numerical Calculations Using the code described in Chiang & Dermer (1999), we can illustrate the effects described in the previous section for a pure synchrotron flare. In the simulation shown in Fig. 1, we let the central engine eject $`10^{48}`$ ergs of plasma with a baryon loading given by $`\mathrm{\Gamma }_0=40`$ into 10% of the full sky. Thus the opening angle of the jet is $`11.5^{}`$. The jet plasma passes through a medium with a uniform density of 0.01 cm<sup>-3</sup>, and as it sweeps up this material it converts it with high efficiency into nonthermal power-law electrons with a number injection index $`dN/d\gamma \gamma ^3`$. The energy of the injected electrons ranges between $`\gamma =\mathrm{\Gamma }`$ and 1% of the maximum energy given by balancing the electron synchrotron loss time scale and the Larmor time scale in a magnetic field. We assume that the magnetic field energy density is 10% of the downstream energy density of the swept-up particles. This represents a magnetic field of 0.5 Gauss during the phase before the blast wave begins strongly to decelerate. Electron synchrotron cooling is taken into account in the calculation, but makes only a small contribution to the variability shown here, which is due overwhelmingly to energization of the plasmoid by sweeping up particles, and to the subsequent deceleration and Doppler deboosting that results from this process. The general progress of the flare for the given parameters is to rise rapidly at X-ray and soft $`\gamma `$-ray energies. The flare then sweeps to lower energies on a much longer time scale. Note that the flare reaches larger $`\nu F_\nu `$ peak fluxes at higher energies where it is most variable. At lower photon energies, the variability is less extreme, and the peak $`\nu F_\nu `$ value reached is lower. The dashed curve in Fig. 1 shows a one-day time integration over the flaring emission. As can be seen, the time-integrated spectrum is much softer than the time-resolved spectra because it represents a superposition of hard spectra which peak at successively lower energies. When fitting blazar flare data using a model such as described here, it is therefore necessary to perform time-integrations over the flaring emission appropriate to the sampling time of the detector. 
Because gamma-ray telescopes require long observing periods to accumulate sufficient statistics, this is especially important for jointly fitting hard X-ray/soft $`\gamma `$-ray data, or MeV-GeV flares resulting from the synchrotron self-Compton (SSC) or external Compton scattering process. Fig. 2 shows light curves for the model synchrotron flare shown in Fig. 1 at X-ray, optical and radio frequencies, both along the jet axis ($`\theta =0^{}`$; thick curves), and at $`20^{}`$ to the jet axis (thin curves). When observing along the jet axis, the peak $`\nu F_\nu `$ flux measured at higher frequencies is much greater than the peak $`\nu F_\nu `$ flux measured at lower frequencies. By contrast, when observing outside the opening angle of the jet, i.e., when $`\theta >\psi `$, one sees that the range of peak fluxes becomes much less. Consequently, a flux-limited telescope observing at higher photon energies will be much more likely to detect beamed sources along the jet axis than a flux-limited telescope sensitive to lower energies, which will detect on-axis and off-axis sources with nearly equal likelihood. This effect is a consequence of the energization and deceleration of the radiating plasma which causes the received emission to sweep to lower energies at late times, and has important implications for any statistical analyses of jet sources. In usual statistical treatments, one generally assumes that the relative flux observed at different angles to the jet axis is governed by the factor $`𝒟^{3+\alpha }`$, independent of photon energy (here $`\alpha `$ is the energy index of the flux density $`F_\nu \nu ^\alpha `$; see, e.g., Urry & Padovani 1995). As shown here, the situation is much more complicated for flaring sources, which blazars most certainly are. A flux-limited X-ray survey primarily detects X-ray jet emission from aligned sources, with off-axis jet sources being too faint to detect at X-ray energies. By contrast, a flux-limited survey at radio energies will detect on-axis and off-axis sources at comparable flux levels. Thus we expect and do see the parent population of radio quasars, namely the radio galaxies. The parent population of X-ray selected BL Lac objects, by contrast, are at such a low flux level to hardly be detectable. Fig. 3 shows the integrated energy fluence measured from a blazar flare at different photon energies, illustrating the effect just described. We have considered only the synchrotron emission up to this point, but the same behavior operates for the SSC flux (Dermer, Mitman, & Chiang 1999, in preparation). Variability will be greatest at higher photon energies, where the largest $`\nu F_\nu `$ fluxes are reached for on-axis sources. The variability time scale will be longer and the flux roughly equal for aligned and misdirected blazars at lower photon energies. This same general behavior probably applies to external Compton scattering emission as well, though additional complications from the narrower beaming cone of ECS compared to the synchrotron and the SSC processes (Dermer 1995) must be taken into account in this case. ## 4 Beaming Tests and Blazar Models As noted in the Introduction, the correlated X-ray and TeV observations provide a new test for beaming in blazars. This test has been recently applied to observations of Mrk 501 by Catanese et al. (1997), Dermer (1998), and Kataoka et al. (1999). 
In this test, it is assumed that the X-ray emission is nonthermal synchrotron emission, and that the variability is a result of synchrotron losses in a magnetic field of mean intensity $`H`$. If the minimum variability time scale measured at energy $`\overline{E}`$ is denoted by $`\delta t_{\mathrm{obs}}^{\mathrm{min}}`$, then $$H(\mathrm{G})0.8\{\frac{(1+z)}{𝒟\overline{E}(\mathrm{keV})[\delta t_{\mathrm{obs}}^{\mathrm{min}}(\mathrm{hr})]^2}\}^{1/3}.$$ $`(9)`$ (e.g., Tashiro et al. 1995; Takahashi et al. 1996). An upper limit on the mean magnetic field $`H`$ in the emitting region is implied because the electrons producing the highest energy synchrotron emission have Lorentz factors $`\stackrel{>}{}\mathrm{\hspace{0.25em}2}\times 10^6E_\mathrm{C}(\mathrm{TeV})(1+z)/𝒟`$, where $`E_\mathrm{C}(\mathrm{TeV})`$ is the measured energy in TeV of the highest-energy gamma-rays. Synchrotron emission correlated with the TeV flux requires that the electrons radiate in a magnetic field at least as great as $`H(\mathrm{G})11ϵ_{\mathrm{obs},\mathrm{syn}}𝒟/[E_\mathrm{C}^2(\mathrm{TeV})(1+z)]`$, where $`ϵ_{\mathrm{obs},\mathrm{syn}}`$ is the measured dimensionless energy of the highest energy synchrotron photons produced by the electrons which produce the TeV radiation. When compared with the value of $`H`$ inferred through equations (9), one obtains an expression for the Doppler factor, given by $$𝒟1.7\frac{(1+z)[E_\mathrm{C}(\mathrm{TeV})]^{3/2}}{\overline{ϵ}_{\mathrm{obs}}^{1/4}(\mathrm{keV})(\delta t_{\mathrm{obs}}^{\mathrm{min}})^{1/2}(ϵ_{\mathrm{obs},\mathrm{syn}})^{3/4}}$$ $`(10)`$ (Dermer 1998). A lower limit to $`𝒟`$ is obtained if the TeV flux does not exhibit a clear cutoff due to the high-energy cutoff in the electron distribution function. With the advent of air Cherenkov telescopes such as Whipple and HEGRA detecting BL Lac objects to TeV and tens of TeV energies, this test could in principle provide the largest inferred Doppler factors of all known beaming tests (J. H. Buckley, 1998, private communication). It is necessary, however, to discriminate clearly between bulk deceleration effects and radiative cooling effects. Unfortunately, the effects of deceleration mimic those of radiative cooling in a variety of ways (Chiang 1999). This is true both for the energy-dependent time lags produced by synchrotron cooling, and for the clockwise loop diagrams produced by flaring sources when data is plotted in a spectral index/intensity display (see also Kirk et al. 1999). One such discriminant might be the slower decay of the SSC emission compared to synchrotron emission (Dermer 1998), but a detailed numerical simulation will be required to fully resolve this question. Finally, we note that the existence of the process of plasmoid deceleration weakens arguments (Buckley 1998) against hadronic models based on the long cooling time scales of protons through photo-meson, photon-pion, and proton synchrotron processes (e.g., Biermann & Strittmatter 1987) compared to the observed rapid variability time scales of BL Lacs. As shown here, neither extremely high energy protons nor intense radiation fields are required to produce rapid variability, which can result solely from Doppler deboosting due to the deceleration of the radiating region. 
This, and the fact that most of the nonthermal particle energy injected into the plasmoid is initially in the form of relativistic protons according to the blast wave physics described here, seems to give new life to hadronic models of blazars (e.g., Mannheim 1993; Gaisser et al. 1995) provided, of course, that hadronic models can successfully fit the generic two component $`\nu F_\nu `$ blazar spectrum. If radiative cooling does not produce the variability behavior, hadronic models must also contend with low radiative efficiencies in an uncooled model (see Böttcher & Dermer 1998; Totani 1998 for related questions in GRB models). The variability behavior normally attributed to radiative cooling processes would, in the hadronic models, instead be a consequence of plasmoid deceleration. Thus a discriminant between cooling and deceleration effects is also of central importance to distinguish leptonic and hadronic models of blazars. ## 5 Summary Recent observations of power-law X-ray afterglow observations of GRBs have stimulated the development of new physics for calculating radiation from relativistic plasma outflows. The energization of nonthermal particles in the radiating plasma occurs through a process of sweeping up material from the surrounding medium. The internal nonthermal particle energy is extracted from the directed kinetic energy of the bulk plasma, causing the bulk plasma to decelerate. We have examined idealized flaring behaviors produced when bulk plasma sweeps up particles from a uniform medium. Inhomogeneities in the surrounding medium as well as in the relativistic plasmoid will complicate the situation. We have shown that * Bulk plasma deceleration effects must be included in models and statistical studies of blazars; * Time integrations over the varying Doppler factor of the radiating plasma must be performed when fitting blazar data, employing integration ranges appropriate to the observing times of the different telescopes; * A newly developed test for beaming in blazars using correlated X-ray and TeV variability observations must answer the criticism that the variability is due not to radiative cooling but rather to Doppler deboosting; * Hadronic models for blazars do not have to confront the difficulty of the long radiative cooling time scales of hadrons, since variability can be achieved through plasmoid deceleration. Important for further progress on these questions is an observational discriminant between cooling and deceleration processes. Acknowledgments I acknowledge valuable discussions on blast-wave physics and synchrotron radiation with Jim Chiang, Markus Böttcher, and Hui Li, and thank Jim Chiang for the use of his code. This work was supported by the Office of Naval Research and the Compton Observatory Guest Investigator Program
# 1 Introduction ## 1 Introduction A sizeable fraction of the final states produced in high energy collisions shows the characteristic feature of large amounts of hadronic energy in small angular regions. These collimated sprays of hadrons (called jets) are the observable signals of underlying short distance processes and are considered to be the footprints of the underlying partonic final states. Quantitative studies of jet production require a precise jet definition, which is given by a jet finding algorithm. Jets so defined exhibit an internal structure which is sensitive to the mechanism by which a complex aggregate of observable hadrons evolves from a hard process. The understanding of this mechanism involves higher orders of the strong coupling constant in perturbation theory as well as non-perturbative contributions. This is a challenging task for theory. Recently, for some specific hadronic final state quantities, encouraging results have been obtained by exploiting the characteristic power behaviour of non-perturbative effects and by analytical, approximate calculations of perturbative QCD parton evolution down to the semi-soft regime . Furthermore, since jet production rates are used to test the predictions of perturbative QCD, the understanding of their detailed properties and internal structure is an important prerequisite. The internal structure of jets has been studied in $`e^+e^{}`$ and in hadron-hadron collisions . At the $`e^\pm p`$ collider HERA, these investigations can be performed in photoproduction ($`Q^20\mathrm{GeV}^2`$) and in deep-inelastic scattering (DIS) at large squared four momentum transfers $`Q^2`$. In a previous publication we have measured the $`E_T`$ dependence of the jet width in photoproduction. Recently, the ZEUS collaboration has investigated jet shapes in photoproduction and in DIS at $`Q^2>100`$ $`\mathrm{GeV}^2`$ . Both analyses are carried out in the laboratory frame. This means that for DIS at high $`Q^2`$ mostly events with only one jet enter the analysis. The hadronization of the current jet in deep-inelastic scattering in the Breit frame has already been studied with event shape variables , charged particle multiplicities and fragmentation functions . In this paper we take the first steps towards a complete understanding of jet properties in DIS. We analyse the hadronization of jets in multijet production in the Breit frame. The Breit frame, where the virtual photon interacts head-on with the proton, has been chosen in this analysis because here the produced transverse<sup>1</sup><sup>1</sup>1transverse with respect to the $`z`$-axis which is given by the axis of the virtual photon and the proton. energy, $`E_{T,\mathrm{Breit}}`$, directly reflects the hardness of the underlying QCD process. We present measurements of internal jet structure in a sample of inclusive dijet events with transverse jet energies of $`E_{T,\mathrm{Breit}}>5`$ GeV, $`10<Q^2120\mathrm{GeV}^2`$ and $`210^4x_{\mathrm{Bj}}810^3`$. This is the $`E_{T,\mathrm{Breit}}`$ range where jet cross section measurements are currently performed at HERA and compared to perturbative QCD calculations (e.g. ). The analysis is based on data taken in 1994 with the H1 detector at HERA when $`27.5`$GeV positrons collided with $`820`$GeV protons. The data correspond to an integrated luminosity of $`_{\mathrm{int}}2\text{pb}^1`$. Jets are defined in the Breit frame by $`k_{}`$ and cone jet algorithms. Two observables, jet shapes and, for the first time, subjet multiplicities, are studied. 
The jet shape measures the radial distribution of the transverse jet energy around the jet axis. For the $`k_{}`$ cluster algorithm we have also measured the multiplicity of subjets, resolved at a resolution scale which is a fraction of the jet’s transverse energy. Both observables are presented for different ranges of the transverse jet energy and the pseudo-rapidity<sup>2</sup><sup>2</sup>2The pseudo-rapidity $`\eta `$ is defined as $`\eta \mathrm{ln}(\mathrm{tan}\theta /2)`$ where $`\theta `$ is the polar angle with respect to the proton direction. This definition is chosen in both the laboratory frame and the Breit frame. of the jets in the Breit frame. The paper is organized as follows. Section 2 gives a brief description of the H1 detector. In section 3 we introduce the jet algorithms used in the analysis and give the definition of the measured observables in section 4. In section 5 we give a short description of the QCD models which are used for the correction of the data and to which the results are later compared (in section 9). The data selection and the correction procedure are described in sections 6 and 7 and the results are discussed in section 8. ## 2 The H1 Detector A detailed description of the H1 detector can be found elsewhere . Here we briefly introduce the detector components relevant for this analysis: the liquid argon (LAr) calorimeter , the backward lead-scintillator calorimeter (BEMC) , and the tracking chamber system . The hadronic energy flow is mainly measured by the LAr calorimeter extending over the polar angular range $`4.4^{}<\theta <154^{}`$ with full azimuthal coverage. The polar angle $`\theta `$ is defined with respect to the proton beam direction ($`+z`$ axis). The LAr calorimeter consists of an electromagnetic section ($`2030`$ radiation lengths) with lead absorbers and a hadronic section with steel absorbers. The total depth of both calorimeters varies between $`4.5`$ and $`8`$ interaction lengths. Test beam measurements of the LAr calorimeter modules show an energy resolution of $`\sigma _E/E0.50/\sqrt{E[\text{GeV}]}0.02`$ for charged pions . The absolute scale of the hadronic energy is known for the present data sample to $`4\%`$. The scattered positron is detected by the BEMC with a depth of $`22.5`$ radiation lengths covering the backward region of the detector, $`155^{}<\theta <176^{}`$. The electromagnetic energy scale is known to an accuracy of $`1\%`$. The calorimeters are surrounded by a superconducting solenoid providing a uniform magnetic field of $`1.15`$ T parallel to the beam axis in the tracking region. Charged particle tracks are measured in two concentric jet drift chamber modules (CJC), covering the polar angular range $`15^{}<\theta <165^{}`$. The forward tracking detector covers $`7^{}<\theta <25^{}`$ and consists of drift chambers with alternating planes of parallel wires and others with wires in the radial direction. A backward proportional chamber (BPC) with an angular acceptance of $`151^{}<\theta <174.5^{}`$ improves the identification of the scattered positron. The spatial resolution for reconstructed BPC hits is about 1.5 mm in the plane perpendicular to the beam axis. ## 3 Jet Definitions The jet algorithms used in this analysis are applied to the particles boosted into the Breit frame. Particle refers here either to an energy deposit in the detector (see section 6), to a stable hadron or a parton in a QCD model calculation. In all cases the scattered positron is excluded. 
The Breit frame is defined by $`\stackrel{}{q}+2x_{\mathrm{Bj}}\stackrel{}{P}=0`$, where $`\stackrel{}{q}`$ and $`\stackrel{}{P}`$ are the momenta of the exchanged boson and the incoming proton. The $`z`$-axis is defined as the direction of the incoming proton. In the following analysis we use two different jet definitions: a cone algorithm and a $`k_{}`$ cluster algorithm. Both jet definitions are invariant under boosts along the $`z`$-direction. The recombination of particles is carried out in the $`E_T`$ recombination scheme, which is based on transverse energies $`E_T`$, pseudo-rapidities $`\eta `$ and azimuthal angles $`\varphi `$ of the particles. The transverse energy and the direction of a jet are defined by $$E_{T,\mathrm{jet}}=\underset{i}{}E_{T,i},\eta _{\mathrm{jet}}=\frac{_iE_{T,i}\eta _i}{_iE_{T,i}},\varphi _{\mathrm{jet}}=\frac{_iE_{T,i}\varphi _i}{_iE_{T,i}},$$ (1) where the sums run over all particles $`i`$ assigned to the jet<sup>3</sup><sup>3</sup>3All particles are considered massless by setting $`E_i=|\stackrel{}{p_i}|`$.. ### 3.1 Cone Algorithm Based on the original proposal of Sterman and Weinberg many different implementations of cone algorithms have been developed. While the basic idea of the cone algorithm is simple and very intuitive, an operational definition is non-trivial. The resulting jet cross sections depend on how the algorithm treats the choice of jet initiators and configurations of overlapping jet cones. It has repeatedly been pointed out that many definitions of cone algorithms are not infrared and/or collinear safe . In this analysis we use the definition implemented in the algorithm PXCONE which does not suffer from the problems discussed in . This definition, which corresponds closely to the Snowmass proposal and to the algorithm used in the CDF experiment , is also used by the OPAL collaboration . Particles are assigned to jets based on their spatial distance $`R`$ in pseudo-rapidity and azimuth space ($`R^2=\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2`$). The algorithm operates as follows: 1. Each particle is considered as a seed of a jet, for which steps 2-4 are performed. 2. The jet momentum is calculated from all particles within a cone of radius $`R_0`$ around the seed direction using eq. (1). 3. If the jet direction differs from the seed direction, the jet direction is taken as the new seed direction and step 2 is repeated. 4. When the jet direction is stable the jet is stored in the list of “protojets” (if it is not identical with a protojet already found). 5. The steps 2 to 4 are repeated for all midpoints of pairs of protojets as seed directions<sup>4</sup><sup>4</sup>4In practice it is sufficient to do this only for pairs of protojets with a distance between $`R_0`$ and $`2R_0`$.. This leads to the infrared safety of the procedure . 6. Protojets with transverse energies of $`E_{T,\mathrm{jet}}<ϵ`$ are removed from the list. The cut-off parameter $`ϵ`$ specifies below which transverse energies protojets are not considered in the overlap treatment (steps 7-8). 7. All remaining protojets that have more than a fraction $`f`$ of their transverse energy contained in a protojet of higher transverse energy are deleted. 8. All particles that are contained in more than one protojet are assigned to the protojet whose center is nearest in $`(\eta ,\varphi )`$. 9. The jet momenta are recalculated using eq. (1). All protojets with $`E_{T,\mathrm{jet}}<ϵ`$ are deleted and the remaining ones are called jets. 
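Both jet definitions in this section combine particles according to the $`E_T`$ recombination scheme of eq. (1). The following is a minimal sketch of that step; the Particle container and the function name are only for illustration, and azimuthal wrap-around at $`\pm \pi `$ is ignored.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    et: float    # transverse energy
    eta: float   # pseudo-rapidity
    phi: float   # azimuthal angle [rad]

def recombine(particles: List[Particle]) -> Particle:
    """E_T recombination scheme of eq. (1): E_T-weighted eta and phi.
    (Azimuthal wrap-around at +-pi is ignored in this sketch.)"""
    et_sum = sum(p.et for p in particles)
    eta = sum(p.et * p.eta for p in particles) / et_sum
    phi = sum(p.et * p.phi for p in particles) / et_sum
    return Particle(et_sum, eta, phi)

jet = recombine([Particle(10.0, 0.5, 1.0), Particle(6.0, 0.7, 1.2)])
print(jet)
```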
The jets with the highest transverse energies are considered in the analysis. Due to the reassignment of particles to jets and the recalculation of the jet axis (steps 7, 8) it may happen that single particles within a jet have a distance larger than $`R_0`$ to the jet axis. This analysis is made with the parameter settings $`ϵ=5\text{GeV}`$, $`f=0.75`$ and a cone radius of $`R_0=1.0`$. ### 3.2 Inclusive $`𝒌_{\mathbf{}}`$ Algorithm The ambiguities that occur for cone jet definitions (choice of seeds, overlapping cones) are avoided in cluster algorithms which successively recombine particles to jets. One definition of such an algorithm (proposed in and implemented in the KTCLUS algorithm ) has properties very similar to cone algorithms. As in the cone algorithm the clustering procedure is based on the longitudinally boost-invariant quantities $`E_T,\mathrm{\Delta }\eta ,\mathrm{\Delta }\varphi `$. The minimum of all distances between particles is determined and either the corresponding pairs of particles are merged into pseudo-particles or single (pseudo-) particles are declared as jets. This process is iterated until no particles are left: 1. We start with a list of all particles and an empty list of jets. 2. For each particle $`i`$ as well as for each pair of particles ($`i,j`$) the distances $`d_i`$ and $`d_{ij}`$ are calculated $$d_i=E_{T,i}^2R_0^2\text{and}\text{ }d_{ij}=\mathrm{min}(E_{T,i}^2,E_{T,j}^2)R_{ij}^2\mathrm{with}R_{ij}^2=\mathrm{\Delta }\eta _{ij}^2+\mathrm{\Delta }\varphi _{ij}^2.$$ (2) 3. The smallest value of all the $`d_i`$ and $`d_{ij}`$ is labeled $`d_{\mathrm{min}}.`$ 4. If $`d_{\mathrm{min}}`$ belongs to the set of $`d_{ij}`$, the particles $`i`$ and $`j`$ are merged into a new particle using the recombination prescription in eq. (1) and removed from the list of particles. 5. If $`d_{\mathrm{min}}`$ belongs to the set of $`d_i`$, the particle $`i`$ is removed from the list of particles and added to the list of jets. 6. When no particles are left (i.e. all particles are included in jets) the procedure is finished. The last jets that entered the list are the ones with highest transverse energies. These jets are considered in the analysis. This jet definition implies that particles with $`R_{ij}<R_0`$ are subsequently merged, so that all final jets are separated by distances $`R_{ij}>R_0`$. It is still possible that particles inside a jet have a distance $`R_{ij}>R_0`$ to the jet axis and that particles with $`R_{ij}<R_0`$ are not part of the jet. The parameter $`R_0`$ is set to $`R_0=1.0`$. ## 4 The Observables Two observables of internal jet structure are investigated in this analysis. They are sensitive to different aspects of jet broadening. The jet shapes are studied for the cone and the $`k_{}`$ algorithm. This observable measures the radial distribution of the transverse jet energy only and is affected by hard and by soft processes over the whole radial range. A natural choice for studying the internal structure of jets with the $`k_{}`$ cluster algorithm is the multiplicity of subjets, resolved at a resolution scale which is a fraction of the jet’s transverse energy. These subjet multiplicities are sensitive to more local structures of relative transverse momentum within a jet. Here the perturbative and the non-perturbative contributions are better separated. While at larger values of the resolution parameter perturbative contributions dominate, at smaller values non-perturbative contributions become increasingly important. 
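Since both the jet finding and the subjet analysis below rely on it, a compact sketch of the inclusive k_T clustering of section 3.2 may be useful. It follows eq. (2) and steps 1–6 with the $`E_T`$ recombination of eq. (1); particles are simple ($`E_T`$, $`\eta `$, $`\varphi `$) tuples, azimuthal wrap-around is ignored, and this is of course not the KTCLUS implementation used in the analysis.

```python
R0 = 1.0

def merge(p1, p2):
    # E_T recombination scheme of eq. (1); particles are (et, eta, phi) tuples.
    et = p1[0] + p2[0]
    eta = (p1[0] * p1[1] + p2[0] * p2[1]) / et
    phi = (p1[0] * p1[2] + p2[0] * p2[2]) / et   # wrap-around at +-pi ignored
    return (et, eta, phi)

def dist2(p1, p2):
    # R_ij^2 = (delta eta)^2 + (delta phi)^2, wrap-around ignored.
    return (p1[1] - p2[1]) ** 2 + (p1[2] - p2[2]) ** 2

def inclusive_kt(particles, r0=R0):
    """Longitudinally invariant inclusive k_T clustering of section 3.2."""
    particles = list(particles)
    jets = []
    while particles:
        # Beam distances d_i and pairwise distances d_ij of eq. (2).
        d_min, i_min, j_min = None, None, None
        for i, pi in enumerate(particles):
            di = (pi[0] ** 2) * r0 ** 2
            if d_min is None or di < d_min:
                d_min, i_min, j_min = di, i, None
            for j in range(i + 1, len(particles)):
                pj = particles[j]
                dij = min(pi[0], pj[0]) ** 2 * dist2(pi, pj)
                if dij < d_min:
                    d_min, i_min, j_min = dij, i, j
        if j_min is None:
            # Smallest distance is a beam distance: promote the particle to a jet.
            jets.append(particles.pop(i_min))
        else:
            # Smallest distance is a pair distance: merge the two particles.
            pj = particles.pop(j_min)
            pi = particles.pop(i_min)
            particles.append(merge(pi, pj))
    return sorted(jets, key=lambda j: j[0], reverse=True)

event = [(10.0, 0.1, 0.2), (8.0, 0.15, 0.25), (6.0, 2.0, 2.1), (5.5, -1.0, -2.0)]
for et, eta, phi in inclusive_kt(event):
    print(f"jet: E_T = {et:.1f}, eta = {eta:.2f}, phi = {phi:.2f}")
```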
### 4.1 The Jet Shape The jet shape $`\mathrm{\Psi }(r)`$ is defined as the fractional transverse jet energy contained in a subcone of radius $`r`$ concentric with the jet axis, averaged over all considered jets in the event sample $$\mathrm{\Psi }(r)\frac{1}{N_{\mathrm{jets}}}\underset{\mathrm{jets}}{}\frac{E_T(r)}{E_{T,\mathrm{jet}}},$$ (3) where $`N_{\mathrm{jets}}`$ is the total number of these jets. As proposed in , only particles assigned by the jet algorithm to the jet are considered. Usually the denominator in the definition of $`\mathrm{\Psi }`$ is given by the summed $`E_T`$ of all particles within a radius $`R_0`$ to the jet axis. This means that $`\mathrm{\Psi }(r/R_0=1)=1`$. In our definition (3) of $`\mathrm{\Psi }`$ the denominator is given by the transverse energy of the jet. Since neither for the cone nor for the $`k_{}`$ definition are all particles necessarily assigned to a jet within a radius of $`r/R_0<1`$ to the jet axis, $`\mathrm{\Psi }(r/R_0=1)`$ is not constrained to have the value of one. With this choice of our observable we are also sensitive to the amount of transverse jet energy outside the radius $`R_0`$. ### 4.2 Subjet Multiplicities For each jet in the sample the clustering procedure is repeated for all particles assigned to the jet. The clustering is stopped when the distances $`y_{ij}`$ between all particles $`i,j`$ are above some cut-off $`y_{\mathrm{cut}}`$ $$y_{ij}=\frac{\mathrm{min}(E_{T,i}^2,E_{T,j}^2)}{E_{T,\mathrm{jet}}^2}\frac{\left(\mathrm{\Delta }\eta _{ij}^2+\mathrm{\Delta }\varphi _{ij}^2\right)}{R_0^2}>y_{\mathrm{cut}}$$ (4) and the remaining (pseudo-)particles are called subjets. The parameter $`y_{\mathrm{cut}}`$ defines the minimal relative transverse energy between subjets inside the jet and thus determines the extent to which the internal jet structure is resolved. From this definition it follows that for $`y_{\mathrm{cut}}>0.25`$ no subjet is resolved (therefore the number of subjets is one), while for $`y_{\mathrm{cut}}0`$ every particle in the jet is a subjet. The observable that is studied in this analysis is the average number of subjets for a given value of the resolution parameter, for values $`y_{\mathrm{cut}}10^3`$. ## 5 QCD Models A simulation of the detailed properties of the hadronic final state is available in the form of Monte Carlo event generators. They include the matrix element of the hard subprocess in first order of the strong coupling constant $`\alpha _s`$, approximations of higher order QCD radiation effects, and a model to describe the non-perturbative transition from partons to hadrons. The LEPTO Monte Carlo incorporates the $`𝒪(\alpha _s)`$ QCD matrix element and takes higher order parton emissions to all orders in $`\alpha _s`$ approximately into account using the concept of parton showers based on the leading logarithm DGLAP equations . QCD radiation can occur before and after the hard subprocess. The formation of hadrons is performed using the LUND string model implemented in JETSET . The HERWIG Monte Carlo also includes the $`𝒪`$$`(\alpha _s)`$ QCD matrix element, but uses another implementation of the parton shower cascade which takes coherence effects fully into account. The hadronization is simulated with the cluster fragmentation model . In ARIADNE gluon emissions are treated by the colour dipole model assuming a chain of independently radiating dipoles spanned by colour connected partons. The first emission in the cascade is corrected to reproduce the matrix element to first order in $`\alpha _s`$ . 
DJANGO provides an interface between the event generators LEPTO or ARIADNE and HERACLES which makes it possible to include $`𝒪(\alpha )`$ QED corrections at the lepton line. ## 6 Data Selection The analysis is based on H1 data taken in $`1994`$ corresponding to an integrated luminosity of $`\mathcal{L}_{int}\approx 2\text{pb}^{-1}`$. The event selection closely follows that described in a previous publication . DIS events are selected where the scattered positron is measured in the acceptance region of the BEMC at energies where trigger efficiencies are approximately 100 %. To ensure a good identification of the scattered positron and to suppress background from misidentified photoproduction events the following cuts are applied: * The cluster of the positron candidate must have an energy-weighted mean transverse radius below $`5\text{cm}`$. * A reconstructed BPC hit within $`5\text{cm}`$ of the straight line connecting the shower center with the event vertex is required. * The $`z`$ position of the reconstructed event vertex must be within $`\pm 30\text{cm}`$ of the nominal position. * A cut on $`35\text{GeV}<\mathrm{\Sigma }(E-p_z)<70\text{GeV}`$ is applied, where the sum runs over all energy deposits in the calorimeter. In neutral current DIS events without undetected photon radiation the quantity $`\mathrm{\Sigma }(E-p_z)`$ is expected to be equal to twice the energy of the initial state positron. This cut reduces the contribution from photoproduction events as well as events where hard photons are radiated collinear to the incoming positron. The event kinematics are calculated from the polar angle $`\theta _{el}`$ and the energy $`E_{el}^{\prime }`$ of the scattered positron via $`Q_{el}^2=2E_0E_{el}^{\prime }(1+\mathrm{cos}\theta _{el})`$, $`y_{el}=1-E_{el}^{\prime }/(2E_0)(1-\mathrm{cos}\theta _{el})`$ and $`x_{\mathrm{Bj}}=Q^2/(sy)`$. $`E_0`$ denotes the energy of the incoming positron and $`s`$ the $`ep`$ centre-of-mass energy squared. Events are only accepted, if $`E_{el}^{\prime }>11`$ GeV, $`156^{\circ }<\theta _{el}<173^{\circ }`$, $`Q^2>10\mathrm{GeV}^2`$ and $`y>0.15`$. The resulting kinematic range is $`10<Q^2\lesssim 120\mathrm{GeV}^2`$ and $`2\times 10^{-4}\lesssim x_{\mathrm{Bj}}\lesssim 8\times 10^{-3}`$. Jets are defined by the algorithms described in section 3. The input for the jet algorithms consists of a combination of energy clusters from the calorimeter and track momenta measured in the central and forward trackers (as described in ). While all energy clusters are considered, the four momentum of each single track is only allowed to contribute up to a momentum of $`350\text{MeV}`$. This procedure partly compensates for energy losses in the calorimeter due to dead material and noise thresholds. It reduces the dependence of the jet finding efficiency on the pseudo-rapidity of the jet and improves the reconstruction of the transverse jet energy . The objects from tracking and calorimeter information are boosted to the Breit frame where the jet algorithms are applied. We select events with at least two identified jets with transverse energies of $`E_{T,\mathrm{Breit}}>5\text{GeV}`$ in $`1<\eta _{\text{jet,lab}}<2`$. The two jets with the highest $`E_{T,\mathrm{Breit}}`$ are considered in the analysis. The event sample for the inclusive $`k_{}`$ algorithm (the cone algorithm) consists of 2045 (2657) dijet events. ## 7 Correction of the Data The data are corrected for detector effects and QED radiation from the lepton. The detector response is determined using events from Monte Carlo event generators that were subjected to a detailed simulation of the H1 detector.
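As a brief aside before listing the generators, the electron-method formulae quoted in the selection above translate directly into code. The beam energies used here ($`E_0=27.5`$ GeV positrons on 820 GeV protons, the nominal 1994 HERA values) are assumptions introduced only to make the example concrete.

```python
import math

E0, EP = 27.5, 820.0                 # assumed 1994 HERA beam energies [GeV]
S = 4.0 * E0 * EP                    # ep centre-of-mass energy squared [GeV^2]

def electron_method(e_scat, theta_deg):
    """Q^2, y and x_Bj from the scattered positron energy [GeV] and polar angle [deg]."""
    c = math.cos(math.radians(theta_deg))
    q2 = 2.0 * E0 * e_scat * (1.0 + c)
    y = 1.0 - e_scat / (2.0 * E0) * (1.0 - c)
    x = q2 / (S * y)
    return q2, y, x

# e.g. a 15 GeV positron at theta = 165 degrees lies well inside the accepted range
q2, y, x = electron_method(15.0, 165.0)
print(f"Q^2 = {q2:.1f} GeV^2, y = {y:.2f}, x_Bj = {x:.1e}")
```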
The following event generators are used: ARIADNE interfaced in DJANGO (with and without the inclusion of QED corrections) and LEPTO. Both generators give a good description of the kinematic variables of the inclusive DIS data sample as well as of the angular and transverse energy distributions of the jets . We also observe a reasonable description of the observables introduced in section 4 (see section 9). The measured data points are corrected bin-by-bin for detector effects. Using the generated event samples, the correction factor for each bin is determined as the ratio of the generated value of the observable and the value that is reconstructed after detector simulation. These correction factors are independent of the inclusion of QED radiation effects as included in DJANGO. Their dependence on details of the modeling of the hadronic final state is taken into account by considering the difference between the correction factors from ARIADNE and LEPTO as systematic uncertainty. For the $`k_{}`$ (cone) algorithm the corrections for $`\mathrm{\Psi }(r)`$ are below $`10\%`$ ($`13\%`$) for subcone radii $`r>0.3`$ and always below $`27\%`$ ($`23\%`$). The corrections for $`N_{\mathrm{subjet}}(y_{\mathrm{cut}})`$ are always below $`7\%`$. The correction factors from both QCD models are in good agreement (they differ typically by not more than $`2\%`$) for the jet shapes as well as for the subjet multiplicities . The final correction factors are taken to be the mean values of the two models, taking the spread as the error. In addition we have varied the calibration of the hadronic energy scale in the data sample in the range of $`\pm 4\%`$ around the nominal value. The error is estimated as the maximal deviation from the results at the nominal value. For all observables it is at most $`5\%`$. The overall systematic error is calculated by adding the errors from the model dependence and from the uncertainty of the hadronic energy scale in quadrature. In all figures the statistical and systematic errors are added in quadrature. Since each jet enters in all bins of a distribution, all errors are correlated. The background from misidentified photoproduction events is estimated with a sample of photoproduction events generated with PHOJET and is found to be negligible. ## 8 Results The jet shape and the subjet multiplicity are presented as functions of quantities directly related to the single jets, namely the transverse jet energy ($`E_{T,\mathrm{Breit}}`$) and the pseudo-rapidity ($`\eta _{\text{Breit}}`$) in the Breit frame. We also investigated whether the observables depend on the event kinematics. The jet shapes and subjet multiplicities were compared for two bins of $`Q^2`$ ($`Q^2<20\mathrm{GeV}^2`$ and $`Q^2>20\mathrm{GeV}^2`$) and $`x_{\mathrm{Bj}}`$ ($`x_{\mathrm{Bj}}<810^4`$ and $`x_{\mathrm{Bj}}>810^4`$) respectively. No dependence on $`Q^2`$ and $`x_{\mathrm{Bj}}`$ has been observed. ### 8.1 Jet Shapes The radial dependence of the jet shape $`\mathrm{\Psi }(r)`$ for the $`k_{}`$ algorithm is shown in Fig. 1 in different ranges of the pseudo-rapidity in the Breit frame. The results for jets of transverse energies $`5<E_{T,\mathrm{Breit}}<8\mathrm{GeV}`$ and $`E_{T,\mathrm{Breit}}>8\mathrm{GeV}`$ are superimposed. The jet shape $`\mathrm{\Psi }(r)`$ increases faster with $`r`$ for jets at larger transverse energies, indicating that these jets are more collimated. 
The same tendency is seen for the jets defined by the cone algorithm which are compared to the jets found by the $`k_{}`$ algorithm in Fig. 2. For both jet definitions we also observe a dependence of the jet shape on the pseudo-rapidity of the jets. Jets towards the proton direction (at larger values of $`\eta _{\text{Breit}}`$) are broader than jets towards the photon direction (smaller $`\eta _{\text{Breit}}`$). In the region where the jets are most collimated ($`E_{T,\mathrm{Breit}}>8\mathrm{GeV}`$ and $`\eta _{\text{Breit}}<2.2`$), very similar jet shapes are observed for the $`k_{}`$ and cone algorithms. The broadening of the jets for smaller $`E_{T,\mathrm{Breit}}`$ and larger $`\eta _{\text{Breit}}`$ is more pronounced for the cone jet definition. Recently jet shapes have been measured in dijet production in photon-photon collisions for jets defined by a cone algorithm at transverse energies comparable to those presented here. The jet shapes in photon-photon collisions (where no $`\eta `$ dependence is observed) are very similar to those measured in DIS in the Breit frame at $`\eta _{\text{Breit}}<1.5`$. ### 8.2 Subjet Multiplicities The subjet multiplicities for the $`k_{}`$ algorithm are displayed in Fig. 3. The average number of subjets $`N_{\mathrm{subjet}}(y_{\mathrm{cut}})`$ as a function of the subjet resolution parameter at $`y_{\mathrm{cut}}\geq 10^{-3}`$ is plotted. Towards smaller values of $`y_{\mathrm{cut}}`$, an increasing number of jet fragments with smaller relative transverse momenta is resolved. The number of subjets at a given value of $`y_{\mathrm{cut}}`$ reflects the amount of relative transverse momentum with respect to the jet axis. The subjet multiplicity is therefore a measure of the broadness of the jet. At $`y_{\mathrm{cut}}=10^{-3}`$ a jet is on average resolved into $`4.1`$–$`4.6`$ subjets, depending on $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ of the jet<sup>5</sup><sup>5</sup>5On average the jets in the data (as in the simulated events) consist of eleven calorimetric energy clusters. For the LEPTO generator this is also approximately the average multiplicity of stable particles inside the jets.. For almost all values of $`y_{\mathrm{cut}}`$ the subjet multiplicity is larger for jets at smaller $`E_{T,\mathrm{Breit}}`$ and larger $`\eta _{\text{Breit}}`$, indicating broader jets. A summary of the results for both observables is given in Fig. 4. Here the $`E_{T,\mathrm{Breit}}`$ and the $`\eta _{\text{Breit}}`$ dependence of the jet shape and the average number of subjets are shown at an intermediate value of the resolution parameter (jet shape: $`r=0.5`$ and subjet multiplicity: $`y_{\mathrm{cut}}=10^{-2}`$). Although the subjet multiplicities are sensitive to the jet broadening in a different way than the jet shapes, consistent conclusions can be drawn for both measurements. The jet broadening depends on both the transverse jet energy as well as the pseudo-rapidity in the Breit frame. While the pseudo-rapidity dependence is most pronounced at smaller transverse jet energy, the transverse energy dependence is stronger in the forward region (at larger pseudo-rapidities). ## 9 Comparison with QCD Model Predictions The predictions of different QCD models are compared in Fig. 5 to the jet shapes measured for the $`k_{}`$ algorithm. The models LEPTO, ARIADNE and HERWIG all show $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ dependences similar to that seen in the data.
LEPTO gives the best description of the measured shapes for $`\eta _{\text{Breit}}<2.2`$ while at $`\eta _{\text{Breit}}>2.2`$ the predicted jet shapes are too broad. A reasonable description is also obtained by the ARIADNE model except for jets at smaller pseudo-rapidities where the jet shapes have the tendency to be too narrow. For the HERWIG model the jet shapes are narrower than those in the data in all $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ regions. The same observations as above are made when comparing these QCD models with the subjet multiplicities and with the jet shapes for the cone algorithm (not shown here). In QCD models the evolution of a jet is described by perturbative contributions (radiation of partons) and non-perturbative contributions (hadronization). Studies based on the LEPTO and HERWIG parton shower models show that all observables studied in this analysis are strongly influenced by hadronization. This process has the largest impact on the jet broadening in our kinematic region (Fig. 6). Basic characteristics of the perturbative contributions are however still visible after hadronization. The model prediction suggests that the large difference between quark and gluon-initiated jets before hadronization survives the hadronization process. This especially applies to jets with large transverse energies . Fig. 6 shows the jet shapes and the subjet multiplicities as predicted by the LEPTO parton shower model for the $`k_{}`$ algorithm, separately for quark and gluon jets at $`E_{T,\mathrm{Breit}}>8\mathrm{GeV}`$ and $`\eta _{\text{Breit}}<1.5`$. Gluon jets are broader than quark jets. The same prediction is obtained by the HERWIG parton shower model. Although the jets in HERWIG are slightly narrower, the differences between gluon and quark jets are equally large. In the phase space considered here, LEPTO and HERWIG (in agreement with next-to-leading order calculations) predict a fraction of approximately $`80\%`$ photon-gluon fusion events with two quarks in the partonic final state. The jet samples of these models are therefore dominated by quark jets. Both model predictions for the jet shapes and the subjet multiplicities therefore mainly reflect the properties of the quark jets as can be seen in Fig. 6. These predictions give a reasonable description of the data. Thus, we conclude, that the jets we observe are consistent with being mainly initiated by quarks. ## 10 Summary Measurements of internal jet structure in dijet events in deep-inelastic scattering in the kinematic domain $`10<Q^2120\mathrm{GeV}^2`$ and $`210^4x_{\mathrm{Bj}}810^3`$ have been presented. Jet shapes and subjet multiplicities have been studied for jets of transverse energies $`E_{T,\mathrm{Breit}}>5\mathrm{GeV}`$ defined by $`k_{}`$ and cone jet algorithms in the Breit frame. The radial dependence of the jet shape and the dependence of the average number of subjets on the subjet resolution parameter $`y_{\mathrm{cut}}`$ are both sensitive to different aspects of jet broadening. For both observables a dependence of the jet broadness on the transverse energy $`E_{T,\mathrm{Breit}}`$ and on the pseudo-rapidity in the Breit frame $`\eta _{\text{Breit}}`$ is seen. With increasing $`E_{T,\mathrm{Breit}}`$ jets are narrower. Jets of the same $`E_{T,\mathrm{Breit}}`$ become broader towards the proton direction. This effect is more pronounced at lower $`E_{T,\mathrm{Breit}}`$. 
At lower $`E_{T,\mathrm{Breit}}`$ jets defined by the $`k_{}`$ algorithm are more collimated than jets defined by the cone algorithm, while at higher $`E_{T,\mathrm{Breit}}`$ both algorithms produce very similar jets. The QCD models LEPTO, ARIADNE and HERWIG roughly reproduce the dependence of the jet shape and the subjet multiplicities on $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ as seen in the data. LEPTO has a tendency to produce broader jets in the proton direction than measured. HERWIG and ARIADNE produce jets which are too collimated especially at higher transverse energies. We have reported earlier that in the same kinematic domain the predicted jet rates from LEPTO and HERWIG are about a factor of two below the data . Since these models are able to reproduce the internal jet structure, this failure must be largely connected to an inadequate modeling of the underlying hard partonic subprocess. According to the parton shower models LEPTO and HERWIG, quark and gluon initiated jets differ both at the parton and at the hadron level. Both models predict that the jet sample is dominated by quark initiated jets. Since these models describe our data, we conclude that the observed jet structures are compatible with those of quark initiated jets. ## 11 Acknowledgments We are grateful to the HERA machine group whose outstanding efforts have made and continue to make this experiment possible. We thank the engineers and technicians for their work in constructing and now maintaining the H1 detector, our funding agencies for financial support, the DESY technical staff for continual assistance, and the DESY directorate for the hospitality which they extend to the non-DESY members of the collaboration.
# The effect of temperature jumps during polymer crystallization ## I Introduction Upon crystallization from solution and the melt many polymers form lamellae where the polymer chain traverses the thin dimension of the crystal many times folding back on itself at each surface. (The crystal geometry is shown by the example configuration in Figure 1) Although lamellar crystals were first observed over forty years ago their physical origin is still controversial. In particular the explanations for the dependence of the lamellar thickness on temperature offered by the two dominant theoretical approaches—the Lauritzen-Hoffman surface nucleation theory and the entropic barrier model of Sadler and Gilmer—differ greatly. One of the common features of the two theories is that they both argue that the observed crystal thickness is close to the thickness at which the crystal growth rate is a maximum. However, recently a new description of the mechanism of thickness selection has been presented. In this approach the observed thickness corresponds instead to the one thickness, $`l^{}`$, at which growth with constant thickness can occur. Crystals initially thicker (thinner) than $`l^{}`$ will thin (thicken) as the crystals grow until the thickness $`l^{}`$ is reached. This dynamical convergence can be described by a fixed-point attractor which relates the thickness of a layer to the thickness of the previous layer. The value of the thickness at the fixed point is $`l^{}`$. This mechanism has been found for two simple models of polymer crystallization. In the first model the polymer crystal grows, as in the Lauritzen-Hoffman theory, by the successive deposition of stems (a stem is a straight portion of the polymer chain that traverses the thin dimension of the lamella) across the growth face. A configuration produced by this model is shown in Figure 1 to illustrate the mechanism. The crystal thins down from the initial thickness to $`l^{}`$ within five to ten layers and then continues to grow at that thickness The second model is that used by Sadler and Gilmer, the behaviour of which they interpreted in terms of an entropic barrier. In this Sadler-Gilmer (SG) model the connectivity of the polymer is modelled implicitly, the growth face can be rough, and lateral correlations along the growth face can be weak. That we find the same mechanism in these two very different models is a sign of its generality. Further support for the new mechanism is provided by the experimental observation that a temperature change during crystallization produces a step on a lamella. This step is a result of the thickness of the crystal dynamically converging to $`l^{}`$ for the new temperature as the crystal grows. Furthermore it has been suggested that a detailed characterization of the step profiles by atomic-force microscopy could allow the fixed-point attractors that underlie the mechanism to be obtained. In this paper we examine this suggestion more carefully by performing simulations of temperature jumps for the Sadler-Gilmer model. In particular, we investigate the effect that rounding of the crystal profile near to the growth face and fluctuations in the crystal thickness may have on the shape of the steps. It is hoped that this work will aid the experimental interpretation of step profiles. ## II Methods In the SG model the growth of a polymer crystal results from the attachment and detachment of polymer units at the growth face. 
The rules that govern the sites at which these processes can occur are designed to mimic the effects of the chain connectivity. In the original three-dimensional version of the model, kinetic Monte Carlo simulations were performed to obtain many realizations of the polymer crystals that result. Averages were then taken over these configurations to get the properties of the model. Under many conditions the growth face is rough and the correlations between stems in the direction parallel to the growth face are weak. Therefore, an even simpler two-dimensional version of the model was developed in which lateral correlations are neglected entirely, and only a slice through the polymer crystal perpendicular to the growth face is considered. The behaviour of this new model was found to be very similar to the original three-dimensional model. The geometry of the model is shown in Figure 2. Changes in configuration can only occur at the outermost stem and stems behind the growth face are ‘pinned’ because of the chain connectivity. There are three ways that a polymer unit can be added to or removed from the crystal: (1) The outermost stem can increase in length upwards. (2) A new stem can be initiated at the base of the previous stem. (3) A polymer unit can be removed from the top of the outermost stem. The ratio of the rate constants for attachment ($`k^+`$) and detachment ($`k^{}`$) of a polymer unit are related to the thermodynamics of the model through $$k^+/k^{}=\mathrm{exp}(\mathrm{\Delta }F/kT),$$ (1) where $`\mathrm{\Delta }F`$ is the change in free energy on addition of a particular polymer unit. The above equation only defines the relative rates and not how the the free energy change is apportioned between the forward and backward rate constants. We follow Sadler and Gilmer and choose $`k^+`$ to be constant. We use $`1/k^+`$ as our unit of time. In the model the energy of interaction between two adjacent crystal units is -$`ϵ`$ and the change in entropy on melting of the crystal is given by $`\mathrm{\Delta }S=\mathrm{\Delta }H/T_m=2ϵ/T_m`$, where $`T_m`$ is the melting temperature (of an infinitely thick crystal) and $`\mathrm{\Delta }H`$ is the change in enthalpy. It is assumed that $`\mathrm{\Delta }S`$ is independent of temperature. Here, as with Sadler and Gilmer, we do not include any contribution from chain folds to the thermodynamics. From the above considerations it follows that the rate constants for detachment of polymer units are given by $`k^{}(i,j)`$ $`=`$ $`k^+\mathrm{exp}(2ϵ/kT_m2ϵ/kT)i1,ij`$ (2) $`k^{}(i,j)`$ $`=`$ $`k^+\mathrm{exp}(2ϵ/kT_mϵ/kT)i=1,i>j,`$ (3) where $`i`$ is the length of the outermost stem and $`j`$ the length of the stem in the previous layer. The first term in the exponents is due to the gain in entropy as a result of the removal of a unit from the crystal, and the second term is due to the loss of contacts between the removed unit and the rest of the crystal. There are two ways to examine the behaviour of the model. In one approach the model is formulated in terms of a set of rate equations which can easily be solved numerically to yield the steady-state solution of the model. This is the method that we used for the most part in our previous study of the Sadler-Gilmer model. However, as we wish to examine the evolution of the system towards the steady state, we use kinetic Monte Carlo to grow a set of representative crystals. 
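A minimal sketch of how these growth rules can be realised in a single kinetic Monte Carlo move is given below; it anticipates the selection probability and time increment of Eqs. (4)–(5) in the following paragraph. The detachment exponent is read here as the entropy gain $`2ϵ/kT_m`$ minus $`ϵ/kT`$ per broken contact (one contact with the unit below when $`i>1`$, one with the lateral neighbour in the previous stem when $`i\le j`$), consistent with the description accompanying Eqs. (2)–(3). The representation of the crystal as a list of pinned stem lengths plus the length of the outermost stem, and the handling of a completely removed outer stem, are assumptions of the sketch.

```python
import math, random

def detach_rate(i, j, kT, kTm, eps=1.0, kplus=1.0):
    """k^- for removing the top unit of the outermost stem (length i, previous stem j).
    Exponent = entropy gain 2*eps/kTm minus eps/kT per broken contact."""
    contacts = (1 if i > 1 else 0) + (1 if i <= j else 0)
    return kplus * math.exp(2.0 * eps / kTm - contacts * eps / kT)

def kmc_step(stems, i, kT, kTm, kplus=1.0):
    """One kinetic Monte Carlo move; stems holds the pinned stem lengths,
    i is the length of the outermost (growing) stem.  Returns (stems, i, dt)."""
    j = stems[-1]
    rates = {"up": kplus,                                        # (1) extend the outermost stem
             "new": kplus,                                       # (2) initiate a new stem of length 1
             "down": detach_rate(i, j, kT, kTm, kplus=kplus)}    # (3) remove the top unit
    total = sum(rates.values())
    r, acc, move = random.random() * total, 0.0, "down"
    for name, k in rates.items():                                # choose with P = k_ab / sum_b' k_ab'
        acc += k
        if r <= acc:
            move = name
            break
    dt = -math.log(1.0 - random.random()) / total                # time increment, cf. eq. (5)
    if move == "up":
        i += 1
    elif move == "new":
        stems, i = stems + [i], 1
    else:
        i -= 1
        if i == 0:                                               # outer stem removed: expose previous stem
            stems, i = stems[:-1], stems[-1]
    return stems, i, dt
```

Iterating `kmc_step` from a chosen initial configuration and averaging the resulting layer thicknesses over many independent runs corresponds to the averaging procedure described below.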
In this work we deliberately start growing these crystals from a well-defined non-steady-state initial configuration—either a crystal of constant thickness different from $`l^{}`$, or a crystal grown at a different temperature. Averages are then taken over these crystals to obtain information about the convergence of the system towards the steady state. At each step in the kinetic Monte Carlo simulation a state, $`b`$, is randomly chosen from the three states connected to the current state, $`a`$, with a probability given by $$P_{ab}=\frac{k_{ab}}{\sum _{b^{\prime }}k_{ab^{\prime }}},$$ (4) and the time is updated by an increment $$\mathrm{\Delta }t=-\frac{\mathrm{log}\rho }{\sum _bk_{ab}},$$ (5) where $`\rho `$ is a random number in the range $`(0,1]`$. Depending on the conditions we use from tens of thousands to millions of steps to grow each individual crystal, and then take averages over many thousands of crystals. The version of the SG model used here has two variables: $`kT_m/ϵ`$ and $`T/T_m`$. Here, as in our previous work on the SG model, we use $`kT_m/ϵ=0.5`$, unless otherwise stated. Sadler and Gilmer have shown that the basic properties of the model were independent of the value of $`kT_m/ϵ`$ within the parameter range that they studied. ## III Results Before considering simulations of actual temperature jumps we examine a slightly simpler case, namely the convergence of the crystal thickness to $`l^{}`$ when the initial configuration is a crystal of constant thickness different from $`l^{}`$. These cases will provide a useful comparison to the convergence to a new $`l^{}`$ caused by a change in temperature. Figure 3 shows two example crystal profiles when the thickness of the initial crystal is larger than $`l^{}`$. It can be clearly seen that the crystal quickly converges to $`l^{}`$ whatever the initial thickness of the crystal, leading to a downward step on the surface. For the crystal that is initially twenty units thick there is little backward motion of the growth front in the early stages of the simulations because a thick crystal such as this is very stable. Therefore, after growth only the outer layer of the initial crystal has a thickness different from its initial value. The step has a sharp downward edge. By contrast, the step on the crystal that was initially ten units thick is more rounded (Figure 3b). The outer layers of the initial crystal are now noticeably less than ten up to about five layers back into the initial crystal and the curvature of the step changes from negative to positive around the initial position of the edge of the crystal. It is also worth noting that we previously found that the initial growth rate increases with the thickness of the initial crystal. The cause of this behaviour is similar to that for the more pronounced rounding of the steps that result from growth on thinner crystals. Both are related to the smaller number of detachment steps in the kinetic Monte Carlo simulations when the initial crystal is thicker because of the greater thermodynamic driving force for growth. In Figure 3b, as well as the average profile with respect to the position of the edge of the initial crystal, we also show the average profile measured with respect to the minimum position of the growth front in each individual crystal. Now the sharp edge of the step is regained. Therefore, the rounding of the step for the space-fixed profile can be understood to result from the variation in the amount by which the growth face transiently retreats back into the initial crystal.
To use the step profile to reveal the fixed-point attractor that describes the convergence to $`l^{}`$ we plot in Figure 4 the thickness of a layer against the thickness of the previous layer (i.e. the points ($`l_{j1},l_j`$) where $`j`$ is the position of a layer) as we go through the step. The initial points from layers behind the step are all at ($`l_{\mathrm{init}},l_{\mathrm{init}}`$), where $`l_{\mathrm{init}}`$ is the thickness of the initial crystal. As one passes through the step the points leave ($`l_{\mathrm{init}},l_{\mathrm{init}}`$) and follow a path which converges to ($`l^{},l^{}`$). It is the nature of this path that is of interest. In particular, the assumption of our previous suggestion that the steps produced by temperature changes could provide insight into the mechanisms of polymer crystallization was that this path should follow the fixed-point attractor, $`l_n(l_{n1})`$. $`l_n(l_{n1})`$ is defined as the average thickness of a layer $`n`$ in the bulk of the crystal given that the thickness of the previous layer is $`l_{n1}`$ and is obtained from the steady-state solution of the rate equations describing the SG model (for details see Ref. ). In Figure 4 we also plot $`l_n(l_{n1})`$ to compare with the path taken by ($`l_{j1},l_j`$). For the crystal that is initially twenty units thick the path jumps from (20,20) to the fixed-point attractor in two steps and then follows it down to the fixed point. The intermediate point is due to the slight rounding of the outer layer of the initial crystal. The plot for the crystal that is initially 10 units thick is similar, except that the number of steps taken to reach the fixed-point attractor is larger because of the greater rounding of the the initial crystal. However, when the position is measured with respect to the minimum position of the growth front the path of ($`l_{j1},l_j`$) immediately steps onto the fixed-point attractor from the point (10,10). The third step profile in Figure 3 shows the profile when the thickness of the initial crystal is less than $`l^{}`$. In this case the initial crystal is unstable with respect to the melt/solution and so the growth face retreats (Figure 5a). Only once a fluctuation to a greater thickness occurs does the growth face begin to advance. Generation of this fluctuation is a slow process because there is an energetic cost associated with the stems which overhang the previous layer. As the distance that the growth face initially goes backwards has a wide variation between individual simulations, the gradient of the step in a space-fixed frame of reference is very shallow. Only when the profile is measured with respect to the minimum position of the growth face does a clear picture of the thickening emerge. In this frame of reference there is a sharp upward step and the thickness quickly reaches $`l^{}`$ (Figure 3c). Furthermore, the path of ($`l_{j1},l_j`$) jumps straight from (7,7) onto the thickening branch of the fixed-point attractor (Figure 4c). At this point it is right to consider which frame of reference—space-fixed or fixed with respect to the minimum position of the growth face—is more appropriate to the step profiles on real polymer crystals. The profiles resulting from the two frames of reference represent two limits in the degree of correlations between events along the growth face. The space-fixed frame of reference maintains the assumption of the two-dimensional SG model that there are no correlations between adjacent stems along the growth face. 
It is this assumption that leads to the variation in the distance by which the growth face retreats. However, this assumption is not always a good one. In particular, it seems likely that the nucleation of a thicker region would propagate laterally. Therefore, we expect the degree of correlation to be closer to the limit obtained from using a frame of reference fixed with respect to the minimum position of the growth face. This latter approach is equivalent to assuming that the line of the step along the fold surface is straight, rather than rough. Having shown that for the situation when growth occurs from a crystal that is of constant thickness a plot of ($`l_{j1},l_j`$) can reveal the fixed-point attractor that underlies the growth mechanism of polymer crystals, we now proceed to consider the steps that result from temperature jumps (from a temperature $`T_1`$ to a temperature $`T_2`$) during growth. As the thickness of polymer crystals are approximately inversely proportional to the degree of supercooling, a decrease in temperature will lead to a downward step, and an increase in temperature will lead to an upward step. We consider the effects of a decrease in temperature first. In Figure 6a we show the average crystal profile before and after the decrease in temperature. Firstly, unlike the situation considered above, the new growth after the temperature change is on a crystal where there are variations in the layer thickness and where the layers near to the growth face are on average thinner. The rounded profile at the growing edge of the crystal is a characteristic property of the SG model and plays a key role in Sadler and Gilmer’s explanation of polymer crystallization in terms of an entropic barrier. In the growth after the temperature jump not all of the rounding present at the edge of the crystal at the time of the jump is removed. The profile of the resulting step initially follows the profile of the rounded edge, before changing curvature and smoothly converging to $`l^{}`$ for the new temperature (Figure 6a). The path of ($`l_{j1},l_j`$) for the step initially leaves ($`l^{}(T_1),l^{}(T_1)`$) and follows the same path as ($`l_{j1},l_j`$) for the crystal edge at $`T_1`$. Only when this line meets $`l_n(l_{n1},T_2)`$ does the path change slope and follow the fixed-point attractor down to ($`l^{}(T_2),l^{}(T_2)`$) (Figure 7a). This basic scenario holds for all temperature decreases. The main differences are only in the degree to which the step reflects the rounding of the crystal edge, which in turn depends upon the relative slopes of $`l_n(l_{n1},T_2)`$ and the path of ($`l_{j1},l_j`$) for the crystal edge at $`T_1`$. For instance, if the product of the slopes is one then the crossover will occur halfway between $`l^{}(T_1)`$ and $`l^{}(T_2)`$, and for the example shown in Figure 7a this is approximately the case. The parameters in the model that can affect the two slopes are $`kT_m/ϵ`$, $`T_1`$ and $`T_2`$. We do not intend to survey the full parameter space, but instead just comment on the effect of varying each parameter alone on the example in Figure 7a. For example, if we decrease $`kT_m/ϵ`$ the slope of the fixed-point attractor becomes closer to one. Therefore, the convergence of the thickness to $`l^{}(T_2)`$ is more gradual (Figure 6a) and the path of ($`l_{j1},l_j`$) follows the fixed point attractor to a greater extent (Figure 7b). However, increasing $`T_1`$ has an opposite effect. 
It makes the slope of ($`l_{j1},l_j`$) for the crystal edge closer to one—the rounding is more gradual and extends deeper into the crystal away from the crystal edge. Therefore, the path of ($`l_{j1},l_j`$) follows the fixed point attractor to a lesser extent. Finally, changing $`T_2`$ has relatively little effect on the relative slopes and so the crossover remains roughly midway between $`l^{}(T_1)`$ and $`l^{}(T_2)`$. In Figure 6b we show three examples of steps that result from temperature increases. In these cases a crystal of thickness of $`l^{}(T_1)`$ is unstable with respect to the melt/solution at $`T_2`$ and so the crystal growth face initially retreats after the temperature jump (Figure 5). In one of the cases ($`T_1=0.935`$) we chose $`T_1`$ so that $`l^{}(T_1)7`$ enabling us to make a comparison with the step that was produced when the initial crystal had a constant thickness of 7 units. From Figure 5a we can see that growth begins markedly earlier when there is a temperature jump. The reason for this difference becomes clear when we examine the step profiles shown in Figure 6b. The majority of the step is behind the minimum position of the growth face after the temperature jump. The growth face must retreat until it reaches a region of the crystal where the layer thickness is larger than the average value at $`T_1`$. New growth then begins from this position and the crystal thickens the small amount necessary to reach $`l^{}(T_2)`$. Only for this new growth does ($`l_{j1},l_j`$) follow the fixed-point attractor (Figure 7c). Growth is more rapid than for the case where growth is from an initial crystal of constant thickness because encountering already present fluctuations during the retreat of the growth face is a more common event than the generation of new fluctuations to larger thickness. Interestingly, the part of the step resulting from already present fluctuations has a well-defined behaviour. In the bulk of the crystal (where the influence of the growth face can no longer be felt) there is a symmetry between the directions towards and away from the growth face. Therefore, $`l_{n1}(l_n)=l_n(l_{n1})`$; i.e. in the bulk of the crystal the dependence of the thickness of layer $`n1`$ on the thickness of layer $`n`$ is the same as the dependence of the thickness of layer $`n`$ on the thickness of layer $`n1`$. Given that at the minimum position of the growth face there is a fluctuation to a certain amount larger than $`l^{}`$, the layers behind this would therefore be expected to obey $`l_{n1}(l_n,T_1)`$. So, it is unsurprising that a plot of $`(l_j,l_{j1})`$ for $`j0`$ follows the fixed point attractor for $`T_1`$. From this it simply follows that a plot of $`(l_{j1},l_j)`$ for $`j0`$ follows the curve, $`l_n^{}(l_{n1},T_1)`$, formed by reflecting $`l_n(l_{n1},T_1)`$ in $`y=x`$ (Figure 7c). For $`j>0`$ $`(l_{j1},l_j)`$ jumps from $`l_n^{}(l_{n1},T_1)`$ onto the fixed point attractor for $`T_2`$. From Figures 5, 6b and 7c we can ascertain some of the effects of changing the size of the temperature increase. When the temperature jump is larger the growth face retreats further because a larger fluctuation in the thickness needs to be encountered to nucleate new growth. Furthermore, the larger temperature jump the more of the step is a result of new growth. Interestingly, for the smallest temperature increase the growth face retreats to a fluctuation in thickness which is on average larger than $`l^{}(T_2)`$. 
Therefore, the profile displays a lip (Figure 6b) and in the new growth the crystal thins slightly to reach the new $`l^{}`$. The effect of $`kT_m/ϵ`$ is somewhat similar to the effect of the magnitude of the temperature increase. A larger proportion of the step is the result of new growth when $`kT_m/ϵ`$ is smaller. Presumably, this is because there is less variation in the stem length for smaller values of $`kT_m/ϵ`$. ## IV Discussion In this paper we have seen that the step profiles obtained by simulations of temperature jumps using the SG model do reveal information about the fixed-point attractor that we have recently argued underlies the mechanism of polymer crystallization. This strengthens our suggestion that temperature jump experiments can provide insight into the physics of polymer crystallization. However, the step profiles for temperature decreases also reflect the rounding of the crystal edge and those for temperature increases also reflect the variations in the stem length that are present in the crystals. These additional features mean that the interpretation of experimental step profiles in terms of fixed-point attractor curves requires considerable care. This disadvantage is partly offset by the the fact that an experimental study of the step profiles could reveal information about aspects of polymer crystallization not originally anticipated. However, we note that it is not clear to what extent these additional features occur for the steps on real polymer crystals—they may just reflect some of the simplifications in the SG model. Firstly, although the crystal edge is always rounded in the SG model, there is, as far as we are aware, not yet any direct experimental evidence of this effect occurring in real polymer crystals under normal crystallization conditions. Secondly, in an alternative model of polymer crystallization, rounding only occurs at small supercoolings. Secondly, it may be that the SG model overestimates the roughness of the fold surface of polymer crystals. In the SG model changes in the length of a stem can occur only when that stem is at the growth face. Once the growth face has passed through a region the stem length variations that arise from the kinetics of growth are frozen in. However, it may be that in real crystals there are annealing mechanisms (which only require local motion of stems) that can act to reduce the magnitude of these fluctuations to their equilibrium values. Such processes might make the crystal growth following a temperature increase more similar to the growth from an initial crystal of constant thickness less than $`l^{}`$. Recent atomic-force microscopy experiments on the fold surface of polyethylene shed some light on this issue. Only for a minority of crystal could the folds be clearly resolved. It was suggested that the lack of clear order for the majority of crystals resulted from differences in the heights of the folds, i.e. there is some surface roughness. ###### Acknowledgements. The work of the FOM Institute is part of the research programme of ‘Stichting Fundamenteel Onderzoek der Materie’ (FOM) and is supported by NWO (‘Nederlandse Organisatie voor Wetenschappelijk Onderzoek’). JPKD acknowledges the financial support provided by the Computational Materials Science program of the NWO and by Emmanuel College, Cambridge.
# The Exact Solution to the Schrödinger Equation with the Octic Potential ## Abstract The Schrödinger equation with a central potential is first studied in arbitrary dimensional spaces, and an analogue of the two-dimensional Schrödinger equation for the radial wave function is obtained through a simple transformation. As an example, applying an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ to the eigenfunctions, we then arrive at an exact closed form solution to the modified two-dimensional Schrödinger equation with the octic potential, $`V(r)=ar^2-br^4+cr^6-dr^8+er^{10}`$. PACS numbers: 03.65.Ge. 1. Introduction It is well known that the general framework of nonrelativistic quantum mechanics is by now well understood; nevertheless, its predictions continue to be carefully tested against observations. It is of importance to know whether some familiar problems are a particular case of a more general scheme. For this purpose, it is worthwhile to study the Schrödinger equation in arbitrary dimensional spaces. This topic has attracted much attention from many authors \[4-8\]. For the arbitrary dimensional Schrödinger equation, it is straightforward to arrive at a simple analogue of the two-dimensional Schrödinger equation for the radial wave function through a simple transformation. On the other hand, exact solutions to the fundamental dynamical equations play an important role in physics. As we know, exact solutions to the Schrödinger equation are possible only for a few potentials, and approximation methods are frequently used to arrive at the solutions. Recently, the study of higher-order anharmonic potentials has attracted considerable interest from physicists and mathematicians \[10-12\], who want to understand a few newly discovered phenomena (for instance, structural phase transitions, polaron formation in solids and the concept of false vacua in field theory) in different fields of physics. Unfortunately, among these anharmonic potentials, not much work has been carried out on the octic potential except for a simpler study applying an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ to the eigenfunctions in three-dimensional space. With the recent wide interest in lower-dimensional field theory, however, it seems reasonable to study the two-dimensional Schrödinger equation with the octic potential. We have succeeded in dealing with the Schrödinger equation with some anharmonic potentials by this $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ \[15-17\]. Consequently, we attempt to study the two-dimensional Schrödinger equation with the octic potential, which, to our knowledge, has not appeared in the literature. The purpose of this paper is to present the modified Schrödinger equation in arbitrary dimensional spaces and to give a concrete application to the two-dimensional Schrödinger equation with the octic potential. The paper is organized as follows. Section 2 studies the Schrödinger equation with a central potential in arbitrary dimensional spaces and obtains an analogue of the two-dimensional Schrödinger equation for the radial wave function through a simple transformation. In Sec. 3, as an example, applying an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ to the eigenfunctions, we obtain an exact closed form solution to the modified two-dimensional Schrödinger equation with the octic potential. 2. The modified Schrödinger equation Throughout this paper the natural units $`\hslash =1`$ and $`\mu =1/2`$ are employed. Following the Refs.
, in the $`N`$ dimensional Hilbert spaces, the radial wave function $`\psi (r)`$ for the Schrödinger equation for the stationary states can be written as $$\left[\frac{d^2}{dr^2}+\frac{(N1)}{r}\frac{d}{dr}+(EV(r))\frac{\mathrm{}(\mathrm{}+N2)}{r^2}\right]\psi (r)=0,$$ $`(1)`$ where $`\mathrm{}`$ denotes the angular momentum quantum number. In order to make the coefficient of the first derivative vanish, we may furthermore define a new radial wave function $`R(r)`$ by means of the equation, $$\psi (r)r^\rho R(r),$$ $`(2a)`$ where $`\rho `$ is an unknown parameter and will be given in the following. Substituting Eq. (2a) into Eq. (1), we will arrive at an algebraic equation containing the parameter $`\rho `$ as $$2\rho +(N1)=0.$$ Consequently, the Eq. (2a) will be read as $$\psi r^{\frac{(N1)}{2}}R(r),$$ $`(2b)`$ which will lead to the radial wave function $`R(r)`$ satisfying $$\left\{\frac{d^2}{dr^2}\left[\mathrm{}(\mathrm{}+N2)+\frac{1}{4}(N1)(N3)+(EV(r))\right]\frac{1}{r^2}\right\}R(r)=0.$$ $`(3)`$ Through a simple deformation, $$\mathrm{}(\mathrm{}+N2)+\frac{1}{4}(N1)(N3)=\left[\mathrm{}+\frac{1}{2}(N2)\right]^2\frac{1}{4},$$ we may introduce a parameter $$\eta \mathrm{}+\frac{1}{2}(N2),$$ $`(4)`$ so that the Eq. (3) will be written as $$\left[\frac{d^2}{dr^2}+(EV(r))\frac{(\eta ^21/4)}{r^2}\right]R(r)=0,$$ $`(5)`$ which is our desired result. In other words, we have modified the Schrödinger equation in the arbitrary dimensional spaces into a simple analogy of the two-dimensional radial Schrödinger equation after introducing a parameter $`\eta `$ given in Eq. (4), which relies on a linear combination between $`N`$ and the angular momentum quantum number $`\mathrm{}`$. As mentioned above, we want to solve this modified Schrödinger equation with the octic potential in two dimensions applying an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ to the eigenfunctions in the next section. 3. An $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ to the eigenfunctions Consider the two-dimensional Schrödinger equation with a potential $`V(r)`$ that depends only on the distance $`r`$ from the origin $$H\psi (r,\phi )=\left(\frac{1}{r}\frac{}{r}r\frac{}{r}+\frac{1}{r^2}\frac{^2}{\phi ^2}\right)\psi (r,\phi )+V(r)\psi (r,\phi )=E\psi (r,\phi ),$$ $`(6)`$ where here and hereafter the potential $$V(r)=ar^2br^4+cr^6dr^8+er^{10},d<0.$$ $`(7)`$ Due to the symmetry of the potential, let $$\psi (r,\phi )=r^{1/2}R_m(r)e^{\pm im\phi },m=0,1,2,\mathrm{}.$$ $`(8)`$ It is easy to find from Eq. (5) that the radial wave function $`R_m(r)`$ satisfies the following radial equation $$\frac{d^2R_m(r)}{dr^2}+\left[EV(r)\frac{m^21/4}{r^2}\right]R_m(r)=0,$$ $`(9)`$ where the parameter $`\lambda =m=\mathrm{},N=2`$; $`m`$ and $`E`$ denote the angular momentum number and energy, respectively. For the solution of Eq. (9), we make an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$\[13-15\] for the ground state $$R_{m0}(r)=\mathrm{exp}[p_{m0}(r)],$$ $`(10)`$ where $$p_{m0}(r)=\frac{1}{2}\alpha r^2\frac{1}{4}\beta r^4+\frac{1}{6}\tau r^6+\kappa \mathrm{ln}r.$$ $`(11)`$ After calculating, we arrive at the following equation $$\frac{d^2R_{m0}(r)}{dr^2}\left[\frac{d^2p_{m0}(r)}{dr^2}+\left(\frac{dp_{m0}(r)}{dr}\right)^2\right]R_{m0}(r)=0.$$ $`(12)`$ Compare Eq. (12) with Eq. 
(9) as before and obtain the following set of equations $$\kappa (\kappa 1)=(m+1/2)(m1/2),\tau ^2=e,$$ $`(13a)`$ $$\beta ^2+2\alpha \tau =c,2\beta \tau =d,$$ $`(13b)`$ $$\alpha ^22\beta \kappa 3\beta =a,$$ $`(13c)`$ $$5\tau +2\tau \kappa 2\alpha \beta =b,$$ $`(13d)`$ $$E=\alpha (1+2\kappa ).$$ $`(13e)`$ It is not difficult to obtain the values of parameters $`\tau `$ and $`\kappa `$ from the Eq. (13a) written as $$\tau =\pm \sqrt{e},\kappa =m+1/2\mathrm{or}m+1/2.$$ $`(14)`$ In order to retain the well-behaved solution at the origin and at infinity, we choose positive sign in $`\tau `$ and $`\kappa `$ as $`m+1/2`$. According to these choices, the Eq. (13b) will give the other parameter values as $$\beta =\frac{d}{2\sqrt{e}},\alpha =\frac{d^24ce}{8e\sqrt{e}}.$$ $`(15)`$ Besides, it is readily to obtain from the Eqs. (13c) and (13d) that $$a=\frac{d^48ced^2+16c^2e^2+64de^2\sqrt{e}(\kappa +3/2)}{64e^3},$$ $`(16a)`$ $$b=\frac{8e^2\sqrt{e}(5+2\kappa )d(d^24ec)}{8e^2},$$ $`(16b)`$ which are the constraints on the parameters of the octic potential. The eigenvalue $`E`$, however, will be given by Eq. (13e) as $$E=\frac{(1+2\kappa )(d^24ce)}{8e\sqrt{e}}.$$ $`(17)`$ The corresponding eigenfunctions Eq. (10) can now be read as $$R_{m0}=N_0r^\kappa \mathrm{exp}\left[\frac{1}{2}\alpha ^2\frac{1}{4}\beta r^4+\frac{1}{6}\tau r^6\right],$$ $`(18)`$ where $`N_0`$ is the normalized constant and here and hereafter the parameters $`\alpha ,\beta `$ and $`\kappa `$ are the same as the values given above. As a matter of fact, the normalized constant $`N_0`$ can be calculated in principle from the normalized relation $$_0^{\mathrm{}}|R_{m0}|^2𝑑r=1,$$ $`(19)`$ which implies that $$N_0=\left[\frac{1}{\omega }\right]^{1/2},$$ $`(20)`$ where $$\omega _0^{\mathrm{}}r^{2\kappa }\mathrm{exp}\left[\alpha r^2\frac{1}{2}\beta r^4+\frac{1}{3}\tau r^6\right]𝑑r.$$ $`(21)`$ In this case, however, the normalization of the eigenfunctions becomes a very difficult task. Considering the values of the parameters of the potential, we fix them as follows. The value of parameter $`c,d,e`$ are first fixed, for example $`c=1.0,d=2`$ and $`e=4.0`$, the value of the parameters $`a`$ and $`b`$ are given by Eq. (11) for $`m=0`$. By this way, the parameters turn out to $`a=1.96,b=11.8,c=4.0,d=2.0,e=4.0`$ and $`\alpha =0.188,\beta =0.5,\tau =2`$. The ground state energy corresponding to these values is obtained as $`E=0.375`$. Actually, when we study the property of the ground state, as we know, the unnormalized radial wave function will not affect the main features of the wave function. We have plotted the unnormalized radial wave function $`R_{00}(r)`$ in fig. 1 for the ground state. To summarize, we first deal with the Schrödinger equation with the central potential in the arbitrary dimensional spaces and obtain an analogy of the two-dimensional Schrödinger equation for the radial wave function through a simple transformation. As an example, we obtain an exact closed solution to the Schrödinger equation with the octic potential using a simpler $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ and simultaneously two constrains on the parameters of the potential are arrived at from the compared equation. The other studies to the Schrödinger equation with the related anharmonic potential in two dimensions are in progress. Acknowledgments. This work was supported by the National Natural Science Foundation of China and Grant No. LWTZ-1298 from the Chinese Academy of Sciences.
# Spatial Variability in the Ratio of Interstellar Atomic Deuterium to Hydrogen. I. Observations toward 𝛿 Orionis by the Interstellar Medium Absorption Profile Spectrograph ## 1 Introduction The relative abundances of the light elements not only substantiate the standard interpretation for Big Bang Nucleosynthesis<sup>1</sup><sup>1</sup>1However see Gnedin & Ostriker (1992) and Burbidge & Hoyle (1998) for contemporary viewpoints that differ from this interpretation. (BBN) (Reeves et al. 1973; Epstein, Lattimer, & Schramm 1976), but they also hold the key for our determining the universal ratio of baryons to photons, commonly designated by the parameter $`\eta `$ (Boesgaard & Steigman 1985; Olive et al. 1990; Smith, Kawano, & Malaney 1993). There has been considerable interest in measuring the abundance of deuterium, since its production was strongly regulated by photodestruction in the radiation bath during the BBN, making D/H a strong discriminant of $`\eta `$. Deuterium is also destroyed in stars. After having passed through one or more generations of stars, diffuse gases that we can observe have probably had their deuterium abundances reduced to values below those that result from BBN. Thus it is important to observe systems that have different levels of chemical enrichment and mixing (Timmes et al. 1997), so that we can untangle the effects of the two fundamental destruction mechanisms, i.e., the photodestruction accompanying BBN and the astration of material as the universe matures. A key step in this area of research is to form a solid foundation of measurements of D in the chemically evolved gas in the disk of our Galaxy. Ultimately, when these results are combined with determinations for distant gas systems that have not aged as much, we expect to achieve a better understanding about the processing of gas through stars, which is interesting in its own right (Steigman & Tosi 1992, 1995; Dearborn, Steigman, & Tosi 1996; Scully et al. 1997; Tosi et al. 1998), and this in turn should allow us to extrapolate the concentration of D back to an era very soon after its primordial production. An important foundation in recognizing the relationship between D/H and some measure of stellar processing, such as the relative abundances of elements produced in stellar interiors, is that an empirical relationship between the two forms a unique sequence. If this turns out not to be true, then more elaborate interpretations of chemical evolution may be needed. To explore this issue, we have embarked on a program to revisit some lines of sight studied by other investigators (§2), but this time using much higher resolution spectra obtained with the Interstellar Medium Absorption Profile Spectrograph (IMAPS). In this paper, we investigate the spectrum of $`\delta `$ Ori A (HD 36486). This star has a spectral classification of O9.5 II, is a spectroscopic binary, and is a member of the Ori OB1 association that has a distance modulus of 8.5 ($`d=500\mathrm{pc}`$) (Humphreys 1978). In a companion paper (Sonneborn et al. 1999) we will report on results for $`\gamma ^2`$ Vel and $`\zeta `$ Pup. The basic properties of our spectrum of $`\delta `$ Ori and the instrument that recorded it are discussed in §3.1, followed by examinations of systematic errors that could arise in the determination of a very weak contamination signal (§3.2), the intensity of the scattered light background (§3.3), and absorption features from other species (§3.4). 
In §4.1 we describe how we obtained independent information on the shape of the velocity profile for material toward $`\delta `$ Ori, so that we could undertake our study with only a small number of unknown, free parameters. We have paid considerable attention to minimizing the errors and evaluating them in a fair and consistent manner (§4.2). For our value of $`N`$(D I) reported in §4.3 to be useful, we must compare it with $`N`$(H I), and we must strive to make the accuracy of the latter as good as or better than the former. In §5 we discuss our comprehensive investigation of the IUE archival data that show L$`\alpha `$ absorption in the spectrum of $`\delta `$ Ori. This special analysis combined with our determination of $`N`$(D I) ultimately led to our determination of the atomic D/H toward $`\delta `$ Ori reported in §6. We relate this result to the abundances of other elements relative to H in §7 and discuss its significance in §8. ## 2 Previous Measurements of Atomic D/H in the Galaxy Early measurements of D/H obtained from the Copernicus satellite (resolution $`15\mathrm{km}\mathrm{s}^{-1}`$ FWHM) and IUE (resolution $`25\mathrm{km}\mathrm{s}^{-1}`$ FWHM), summarized by Vidal-Madjar & Gry (1984), had a few cases that differed by more than the reported errors from a general average $`\mathrm{D}/\mathrm{H}\approx 1.5\times 10^{-5}`$. At face value, this suggested that D/H varies from one location to the next. McCullough (1992) revisited this problem and asserted that the evidence for such variations was not convincing. In making his claim that all of the data were consistent with a constant value for D/H, McCullough rejected all of the deviant cases on the grounds that the complexity of their velocity structures made the measurements much less accurate than originally claimed. The high resolution ($`2.5\mathrm{km}\mathrm{s}^{-1}`$ FWHM) and good sensitivity of the GHRS instrument on HST enabled an accumulation of very accurate observations of the interstellar L$`\alpha `$ H and D absorption features superposed on the broader chromospheric L$`\alpha `$ emission lines of nearby F, G and K type stars. The best determinations of D/H were those toward $`\alpha `$ Aur (Capella), where Linsky, et al. (1995) found that $`\mathrm{D}/\mathrm{H}=1.60_{-0.19}^{+0.14}\times 10^{-5}`$, and toward HR 1099, where Piskunov, et al. (1997) obtained $`\mathrm{D}/\mathrm{H}=1.46\pm 0.09\times 10^{-5}`$. The issue of whether or not atomic deuterium to hydrogen ratios toward other cool stars differ from these values has been an elusive one, although it seems clear that one could rule out deviations greater than about 50% in either direction (Wood, Alexander, & Linsky 1996; Dring et al. 1997; Piskunov et al. 1997). The chief problem has been that the measurements of $`N(\mathrm{H}\mathrm{I})`$ toward late-type stars were very dependent on assumptions about the shape of the underlying emission profile (Linsky & Wood 1996; Piskunov et al. 1997) or the compensations for additional, broad absorptions caused by hydrogen walls associated with the stellar wind cavities around either the Sun or the target stars (Linsky & Wood 1996; Wood & Linsky 1998). Even so, these investigations revealed some intriguing, convincing variations for the abundances of D I with respect to those of Mg II. Unfortunately, the significance of these changes is clouded by the possibility that they could result simply from alterations in the amount of depletion of Mg onto dust grains (Murray et al.
1984; Jenkins, Savage, & Spitzer 1986; Sofia, Cardelli, & Savage 1994; Fitzpatrick 1997). Lemoine, et al (1996) observed the interstellar H and D L$`\alpha `$ absorption features in the spectrum of the DA white dwarf G191$``$B2B with the GHRS and reported their determinations for D/H. Later, high-resolution observations by Vidal-Madjar, et al (1998) brought forth some refinements in the interpretation of the velocity structures of the absorption profiles, leading to a determination $`\mathrm{D}/\mathrm{H}=1.12\pm 0.08\times 10^5`$ for all of the material in front of this star. If one allows for the fact that a contribution from the Local Interstellar Cloud (LIC) is somewhat blended with those of more distant clouds and adopts the $`\alpha `$ Aur result for the LIC, D/H toward the other material could be of order $`9\times 10^6`$. This low value for D/H is supported by observations of the hot subdwarf BD $`+28\mathrm{°}4211`$ reported by Gölz, et al. (1998), $`\mathrm{D}/\mathrm{H}=8_4^{+7}\times 10^6`$, although the error bar is large enough to include the results obtained for $`\alpha `$ Aur, HR 1099, and other late-type stars. For lines of sight that have hydrogen column densities that are small enough to analyze using the L$`\alpha `$ profile, there is the danger that improper allowances for either L$`\alpha `$ emission (cool stars) or absorption (hot dwarfs) could lead to errors. Moreover, in some circumstances hydrogen walls associated with either the target stars or the Sun can lead to complications. One way to bypass these problems is to examine the higher Lyman series absorption features toward more distant, early-type stars with much more foreground material, as was done with the Copernicus satellite. We also have the benefit of sampling the interstellar medium well outside our immediate vicinity. However, a principal weakness of Copernicus was its limited resolving power ($`15\mathrm{km}\mathrm{s}^1`$ FWHM).<sup>2</sup><sup>2</sup>2Another drawback with Copernicus was that all observations had to be taken in sequence, since the spectrometer was a scanning instrument. Vidal-Madjar, et al. (1982) obtained inconsistent results for different Lyman series lines in the spectrum of $`ϵ`$ Per, an effect which they attributed to the influence of stellar features that varied with time. In large part, the Copernicus investigators had to model the instrumentally smeared, detailed velocity structure of the gas, with guidance from high-resolution observations of Na I features recorded from the ground. Unfortunately, the sodium D lines are a poor standard for comparison because their strengths are dependent on ionization equilibria that are entirely different from those of D I and H I. In this study we revisit the case for $`\delta `$ Ori, originally observed with Copernicus by Laurent, et al. (1979), but now with new observations taken with an instrument with considerably better velocity resolution than Copernicus. ## 3 Observations and Data Reduction ### 3.1 Basic Properties of the Spectra A far-UV spectrum of $`\delta `$ Ori over the wavelength interval 930 to 1150 Å was recorded in a series of exposures lasting 54 min over various observing intervals between 22 November and 3 December 1996 by the Interstellar Medium Absorption Profile Spectrograph (IMAPS). This series of observations was undertaken during the ORFEUS-SPAS II mission (Hurwitz et al. 1998) on STS-80, which was the second orbital flight of IMAPS. 
IMAPS is a simple, objective-grating echelle spectrograph that can record the far-UV spectrum of a bright, early-type star with sufficient resolution to show many of the velocity structures in the interstellar lines. Jenkins, et al. (1996) present a detailed description of the IMAPS instrument, its performance on the first ORFEUS-SPAS mission in 1993<sup>3</sup><sup>3</sup>3Improvements in IMAPS after the first flight removed most of the problems discussed by Jenkins, et al. (1996; in particular, see their §8.2). Most important, the severe changes in photocathode sensitivity that were evident on the first flight were not manifested on the second flight., and the methods of data correction and analysis. We summarize very briefly how the spectra are recorded by IMAPS: In any single exposure that covers an angle $`18\mathrm{}20\mathrm{}\times 14\mathrm{}40\mathrm{}`$, one-quarter of the echelle grating’s free spectral range and, nominally, diffraction orders 194 through 242 are recorded by an electron-bombarded CCD image sensor. This detector has an opaque photocathode on a smooth substrate and uses magnetic focusing to form the electron image on the CCD. Electrons impacting on the back side of the specially thinned CCD have an energy of 18 keV, and they produce enough secondary electrons within the silicon layer to make individual photoevents appear as bright spots. Each spot has an amplitude that is about 20 times greater than the combined noise from the readout amplifier and the random fluctuations in dark current. The CCD has a format of $`320\times 256`$ pixels, each of which is $`30\mu `$m square and subtends a $`\mathrm{\Delta }\lambda `$ equivalent to a Doppler shift of $`1.25\mathrm{km}\mathrm{s}^1`$. The echelle orders are separated by about 5 CCD pixels, but they are rather broad in the cross-dispersion direction. The CCD is read out 15 times a second. The video signals from successive frames are summed in an accumulating memory to produce the integrated spectral images. In our processing of these images after the flight, we subtracted dark-current comparison frames that were recorded at frequent intervals with the accelerating high voltage turned off. The effective area of IMAPS on the 1996 flight was about $`3\mathrm{cm}^2`$ at wavelengths longward of about 1020 Å, leading to typical signal-to-noise ratios of about 80 near the maximum of the echelle grating’s blaze angle for stars as bright as $`\delta `$ Ori A. However the Al+LiF coatings on the two gratings have a low reflection efficiency at shorter wavelengths, resulting in a factor of 10 lower effective area in the vicinity of the L$`\delta `$ and L$`ϵ`$ lines. This reduced efficiency coupled with the much lower flux at the centers of the stellar Lyman series lines made it especially difficult to achieve high values of signal-to-noise. We overcame this problem by recording a large number of spectra that could be added together. The total integrated flux at the continuum levels near L$`\delta `$ and L$`ϵ`$ amounted to about 600 photons for each CCD pixel width in the dispersion direction ($`1.25\mathrm{km}\mathrm{s}^1`$). Noise fluctuations in the spectra had $`rms`$ deviations about equal to 1/10 of the continuum levels, with the principal noise source being the multiple readouts of the CCD, rather than from photon-counting statistical errors. (This is clearly evident in Fig. 1, which shows a noise level at zero intensity to be about the same as that at the elevated intensity levels.) 
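As a rough, illustrative cross-check of the noise budget quoted above (not part of the original reduction), one can verify that ~600 detected photons per pixel would by themselves produce fluctuations of only a few percent, so the observed 10% rms must indeed be dominated by the CCD readout noise. The short Python sketch below simply combines the two numbers quoted in the text; the variable names are ours.

```python
import numpy as np

# Values quoted in the text: ~600 photons per 1.25 km/s pixel at the continuum
# near L-delta and L-epsilon, and an observed rms noise of ~1/10 of the continuum.
n_photons = 600.0        # detected photons per pixel width (continuum level)
observed_rms = 0.10      # observed fractional rms fluctuation

poisson_rms = 1.0 / np.sqrt(n_photons)                   # ~0.04: photon-counting limit
readout_rms = np.sqrt(observed_rms**2 - poisson_rms**2)  # assuming quadrature addition

print(f"photon-limited rms : {poisson_rms:.3f} of the continuum")
print(f"implied readout rms: {readout_rms:.3f} of the continuum")
# The readout term (~0.09) dominates, consistent with the statement that the
# multiple CCD readouts, not photon statistics, set the noise level here.
```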
We deliberately introduced offsets in position for the spectra in different sets of exposures. This was done to reduce the possibility that the spectrum could be perturbed by subtle flaws, such as CCD columns with anomalous responses or variations in photocathode efficiency with position (we could see no evidence for the latter however). Figure 1 shows the D and H absorption profiles for $`\delta `$ Ori at L$`\delta `$ and L$`ϵ`$. Observations of telluric atomic oxygen lines from excited fine-structure levels, seen elsewhere in the IMAPS spectrum of this star, indicated that the instrumental profile that governs the wavelength resolution of these observations was consistent with a Gaussian distribution having a FWHM<sup>4</sup><sup>4</sup>4See Jenkins & Peimbert (1997) for the details on how to arrive at this finding – their measurements for the IMAPS spectrum of $`\zeta `$ Ori A are not far from those that apply to our $`\delta `$ Ori spectrum. equal to $`4.0\mathrm{km}\mathrm{s}^1`$. At this resolving power, the deuterium features are well separated from their hydrogen counterparts, as can be seen in Fig. 1. While the deuterium L$`\delta `$ profile does show some asymmetry, to within the uncertainties of the noise there does not seem to be any extraordinary complexity in the velocity structure of the D profiles. For instance, there seems to be no evidence for any strong, narrow velocity components buried within the main peak of the L$`ϵ`$ feature. Most important, the D features are not badly saturated. ### 3.2 Possible Contamination Signal There is some overlap of signal from one echelle diffraction order to the next. Our optimal extraction routine was designed to compensate for this effect (Jenkins et al. 1996), but if this correction was not perfect we may have had some contamination by the spectral intensities from an adjacent order. We had to be especially watchful for this possibility in the vicinity of the Lyman series lines because the stellar continuum level in the region of interest is much lower than elsewhere. This gives the contamination signal an advantage over the signal we wished to study. For L$`ϵ`$ there is no problem because there are no spectral features in the orders either directly above or below the D and H features or their nearby continua. Errors in correcting for order overlap will only change the effective zero intensity level, which is corrected out anyway. The next higher order of diffraction that appears just below the one that contains the L$`\delta `$ absorptions is featureless, but, unfortunately, the order above the L$`\delta `$ order exhibits interstellar absorption features from the very strong multiplet of N I at 954 Å. In our investigation of the spectrum in the vicinity of L$`\delta `$, we allowed for the possibility that our correction for the order overlap was either too large or too small. This error could have added a spurious signal to our spectrum. Therefore, we included as a free parameter a scaling coefficient $`R_c`$ (which could be either positive or negative) for the amplitude of a correction signal (with the same shape) to cancel the possible residual contamination, and we allowed it to vary as we explored for minimum values of $`\chi ^2`$ (§§ 4.2 and 4.3). When this coefficient is less than 0.005 or larger than 0.03, unreasonably large perturbations can be seen in the bottom of the very broad hydrogen feature. Within this range, however, we allowed for the fact that our derivation of $`N`$(D I) could be influenced by the exact value. 
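Conceptually, the overlap correction amounts to adding (or subtracting, since $`R_c`$ may take either sign) a scaled template with the shape of the adjacent-order signal. The following minimal sketch illustrates the idea; the function and variable names are ours and do not come from the IMAPS extraction software.

```python
import numpy as np

def apply_overlap_correction(extracted_order, contam_template, r_c):
    """Remove a residual inter-order contamination signal.

    extracted_order : 1-D array, the extracted spectrum containing the D and H
                      L-delta features
    contam_template : 1-D array with the shape of the signal expected to leak in
                      from the adjacent order (here, the strong N I 954 A lines),
                      sampled on the same pixel grid
    r_c             : scaling coefficient; treated as a free (nuisance) parameter
                      in the chi^2 fit and allowed to be positive or negative
    """
    return np.asarray(extracted_order) - r_c * np.asarray(contam_template)
```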
Figure 1 shows the correction signal with the most plausible amplitude, as indicated by minimum $`\chi ^2`$ for $`R_c`$ at the most probable $`N`$(D I) given in Table 2. The spectrum that is shown in this figure has had this correction included. ### 3.3 Background Level Our analysis of the D profiles is very dependent on our having an accurate determination of the level of zero intensity. Sources of background illumination include not only grating scatter, but also a diffuse glow caused by a portion of the L$`\alpha `$ geocoronal background that is not fully rejected by a mechanical collimator at the instrument’s entrance aperture. (The detector’s dark count rate is negligible compared to these sources of background.) Fortunately, we could use the bottoms of the broad, heavily saturated H absorptions that accompany each D profile establish the position of the background level. In principle, the saturated portion of the H profile could mislead us if there were broad, shallow wings in the instrumental profile caused by scattering from the echelle grating. If this were the case, one could imagine that the local background level might increase slightly for wavelengths somewhat removed from the strong H feature. We can rule out this prospect on two grounds. First, before IMAPS was flown, we illuminated it in a vacuum tank with a collimated beam from a molecular hydrogen emission line source, and faint, very broad wings of the recorded emission lines could be seen only on the strongest features. The energy in these wings corresponded to 15% of the total, spread over several Å. The remaining 85% was within the main peak. Second, for $`\delta `$ Ori we found that for both L$`\delta `$ and L$`ϵ`$ the apparent depths of the D features in Copernicus scans (taken with an ordinary grating in first order) showed excellent agreement with those registered in the IMAPS spectrum after it had been degraded by convolving it with the Copernicus instrumental profile function \[a triangle with FWHM = 0.045 Å (Laurent, Vidal-Madjar, & York 1979)\]. For these two reasons, we feel confident that the apparent flux in the bottom of the H feature is, to within the uncertainties from noise, a good representation for the zero level under the deuterium line. ### 3.4 Interference from Other Lines Table 1 lists lines from species other than D I and H I that are in vicinity of the deuterium absorptions or the fragments of the spectrum that were used to define the continuum level (§4.3). The Werner 3$``$0 P(2) line lies within the H L$`\delta `$ feature, and thus it is of no importance. From other lines out of the $`J`$=4 level of H<sub>2</sub> that appear elsewhere in our IMAPS spectrum, we know that the 4$``$0 P(4) line of the Werner system should have a negligible strength. Again using other features in the IMAPS spectrum, we found that the remaining two lines shown in the table could perturb our spectra and influence our final results. We therefore felt it was necessary to estimate their strengths and then apply a correction to compensate for their presence. To estimate the strength of the 14$``$0 P(2) Lyman line of H<sub>2</sub>, we noted that the 4$``$0 R(2) line at 1051.498 Å had a maximum depth of 0.27 times the local continuum at $`v=23\mathrm{km}\mathrm{s}^1`$, and it was recorded in a part of our spectrum where the signal-to-noise ratio was about 80. This line has a value for $`f\lambda `$ that is 1.7 times that of the 14$``$0 P(2) line. 
To compensate for the effect of the latter on our continuum to the left of the D I L$`\delta `$ feature, we divided the observed spectrum by the continuum-normalized intensities in Lyman 4$``$0 R(2) profile all taken to the 1/1.7 power, with the profile shifted in wavelength to match that of the 14$``$0 P(2) line. For a template of the Fe II absorption, we used the line at 1081.875 Å that shows a maximum depth of 0.26 at $`v=25\mathrm{km}\mathrm{s}^1`$ recorded at S/N = 90 (this maximum for the 937.652 Å line falls within the H absorption) and a shoulder at $`v=12\mathrm{km}\mathrm{s}^1`$ with a depth of 0.12. This shoulder for the 937.652 line falls on top of a critical piece of continuum between the D I and H I L$`ϵ`$ features. The 1081.875 line has $`f\lambda `$ that is 0.82 times that of the interfering feature (Morton 1991), and once again this difference was taken into account when we made the correction. ## 4 Interpretation of the Data ### 4.1 Velocity Profile Template To derive the most accurate value for the column density of atomic deuterium $`N`$(D I), it is beneficial to use information from other species recorded at much higher S/N to help define more accurately the shape of the D I velocity profile. Profiles of O I and N I are probably the most suitable comparison examples for two reasons. First, these two elements have very mild depletions, if any, caused by the atoms condensing into solid form onto dust grains (Meyer, Cardelli, & Sofia 1997; Meyer, Jura, & Cardelli 1998). The column densities of N and O seem to track those of H over a diverse sample of regions (Ferlet 1981; York et al. 1983). As a consequence, it is unlikely that their velocity profiles will differ appreciably from that of D I. This is in contrast to the usual striking differences exhibited between elements that are mildly depleted, such as Na I, and elements that are strongly depleted, such as Ca II. The former is generally concentrated at lower velocities than the latter for a given line of sight (Routly & Spitzer 1952; Siluk & Silk 1974; Vallerga et al. 1993; Sembach & Danks 1994). Second, O I and N I have ionization potentials close to that of neutral hydrogen, and this close match in energy makes their susceptibility to ionization nearly the same and also insures that the cross sections for (nearly resonant) charge exchange are high (Field & Steigman 1971; Butler & Dalgarno 1979). For this reason, plus the consideration that whatever means there are for ionizing N and O will operate in much the same way for H (or D), we can generally regard the relative ionizations of oxygen and nitrogen to be good representations for that of D \[but for evidence to the contrary, see Vidal-Madjar et al. (1998)\]. This assumes, of course, that O and N are not being ionized appreciably to multiply charged states.<sup>5</sup><sup>5</sup>5We looked for absorption by the N III transition at 989.799 Å in our IMAPS spectrum of $`\delta `$ Ori A. No absorption was evident at $`v=25\mathrm{km}\mathrm{s}^1`$, but it was difficult to assign a quantitative upper limit because of interference from the nearby feature of Si II at 989.873 Å. In the wavelength coverage of IMAPS where we have a reasonably good S/N, there are no transitions from the O I ground state that are weak enough (and with known f-values) to yield absorption lines that we can analyze. There is, however, a good series of exposures in the HST archive<sup>6</sup><sup>6</sup>6Exposure identifications z2zb0304t, z2zb0305t and z2zb0306t. 
that cover the O I 1355.6 Å feature in the spectrum of $`\delta `$ Ori, recorded by the GHRS Echelle-A spectrograph. Unfortunately, the transition probability for this line is so weak that only the main peak in the velocity profile shows up above the noise. For N I, within the coverage of IMAPS there are three multiplets (at 952.4 Å, 953.8 Å and 954.1 Å) from the ground level that are ideal for studying the apparent distribution<sup>7</sup><sup>7</sup>7For the distinction between the apparent and true velocity distributions, see the discussions by Savage & Sembach (1991) and Jenkins (1996). In our study of $`\delta `$ Ori at high velocity resolution, it is probably safe to assume that the two are equal to each other. $`N_a(v)`$ of the nitrogen atoms with velocity, defined as $$N_a(v)=3.768\times 10^{14}\frac{\tau _a(v)}{f\lambda }\mathrm{cm}^{-2}(\mathrm{km}\mathrm{s}^{-1})^{-1},$$ (1) where the apparent optical depth $`\tau _a(v)`$ is a valid quantity to measure at velocities $`v`$ where the line is not badly saturated or, alternatively, not too weak. In this equation $`f`$ is the transition’s oscillator strength, and $`\lambda `$ is expressed in Å. In our study of the N I lines, we adopted f-values from the laboratory measurements of Goldbach, et al. (1992). For the triplet at 952.4 Å, the weakest feature at 952.523 Å is only moderately saturated. The other two features are heavily saturated but useful for revealing the weaker shoulder on the left-hand side of the main peak. The 4 much stronger features of N I in the vicinity of 954 Å are useful for defining accurately the behavior of $`N_a(v)`$ at velocities where it is below about $`5\times 10^{13}\mathrm{cm}^{-2}(\mathrm{km}\mathrm{s}^{-1})^{-1}`$. We derived a composite $`N_a(v)`$ profile for N I from the 7 lines using the method employed by Jenkins & Peimbert (1997) when they synthesized the profiles of H<sub>2</sub> in various $`J`$ levels toward $`\zeta `$ Ori A. Figure 2 shows the $`N_a`$ profiles that we derived for N I and O I. We chose to work with the N I profile in our interpretation of the D lines because it was of much better quality. This pragmatic reason for choosing N I as a template is contrary to the idealistic stance that O I would be a better match to D I, based on evidence from the absorption lines in the spectrum of G191-B2B recorded by Vidal-Madjar, et al. (1998) and calculations of partially ionized atomic gases by Sofia & Jenkins (1998). We note, however, that to within the noise fluctuations the significant part of the profile of O I is consistent with the main part of the profile of N I. There is a suggestion of an apparent inconsistency between N I and O I in the velocity range $`10<v<20\mathrm{km}\mathrm{s}^{-1}`$. While this may be true, we point out that this weak shoulder in the O I absorption, if it exists, may have been lost in the fitting of the continuum to the curvature of the stellar spectrum, which is larger than the expected size of the shoulder indicated by the N I profile. Also, the existence of some absorption to the left of a main peak is supported by the shape of the L$`\delta `$ deuterium profile shown in Figures 1 and 3. In principle, we should be cautious about possible contamination of the interstellar N I profile by nitrogen atoms in the Earth’s atmosphere above our orbital altitude of 295 km. 
Above this altitude, the MSIS-86 model atmosphere for solar minimum shows an exponential decrease in the density of nitrogen atoms with a scale height of 53 km, starting at a density of $`5.5\times 10^6\mathrm{cm}^{-3}`$ at 295 km (Meier 1991). For a zenith angle $`z`$ of 45°, we calculate that the telluric contribution to the observed $`N`$(N I) should be $`1.3\times 10^{13}\mathrm{cm}^{-2}`$, an amount that would be just about invisible in the representation shown in Fig. 2. Even at $`z=90\mathrm{°}`$, $`N(\mathrm{N}\mathrm{I})=1.3\times 10^{14}\mathrm{cm}^{-2}`$, which is just slightly larger than the bump (presumably due to noise) immediately to the right of the “(N I)” indication in the figure. Thus, we feel it is safe to dismiss the possibility that any telluric N I contamination is large enough to influence our profile. One important difference between D and either N or O is the atomic mass. If the thermal Doppler broadening for deuterium atoms is not very much less than that due to macroscopic motions, we would expect the D profiles to be broader than those of O and N. With the simplifying assumption that the temperature of the gas does not vary much from place to place, we expect that for nitrogen the observed distribution of the atoms with velocity is represented by the turbulent motions $`t(v)`$ convolved with the thermal Doppler profile, $`\varphi _D(m,T,v)`$, given by $$\varphi _D(m,T,v)=\sqrt{m/(2\pi kT)}\mathrm{exp}[-mv^2/(2kT)],$$ (2) with $`m`$ equal to 14 times the proton mass $`m_p`$. For convenience, we can include in $`t(v)`$ the instrumental smearing of the profile, on the condition that we are not being misled by saturated, unresolved structures in the absorption line (Savage & Sembach 1991; Jenkins 1996). The same relation holds for deuterium with $`m=2m_p`$. Since the convolution of two Gaussian distributions produces a third with a second moment equal to the sum of the two original ones, we can state that $$t(v)\otimes \varphi _D(2m_p,T,v)=t(v)\otimes \varphi _D(14m_p,T,v)\otimes \varphi _D(7m_p/3,T,v).$$ (3) When we analyzed the deuterium features, we adopted for a standard model of their shapes the nitrogen velocity profile convolved with the last term in Eq. 3. We allowed the temperature $`T`$ to be a free parameter that could influence the fit between the profiles of N I and D I (and one that is also of some astrophysical relevance). We did not allow for variations of $`T`$ among unrecognized and blended velocity substructures that contributed to the profile, since the identification of these components is somewhat arbitrary. Our goal was to account for the general modification of the profile due to the known differences in the effects of thermal and turbulent broadening. (For determining $`N`$(D I), the weakness of the dependence of the derived $`N`$(D I) with $`T`$ expressed in the endnote of Table 2 indicates that our simplification that $`T`$ is constant is probably safe.) Figure 2 shows our model for the deuterium profile (smooth, dashed line) for a value of $`T`$ that gave the minimum $`\chi ^2`$ for the preferred value of $`N`$(D I) in Table 2. This is a smoothed version of the N I profile (heavy, solid line) that was obtained from the convolution by the kernel $`\varphi _D(7m_p/3,T,v)`$ from Eq. 3. In addition to allowing $`T`$ to vary, we also allowed for the existence of a uniform velocity offset between N I and D I, in recognition of the possibility that either our wavelength scale or the laboratory wavelengths of the N I features had some small, systematic errors. 
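To make Eqs. (1)–(3) concrete, the sketch below shows one way to turn a continuum-normalized N I profile into $`N_a(v)`$ and then smooth it with the $`\varphi _D(7m_p/3,T,v)`$ kernel to form a model for the D I profile shape. This is only an illustration under simplifying assumptions (a uniform velocity grid, approximate physical constants); the function names are ours, and the code actually used for the analysis is not reproduced here.

```python
import numpy as np

KM_PER_S = 1.0e5      # cm s^-1 per km s^-1
M_P = 1.6726e-24      # proton mass [g]
K_B = 1.3807e-16      # Boltzmann constant [erg K^-1]

def apparent_column_density(norm_flux, f_osc, wavelength_A):
    """Eq. (1): N_a(v) = 3.768e14 * tau_a(v) / (f * lambda), in cm^-2 (km/s)^-1,
    for a continuum-normalized intensity that is neither saturated nor too weak."""
    tau_a = -np.log(norm_flux)                 # apparent optical depth
    return 3.768e14 * tau_a / (f_osc * wavelength_A)

def thermal_kernel(v_kms, mass_g, temperature_K):
    """Eq. (2): a normalized Gaussian with velocity variance kT/m, in km/s."""
    sigma_sq = K_B * temperature_K / mass_g / KM_PER_S**2
    phi = np.exp(-0.5 * v_kms**2 / sigma_sq)
    return phi / phi.sum()

def broaden_nitrogen_template(n_a_nitrogen, v_grid_kms, temperature_K):
    """Eq. (3): convolve the N I apparent column density profile with the kernel
    for an effective mass 7*m_p/3 to mimic the extra thermal broadening of D."""
    dv = v_grid_kms[1] - v_grid_kms[0]
    n_half = int(np.ceil(30.0 / dv))                 # kernel spans +/- 30 km/s
    v_kernel = dv * np.arange(-n_half, n_half + 1)   # symmetric about zero
    kernel = thermal_kernel(v_kernel, 7.0 * M_P / 3.0, temperature_K)
    return np.convolve(n_a_nitrogen, kernel, mode="same")
```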
### 4.2 Allowances for Random and Systematic Errors The presence of random errors due to noise fluctuations in the signal presents the usual challenge of determining the most probable result for $`N`$(D I) and permissible variations that still give an acceptable fit to the data. On top of this we must consider additional uncertainties caused by systematic errors. The ones that we can identify easily are the inaccuracies in defining the background level for both lines (§3.3) and the contamination signal in the L$`\delta `$ profile(§3.2). Additional parameters that could affect the outcome are the temperature of the gas $`T`$ through its effect in making the deuterium profile smoother than the N I template (§4.1), the difference in the zero points of the N I and D I velocity scales, and the adopted heights and slopes of the continuum levels over the deuterium L$`\delta `$ and L$`ϵ`$ absorption features. Our tactic for coping with these systematic errors was to express them in terms of simple parameters that could vary and then consider them in a unified analysis. Since these errors could be correlated, we felt that it would be unwise to analyze them separately. We determined how well the data conform to various combinations of parameter values by evaluating the conventional $`\chi ^2`$ statistic, $`[(I_{\mathrm{meas}}I_{\mathrm{exp}})/\sigma _I]^2`$, where $`I_{\mathrm{meas}}`$ is a measured intensity with uncertainty $`\sigma _I`$, and $`I_{\mathrm{exp}}`$ is the expected intensity given the specified set of parameter values. Useful discussions of how to interpret the determinations of $`\chi ^2`$ when there are many free parameters are given by Lampton, et al. (1976) and Bevington & Robinson (1992, p. 212). The basic scheme is to find the minimum value $`\chi ^2(\mathrm{min})`$ that arises when all parameters that influence $`I_{\mathrm{exp}}`$ are free to vary, and then examine the deviations $`\chi ^2\chi ^2(\mathrm{min})`$ as the parameters stray from their optimum values. Our $`\chi ^2`$ values represented a sum over both the L$`\delta `$ and L$`ϵ`$ features taken together. This approach is similar to one adopted by Burles & Tytler (1998a) in their analysis of the deuterium abundance in a quasar absorption line system, except that we decouple the hydrogen measurements (§5) from those of deuterium because they are fundamentally different from each other. To limit the number of degrees of freedom that apply to the confidence intervals for the outcomes that we are interested in, we segregated the free parameters into two fundamental categories. First, we recognized those parameters that had an astrophysical significance, $`N`$(D I) and $`T`$. We sought to find a confidence interval that constrained these two parameters simultaneously. While $`T`$ might seem to be an incidental parameter outside the objective of this study, there were good physical reasons for our verifying that no appreciable portion of the probability density wandered above or below acceptable limits. The second category contained parameters that were of no particular interest to us, i.e., nuisance parameters, but ones that had to be allowed to change freely as we re-minimized the $`\chi ^2`$ for every new trial combination of $`N`$(D I) and $`T`$. We had no profound reason to require that any of these variables in the second category be constrained, and thus we could consider a projection of the lowest $`\chi ^2`$ values in the multi-dimensional space of these variables onto just the $`N`$(D I)$`T`$ plane. 
This allowed us to restrict the number of the degrees of freedom (df) that applied to the confidence intervals down to only 2. In summary, parameters that mattered were (1) $`N`$(D I) and (2) $`T`$, while those that did not were (3) the coefficient $`R_c`$ for scaling the contamination signal (§3.2), (4) a relative velocity error $`\mathrm{\Delta }v_{\mathrm{N},\mathrm{D}}`$ between the N I profile template (§4.1) and the deuterium absorption, (5 and 6) the background levels in the bottom of the H L$`\delta `$ and L$`ϵ`$ features, and finally (7 through 10) the two coefficients that described the continuum straight lines spanning the D L$`\delta `$ and L$`ϵ`$ features, i.e., in each case the level near the middle of the D feature and the slope of the line.<sup>8</sup><sup>8</sup>8While one might argue that the continuum is not straight and has a curvature produced by the damping wings of the H I absorption, this effect is probably small enough to be masked by a curvature in the opposite direction caused by the broad stellar hydrogen feature. For the value of $`N`$(H I) given in §5, the damping wing absorbs only 9% of the flux at the D L$`\delta `$ line and 2% at the D L$`ϵ`$ line. Moreover, the appearance of the continuum suggests that a straight-line fit is justified – see Fig. 3. ### 4.3 Determination of $`N`$(D I) and its Uncertainty From our knowledge of the CCD readout noise and dark current combined with the statistical fluctuations in the (background + signal) photons, we made an initial estimate for the uncertainties in the individual measurements of intensity $`\sigma _I`$ at each velocity. We determined the correlation length for these errors by comparing fluctuations of intensity at a velocity $`v`$ with those of $`v+\mathrm{\Delta }v`$. The correlations disappear for $`\mathrm{\Delta }v=1.25\mathrm{km}\mathrm{s}^1`$, which is exactly the width of each pixel in the CCD.<sup>9</sup><sup>9</sup>9This is not a trivial finding. In an electron-bombarded CCD image sensor, correlation lengths greater than a CCD pixel can result if the diameter of the secondary electron cloud from each event is of order or larger than a CCD pixel (Jenkins et al. 1988). Events straddling a pixel boundary, for instance, will create a correlated signal in both pixels that pick up the secondary charges. Evidently such events are not common enough to cause a statistically significant effect, otherwise we would have found correlation lengths greater than $`1.25\mathrm{km}\mathrm{s}^1`$. Our determinations of $`\chi ^2`$ discussed below relied on intensities separated by this value for $`\mathrm{\Delta }v`$. The most important terms in the summation for $`\chi ^2`$ are those that are directly influenced by trial values of $`N`$(D I) and $`T`$, through differences over the (deuterium line) velocity interval $`10`$ to $`+40\mathrm{km}\mathrm{s}^1`$ between measured intensities $`I_{\mathrm{meas}}`$ and the computed values of $`I_{\mathrm{exp}}`$, the expected absorption profile multiplied by a local continuum level. At the same time, parameters that define the continuum contribute to $`\chi ^2`$ through the deviations between $`I_{\mathrm{meas}}`$ and straight-line extrapolations over the intervals ($`100`$ to $`10`$, +40 to $`+47\mathrm{km}\mathrm{s}^1`$) for L$`\delta `$ and ($`55`$ to $`10`$, +40 to $`+50\mathrm{km}\mathrm{s}^1`$) for L$`ϵ`$ (see Fig 3). 
Likewise, $`\chi ^2`$ is influenced by modifications in the background zero level that must be subtracted from the raw intensities at all velocities: we allowed the sum to include deviations away from zero for the background-corrected fluxes over the heavily saturated portion of the H profile from +60 to $`+140\mathrm{km}\mathrm{s}^1`$ (for the D line heliocentric velocity scales shown in Figs. 1 and 3). Finally, fluxes over all velocities in the L$`\delta `$ profile can be modified by the contamination correction signal, whose amplitude was allowed to vary as we searched for a minimum $`\chi ^2`$ in each case. We had a total of 320 independent intensity measurements to constrain the 10 free parameters listed at the end of §4.2, so we should insist that the minimum $`\chi ^2`$ agree with a reasonable expectation for $`\mathrm{df}=310`$. In fact, with our original estimate for the noise in the measurements, we arrived at a minimum $`\chi ^2=225`$, a value that was unreasonably low. In later calculations, we rescaled this noise level by a factor $`\sqrt{225/278}=0.90`$, since we had a 90% confidence that the minimum $`\chi ^2`$ should be at least equal to 278 for df = 310. We felt that it was legitimate for us to perform a post facto rescaling of the noise, because our original estimate was accurate to only a level of about 25%. This rescaling is a conservative one, because it’s more probable that the minimum $`\chi ^2`$ should really be about equal to 310. If we had used 310 instead of 278 in the expression for the noise multiplication factor, we accordingly would have found tighter limits for $`N`$(D I) because the $`\chi ^2`$ expressions would have increased more rapidly as we deviated away from the most probable $`N`$(D I). To find the minimum $`\chi ^2`$, we used Powell’s method of converging to the minimum of a multi-dimensional function (Press et al. 1992, p. 406). After finding this minimum and noting the most probable $`N`$(D I), we then evaluated the confidence interval for $`N`$(D I) by forcing this parameter to vary, but at the same time allowing the other 9 parameters to adjust to new minima in $`\chi ^2`$. Our target values for the new minima corresponded to $`\chi ^2(\mathrm{min})+4.6`$ and $`\chi ^2(\mathrm{min})+9.2`$ for the 90% and 99% confidence limits (i.e., “1.65$`\sigma `$” and “2.58$`\sigma `$” deviations), respectively, where $`\chi ^2(\mathrm{min})`$ is the overall minimum at the preferred value of $`N`$(D I) as shown in Table 2. This exercise ultimately led to the limiting values for $`N`$(D I) listed in the table. Over the full range of $`N`$(D I) between the most extreme limits, the temperature $`T`$ was the only parameter that showed any profound change. For this reason, $`T`$ is also listed. Our result for the most probable $`N`$(D I) is in near perfect agreement with the value $`\mathrm{log}N(\mathrm{D}\mathrm{I})=15.08`$ reported by Laurent, et al (1979) in their investigation that led to a value D/H = $`7\times 10^6`$ using data from Copernicus. Fig. 3 shows the observed deuterium profiles along with the expected absorption profiles (upper and lower boundaries of the crosshatched regions) whose shapes are determined by the shape of the N I profile (heavy, solid line in Fig. 2) after it has been smoothed to allow for possible extra thermal Doppler motions that would be expected for the lighter atoms (dashed line in Fig. 2). 
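The fitting strategy just described — re-minimize $`\chi ^2`$ over all of the nuisance parameters at each trial value of $`N`$(D I), and accept the range over which $`\chi ^2\chi ^2(\mathrm{min})`$ stays below 4.6 (90%) or 9.2 (99%) for two interesting parameters — can be sketched schematically as follows. This uses SciPy’s Powell minimizer as a stand-in for the implementation used in the paper, and `model_profile` represents the full profile model of §§4.1–4.2; none of these names come from the original code.

```python
import numpy as np
from scipy.optimize import minimize

def chi_squared(params, v, intensity, sigma, model_profile):
    """chi^2 = sum [(I_meas - I_exp)/sigma]^2 for one trial parameter vector.
    `model_profile(params, v)` must return the expected intensities I_exp."""
    i_exp = model_profile(params, v)
    return np.sum(((intensity - i_exp) / sigma) ** 2)

def n_d_confidence_interval(v, intensity, sigma, model_profile,
                            p0, n_d_grid, delta_chi2=4.6):
    """Scan trial values of N(D I) (taken to be the first parameter).  At each
    value, re-minimize chi^2 over the remaining nuisance parameters with
    Powell's method, then keep the range where chi^2 <= chi^2_min + delta_chi2
    (4.6 for a 90% region when two parameters are of interest)."""
    p0 = np.asarray(p0, dtype=float)

    def best_chi2_at(n_d):
        fun = lambda q: chi_squared(np.concatenate(([n_d], q)),
                                    v, intensity, sigma, model_profile)
        return minimize(fun, p0[1:], method="Powell").fun

    profile = np.array([best_chi2_at(n_d) for n_d in n_d_grid])
    chi2_min = profile.min()
    allowed = n_d_grid[profile <= chi2_min + delta_chi2]
    return allowed.min(), allowed.max(), chi2_min
```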
Basically, apart from the thermal smearing, there is no evidence that there are deviations between the nitrogen and deuterium velocity profiles. In Fig. 3 we also illustrate with a dashed line the depth and shape of the expected profile if $`N(\mathrm{D}\mathrm{I})`$ were as high as $`2.34\times 10^{15}\mathrm{cm}^{-2}`$, a value that would give $`\mathrm{D}/\mathrm{H}=1.5\times 10^{-5}`$ as seen elsewhere (§2) if $`N(\mathrm{H}\mathrm{I})=1.56\times 10^{20}\mathrm{cm}^{-2}`$ (§5 below). When $`N(\mathrm{D}\mathrm{I})`$ is forced to this large value, $`\chi _{\mathrm{min}}^2`$ occurs at $`T\sim 300`$ K. This value seems unrealistically low, in view of the evidence that the 0–1 rotational temperature of H<sub>2</sub> toward $`\delta `$ Ori A is 1625 K (Savage et al. 1977). Thus, we set a constraint $`T=1625`$ K (but allowed other free parameters to float) when we constructed the dashed line in Fig. 3. For this case, $`\chi ^2-\chi _{\mathrm{min}}^2=36.3`$, which is clearly unacceptable.

## 5 A Redetermination of $`N`$(H I)

Published values of $`N`$(H I) based on moderate resolution recordings of the L$`\alpha `$ absorption in the spectrum of $`\delta `$ Ori range from $`1.25_{-0.28}^{+0.33}\times 10^{20}\mathrm{cm}^{-2}`$ (Jenkins 1970), based on photographic spectra recorded on sounding rocket flights, to $`1.7\pm 0.34\times 10^{20}\mathrm{cm}^{-2}`$ (Bohlin, Savage, & Drake 1978) from a spectrum recorded by Copernicus. Since $`\delta `$ Ori is a hot star (O9.5 II), the stellar H I absorption line makes a negligible contribution to $`N`$(H I).<sup>10</sup><sup>10</sup>10Diplas & Savage (1994) have estimated that the equivalent H I column density caused by stellar absorption for $`\delta `$ Ori is 10<sup>17.6</sup> cm<sup>-2</sup>. Therefore correcting for the stellar L$`\alpha `$ line changes log $`N`$(H I) by only 0.01 dex. The accuracies of these measurements are satisfactory for studies of general trends, but for measuring D/H, and in particular to investigate the possible spatial variability of D/H, we must strive for a precision in $`N`$(H I) that is as good as or preferably better than that for $`N`$(D I). The H absorption features shown in Fig. 1 are of no use in determining $`N`$(H I) because the lines are heavily saturated, and most of the absorption is by small amounts of hydrogen with velocities well displaced from the line core. The damping wings of these lines are too weak to measure. In contrast, the L$`\alpha `$ feature has very strong wings, ones that make this feature the least susceptible of all the Lyman series lines to any contributions from high-velocity wisps of H I that do not produce detectable counterparts in D I absorption. It is for this reason that we concluded that the L$`\alpha `$ feature was the best indicator of $`N`$(H I). Spectra of $`\delta `$ Ori obtained with the International Ultraviolet Explorer (IUE) in high-dispersion mode (FWHM $`\sim `$ 25 km s<sup>-1</sup>) were particularly attractive for our study of the L$`\alpha `$ feature for several reasons. First, the alternative was to use archival Copernicus data (L$`\alpha `$ was not recorded by IMAPS or HST), but the only available Copernicus data with sufficiently broad wavelength coverage of L$`\alpha `$ were obtained with the low-resolution U2 detector, and for this detector uncertainties are introduced by stray light from the vent hole (Rogerson et al. 1973), an effect that requires a special correction (Bohlin 1975) of uncertain accuracy. 
Second, a large number of observations of $`\delta `$ Ori obtained under slightly different observing conditions (e.g., small aperture vs. large aperture) over the course of many years are available from the IUE archive. By analyzing all of the IUE exposures rather than a single observation, we can validate our estimates of random errors and also increase our chances of exposing systematic errors in the derivation of $`N`$(H I). Finally, our ability to combine many observations allowed us to reduce the effects from random noise by brute force. A potentially important source of systematic error is the fact that $`\delta `$ Ori is a complex multiple-star system. The primary (component A) is a single-lined spectroscopic binary that is important in the history of ISM research: stationary Ca II absorption features in its spectrum provided the earliest evidence of interstellar gas (Hartmann 1904). The velocity amplitude of the binary is $`98\mathrm{km}\mathrm{s}^1`$, and it has a period of 5.7 days (Harvey et al. 1987). It is also a partially eclipsing binary (Koch & Hrivnak 1981), and it has a visual companion of comparable brightness at a separation of $`0\stackrel{}{\mathrm{.}}2`$ (Heintz 1980; McAlister & Hendry 1982). The spectroscopic binary nature of the star system presents an opportunity to investigate systematic errors in the determination of the interstellar $`N`$(H I): the H I L$`\alpha `$ profile is extremely broad, and if there are unrecognized stellar lines in the principal part of the Lorentz wings of the L$`\alpha `$ feature, then their additional optical depth could lead to an overestimate of $`N`$(H I). However, such stellar lines should move in velocity as the binary traverses its orbit, and this may lead to different values of $`N`$(H I) when observations made at different times are analyzed. If this occurs, then we should see $`N`$(H I) change as a function of the spectroscopic binary phase. Similarly, we can check for systematic changes in $`N`$(H I) when the multiple star enters the partial eclipses over the phase intervals 0.9$``$0.1 and 0.4$``$0.6. While any dependence on phase may uncover an influence that stellar features have on the outcome, there is no guarantee that they do not perturb our result in a manner that is uniform over all phases. According to the NSSDC archive, there are 59 IUE observations of $`\delta `$ Ori obtained with the Short Wavelength Prime (SWP) camera in the high-dispersion echelle mode. We screened these observations for saturated exposures, missing data, or other problems and rejected two of the observations, leaving 57 spectra for our analysis. Using the standard IUE RDAF software, we selected the spectral regions of interest from the standard IUESIPS data rather than the NEWSIPS reductions since there are a number of problems with NEWSIPS processing as applied to high dispersion spectra that could adversely affect our analysis (Massa et al. 1998). In the vicinity of the L$`\alpha `$ line the orders on the IUE detector are closely spaced, and scattered light from adjacent orders overlaps in the interorder region causing an incorrect background subtraction and zero intensity level when using the standard software. We used the method of Bianchi & Bohlin (1984) to correct for this problem. 
Also, in some cases, a velocity shift was applied to the IUE data based on the position of the N I triplet at 1200 Å compared to that expected from the IMAPS N I profile (§4.1) with optical depths rescaled to account for the stronger transition probabilities. This should register all of the IUE data to the correct velocity scale to an accuracy of better than $`\pm `$5 km s<sup>-1</sup>, which is more than adequate for a determination of $`N`$(H I) since we are fitting both of the strong damping wings of the L$`\alpha `$ profile (an error of 5 km s<sup>-1</sup> in the velocity scale zero point changes $`N`$(H I) by an insignificant amount). IUE data contain a number of perturbations in addition to the usual photon counting noise. They show strong fixed pattern noise, “hot spots” which mimic emission features, and drop-outs where the reseaux used to correct for camera distortions happen to fall on the spectrum (Harris & Sonneborn 1987). In addition, the spectra occasionally show artifacts at the transitions between echelle orders due to errors in the ripple correction (see below). Again, by analyzing all of the IUE data, we can reduce the impact of these noise sources, which are present in some of the observations and are not apparent in others. We ascertained that there was no persistent nonlinearity in the photometric response of IUE by comparing its L$`\alpha `$ profiles of HD 93521 and HD 74455 with those recorded for the same stars by the GHRS on HST with the medium-resolution grating (G160M). Departures from the GHRS spectra near the breaks in the IUE echelle orders seem to come and go, but aside from the greater random noise in the IUE spectra, the spectra are usually very similar to each other. In simplest terms, our means for constraining the H I column density followed a method introduced by Jenkins (1971) and used later by Bohlin (1975), Bohlin, et al. (1978), Shull & van Steenberg (1985), and Diplas & Savage (1994): we determined the $`N`$(H I) that provides the best fit to the L$`\alpha `$ profile with the optical depth $`\tau `$ at a given wavelength $`\lambda `$ calculated from the expression $$\tau (\lambda )=N(\mathrm{H}\mathrm{I})\sigma (\lambda )=4.26\times 10^{-20}N(\mathrm{H}\mathrm{I})(\lambda -\lambda _0)^{-2}$$ (4) (Jenkins 1971), where $`\lambda _0`$ is the L$`\alpha `$ line center at the velocity centroid of the hydrogen. However, we went a step further by employing the technique used to estimate $`N`$(D I) in §4.3, i.e., we first determined the important free parameters that could be adjusted to fit the H I L$`\alpha `$ absorption profile, then we found the set of parameters that minimized $`\chi ^2`$ using Powell’s method, and finally we set confidence limits on the H I column density by increasing (or decreasing) $`N`$(H I) with the other parameters freely varying until $`\chi ^2`$ increased by the appropriate amount for the confidence limit of interest. Figure 4 shows a sample IUE spectrum in the vicinity of the L$`\alpha `$ absorption line. From this figure one can see that the continuum is close to linear in this region. However, it is possible that the continuum has a slight downward or upward curvature, so we assumed a second-order polynomial to describe the continuum and allowed the $`\chi ^2`$ minimization process to determine the continuum shape that provided the best fit to the L$`\alpha `$ profile. 
Therefore the free parameters for fitting the H I profile were $`N`$(H I), three coefficients that specify the second-order continuum polynomial, and a simple additive correction to the intensity zero point.<sup>11</sup><sup>11</sup>11Despite our use of the Bianchi & Bohlin (1984) correction, in many cases inspection of the flat-bottomed, saturated portion of the L$`\alpha `$ profile showed that the zero intensity level was not quite correct, so we included a zero point shift as a free parameter and used intensity points within the saturated core as one of the collections of terms for calculating $`\chi ^2`$. We point out that the continuum placement is constrained not only by the fits to regions that are far removed from the L$`\alpha `$ feature, but also by requiring a good match to the shapes of its damping wings. For instance, if the continuum is badly placed or has too much upward or downward curvature, then a poor fit results. In particular, if we artificially forced the continuum to have a downward curvature (in an experimental challenge to lower $`N`$(H I) and thus provide a higher D/H), we obtained clearly inferior fits. We found that in the course of our minimizing $`\chi ^2`$ that we could always simultaneously obtain a good fit to the profile and match the outlying fluxes with a nearly flat continuum. Figure 5 shows two examples of H I L$`\alpha `$ profiles observed with IUE, along with the computed profiles for the lower and upper bounds on $`N`$(H I) at the 90% confidence level for each case. The final continua corresponding to the upper and lower bounds are plotted with dashed lines. Panel ($`a`$) shows a typical spectrum, while ($`b`$) shows examples of artifacts at $`\lambda `$ 1214.4 and 1224.8 Å that we encountered at the transitions between IUE echelle orders. Assuming these to be artifacts due to the ripple correction, we used only the sides of the L$`\alpha `$ profile in the wavelength ranges 1209.0$``$1213.5 Å and 1217.14$``$1223.0 Å and thereby excluded these artifacts from the $`\chi ^2`$ calculation. This procedure resulted in bounding profiles such as those shown in both of the panels of the Figure. However, as an experiment we also processed all of the IUE data including the region at $``$ 1214.4 Å in the $`\chi ^2`$ calculation in order to evaluate the importance of this effect on the final results (see below). It is important to note that the great strength of the L$`\alpha `$ profile makes the Lorentzian wings dominate over the effects of instrumental or Doppler broadening. Gas that is known to exist in the vicinity of the Orion association at high velocities ($`v100\mathrm{km}\mathrm{s}^1`$) (Cowie, Songaila, & York 1979) should not be important, since the absorptions from any wisps of H I at such velocities are displaced by only about 0.3 Å relative to the line center. Figure 6 shows the logarithms of the derived H I column densities with their 1$`\sigma `$ error bars, plotted as a function of the spectroscopic binary phase, for all of the IUE SWP data except the two rejected exposures. We calculated the phase using the period and $`T_0`$ derived by Harvey, et al. (1987) from their analysis of all suitable data from 1902$``$1982 (including some of the IUE data used here). There are no obvious systematic trends as function of spectroscopic binary phase evident in this plot, which gives us some assurance that weak stellar lines do not significantly affect the derived $`N`$(H I). 
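A schematic version of the L$`\alpha `$ wing fit described above is given below, assuming the damping-wing optical depth of Eq. (4), a second-order polynomial continuum, and an additive zero-point correction as the free parameters. It is a sketch for illustration only; the wavelength masks, artifact handling, and error model used for the actual IUE analysis are not reproduced, and the function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

LYA_CENTER = 1215.67   # approximate L-alpha rest wavelength [Angstroms]

def lya_tau(wavelength_A, n_h1, lambda0=LYA_CENTER):
    """Eq. (4): tau(lambda) = 4.26e-20 * N(H I) * (lambda - lambda0)^-2.
    Valid in the damping wings, away from the saturated core."""
    return 4.26e-20 * n_h1 / (wavelength_A - lambda0) ** 2

def model_intensity(params, wavelength_A):
    """Second-order polynomial continuum times exp(-tau), plus a zero-point shift."""
    n_h1, c0, c1, c2, zero = params
    x = wavelength_A - LYA_CENTER
    continuum = c0 + c1 * x + c2 * x ** 2
    return continuum * np.exp(-lya_tau(wavelength_A, n_h1)) + zero

def fit_n_h1(wavelength_A, intensity, sigma, p0):
    """Minimize chi^2 over N(H I), the continuum coefficients, and the zero point,
    using only pixels in the damping wings and outlying continuum regions."""
    chi2 = lambda p: np.sum(
        ((intensity - model_intensity(p, wavelength_A)) / sigma) ** 2)
    return minimize(chi2, p0, method="Powell")
```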
In the figure, large and small aperture data are shown with different symbols to check for any systematic differences, and no differences are readily apparent. With only the large aperture data we derive a mean H I column density of $`<N`$(H I)$`>`$ = 1.54$`\times 10^{20}`$ cm<sup>-2</sup> with an rms dispersion $`\sigma `$ = $`0.08\times 10^{20}`$ \[both quantities are weighted inversely by the variances of the individual $`N`$(H I) estimates\]. With the small aperture data, we obtain $`<N`$(H I)$`>`$ = 1.59$`\times 10^{20}`$ cm<sup>-2</sup> with $`\sigma `$ = $`0.11\times 10^{20}`$. Therefore it appears appropriate to combine the large and small aperture data to constrain $`N`$(H I). Using the entire IUE data set, we obtain $`<N`$(H I)$`>`$ = 1.56$`\times 10^{20}`$ cm<sup>-2</sup> with an rms scatter equal to $`0.09\times 10^{20}`$. Since there are 57 measurements, the error in the mean = $`\sigma /\sqrt{57}`$ = 0.01$`\times 10^{20}`$ for the whole data set. We find that including the region that spans the ripple correction artifact at 1214.4 Å shown in Figure 5(b) lowers the overall result by less than 0.01 dex. This is because the feature is present in only a small fraction of the IUE spectra. The scatter in Figure 6 appears to be due entirely to the uncertainties from noise in the individual measurements: the value for $`\chi ^2=_i\left\{\left[N_i(\mathrm{H}\mathrm{I})N(\mathrm{H}\mathrm{I})\right]/\sigma \left[N(\mathrm{H}\mathrm{I})\right]_i\right\}^2`$ calculated for the entire data set is 50.32, which implies a reduced $`\chi ^2`$, $`\chi ^2/56=0.90`$. Nevertheless, given the many potential sources of systematic error in these particular IUE data, it is still possible that there are some unrecognized systematic errors which affect N(H I) and that the real error in the mean is underestimated. However, the good fits to the H I L$`\alpha `$ profiles (see Fig. 5) and the lack of pronounced variations with binary phase indicate that such unrecognized systematics are not likely to be large.<sup>12</sup><sup>12</sup>12In their study of an observation of L$`\alpha `$ absorption in the spectrum $`\mu `$ Col, Howk, Savage and Fabian (1999) estimated a systematic error of 0.02 dex for a determination of $`N(\mathrm{H}\mathrm{I})`$ somewhat less than $`10^{20}\mathrm{cm}^2`$. This relative error is about half of the amount that would be needed to have any appreciable impact on the relative errors for our D/H toward $`\delta `$ Ori A given in §6. Although we must acknowledge that their GHRS spectrum is of much better quality than any that were taken by IUE, their independent estimate that gives a low value for the magnitude of a systematic error in this type of measurement is encouraging. It is reassuring to note that a constant, systematic error in the result for $`N(\mathrm{H}\mathrm{I})`$ would need to be at least 15 times as large as the formal (random) error before it could have a meaningful effect on the overall error for D/H derived in §6 below. In the light of our deliberate attempt to uncover such systematic errors in the study of $`N(\mathrm{H}\mathrm{I})`$ vs. orbital phase, we are confident that they could not have such a large numerical advantage, which means they are unlikely to be a critical issue in this investigation. Finally, to illustrate more graphically our confidence in the H I result, we show with dotted and dash-dot damping profiles in Fig. 
5(a) the expected appearance of the L$`\alpha `$ absorption if our $`N`$(D I) derivations are correct but $`N`$(H I) were low enough to make $`\mathrm{D}/\mathrm{H}=1.5\times 10^{-5}`$, as indicated for other lines of sight (§2). There seems to be no question that the real data are strikingly inconsistent with these lower values for $`N`$(H I).

## 6 D/H toward $`\delta `$ Ori and its Significance

Combining the results reported in §4.3 and §5, we find that with 90% confidence we can declare that $`N(\mathrm{D}\mathrm{I})/N(\mathrm{H}\mathrm{I})=7.4_{-1.3}^{+1.9}\times 10^{-6}`$ in the direction of $`\delta `$ Ori A. The most noteworthy feature of this result is that it differs from most determinations of D/H along other lines of sight in our local region of the Galaxy (§2). It is clear that our value for D/H represents a deviation, even if one relies only on the HST observations that generally concentrate within the range $`\mathrm{D}/\mathrm{H}=1.3`$–$`1.7\times 10^{-5}`$ in the very local medium and rejects the Copernicus results because their accuracies may have been overstated. Our result shows a simple velocity structure for the D I, N I and O I absorption profiles and thus removes the primary uncertainty that confronted Laurent, et al. (1979). It also removes the grounds on which McCullough (1992) rejected the measurements of deuterium toward stars in Orion.

## 7 Abundances of Heavier Elements

Variations in the heavy element abundances of stars with similar ages, such as B stars in the Orion association (Cunha & Lambert 1994) or F and G type stars at a Galactocentric radius nearly the same as the Sun (Edvardsson et al. 1993), suggest that the ISM out of which the stars formed may be a heterogeneous mixture of gases with different levels of heavy element enrichment. This may possibly result from random dilutions of gas in the Galactic plane by metal-poor material falling in from the halo (Meyer et al. 1994). If the material in the direction of $`\delta `$ Ori had a lower deuterium abundance because it had been subjected to more intensive stellar processing or less of this dilution, we would expect the heavy element abundances to be higher than elsewhere. For our test of this proposition, we will examine the abundances of oxygen and nitrogen, both of which are unlikely to be appreciably altered by depletions onto dust grains (Meyer, Cardelli, & Sofia 1997; Meyer, Jura, & Cardelli 1998). O I and N I are also good standards because their ionizations are closely coupled to that of H I (§4.1), and this allows us to neglect the higher stages of ionization because they should be identified only with ionized H and D. O and N are also useful for comparisons with abundance studies elsewhere in the Universe. These investigations are facilitated by the generous number of transitions in the ultraviolet with a broad range of $`f`$ values (Timmes et al. 1997). \[Argon is another element that is expected to have very little depletion, but it is not suitable for comparison because under many circumstances its ionization can differ appreciably from that of H (Sofia & Jenkins 1998).\] Meyer, et al. (1998) found from their measurement of the intersystem O I transition at 1356 Å that toward $`\delta `$ Ori $`\mathrm{O}/\mathrm{H}=2.82\pm 0.46\times 10^{-4}`$ (from the same data that we used to construct the profile shown in Fig. 2). This value<sup>13</sup><sup>13</sup>13The value for $`N`$(H I) adopted by Meyer, et al. 
(1998) was $`1.6\times 10^{20}\mathrm{cm}^{-2}`$ which is very close to the result $`1.56\times 10^{20}\mathrm{cm}^{-2}`$ that we derived in our more precise analysis (§5). is slightly less than their average of $`3.19\pm 0.14\times 10^{-4}`$ over 13 lines of sight. Within the experimental errors, however, the value is consistent with the average. Meyer, et al. (1997) found that $`\mathrm{N}/\mathrm{H}=7.5\pm 0.4\times 10^{-5}`$ toward 6 stars. Since $`\delta `$ Ori was not part of this sample, we must rely on our own measurement of N I which yields $`\int N_a(v)\,dv=6.2\times 10^{15}\mathrm{cm}^{-2}`$ – see §4.1 and the N I profile in Fig. 2. With our result for $`N`$(H I), we arrive at $`\mathrm{N}/\mathrm{H}=4.0\times 10^{-5}`$. Thus, we see no evidence that the gas has been specially enriched with material having a high abundance of heavy elements and, by virtue of a more intensive exposure to stellar interiors, a more thorough depletion of deuterium. In fact, the possible positive correspondence in the deviations in the N and D abundances is reminiscent of the suggested correlation (but with large errors) between D/H and Zn/H shown by York & Jura (1982). Of course, we must acknowledge that for nitrogen a comparison of our result for $`\delta `$ Ori with the general measurements of Meyer, et al. (1997) for other lines of sight may be compromised by errors in f values, since we used different transitions to determine $`N(\mathrm{N}\mathrm{I})`$.

## 8 Discussion

Our finding presented in §6 indicates that the most probable D/H toward $`\delta `$ Ori is about half as large as that found from various HST investigations of the local ISM. Our result applies to an average over a range of velocities, which means that it represents a lower limit for the magnitude of deviations from the other cases. This anomaly is not linked with an increase in O/H or N/H, as one might expect from a simple explanation that a greater fraction of the gas had been cycled through stellar interiors. Of course, it may be possible to envision that the gas toward $`\delta `$ Ori holds an unusually large fraction of material that has cycled only through the outer envelopes of stars, thus depleting the D without increasing the concentrations of heavier elements. Recent observations of HD emission in the infrared from gas near Orion seem to confirm our finding that the abundance of deuterium is low in this region. Wright, et al. (1999) detected emission from the HD $`J=1\rightarrow 0`$ transition at $`112\mu \mathrm{m}`$ toward the Orion Bar with the Long Wavelength Spectrometer on board the Infrared Space Observatory (ISO). While there are some uncertainties in the rotation temperature of HD and the correction factors that must be applied to observations of the accompanying H<sub>2</sub>, they arrived at a preferred value $`\mathrm{HD}/\mathrm{H}_2=2.0\pm 0.6\times 10^{-5}`$ which leads to $`\mathrm{D}/\mathrm{H}=1.0\pm 0.3\times 10^{-5}`$ since there should be no appreciable D or H in atomic form. Their total range for D/H could be as large as 0.35 to $`1.30\times 10^{-5}`$ however. Bertoldi, et al. (1999) used the Short Wavelength Spectrometer on ISO and found a weak emission from the $`J=6\rightarrow 5`$ transition at $`19.4\mu \mathrm{m}`$ for HD in the Orion molecular outflow OMC-1. Again, corrections using information from models of the gas had to be made to interpret the results. Bertoldi, et al. (1999) concluded that $`\mathrm{D}/\mathrm{H}=7.6\pm 2.9\times 10^{-6}`$. 
The measurements of HD by both groups seem to lead to results that are consistent with our determination of atomic D/H toward $`\delta `$ Ori A. Very distant gas systems that are registered in quasar absorption line spectra reveal apparent values of D/H that range from $`3`$–$`4\times 10^{-5}`$ (Burles & Tytler 1998a, b) to $`2\times 10^{-4}`$ (Songaila et al. 1994; Carswell et al. 1996; Rugers & Hogan 1996; Wampler et al. 1996; Webb et al. 1997; Tytler et al. 1999) – see reviews by Burles & Tytler (1998c) and Hogan (1998). The large dispersion in these outcomes might be attributable either to complications that arise from our incomplete knowledge of the chemical evolution of systems at large redshifts, the difficulty in obtaining accurate values of $`N`$(H I), or to the presence of random, weak, H I systems that masquerade as deuterium by having a velocity offset that is equal to about $`80\mathrm{km}\mathrm{s}^{-1}`$ from a main system. One might suppose that, in time, additional observations that include new cases or new data on existing ones may lead to a better understanding of the behavior of D/H in the Universe. Unfortunately, this optimistic belief may have been dealt a setback by our result that indicates that D/H could be driven by a process that we do not understand. Essentially, we see evidence that the ratio changes over a distance scale where the environment should be homogeneous, according to observations of other elements and generally accepted simple models for a galaxy’s chemical evolution and mixing rates in the ISM. A few proposals to explain possible deviations in the balance of atomic D to H in the interstellar medium have been considered in the past. The simplest involves the selective incorporation of D into HD, an effect that can amplify HD/H<sub>2</sub> to values well above the fundamental ratio of D to H (Watson 1973), but one that is probably counterbalanced by the more rapid photodissociation of HD in diffuse clouds because there is no self shielding (as there often is with H<sub>2</sub>). A preferential formation of HD is not responsible for the depletion of atomic D toward $`\delta `$ Ori A, since $`\mathrm{log}N(\mathrm{HD})<12.8`$ (Spitzer, Cochran, & Hirshfeld 1974) (likewise, we see no HD features in our IMAPS spectrum). Another alternative, one advanced by Vidal-Madjar, et al. (1978) and Bruston et al. (1981), makes use of the differences in the ways that D and H can respond to radiation pressure, as a result of the very different opacities in the Lyman lines. They proposed that this effect could lead to a separation of the two species if there were a density gradient and a nonisotropic radiation field. Finally, Jura (1982) has suggested that deuterium atoms could collide with dust grains, stick to them, and then be more strongly bound than hydrogen. Furthermore, he suggests that the mobility of the D atoms on the surfaces of these grains could be much lower than that of H, thus limiting the chances for combining with H atoms and being ejected as HD. One possible way to investigate the plausibility of this hypothesis might be to look for D–C or D–O stretch mode absorption features in dense clouds. We look forward to the possibility that insights on the relationship for the variability in D/H to other interstellar parameters could arise from the anticipated large increase of information that should come from the Far Ultraviolet Spectroscopic Explorer (FUSE) after its launch in mid-1999.
For the local ISM, the FUSE Principal Investigator Team has identified as potential targets<sup>14</sup><sup>14</sup>14Details are given in the NASA Research Announcement for FUSE, dated Feb 9, 1998 (NRA 98-OSS-02), or else see http://fusewww.gsfc.nasa.gov/fuse/. 7 cool stars, 19 white dwarfs, 9 late B- or early A-type stars, and 7 central stars of planetary nebulae for studying D/H in the first two years of operations. The ORFEUS-SPAS project was a joint undertaking of the US and German space agencies, NASA and DARA. The successful execution of our observations was the product of efforts over many years by engineering teams at Princeton University Observatory, Ball Aerospace Systems Group (the industrial subcontractor for the IMAPS instrument) and Daimler-Benz Aerospace (the German firm that built the ASTRO-SPAS spacecraft and conducted mission operations). Contributions to the success of IMAPS also came from the generous efforts by many members of the Optics Branch of the NASA Goddard Space Flight Center (grating coatings and testing) and from O. H. W. Siegmund and S. R. Jelinsky at the Berkeley Space Sciences Laboratory (deposition of the photocathode material). This research was supported by NASA grants NAG5–616 to Princeton University and NAG5–3539 to Villanova University. We thank K. R. Sembach for supplying an IDL routine to apply the Bianchi & Bohlin correction to the IUE data. We also thank A. Vidal-Madjar, B. T. Draine and B. D. Savage for their helpful comments on early drafts of this paper. The O I absorption feature at 1355 Å was observed by the NASA/ESA Hubble Space Telescope. This spectral segment was obtained from the data archive at the Space Telescope Science Institute, operated by AURA under NASA contract NAS5-26555. The IUE data were obtained from the National Space Science Data Center (NSSDC) at NASA’s Goddard Space Flight Center.
# Magnetoresistance, Micromagnetism, and Domain Wall Scattering in Epitaxial hcp Co Films ## Acknowledgments The authors thank Peter M. Levy for helpful discussions of the work and comments on the manuscript. We thank M. Ofitserov for technical assistance. This research was supported by DARPA-ONR, Grant # N00014-96-1-1207. Microstructures were prepared at the CNF, project #588-96. Corresponding author: andy.kent@nyu.edu
# Exponents and the Cohomology of Finite Groups ## 1 Introduction Throughout this paper, we will use the integers $`ℤ`$ as coefficients for cohomology groups unless otherwise specified and will write $`H^{*}()`$ for $`H^{*}(;ℤ)`$. It is well known that for $`G`$ a finite group, the integral cohomology groups $`H^{*}(G)`$ are finitely generated in each dimension and are annihilated by $`|G|`$ in positive dimensions. (Here $`|G|`$ stands for the order of $`G`$.) Thus if we define $`\overline{H}(G)=\bigoplus _{i=1}^{\infty }H^i(G)`$, we have $`|G|\overline{H}(G)=0`$. ###### Definition 1.1. Given a group $`G`$, we define the exponent of $`G`$ as $`exp(G)=min\{n\ge 1:g^n=1,\forall g\in G\}`$. We use the convention that $`exp(G)=\infty `$ if the set that we are minimizing over is empty. ###### Definition 1.2. For a finite group $`G`$, $`e(G)`$ is defined to be $`exp(\overline{H}(G))`$. It follows easily that $`e(G)\mid |G|`$. Let $`p`$ be a prime and $`P`$ be a finite $`p`$-group. It is known that the value of $`e(P)`$ (which will be a power of $`p`$) contains information about the structure of $`P`$. In particular there is the following theorem of A. Adem: ###### Theorem 1.3 (A. Adem). If $`P`$ is a finite $`p`$-group, then $`e(P)=p`$ if and only if $`P`$ is an elementary abelian $`p`$-group. (Also note that if $`P`$ is a $`p`$-group, then $`e(P)=1`$ if and only if $`P=1`$. See for example page 149 of \[B\].) For a general $`p`$-group $`P`$, finding $`e(P)`$ can be quite difficult. Therefore we define another related quantity which is sometimes easier to calculate. ###### Definition 1.4. For $`P`$ a finite $`p`$-group, $`e_{\infty }(P)=min\{n\ge 1:n\overline{H}(P)\text{ is finite}\}.`$ Notice that $`e_{\infty }(P)`$ will be a power of $`p`$ and $`e_{\infty }(P)\mid e(P)\mid |P|`$. ###### Remark 1.5. It is easy to see that for $`C=ℤ/p^n`$, the cyclic group of order $`p^n`$, one has $`exp(C)=e_{\infty }(C)=e(C)=|C|=p^n.`$ A question one is led to ask is, does $`e_{\infty }(P)=e(P)`$? This is part of a conjecture of A. Adem stated on page 438 of \[C\]: ###### Conjecture 1.6. If $`S`$ is a finite group, and $`H^i(S)`$ contains elements of order $`p^n`$ for some $`i`$ then it does so for infinitely many $`i`$. In particular for a $`p`$-group $`P`$, $`e_{\infty }(P)=e(P)`$. This conjecture is true in certain cases as is seen in the next proposition: ###### Proposition 1.7. Let $`P`$ be a $`p`$-group. If $`e_{\infty }(P)=1`$ then $`e(P)=1`$ and $`P=1`$. If $`e_{\infty }(P)=p`$ then $`e(P)=p`$ and $`P`$ is elementary abelian. ###### Proof. The first part follows from standard Nakayama-Rim Theory. (see page 140 of \[B\].) The second part follows essentially from the theorem of A. Adem stated before. (See \[A\] or \[Le\].) ∎ However, it turns out the conjecture is false in general. (At least when $`p`$ is odd.) The main purpose of this paper is to provide a counterexample to the conjecture which will be done in the next section. However before doing this, let us prove a few basic properties of $`e_{\infty }(P)`$. ###### Proposition 1.8. If $`P_1\le P_2`$ where $`P_2`$ is a $`p`$-group then $`e_{\infty }(P_1)\mid e_{\infty }(P_2)`$. ###### Proof. By the Evens-Venkov Theorem, $`H^{*}(P_1)`$ is a finitely generated $`H^{*}(P_2)`$-module, say with generators $`x_1,\ldots ,x_n\in H^{*}(P_1)`$. Let $`t=max\{dim(x_i):1\le i\le n\}`$ and $`s`$ be such that $`e_{\infty }(P_2)H^i(P_2)=0`$ for $`i>s`$. Then it is easy to see that for $`j>s+t`$, we have $`e_{\infty }(P_2)H^j(P_1)=0`$.
Thus $`e_{\infty }(P_2)\overline{H}(P_1)`$ is finite and $`e_{\infty }(P_1)\mid e_{\infty }(P_2)`$. ∎ ###### Corollary 1.9. If $`P`$ is a finite $`p`$-group, then $$exp(P)\mid e_{\infty }(P)\mid e(P)\mid |P|.$$ Furthermore, there are examples of $`p`$-groups which show that these quantities are different in general. ###### Proof. For the first part, it only remains to show $`exp(P)\mid e_{\infty }(P)`$. Notice there is a cyclic subgroup $`C`$ of $`P`$ of size $`exp(P)`$. Thus by proposition 1.8, one has $`exp(P)=|C|=e_{\infty }(C)\mid e_{\infty }(P)`$. Thus we have the first part. When $`P`$ is elementary abelian of rank greater than one, then $`e(P)=p`$ is not equal to $`|P|`$. When $`P`$ is extraspecial with $`exp(P)=p`$, then by the theorem of A. Adem quoted before, we can see $`e_{\infty }(P)\ne p`$ and hence $`e_{\infty }(P)`$ is not equal to $`exp(P)`$. Finally, the counterexample in the next section gives an example of a $`p`$-group $`P`$ where $`e_{\infty }(P)`$ is not equal to $`e(P)`$. ∎ ###### Definition 1.10. Let $`S(p^n)`$ be the Sylow $`p`$-group of the symmetric group on $`p^n`$ letters. ###### Remark 1.11. $`exp(S(p^n))=e_{\infty }(S(p^n))=e(S(p^n))=p^n`$. ###### Proof. The cyclic group $`ℤ/p^n`$ acts faithfully on itself by left multiplication and hence embeds in the symmetric group on $`p^n`$ letters. Thus $`S(p^n)`$ has a cyclic subgroup of order $`p^n`$ and hence $`p^n\mid exp(S(p^n))`$. So in light of corollary 1.9, it remains only to show that $`e(S(p^n))\mid p^n`$. This follows by an induction. When $`n=1`$, $`S(p^n)`$ is cyclic of order $`p`$, so this case follows. The induction proceeds by noting that $`S(p^n)`$ is isomorphic to the wreath product of $`S(p^{n-1})`$ with $`ℤ/p`$. Thus $`S(p^n)`$ has a subgroup of index $`p`$ which is isomorphic to the direct product of $`p`$ copies of $`S(p^{n-1})`$, and hence $`e(S(p^n))\mid pe(S(p^{n-1}))\mid p\cdot p^{n-1}=p^n`$ by an easy transfer argument. ∎ This allows us to prove the following lemma (given in \[Le\], we include a proof for completeness). ###### Lemma 1.12. Let $`P`$ be a finite $`p`$-group; if the intersection of all subgroups of $`P`$ of index $`p^n`$ is trivial then $`e_{\infty }(P)\mid p^n`$. ###### Proof. Let $`H_1,\ldots ,H_k`$ be the distinct subgroups of index $`p^n`$ in $`P`$. Then for each $`1\le i\le k`$, $`P`$ acts on the left cosets of $`H_i`$ and this gives us a homomorphism $$\varphi _i:P\rightarrow S(p^n),$$ with kernel lying in $`H_i`$. Putting these homomorphisms together we get a homomorphism from $`P`$ into $`L`$ which is the direct product of $`k`$ copies of $`S(p^n)`$ and this is injective as its kernel is the intersection of the kernels of the $`\varphi _i`$ maps for all $`1\le i\le k`$ which is trivial by assumption. Thus $`P`$ can be considered a subgroup of $`L`$ and from the remark above, $`e_{\infty }(L)=p^n`$ and hence the lemma follows from proposition 1.8. ∎ Neither lemma 1.12 nor proposition 1.8 is true if we replace $`e_{\infty }()`$ with $`e()`$. For example, the group $`G(𝔰𝔩_2)`$, provided as a counterexample in the next section, embeds in a direct product $`L`$ of a few copies of $`S(p^2)`$ because the intersection of its index $`p^2`$ subgroups is trivial. Of course $`e(L)=p^2`$, however $`e(G(𝔰𝔩_2))=p^3`$. ## 2 The Counterexample Throughout this section, $`p`$ will be an odd prime. We will be looking at a certain central extension: $$1\rightarrow W\rightarrow G\rightarrow V\rightarrow 1$$ where $`V,W`$ are elementary abelian $`p`$-groups.
It is well-known that there is a bracket $`\langle ,\rangle :V\times V\rightarrow W`$ and a $`p`$-power map $`\varphi :V\rightarrow W`$ which are defined using the commutator and $`p`$-power in the group $`G`$ (see \[BrP\]). These are an alternating bilinear form and a linear map respectively. We will assume as in \[BrP\] that $`\varphi `$ is an isomorphism and hence identify $`V`$ and $`W`$. It was shown there that such groups $`G`$ are in natural correspondence with bracket algebras (Lie algebras minus the Jacobi identity) over $`𝔽_p`$. In particular, to every such bracket algebra, there exists a unique such group corresponding to it. To get the corresponding bracket algebra one just forms $`[,]:V\times V\rightarrow V`$ by composing $`\langle ,\rangle `$ with the inverse of $`\varphi `$. Hence either $`V`$ or $`W`$ can be considered as the bracket algebra. Now recall we have the Lie algebra $`𝔰𝔩_2`$ which is a 3-dimensional algebra over $`𝔽_p`$. We can choose a basis $`\{h,x_+,x_{-}\}`$ which then has bracket given by: $`\begin{array}{cc}\hfill [h,x_+]& =2x_+\hfill \\ \hfill [h,x_{-}]& =-2x_{-}\hfill \\ \hfill [x_+,x_{-}]& =h\hfill \end{array}`$ Let $`G=G(𝔰𝔩_2)`$ be the group associated to this Lie algebra as mentioned above. It follows easily then that $`G`$ has exponent $`p^2`$ and order $`p^6`$. It was shown in the last section of \[BrP\] that $`e(G)=p^3`$ and in fact there are elements of order $`p^3`$ in $`H^4(G)`$. We will show that the intersection of all subgroups of index $`p^2`$ in $`G`$ is trivial and hence that $`e_{\infty }(G)\mid p^2`$ by lemma 1.12. One can conclude that $`e_{\infty }(G)=p^2`$, as $`G`$ is not elementary abelian. Since we know as previously remarked that $`e(G)=p^3`$, this means $`H^{*}(G)`$ is annihilated by $`p^2`$ in all sufficiently high dimensions but not in all positive dimensions, and hence gives us the counterexample we sought! So it remains to show that the intersection of all subgroups of index $`p^2`$ in $`G`$ is trivial. Well we have $$1\rightarrow W\rightarrow G\rightarrow V\rightarrow 1$$ where $`W`$ and $`V`$ can both be identified with $`𝔰𝔩_2`$. Note the preimage of any 1-dimensional subspace of $`V`$ is a subgroup of index $`p^2`$ in $`G`$ and the intersection of these is in $`W`$, so the intersection of all index $`p^2`$ subgroups lies in $`W`$. To show the intersection of the index $`p^2`$ subgroups is in fact trivial, we need a lemma: ###### Lemma 2.1. Every 2-dimensional sub(Lie)algebra $`S`$ of $`𝔰𝔩_2`$, considered as a subset in $`V`$, has a subgroup $`K`$ of index $`p^2`$ in $`G`$ lying over it. Furthermore the intersection of $`K`$ and $`W`$ is just $`\varphi (S)`$ where $`\varphi `$ is the p-power map mentioned before. ###### Proof. Let $`\widehat{x}`$ and $`\widehat{y}`$ be a basis for the 2-dimensional subalgebra and lift them to $`x,y\in G`$. Let $`K`$ be the subgroup they generate; then $`K`$ maps down to the subalgebra under the projection map to $`V`$ and hence lies over it. It remains to show $`K`$ has index $`p^2`$. Well any element $`t`$ in $`K`$ can be written as a product of $`x,y,x^{-1},y^{-1}`$ in some combination or other. As the commutator of $`x`$ and $`y`$ is central we can move all the $`x`$’s to the leftmost and all the $`y`$’s to the right of them leaving a bunch of commutators of $`x`$’s and $`y`$’s and their inverses on the right. However if we let $`L`$ be the image under the $`\varphi `$ p-power map of the subalgebra $`S`$ then $`L`$ is generated by $`x^p`$ and $`y^p`$ so is in $`K`$, is central and contains these commutators.
This means that each commutator can be written as a product of some power of $`x^p`$ and some power of $`y^p`$ and as these are central we can move them to join the other powers of $`x`$ and $`y`$ respectively. The upshot is that any $`t\in K`$ can be written $`t=x^ly^s`$ for some integers $`l,s`$. However $`x,y`$ have order $`p^2`$ so we see the order of $`K`$ is at most $`p^4`$, but we see easily that it has order at least $`p^4`$ so $`K`$ has order $`p^4`$ and is of index $`p^2`$. The intersection of $`K`$ with $`W`$ is obviously $`L`$. ∎ If we view $`W`$ as $`𝔰𝔩_2`$ then we have already argued that the intersection of all subgroups of index $`p^2`$ in $`G`$ lies in $`W`$. To show this intersection is trivial, in light of the lemma above, it suffices to show that the intersection of 2-dimensional Lie subalgebras in $`𝔰𝔩_2`$ is trivial. We do this next and then will be done. Note $`\{h,x_+\}`$ generates a 2-dimensional subalgebra of $`𝔰𝔩_2`$ and so does $`\{h,x_{-}\}`$. Their intersection is the 1-dimensional subspace spanned by $`h`$. So we only need to show that there is a 2-dimensional subalgebra which does not contain $`h`$. Now let $`\alpha =-4^{-1}`$ in $`𝔽_p`$. Consider $`S`$ the subspace generated by $`\{h+x_+,\alpha h+x_{-}\}`$. Then we have: $`\begin{array}{cc}\hfill [h+x_+,\alpha h+x_{-}]& =-2\alpha x_+-2x_{-}+h\hfill \\ & =-2\alpha (h+x_+)-2(\alpha h+x_{-})+(1+4\alpha )h\hfill \\ & =-2\alpha (h+x_+)-2(\alpha h+x_{-})\hfill \end{array}`$ as $`4\alpha =-1`$. So we see $`S`$ is a 2-dimensional sub(Lie)algebra of $`𝔰𝔩_2`$. However it is easy to see that $`h`$ is not in $`S`$. Thus we are done as mentioned before.
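The closure computation above is also easy to verify mechanically. The following small script is an illustrative sketch added here (it is not part of the original argument); for a few odd primes it checks that the span of $`\{h+x_+,\alpha h+x_{-}\}`$ with $`\alpha =-4^{-1}`$ is closed under the standard $`𝔰𝔩_2`$ bracket and does not contain $`h`$:

```python
# Sketch: verify that S = span{u1, u2}, with u1 = h + x_plus and
# u2 = alpha*h + x_minus, alpha = -1/4 mod p, is a subalgebra of sl_2(F_p)
# not containing h.  Vectors are coefficient triples (c_h, c_plus, c_minus) mod p.

def bracket(u, v, p):
    """Bracket on sl_2: [h,x+]=2x+, [h,x-]=-2x-, [x+,x-]=h, extended bilinearly."""
    ah, ap, am = u
    bh, bp, bm = v
    c_plus = 2 * (ah * bp - bh * ap)     # coefficient of x_plus
    c_minus = -2 * (ah * bm - bh * am)   # coefficient of x_minus
    c_h = ap * bm - am * bp              # coefficient of h
    return (c_h % p, c_plus % p, c_minus % p)

def in_span(w, u1, u2, p):
    """Brute-force check whether w lies in the F_p-span of u1 and u2."""
    return any(
        all((a * x + b * y - z) % p == 0 for x, y, z in zip(u1, u2, w))
        for a in range(p) for b in range(p)
    )

for p in (3, 5, 7, 11):
    alpha = (-pow(4, -1, p)) % p              # alpha = -1/4 in F_p
    u1, u2 = (1, 1, 0), (alpha, 0, 1)         # h + x_plus, alpha*h + x_minus
    w = bracket(u1, u2, p)
    assert in_span(w, u1, u2, p)              # S is closed under the bracket
    assert not in_span((1, 0, 0), u1, u2, p)  # h is not in S
    print(f"p = {p}: S is a 2-dimensional subalgebra not containing h")
```

This is only a finite check for small primes, of course; the algebraic argument above covers every odd prime.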
# The Boson Peak in Amorphous Silica: Results from Molecular Dynamics Computer Simulations ## I Introduction In the last few years various scattering techniques, such as neutron, Raman and X–ray scattering, have been used to investigate the so–called boson peak, a vibrational feature, which is found in the frequency spectra of many, typically strong, glass formers at a frequency of about $`1`$ THz boson\_peak . In this context various mechanisms giving rise to this peak have been proposed, such as certain localized vibrational modes or scattering of acoustic waves, and also simple models have been developed that produce an excess over the Debye behavior in the density of states boson\_peak\_th . Especially in the case of silica molecular dynamics computer simulations have recently been used in order to gain insight into the nature of the boson peak taraskin97ab ; dellanna97 ; horbach98 . Despite the limitations of these simulations, such as the small system size (of the order of $`10^3`$–$`10^4`$ particles) and high cooling rates (of the order of $`10^{12}`$ $`\mathrm{K}/\mathrm{s}`$), they are very useful because they include in principle the full microscopic information in form of the particle trajectories. Most of the recent computer simulation studies have investigated the boson peak within the harmonic approximation in that the eigenvalues and eigenvectors of the dynamical matrix have been calculated taraskin97ab ; dellanna97 . In contrast to this method we use the full microscopic information to determine quantities like the dynamic structure factor $`S(q,\nu )`$ directly from the particle coordinates. Thus, we are not restricted to the harmonic approximation and we are able to compare the dynamics of our silica model in the liquid state with the dynamics in the glass state. Moreover, by varying parameters like the size of the system and the mass of the particles we gain information on the character of the boson peak excitations. ## II Details of the simulation The silica model we use for our simulation is the one proposed by van Beest et al. beest which is given by $$u(r_{ij})=\frac{q_iq_je^2}{r_{ij}}+A_{ij}\mathrm{exp}(-B_{ij}r_{ij})-\frac{C_{ij}}{r_{ij}^6}\;\mathrm{with}\;i,j\in \{\mathrm{Si},\mathrm{O}\}.$$ (1) The values of the partial charges $`q_i`$ and the constants $`A_{ij},B_{ij}`$ and $`C_{ij}`$ can be found in the original publication. The simulations were done at constant volume keeping the density fixed at $`2.37`$ $`\mathrm{g}/\mathrm{cm}^3`$. Our simulation box contains $`8016`$ particles with a box length of $`48.37`$ Å. We investigate the equilibrium dynamics of the liquid state as well as the glass state. The lowest temperature for which we were able to fully equilibrate our system was $`2750`$ K. At this temperature we integrated the equations of motion over $`13`$ million time steps of $`1.6`$ fs, thus over a time span of about $`21`$ ns. The glass state was produced by starting from two equilibrium configurations at $`T=2900`$ K and cooling them to the temperatures $`T=1670`$ K, $`1050`$ K and $`300`$ K with a cooling rate of $`1.8\times 10^{12}`$ $`\mathrm{K}/\mathrm{s}`$. The details of how we calculated the time Fourier transformations can be found elsewhere horbach99 . ## III Results We investigate the high frequency dynamics of silica by means of the dynamic structure factor $$S(q,\nu )=N^{-1}\int _{-\infty }^{\infty }𝑑t\,\mathrm{exp}(i2\pi \nu t)\sum _{kl}\langle \mathrm{exp}(i𝐪\cdot [𝐫_k(t)-𝐫_l(0)])\rangle ,$$ (2) and its self part $`S_\mathrm{s}(q,\nu )`$ which can be extracted from Eq.
(2) by taking into account only the terms with $`k=l`$. In the following we will consider only $`S(q,\nu )`$ for the oxygen–oxygen correlations because the oxygen–silicon and the silicon–silicon correlations behave similarly with respect to the features which are discussed below. As we have reported elsewhere for the liquid state at the temperature $`T=2900`$ K horbach98 , apart from optical modes with frequencies $`\nu >20`$ THZ, two types of excitations are visible in the dynamic structure factor for $`q>0.23`$ Å<sup>-1</sup>. The first one corresponds to the boson peak which is located, essentially independent of $`q`$, around $`1.8`$ THz. The second one corresponds to dispersive longitudinal acoustic modes. Note that the latter are not like longitudinal acoustic excitations in harmonic crystals because, due to the disorder, they cannot be described as plane waves. Having found the aforementioned two features at $`T=2900`$ K we will now look at the temperature dependence of $`S(q,\nu )`$, which is shown in Fig. 1, by plotting $`S(q,\nu )/T`$ versus $`\nu `$ at $`q=1.7`$ Å<sup>-1</sup> in the frequency range below $`20`$ THz. From this figure we can conclude that the dynamic structure factor scales roughly with temperature which is expected if the harmonic approximation is valid. Moreover we can clearly identify for all temperatures two peaks: the boson peak located at $`1.75`$ THz (vertical line) and a peak which corresponds to the longitudinal acoustic modes located around $`17`$ THz. Even at $`T=3760`$ K the excitations giving rise to a boson peak at lower temperatures are at least partially present in that a shoulder can be recognized in the frequency region of the boson peak. In order to get some insight into the properties of the vibrational modes of our silica system at frequencies around $`1`$ THz we varied the size of the system at a fixed mass density of $`2.37`$ $`\mathrm{g}/\mathrm{cm}^3`$. Fig. 2 shows the self part of the dynamic structure factor for $`N=336`$, $`1002`$ and $`8016`$ particles at the temperature $`T=3760`$ K and the three $`q`$ values $`0.37`$ Å<sup>-1</sup>, $`1.7`$ Å<sup>-1</sup> and $`4.75`$ Å<sup>-1</sup>. Whereas the curves for the different system sizes coincide for frequencies that are larger than a weakly $`N`$ dependent frequency $`\nu _{\mathrm{cut}}(N)`$, for $`\nu <\nu _{\mathrm{cut}}(N)`$ the amplitude of $`S_\mathrm{s}(q,\nu )`$ decreases with decreasing $`N`$. Note that $`\nu _{\mathrm{cut}}(N)`$ is essentially independent of the wave–vector $`q`$. We read off $`\nu _{\mathrm{cut}}1.7`$ THz for $`N=336`$ and $`\nu _{\mathrm{cut}}1.2`$ THz for $`N=1002`$. Both frequencies are marked as vertical lines in Fig. 2. $`\nu _{\mathrm{cut}}(N)`$ coincides approximately with the frequency of the transverse acoustic excitation corresponding to the lowest $`q`$ value which is determined by the size of the simulation box. To see that this is the case note that the lowest $`q`$ values for $`N=336`$ and $`N=1002`$ are $`q_{\mathrm{min}}=0.37`$ Å<sup>-1</sup> and $`q_{\mathrm{min}}=0.26`$ Å<sup>-1</sup>, respectively. In comparison to that the $`q`$ values we read off from the transverse acoustic dispersion branch for $`T=3760`$ K and $`N=8016`$, and which correspond to the frequency $`\nu _{cut}`$ for $`N=336`$ and $`N=1002`$, are $`0.32`$ Å<sup>-1</sup> and $`0.22`$ Å<sup>-1</sup>, respectively (see horbach98 ). 
That the latter $`q`$ values are slightly smaller than the corresponding values for $`q_{\mathrm{min}}`$ is due to the fact that the transverse dispersion branch has been determined from the peak maxima $`\nu _{\mathrm{max}}(q)`$ in the transverse current correlation function. So there is always a significant contribution of transverse acoustic modes with frequencies $`\nu <\nu _{\mathrm{max}}(q)`$ for a given $`q`$. All this can be summarized by saying that the absence of transverse acoustic modes is connected with a missing of excitations giving rise to the boson peak. Therefore, in the smaller systems only the high frequency part of the boson peak is present. Moreover, it seems that the boson peak modes are only fully present, for a given frequency, if there exist transverse acoustic excitations at the same frequency. To learn more about the character of the boson peak excitations we varied also the masses of the silicon and oxygen atoms such that the mass density remains fixed. Fig. 3 shows $`S(q,\nu )`$ at $`T=2750`$ K and $`q=0.6`$ Å<sup>-1</sup> for the four mass pairs $`M_1=(28.086,15.999)`$, $`M_2=(14.043,23.021)`$, $`M_3=(44.085,8.000)`$, and $`M_4=(56.085,2.000)`$ where the first and the second number are the masses in atomic units for silicon and oxygen, respectively. Note that $`M_1`$ corresponds to the real masses of silicon and oxygen normally used in our simulation. From the figure we see that there is a strong dependence on the mass ratio for the two peaks visible for $`M_1`$ above $`20`$ THz which are due to localized optical modes. In contrast to that, at least within the accuracy of the statistics of our data, there is no dependence for the modes giving rise to the boson peak and the acoustic modes which means that the boson peak excitations cannot be strongly localized. This supports the aforementioned statement that the boson peak is due to the coupling to transverse acoustic modes. The fact that the boson peak is independent of the mass ratio between the silicon and oxygen mass is also a good test for theoretical models of the boson peak in silica. In conclusion we investigated the excitations giving rise to the boson peak by means of molecular dynamics computer simulations. We find that the dynamic structure factor $`S(q,\nu )`$ scales roughly with temperature in the range $`3760`$ K $`T300`$ K, which means that our silica system is in this sense quite harmonic even for temperatures as high as $`3760`$ K. By calculating $`S_\mathrm{s}(q,\nu )`$ for different system sizes we find that the modes contributing to the boson peak are only fully present at a given frequency if there exist transverse acoustic modes at the same frequency. This is supported by the fact that the height and the width of the boson peak are independent under the variation of the masses of silicon and oxygen if the mass density is fixed. So we observe that the boson peak is due to a coupling to transverse acoustic modes. Of course, the explanation of the nature of this coupling is an interesting goal for the future. ## ACKNOWLEDGMENTS This work was supported by BMBF Project 03 N 8008 C and by SFB 262/D1 of the Deutsche Forschungsgemeinschaft. We also thank the RUS in Stuttgart for a generous grant of computer time on the T3E.
# Comment to the paper “The energy conservation law for electromagnetic field in application to problems of radiation of moving particles” E.G.Bessonov In the paper the energy conservation law (the Poynting theorem) was applied to a problem of radiation of a charged particle in an external electromagnetic field. The authors consecutively and mathematically strictly solved the problem but received a wrong result. They derived an expression which includes a change of the energies of the electromagnetic fields accompanying the homogeneously moving particle, $`\mathrm{\Delta }W=W_2-W_1`$, corresponding to the initial and final velocity of the particle (see expression (19) in ). The energy of the field accompanying the particle $`W`$ is the energy of the particle of the electromagnetic origin. It should not enter the solution of the problem. The authors do not specify the dimensions of the particle. For a pointlike particle this energy and the change $`\mathrm{\Delta }W`$ are infinite values and consequently the expression (19) loses sense. In quantum theory the derived expression requires some renormalization. In classical theory in the section devoted to the energy conservation law the energy of the accompanying field, that is the energy of the particle of the electromagnetic origin, is hidden in the total energy of the electromagnetic origin and hence it appears unnoticed. For this reason the solutions based on the use of the energy conservation law lead to wrong results when $`\mathrm{\Delta }W\ne 0`$ . The resulting solutions differ from the solutions based on the equations of motion of particles in the external fields. We will explain our observation using the second example considered by the authors. This example was formulated as follows. Let a charged particle move in the positive direction of the axis ”z” with a velocity $`\vec{v}_1`$. In some region with linear dimensions $`L`$ located near the origin of the reference frame, external electromagnetic fields are created in which the velocity of the particle is varied according to some law which is not specified. Then the particle goes out from this region and its velocity takes the value $`\vec{v}_2`$, which thereafter is not changed. The authors proceed from the expression for the energy conservation law of the form $$\frac{\partial }{\partial t}\int _V\frac{|\vec{E}|^2+|\vec{H}|^2}{8\pi }𝑑V=-\int _V\vec{ȷ}\cdot \vec{E}𝑑V-\frac{c}{4\pi }\oint _S[\vec{E}\times \vec{H}]𝑑\vec{S},$$ (1) where $`\vec{E}`$, $`\vec{H}`$ are vectors of the electric and magnetic fields respectively created in a general case by a set of particles, charged bodies and magnets, $`\vec{ȷ}`$ is the vector of the current density, the sign $`V`$ under the integral means that the integral is carried out over a chosen volume $`V`$ and the sign $`S`$ means that the integral is carried out over a surface $`S`$ bounding this volume. This law (the Poynting theorem) follows from Maxwell’s equations.
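For completeness, recall how (1) follows from Maxwell’s equations in Gaussian units: using $`\mathrm{rot}\vec{H}=\frac{4\pi }{c}\vec{ȷ}+\frac{1}{c}\frac{\partial \vec{E}}{\partial t}`$ and $`\mathrm{rot}\vec{E}=-\frac{1}{c}\frac{\partial \vec{H}}{\partial t}`$ one finds $$\frac{\partial }{\partial t}\frac{|\vec{E}|^2+|\vec{H}|^2}{8\pi }=\frac{c}{4\pi }\left(\vec{E}\cdot \mathrm{rot}\vec{H}-\vec{H}\cdot \mathrm{rot}\vec{E}\right)-\vec{ȷ}\cdot \vec{E}=-\vec{ȷ}\cdot \vec{E}-\frac{c}{4\pi }\mathrm{div}[\vec{E}\times \vec{H}],$$ and integration over the volume $`V`$ together with the divergence theorem applied to the last term gives (1).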
From this law the authors came to the expression $$\frac{c}{4\pi }\int _{-\infty }^{+\infty }𝑑t\oint _S[\vec{E}^{\prime \prime }\times \vec{H}^{\prime \prime }]𝑑\vec{S}=-\int _{-\infty }^{+\infty }𝑑t\int _V\vec{ȷ}\cdot \vec{E}𝑑V-\mathrm{\Delta }W,$$ (2) where the vectors $`\vec{E}^{\prime \prime }`$, $`\vec{H}^{\prime \prime }`$ are vectors of free electric and magnetic fields emitted by a particle, $`W_1=(1/8\pi )\int (|\vec{E}_1|^2+|\vec{H}_1|^2)𝑑V`$, $`W_2=(1/8\pi )\int (|\vec{E}_2|^2+|\vec{H}_2|^2)𝑑V`$ are the total energies of the electromagnetic fields, created by the homogeneously moving charged particle in the unlimited space (the energies of the accompanying field), and the vectors $`\vec{E}_1`$, $`\vec{H}_1`$ and $`\vec{E}_2`$, $`\vec{H}_2`$ are the vectors of the electric and magnetic field strengths created by a particle moving homogeneously with velocities $`\vec{v}_1`$, $`\vec{v}_2`$ respectively. It is supposed that the boundary of the volume $`V`$ is chosen so far away that the wavepacket of radiation has had time to separate from the field of the charged particle, so that the free fields of radiation and the field accompanying the particle do not overlap. Further the authors come to the conclusion that the flow of radiation from the volume $`V`$, according to (2), is determined not only by the work done on the charged particles by the fields (the integral of $`\vec{ȷ}\cdot \vec{E}`$) but also by the change of the energy of the accompanying electromagnetic field $`\mathrm{\Delta }W`$. Now we notice that the vector of the electric field strength in the region of location of the particle can be presented in the form $`\vec{E}=\vec{E}_{ext}+\vec{E}_s`$, where $`\vec{E}_{ext}`$ is the vector of the external electric field strength created by charged bodies and other particles, and $`\vec{E}_s`$ is the vector of the electric field strength produced by the particle under consideration. Therefore the external fields and the fields produced by the particle (inertial and radiating self-fields) were taken into account in the change of the energy of the particle $`\epsilon `$ and the value $`\int _V\vec{ȷ}\cdot \vec{E}𝑑V=d\epsilon /dt`$ . That is why the value $`\int _{-\infty }^{+\infty }𝑑t\int _V\vec{ȷ}\cdot \vec{E}𝑑V=\mathrm{\Delta }\epsilon =\epsilon _2-\epsilon _1=mc^2(\gamma _2-\gamma _1)`$ in the equation (2) is the change of the total energy of the particle, where $`m`$ is the mass of the particle, $`\gamma =1/\sqrt{1-\beta ^2}`$, $`\beta =|\vec{v}/c|`$, and the subscripts $`1,2`$ refer to the initial and final velocity of the particle. The value $`(c/4\pi )\int _{-\infty }^{+\infty }𝑑t\oint _S[\vec{E}^{\prime \prime }\times \vec{H}^{\prime \prime }]𝑑\vec{S}=\epsilon ^{rad}`$ is the energy of the electromagnetic radiation emitted by the particle in the form of free electromagnetic waves. Thus the expression (2) can be presented in the form $$\mathrm{\Delta }\epsilon =-\epsilon ^{rad}-\mathrm{\Delta }W.$$ (3) In the presented example it was supposed that the external fields are static and the energy of these fields is constant. In a static case the electric field is a potential one, $`\oint \vec{E}_{ext}(\vec{r})𝑑\vec{r}=0`$. The external field could be the magnetic one.
Therefore obviously the change of the energy of the particle in the case of static fields should be equal to the energy of radiation taken with the negative sign $$\mathrm{\Delta }\epsilon =-\epsilon ^{rad}.$$ (4) The expression (4) follows also from the equations of motion of the particle in the external fields taking into account the radiation reaction force and the laws of radiation of a particle in the external fields, which determine the rate of losses of the energy of the particle in the form of radiation. Contrary to the expected result, a superfluous term has appeared in the expression (3) which is equal to the change of the energy of the field $`\mathrm{\Delta }W`$ accompanying the particle and differs from zero as the initial $`v_1=|\vec{v}_1|`$ and final $`v_2=|\vec{v}_2|`$ velocities are not equal ($`v_1-v_2\ne 0`$). The presence of the superfluous term $`\mathrm{\Delta }W`$ is in accordance with the conclusions made in the paper that the correct energy conservation law, that is the law which describes nature in a correct way, does not follow from the Maxwell-Lorentz equations, since the equations of Maxwell and the equations of Lorentz are inconsistent. The expression (1) contains a logic error consisting in the fact that in the first field term of this expression the energy of the electromagnetic field is included, and in this energy the energy of the accompanying electromagnetic field of the particle, i.e. the energy of the particle of electromagnetic origin, is hidden. It means that the energy of the particle of the electromagnetic origin in the equation (1) is presented in two terms (left and first right term). Accordingly the energy of the particle of the electromagnetic origin in the equation (3) is also presented in two terms ($`\mathrm{\Delta }\epsilon `$ and $`\mathrm{\Delta }W`$). We would like to recall that the energy of the particle is a sum of energies of the electromagnetic and non-electromagnetic origin and in the case of pointlike particles they are infinite and have opposite sign and their sum presents the experimentally observable value $`\epsilon `$ . Thus the energy of particles of the electromagnetic origin is presented in expressions (1), (3) twice and that is why the Poynting theorem generalized to the case of a system of fields and particles becomes incorrect. In the case of pointlike particles the value $`\mathrm{\Delta }W`$ in the expression (3) is infinite when $`v_1-v_2\ne 0`$ and that is why this expression loses its sense. The logic error consists in the double inclusion of the energy of the particle of the electromagnetic origin in the same equation. We would like to recall the energy conservation law for a system of the electromagnetic field and particles in an integral form $`\partial \epsilon ^\mathrm{\Sigma }/\partial t=0`$ or $`\epsilon ^\mathrm{\Sigma }=const`$ where $$\epsilon ^\mathrm{\Sigma }=\int \frac{|\vec{E}|^2+|\vec{H}|^2}{8\pi }𝑑V+\sum _i\epsilon _i,$$ (5) $`\epsilon _i`$ is the energy of a particle $`i`$, and the integration is carried out over the whole space . The first term from the right in the expression (5) contains both the free field of radiation emitted by charged particles and the field accompanying these particles. The dimensions, charge, and mass of the particles and also their structure can be arbitrary. That is why massive charged bodies and magnets can enter (5). Massive bodies can have complex structure.
In this case the exchange of the energy of electromagnetic fields with internal degrees of freedom of a body is possible (for example, by heating the body). In that case the mass and, accordingly, the energy $`\epsilon _i`$ of the bodies will be increased. In the general case, if the wave packet of radiation emitted by a system of particles has had time to separate from the fields accompanying these particles, then the change of the energy of the system of particles $`\mathrm{\Delta }\epsilon =\sum _i\mathrm{\Delta }\epsilon _i`$ and the change of the energy of the electromagnetic field will, according to (5), be determined by the same expression (3), where now $`\mathrm{\Delta }W`$ is the change of the energy of the accompanying electromagnetic fields of all particles<sup>1</sup><sup>1</sup>1Certainly it is possible to obtain this conclusion proceeding from the expression (1).. It means that the conclusion about a logical mistake in the proof of the energy conservation law for a system of electromagnetic field and particles, made in the paper for the general case, was implicitly confirmed by the authors of the commented paper in their example. In the derivation of the energy conservation law for the system of the electromagnetic field and particles a mistake was made, which was later accepted by repetition in many papers and textbooks. Therefore the interpretation of this law in the textbooks should be changed, and it should be treated as an open question in classical electrodynamics. The author thanks B.M.Bolotovskii and S.N.Stoliarov for useful discussions of the present comment.
# Comment on “Next-to-next-to-leading order vacuum polarization function of heavy quark near threshold and sum rules for $`b\overline{b}`$ system” and “Next-to-next-to-leading order relation between $`R(e^+e^{-}\rightarrow b\overline{b})`$ and $`\mathrm{\Gamma }_{\mathrm{sl}}(b\rightarrow cl\nu _l)`$ and precise determination of $`|V_{cb}|`$” A.A.Penin and A.A.Pivovarov Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Pr., 7a, Moscow 117312, Russia Abstract The most recent recalculation of the two-loop correction to the static quark-antiquark potential gave a numerical value different from the previously known one. We comment on the effect this change produces on the numerical estimates of the bottom quark pole mass $`m_b`$, the strong coupling constant $`\alpha _s`$ and the Cabibbo-Kobayashi-Maskawa matrix element $`|V_{cb}|`$ obtained in our papers . In two recent papers numerical values of the bottom quark pole mass $`m_b`$, the strong coupling constant $`\alpha _s`$ and the Cabibbo-Kobayashi-Maskawa matrix element $`|V_{cb}|`$ have been determined from the sum rules for the $`\mathrm{\Upsilon }`$ system and the $`B`$-meson semileptonic width. These phenomenological results have been obtained by exploiting the next-to-next-to-leading order expression for the vacuum polarization function of a heavy quark near the threshold. This expression depends, in particular, on the two-loop correction to the static potential of the quark-antiquark interaction first computed in . In the analyses of refs. the numerical value of the coefficient $`a_2`$ obtained in was used. Recently the two-loop correction to the static potential has been recalculated in with a new result for the coefficient $`a_2`$ that differs from the previous one. However we found that the use of the corrected numerical value of the coefficient $`a_2`$ leads to a change of the numerical estimates for $`m_b`$, $`\alpha _s`$ and $`|V_{cb}|`$ obtained in our papers that lies well within the error bars given for these parameters in . In ref. we applied the sum rules technique for the system of $`\mathrm{\Upsilon }`$ resonances to determine the values of the bottom quark pole mass $`m_b`$ and strong coupling constant $`\alpha _s`$. The analysis is based on the result for the heavy quark polarization function near the threshold in the next-to-next-to-leading order of perturbative QCD and relativistic expansion. This result depends, in particular, on the two-loop correction to the static potential of the quark-antiquark interaction first computed in . Recently this correction has been recalculated independently with another technique in . A different value of the coefficient $`a_2`$ came out $$a_2=\left(\frac{4343}{162}+4\pi ^2-\frac{\pi ^4}{4}+\frac{22}{3}\zeta (3)\right)C_A^2-\left(\frac{1798}{81}+\frac{56}{3}\zeta (3)\right)C_AT_Fn_f$$ $$-\left(\frac{55}{3}-16\zeta (3)\right)C_FT_Fn_f+\left(\frac{20}{9}T_Fn_f\right)^2$$ which is smaller than the previous result of ref. by an amount $`2\pi ^2C_A^2`$. After performing the analysis with the corrected value we found that this variation of the coefficient affects our numerical estimates only slightly. Namely, the value of $`\alpha _s`$ extracted from the sum rules is practically insensitive to the above variation while the value of $`m_b`$ decreases by about $`0.1\%`$ when the corrected two-loop coefficient is used instead of the previous one. Since the theoretical uncertainty in $`m_b`$ exceeds $`1\%`$ this variation is negligible.
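To give a feeling for the numbers involved, the following small script is an illustrative sketch (not part of the original comment); it assumes the standard QCD group factors $`C_A=3`$, $`C_F=4/3`$, $`T_F=1/2`$ and, purely for illustration, $`n_f=4`$ light flavours, and it evaluates the corrected coefficient $`a_2`$ written above together with the quoted shift $`2\pi ^2C_A^2`$:

```python
import math

# Assumed inputs (not specified at this point in the text): standard QCD group
# factors and, for illustration only, n_f = 4 light quark flavours.
CA, CF, TF, nf = 3.0, 4.0 / 3.0, 0.5, 4
zeta3 = 1.2020569031595943

# Corrected value of a_2, exactly as written in the equation above.
a2 = ((4343.0 / 162 + 4 * math.pi**2 - math.pi**4 / 4 + 22.0 / 3 * zeta3) * CA**2
      - (1798.0 / 81 + 56.0 / 3 * zeta3) * CA * TF * nf
      - (55.0 / 3 - 16 * zeta3) * CF * TF * nf
      + (20.0 / 9 * TF * nf) ** 2)

# Difference between the previous and the corrected result, as quoted in the text.
shift = 2 * math.pi**2 * CA**2

print(f"a_2 (corrected, n_f = {nf}) = {a2:.1f}")
print(f"shift 2*pi^2*C_A^2         = {shift:.1f}")
```

Even though this shift is numerically sizable compared to $`a_2`$ itself, the induced changes in $`m_b`$, $`\alpha _s`$ and $`|V_{cb}|`$ remain at the per-mille level, as discussed in the text.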
In the paper we used the relation between the moments of the $`\mathrm{\Upsilon }`$ system spectral density and the inclusive $`B`$-meson semileptonic width for precise determination of the $`|V_{cb}|`$ matrix element. Evaluating the moments we used the next-to-next-to-leading order expression of the heavy quark polarization function near the threshold computed with Peter’s coefficient $`a_2`$. Changing it to Schröder’s corrected value we obtain a $`0.1\%`$ increase of the extracted numerical value for $`|V_{cb}|`$. The total theoretical uncertainty of this quantity exceeds $`3\%`$, which makes this variation completely negligible. The reason for such a weak influence of the change on our results is that the coefficient $`a_2`$ parameterizes only a part of the correction to the NRQCD Hamiltonian in this order, which makes the dependence on $`a_2`$ much softer than one could expect from the direct numerical change of the coefficient itself. To conclude, the numerical estimates of the bottom quark pole mass, the strong coupling constant and the Cabibbo-Kobayashi-Maskawa matrix element $`|V_{cb}|`$ presented in refs are insensitive to the correction of the previously obtained value of the two-loop coefficient $`a_2`$ . The corresponding shifts of the extracted values of $`m_b`$, $`\alpha _s`$ and $`|V_{cb}|`$ are an order of magnitude smaller than the theoretical uncertainties of these quantities given in .
# REVIEW OF b-FLAVORED HADRON SPECTROSCOPY ## 1 Introduction B-flavored hadrons are the heaviest flavored hadrons, since due to $`V_{tb}\approx 1`$ the top quark decays long before forming bound states. They are copiously produced in $`p\overline{p}`$ collisions at the Tevatron and in $`Z^0`$ decays at LEP. Each LEP experiment has collected $`8.8\times 10^5b\overline{b}`$ events. The formation of B-mesons versus b-baryons at the $`Z^0`$ is favored by 9:1. The heavy quark $`(\overline{Q})`$ also dresses favorably with light quarks produced in soft gluon processes. This leads to production rates of $`B_u^+:B_d^0:B_s^0=1:1:\frac{1}{3}`$. $`B_c`$ production is reduced by $`2`$–$`3`$ orders of magnitude, since a hard gluon process is needed. The properties of heavy-light systems ($`\overline{Q}q`$ or $`Qqq`$) are predicted by heavy quark effective theory (HQET), which is based on the observation that in the limit $`m_Q\rightarrow \infty `$ the heavy quark decouples from the light degrees of freedom. The heavy quark symmetry provides a good approximation for b hadrons since $`m_b\gg \mathrm{\Lambda }_{QCD}`$ and corrections obtained from a $`\frac{1}{m_b}`$ expansion are small. In the heavy quark limit, also the spin of the heavy quark, $`s_Q`$, decouples from the orbital angular momentum of the system, $`l`$, and the spin of the light quark(s) $`s_q`$. Both $`s_Q`$ and $`j=l\oplus s_q`$ are separately conserved. Thus, states are grouped into doublets bearing similar properties. The $`B`$ and $`B^{*}`$ belong to the same doublet. The $`l=1`$ orbital excitations, frequently called $`B^{**}`$’s, fall into two doublets. The $`j=\frac{1}{2}`$ states which include the scalar $`B_0^{*}`$ and the axial vector $`B_1^{*}`$ are broad, decaying dominantly via S-wave to $`B\pi `$ and $`B^{*}\pi `$, respectively. The $`j=\frac{3}{2}`$ states which include the axial vector $`B_1`$ and the tensor $`B_2^{*}`$ are narrow. Their dominant decays proceed via D-wave to $`B^{*}\pi `$ and $`B^{(*)}\pi `$, respectively. At the $`Z^0`$ the production of $`l=0`$ B-mesons versus $`l=1`$ B-mesons is favored by 7:3. According to spin counting $`75\%`$ of the $`l=0`$ B-states are $`B^{*}`$’s. The $`B^{**}`$’s decay strongly to $`B^{*}`$’s or $`B`$’s with a ratio ranging between $`1:1`$ and $`3:1`$. Thus, B-mesons are the best laboratory to test HQET predictions. The crucial experimental tool for b-hadron spectroscopy is inclusive b hadron reconstruction. For $`b\overline{b}`$ events selected via impact parameter tagging or via high $`p_t`$ muons, energy and momentum of the b-hadron are reconstructed, using either a rapidity algorithm (ALEPH, DELPHI ) or secondary vertex reconstruction (OPAL ). For events consistent with a b-hadron the Q-value defined by $`Q_{BX}=m_{BX}-m_B-m_X`$ is determined. For the majority of events in the final sample DELPHI e.g. achieves an energy resolution of $`\sigma _E/E=7\%`$ and angular resolutions of $`\sigma _\varphi \approx \sigma _\theta \approx 15\mathrm{mr}`$. ## 2 Status of Pseudoscalar and Vector B Mesons The $`B^+`$ and $`B^0`$ masses have been measured rather precisely by CLEO, ARGUS and CDF. The fits performed by the PDG yield: $`m_{B^+}=5278.9\pm 1.8`$ MeV and $`m_{B_d^0}=5279.2\pm 1.8`$ MeV. The errors are dominated by a systematic uncertainty in determining the $`e^+e^{-}`$ energy scale. The $`B^0`$ is heavier than the $`B^+`$ as expected. However, the observed mass difference of $`m_{B_d^0}-m_{B^+}=0.35\pm 0.29`$ MeV is consistent with zero.
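World averages and PDG-style fits of this kind are, at their core, error-weighted combinations of individual measurements. As a reminder of the basic arithmetic, here is a minimal sketch with purely hypothetical input numbers (it is not the actual PDG fit, which also handles correlations and scale factors):

```python
# Minimal sketch of an error-weighted average of independent measurements
# m_i +/- s_i.  The input values below are hypothetical placeholders, not the
# measurements entering the PDG fits quoted in this review.
def weighted_average(measurements):
    weights = [1.0 / s**2 for _, s in measurements]
    mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    error = sum(weights) ** -0.5
    return mean, error

example = [(5279.0, 2.0), (5280.1, 3.0), (5278.5, 2.5)]   # hypothetical masses in MeV
mean, error = weighted_average(example)
print(f"combined mass = {mean:.1f} +/- {error:.1f} MeV")
```

A real combination, such as the PDG mass fits quoted above, additionally accounts for correlated systematic uncertainties, e.g. the common $`e^+e^{-}`$ energy-scale uncertainty mentioned in the text.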
The $`B_s^0`$ mass has been measured rather precisely by CDF in the $`B_s^0\rightarrow J/\psi \varphi `$ channel (see Figure 1). The present $`B_s^0`$ mass measurements are depicted in Figure 2. Including the LEP results the PDG mass fit yields $`m_{B_s^0}=5369.3\pm 2.0`$ MeV. This constrains the $`B_s^0`$–$`B_d^0`$ mass splitting to $`90\pm 2.7`$ MeV which is consistent with a quenched lattice QCD prediction of $`107\pm 13`$ MeV. The $`B_c`$ meson is not yet observed. Searches have been conducted by all four LEP experiments and by CDF. The channels studied include $`J/\psi \pi ^+`$, $`J/\psi a_1^+`$ and $`J/\psi l\nu _l`$. Individual candidates are seen but are consistent with the expected background. The most serious candidate is reported by ALEPH in the $`J/\psi \mu \nu _\mu `$ channel and has a mass of $`m=5.96\pm 0.25\pm 0.19`$ GeV. In the heavy quark expansion hyperfine splittings (HFS) are proportional to $`\frac{1}{m_Q}`$. Thus, the $`B`$ and $`B^{*}`$ are much closer spaced than the $`D`$ and $`D^{*}`$. A quenched lattice calculation e.g. yields $`\mathrm{\Delta }m_B^{HFS}:=m_{B_{u,d}^{*}}-m_{B_{u,d}}=34\pm 6`$ MeV and $`\mathrm{\Delta }m_{B_s}^{HFS}:=m_{B_s^{*}}-m_{B_s}=27\pm 17`$ MeV. The small HFS permits only electromagnetic (EM) transitions, of which $`B^{*}\rightarrow \gamma B`$ is the dominant one. For $`B^{*}`$ reconstruction the low-energy photon needs to be detected. At LEP, however, the photon energy is boosted, maximally up to 800 MeV. L3 detects the photon directly in a crystal calorimeter, whereas the other LEP experiments reconstruct $`e^+e^{-}`$ conversions. Typical energy and angular resolutions are $`\sigma _E/E=12\%`$ and $`\sigma _{\theta ,\varphi }=12mr`$, respectively. Figure 3 shows the $`Q_{B\gamma }`$-value distribution measured by OPAL. Since the B-meson flavor is not identified, $`\mathrm{\Delta }m_B^{HFS}`$ includes contributions from $`B^+`$, $`B_d^0`$ and $`B_s^0`$. Figure 4 summarizes all $`\mathrm{\Delta }m_B^{HFS}`$ measurements from LEP plus some old results from CLEO II and CUSB. The world average of the $`B^{*}`$–$`B`$ hyperfine splitting is $`\mathrm{\Delta }m_B^{HFS}=45.79\pm 0.35`$ MeV. DELPHI also has set $`95\%`$ CL limits on $`B^+`$–$`B_d^0`$ and $`B_{u,d}^+`$–$`B_s^0`$ HFS differences, yielding: $`|\mathrm{\Delta }m_{B_u}^{HFS}-\mathrm{\Delta }m_{B_d}^{HFS}|<6`$ MeV and $`|\mathrm{\Delta }m_{B_s}^{HFS}-\mathrm{\Delta }m_{B_{u,d}}^{HFS}|<6`$ MeV. All LEP experiments have measured the relative $`B^{*}`$ production cross section in $`Z^0`$ decays. Ignoring feed-down from $`B^{**}`$’s the LEP average is $`\frac{\sigma _{B^{*}}}{\sigma _B+\sigma _{B^{*}}}=0.748\pm 0.004`$. This agrees with expectations from naive spin counting, since the adjustment due to feed-down from $`B^{**}`$’s is only a few % effect depending on assumptions made for $`B^{**}`$ decays. ALEPH, DELPHI and OPAL also have measured the $`B^{*}`$ polarization. The observed helicity angle distribution is uniform, indicating that all helicity states are equally populated. The combined LEP result for the longitudinal helicity component is $`\frac{\sigma _L}{\sigma _T+\sigma _L}=0.33\pm 0.04`$. Though the M1 transition is nearly $`100\%`$, higher order EM transitions like the $`B^{*}`$ Dalitz decays should also occur. The branching fraction independent of a form factor model is expected to be $`B(B^{*}\rightarrow Be^+e^{-})\approx 0.466\%`$. Since the $`e^+`$ and $`e^{-}`$ momenta are below 100 MeV, electron identification and tracking in the vertex detector are essential. DELPHI has combined Dalitz pairs originating from the primary vertex with a B candidate. The resulting $`Q_{Be^+e^{-}}`$ distribution plotted in Figure 5 shows a peak at the expected $`\mathrm{\Delta }m_B^{HFS}`$ value.
The resulting $`Q_{Be^+e^{}}`$ distribution plotted in Figure 5 shows a peak at the expected $`\mathrm{\Delta }m_B^{HFS}`$ value. The $`B^{}`$ Dalitz decay rate normalized to that of the M1 transition is measured to be: $`\mathrm{\Gamma }(B^{}Be^+e^{})/\mathrm{\Gamma }(B^{}B\gamma )=(4.7\pm 1.1\pm 0.9)\times 10^3`$. ## 3 Status of Orbitally-Excited B-Mesons HQET predicts two narrow and two broad $`B^{}`$ states similarly as in the $`D^{}`$ system. Using the heavy quark expansion Eichten, Hill and Quigg determine the masses and total widths of the two $`j=\frac{3}{2}`$ $`B_{u,d}^{}`$ states to be $`m_{B_1}=5759`$ MeV, $`m_{B_2^{}}=5771`$ MeV, $`\mathrm{\Gamma }_{B_1}=21`$ MeV and $`\mathrm{\Gamma }_{B_2^{}}=25`$ MeV, respectively. A recent calculation by Falk and Mehen based on the heavy flavor expansion yields masses of $`m_{B_1}=5780`$ MeV and $`m_{B_2^{}}=5794`$ MeV. The masses of the $`j=\frac{1}{2}`$ states are expected to lie about 100 MeV lower than those of the $`j=\frac{3}{2}`$ states. ALEPH, DELPHI and OPAL have analyzed single $`\pi ^+`$ transitions using inclusive B reconstruction methods. The Q-value distribution measured by DELPHI is plotted in Figure 6. A broad structure is observed at $`m=5734\pm 5\pm 17`$ MeV. A decomposition into individual $`j=\frac{3}{2}`$ and $`j=\frac{1}{2}`$ states is presently not conclusive. Similar observations have been found by the other LEP experiments. A summary of all mass measurements is shown in Figure 7. Note that the masses from OPAL and ALEPH shown here have been shifted up by 31 MeV <sup>1</sup><sup>1</sup>1We have assumed that $`B^{}\pi `$ versus $`B\pi `$ transitions are enhanced by $`2\pm 1:1`$, by considering various scenarios for $`B^{}`$ production and decay. to account for dominant contributions from $`B^{}\pi `$ transitions. The combined LEP result for the $`B_{u,d}^{}`$ mass is $`m(B_{u,d}^{})=5722\pm 8`$ MeV. This is lower than the mass predictions for $`j=\frac{3}{2}`$ states, thus leaving room for contributions from the $`j=\frac{1}{2}`$ states. ALEPH, in addition, has performed an exclusive analysis in the $`B\pi `$ channel. A significant narrow structure is seen at $`m=5703\pm 14`$ MeV. The resolution of $`\sigma =28\pm _{14}^{18}`$ MeV would permit contributions from both $`j=\frac{3}{2}`$ states. However, even after the +31 MeV shift, the mass is still too low to agree with a DELPHI measurement obtained in the $`B^{()}\pi \pi `$ final state (see below). The decay angle distribution of the $`\pi `$ in the $`B^{}`$ rest frame provides information on the helicity distribution of the light quark system. DELPHI has observed a uniform decay angle distribution, which implies that the maximally allowed helicity components of the light quark system are not suppressed. This is surprising since ARGUS has observed the opposite for $`D_2^{}`$ decays. Assuming that the contribution of decays from the $`j=\frac{1}{2}`$ states, which produce a non-uniform decay angle distribution, is small, a fraction of $`w_{\frac{3}{2}}=0.53\pm 0.07\pm 0.10`$ is measured for the helicity $`j=\pm \frac{3}{2}`$ components.<sup>2</sup><sup>2</sup>2The S-wave decays $`B_0^{}B\pi `$ and $`B_1^{}B^{}\pi `$ are expected to be dominant. The masses predicted by Eichten, Hill and Quigg for the narrow $`B_s^{}`$ states are $`m_{B_{s1}}=5849`$ MeV and $`m_{B_{s2}^{}}=5861`$ MeV. Predictions by Falk and Mehen are again higher, yielding $`m_{B_s1}=5886`$ MeV and $`m_{B_{s2}^{}}=5899`$ MeV. 
Since the $`B_s^{}B_{u,d}`$ mass difference is larger than the kaon mass, the dominant transitions are $`B_s^{}KB_{u,d}^{}`$. Using the inclusive analysis techniques DELPHI has studied $`B^{()}K^\pm `$ final states. DELPHI is rather suited for analyzing such channels because of their excellent kaon identification over a wide momentum range. The resulting Q-value distribution depicted in Figure 8 shows two narrow structures at $`70\pm 4\pm 8`$ MeV and $`142\pm 4\pm 8`$ MeV. Their widths are slightly smaller than the observed resolution. Assuming that the upper peak stems from the transition $`B_{s1}B^{}K`$ and the lower peak stems from $`B_{s2}^{}BK`$, masses of $`m_{B_{s1}}=5888\pm 4\pm 8`$ MeV and $`m_{B_{s2}^{}}=5914\pm 4\pm 8`$ MeV have been obtained. The mass splitting is $`m_{B_{s2}^{}}m_{B_{s1}}=26\pm 6\pm 8`$ MeV. Both the masses and the splitting are higher than the HQET predictions. In addition, upper limits have been set on the widths, yielding $`\mathrm{\Gamma }_{B_{s1}}<60`$ MeV and $`\mathrm{\Gamma }_{B_{s2}^{}}<50`$ MeV at $`95\%`$ CL, respectively. The production cross sections for $`B_{s1}`$ and $`B_{s2}^{}`$ states with respect to that of $`B_{u,d}^{}`$ has been measured to be: $`(\sigma _{B_{s1}}+\sigma _{B_{s2}^{}})/\sigma _{B_{u,d}^{}}=0.142\pm 0.028\pm 0.047`$. OPAL has also studied this channel, observing a $`\mathrm{\Gamma }=47`$ MeV broad structure at $`m=5853\pm 15`$ MeV, which again needs to be shifted upward by $`31`$ MeV to account for dominant $`B^{}K`$ transitions. ## 4 First Observation of Radially Excited B Mesons Radial excitations of D mesons and B mesons should exist similarly as those of $`c\overline{c}`$ and $`b\overline{b}`$ states. A QCD inspired relativistic quark model predicts the masses of the 2S pseudoscalar and vector states to lie at $`5900`$ MeV and $`5930`$ MeV, respectively. DELPHI has extended the inclusive analysis to $`\pi ^+\pi ^{}`$ transitions to look for such states. For $`b\overline{b}`$ events with a $`\pi ^+\pi ^{}`$ pair from the primary vertex where both pions have large rapidities ($`\eta >2.5`$) and are in the same hemisphere as the B candidate, the variable, $`Q_{B\pi \pi }=m_{B^{()}\pi ^+\pi ^{}}m_{B^{()}}2m_{\pi ^\pm }`$, was determined. This selection is $`52\pm 3\%`$ efficient and has a purity of $`80\pm 4\%`$. The resulting Q-value distribution displayed in Figure 9 shows two narrow structures, one at $`Q_{B^{()}\pi \pi }=301\pm 4\pm 10`$ MeV containing $`56\pm 13`$ events and a second at $`Q_{B^{()}\pi \pi }=220\pm 4\pm 10`$ MeV containing $`60\pm 12`$ events. The corresponding measured resolutions, $`\sigma =12\pm 3`$ MeV and $`\sigma =15\pm 3`$ MeV, are compatible with the detector resolution, implying that their natural widths must be narrow. Thus, the two broad $`j=\frac{1}{2}`$ orbital excitations cannot contribute significantly here. Figure 10 shows all allowed transitions for the 2S states and 1P states. The non-suppressed $`\pi `$ and $`\pi \pi `$ transitions of the narrow $`j=3/2`$ P states are: $`B_1B^{}\pi `$ (D-wave), $`B_1B\pi \pi `$ & $`B_1B^{}\pi \pi `$ (P-wave); $`B_2^{}B\pi `$ & $`B_2^{}B^{}\pi `$ (D-wave), and $`B_2^{}B^{}\pi \pi `$ (P-wave). The corresponding transitions of the 2S states are: $`B^{}B^{}\pi `$ (P-wave), $`B^{}B_0^{}\pi `$ & $`B^{}B\pi \pi `$ (S-wave); $`B^{}B\pi `$ & $`B^{}B^{}\pi `$ (P-wave), and $`B^{}B_1\pi `$ & $`B^{}B^{}\pi \pi `$ (S-wave). $`\rho `$ transitions are suppressed by phase space. 
Since the mass resolution is smaller than $`\mathrm{\Delta }m_B^{HFS}`$, we can exclude that a single excited state decays to $`B\pi \pi `$ and $`B^{}\pi \pi `$ simultaneously. In that case two peaks separated by $`\mathrm{\Delta }m_B^{HFS}`$ should have been visible. It is, however, possible that the two peaks originate from two closely spaced excited states, where the heavier decays to $`B^{}\pi \pi `$ and the lighter to $`B\pi \pi `$. The lower peak most likely stems from the P-wave transitions, $`B_1B\pi ^+\pi ^{}`$ with a possible contribution from $`B_2^{}\pi ^+\pi ^{}B^{}`$. Denoting the mass splitting of the $`j=\frac{3}{2}`$ states by $`\mathrm{\Delta }m_{B_{3/2}}:=m_{B_2^{}}m_{B_1}`$ and the fraction of $`B_2^{}`$ decays by f, we can parametrize the masses of the narrow states by: $`m_{B_1}=5778+f(\mathrm{\Delta }m_B^{HFS}\mathrm{\Delta }m_{B_{3/2}})\pm 11`$ MeV and $`m_{B_2^{}}=5824(1f)(\mathrm{\Delta }m_B^{HFS}\mathrm{\Delta }m_{B_{3/2}})\pm 11`$ MeV. These mass estimates are consistent with the heavy quark predictions for $`j=\frac{3}{2}`$ states, but they are higher than the mass of the broad structure observed in Figure 6. This implies that $`j=\frac{1}{2}`$ states contribute significantly there. Assuming that $`0\mathrm{\Delta }m_{B_{3/2}}\mathrm{\Delta }m_B^{HFS}`$ as in the D-system, we can set bounds of $`m_{B_1}>5756`$ MeV and $`m_{B_2^{}}<5846`$ MeV $`\mathrm{@}95\%`$ CL, which are in conflict with the exclusive $`B^{()}\pi `$ ALEPH result. The upper peak has to originate from states which lie $`80`$ MeV above the $`B_1`$. The most likely interpretation is that this peak stems from $`\pi \pi `$ transitions of the 2S radial excitations: $`B^{}B\pi \pi `$ and $`B^{}B^{}\pi \pi `$. The S-wave $`\pi \pi `$ transitions are expected to be dominant, though two successive $`\pi `$ transition via the broad $`j=\frac{1}{2}`$ orbital excitations are kinematically allowed. However, more detailed studies are needed to clarify this issue. Though single $`\pi `$ transitions $`B^{()}B^{()}\pi `$ are allowed, they should be suppressed because of nodes in the radial wave functions, which lead to cancelations in the overlap integral. Such cancelations have been observed in $`\rho ^{}\pi \pi `$ and $`\psi (4040)D\overline{D}`$ decays. Nevertheless, the observed Q-value distribution for $`B^{()}\pi `$ final states actually has room for such transitions. Assuming that the production of $`B^{}`$ to $`B^{}`$ is similar to that of $`B^{}`$ to $`B`$, we obtain the following mass estimates for the 2S states: $`m_B^{}=5859+\frac{3}{4}(\mathrm{\Delta }m_B^{HFS}\mathrm{\Delta }m_B^{}^{HFS})\pm 12`$ MeV and $`m_B^{}=5905\frac{1}{4}(\mathrm{\Delta }m_B^{HFS}\mathrm{\Delta }m_B^{}^{HFS})\pm 12`$ MeV. Here $`\mathrm{\Delta }m_B^{}^{HFS}`$ denotes the HFS of the radially excited states. These values are consistent with predictions from a QCD inspired relativistic quark model, thus supporting the interpretation of observing 2S radial excitations. A preliminary estimate of the production cross sections from the observed signal yield is: $`\sigma (bB^{}+B^{})/\sigma (ball)=0.5\%4\%`$. The branching ratio for $`B_1B\pi \pi `$ is of the order of $`2\%10\%`$. ## 5 Status of b Baryons The $`\mathrm{\Lambda }_b`$ is clearly established since a recent CDF measurement in the $`\mathrm{\Lambda }J/\psi `$ channel. The $`\mathrm{\Lambda }J/\psi `$ invariant mass peaks at $`m_{\mathrm{\Lambda }_b}=5621\pm 4\pm 3`$ MeV. 
Previously, ALEPH and DELPHI had observed a few candidates in the $`\mathrm{\Lambda }_c^\pm \pi ^{}`$ channel. All mass measurements are summarized in Figure 11. The present world average for the $`\mathrm{\Lambda }_b`$ mass is $`m_{\mathrm{\Lambda }_b}=5624\pm 9`$ MeV. Using the heavy quark expansion in combination with the observed $`\mathrm{\Sigma }_c^{}\mathrm{\Sigma }_c`$ HFS provides a prediction for the $`\mathrm{\Sigma }_b^{}\mathrm{\Sigma }_b`$ HFS of $`\mathrm{\Delta }m_{\mathrm{\Sigma }_b}^{HFS}=22`$ MeV. DELPHI has looked for b-flavored baryons using the inclusive analysis techniques. Baryon enrichment is obtained by selecting fast p’s, n’s and $`\mathrm{\Lambda }`$’s. For events consistent with a $`\mathrm{\Lambda }_b`$, a pion from the primary vertex is added to determine the variable $`Q_{\mathrm{\Lambda }_b\pi }=m_{\mathrm{\Lambda }_b\pi }m_{\mathrm{\Lambda }_b}m_\pi `$. The resulting distribution depicted in Figure 12 reveals two structures at $`Q_{\mathrm{\Lambda }_b\pi }=33\pm 3\pm 8`$ MeV and $`Q_{\mathrm{\Lambda }_b\pi }=89\pm 3\pm 8`$ MeV. Interpreting these as transitions from the $`\mathrm{\Sigma }_b`$ and $`\mathrm{\Sigma }_b^{}`$, mass differences of $`m_{\mathrm{\Sigma }_b}m_{\mathrm{\Lambda }_b}=173\pm 3\pm 8`$ MeV and $`m_{\mathrm{\Sigma }_b^{}}m_{\mathrm{\Lambda }_b}=229\pm 3\pm 8`$ MeV are determined. Within errors they are consistent with quark model predictions yielding $`m_{\mathrm{\Sigma }_b}m_{\mathrm{\Lambda }_b}=200\pm 20`$ MeV and $`m_{\mathrm{\Sigma }_b^{}}m_{\mathrm{\Lambda }_b}=230\pm 20`$ MeV, respectively. The measured HFS of $`\mathrm{\Delta }m_{\mathrm{\Sigma }_b}^{HFS}=56\pm 15`$ MeV is in conflict with HQET predictions. This measurement needs to be checked, since presently it cannot be ruled out that either a transition from a different state is seen or that for one of the structures the observed mass is shifted due to contributions from another transition. It is worthwhile to note that the lower peak is narrower than the higher peak. This is supportive for a more complex interpretation. To clarify this issue DELPHI plans to redo the analysis with reprocessed data, which show significant improvements in track reconstructions and thus achieve improved efficiencies and momentum resolutions. Assuming that the two peaks stem from $`\pi `$ transitions of the $`\mathrm{\Sigma }_b`$ and $`\mathrm{\Sigma }_b^{}`$, DELPHI measures a relative production cross section of $`(\sigma _{\mathrm{\Sigma }_b}+\sigma _{\mathrm{\Sigma }_b^{}})/\sigma (ball)=4.8\pm 0.6\pm 1.5\%`$. The fraction originating from $`\mathrm{\Sigma }`$ baryons is $`24\pm 6\pm 10\%`$. DELPHI has also measured the helicity angle distribution of the $`\pi `$ in the $`\mathrm{\Sigma }_b^{}`$ rest frame. A fit to the Falk Peskin model yields $`w_1=0.36\pm 0.30\pm 0.030`$ for the helicity $`h=\pm 1`$ component of the light quark system, indicating that these states are suppressed. According to Falk and Peskin large $`\mathrm{\Sigma }_b`$ and $`\mathrm{\Sigma }_b^{}`$ rates in combination with a suppression of helicity $`\pm 1`$ states lead to a substantial reduction of the $`\mathrm{\Lambda }_b`$ polarization in $`Z^0`$ decays. This has in fact been observed by ALEPH, measuring $`P(\mathrm{\Lambda }_b)=0.26_{0.20}^{+0.25}`$$`{}_{0.12}{}^{}{}_{}{}^{+0.13}`$. ## 6 Summary The knowledge of the B meson sector has improved over the past few years. The present status is summarized in Figure 13. 
Precisely measured masses exist for all pseudoscalar and vector B meson ground states except for those in the $`B_c`$ system, which has not been detected yet. Evidence is found for orbital excitations, but only in the $`B_s^0`$ system has it been possible to isolate two separate narrow states. Cross section measurements agree with expectations, and decay angle distributions indicate that the helicity $`j=\pm \frac{3}{2}`$ states are not suppressed. While presently there is no evidence for orbital excitations with $`L>1`$, first evidence is found for the 2S $`B_{u,d}`$ radial excitations. States and transitions in the b-baryon sector are summarized in Figure 14. The knowledge here is still rather poor. Only the $`\mathrm{\Lambda }_b`$ is well established. The $`\mathrm{\Sigma }_b`$ and $`\mathrm{\Sigma }_b^{}`$ may have been observed, but the measured HFS is in conflict with HQET predictions; these measurements therefore need confirmation. The mass of the $`\mathrm{\Xi }_b`$ is unknown, though its lifetime has been measured. So far no other b-flavored baryon has been identified. ## 7 Acknowledgments This work has been supported by the Research Council of Norway. I would like to acknowledge the DELPHI collaboration for support. Special thanks go to M. Feindt, Ch. Weiser and Ch. Kreuter for fruitful discussions.
# Scaling properties of transverse flow in Bjorken’s scenario for heavy ion collisions V. Fortov, P. Milyutin<sup>a</sup> and N. Nikolaev<sup>b,c</sup> <sup>a</sup>High Energy Density Research Center of the Russian Academy of Sciences, IVTAN, Izhorskaya 13/9, 127412 Moscow, Russia <sup>b</sup>Institut f. Kernphysik, Forschungszentrum Jülich, D-52425 Jülich, Germany <sup>c</sup>L. D. Landau Institute for Theoretical Physics, GSP-1, 117940, ul. Kosygina 2, Moscow V-334, Russia. Abstract We report a simple analytic solution for the velocity $`u`$ of the transverse flow of QGP at a hadronization front in Bjorken’s scenario. We establish scaling properties of the transverse flow as a function of the expansion time. We present a simple scaling formula for the expansion velocity distribution. Landau’s hydrodynamic stage is a part of all scenarios for the evolution of the hot and dense matter (quark-gluon plasma - QGP) formed in ultrarelativistic heavy ion collisions. It is now well understood that Landau’s complete stopping of Lorentz-contracted colliding nuclei is not feasible because of the Landau-Pomeranchuk-Migdal (LPM) effect, i.e., the finite proper formation time $`\tau _0`$ (for the modern QCD approach to the LPM effect, and for reviews of the early works on LPM phenomenology of nuclear collisions, see the literature), although evaluations of $`\tau _0`$ and of the initial energy density $`ϵ_{max}`$ remain controversial. The corollary of the LPM effect in conjunction with the approximate central rapidity plateau is the rapidity-boost invariance of initial conditions. The corresponding solution for a longitudinal expansion in a 1$`+`$1-dimensional approximation, neglecting the transverse flow, was found by Bjorken. There is some experimental evidence, although a disputed one, for a transverse flow, which must develop if the lifetime of the hydrodynamical stage is sufficiently long. In this communication we present a simple solution of the Euler-Landau equation for the velocity of transverse expansion, $`u`$, gained in the hydrodynamic expansion of QGP before the hadronization phase transition. Our solution shows that for the usually considered lifetime $`\tau _B`$ of QGP the transverse flow is non-relativistic. It is only marginally sensitive to properties of the hot stage and offers a reliable determination of $`\tau _B`$ if the radial profile of the initial energy density is known. We find that the $`u`$-distribution is a scaling function of $`u/u_m`$, where $`u_m`$ is the maximal velocity of expansion. We start with the familiar Landau relativistic hydrodynamics equations $$\partial _\mu T_{\mu \nu }=0,$$ (1) for the energy-momentum tensor $`T_{\mu \nu }=(ϵ+p)u_\mu u_\nu -p\delta _{\mu \nu }`$, where $`ϵ`$ and $`p`$ are the energy density and pressure in the comoving frame, and $`u_\mu `$ is the 4-velocity of the element of the fluid. The initial state is formed from subcollisions of constituents (nucleons, constituent quarks and/or partons) of colliding nuclei and is glue dominated at early stages. The LPM effect implies that for a subcollision at the origin, $`x=(t,z,\stackrel{}{r})=0`$, the secondary particle formation vertices lie on a hyperbola of constant proper time $`\tau `$, $`\tau ^2=t^2-z^2\simeq \tau _0^2`$, and $`ϵ,p`$ do not depend on the space-time rapidity $`\eta =\frac{1}{2}\mathrm{ln}\left(\frac{t+z}{t-z}\right)`$ of the comoving reference frame.
In the 1$`+`$1-dimensional approximation, this leads to the celebrated Bjorken equation $$\frac{ϵ}{\tau }+\frac{ϵ+p}{\tau }=0.$$ (2) According to the lattice QCD studies, the familiar $`c_s^2=\frac{1}{3}`$ holds for a velocity of sound $`c_s`$ in the QGP excepting a negligible narrow region of the hadronization transition temperature $`T_h160`$MeV and energy density $`ϵ_h1.5`$ GeV$`/`$fm<sup>3</sup> . With the equation of state $`p=c_s^2ϵ`$, the Bjorken equation has a solution $$ϵ\tau ^{(1+c_s^2)}$$ (3) Widely varying estimates for $`ϵ_{max}`$ and the proper time $`\tau _0`$ are found in the literature . However, as Bjorken has argued , $`ϵ_{max}\frac{1}{\tau _0}`$ and the actual dependence of the Bjorken lifetime $`\tau _B`$ on $`ϵ_{max}`$ is rather weak: $$\tau _B=\tau _0\left[\left(\frac{T_{max}}{T_h}\right)^{\frac{4}{1+c_s^2}}1\right]=\frac{\tau _0ϵ_{max}}{ϵ_h}\left[\left(\frac{ϵ_{max}}{ϵ_h}\right)^{\frac{c_s^2}{1+c_s^2}}\frac{ϵ_h}{ϵ_{max}}\right].$$ (4) For central $`PbPb`$ collisions, for which there is some experimental evidence for the QGP formation , the typical estimates are $`\tau _B3\mathrm{f}/\mathrm{c}`$ at SPS and $`\tau _B6\mathrm{f}/\mathrm{c}`$ at RHIC , which are much larger than the standard estimate $`\tau _00.5`$ f/c. Now we turn to the major theme of collective transverse expansion, which is driven by radial gradient of pressure. As we shall see, the radial flow is nonrelativistic. Then, to the first order in radial velocity $`u_r`$, the radial projection of (1) gives the Euler-Landau equation $$(ϵ+p)\frac{u_r}{\tau }+u_r\left(\frac{(ϵ+p)}{\tau }+\frac{(ϵ+p)}{\tau }\right)+\frac{p}{r}=0,$$ (5) in which we can use the Bjorken’s solution for $`ϵ`$ and $`p=c_s^2ϵ`$. Then the Euler-Landau equation can be cast in a simple form $$\frac{u_r}{\tau }\frac{c_s^2}{\tau }u_r=\frac{c_s^2}{(1+c_s^2)}\frac{\mathrm{log}p}{r}.$$ (6) The important point is that the transverse expansion of the QGP fireball can be neglected which we can justify a posteriori. For this reason the time dependence of the logarithmic derivative $`D(r,\tau )=\frac{\mathrm{log}p}{r}`$ can be neglected, it is completely determined by the initial density profile and depends neither on the temperature nor fugacities of quarks and gluons, which substantially reduces the model-dependence of the transverse velocity. Then the solution of (5) subject to the boundary condition $`u_r(\tau _0)=0`$ is $$u_r(r,\tau )=\frac{c_s^2\tau ^{c_s^2}}{1c_s^4}_{\tau _0}^\tau 𝑑tt^{c_s^2}D(r,t)\frac{c_s^2\tau D(r,0)}{1c_s^4}\left[1\left(\frac{\tau _0}{\tau }\right)^{1c_s^2}\right].$$ (7) The model-independent estimates for the initial density/pressure profile are as yet lacking. AT RHIC and higher energies of the initial state is expected to be formed by semihard parton-parton interactions for which nuclear shadowing effects can be neglected . Then for central collisions $`ϵ(r,\tau _0)T_A^k(r)`$, where $`k=2`$ and $`T_A(r)=𝑑zn_A(\sqrt{z^2+r^2})`$ is the density of constituents, $`T_A(r)\mathrm{exp}(\frac{r^2}{R_A^2})`$, where $`R_A1.1A^{1/3}`$fm is the nuclear radius. In another extreme scenario of strong shadowing and of strong LPM effect the soft particle production is not proportional to the multiplicity of collisions of fast partons and $`k=1`$ is more appropriate. Hereafter we take $`k=2`$. 
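Bjorken’s observation that $`\tau _B`$ depends only weakly on the poorly known $`\tau _0`$ (with the initial energy density scaling roughly as $`1/\tau _0`$) can be illustrated numerically. The sketch below assumes $`c_s^2=1/3`$ and reads eq. (4), with the apparently dropped minus sign restored, as $`\tau _B=\tau _0[(ϵ_{max}/ϵ_h)^{1/(1+c_s^2)}-1]`$; the normalization $`ϵ_{max}\tau _0`$ is an arbitrary choice tuned to give $`\tau _B`$ near 3 f/c at $`\tau _0=0.5`$ f/c.

```python
# Illustrative check that tau_B is insensitive to tau_0 when eps_max ~ 1/tau_0.
c2 = 1.0 / 3.0                 # c_s^2
eps_h = 1.5                    # GeV/fm^3, hadronization energy density
C = 10.0                       # assumed eps_max * tau_0 (illustrative choice)

def tau_B(tau_0):
    eps_max = C / tau_0
    return tau_0 * ((eps_max / eps_h) ** (1.0 / (1.0 + c2)) - 1.0)

for tau_0 in (0.25, 0.5, 1.0):
    print(f"tau_0 = {tau_0:4.2f} fm/c  ->  tau_B = {tau_B(tau_0):.1f} fm/c")
```

A fourfold change in $`\tau _0`$ moves $`\tau _B`$ by only about 15% here, consistent with the statement above that the estimates of $`\tau _B`$ are fairly robust.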
In any case, the logarithmic pressure gradient is approximately linear, $`D(r,t)2kr/R_A^2`$, and according to the solution (7) the displacement of the fluid element $`\mathrm{\Delta }r`$ is proportional to the radius, $`\mathrm{\Delta }rr\tau ^2`$. Consequently, we have the Hubble-type radial rescaling $$\lambda (\tau )1+\frac{\mathrm{\Delta }r}{r}1+\left(\frac{\tau }{\tau _T}\right)^2,$$ (8) where $$\tau _T\frac{R_A}{\sqrt{k}c_s}.$$ (9) has a meaning of the lifetime against transverse expansion. For central $`PbPb`$ collisions eq. (9) gives $`\tau _T10k^{0.5}\mathrm{f}/\mathrm{c}`$ which is larger than the above cited estimates of $`\tau _B`$ and at SPS and RHIC energies the transverse expansion of the fireball can be neglected. In a quasi-uniform plasma the hydrodynamic expansion lasts until $`ϵ=ϵ_h`$. For long-lived QGP and $`\tau \tau _0`$ we can use the Bjorken’s solution $$ϵ(r,\tau )=ϵ_h\left(\frac{T_A(r)}{T_A(0)}\right)^k\left(\frac{\tau _B}{\tau }\right)^{1+c_s^2},$$ (10) which gives the position of the hadronization front $$r_h(\tau )=R_A\sqrt{\frac{1+c_s^2}{k}\mathrm{ln}\frac{\tau _B}{\tau }}$$ (11) and the radial velocity at the hadronization front $$u(\tau )=u_r(r_h(\tau ),\tau )=\frac{c_s^2\tau }{R_A(1c_s^4)}\sqrt{4k(1+c_s^2)\mathrm{ln}\frac{\tau _B}{\tau }}\left[1\left(\frac{\tau _0}{\tau }\right)^{1c_s^2}\right].$$ (12) For the usually discussed $`\tau _B`$ and $`\tau _0`$ we have $`\tau _B\tau _0`$. For such s long-lived QGP, $`\tau _B\tau _0`$, the radial velocity takes the maximal value $`u_m`$ at $`t1/\sqrt{e}`$, $$u_m=\frac{c_s^2\tau _B}{R_A(1c_s^4)}\sqrt{\frac{2k(1+c_s^2)}{e}}\left[1\left(\frac{\tau _0\sqrt{e}}{\tau _B}\right)^{1c_s^2}\right].$$ (13) It is remarkable that the average radial acceleration $`u_m/\tau _B`$ is approximately constant. The solid line in fig. 1 shows the maximal velocity $`u_m`$ evaluated from (12). The large-$`\tau _B`$ approximation (13), shown by the dashed line, reproduces these results to better than $`4\%`$ at $`\tau _B=3`$ f/c and better than $`1\%`$ at $`\tau _B=8`$ f/c. For the above cited estimates for $`\tau _B`$ in central $`PbPb`$ collisions we find $`u_m(SPS)0.13`$ and $`u_m(RHIC)0.28`$, consequently the nonrelativistic expansion approximation is justified very well. Now notice, that for a long-lived QGP the hadronization front (11), shown in fig. 2, and $`u(\tau )/u_m`$ depend only on the scaling variable $`t=\tau /\tau _B`$, with an obvious exception of the short-time region $`\tau \tau _0`$. This scaling property is clearly seen in fig. 3 where we show the $`u/u_m`$ as a function of $`t=\tau /\tau _B`$. Notice a convergence to a universal curve with the increasing $`\tau _B`$. The most interesting quantity is a radial velocity distribution which can be evaluated experimentally from the Doppler modifications of the thermal spectrum. In order to test our results one needs particles which are radiated from the hadronization front. The standard scenario is that the hadronization transition is followed by an expanding mixed phase which, however, does not contribute to the transverse velocity because in the mixed phase $`c_s^20`$ is negligible small. The mixed phase is followed by a hydrodynamic expansion and post-acceleration of strongly interacting pions and baryons until the hadronic freeze-out temperature $`T_f<T_h`$ is reached . This post-acceleration is negligible for weakly interacting $`K^+`$ and $`\varphi `$-mesons, which gives the desired access to the radial flow at the hadronization transition. 
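The estimate $`\tau _T10k^{0.5}`$ f/c quoted above follows directly from eq. (9); a short numerical check, assuming $`A=208`$ and $`c_s^2=1/3`$, is given below.

```python
from math import sqrt

A = 208                        # Pb mass number
R_A = 1.1 * A ** (1.0 / 3.0)   # fm, nuclear radius as defined above
c_s = 1.0 / sqrt(3.0)          # speed of sound, c_s^2 = 1/3

for k in (1, 2):
    tau_T = R_A / (sqrt(k) * c_s)          # eq. (9)
    print(f"k = {k}:  R_A = {R_A:.1f} fm,  tau_T = {tau_T:.1f} fm/c")
```

Both values exceed the Bjorken lifetimes of 3-6 f/c quoted above, which is the basis for neglecting the transverse expansion of the fireball.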
In evaluations of the modification of the thermal spectrum one needs to know the $`u`$-distribution weighted with the particle multiplicity. For the case of a constant hadronization temperature the contribution of the hadronization surface $`r_h(\tau )`$ to the particle multiplicity is $$dw\propto r_h(\tau )d\tau .$$ (14) Making use of the solutions (11) and (12), it can readily be transformed into $`dw/du\propto r_h(du/d\tau )^{-1}`$. The important point is that because of the above discussed scaling properties of the transverse flow, the velocity distribution is a scaling function of $`x=u/u_m`$: $$\frac{dw}{du}=\frac{1}{u_m}\frac{f(x,\tau _B)}{\sqrt{1-x}},$$ (15) where for a long-lived QGP $`f(x,\tau _B)`$ does not depend on $`\tau _B`$. The square-root singularity at $`x=1`$ is a trivial consequence of the vanishing derivative $`du(\tau )/d\tau `$ at $`\tau \simeq \tau _B/\sqrt{e}`$. In Fig. 4 we show the scaling function $`f(x,\tau _B)`$ for $`\tau _B=6`$ f/c. We don’t show $`f(x,\tau _B)`$ for other values of $`\tau _B`$, because the variations from $`\tau _B=6`$ f/c to 3 f/c to 9 f/c do not exceed several per cent and are confined to a narrow region of $`x\lesssim 0.2`$. The approximation $`f(x)=0.5`$ is good for all practical purposes. In conclusion, we would like to argue that the shape of the velocity distribution is to a large extent model independent. The generic origin of the square-root singularity at $`x=1`$ has already been emphasized; the fact that $`f(0)\ne 0`$ is due to radiation from the surface $`r_h\simeq R_A`$ at early stages of expansion. Above we assumed that the hydrodynamic expansion continues until the hadronization transition. Following Pomeranchuk one can argue that in the non-uniform plasma the hydrodynamic expansion stops when the mean free path $$l_{int}=\frac{1}{n(r_c,\tau )\sigma _t}$$ (16) defined in terms of the transport cross section $`\sigma _t`$, is larger than the QGP density variation length $`D(r,0)^{-1}`$. In the partially equilibrated QGP $`l_{int}\propto T^{-1}`$. Then the Pomeranchuk condition gives the temperature $`T_P`$ at which the hydrodynamic expansion stops, $`T_P\propto r/R_A^2`$. The possibility remains open that at early stages $`T_P>T_h`$, in which case $`dw\propto r_h(\tau )(T_P/T_h)^3d\tau `$. This enhanced radiation at early stages, at slow radial expansion but at higher temperatures $`T_P`$, would mimic radiation at a lower temperature and higher radial velocity. This may result in an apparent depletion of $`f(0)`$; in order to explore this possibility one needs a better understanding of $`l_{int}`$ near the hadronization transition. The NA49 fits to the proton, kaon and pion transverse mass $`m_T`$ distribution in central $`PbPb`$ collisions at SPS assume an identical freeze-out temperature for all particle species. For positive particles NA49 finds $`T_f=140\pm 7`$ MeV and the transverse velocity $`u=0.41\pm 0.11`$. However, for the $`K^+`$ one must take the higher freeze-out temperature $`T_f=T_h\simeq 160`$ MeV given by the lattice QCD. Because of the anti-correlation between the local temperature $`T_f`$ and $`u_T`$ (see Fig. 7 in ), such a fit with larger $`T_f`$ to the same $`m_T`$ distribution will yield a smaller $`u`$. Acknowledgements: This work was partly supported by the INTAS grant 96-597 and the Grant N 94-02-05203 from the Russian Fund for Fundamental Research. Figure captions * The maximal velocity of radial expansion $`u_m`$ for central $`PbPb`$ collisions as a function of the expansion time $`\tau _B`$. Shown by the dotted line is the large-$`\tau _B`$ formula (15).
* The time dependence of the hadronization front for central $`PbPb`$ collisions. * The convergence to the scaling behaviour of the time dependence of the relative radial velocity $`u/u_m`$ for central $`PbPb`$ collisions. * The scaling function $`f(x,\tau _B)`$ of Eq. (15) is shown for $`\tau _B=6`$ f/c.
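As a closing numerical illustration (not the calculation behind Figs. 3 and 4), the scaling form (15) and the approximate flatness of $`f(x)`$ can be checked by weighting the front velocity of eq. (12) with the surface factor of eq. (14). The sketch below assumes PbPb-like parameters ($`R_A`$ of about 6.5 fm, $`\tau _0=0.5`$ f/c, $`\tau _B=6`$ f/c, $`k=2`$) and restores the minus signs that appear to have dropped out of the displayed formulas.

```python
import numpy as np

# Numerical check of the scaling form (15): weight the front velocity u(tau)
# of eq. (12) by the surface factor r_h(tau) dtau of eq. (14) and histogram
# in x = u/u_m.  The minus signs below (e.g. 1 - (tau_0/tau)^(1-c_s^2),
# sqrt(1 - x)) are reconstructions of symbols lost in the displayed formulas.
c2 = 1.0 / 3.0                                   # c_s^2
R_A, tau_0, tau_B, k = 6.5, 0.5, 6.0, 2          # fm, fm/c, fm/c (assumed)

tau = np.linspace(tau_0, tau_B, 200001)
lr = np.log(tau_B / tau)

r_h = R_A * np.sqrt((1.0 + c2) / k * lr)                         # eq. (11)
u = (c2 * tau / (R_A * (1.0 - c2**2))
     * np.sqrt(4.0 * k * (1.0 + c2) * lr)
     * (1.0 - (tau_0 / tau) ** (1.0 - c2)))                      # eq. (12)

x = u / u.max()
weight = r_h * np.gradient(tau)                                  # dw ~ r_h dtau

hist, edges = np.histogram(x, bins=40, weights=weight, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f_x = hist * np.sqrt(1.0 - centers)    # should be roughly constant (~0.5)
print(np.round(f_x[5:35], 2))
```

Away from the smallest $`x`$, the extracted $`f(x)`$ should stay close to the value 0.5 advocated in the text.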
# To appear in Phys. FluidsNote on Forced Burgers Turbulence ## Abstract A putative powerlaw range of the probability density of velocity gradient in high-Reynolds-number forced Burgers turbulence is studied. In the absence of information about shock locations, elementary conservation and stationarity relations imply that the exponent $`\alpha `$ in this range satisfies $`\alpha 3`$, if dissipation within the powerlaw range is due to isolated shocks. A generalized model of shock birth and growth implies $`\alpha =7/2`$ if initial data and forcing are spatially homogeneous and obey Gaussian statistics. Arbitrary values $`\alpha 3`$ can be realized by suitably constructed homogeneous, non-Gaussian initial data and forcing. Burgers equation was originally proposed as a simplified dynamical system that might point at statistical procedures applicable to Navier-Stokes turbulence. That goal seems still far away. Meanwhile, a body of research has been devoted to Burgers turbulence itself. The particular topic addressed in the present paper is the power law of a part of a probability density for Burgers turbulence forced at high Reynolds number. The conclusion reached is that determination of the correct power law requires detailed examination of the dynamical behavior of explicit structures, namely shocks. This perhaps has discouraging implications for any general theory of the higher statistics of Navier-Stokes turbulence, where more varied and plastic flow structures must be confronted. There has been considerable interest in a putative powerlaw range of the probability density $`Q(\xi )`$ of velocity gradient $`\xi =u_x`$ for high-Reynolds-number Burgers turbulence forced at large scales. This range, of form $`Q(\xi )|\xi |^\alpha `$, is believed to occupy negative $`\xi `$ values intermediate between those near the central peak of $`Q`$ and those characteristic of the shock interiors. Proposals for $`\alpha `$ include $`2`$ , the range $`5/2`$ to $`3`$ , $`3`$ , and $`7/2`$ . It should be emphasized at the outset that the high-Reynolds-number limit is not the only case of theoretical interest. As with Navier-Stokes turbulence, the construction of faithful analytical approximations at finite Reynolds numbers remains a challenge. E and Vanden Eijnden advance and clarify the mathematics of the infinite-Reynolds-number limit and provide some valuable tools. One is a simple representation of the effects of shock interactions on $`Q`$ in terms of the rate at which fluid is swallowed by the shocks. Another is a steady-state integral representation of the asymptotic large-$`|\xi |`$ form of $`Q(\xi )`$ in terms of the dissipation term $`F(\xi )`$ in the $`Q`$ equation of motion. Another is an explicit shock-birth model that implies $`\alpha =7/2`$. The present paper explores the relation between the form of $`F(\xi )`$ and the value of $`\alpha `$. If viscous effects in the $`\alpha `$ range are due to isolated shocks, the form of $`F(\xi )`$ in this range expresses the relative likelihood, weighted by shock strength, that a shock occurs in a fluid environment with given $`\xi `$. In the absence of an explicit shock-growth model, or other source of information about the distribution of shocks, the integral equation for $`Q`$ is found to yield $`\alpha 3`$. The slightly stronger bound $`\alpha >3`$ is stated in and , but with the recognition that $`\alpha =3`$ may be realized under particular circumstances. 
The limit of infinite Reynolds number is taken in the present paper without making a split of the velocity field into shock interiors and external field. At the end, the analysis is extended to the split-field representation. A more general model of shock growth is presented here. It is independent of details of internal shock structure. In this model, one examines the length of time during which $`\xi `$ can steepen within a fluid element before the fluid element hits a shock. The model implies $`\alpha =7/2`$ if the forcing is statistically homogeneous and Gaussian. More general statistically homogeneous forcing can realize arbitrary values $`\alpha 3`$. Let $`R=u_0L/\nu `$, $`\xi _0=u_0/L`$, $`\xi _S=R\xi _0`$, where $`u_0`$, $`L`$, $`\nu `$ are root-mean-square velocity, spatial macroscale, and viscosity. The order of magnitude of gradients within typical shocks is $`\xi _S`$. The forced Burgers equation is $$u_t+u_xu=\nu u_{xx}+f,$$ (1) where $`f`$ is the (statistically stationary) forcing field. If $`f`$ has infinitely short correlation times, (1) leads to $$Q_t=\xi Q+(\xi ^2Q)_\xi +BQ_{\xi \xi }+F,$$ (2) where $$F(\xi ,t)=\nu (H(\xi ,t)Q(\xi ,t))_\xi ,H(\xi ,t)=\xi _{xx}|\xi ,$$ (3) and the parameter $`B`$ measures the strength of forcing of $`\xi `$ . The first term on the right side of (2) represents loss or gain of measure due to squeezing or stretching of the fluid, the second term describes relaxation of positive $`\xi `$ and steepening of negative $`\xi `$, and $`F`$ includes all viscous effects. Statistical homogeneity requires $`\xi =_{\mathrm{}}^{\mathrm{}}\xi Q𝑑\xi =0`$. Multiplication of (2) by $`\xi `$ and integration over all $`\xi `$ shows that this condition is preserved, provided that $`Q`$ vanishes strongly enough at $`\pm \mathrm{}`$. The result depends on $$_{\mathrm{}}^{\mathrm{}}\xi F(\xi )𝑑\xi =0,$$ (4) which follows from (3) and $`\xi _{xx}=0`$. If $`f`$ has spectral support effectively confined to wavenumbers $`O(1/L)`$, then $`\xi _0=O(B^{1/3})`$. In this case, it is widely agreed that the steady-state $`Q`$ has a complicated form in the limit $`R\mathrm{}`$. There is a central peak of width $`O(\xi _0)`$ whose form is $`R`$-independent in the limit. There is faster-than-algebraic decay as $`|\xi |\mathrm{}`$. For $`\xi >0`$, this decay has the specific form $`\mathrm{exp}(\xi ^3/3B)`$, with an algebraic prefactor. The rapidly-decaying tail for $`\xi <0`$ (far-left tail) includes $`|\xi |O(\xi _S)`$. It is preceded, at smaller $`|\xi |`$, by an algebraic tail of form $`1/R|\xi |`$ ($`1`$ range) associated with the shoulders of developed shocks. Between the $`1`$ range and the central peak, an inner algebraic tail of form $`Q(\xi )\xi _0^{\alpha 1}|\xi |^\alpha `$ is expected. This tail is driven by the inviscid steepening of negative gradients. Proposals for the value of $`\alpha `$ have ranged from $`2`$ to $`7/2`$. The $`\alpha `$ range is infinite if $`R`$ is, but is confined to $`|\xi |`$ smaller than the $`O(\xi _S)`$ gradients inside the shocks. Thus the range is restricted to $`\xi >\xi _MR^z\xi _0`$, with $`z1`$. In fact, both shock analysis and simulation show that the $`\alpha `$ range is masked by the $`1`$ range at negative enough $`\xi `$. The masking further restricts the observable $`\alpha `$ range to $`z=1/(\alpha 1)`$, a result that follows from setting $`1/R\xi _M=\xi _0^{\alpha 1}\xi _M^\alpha `$. 
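The structure of $`Q(\xi )`$ described above can in principle be seen directly by integrating the forced Burgers equation (1) and histogramming the gradient. The sketch below is illustrative only; it is not the numerical scheme, resolution, or forcing of the simulations cited in this paper, and it uses a first-order upwind discretization with white-in-time forcing on the two lowest wavenumbers. Much longer runs and larger $`N`$ would be needed for clean power laws.

```python
import numpy as np

# Minimal sketch of forced Burgers turbulence (illustrative parameters):
# integrate eq. (1) and histogram xi = u_x.
N, L = 2048, 1.0
dx = L / N
nu = 5.0e-4
dt = 2.0e-5
n_steps = 200000
rng = np.random.default_rng(1)

x = np.arange(N) * dx
u = np.zeros(N)
samples = []

def dudx_upwind(u):
    """First-order upwind derivative for the advection term."""
    fwd = (np.roll(u, -1) - u) / dx
    bwd = (u - np.roll(u, 1)) / dx
    return np.where(u > 0.0, bwd, fwd)

def laplacian(u):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

for step in range(n_steps):
    # white-in-time forcing supported on the two lowest wavenumbers
    a1, a2 = rng.normal(size=2)
    p1, p2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
    f = (a1 * np.sin(2.0 * np.pi * x / L + p1)
         + a2 * np.sin(4.0 * np.pi * x / L + p2)) / np.sqrt(dt)

    u = u + dt * (-u * dudx_upwind(u) + nu * laplacian(u) + f)

    if step > n_steps // 2 and step % 500 == 0:
        samples.append((np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx))

xi = np.concatenate(samples)
hist, edges = np.histogram(xi, bins=400, density=True)
# Inspect the negative-xi flank of the histogram on log-log axes for the
# putative power-law range between the central peak and the shock interiors.
```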
It should be emphasized that the transition from $`Q(\xi )|\xi |^\alpha `$ to $`Q(\xi )1/R|\xi |`$ at $`\xi =O(\xi _M)`$ is a masking, not a dynamical transition. The $`1`$ range is associated with the shoulders of quasi-stationary mature shocks while the $`\alpha `$ range is associated with inviscid steepening of gradients away from shocks. The latter process continues, for at least some fluid elements, up to $`|\xi |=O(\xi _S)`$. The behavior of $`Q`$ in all ranges is linked by (2) to the form of $`F(\xi )`$. For large negative $`\xi `$, the relationship is expressed in especially clear form by the integral representation $$Q(\xi )|\xi |^3_{\mathrm{}}^\xi \xi ^{}F(\xi ^{})𝑑\xi ^{}(\xi \xi _0)$$ (5) derived by E and Vanden Eijnden . An alternative form of (5) is $$\nu H(\xi )\xi ^2+\frac{1}{Q(\xi )}_{\mathrm{}}^\xi Q(\xi ^{})\xi ^{}𝑑\xi ^{}(\xi \xi _0).$$ (6) or by (3), $$F\xi Q(\xi ^2Q)_\xi (\xi \xi _0).$$ (7) Equation (7) says that in a steady state, for $`\xi `$ such that the $`B`$ term is negligible, the $`F`$ term in (2) must balance both the gradient steepening term $`(\xi ^2Q)_\xi `$ and the loss-of-measure term $`\xi Q`$. Equations (5)–(7) do not require large $`R`$. According to (5), the behavior of $`Q`$ in the $`\alpha `$ range depends on, first, the value $`_{\mathrm{}}^{\xi _M}\xi ^{}F(\xi ^{})𝑑\xi ^{}`$ of the integral at the outer end of the range; second, the $`\xi `$ dependence of $`F(\xi )`$ within the $`\alpha `$ range, which determines the growth of the integral within the range. If $`F(\xi )`$ in the $`\alpha `$ range arises solely from isolated shocks, the analysis of gives an immediate bound on $`\alpha `$. This analysis expresses a simple physics: If a shock lies in surrounding fluid with gradient $`\xi `$, then this fluid is swallowed by the convergence at the shock, thereby decreasing $`Q(\xi )`$. This means that $`F(\xi )`$ cannot be positive in the $`\alpha `$ range. Since $`\xi <0`$, the integral in (5) therefore cannot increase as $`|\xi |`$ decreases within the range, and $`\alpha <3`$ is impossible. The cases $`\alpha =3`$ and $`\alpha >3`$ imply qualitatively different magnitudes in (5). If $`\alpha =3`$, then (5) immediately gives $`_{\mathrm{}}^{\xi _M}\xi ^{}F(\xi ^{})𝑑\xi ^{}=O(\xi _0^2)`$. It is then required that $`F`$ be small enough that $`_{\mathrm{}}^\xi \xi ^{}F(\xi ^{})𝑑\xi ^{}=O(\xi _0^2)`$ if $`\xi `$ lies anywhere within the $`\alpha `$ range. If $`\alpha >3`$, (5) immediately gives $`_{\mathrm{}}^{\xi _M}\xi ^{}F(\xi ^{})𝑑\xi ^{}0`$ as $`\xi _M/\xi _0\mathrm{}`$. Then $`F(\xi )|\xi |^{1\alpha }`$ within the range will give $`Q(\xi )|\xi |^\alpha `$. This means $`F(\xi )\xi Q(\xi )`$. If dissipation within the $`\alpha `$ range is due to isolated shocks, the physical difference between $`\alpha =3`$ and $`\alpha >3`$ lies in the weighted relative likelihood that shocks exist in environments with given $`\xi `$. $`F(\xi )Q(\xi )`$, a case of Polyakov’s closure , is one form that is consistent with $`\alpha =3`$. This corresponds to unbiased placement of shocks in the fluid . $`F(\xi )\xi Q(\xi )`$ represents bias toward larger $`|\xi |`$ within the $`\alpha `$ range. Since $`Q(\xi )`$ falls off rapidly in the $`\alpha `$ range, even this form puts most of the shocks in fluid with $`|\xi |=O(\xi _0)`$. If $`\alpha =3`$, the inviscid steepening of gradient in most fluid elements with negative $`\xi `$ continues until $`|\xi |=O(\xi _S)`$. 
If the dissipative process that stops further steepening is absorption by well-defined shocks, these shocks must occur with sufficient weight in environments with $`|\xi |=O(\xi _S)`$. Since $`\xi _S`$ measures typical gradients within shocks, other processes, such as new shock formation, may actually dominate the dissipation. Further insight concerning the relation between $`F`$ and $`\alpha `$ is provided by the equation of motion (2) for $`Q`$ and, in particular, by the steady-state equation $`Q_t=0`$. A variety of global assumed forms for $`F`$, including $`F(\xi )Q(\xi )`$, can be adjusted, by tuning of the constant of proportionality, to give a steady-state $`Q(\xi )`$ that vanishes strongly as $`\xi +\mathrm{}`$, is positive everywhere, and has a range $`\alpha =3`$. In order to be physically relevant, $`F(\xi )`$ for negative $`\xi `$ beyond $`\xi _M`$ must be shaped to give consistent $`1/R|\xi |`$ and far-tail ranges for $`Q`$. Global model forms can also be constructed that yield a range with $`\alpha >3`$. The quantity $`\xi `$ is identically conserved at value zero. It is of interest to examine the way in which flow of $`\xi `$ through the $`\alpha `$ range depends on the value of $`\alpha `$. Let (2) be multiplied by $`\xi `$ and integrated over the range $`(\xi _M,\mathrm{})`$ to yield $$_{\xi _M}^{\mathrm{}}\xi Q_t(\xi )𝑑\xi =\xi _M^3Q(\xi _M)+_{\xi _M}^{\mathrm{}}\xi F(\xi )𝑑\xi .$$ (8) In writing (8), a partial integration is performed, it is assumed that $`Q`$ vanishes strongly enough at $`+\mathrm{}`$, and it is assumed that $`\xi _M/\xi _0`$ is large enough that the $`B`$ term is negligible. The left side of (8) is the rate of increase of $`\xi `$ in the range $`(\xi _M,\mathrm{})`$. On the right side, $`\xi _M^3Q(\xi _M)`$ is the rate of increase due to flow of negative contribution to $`\xi `$, through the boundary at $`\xi _M`$, to the range $`(\mathrm{},\xi _M)`$. This flow is due to inviscid steepening of gradients. The $`F`$ term on the right side of (8) is the rate of increase of $`\xi `$, or decrease of $`\xi `$, in the range $`(\xi _M,\mathrm{})`$, due to viscous interaction with isolated shocks and any other dissipative structures that may be present. The nature of the $`F`$ term in (8) must be understood clearly. The present analysis is at finite $`R`$, with the eventual limit $`R\mathrm{}`$ considered. There is no sudden jump of $`\xi `$ contribution from $`\alpha `$ range to shock interiors. What does happen at large $`R`$ is that a fluid element with $`\xi `$ in the $`\alpha `$ range hits the shock and then suffers a very rapid, but continuous, steepening until its $`\xi `$ is the order of that in the shock interior. The sum of these events in the entire range is expressed by the $`F`$ term in (8). In a steady state, the right side of (8) vanishes. The implications differ in the cases $`\alpha =3`$ and $`\alpha >3`$. If $`\alpha >3`$, the boundary-flow term in (8) vanishes in the limit $`\xi _M/\xi _0\mathrm{}`$. This means that the $`F`$ integral term vanishes also in the limit. In view of (4), where the limits are true infinity (infinite compared to $`\xi _S`$), it follows that $`_{\mathrm{}}^{\xi _M}\xi F(\xi )𝑑\xi `$ also vanishes in the limit. This does not mean that $`F(\xi )`$ tends to zero in the limit for all $`\xi `$ in the range $`(\mathrm{},\xi _M)`$. Instead, there are both positive and negative contributions that cancel in the limit. In general, $`F(\xi )`$ is positive in the $`1`$ range, as illustrated by (9) and (10) below. 
If $`\alpha =3`$, the boundary term in (8) tends to a nonzero positive constant in the limit. A steady state then implies that the $`F`$ integral in (8) is negative and that $`_{\mathrm{}}^{\xi _M}\xi F(\xi )𝑑\xi `$ is positive. This means that the contributions from the negative and positive regions of $`F`$ in the range $`(\mathrm{},\xi _M)`$ do not cancel. The nonzero value of $`_{\mathrm{}}^{\xi _M}\xi F(\xi )𝑑\xi `$ needed in the case $`\alpha =3`$ has already been noted in the discussion of (5). For both $`\alpha >3`$ and $`\alpha =3`$, the boundary flow through an arbitrary point $`\xi _\alpha `$ fully within the $`\alpha `$ range is independent of $`\xi _\alpha `$ in the limit $`R\mathrm{}`$. Thus if $`\xi _M`$ in (8) is replaced by any $`\xi _\alpha `$ such that $`\xi _\alpha /\xi _M0`$, $`\xi _\alpha /\xi _0\mathrm{}`$ in the limit, then the limiting value of the boundary flow vanishes if $`\alpha >3`$. The flow has a value $`O(\xi _0^2)`$, independent of $`\xi _\alpha `$, if $`\alpha =3`$. The form of $`Q`$ and $`F`$ in the far-left tail and $`1`$ range actually is not constrained by whether $`_{\mathrm{}}^{\xi _M}\xi F(\xi )𝑑\xi `$ vanishes or is $`O(\xi _0^2)`$ as $`R\mathrm{}`$. The key is the value of $`z=1/(\alpha 1)`$, which gives the position of the join between $`1`$ and $`\alpha `$ ranges at $`\xi _M=R^z\xi _0`$. Consider the generic form $$Q(\xi )Z(\xi /R\xi _0)/R|\xi |(\xi \xi _0)$$ (9) in the far-tail and $`1`$ ranges, where $`Z`$ vanishes strongly as $`\xi \mathrm{}`$, $`ZC_z`$ as $`\xi /R\xi _00`$, and $`C_z`$ is an $`O(1)`$ constant. If this form is substituted into (7), a consequence is $$_{\mathrm{}}^{\xi _M}\xi F(\xi )𝑑\xi C_zR^\beta \xi _0^2(R1),$$ (10) where $`\beta =(3\alpha )/(\alpha 1)`$. If $`\alpha =3`$, this gives the needed $`O(\xi _0^2)`$ result. If $`\alpha >3`$, the right side vanishes at $`R=\mathrm{}`$. Thus the value of the boundary flow automatically adjusts to the value of $`\alpha `$ that is determined by the form of $`F`$ in the $`\alpha `$ range. In other words, it adjusts to the probability distribution of shock occurence in the $`\alpha `$ range. Explicit models of shock development lead to explicit forms of $`F(\xi )`$. The following model generalizes those of . It assumes large $`R`$ but does not invoke the internal shock structure. The development is followed before and after shock birth. Take the unforced case first. Consider an initial velocity field of form $$u(x,0)\xi _0(ax+bx|x/L|^p)(p>0)$$ (11) in the vicinity of a point $`x=0`$ where a shock will form at time $`1/a\xi _0`$. Here $`a`$ and $`b`$ are $`O(1)`$ positive constants. If $`p`$ is an even integer, all $`x`$ derivatives of $`u`$ exist at $`x=0`$. If $`p`$ is an odd integer or non-integer, only the derivatives of order $`np+1`$ exist. This form of initial velocity field leads to $`\alpha =3+1/p`$, a result that can be verified in several ways. The following is a simple qualitative argument. The inviscid evolution of velocity gradient in a Lagrangian frame satisfies $`\dot{\xi }=\xi ^2`$. It then follows from (11) that the time of initial shock formation is $`t_0=1/a\xi _0`$. The negative gradient within a fluid element initially at small enough $`|x|>0`$ grows inviscidly until the fluid element hits the shock. The time at which a fluid element initially at $`x`$ falls into the growing shock is $`t_x1/[\xi _0(ab|x/L|^p)]`$. 
The times of arrival at the shock determine the fractional measure of initial points $`x`$ such that, before a fluid element hits the shock, the gradient magnitude increases to values $`|\xi |\xi _0`$. The gradient magnitude at $`x`$ grows to $`a\xi _0|x/L|^p/[b(p+1)]`$ at $`t_0`$ and $`a\xi _0|x/L|^p/bp`$ at $`t_x`$. Thus the measure of points $`x`$ such that the gradient magnitude in the fluid element initially at $`x`$ can grow to a value that equals or exceeds $`|\xi |\xi _0`$ within the intervals $`(0,t_0)`$ or $`(0,t_x)`$ is $`|\xi |^{1/p}`$. Of the initial fluid elements that achieve a value at least $`|\xi |\xi _0`$ before hitting the shock, the fraction that does this in the preshock interval $`(0,t_0)`$ is $`(p/p+1)^{1/p}`$. When the fluid element has achieved the value $`\xi `$, squeezing has decreased its measure by a factor $`|\xi |^1`$. Finally, the residence time of the fluid element in the gradient interval $`d\xi `$ at $`\xi `$ is $`dt=|\xi |^2d\xi `$. Putting these factors together, one obtains $`\overline{Q}(\xi )|\xi |^{121/p}=|\xi |^{31/p}`$, where $`\overline{Q}(\xi )`$ is the mean of $`Q(\xi ,t)`$ over a time interval (say $`2/a\xi _0`$) long enough for all fluid elements that can achieve $`|\xi |\xi _0`$ to have hit the shock. In , $`\alpha =3`$ was deduced under the assumption that the fractional measure of fluid elements that can achieve gradient magnitudes $`\xi _0`$ is $`O(1)`$. The measure $`|\xi |^{1/p}`$ found instead in the present model changes the result to $`\alpha =3+1/p`$. For all finite $`p>0`$, the form of $`F`$ calculated from the present model is $`F(\xi )\xi Q(\xi )`$, within the $`\alpha `$ range. Initial sawtooth profiles, where $`u(x)`$ consists solely of straight-line segments, correspond to $`b=0`$ (or $`p=\mathrm{}`$) in the model. They evolve into shocks that have finite amplitude at birth and yield $`\alpha =3`$. The present model thereby is consistent with the conclusion that isolated shocks can induce $`\alpha =3`$ only if they are created with finite amplitude. The steady state produced by spatially smooth Gaussian forcing supported by wavenumbers $`O(1/L)`$ can be interpreted in terms of this model. Such forcing induces smooth profiles corresponding to $`p=2`$ near points of extremal slope. In the absence of force, a shock forms from steepening of slope at a point of maximally negative slope. Smooth change of the velocity field due to Gaussian forcing can change the location of the point of maximally negative slope as a function of time. But such forcing does not change the nature of the shock formation phenomenon because the quadratic decrease of slope magnitude away from the point of negative maximum survives. Once the slope magnitude at negative maximum is large compared to $`\xi _0`$, the forcing should have no significant effect on either the progression to shock birth or the initial shock growth after birth. The value $`\alpha =7/2`$ corresponding to $`p=2`$ is the steady-state result. Forcing that supports the general case $`p2`$ in a statistically steady state can be constructed as follows: Let the forcing consist of a set of $`\delta `$-functions in time, spaced at time intervals $`O(1/\xi _0)`$. Let each such $`\delta `$-function create an increment to $`u(x)`$ that consists of straight-line segments of length $`O(L)`$ with $`O(u_0/L)`$ positive slope, smoothly joined to interposed negative-slope regions of $`O(L)`$ or shorter lengths. 
In each negative-slope region let there be a point of maximally negative slope surrounded by a neighborhood in which $`u(x)`$ has the form $`\xi _0(ay+by|y/L|^p)`$, where $`y`$ is the distance from the point of maximally negative slope. The values of $`a`$ and $`b`$ can change stochastically from one such region to another. Under these conditions, the negative-slope increment to $`u(x)`$ created by each $`\delta `$-function force field is added to an existing field that has locally constant slope with $`O(1)`$ probability. Therefore the points of maximally-negative slope are at the special points $`y=0`$, and the consequent shock development yields $`\alpha =3+1/p`$ for all $`p>0`$, as in the initial-value case. All cases of the model except $`p=2`$, $`\alpha =7/2`$ require special shapes of the velocity field prior to shock formation and, therefore, precise phase relations of spatial Fourier components. If the forcing field is spatially homogeneous and has an infinitely short coherence time (white forcing), the effective forcing is Gaussian and these shapes and phase relations cannot be realized. The white, homogeneous forcing assumed in writing the $`B`$ term in (2) implies $`p=2`$, $`\alpha =7/2`$. At the end of , it was noted that the value of $`\alpha `$ depends on how likely it is for a shock collision to interrupt the steepening of negative gradients. It was argued that collisions should be infrequent enough that inviscid steepening should survive for most fluid elements of negative $`\xi `$ until $`|\xi |=O(\xi _S)`$ is reached. This implies $`\alpha =3`$. The present shock-growth model, following the earlier ones in , says instead that fluid elements with large negative $`\xi `$ are inevitably close to shocks that soon swallow them if $`p=O(1)`$. For most fluid elements with negative $`\xi `$, the inviscid steepening is terminated by collision with the shocks while $`|\xi |`$ is still $`O(\xi _0)`$. The analysis in is done in terms of a split of $`u`$ and $`\xi `$ into parts exterior to shocks and parts interior to shocks, in the limit $`R\mathrm{}`$: $$u(x,t)=u_e(x,t)+u_i(x,t),\xi (x,t)=\xi _e(x,t)+\xi _i(x,t)$$ (12) It is of interest to discuss how to make the split (12) when $`R`$ is large but not infinite, and to express the preceding analysis in terms of the split-field representation. The split into interior and exterior fields is analyzed in by means of matched asymptotic expansions. If the $`R=\mathrm{}`$ field consists of infinitely thin shocks surrounded by fluid in which $`|\xi |<\xi _1`$, where $`\xi _1`$ is some finite bound, the split is clear and unambiguous. At large finite $`R`$, one can isolate a small region surrounding each shock as $`u_i`$, and consistently let the widths of these regions shrink to zero at $`R=\mathrm{}`$. E and Vanden Eijnden show that the result is a simple set of statistical equations relating $`\xi _e`$ and the shock jumps: $$\xi _e+\rho s=0,$$ (13) $$F_e(\xi _e,t)=\frac{\rho }{2}s\left[V_{}(\xi _e,s,t)+V_+(\xi _e,s,t)\right]𝑑s.$$ (14) Here $`\rho `$ is the number density of shocks, $`s`$ is shock-jump strength ($`<0`$), $`F_e`$ is the viscous term in an equation of motion like (2) for the probability density $`Q_e(\xi _e)`$ of the exterior field, and $`V_{}`$ ($`V_+`$) is the probability that a shock of strength $`s`$ has a left (right) environment with gradient $`\xi _e`$. These equations have a direct physical interpretation. 
Equation (14) expresses the viscous term as the rate at which $`Q_e(\xi _e)`$ is diminished through the swallowing of fluid with gradient $`\xi _e`$ by shocks. The time derivative of (13) is an expression of the fact that the rate of change of shock strength is given by the product of the convergence velocity and the negative of the gradient of the external field, as the latter is swallowed by the shock. The split into $`\xi _e`$ and $`\xi _i`$ is less clear if there is a $`\alpha `$ range at $`R=\mathrm{}`$ that includes $`|\xi |`$ values larger than any finite bound. Then the $`\xi _e`$ field is not smooth. As $`R`$ approaches infinity, it is not possible to form an $`R`$-dependent parameter $`\xi _1`$ such that all field with $`|\xi |<\xi _1`$ belongs to $`\xi _e`$ and all field with $`|\xi |>\xi _1`$ belongs to $`\xi _i`$. Instead the division into interior and exterior fields must be made individually at each shock. The equation of motion for $`\xi `$ obtained by differentiation of (1) contains the steepening term $`\xi ^2`$ and the dissipation term $`\nu \xi _{xx}`$. One possibility for splitting the $`\xi `$ field is to let $`\xi _e`$ include all points $`\xi >0`$ and all points $`\xi <0`$ that are exterior to defined boundaries of the shock shoulders. A shock-shoulder boundary could be defined as a point where, as one moves through the field toward the shock, $`\xi ^2/|\nu \xi _{xx}|`$ first falls below some prescribed ratio, say $`10`$. Then $`\xi _i`$ constitutes the field interior to the left and right shock-shoulder boundaries. This gives a $`\xi _e`$ field that extends up to values $`|\xi |=O(\xi _S)`$ at some points, with a $`Q_e(\xi _e)`$ that must deviate from powerlaw behavior for such $`|\xi |`$ values. The qualitative behaviors of $`\xi _e`$ and $`\xi _i`$ are independent of the precise value prescribed for the critical ratio of steepening term to dissipation term. Since $`|\xi _e|`$ extends to $`\mathrm{}`$ in the limit, it is not obvious that all interactions between exterior and interior fields have a form consistent with (13) and (14). In the limit, the $`1/R|\xi |`$ range belongs to $`\xi _i`$, which includes the shock shoulders. Both $`\xi _e`$ and $`\xi _i`$ contribute to the total-field probability density $`Q(\xi _M)`$, and this fact remains as $`R\mathrm{}`$. It is been remarked above that $`F(\xi )`$ is positive in the $`1`$ range, while (14) gives negative $`F_e(\xi _e)`$. This emphasises that the region of $`\xi `$ contributing to the $`1`$ range in each individual shock boundary layer should be assigned to $`\xi _i`$. Equations similar to (2), (5), and (8) can be written for $`Q_e(\xi _e)`$: $$(Q_e)_t=\xi _eQ_e+(\xi _e^2Q_e)_\xi +B(Q_e)_{\xi \xi }+F_e,$$ (15) $$Q_e(\xi _e)|\xi _e|^3_{\mathrm{}}^{\xi _e}\xi _e^{}F_e(\xi _e^{})𝑑\xi _e^{}(\xi _e\xi _0),$$ (16) $$_{\xi _\alpha }^{\mathrm{}}\xi _e(Q_e)_t(\xi _e)𝑑\xi _e=\xi _\alpha ^3Q_e(\xi _\alpha )+_{\xi _\alpha }^{\mathrm{}}\xi _eF_e(\xi _e)𝑑\xi _e.$$ (17) Although the equations look the same, $`F_e`$ and $`F`$ behave differently for negative arguments. The parameter $`\xi _M`$ has no special significance here because the $`\alpha `$ range of $`Q_e`$ is not masked by the $`1`$ range. The latter belongs to $`\xi _i`$. 
Therefore $`\xi _M`$ has been replaced in (17) by $`\xi _\alpha `$, which is any value within the $`\alpha `$ range that satisfies the limiting relations $$\xi _\alpha /\xi _0\mathrm{},\xi _\alpha /\xi _S0(R\mathrm{})$$ (18) If $`\alpha =3`$, (17), like (8), exhibits a boundary flow that does not vanish at $`R=\mathrm{}`$. The analog of (4), $`_{\mathrm{}}^{\mathrm{}}\xi _eF_e(\xi _e)𝑑\xi _e=0`$, holds in steady states. In general transient states, (13) implies that there is an additional term involving the rate of change of shock jumps. The power-law behavior of $`Q_e(\xi _e)`$ must change to a faster (eventually faster-than-algebraic) fall-off for $`|\xi _e|O(\xi _S)`$. The generic form may be written $$Q_e(\xi _e)\xi _0^{\alpha 1}|\xi _e|^\alpha \stackrel{~}{Z}(\xi _e/R\xi _0)(\xi _e\xi _0).$$ (19) Here $`\stackrel{~}{Z}`$ vanishes strongly at $`\xi _e=\mathrm{}`$ and is $`O(1)`$ at $`\xi _e=0`$. The precise form of $`\stackrel{~}{Z}`$ is $`\alpha `$-dependent. In contrast to (9), there is no $`1`$ range. The prefactor in (19) gives the consistency property $`Q_e(\xi _0)=O(1/\xi _0)`$ if the $`\alpha `$ range is extrapolated toward the central peak of $`Q_e`$. Substitution of (19) into (16) yields $$_{\mathrm{}}^{\xi _\alpha }\xi _eF_e(\xi _e)𝑑\xi _e=(\xi _0/\xi _\alpha )^{\alpha 3}O(\xi _0^2).$$ (20) If $`\alpha >3`$, the dissipation measured by $`_{\xi _\alpha }^{\mathrm{}}\xi _eF_e(\xi _e)𝑑\xi _e`$ equals the total dissipation of $`\xi _e`$ in the limit $`R\mathrm{}`$. If $`\alpha =3`$, (20) shows that there is additionally an essential contribution $`_{\mathrm{}}^{\xi _\alpha }\xi _eF_e(\xi _e)𝑑\xi _e`$ that is $`O(\xi _0^2)`$ in the limit. This arises from $`O(\xi _0^2/\xi _S^2)`$ levels of $`F_e(\xi _e)`$ needed to induce the fast fall-off of $`Q_e(\xi _e)`$ at $`|\xi _e|=O(\xi _S)`$. The fast fall off occurs also if $`\alpha >3`$, but in that case (20) shows that the associated contribution $`_{\mathrm{}}^{\xi _\alpha }\xi _eF_e(\xi _e)𝑑\xi _e`$ is a vanishing part of the overall budget in the $`R\mathrm{}`$ limit. In , the limit “$`\mathrm{}`$” in (16) above and other equations is taken to mean a point within the infinite $`\alpha `$ range, rather than true negative infinity ($`\xi _S`$). In other words “$`\mathrm{}`$” is taken to mean $`\xi _\alpha `$, as defined by (18) in the limit $`R=\mathrm{}`$. This causes no problem if $`\alpha >3`$. If $`\alpha =3`$, it is clear from (20) that “$`\mathrm{}`$” must stay at true negative infinity. If the possibility $`\alpha =3`$ is to be examined, clearly one cannot confine attention solely to the strict $`\alpha `$ powerlaw range and ignore the transition region at $`|\xi _e|=O(\xi _S)`$. The case $`\alpha =3`$ implies that, for most fluid elements with steepening negative $`\xi `$, the inviscid steepening halts at $`\xi `$ within the transition region. The magnitude of $`F_e(\xi _e)`$ at $`|\xi _e|=O(\xi _S)`$ required by $`\alpha =3`$ signifies that shocks have a relatively high concentration in fluid with such $`\xi _e`$. In comparison with $`\alpha >3`$, shocks are moved from environments with $`|\xi _e|\xi _S`$ to environments in the transition region $`|\xi _e|=O(\xi _S)`$. If an explicit shock-growth model is not adopted to fix $`F_e`$, it is possible a priori that shocks with environments in the transition region could fall within the picture invoked by , in which shocks are created at zero amplitude and the balance is described fully by (13) and (14). 
In this case it is only needed to include the transition region of shock environments in $`\xi _e`$ when calculating the balance (13) for $`\alpha =3`$. However, it is also possible a priori that $`F_e(\xi _e)`$ is weighted to sufficiently large $`|\xi _e|`$ that (14) is not an accurate description of the dissipation mechanism for $`\alpha =3`$, or that $`F_e(\xi _e)`$ includes interactions with shocks created at finite amplitude as also considered in . I am grateful to S. Boldyrev, W. E, U. Frisch, T. Gotoh, A. M. Polyakov, and E. Vanden Eijnden for fruitful interactions. This work was supported by the U. S. Department of Energy under Grant DE-FG03-90ER14118 and by the U. S. National Science Foundation under Grant DMS-9803538.
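As a closing illustration of the shock-growth counting argument developed above (eq. (11) and the discussion following it), the inviscid evolution of a single negative-slope region can be obtained exactly from the Lax-Oleinik (Hopf) variational formula and its time-averaged gradient histogram compared with the prediction $`\alpha =3+1/p`$. The sketch below is only a rough check: the initial profile is an assumption standing in for eq. (11), whose signs appear garbled above (the slope is taken most negative at $`x=0`$, softening as $`|x|^p`$ away from it), and the fitted slope only approaches the predicted value once the gradients probed are well above the initial gradient scale.

```python
import numpy as np

# Check of the counting argument: solve the inviscid Burgers equation exactly
# with the Lax-Oleinik (Hopf) variational formula for one localized
# negative-slope region and accumulate the time-averaged gradient histogram.
p = 2.0                                   # p = 2 should give alpha = 3.5
Ny, Nx = 5000, 2500

y = np.linspace(-4.0, 4.0, Ny)
x = np.linspace(-2.0, 2.0, Nx)
dx = x[1] - x[0]

def u0(s):
    """Assumed initial profile: slope -1 at the origin, softening ~ |s|^p."""
    s = np.asarray(s, dtype=float)
    return np.where(np.abs(s) <= np.pi,
                    -s * (1.0 - (np.abs(s) / np.pi) ** p), 0.0)

# potential G(y) = int u0 dy on the y grid (constant offset is irrelevant)
G = np.concatenate(([0.0],
                    np.cumsum(0.5 * (u0(y[1:]) + u0(y[:-1])) * np.diff(y))))

edges = np.geomspace(2.0, 25.0, 21)       # bins in |xi| for the negative tail
hist = np.zeros(len(edges) - 1)

for t in np.linspace(0.05, 2.0, 80):      # the shock is born at t = 1
    phi = G[None, :] + (x[:, None] - y[None, :]) ** 2 / (2.0 * t)
    u = (x - y[np.argmin(phi, axis=1)]) / t
    xi = np.gradient(u, dx)
    if t > 1.0:
        jump = int(np.argmin(np.diff(u)))
        xi[jump:jump + 2] = 0.0           # drop the two shock-contaminated points
    h, _ = np.histogram(-xi[xi < 0.0], bins=edges)
    hist += h

centers = np.sqrt(edges[:-1] * edges[1:])
dens = hist / np.diff(edges)
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(dens[mask]), 1)[0]
print(f"fitted slope {slope:.2f} vs predicted {-(3.0 + 1.0/p):.2f}")
```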
# Two-dimensional Dilute Ising Models: Defect Lines and the Universality of the Critical Exponent 𝜈 ## Abstract We consider two-dimensional Ising models with randomly distributed ferromagnetic bonds and study the local critical behavior at defect lines by extensive Monte Carlo simulations. Both for ladder and chain type defects, non-universal critical behavior is observed: the critical exponent of the defect magnetization is found to be a continuous function of the strength of the defect coupling. Analyzing corresponding stability conditions, we obtain new evidence that the critical exponent $`\nu `$ of the bulk correlation length of the random Ising model does not depend on dilution, i.e. $`\nu =1`$. KEY WORDS: random Ising model; defect lines; Monte Carlo simulations The presence of quenched randomness may drastically change the critical properties of magnetic systems. For disorder which is coupled to the energy density, i.e. in particular for random bond and random site dilution, the relevance-irrelevance of the perturbation at a second-order phase transition point is given by the well known Harris criterion. If the specific heat exponent of the pure system is positive, $`\alpha >0`$, a new random fixed point is expected to control the critical properties of the dilute model. The marginal situation in the Harris criterion, $`\alpha =0`$, is represented by the two-dimensional (2d) Ising model, in which case detailed studies, both (field-)theoretical and numerical, have been performed to clarify the critical properties of the dilute model. By now, according to general view, the dilution is considered as a marginally irrelevant perturbation, thus the critical singularities in the dilute Ising model are characterized by the power laws of the perfect model modified by logarithmic corrections. There is, however, another view which interprets numerical data as giving evidence for dilution dependent critical exponents. In this paper we try to decide between the conflicting views in an indirect way by studying the local critical behavior at a defect line in the dilute model. A defect line, which could be located at grain boundaries in real systems, represents a marginal perturbation in the 2d perfect Ising model. According to Bariev’s exact solution the critical exponent $`\beta _d`$, defined via the temperature dependence of the defect or local magnetization $`m_d`$, $$m_dt^{\beta _d},t=(T_cT)/T_c0^+,$$ (1) is a continuous function of the strength of the defect coupling $`J_d`$. For a chain defect, see Fig. 1b, one gets $$\beta _d=\frac{2}{\pi ^2}\mathrm{arctan}^2\kappa _c,\kappa _c=\mathrm{exp}\left[2(J_dJ)/T_c\right],$$ (2) whereas for a ladder defect, see Fig. 1a, $`\beta _d`$ is given by $$\beta _d=\frac{2}{\pi ^2}\mathrm{arctan}^2\kappa _l,\kappa _l=\frac{\mathrm{tanh}(J/T_c)}{\mathrm{tanh}(J_d/T_c)},$$ (3) where $`J`$ is the coupling in the isotropic Ising model. We note that the above formulae could be generalized to non-isotropic models, as well. As shown recently by Pleimling and Selke, the edge magnetization of three-dimensional Ising magnets at the surface transition has a similar non-universal critical behavior, which, indeed, can be related to the local critical behavior at a defect line in the two-dimensional Ising model. The exact results on the local critical behavior of the 2d Ising model in eqs. (2) and (3) are in complete agreement with a stability analysis of the fixed point of the homogeneous system in the presence of a defect line. 
Under a small perturbation, the fixed point of the homogeneous system is unstable if the critical exponent of the bulk correlation length, $`\nu `$, satisfies $`\nu <1`$; the perturbation is marginal for $`\nu =1`$. Furthermore, a ladder defect with small local couplings behaves like two weakly coupled surfaces, and ordinary surface critical behavior will result, provided the corresponding surface fixed point remains stable against a weak coupling between the surfaces. This stability condition can be expressed in terms of the surface susceptibility exponent of the homogeneous model as $`\gamma _{1,1}<0`$ (applying hyperscaling, one obtains for $`d=2`$, $`\gamma _{1,1}=\nu -2\beta _1`$, where $`\beta _1`$ is the critical exponent of the surface magnetization). As one may easily check, the corresponding two marginality conditions for ladder defects,

$$\nu =1\text{ and }\gamma _{1,1}=0,$$ (4)

are both satisfied for the 2d Ising model; the marginality is manifested by the defect-coupling dependent critical exponent in eq. (3).

In the following, we are going to utilize the above observations and study the local critical behavior at defect lines in the dilute Ising model. We consider strongly diluted systems, so that the bulk critical region is clearly controlled by the random fixed point, and insert the line defects as a local perturbation. Then the relevance-irrelevance criterion for the local critical behavior is expected to have the same form as described above, with the exponents $`\nu `$ and $`\gamma _{1,1}`$ now referring to the dilute model. Determining the local magnetization exponent $`\beta _d`$ at the defect, one may imagine two scenarios: i) $`\beta _d`$ shows a continuous variation with the defect coupling $`J_d`$, or ii) $`\beta _d`$ stays constant in, at least, some extended range of $`J_d`$. In the first case, there would be evidence that the marginality conditions, see eq. (4), remain valid for the dilute model. Otherwise, one might infer that the critical exponents $`\nu `$ and $`\gamma _{1,1}`$ of the pure and dilute models are different.

In what follows we consider a random-bond nearest-neighbor Ising model on a square lattice, where the random ferromagnetic couplings, $`J_1`$ and $`J_2`$, occur with equal probability. The model is self-dual, and the self-duality point

$$\mathrm{tanh}(J_1/T_c)=\mathrm{exp}(-2J_2/T_c),$$ (5)

corresponds to the critical point, provided there is a single phase transition in the system. Indeed, this assumption is strongly supported by numerical calculations. In this dilute model, ladder and chain defects are then introduced, where the defect couplings $`J_d`$ are uniform and ferromagnetic.

To calculate the local critical properties we performed extensive Monte Carlo (MC) simulations using Wolff’s cluster-flip algorithm. We considered square lattices with $`L`$ columns and $`L`$ rows; $`J_d`$ couples neighboring spins in the center column for chain defects, whereas for the ladder defect $`J_d`$ connects spins between the two center columns. Typically we took $`L=256`$, applying full periodic boundary conditions and generating about $`10^4`$ clusters per realization. The results are then averaged over one hundred realizations. The statistical errors during a MC run in a given sample turned out to be significantly smaller than those arising from the ensemble averaging. We mention that similar parameters were used in the previous study on the surface critical behavior of the dilute Ising model, which corresponds to the case of a ladder defect with vanishing $`J_d`$.
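All simulations are carried out at the self-duality point, eq. (5), so that for each dilution ratio $`r=J_1/J_2`$ the critical temperature has to be determined numerically. The short sketch below (our own illustration; the function name is invented) obtains $`T_c/J_2`$ from eq. (5) by plain bisection; for $`r=1`$ it reproduces the pure-model value $`T_c/J=2/\mathrm{ln}(1+\sqrt{2})\approx 2.269`$.

```python
import math

def tc_over_j2(r, lo=1e-3, hi=20.0, iters=100):
    """Critical temperature T_c/J_2 from the self-duality condition, eq. (5):
    tanh(J_1/T_c) = exp(-2 J_2/T_c), with r = J_1/J_2.  Plain bisection."""
    f = lambda t: math.tanh(r / t) - math.exp(-2.0 / t)
    a, b = lo, hi                      # f(a) > 0 at low T, f(b) < 0 at high T
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(m) > 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

for r in (1.0, 0.25, 0.1):
    print(r, tc_over_j2(r))            # r = 1 gives the pure-model value T_c/J ~ 2.269
```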
In the MC simulations we calculated the average magnetization per column, $`m(i)=|\sum _{j}s_{ij}|/L`$, where the sum runs over $`j=1,2,\dots ,L`$. The defect magnetization is then given by $`m_d=m(L/2)`$. The simulations were performed at three values of the dilution parameter, $`r=J_1/J_2=1`$, $`1/4`$ and $`1/10`$, and for several values of the defect coupling in the range $`0\le J_d/J_2\le 4`$. The magnetization profile $`m(i)`$ displays at the defect either a maximum or a minimum, depending on the strength of the defect couplings, as illustrated in Fig. 2 for ladder defects. Far from the defect, there is a plateau in the profile, whose height gives the bulk magnetization $`m_b`$. The size of the defect region, $`l_d`$, where the magnetization differs substantially from its bulk value, is related to the bulk correlation length of the system. In the thermodynamic limit, $`L\to \mathrm{\infty }`$, as the critical temperature $`T_c`$, see eq. (5), is approached, the magnetization profile $`m(i)`$ goes to zero as a power law, $`m(i)\sim t^{\beta (i)}`$, where $`\beta (L/2)=\beta _d`$ and $`\beta (i)=\beta `$ for $`|L/2-i|>l_d`$, $`\beta `$ being the usual bulk critical exponent. To estimate the values of these critical exponents from the simulation data, one may define temperature-dependent effective exponents

$$\beta (i)_{eff}=\mathrm{d}\mathrm{ln}[m(i)]/\mathrm{d}\mathrm{ln}[t],$$ (6)

which are approximated by using data at discrete temperatures, say, $`t+\mathrm{\Delta }t/2`$ and $`t-\mathrm{\Delta }t/2`$. In the limit of sufficiently small $`\mathrm{\Delta }t`$ and $`t`$, the effective exponents approach the true critical exponents, provided the system is large enough that finite-size effects play no role. To avoid such effects, $`L`$ should be much larger than the correlation lengths in the bulk and at the defect line. We approached the critical point by calculating $`\beta (i)_{eff}`$ for $`t=0.15,0.13,0.11,0.09`$ and $`0.07`$, with $`\mathrm{\Delta }t=0.02`$, and then performed a linear extrapolation to $`t=0`$. The error caused by the extrapolation appears to be rather small. Further technical details can be found in Ref. 14.

Before presenting our findings on the defect-line problem in the dilute model, we first consider the perfect model with random defect couplings. The aim of this part of the investigation is to clarify whether random defects alone could lead to varying local exponents. The two random ferromagnetic couplings in the defect line, $`J`$ and $`J_d`$, are assumed to occur with equal probability, where $`J`$ is also the coupling in the rest of the system. Results for the local magnetization exponent $`\beta _d`$ at various values of the ratio $`J_d/J`$ are shown in Fig. 3, both for chain and ladder defects. The error bars in Fig. 3 take into account the sample averaging and the extrapolation; only a few typical error bars are shown. For $`J_d/J=1`$, the critical exponent of the perfect model, $`\beta =1/8`$, is reproduced quite accurately. For other values of that ratio, one observes a non-universal critical behavior, with $`\beta _d`$ varying continuously with $`J_d/J`$, in accordance with the above marginality conditions. It may be interesting to note that the curves for the ladder and the chain defects touch at $`J_d=J`$.
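The $`\beta _d`$ values quoted here and below are obtained with the effective-exponent procedure of eq. (6). To make that step concrete, the following minimal sketch applies the finite-difference estimate at $`t\pm \mathrm{\Delta }t/2`$ and the linear extrapolation to $`t=0`$ to synthetic data $`m(t)=At^\beta (1+bt)`$ with an invented correction-to-scaling term; it only illustrates the extrapolation, not the actual simulation analysis.

```python
import numpy as np

# Synthetic magnetization data m(t) = A t^beta (1 + b t), with an invented
# correction-to-scaling term, just to illustrate the procedure of eq. (6).
beta_true, A, b = 0.125, 1.0, 0.4
t = np.array([0.15, 0.13, 0.11, 0.09, 0.07])     # reduced temperatures used in the text
dt = 0.02

def m(tt):
    return A * tt ** beta_true * (1.0 + b * tt)

# Effective exponent, eq. (6), approximated by a finite difference at t +/- dt/2
beta_eff = (np.log(m(t + dt / 2)) - np.log(m(t - dt / 2))) / \
           (np.log(t + dt / 2) - np.log(t - dt / 2))

# Linear extrapolation of beta_eff(t) to t = 0
slope, intercept = np.polyfit(t, beta_eff, 1)
print(beta_eff)        # effective exponents at the five temperatures
print(intercept)       # extrapolated value, close to beta_true = 0.125
```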
In Fig. 4, results for the critical exponent of the local magnetization, $`\beta _d`$, of the dilute model, with random couplings $`J_1`$ and $`J_2`$ and a defect line with uniform coupling $`J_d`$, are depicted. In the case $`r=J_1/J_2=1`$, our data are in good agreement with the exact results, eqs. (2) and (3), for chain and ladder defects. In the dilute case, $`J_1\ne J_2`$, $`\beta _d`$ is seen to vary continuously with the strength of the defect coupling $`J_d`$. For a fixed value of $`J_d`$, the defect energy density increases, relative to the average bulk value, with decreasing $`r=J_1/J_2`$. Therefore, there is generally an increasing local order at the defect, which is connected with a decreasing value of the defect exponent $`\beta _d`$. This argument, however, does not seem to hold for the ladder defect with $`J_d\gg J_2`$. In this limit one has effectively a chain defect with random couplings $`J_1+J_2`$, with probability $`1/2`$, as well as $`2J_1`$ and $`2J_2`$, each with probability $`1/4`$. Then, as shown in Fig. 4, $`\beta _d(J_d)`$ increases with increasing dilution. For $`J_d=0`$, one recovers the surface critical exponent $`\beta _d=\beta _1=1/2`$. Another limiting situation is obtained for a chain defect with zero defect bond, $`J_d=0`$. The problem is then equivalent to a ladder defect with three random couplings, which depend on $`J_1`$ and $`J_2`$. As seen in the inset of Fig. 4, such random couplings can also lead to non-universal behavior. This observation is in agreement with our findings on the perfect model in Fig. 3.

To summarize, we considered uniform ladder and chain defects in two-dimensional dilute Ising models and determined the critical exponent of the defect magnetization. The exponent was found to be a continuous function of the defect coupling. Assuming that the first stability criterion mentioned above holds for the dilute case as well, one obtains for the critical exponent $`\nu `$ of the bulk random Ising model the borderline value $`\nu =1`$. Accordingly, one can rule out $`\nu >1`$, as had been suggested before in the context of dilution-dependent bulk critical exponents. In conclusion, we suggest that the non-universal critical behavior is related to the borderline values of the critical exponents of the bulk dilute model, as given in eq. (4). Consequently, one obtains $`\nu =1`$ and $`\gamma _{1,1}=0`$ (implying $`\beta _1=1/2`$, in agreement with Ref. 14), both for the perfect and the dilute two-dimensional Ising models.

###### Acknowledgements.

This work has been supported by the Hungarian National Research Fund under grant Nos. OTKA TO23642 and OTKA F/7/026004 and by the Ministry of Education under grant No. FKFP 0765/1997. Useful discussions with W. Selke and L. Turban are gratefully acknowledged. F. Sz. thanks the Institut für Theoretische Physik, Technische Hochschule Aachen, where part of this work was completed, for kind hospitality, and the DAAD for the scholarship enabling his visit there.